European Symposium on Computer Aided Process Engineering-13, 36th European Symposium of the Working Party on Computer Aided Process Engineering [1 ed.] 978-0-444-51368-7

This book contains papers presented at the 13th European Symposium on Computer Aided Process Engineering (ESCAPE-13), held in Lappeenranta, Finland, 1-4 June 2003.


English Pages 1-1155 [1175] Year 2003


Table of contents:
Preface
Page v
Andrzej Kraslawski, Ilkka Turunen

International scientific committee
Page vi

Computer aided biochemical process engineering
Pages 1-10
I.D.L. Bogle

A two-stage optimisation approach to the design of water-using systems in process plants
Pages 11-16
Solomon Abebe, Zhigang Shang, Antonis Kokossis

Generation and screening of retrofit alternatives using a systematic indicator-based retrofit analysis method
Pages 17-22
Niels Kau Andersen, Nuria Coll, Niels Jensen, Rafiqul Gani, Eric Uerdingen, Ulrich Fischer, Konrad Hungerbühler

A comparison of flowsheet solving strategies using interval global optimisation methods
Pages 23-28
S. Balendra, I.D.L. Bogle

An integration of design data and mathematical models in chemical process design
Pages 29-34
B. Bayer, L. von Wedel, W. Marquardt

A production planning strategic framework for batch plants
Pages 35-40
Frédéric Bérard, Catherine Azzaro-Pantel, Luc Pibouleau, Serge Domenech

Managing financial risk in scheduling of batch plants
Pages 41-46
Anna Bonfill, Jordi Cantón, Miguel Bagajewicz, Antonio Espuña, Luis Puigjaner

A network model for the design of agile plants
Pages 47-52
A. Borissova, M. Fairweather, G.E. Goltz

A vision of computer aids for the design of agile production plants
Pages 53-58
A. Borissova, M. Fairweather, G.E. Goltz

Synthesis of integrated distillation systems
Pages 59-64
José A. Caballero, Juan A. Reyes-Labarta, Ignacio E. Grossmann

A continuous-time approach to multiproduct pipeline scheduling
Pages 65-70
Diego C. Cafaro, Jaime Cerdá

Optimal grade transition campaign scheduling in a gas-phase polyolefin FBR using mixed integer dynamic optimization
Pages 71-76
C. Chatzidoukas, C. Kiparissides, J.D. Perkins, E.N. Pistikopoulos

Environmentally-benign transition metal catalyst design using optimization techniques
Pages 77-82
Sunitha Chavali, Terri Huismann, Bao Lin, David C. Miller, Kyle V. Camarda

Complete separation system synthesis of fractional crystallization processes
Pages 83-88
L.A. Cisternas, J.Y. Cueto, R.E. Swaney

Mathematical modelling and design of an advanced once-through heat recovery steam generator
Pages 89-94
Marie-Noëlle Dumont, Georges Heyen

Synthesis and optimisation of the recovery route for residual products
Pages 95-100
Joaquim Duque, Ana Paula F.D. Barbósa-Póvoa, Augusto Q. Novais

A new modeling approach for future challenges in process and product design
Pages 101-106
Mario Richard Eden, Sten Bay Jørgensen, Rafiqul Gani

Solving an MINLP problem including partial differential algebraic constraints using branch and bound and cutting plane techniques
Pages 107-112
Stefan Emet, Tapio Westerlund

Selection of MINLP model of distillation column synthesis by case-based reasoning
Pages 113-118
Tivadar Farkas, Yuri Avramenko, Andrzej Kraslawski, Zoltan Lelkes, Lars Nyström

Discrete model and visualization interface for water distribution network design
Pages 119-124
Eric S Fraga, Lazaros G Papageorgiou, Rama Sharma

Optimal design of mineral flotation circuits
Pages 125-130
E.D. Gálvez, M.F. Zavala, J.A. Magna, L.A. Cisternas

An MINLP model for the conceptual design of a carbothermic aluminium reactor
Pages 131-136
Dimitrios I. Gerogiorgis, B. Erik Ydstie

Towards the identification of optimal solvents for long chain alkanes with the SAFT equation of state
Pages 137-142
Apostolos Giovanoglou, Claire S. Adjiman, Amparo Galindo, George Jackson

Combined optimisation and process integration techniques for the synthesis of fuel cells systems
Pages 143-148
Julien Godat, François Marechal

Optimal design and operation of batch ultrafiltration systems
Pages 149-154
Antonio Guadix, Eva Sørensen, Lazaros G. Papageorgiou, Emilia M. Guadix

Process intensification through the combined use of process simulation and miniplant technology
Pages 155-160
Frank Heimann

A constraint approach for rescheduling batch processing plants including pipeless plants
Pages 161-166
W. Huang, P.W.H. Chung

Integrated MINLP synthesis of overall process flowsheets by a combined synthesis/analysis approach
Pages 167-172
N. Iršič Bedenik, B. Pahor, Z. Kravanja

Computer aided design of styrene batch suspension polymerization reactors
Pages 173-178
C. Kotoulas, P. Pladis, E. Papadopoulos, C. Kiparissides

Waste heat integration between processes III: Mixed integer nonlinear programming model
Pages 179-184
Anita Kovač Kralj, Peter Glavič

Integration of process modelling and life cycle inventory. Case study: i-pentane purification process from Naphtha
Pages 185-190
L. Kulay, L. Jiménez, F. Castells, R. Bañares-Alcántara, G.A. Silva

Superstructure optimization of the olefin separation process
Pages 191-196
Sangbum Lee, Jeffery S. Logsdon, Michael J. Foral, Ignacio E. Grossmann

Batch extractive distillation with intermediate boiling entrainer
Pages 197-202
Z. Lelkes, E. Rev, C. Steger, V. Varga, Z. Fonyo, L. Horvath

Short-cut design of batch extractive distillation using MINLP
Pages 203-208
Z. Lelkes, Z. Szitkai, T. Farkas, E. Rev, Z. Fonyo

A conflict-based approach for process synthesis with wastes minimization
Pages 209-214
Xiao-Ning Li, Ben-Guang Rong, Andrzej Kraslawski, Lars Nyström

A new continuous-time state task network formulation for short term scheduling of multipurpose batch plants
Pages 215-220
Christos T. Maravelias, Ignacio E. Grossmann

Life cycle analysis of a solar thermal system with thermochemical storage process
Pages 221-226
Nur Aini Masruroh, Bo Li, Jiri Klemeš

Hybrid synthesis method for mass exchange networks
Pages 227-232
Andrew K. Msiza, Duncan M. Fraser

Multiperiod synthesis and operational planning of utility systems with environmental concerns
Pages 233-238
A.P. Oliveira Francisco, H.A. Matos

Sizing intermediate storage with stochastic equipment failures under general operation conditions
Pages 239-244
Éva Orbán-Mihálykó, Béla. G. Lakatos

Synthesis, design and operational modelling of batch processes: An integrated approach
Pages 245-250
Irene Papaeconomou, Sten Bay Jørgensen, Rafiqul Gani, Joan Cordiner

Modelling, design and commissioning of a sustainable process for VOCs recovery from spray paint booths
Pages 251-256
Sauro Pierucci, Danilo Bombardi, Antonello Concu, Giuseppe Lugli

Comparison between STN, m-STN and RTN for the design of multipurpose batch plants
Pages 257-262
Tânia Pinto, Ana Paula F.D. Barbósa-Póvoa, Augusto Q. Novais

Generalized modular framework for the representation of Petlyuk distillation columns
Pages 263-268
P. Proios, E.N. Pistikopoulos

A multi-modelling approach for the retrofit of processes
Pages 269-274
A. Rodríguez-Martínez, I. López-Arévalo, R. Bañares-Alcántara, A. Aldea

Synthesis of partially thermally coupled column configurations for multicomponent distillations
Pages 275-280
Ben-Guang Rong, Andrzej Kraslawski, Ilkka Turunen

A multicriteria process synthesis approach to the design of sustainable and economic utility systems
Pages 281-286
Zhigang Shang, Antonis Kokossis

A decision support database for inherently safer design
Pages 287-292
R. Srinivasan, K.C. Chia, A-M. Heikkila, J. Schabel

Using design prototypes to build an ontology for automated process design
Pages 293-298
I.D. Stalker, E.S. Fraga, L von Wedel, A Yang

Engineer computer interaction for automated process design in COGents
Pages 299-304
I.D. Stalker, R.A. Stalker Firth, E.S. Fraga

Developing a methanol-based industrial cluster
Pages 305-310
Rob M. Stikkelman, Paulien M. Herder, Remmert van der Wal, David Schor

Risk premium and robustness in design optimization of a simplified TMP plant
Pages 311-316
Satu Sundqvist, Elina Pajula, Risto Ritala

Process design as part of a concurrent plant design project
Pages 317-322
Timo L. Syrjänen (Jaakko Pöyry Oy)

A New MINLP model for mass exchange network synthesis
Pages 323-328
Z. Szitkai, T. Farkas, Z. Kravanja, Z. Lelkes, E. Rev, Z. Fonyo

A knowledge based system for the documentation of research concerning physical and chemical processes - system design and case studies for application
Pages 329-334
M. Weiten, G. Wozny

A semi heuristic MINLP algorithm for production scheduling
Pages 335-340
Mehmet Yuceer, Ilknur Atasoy, Ridvan Berber

Roles of ontology in automated process safety analysis
Pages 341-346
Chunhua Zhao, Mani Bhushan, Venkat Venkatasubramanian

Operator support system for multi product processes - application to polyethylene production
Pages 347-352
J. Abonyi, P. Arva, S. Nemeth, Cs. Vincze, B. Bodolai, Zs. Dobosné Horváth, G. Nagy, M. Németh

Combination of measurements as controlled variables for self-optimizing control
Pages 353-358
Vidar Alstad, Sigurd Skogestad

Integrating budgeting models into APS systems in batch chemical industries
Pages 359-364
Mariana Badell, Javier Romero, Luis Puigjaner

A system for support and training of personnel working in the electrochemical treatment of metallic surfaces
Pages 365-370
Athanassios F. Batzias, Fragiskos A. Batzias

Sensor-placement for dynamic processes
Pages 371-376
C. Benqlilou, M.J. Bagajewicz, A. Espuña, L. Puigjaner

Chaotic oscillations in a system of two parallel reactors with recirculation of mass
Pages 377-382
Marek Berezowski, Daniel Dubaj

Control structure selection for unstable processes using Hankel singular value
Pages 383-388
Yi Cao, Prabikumar Saha

Neural networks based model predictive control of the drying process
Pages 389-394
Mircea V. Cristea, Raluca Roman, Şerban P. Agachi

Real-time optimization systems based on grey-box neural models
Pages 395-400
F.A. Cubillos, E.L. Lima

Change point detection for quality monitoring of chemical processes
Pages 401-406
Belmiro P.M. Duarte, Pedro M. Saraiva

Selecting appropriate control variables for a heat integrated distillation system with prefractionator
Pages 407-412
Hilde K. Engelien, Sigurd Skogestad

A holistic framework for supply chain management
Pages 413-418
A. Espuña, M.T. Rodrigues, L. Gimeno, L. Puigjaner

Management of financial and consumer satisfaction risks in supply chain design
Pages 419-424
G. Guillén, F.D. Mele, M. Bagajewicz, A. Espuña, L. Puigjaner

Operator training and operator support using multiphase pipeline models and dynamic process simulation: Sub-sea production and on-shore processing
Pages 425-430
Morten Hyllseth, David Cameron

Unstable behaviour of plants with recycle
Pages 431-436
Anton A. Kiss, Costin S. Bildea, Alexandre C. Dimian, Piet D. Iedema

Development of an intelligent multivariable filtering system based on the rule-based method
Pages 437-442
S.P. Kwon, Y.H. Kim, J. Cho, E.S. Yoon

Multiple-fault diagnosis using dynamic PLS built on qualitative relations
Pages 443-448
Gibaek Lee, En Sup Yoon

Integration of design and control for energy integrated distillation
Pages 449-454
Hongwen Li, Rafiqul Gani, Sten Bay Jørgensen

Process monitoring based on wavelet packet principal component analysis
Pages 455-460
Li Xiuxi, Yu Qian, Junfeng Wang

Information criterion for determination time window length of dynamic PCA for process monitoring
Pages 461-466
Xiuxi Li, Yu Qian, Junfeng Wang, S Joe Qin

Tendency model-based improvement of the slave loop in cascade temperature control of batch process units
Pages 467-472
János Madár, Ferenc Szeifert, Lajos Nagy, Tibor Chován, János Abonyi

Consistent malfunction diagnosis inside control loops using signed directed graphs
Pages 473-478
Mano Ram Maurya, Raghunathan Rengaswamy, Venkat Venkatasubramanian

Financial risk control in a discrete event supply chain
Pages 479-484
Fernando D. Mele, Miguel Bagajewicz, Antonio Espuña, Luis Puigjaner

Control application study based on PROCEL
Pages 485-490
Q.F. Meng, J.M. Nougués, M.J. Bagajewicz, L. Puigjaner

Challenges in controllability investigations of chemical processes
Pages 491-496
P. Mizsey, M. Emtir, L. Racz, A. Lengyel, A. Kraslawski, Z. Fonyo

Analysis of linear dynamic systems of low rank
Pages 497-502
Satu-Pia Reinikainen, Agnar Höskuldsson

Data based classification of roaster bed stability
Pages 503-508
Björn Saxen, Jens Nyberg

A two-layered optimisation-based control strategy for multi-echelon supply chain networks
Pages 509-514
P. Seferlis, N.F. Giannelos

Dynamic control of a Petlyuk column via proportional-integral action with dynamic estimation of uncertainties
Pages 515-520
Juan Gabriel Segovia-Hernández, Salvador Hernández, Ricardo Femat, Arturo Jiménez

Dynamic study of thermally coupled distillation sequences using proportional-integral controllers
Pages 521-526
Juan Gabriel Segovia-Hernández, Salvador Hernández, Vicente Rico-Ramírez, Arturo Jiménez

Metastable control of cooling crystallisation
Pages 527-532
T.T.L. Vu, J.A. Hourigan, R.W. Sleigh, M.H. Ang, M.O. Tade

Regional knowledge analysis of artificial neural network models and a robust model predictive control architecture
Pages 533-538
Chia Huang Yen, Po-Feng Tsai, Shi-Shang Jang

Optimisation of automotive catalytic converter warm-up: Tackling by guidance of reactor modelling
Pages 539-544
J. Ahola, J. Kangas, T. Maunula, J. Tanskanen

Gas-liquid and liquid-liquid system modeling using population balances for local mass transfer
Pages 545-549
Ville Alopaeus, Kari I. Keskinen, Jukka Koskinen, Joakim Majander

Robust optimization of a reactive semibatch distillation process under uncertainty
Pages 551-556
H. Arellano-Garcia, W. Martini, M. Wendt, P. Li, G. Wozny

Solution of the population balance equation for liquid-liquid extraction columns using a generalized fixed-pivot and central difference schemes
Pages 557-562
Menwer M. Attarakih, Hans-Jörg Bart, Naim M. Faqir

Identification of multicomponent mass transfer by means of an incremental approach
Pages 563-568
André Bardow, Wolfgang Marquardt

Development of the US EPA's metal finishing facility pollution prevention tool
Pages 569-574
William Barrett, Paul Harten

Modelling and simulation of kinetics and operation for the TAME synthesis by catalytic distillation
Pages 575-580
Grigore Bozga, Gheorghe Bumbac, Valentin Plesu, Ilie Muja, Corneliu Dan Popescu

Reduction of a chemical kinetic scheme for carbon monoxide-hydrogen oxidation
Pages 581-586
R.B. Brad, M. Fairweather, J.F. Griffiths, A.S. Tomlin

A procedure for constructing optimal regression models in conjunction with a web-based stepwise regression library
Pages 587-592
N. Brauner, M. Shacham


EUROPEAN SYMPOSIUM ON COMPUTER AIDED PROCESS ENGINEERING -13

COMPUTER-AIDED CHEMICAL ENGINEERING
Advisory Editor: R. Gani

Volume 1: Distillation Design in Practice (L.M. Rose)
Volume 2: The Art of Chemical Process Design (G.L. Wells and L.M. Rose)
Volume 3: Computer Programming Examples for Chemical Engineers (G. Ross)
Volume 4: Analysis and Synthesis of Chemical Process Systems (K. Hartmann and K. Kaplick)
Volume 5: Studies in Computer-Aided Modelling, Design and Operation
  Part A: Unit Operations (I. Pallai and Z. Fonyo, Editors)
  Part B: Systems (I. Pallai and G.E. Veress, Editors)
Volume 6: Neural Networks for Chemical Engineers (A.B. Bulsari, Editor)
Volume 7: Material and Energy Balancing in the Process Industries - From Microscopic Balances to Large Plants (V.V. Veverka and F. Madron)
Volume 8: European Symposium on Computer Aided Process Engineering-10 (S. Pierucci, Editor)
Volume 9: European Symposium on Computer Aided Process Engineering-11 (R. Gani and S.B. Jørgensen, Editors)
Volume 10: European Symposium on Computer Aided Process Engineering-12 (J. Grievink and J. van Schijndel, Editors)
Volume 11: Software Architectures and Tools for Computer Aided Process Engineering (B. Braunschweig and R. Gani, Editors)
Volume 12: Computer Aided Molecular Design: Theory and Practice (L.E.K. Achenie, R. Gani and V. Venkatasubramanian, Editors)
Volume 13: Integrated Design and Simulation of Chemical Processes (A.C. Dimian)
Volume 14: European Symposium on Computer Aided Process Engineering-13 (A. Kraslawski and I. Turunen, Editors)

COMPUTER-AIDED CHEMICAL ENGINEERING, 14

EUROPEAN SYMPOSIUM ON COMPUTER AIDED PROCESS ENGINEERING - 13
36th European Symposium of the Working Party on Computer Aided Process Engineering
ESCAPE-13, 1-4 June 2003, Lappeenranta, Finland

Edited by

Andrzej Kraslawski
Ilkka Turunen
Lappeenranta University of Technology
Lappeenranta, Finland

2003
ELSEVIER
Amsterdam - Boston - London - New York - Oxford - Paris - San Diego - San Francisco - Singapore - Sydney - Tokyo

ELSEVIER SCIENCE B.V.
Sara Burgerhartstraat 25
P.O. Box 211, 1000 AE Amsterdam, The Netherlands

© 2003 Elsevier Science B.V. All rights reserved. This work is protected under copyright by Elsevier Science, and the following terms and conditions apply to its use:

Photocopying
Single photocopies of single chapters may be made for personal use as allowed by national copyright laws. Permission of the Publisher and payment of a fee is required for all other photocopying, including multiple or systematic copying, copying for advertising or promotional purposes, resale, and all forms of document delivery. Special rates are available for educational institutions that wish to make photocopies for non-profit educational classroom use. Permissions may be sought directly from Elsevier Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: [email protected]. You may also complete your request on-line via the Elsevier Science homepage (http://www.elsevier.com), by selecting 'Customer support' and then 'Obtaining Permissions'.

In the USA, users may clear permissions and make payments through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA; phone: (+1) (978) 7508400, fax: (+1) (978) 7504744, and in the UK through the Copyright Licensing Agency Rapid Clearance Service (CLARCS), 90 Tottenham Court Road, London W1P 0LP, UK; phone: (+44) 207 631 5555; fax: (+44) 207 631 5500. Other countries may have a local reprographic rights agency for payments.

Derivative Works
Tables of contents may be reproduced for internal circulation, but permission of Elsevier Science is required for external resale or distribution of such material. Permission of the Publisher is required for all other derivative works, including compilations and translations.

Electronic Storage or Usage
Permission of the Publisher is required to store or use electronically any material contained in this work, including any chapter or part of a chapter. Except as outlined above, no part of this work may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior written permission of the Publisher. Address permissions requests to: Elsevier Science Global Rights Department, at the fax and e-mail addresses noted above.

Notice
No responsibility is assumed by the Publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made.

First edition 2003

Library of Congress Cataloging in Publication Data
A catalog record from the Library of Congress has been applied for.

British Library Cataloguing in Publication Data
A catalogue record from the British Library has been applied for.

ISBN: 0-444-51368-X
ISSN: 1570-7946 (Series)

The paper used in this publication meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).
Printed in Hungary.

Preface

This book contains papers presented at the 13th European Symposium on Computer Aided Process Engineering (ESCAPE-13) held in Lappeenranta, Finland from the 1st to 4th June 2003.

The symposia on Computer Aided Process Engineering (CAPE) have been promoted by the Working Party on CAPE of the European Federation of Chemical Engineering (EFCE) since 1968. The most recent symposia were organised in The Hague, The Netherlands, 2002 (ESCAPE-12), Kolding, Denmark, 2001 (ESCAPE-11), and Florence, Italy, 2000 (ESCAPE-10). The series of ESCAPE symposia assists in bringing together scientists, students and engineers from academia and industry who are active in research and application of CAPE.

The objective of ESCAPE-13 is to highlight the use of computers and information technology tools on five specific themes: 1. Process Design, 2. Process Control and Dynamics, 3. Modelling, Simulation and Optimisation, 4. Applications in Pulp and Paper Industry, 5. Applications in Biotechnology. The main theme for ESCAPE-13 is Expanding Application Field of CAPE Methods and Tools. It means extending the CAPE approach, mainly used in the chemical industry, into different sectors of the process industry and promoting CAPE applications aimed at the generation of new businesses and technologies.

This book includes 190 papers selected from 391 submitted abstracts. All papers have been reviewed by 33 members of the international scientific committee. The selection process involved review of abstracts, manuscripts and final acceptance of the revised manuscripts. We are very grateful to the members of the international scientific committee for their comments and recommendations.

This book is the fourth ESCAPE Symposium Proceedings included in the series on Computer Aided Chemical Engineering. We hope that, as the previous Proceedings, it will contribute to the progress in computer aided process and product engineering.

Andrzej Kraslawski Ilkka Turunen


International Scientific Committee

Andrzej Kraslawski (Finland, co-chairman)
Ilkka Turunen (Finland, co-chairman)

J. Aittamaa (Finland)
A. Barbosa-Povoa (Portugal)
L.T. Biegler (USA)
D. Bogle (United Kingdom)
B. Braunschweig (France)
K. Edelmann (Finland)
Z. Fonyo (Hungary)
R. Gani (Denmark)
U. Gren (Sweden)
P. Glavič (Slovenia)
J. Grievink (The Netherlands)
I.E. Grossmann (USA)
L. Hammarstrom (Finland)
G. Heyen (Belgium)
A. Isakson (Sweden)
S.B. Jørgensen (Denmark)
B. Kalitventzeff (Belgium)
S. Karrila (USA)
J-M. Le Lann (France)
K. Leiviska (Finland)
W. Marquardt (Germany)
J. Paloschi (United Kingdom)
C. Pantelides (United Kingdom)
T. Perris (United Kingdom)
S. Pierucci (Italy)
H. Pingen (The Netherlands)
L. Puigjaner (Spain)
Y. Qian (P.R. China)
H. Schmidt-Traub (Germany)
S. Skogestad (Norway)
P. Taskinen (Finland)
S. de Wolf (The Netherlands)
T. Westerlund (Finland)

National Organising Committee

Andrzej Kraslawski (Finland, co-chairman)
Ilkka Turunen (Finland, co-chairman)

J. Aittamaa (Helsinki University of Technology)
M. Hurme (Helsinki University of Technology)
M. Karlsson (Metso Corporation)
K. Leiviska (Oulu University)
P. Piiroinen (Danisco Sugar Oy)
R. Ritala (Keskuslaboratorio Oy)
P. Taskinen (Outokumpu Research Oy)
A. Vuori (Kemira Oyj)
T. Westerlund (Åbo Akademi University)


Contents Keynote Paper Bogle, I.D.L. zyxwvutsrqponmlkjihgfedcbaZYXWVUTSRQPONMLKJIHGFEDCBA Computer A ided Biochemical Process Engineering 1

Contributed Papers Process Design Abebe, S., Shang, Z., Kokossis, A. A Two-stage Optimisation Approach to the Design of Water-using Systems in Process Plants 11 Andersen, N.K., Coll, N., Jensen, N., Gani, R., Uerdingen, E., Fischer, U., Hungerbiihler, K. Generation and Screening of Retrofit Alternatives Using a Systematic IndicatorBased Retrofit Analysis Method 17 Balendra, S., Bogle, I.D.L. A Comparison of Flowsheet Solving Strategies Using Interval Global Optimisation Methods 23 Bayer, B., von Wedel, L., Marquardt, W. An Integration of Design Data and Mathematical Models in Chemical Process Design 29 Berard, F., Azzaro-Pantel, C , Pibouleau, L., Domenech, S. A Production Planning Strategic Framework for Batch Plants 35 Bonfill, A., Canton, J., Bagajewicz, M., Espuna, A., Puigjaner, L. Managing Financial Risk in Scheduling of Batch Plants 41 Borissova, A., Fairweather, M., Goltz, G.E. A Network Model for the Design of Agile Plants 47 Borissova, A., Fairweather, M., Goltz, G.E. A Vision of Computer Aids for the Design of Agile Production Plants 53 Caballero, J.A., Reyes-Labarta, J.A., Grossmann, I.E. Synthesis of Integrated Distillation Systems 59 Cafaro, D.C., Cerda, J. A Continuous-Time Approach to Multiproduct Pipeline Scheduling 65 Chatzidoukas, C , Kiparissides, C , Perkins, J.D., Pistikopoulos, E.N. Optimal Grade Transition Campaign Scheduling in a Gas-Phase Polyolefin FBR Using Mixed Integer Dynamic Optimization 11 Chavali, S., Huismann, T., Lin, B., Miller, D.C., Camarda, K.V. Environmentally-Benign Transition Metal Catalyst Design using Optimization Techniques 11 Cisternas, L.A., Cueto, J.Y., Swaney, R.E. Complete Separation System Synthesis of Fractional Crystallization Processes 83

VIU zyxwvutsrqponmlkjihgfedcbaZYXWVUTSRQPONMLKJIHGFEDCBA

Dumont, M.-N., Heyen, G. zyxwvutsrqponmlkjihgfedcbaZYXWVUTSRQPONMLKJIHGFEDCBA Mathematical Modelling and Design of an Advanced Once-Through Heat Recovery Steam Generator 89 Duque, J., Barbosa-Povoa, A.P.F.D., Novais, A.Q. Synthesis and Optimisation of the Recovery Route for Residual Products 95 Eden, M.R., J0rgensen, S.B., Gani, R. A New Modeling Approach for Future Challenges in Process and Product Design 101 Emet, S., Westerlund, T. Solving an MINLP Problem Including Partial Differential Algebraic Constraints Using Branch and Bound and Cutting Plane Techniques 107 Farkas, T., Avramenko, Y., Kraslawski, A., Lelkes, Z., Nystrom, L. Selection of MINLP Model of Distillation Column Synthesis by Case-Based Reasoning 113 Fraga, E.S., Papageorgiou L.G., Sharma, R. Discrete Model and Visualization Interface for Water Distribution Network Design 119 Galvez, E.D., Zavala, M.F., Magna, J.A., Cisternas, L.A. Optimal Design of Mineral Flotation Circuits 125 Gerogiorgis, D.I., Ydstie, B.E. An MINLP Model for the Conceptual Design of a Carbothermic Aluminium Reactor 131 Giovanoglou, A., Adjiman, C.S., Galindo, A., Jackson, G. Towards the Identification of Optimal Solvents for Long Chain Alkanes with the SAFT Equation of State 137 Godat, J., Marechal, F. Combined Optimisation and Process Integration Techniques for the Synthesis of Fuel Cells Systems 143 Guadix, A., S0rensen, E., Papageorgiou, L.G., Guadix, E.M. Optimal Design and Operation of Batch Ultrafiltration Systems 149 Heimann, F. Process Intensification through the Combined Use of Process Simulation and Miniplant Technology 155 Huang, W., Chung, P.W.H. A Constraint Approach for Rescheduling Batch Processing Plants Including Pipeless Plants 161 Irsic Bedenik, N., Pahor, B., Kravanja, Z. Integrated MINLP Synthesis of Overall Process Flowsheets by a Combined Synthesis / Analysis Approach 167 Kotoulas, C., Pladis, P., Papadopoulos, E., Kiparissides, C. 
Computer Aided Design ofStyrene Batch Suspension Polymerization Reactors 173 Kovae Kralj, A., Glavic, P. Waste Heat Integration Between Processes III: Mixed Integer Nonlinear Programming Model 179 Kulay, L., Jimenez, L., Castells, F., Bafiares-Alcantara, R., Silva, G.A. Integration of Process Modelling and Life Cycle Inventory. Case Study: iPentane Purification Process from Naphtha 185

IX zyxwvutsrqpo

Lee, S., Logsdon, J.S., Foral, M.J., Grossmann, I.E.
Superstructure Optimization of the Olefin Separation Process 191
Lelkes, Z., Rev, E., Steger, C., Varga, V., Fonyo, Z., Horvath, L.
Batch Extractive Distillation with Intermediate Boiling Entrainer 197
Lelkes, Z., Szitkai, Z., Farkas, T., Rev, E., Fonyo, Z.
Short-cut Design of Batch Extractive Distillation using MINLP 203
Li, X.-N., Rong, B.-G., Kraslawski, A., Nystrom, L.
A Conflict-Based Approach for Process Synthesis with Wastes Minimization 209
Maravelias, C.T., Grossmann, I.E.
A New Continuous-Time State Task Network Formulation for Short Term Scheduling of Multipurpose Batch Plants 215
Masruroh, N.A., Li, B., Klemes, J.
Life Cycle Analysis of a Solar Thermal System with Thermochemical Storage Process 221
Msiza, A.K., Fraser, D.M.
Hybrid Synthesis Method For Mass Exchange Networks 227
Oliveira Francisco, A.P., Matos, H.A.
Multiperiod Synthesis and Operational Planning of Utility Systems with Environmental Concerns 233
Orban-Mihalyko, E., Lakatos, B.G.
Sizing Intermediate Storage with Stochastic Equipment Failures under General Operation Conditions 239
Papaeconomou, I., Jørgensen, S.B., Gani, R., Cordiner, J.
Synthesis, Design and Operational Modelling of Batch Processes: An Integrated Approach 245
Pierucci, S., Bombardi, D., Concu, A., Lugli, G.
Modelling, Design and Commissioning of a Sustainable Process for VOCs Recovery from Spray Paint Booths 251
Pinto, T., Barbosa-Povoa, A.P.F.D., Novais, A.Q.
Comparison Between STN, m-STN and RTN for the Design of Multipurpose Batch Plants 257
Proios, P., Pistikopoulos, E.N.
Generalized Modular Framework for the Representation of Petlyuk Distillation Columns 263
Rodriguez-Martinez, A., Lopez-Arevalo, I., Banares-Alcantara, R., Aldea, A.
A Multi-Modelling Approach for the Retrofit of Processes 269
Rong, B.-G., Kraslawski, A., Turunen, I.
Synthesis of Partially Thermally Coupled Column Configurations for Multicomponent Distillations 275
Shang, Z., Kokossis, A.
A Multicriteria Process Synthesis Approach to the Design of Sustainable and Economic Utility Systems 281
Srinivasan, R., Chia, K.C., Heikkila, A.-M., Schabel, J.
A Decision Support Database for Inherently Safer Design 287
Stalker, I.D., Fraga, E.S., von Wedel, L., Yang, A.
Using Design Prototypes to Build an Ontology for Automated Process Design 293
Stalker, I.D., Stalker Firth, R.A., Fraga, E.S.
Engineer Computer Interaction for Automated Process Design in COGents 299

Stikkelman, R.M., Herder, P.M., van der Wal, R., Schor, D.
Developing a Methanol-Based Industrial Cluster 305
Sundqvist, S., Pajula, E., Ritala, R.
Risk Premium and Robustness in Design Optimization of a Simplified TMP Plant 311
Syrjanen, T.L.
Process Design as Part of a Concurrent Plant Design Project 317
Szitkai, Z., Farkas, T., Kravanja, Z., Lelkes, Z., Rev, E., Fonyo, Z.
A New MINLP Model for Mass Exchange Network Synthesis 323
Weiten, M., Wozny, G.
A Knowledge Based System for the Documentation of Research Concerning Physical and Chemical Processes - System Design and Case Studies for Application 329
Yuceer, M., Atasoy, I., Berber, R.
A Semi Heuristic MINLP Algorithm for Production Scheduling 335
Zhao, Ch., Bhushan, M., Venkatasubramanian, V.
Roles of Ontology in Automated Process Safety Analysis 341

Process Control and Dynamics

Abonyi, J., Arva, P., Nemeth, S., Vincze, Cs., Bodolai, B., Dobosne Horvath, Zs., Nagy, G., Nemeth, M.
Operator Support System for Multi Product Processes - Application to Polyethylene Production 347
Alstad, V., Skogestad, S.
Combination of Measurements as Controlled Variables for Self-Optimizing Control 353
Badell, M., Romero, J., Puigjaner, L.
Integrating Budgeting Models into APS Systems in Batch Chemical Industries 359
Batzias, A.F., Batzias, F.A.
A System for Support and Training of Personnel Working in the Electrochemical Treatment of Metallic Surfaces 365
Benqlilou, C., Bagajewicz, M.J., Espuña, A., Puigjaner, L.
Sensor-Placement for Dynamic Processes 371
Berezowski, M., Dubaj, D.
Chaotic Oscillations in a System of Two Parallel Reactors with Recirculation of Mass 377
Cao, Y., Saha, P.
Control Structure Selection for Unstable Processes Using Hankel Singular Value 383
Cristea, M.V., Roman, R., Agachi, S.P.
Neural Networks Based Model Predictive Control of the Drying Process 389
Cubillos, F.A., Lima, E.L.
Real-Time Optimization Systems Based On Grey-Box Neural Models 395
Duarte, B.P.M., Saraiva, P.M.
Change Point Detection for Quality Monitoring of Chemical Processes 401
Engelien, H.K., Skogestad, S.
Selecting Appropriate Control Variables for a Heat Integrated Distillation System with Prefractionator 407
Espuña, A., Rodrigues, M.T., Gimeno, L., Puigjaner, L.
A Holistic Framework for Supply Chain Management 413

Guillen, G., Mele, F.D., Bagajewicz, M., Espuña, A., Puigjaner, L.
Management of Financial and Consumer Satisfaction Risks in Supply Chain Design 419
Hyllseth, M., Cameron, D., Havre, K.
Operator Training and Operator Support using Multiphase Pipeline Models and Dynamic Process Simulation: Sub-Sea Production and On-Shore Processing 425
Kiss, A.A., Bildea, C.S., Dimian, A.C., Iedema, P.D.
Unstable Behaviour of Plants with Recycle 431
Kwon, S.P., Kim, Y.H., Cho, J., Yoon, E.S.
Development of an Intelligent Multivariable Filtering System based on the Rule-Based Method 437
Lee, G., Yoon, E.S.
Multiple-Fault Diagnosis Using Dynamic PLS Built on Qualitative Relations 443
Li, H., Gani, R., Jørgensen, S.B.
Integration of Design and Control for Energy Integrated Distillation 449
Li, X.X., Qian, Y., Wang, J.
Process Monitoring Based on Wavelet Packet Principal Component Analysis 455
Li, X.X., Qian, Y., Wang, J., Qin, S.J.
Information Criterion for Determination of Time Window Length of Dynamic PCA for Process Monitoring 461
Madar, J., Szeifert, P., Nagy, L., Chovan, T., Abonyi, J.
Tendency Model-based Improvement of the Slave Loop in Cascade Temperature Control of Batch Process Units 467
Maurya, M.R., Rengaswamy, R., Venkatasubramanian, V.
Consistent Malfunction Diagnosis Inside Control Loops Using Signed Directed Graphs 473
Mele, F.D., Bagajewicz, M., Espuña, A., Puigjaner, L.
Financial Risk Control in a Discrete Event Supply Chain 479
Meng, Q.F., Nougues, J.M., Bagajewicz, M.J., Puigjaner, L.
Control Application Study Based on PROCEL 485
Mizsey, P., Emtir, M., Racz, L., Lengyel, A., Kraslawski, A., Fonyo, Z.
Challenges in Controllability Investigations of Chemical Processes 491
Reinikainen, S.-P., Hoskuldsson, A.
Analysis of Linear Dynamic Systems of Low Rank 497
Saxen, B., Nyberg, J.
Data Based Classification of Roaster Bed Stability 503
Seferlis, P., Giannelos, N.F.
A Two-Layered Optimisation-Based Control Strategy for Multi-Echelon Supply Chain Networks 509
Segovia-Hernandez, J.G., Hernandez, S., Femat, R., Jimenez, A.
Dynamic Control of a Petlyuk Column via Proportional-Integral Action with Dynamic Estimation of Uncertainties 515
Segovia-Hernandez, J.G., Hernandez, S., Rico-Ramirez, V., Jimenez, A.
Dynamic Study of Thermally Coupled Distillation Sequences Using Proportional-Integral Controllers 521
Vu, T.T.L., Hourigan, J.A., Sleigh, R.W., Ang, M.H., Tade, M.O.
Metastable Control of Cooling Crystallisation 527


Yen, Ch.H., Tsai, P.-F., Jang, S.S.
Regional Knowledge Analysis of Artificial Neural Network Models and a Robust Model Predictive Control Architecture 533

Modelling, Simulation and Optimisation

Ahola, J., Kangas, J., Maunula, T., Tanskanen, J.
Optimisation of Automotive Catalytic Converter Warm-Up: Tackling by Guidance of Reactor Modelling 539
Alopaeus, V., Keskinen, K.I., Koskinen, J., Majander, J.
Gas-Liquid and Liquid-Liquid System Modeling Using Population Balances for Local Mass Transfer 545
Arellano-Garcia, H., Martini, W., Wendt, M., Li, P., Wozny, G.
Robust Optimization of a Reactive Semibatch Distillation Process under Uncertainty 551
Attarakih, M.M., Bart, H.-J., Faqir, N.M.
Solution of the Population Balance Equation for Liquid-Liquid Extraction Columns using a Generalized Fixed-Pivot and Central Difference Schemes 557
Bardow, A., Marquardt, W.
Identification of Multicomponent Mass Transfer by Means of an Incremental Approach 563
Barrett, W., Harten, P.
Development of the US EPA's Metal Finishing Facility Pollution Prevention Tool 569
Bozga, G., Bumbac, G., Plesu, V., Muja, I., Popescu, C.D.
Modelling and Simulation of Kinetics and Operation for the TAME Synthesis by Catalytic Distillation 575
Brad, R.B., Fairweather, M., Griffiths, J.F., Tomlin, A.S.
Reduction of a Chemical Kinetic Scheme for Carbon Monoxide-Hydrogen Oxidation 581
Brauner, N., Shacham, M.
A Procedure for Constructing Optimal Regression Models in Conjunction with a Web-based Stepwise Regression Library 587
Chatzidoukas, C., Perkins, J.D., Pistikopoulos, E.N., Kiparissides, C.
Dynamic Simulation of the Borstar® Multistage Olefin Polymerization Process 593
Cheng, H.N., Qian, Y., Li, X.X., Li, H.
Agent-Oriented Modelling and Integration of Process Operation Systems 599
Citir, C., Aktas, Z., Berber, R.
Off-line Image Analysis for Froth Flotation of Coal 605
Coimbra, M.d.C., Sereno, C., Rodrigues, A.
Moving Finite Element Method: Applications to Science and Engineering Problems 611
Dalai, N.M., Malik, R.K.
Solution Multiplicity in Multicomponent Distillation. A Computational Study 617
Dave, D.J., Zhang, N.
Multiobjective Optimisation of Fluid Catalytic Cracker Unit Using Genetic Algorithms 623


Demicoli, D., Stichlmair, J.
Novel Operational Strategy for the Separation of Ternary Mixtures via Cyclic Operation of a Batch Distillation Column with Side Withdrawal 629
Dietzsch, L., Fischer, I., Machefer, S., Ladwig, H.-J.
Modelling and Optimisation of a Semibatch Polymerisation Process 635
Elgue, S., Cabassud, M., Prat, L., Le Lann, J.M., Cezerac, J.
A Global Approach for the Optimisation of Batch Reaction-Separation Processes 641
Gopinathan, N., Fairweather, M., Jia, X.
Computational Modelling of Packed Bed Systems 647
Hadj-Kali, M., Gerbaud, V., Joulia, X., Boutin, A., Ungerer, P., Mijoule, C., Roques, J.
Application of Molecular Simulation in the Gibbs Ensemble to Predict Liquid-Vapor Equilibrium Curves of Acetonitrile 653
Hallas, I.C., Sørensen, E.
Simulation of Supported Liquid Membranes in Hollow Fibre Configuration 659
Haug-Warberg, T.
On the Principles of Thermodynamic Modeling 665
Heinonen, J., Pettersson, F.
Short-Term Scheduling in Batch Plants: A Generic Approach with Evolutionary Computation 671
Hinnela, J., Saxen, H.
Model of Burden Distribution in Operating Blast Furnaces 677
Hugo, A., Ciumei, C., Buxton, A., Pistikopoulos, E.N.
Environmental Impact Minimisation through Material Substitution: A Multi-Objective Optimisation Approach 683
Inglez de Souza, E.T., Maciel Filho, R., Victorino, I.R.S.
Genetic Algorithms as an Optimisation Tool for Rotary Kiln Incineration Process 689
Kasiri, N., Hosseini, A.R., Moghadam, M.
Dynamic Simulation of an Ammonia Synthesis Reactor 695
Katare, S., Caruthers, J., Delgass, W.N., Venkatasubramanian, V.
Reaction Modeling Suite: A Rational, Intelligent and Automated Framework for Modeling Surface Reactions and Catalyst Design 701
Kim, Y.H., Ryu, M.J., Han, E., Kwon, S.-P., Yoon, E.S.
Computer Aided Prediction of Thermal Hazard for Decomposition Processes 707
Kloker, M., Kenig, E., Gorak, A., Fraczek, K., Salacki, W., Orlikowski, W.
Experimental and Theoretical Studies of the TAME Synthesis by Reactive Distillation 713
Koci, P., Marek, M., Kubicek, M.
Oscillatory Behaviour in Mathematical Model of TWC with Microkinetics and Internal Diffusion 719
Kohout, M., Vanickova, T., Schreiber, I., Kubicek, M.
Methods of Analysis of Complex Dynamics in Reaction-Diffusion-Convection Models 725
Korpi, M., Toivonen, H., Saxen, B.
Modelling and Identification of the Feed Preparation Process of a Copper Flash Smelter 731


Koskinen, J., Pattikangas, T., Manninen, M., Alopaeus, V., Keskinen, K.I., Koskinen, K., Majander, J.
CFD Modelling of Drag Reduction Effects in Pipe Flows 737
Kreis, P., Gorak, A.
Modelling and Simulation of a Combined Membrane/Distillation Process 743
Lacks, D.J.
Consequences of On-Line Optimization in Highly Nonlinear Chemical Processes 749
Lakner, R., Hangos, K.M., Cameron, I.T.
Construction of Minimal Models for Control Purposes 755
Lievo, P., Almark, M., Purola, V.-M., Pyhalahti, A., Aittamaa, J.
Miniplant - Effective Tool in Process Development and Design 761
Lim, Y.-I., Christensen, S., Jørgensen, S.B.
A Generalized Adsorption Rate Model Based on the Limiting-Component Constraint in Ion-Exchange Chromatographic Separation for Multicomponent Systems 767
Miettinen, T., Laakkonen, M., Aittamaa, J.
Comparison of Various Flow Visualisation Techniques in a Gas-Liquid Mixed Tank 773
Montastruc, L., Azzaro-Pantel, C., Davin, A., Pibouleau, L., Cabassud, M., Domenech, S.
A Hybrid Optimization Technique for Improvement of P-Recovery in a Pellet Reactor 779
Mori, Y., Partanen, J., Louhi-Kultanen, M., Kallas, J.
Modelling of Crystal Growth in Multicomponent Solutions 785
Mota, J.P.B.
Towards the Atomistic Description of Equilibrium-Based Separation Processes. I. Isothermal Stirred-Tank Adsorber 791
Mota, J.P.B., Rodrigo, A.J.S., Esteves, I.A.A.C., Rostam-Abadi, M.
Dynamic Modelling of an Adsorption Storage Tank using a Hybrid Approach Combining Computational Fluid Dynamics and Process Simulation 797
Mu, F., Venkatasubramanian, V.
Online HAZOP Analysis for Abnormal Event Management of Batch Process 803
Mueller, C., Brink, A., Hupa, M.
Analysis of Combustion Processes Using Computational Fluid Dynamics - A Tool and Its Application 809
Novakovic, K., Martin, E.B., Morris, A.J.
Modelling of the Free Radical Polymerization of Styrene with Benzoyl Peroxide as Initiator 815
Oliveira, R.
Combining First Principles Modelling and Artificial Neural Networks: A General Framework 821
Oreski, S., Zupan, J., Glavic, P.
Classifying and Proposing Phase Equilibrium Methods with Trained Kohonen Neural Network 827
Paloschi, J.R.
An Initialisation Algorithm to Solve Systems of Nonlinear Equations Arising from Process Simulation Problems 833


Peres, J., Oliveira, R., Feyo de Azevedo, S.
Modelling Cells Reaction Kinetics with Artificial Neural Networks: A Comparison of Three Network Architectures 839
Perret, J., Thery, R., Hetreux, G., Le Lann, J.M.
Object-Oriented Components for Dynamic Hybrid Simulation of a Reactive Distillation Process 845
Ponce-Ortega, J.M., Rico-Ramirez, V., Hernandez-Castro, S.
Using the HSS Technique for Improving the Efficiency of the Stochastic Decomposition Algorithm 851
Pongracz, B., Szederkenyi, G., Hangos, K.M.
The Effect of Algebraic Equations on the Stability of Process Systems Modelled by Differential Algebraic Equations 857
Pons, M.
The CAPE-OPEN Interface Specification for Reactions Package 863
Poth, N., Brusis, D., Stichlmair, J.
Rigorous Optimization of Reactive Distillation in GAMS with the Use of External Functions 869
Preisig, H.A., Westerweele, M.
Effect of Time-Scale Assumptions on Process Models and Their Reconciliation 875
Repke, J.-U., Villain, O., Wozny, G.
A Nonequilibrium Model for Three-Phase Distillation in a Packed Column: Modelling and Experiments 881
Roth, S., Loffler, H.-U., Wozny, G.
Connecting Complex Simulations to the Internet - an Example from the Rolling Mill Industry 887
Rouzineau, D., Meyer, M., Prevost, M.
Non Equilibrium Model and Experimental Validation for Reactive Distillation 893
Salgado, P.A.C., Afonso, P.A.F.N.A.
Hierarchical Fuzzy Modelling by Rules Clustering. A Pilot Plant Reactor Application 899
Salmi, T., Warna, J., Mikkola, J.-P., Aumo, J., Ronnholm, M., Kuusisto, J.
Residence Time Distributions from CFD in Monolith Reactors - Combination of Avant-Garde and Classical Modelling 905
Schneider, P.A., Sheehan, M.E., Brown, S.T.
Modelling the Dynamics of Solids Transport in Flighted Rotary Dryers 911
Sequeira, S.E., Herrera, M., Graells, M., Puigjaner, L.
On-Line Process Optimisation: Parameter Tuning for the Real Time Evolution (RTE) Approach 917
Shimizu, Y., Tanaka, Y., Kawada, A.
Multi-Objective Optimization System MOON^2 on the Internet 923
Singare, S., Bildea, C.S., Grievink, J.
Reduced Order Dynamic Models of Reactive Absorption Processes 929
Skouras, S., Skogestad, S.
Separation of Azeotropic Mixtures in Closed Batch Distillation Arrangements 935
Smolianski, A., Haario, H., Luukka, P.
Numerical Bubble Dynamics 941
Soares, R. de P., Secchi, A.R.
EMSO: A New Environment for Modelling, Simulation and Optimisation 947


Thullie, J., Kurpas, M.
New Concept of Cold Feed Injection in RFR 953
Tiitinen, J.
Numerical Modeling of an OK Rotor-Stator Mixing Device 959
Urbas, L., Gauss, B., Hausmanns, Ch., Wozny, G.
Teaching Modelling of Chemical Processes in Higher Education using Multi-Media 965
van Wissen, M.E., Turk, A.L., Bildea, C.S., Verwater-Lukszo, Z.
Modeling of a Batch Process Based upon Safety Constraints 971
Virkki-Hatakka, T., Rong, B.-G., Cziner, K., Hurme, M., Kraslawski, A., Turunen, I.
Modeling at Different Stages of Process Life-Cycle 977
Yang, G., Louhi-Kultanen, M., Kallas, J.
The CFD Simulation of Temperature Control in a Batch Mixing Tank 983
Zilinskas, J., Bogle, I.D.L.
On the Generalization of a Random Interval Method 989

Applications in Pulp and Paper Industry

Alexandridis, A., Sarimveis, H., Bafas, G.
Adaptive Control of Continuous Pulp Digesters Based on Radial Basis Function Neural Network Models 995
Brown, D., Marechal, F., Heyen, G., Paris, J.
Application of Data Reconciliation to the Simulation of System Closure Options in a Paper Deinking Process 1001
Costa, A.O.S., Biscaia Jr., E.C., Lima, E.L.
Mathematical Description of the Kraft Recovery Boiler Furnace 1007
de Vaal, P.L., Sandrock, C.
Implementation of a Model Based Controller on a Batch Pulp Digester for Improved Control 1013
Ghaffari, Sh., Romagnoli, J.A.
Steady State and Dynamic Behaviour of Kraft Recovery Boiler 1019
Harrison, R.P., Stuart, P.R.
Processing of Thermo-Mechanical Pulping Data to Enhance PCA and PLS 1025
Jernstrom, P., Westerlund, T., Isaksson, J.
A Decomposition Strategy for Solving Multi-Product, Multi-Purpose Scheduling Problems in the Paper Converting Industry 1031
Masudy, M.
Utilization of Dynamic Simulation at Tembec Specialty Cellulose Mill 1037
Pettersson, P., Soderman, J.
Synthesis of Heat Recovery Systems in Paper Machines with Varying Design Parameters 1043
Rolandi, P.A., Romagnoli, J.A.
Smart Enterprise for Pulp and Paper: Digester Modeling and Validation 1049
Silva, C.M., Biscaia Jr., E.C.
Multiobjective Optimization of a Continuous Pulp Digester 1055
Soderman, J., Pettersson, F.
Searching for Enhanced Energy Systems with Process Integration in Pulp and Paper Industries 1061


Virta, M.T., Wang, H., Roberts, J.C.
The Performance Optimisation and Control for the Wet End System of a Fluting and Liner Board Mill 1067

Applications in Biotechnology

Acuña, G., Cubillos, F., Molin, P., Ferret, E., Perez-Correa, R.
On-line Estimation of Bed Water Content and Temperature in a SSC Bioreactor Using a Modular Neural Network Model 1073
Eusebio, M.F.J., Barreiros, A.M., Fortunato, R., Reis, M.A.M., Crespo, J.G., Mota, J.P.B.
On-line Monitoring and Control of a Biological Denitrification Process for Drinking-Water Treatment 1079
Horner, D.J., Bansal, P.S.
The Role of CAPE in the Development of Pharmaceutical Products 1085
Kristensen, N.R., Madsen, H., Jørgensen, S.B.
Developing Phenomena Models from Experimental Data 1091
Levis, A.A., Papageorgiou, L.G.
Multi-Site Capacity Planning for the Pharmaceutical Industry Using Mathematical Programming 1097
Li, Q., Hua, B.
A Multiagent-based System Model of Supply Chain Management for Traditional Chinese Medicine Industry 1103
Lim, A.Ch., Farid, S., Washbrook, J., Titchener-Hooker, N.J.
A Tool for Modelling the Impact of Regulatory Compliance Activities on the Biomanufacturing Industry 1109
Manca, D., Rovaglio, M., Colombo, I.
Modeling the Polymer Coating in Microencapsulated Active Principles 1115
Marcoulaki, E.G., Batzias, F.A.
Extractant Design for Enhanced Biofuel Production through Fermentation of Cellulosic Wastes 1121
Sarkar, D., Modak, J.M.
Optimisation of Fed-Batch Bioreactors Using Genetic Algorithms: Two Control Variables 1127
van Winden, W.A., Verheijen, P.J.T., Heijnen, J.J.
Efficient Modeling of C-Labeling Distributions in Microorganisms 1133
Wang, F.-Sh.
Fuzzy Goal Attainment Problem of a Beer Fermentation Process Using Hybrid Differential Evolution 1139
Wongso, F., Hidajat, K., Ray, A.K.
Application of Multiobjective Optimization in the Design of Chiral Drug Separators based on SMB Technology 1145

Author Index 1151


European Symposium on Computer Aided Process Engineering - 13
A. Kraslawski and I. Turunen (Editors)
© 2003 Elsevier Science B.V. All rights reserved.

Computer Aided Biochemical Process Engineering
I.D.L. Bogle
Dept of Chemical Engineering, University College London, Torrington Place, London, WC1E 7JE
[email protected]

Abstract
Growth in the biochemical industries is accelerating in Europe after falling short of initial expectations. CAPE tools have made some impact, and progress on computer aided synthesis and design of biochemical processes is demonstrated here on a process for the production of a hormone. Systems thinking is increasingly recognised by the life science community, and to obtain genuinely optimal process solutions it is necessary to design right through from product and function to metabolism and manufacturing process. The opportunities for CAPE experts to contribute to the explosion of interest in the Life Sciences are strong if we think of the 'Process' in CAPE as any process involving physical or (bio-)chemical change.

1. Introduction

The biochemical process industries have been in the news, and often the headlines, for many years now. There has been a significant impact on Chemical Engineering research in scientific fundamentals but not such a great impact on process design. In the early nineties the European Commission (EC communication, 19 April 1991) was predicting that sales of biotechnology derived products would be between €26 and €41 billion by the year 2000, a three-fold increase on sales in 1985. A recent review of global biotechnology by Ernst and Young (2002) shows that revenues in 2000 in Europe were around €10 billion (M$9,872) but had risen by 39% for 2001 to around €13 billion (M$13,733). So while the sector has not delivered its full promise, it is clearly a growing one. Globally the USA dominates the sector, with product sales of $28.5 billion in 2001 ($25.7 billion in 2000).

A key difference between the USA and Europe is that in the USA the revenues of public companies dominate, while in Europe private companies produce nearly half the revenues. For public companies the rest of the world currently contributes only 5.6% of revenues. The European biotechnology industry 'is now starting to play a central role on the global stage ... enabled by a dramatically increased flow of funds into the industry' (Ernst and Young). According to the Ernst and Young report 'the European biotechnology sector is characterized by a high proportion of small early stage companies', so one of the constraining factors is a strain on private equity resources. However, the number of companies that have raised more than 20 million euros has risen from 3 in 1998 to 23 in 2001. Also, resistance to the industry has been stronger in Europe and this has resulted in tighter controls on both manufacturing and research and development.
Regulatory compliance is a key issue in biochemical processes because the regulatory authorities demand that well defined operating procedures be adhered to once approved for a particular product and process. This has significance for the CAPE community since, to have a role, it must engage early in the development process while there is still freedom to make design and operational decisions.

Much of the (Bio-)Chemical Engineering design activity aids large scale manufacturing. The modeling effort for biochemical processes is significant because of the specific characteristics of the production (typically fermentation of cells) and separation (often specific chromatographic techniques) operations. However, as we will show in this paper, there has been significant success in design and operations optimization, and, with the continuing improvement in the understanding of metabolic systems and with progress being made elsewhere in facilitating first principles modeling, there is considerable scope for improvement and for take-up of contributions from our community to assist in the development of the industry.

2. Product and Process

One of the characteristics of the industry is the rapid generation of new products. This is set to increase, particularly for medicinal products, since systematic generation based on genome and proteome information is bound to flow from the mapping of the building blocks of living matter in the genes and proteins. Many databases are now Web accessible. The most important driver for CAPE based tools is to be able to provide manufacturing solutions very rapidly based on incomplete information. However, tools can also seek to provide guidance on the correct sort of data needed from the laboratory, and on the appropriate products for the market place and for manufacturability.

The biotechnology sector covers a wide range of products including medicinal products, foodstuffs, specialty chemical products, and land remediation. There is a role for engineers to play in the systematic identification of products. The generation of appropriate medicinal treatment based on pharmacological knowledge and modeling once seemed fanciful, but with better understanding of functional relationships and with the introduction of gene therapy treatments, where treatment is matched to the specific genetic information of a patient, this may well become commonplace (Bailey). It would also be appropriate for engineers to be specifying systems in which the identification of the product is tied closely in with the function as well as the manufacturability of the product, generating the product and process in a seamless manner. The same may also be true for specialty chemical products produced biologically, and of course there is progress in engineering approaches to this problem which would be directly applicable (see for example Moggridge and Cussler).
Foodstuffs manufacture should also be thought of in an integrated manner, considering the functionality required of the product - taste, mouthfeel, nutritional content - in the specification of product and process. Perhaps this last area is the most difficult because of the difficulty in characterizing much of the functionality in a form that is amenable to quantitative treatment.

Biological products are normally produced in fermentation processes. The product is expressed in some type of organism and can i) be excreted and collected, ii) remain inside the cell as a soluble product, in which case the cell must be broken open and the product extracted from the broth, or iii) remain as an insoluble product, where again the cell must be broken and particulate recovery methods are also required. There are many choices to be made in the production. Many host organisms are used to express the product: common baker's yeast, Escherichia coli (a common bacterium), Aspergillus niger (a filamentous fungus), and mammalian cells are commonly used. If the product is expressed naturally by the organism (such as ethanol by yeast) then there are many choices of strain of the organism to be made, each strain having different levels of expression of the product. But with the advent of genetic engineering it became possible to modify a host organism such that it can produce non-native products. So now the choice is of both host and strain, constrained by limitations of the genetic engineering procedures. It is also possible now to define or modify the expression system within the organism to optimise production.

The whole area of Metabolic Engineering is ripe for greater input from Chemical Engineers and from the CAPE community, and Stephanopoulos provides many valuable insights. The topic deserves a review paper of its own but it is worthwhile touching on some of the contributions of our community to this problem. The metabolism is usually represented as a network of well defined chemical reactions, although this is itself a simplification as there are other links through the enzymatic catalysts present and in the consumption and production of ATP, the main energy transport agent. The CAPE community has much experience of the simulation and optimisation of networks. In the late 60s a general non-linear representation of integrated metabolic systems was first suggested (Savageau) using a power law approximation for the kinetics. Using this formulation it is possible to identify rate controlling steps and bottlenecks, and to optimise the production of individual metabolites. This could be effected practically by changing the environment around the cells or by genetic modification. The optimisation problem can be transformed into a linear programming problem (Voit, Regan et al.)
or as an MILP problem (Hatzimanikatis et al.) where network structure can also be modified, for example by deactivating paths. The extent to which uncertainty can be directly incorporated has also been tackled (Petkov and Maranas), addressing how likely it is to achieve a particular metabolic objective without exceeding physiological constraints that have been defined probabilistically. Clearly this is a very fertile area, but one that must be worked in close collaboration with biological scientists since there are many practical issues which limit the modifications that can be made to any metabolic system.

It is also possible to assist in the exploration of more fundamental biological problems to which the metabolic pathway holds the key. One example is that pathway proximity gives a measure of how well connected each metabolite is, thus providing an objective criterion for determining the most important intermediates of metabolism; this can be formulated as an LP problem (Simeonides et al.). We can expect to see significant opportunities for our community in the area of aiding biological understanding in the future.

The ability to manufacture a new product also depends on the ability to purify it. So the choices can also extend to expressing a product which can easily be separated and then, if necessary, chemically modified to produce the final useful product. One example of this will be discussed later in the paper. So in the manufacture of a new product there are many choices open to the engineers, and opportunities to provide CAPE tools to facilitate these thought processes. Some of these choices are shown in Fig. 1: choices about genetic and metabolic networks, about host organism and type of expression system, and of manufacturing process design,

along with the criteria that influence them: product function and effectiveness (often related to purity) as well as cost, safety and environmental impact (now also with genetic engineering implications). First principles modeling of the entire design process is decades away, so the integration of data - both the employment of existing data and the ability to highlight important data that, if collected, would significantly enhance the decision making - is critical. Culturally, close interactions with experimentalists are also essential since there is still a significant divide between the life science and engineering communities, with considerable scepticism of the role of computational techniques. However, recently there has been a much wider recognition of the need for quantitative methods in biology (Chicurel) and so we can be confident of a more receptive response to engineering methods in the future.
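The linear programming view of metabolic flux optimisation mentioned above can be illustrated on a toy network. The stoichiometry, bounds, and flux names below are invented for the sketch; a real study would hand the stoichiometric constraints to an LP solver rather than the coarse grid scan used here.

```python
# Toy flux-balance sketch: maximise product flux v3 in the network
#   A --v1--> B,  B --v2--> C (byproduct),  B --v3--> D (product)
# Steady state at B forces v1 - v2 - v3 = 0; substrate uptake v1 <= 10.
# The feasible region is a small polytope, so here it is simply scanned
# on a coarse grid instead of calling an LP solver.

V1_MAX = 10.0
STEP = 0.5

best = None
grid = [i * STEP for i in range(int(V1_MAX / STEP) + 1)]
for v1 in grid:
    for v2 in grid:
        v3 = v1 - v2          # imposed by the steady-state balance at B
        if v3 < 0:            # fluxes are irreversible in this toy model
            continue
        if best is None or v3 > best[2]:
            best = (v1, v2, v3)

print(best)  # -> (10.0, 0.0, 10.0): all uptake routed to the product
```

As expected for an LP, the optimum sits at a vertex of the feasible region: the byproduct flux is driven to its bound of zero and the uptake to its maximum.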

Fig 1. Product and process decision making.

3. Process Synthesis

We know that it is important to take a systems view of process design. Ideally this includes all the production and separation aspects and allows simultaneous treatment of structural and continuous decisions. It should also permit the use of the various criteria which must be juggled - economic, technical, environmental and so on. In the following sections I summarise some work we have done on the synthesis and design of biochemical manufacturing processes tackling some of these aspects. We are still a long way from comprehensive solutions. In this section I discuss approaches to considering structural flowsheet choices, and in the following one choices where the flowsheet has been established and operating conditions are being optimised.

The synthesis task here involves a wide range of alternative unit operations, and recycle is rare. The problem is often simplified by the application of rough guidelines or heuristics, which can be used alone or to simplify the computational task. Leser et al. presented the use of a rule based expert system for biochemical process synthesis. In practice an enormous number of heuristics are in common use. Much can be achieved in the early stages of conceptual process design using simple models that encapsulate the key mechanism driving the separation processes: critical choices between alternatives can be made without having to develop complete simulations. The number of possible configurations increases exponentially as the number of types of separators to be considered increases, and the total number of configurations for most practical problems is so large that an exhaustive search is computationally impractical. Because of this explosion it is necessary to use a computationally simple evaluation scheme combined with simple heuristics.
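The combinatorial explosion described above is easy to reproduce with a small counting model. The model below (sharp splits only, a fixed number of candidate separator types per cut) is an illustrative assumption, not a method from the paper:

```python
from functools import lru_cache

# Count sharp-split separation sequences for an n-component mixture:
# each cut partitions the (ordered) mixture into two sub-mixtures that
# are sequenced recursively, and each cut can be realised by any of
# `types` candidate separator technologies.

@lru_cache(maxsize=None)
def n_sequences(n, types=1):
    if n <= 1:
        return 1  # a single component needs no further separation
    return sum(types * n_sequences(k, types) * n_sequences(n - k, types)
               for k in range(1, n))

# With one separator type the counts are the Catalan numbers (2, 14, 132
# for 3, 5, 7 components); with three candidate types per cut the search
# space explodes far faster.
for n in (3, 5, 7):
    print(n, n_sequences(n), n_sequences(n, types=3))
```

A heuristic such as "remove the most plentiful component first" fixes the split order and collapses this tree to a handful of candidates, which is exactly why simple evaluation schemes plus heuristics are needed before any rigorous simulation.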

Jaksland et al. (1995) developed a property-based approach to the synthesis of chemical processes that uses driving forces to characterise the effectiveness of alternative unit operations. Each separation process exploits specific property differences to purify the various components in a stream; the key driving force, and corresponding key property, exploited by each technology is identified. Table 1 summarises the properties of several downstream purification operations used in biochemical processes (proposed values of feasibility indices can be found in Steffens et al., 2000a). The approach relies on estimates of the physical properties of the components in the system. While the ability to predict all the properties of biochemical systems is still a long way off, recent developments for large molecules (polymers) and electrolytic solutions provide encouragement. The extensive UNIFAC group and parameter database of Hansen et al. (1991) was applied to describe activity coefficients of amino acids and peptides using the UNIFAC model; it was demonstrated that the group definition is not appropriate for peptides, and therefore not for proteins. Considerable research activity is under way, but for the time being the synthesis procedure will be based on measured properties for the system in question, using data from the most similar system where information is not available. It will of course be necessary to build up a database of data for relevant products and processes, but it is hoped that the synthesis procedure will itself help to guide experimentation.

Table 1. Separation processes and key properties (x is the particle or molecular diameter that can be handled).
Unit operation                        Physical property        Phase
Centrifugation                        Density                  S/L
Sedimentation                         Density                  S/L
Conventional filtration               Particle size            S/L
Rotary drum filtration                Particle size            S/L
Microfiltration                       Particle size            S/L
Microfiltration                       Molecular size           L/L
Ultrafiltration                       Molecular size           L/L
Diafiltration                         Molecular size           L/L
Precipitation                         Solubility               L/L
Two-liquid-phase separation           Partition coefficient    L/L
Ion exchange                          Charge density           L/L
Affinity chromatography               Biospecific attraction   L/L
Gel chromatography                    Log(mol. wt.)            L/L
Hydrophobic interaction chromatog.    Hydrophobicity           L/L

Figure 3. a) Best sequence of tasks obtained. b) Best arrangement in actual columns.

It is possible to start an iterative procedure in which the previous result is an upper bound, and to study the possibility of obtaining other configurations. Although it has not been presented here, it is also possible to consider the integration of some columns in a single shell, which is likely to produce some reduction in the investment cost. Note that each integration reduces the number of thermodynamically equivalent configurations by a factor of 4. Note also that there are a good number of configurations with similar performance. The proposed procedure is flexible and allows other configurations to be studied easily. Some final remarks: in the solution presented, the vapor always flows from columns at higher pressure to columns at lower pressure, in order to ease the operational problems associated with vapor transfer between columns. However, if the number of interconnections among columns makes control difficult, it is possible to implement some of the alternatives like those proposed by Agrawal (2000, 2001) for reducing the flow transfer among


columns, or, if possible, to integrate some columns. These last points can be implemented at either of the two levels, depending on their influence on the total cost.

4. References

Agrawal, R., 1996. Synthesis of distillation column configurations for multicomponent distillation. Ind. Eng. Chem. Res. 35, 1059-1071.
Agrawal, R., 2000. Thermally coupled distillation with reduced number of intercolumn vapor transfers. AIChE J. 46(11), 2198-2210.
Agrawal, R., 2001. Multicomponent distillation columns with partitions and multiple reboilers and condensers. Ind. Eng. Chem. Res. 40, 4258-4266.
Caballero, J.A., Grossmann, I.E., 2001. Generalized disjunctive programming model for the optimal synthesis of thermally linked distillation columns. Ind. Eng. Chem. Res. 40(10), 2260-2274.
Caballero, J.A., Grossmann, I.E., 2002. Logic based methods for generating and optimizing thermally coupled distillation systems. Proceedings ESCAPE-12, J. Grievink and J. van Schijndel (Editors), 169-174.
Douglas, J.M., 1988. Conceptual Design of Chemical Processes. McGraw-Hill Chemical Engineering Series.
Glinos, K., Malone, P., 1988. Optimality regions for complex column alternatives in distillation systems. Trans IChemE, Part A, Chem. Eng. Res. Des. 66, 229.
Halvorsen, I.J., 2001. Minimum energy requirements in complex distillation arrangements. Ph.D. Thesis, under supervision of S. Skogestad, Norwegian University of Science and Technology.
Kaibel, G., 1987. Distillation columns with vertical partitions. Chem. Eng. Tech. 10, 92.
Petlyuk, F.B., Platonov, V.M., Slavinskii, D.M., 1965. Thermodynamically optimal method of separating multicomponent mixtures. Int. Chem. Eng. 5, 555.
Rong, B., Kraslawski, A., 2001. Proceedings ESCAPE-12, J. Grievink and J. van Schijndel (Editors), (10) 319-324.
Sargent, R.W.H., Gaminibandara, K., 1976. Introduction: approaches to chemical process synthesis. In: Optimization in Action (Dixon, L.C., ed.), Academic Press, London.
Schultz, M.A., Steward, D.G., Harris, J.M., Rosenblum, S.P., Shakur, M.S., O'Brien, D., 2002. Reduce costs with dividing-wall columns. Chem. Eng. Prog., May, 64-70.
Triantafillou, C., Smith, R., 1992. The design and optimization of fully thermally coupled distillation columns. Trans IChemE, Part A, Chem. Eng. Res. Des. 70, 118.
Wright, R.O., 1949. US Patent 2,471,134.
Yeomans, H., Grossmann, I.E., 1999. A systematic modeling framework for superstructure optimization in process synthesis. Comp. Chem. Eng. 23, 709.

5. Acknowledgements

Financial support provided by the "Ministerio de Ciencia y Tecnologia", under project PPQ2002-01734, is gratefully acknowledged.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


A Continuous-Time Approach to Multiproduct Pipeline Scheduling

Diego C. Cafaro and Jaime Cerda
INTEC (UNL - CONICET), Guemes 3450, 3000 Santa Fe, Argentina
E-mail: [email protected]

Abstract

Product distribution planning is a critical stage in oil refinery operation. Pipelines provide the most reliable mode for delivering large amounts of petroleum products over long distances at low operating cost. This paper studies the short-term scheduling of a multiproduct pipeline that receives a number of liquid products from a single refinery source and distributes them among several depots. Pipeline operation usually involves accomplishing a sequence of product pumping runs of suitable length in order to meet customer demands at the promised dates while satisfying all operational constraints. This work introduces a novel MILP mathematical formulation that uses neither time discretization nor division of the pipeline into a number of single-product packs. In this way, a more rigorous problem representation ensuring the optimality of the proposed schedule has been developed. Moreover, a severe reduction in binary variables and CPU time with regard to previous approaches was also achieved. A real-world case study involving a single pipeline, four oil products and five distribution depots was successfully solved.

1. Introduction

Product distribution planning has become an important industrial issue, especially for oil refineries continually delivering large amounts of products to several destinations. Pipelines represent the most reliable and cost-effective way of transporting such volumes of oil derivatives over large distances. This paper is concerned with a distribution system consisting of a single multiproduct pipeline receiving different products from an oil refinery and distributing them among several depots connected to local consumer markets. Because of liquid incompressibility, transfers of material from the pipeline to depots for fulfilling consumer demands necessarily occur simultaneously with the injection of new runs of products into the pipeline. By tracking the runs as they move along the pipeline, variations in their sizes and coordinates with time, due to the injection of new runs and the allocation of products to depots, can be determined. A few papers on the scheduling of a single pipeline transporting products from an oil refinery to multiple destinations have recently been published. Sasikumar et al. (1997) presented a knowledge-based heuristic search system that generates good monthly pumping schedules for large-scale problems. In turn, Rejowski and Pinto (2001) developed a pair of large-size MILP discrete-time scheduling models by first dividing the pipeline into a number of single-product packs of equal and unequal sizes, respectively. This work presents a novel MILP continuous-time scheduling approach that accounts for pumping run sequencing constraints, forbidden sequences, mass balances, tank loading and
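The run-tracking idea that follows from liquid incompressibility can be illustrated with a toy plug-flow bookkeeping routine. This is a simplification for illustration only: the paper's model also handles intermediate depots, run coordinates and due dates, and the products and volumes below are arbitrary.

```python
# Toy plug-flow bookkeeping for a single pipeline. The pipeline contents are
# a FIFO list of [product, volume] runs, refinery end first; because the
# liquid is incompressible, injecting V units at the refinery end expels
# exactly V units at the delivery end.

def inject(runs, product, volume):
    """Pump `volume` of `product` into the refinery end of `runs` and
    return the material expelled at the far end as (product, volume) pairs."""
    runs.insert(0, [product, volume])      # new run enters at the refinery end
    expelled, to_remove = [], volume
    while to_remove > 1e-9:
        tail_product, tail_volume = runs[-1]
        taken = min(tail_volume, to_remove)
        expelled.append((tail_product, taken))
        to_remove -= taken
        if taken >= tail_volume:
            runs.pop()                     # run fully delivered
        else:
            runs[-1][1] -= taken           # run partially delivered
    return expelled

# Initial pipeline contents, refinery end first (volumes in arbitrary units).
pipeline = [["P1", 75], ["P2", 25], ["P1", 125]]
out = inject(pipeline, "P3", 100)
print(out)        # the P1 run at the far end is drained first
print(pipeline)   # the new P3 run now sits at the refinery end
```

Pumping 100 units of P3 drains 100 units from the oldest P1 run, leaving 25 units of it still inside the line.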


unloading operations and permissible levels, feasibility conditions for transferring material from pipeline runs to depots, product demands and due dates. The problem objective is to minimize pumping, inventory and transition costs while satisfying all problem constraints. The transition cost accounts for material losses and interface reprocessing costs at the depots due to product contamination between consecutive runs.

2. Problem Definition

Given: (a) the multiproduct pipeline structure (the number of depots and the distance between every depot and the oil refinery); (b) the available tanks at every depot (capacity and assigned product); (c) the product demands to be satisfied at every depot at the end of the horizon; (d) the sequence of runs inside the pipeline and their initial volumes at the horizon starting time; (e) the scheduled product output at the refinery during the time horizon (product, amount and production time interval); (f) initial inventory levels in refinery and depot tanks; (g) the maximum injection rate into the pipeline, the supply rate from the pipeline to depots and the delivery rate from depots to local markets; and (h) the length of the time horizon. The problem goal is to establish the optimal sequence of pumping runs, the run lengths and the product assigned to each one in order to: (1) satisfy every product demand at each depot in a timely fashion; (2) keep the inventory level in refinery and depot tanks within the permissible range at all times; and (3) minimize the sum of pumping, transition and inventory carrying costs. To do that, variations in the sizes and coordinates of new/old runs as they move along the pipeline, as well as the evolution of inventory levels in refinery and depot tanks with time, are to be tracked.

3. Mathematical Formulation

3.1. Run sequencing constraints
Every new pumping run i in I^new should never start before completing the previous one and the subsequent changeover operation:

    C_i - L_i >= C_{i-1} + tau_{s,s'} * (y_{i-1,s} + y_{i,s'} - 1)    for all i in I^new; s, s' in S    (1)

where

    L_i <= C_i <= hmax    for all i in I^new    (2)
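The sequencing rule above can be read as a feasibility check on a candidate schedule. The sketch below does exactly that; the changeover times, horizon length and schedules are invented for illustration, whereas in the MILP the start and completion times are decision variables.

```python
# Feasibility check mirroring the run sequencing constraints: each run must
# start after the previous run's completion plus the product changeover, and
# every completion must fall inside the horizon. All numbers are hypothetical.

H_MAX = 100.0                        # assumed horizon length (hmax)
CHANGEOVER = {("P1", "P2"): 2.0,     # assumed changeover times tau_{s,s'}
              ("P2", "P1"): 2.0}

def feasible(schedule):
    """schedule: list of (product, start, completion) triples in run order."""
    for k, (prod, start, comp) in enumerate(schedule):
        length = comp - start
        if not (length <= comp <= H_MAX):          # bound constraint
            return False
        if k > 0:
            prev_prod, _, prev_comp = schedule[k - 1]
            tau = CHANGEOVER.get((prev_prod, prod), 0.0)
            if start < prev_comp + tau:            # sequencing constraint
                return False
    return True

ok  = feasible([("P1", 0, 10), ("P2", 12, 20)])   # respects the 2 h changeover
bad = feasible([("P1", 0, 10), ("P2", 11, 20)])   # starts during the changeover
print(ok, bad)
```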

3.2. Relationship between the volume and the length of a new pumping run
The volume injected into the pipeline while pumping a new run i in I^new is proportional to the run length:

    Q_i = vb * L_i    for all i in I^new    (10)

3.8. Overall balance around the pipeline during a new pumping run i' in I^new
The overall volume transferred from runs i in I to depots j in J while pumping a new run i' in I^new should be equal to the volume injected into the pipeline during run i':

    sum_{i in I} sum_{j in J} D_{i,j}^{(i')} = Q_{i'}    for all i' in I^new

Inventory levels of every product s at each depot j must remain within the permissible range while any new run i' is pumped:

    IDmin_{s,j} <= ID_{s,j}^{(i')} <= IDmax_{s,j}    for all s in S, j in J_s, i' in I^new    (24)

3.13. Initial conditions
Old runs i in I^old have been chronologically arranged by decreasing F_i^0, where F_i^0 stands for the upper coordinate of run i in I^old at time t = 0. Moreover, the initial volumes of old runs (W_i^0, i in I^old) and the product to which each one was assigned are all problem data. Then,

    W_{i,i'-1} = W_i^0    for all i in I^old, i' = first(I^new)    (25)

(25)

3.14. Problem objective function
The problem goal is to minimize the total operating cost, including pumping costs, the cost of reprocessing interface volumes and the cost of carrying product inventory in refinery and depot tanks.
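The three cost terms named above can be tallied with a minimal evaluator. This is illustrative bookkeeping only, not the paper's objective function: the unit costs, interface costs and inventory figures below are invented.

```python
# Minimal cost bookkeeping for the three terms in the objective:
# pumping + interface reprocessing + inventory carrying.
# All unit costs and quantities are hypothetical.

PUMP_COST = {"P1": 3.5, "P2": 4.5}          # $/m3, invented
INTERFACE_COST = {("P1", "P2"): 500.0,      # $ per interface, invented
                  ("P2", "P1"): 500.0}
INV_COST = 0.01                              # $/(m3 h), invented

def total_cost(runs, avg_inventory, horizon):
    """runs: list of (product, volume) pairs in pumping order."""
    pumping = sum(PUMP_COST[p] * v for p, v in runs)
    interfaces = sum(INTERFACE_COST.get((a[0], b[0]), 0.0)
                     for a, b in zip(runs, runs[1:]))
    inventory = INV_COST * avg_inventory * horizon
    return pumping + interfaces + inventory

cost = total_cost([("P1", 100), ("P2", 50)], avg_inventory=1000, horizon=75)
print(cost)
```

In the MILP the run volumes, interfaces and inventory profiles are all decision variables rather than fixed inputs, so the same sum appears as a linear objective.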


4. Results and Discussion

The proposed MILP approach is illustrated by solving a large-scale multiproduct pipeline-scheduling problem first introduced by Rejowski and Pinto (2001). It consists of an oil refinery that must distribute four products among five depots through one pipeline. Problem data are included in Table 1; pumping and inventory costs as well as the interface volumes can be found in Rejowski and Pinto (2001). There is initially a sequence of five old runs inside the pipeline containing products (P1, P2, P1, P2, P1) with volumes, in 10^3 m^3, of (75, 25, 125, 175, 75), respectively. The optimal solution was found in 25 s on a Pentium III PC (933 MHz) with ILOG/CPLEX. This represents a three-order-of-magnitude time saving with regard to the model of Rejowski and Pinto (2001). Figure 1 shows the optimal sequence of new pumping runs as well as the evolution of sizes and coordinates of new/old product campaigns as they move along the pipeline. Four new runs involving products (P2, P3, P2, P4) have been performed.

Figure 1. Optimal sequence of pumping runs (run time intervals [h] and volumes [10^3 m^3]).

5. Conclusions

A new continuous-time approach to the scheduling of a single multiproduct pipeline has been presented. By adopting a continuous representation in both time and volume, a more rigorous problem representation and a severe reduction in binary variables and CPU time have been achieved simultaneously.


Table 1. Problem data (in 10^3 m^3).

Product  Level     D1    D2    D3    D4    D5    Refinery
P1       Min       90    90    90    90    90    270
         Max       400   400   400   400   400   1200
         Initial   190   230   200   240   190   500
P2       Min       90    90    90    90    90    270
         Max       400   400   400   400   400   1200
         Initial   180   210   180   180   180   520
P3       Min       10    10    10    10    10    50
         Max       70    70    70    70    70    350
         Initial   50    65    60    60    60    210
P4       Min       90    90    90    90    90    270
         Max       400   400   400   400   400   1200
         Initial   120   140   190   190   170   515

Location from refinery [10^3 m^3]: D1 = 100, D2 = 200, D3 = 300, D4 = 400, D5 = 475.

Product demands and pumping costs per depot:

Product  Quantity                D1    D2    D3    D4    D5
P1       Demand                  100   110   120   120   150
         Pumping cost [$/m^3]    3.5   4.5   5.5   6.0   6.9
P2       Demand                  70    90    100   80    100
         Pumping cost [$/m^3]    3.6   4.6   5.6   6.2   7.3
P3       Demand                  60    40    40    0     20
         Pumping cost [$/m^3]    4.8   5.7   6.8   7.9   8.9
P4       Demand                  60    50    50    50    50
         Pumping cost [$/m^3]    3.7   4.7   5.7   6.1   7.0

6. Nomenclature

(a) Sets
S        set of derivative oil products
I^old    set of old pumping runs inside the pipeline at the start of the time horizon
I^new    set of new pumping runs to be potentially executed during the time horizon
J        set of depots along the pipeline
O        set of scheduled production runs in the refinery during the time horizon

(b) Parameters
hmax         horizon length
IP(s,s')     volume of the interface between runs containing products s and s'
P_j          volumetric coordinate of depot j along the pipeline
B_o          size of the refinery production run o
a_o, b_o     starting/finishing time of the refinery production run o
vb           pumping rate
qd(s,j)      overall demand of product s to be satisfied by depot j
-            initial inventory of product s at the refinery
-            initial inventory of product s at depot j
vm           maximum supply rate to the local market

(c) Variables
W_i(i')      volume of run i at time C_i'
y(i,s)       binary denoting that product s is contained in run i
Q_i'         volume of product injected into the pipeline while pumping the new run i'
x(i,j)(i')   binary denoting that a portion of run i is transferred to depot j while pumping run i'
-            binary denoting that run i ends after the refinery production run o has started
-            binary denoting that run i begins after the refinery production run o has ended
D(i,j)(i')   volume of run i transferred from the pipeline to depot j while pumping run i'
DS(s,i,j)(i') volume of product s transferred from run i to depot j while pumping run i'
C_i', L_i'   completion time / length of the new pumping run i' in I^new
F_i(i')      upper coordinate of run i along the pipeline at time C_i'

7. References

ILOG OPL Studio 2.1 User's Manual, 1999, ILOG S.A., France.
Rejowski, R., Pinto, J.M., 2001, Paper P10, Proceedings of the 2nd Pan American Workshop on Process Systems Engineering, Guaruja-Sao Paulo, Brazil.
Sasikumar, M., Prakash, P., Patil, S.M., Ramani, S., 1997, Knowledge-Based Systems 10, 169.



Optimal Grade Transition Campaign Scheduling in a Gas-Phase Polyolefin FBR Using Mixed Integer Dynamic Optimization

C. Chatzidoukas^a,b, C. Kiparissides^a, J.D. Perkins^b, E.N. Pistikopoulos^b,*
^a Department of Chemical Engineering and Chemical Process Engineering Research Institute, Aristotle University of Thessaloniki, PO Box 472, 54006 University City, Thessaloniki, Greece
^b Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College, London SW7 2BY, UK

Abstract

Transitions between different polymer grades have become a frequent operating profile for polymerization processes under current market requirements for products with diverse specifications. A broader view of the plant operating profile, rather than focusing on a single changeover between two polymer grades, raises the problem of the optimal sequence of transitions between a certain number of grades. An integrated approach to optimal production scheduling, in parallel with the optimal transition profiles, is carried out using Mixed Integer Dynamic Optimization (MIDO) techniques. A remarkable improvement in process economics is observed in terms of off-spec production and overall transition time.

1. Introduction

Polymerization processes have adopted the character of continuous multiproduct plants in response to current polymer demand. Specifically, the variability observed in polymer market demand, in terms of product quality specifications, calls for frequent grade transition policies in polymerization plants, with inevitable consequences for process economics due to the regular "necessary" disturbances from steady-state operating conditions. The issue of how to operate such processes as continuous multiproduct plants, in a global polymer industry environment with intense competitive pressures, therefore becomes pressing. The products produced during a transition are off-spec, since they do not meet the specifications of either the initial or the final product, and consequently must normally be sent to waste treatment facilities. Combined with the usually long residence times, and therefore long transition times, of continuous polymerization reactors, this results in an exceptionally large amount of off-spec product and consequently a serious treatment and product-loss problem. In order to develop an economically viable operating profile for the process, under the sequential

To whom correspondence should be addressed. Tel: (44) (0) 20 7594 6620, Fax: (44) (0) 20 7594 6606, E-mail: [email protected]


production mode of different grades, an integrated approach to a multilevel process synthesis problem is required, taking into consideration process design, process control, optimisation of transient operation and production scheduling as interacting sub-problems. In the present study a unified approach is attempted, considering process control, process operation and production planning issues. The problem is examined in relation to a Ziegler-Natta catalytic gas-phase ethylene-1-butene copolymerization fluidised bed reactor (FBR) operating in a grade transition campaign. The process design is fully defined a priori, since the polymerization system employed in this research is a unit of an industrial polymer process. A comprehensive kinetic mechanism describing catalytic olefin polymerization, in conjunction with a detailed model of the FBR and a heat exchanger, has been developed to simulate the process dynamics and the polymer molecular properties (Chatzidoukas et al., 2002). This model provides the platform for the study of process control, dynamic optimisation of the transient operation between a number of grades, and the optimal sequence of transitions. A mixed integer dynamic optimization (MIDO) algorithm enables these issues to be dealt with simultaneously, avoiding the exhaustive calculation of dynamic optimal profiles for all possible binary transitions, which may be computationally prohibitive when a large number of polymer grades is considered.

2. Problem Definition

Even though a year is a typical time scale for the operating life of an industrial polymer plant, production scheduling on this basis would be hazardous given the observed fluctuations in market demand. It is therefore expected that in an annual period polymer plants run several production campaigns, and production planning for each one would be more efficient. A short-term campaign involving a single batch of four polymer grades (A, B, C, D) has been selected as a representative case study to illustrate our approach to the dimensions of the problem. Each of the four polymer grades is produced once, without adopting a cyclic mode of process operation. The process therefore starts from an initial operating point and does not return to that point at the end of the campaign, and the timely satisfaction of customer orders should be settled on this basis. Furthermore, the campaign is studied separately from the previous and the next one, in the sense that the production sequence is determined independently, neglecting how the final grade of this campaign might affect the sequence of the next campaign as a starting point. Similarly, the starting point (Init) of the current campaign is considered as the given final grade of the previous one. Since the Init point is expected to affect the sequence of transitions, its polymer properties have been selected on purpose to lie in between the polymer properties of the four desired polymer grades. Furthermore, the polymer properties of the four grades have been chosen in such a way that a monotonic change (either increase or decrease) in all three polymer properties simultaneously is impossible when moving from one grade to another during the campaign. This renders the problem more complicated, reducing the possibility of applying heuristic rules for the selection of the transition sequence.


A simplifying assumption, particularly for the formulation of the performance criterion, is that between transitions the process runs at steady state under perfect control, eliminating any disturbance and hence preventing any deviation from on-spec production. With this assumption, the production periods between the transition periods need not be considered in this study, and the performance index accounts only for the transition periods. Melt index, polymer density and polydispersity are the molecular polymer properties (PP: MI, p, PD) identifying each polymer grade. The four operating points corresponding to the four polymer grades have been found by steady-state maximization of monomer conversion with respect to all available model inputs, so that the process operates at maximum monomer conversion when running in a production period. In the framework of the integrated approach attempted in this study, selection and tuning of the regulatory and supervisory closed-loop feedback controllers is required. From Figure 1, showing a schematic representation of a gas-phase catalytic olefin polymerization FBR, one can identify nine possible manipulated variables: the monomer and comonomer mass flow rates (Fmon1, Fmon2) in the make-up stream; the hydrogen, nitrogen and catalyst mass flow rates (FH2, FN2, Fcat); the mass flow rate of the bleed stream (Fbleed); the mass flow rates of the recycle and product removal streams (Frec, Fout); and the mass flow rate of the coolant water stream to the heat exchanger (Fwater). In practice, instead of manipulating the comonomer mass flow rate Fmon2, the ratio of the comonomer to the monomer inflow rate in the make-up stream (Ratio = Fmon2/Fmon1) is selected as the manipulated variable.
The control structure derived from a relative gain array (RGA) analysis is applied over the range of all transitions in the campaign and is responsible for holding the reactor's bed level (h), temperature (T), pressure (P) and production rate (Rp) under control in a multiple-input multiple-output configuration of PI feedback controllers. Table 1 describes the pairings of the control scheme, also defining the manipulated variables. The last two manipulated variables are used by the optimizer to track the polymer properties during a transition to the desired values corresponding to each grade.
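The RGA computation behind such pairing decisions is compact enough to sketch. The 2x2 gain matrix below is invented for illustration and is not the FBR's gain matrix; the point is only the mechanics: Lambda = G elementwise-times inv(G) transposed, and pairings are chosen on relative gains closest to 1.

```python
# Relative gain array (RGA) for a 2x2 steady-state gain matrix, the kind of
# analysis used to select controller pairings. Gains are hypothetical.

def rga_2x2(G):
    (g11, g12), (g21, g22) = G
    det = g11 * g22 - g12 * g21
    lam11 = g11 * g22 / det            # lambda_11 = g11 * g22 / det(G)
    return [[lam11, 1.0 - lam11],
            [1.0 - lam11, lam11]]      # rows and columns of an RGA sum to 1

G = [[2.0, 0.5],
     [0.4, 1.0]]                       # invented gains: outputs x inputs
Lambda = rga_2x2(G)
print(Lambda)   # lambda_11 = 2/1.8 = 1.11, favouring the diagonal pairing
```

For larger systems the same calculation is done on the full gain matrix; here the relative gain near 1 on the diagonal supports pairing output 1 with input 1 and output 2 with input 2.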

3. Mathematical Representation

The total transition time and the total off-spec production during the campaign are the criteria that should be incorporated in the objective function for the evaluation of candidate alternatives. Binary variables are used to model the potential assignment of different grades to time slots in the total horizon. The time horizon for the campaign is divided into 20 time slots (5 intervals for each transition). Two integer variables are employed for each polymer grade: one indicating when this grade is the starting point (Y1X) of a binary transition and one indicating when this grade is the desired final point (Y2X) of the transition. Since the time topology of the Init operating point is known and constant for all potential campaigns, and since it cannot be a desired grade, only one binary variable is ascribed to it, and this variable is known during all time slots. Hence a total of eight 0-1 variables are required to describe the timely distribution of the four desired grades. The mathematical formulation of the combined operational and scheduling problem can be stated as:


Table 1. Best pairings of controlled and manipulated variables.

Controlled                Manipulated
Bed height (h)            Product withdrawal rate (Fout)
Temperature (T)           Coolant feed rate (Fw)
Pressure (P)              Nitrogen feed rate (FN2)
Production rate (Rp)      Monomer make-up feed rate (Fmon1)
Density (p)               Comonomer ratio (Ratio)
Melt index (MI)           Hydrogen feed rate (FH2)

Figure 1. Gas-phase polymerization FBR unit.

Minimise over u(t), Y1X(t), Y2X(t):

    Phi = w1 * Integral_0^th [ sum_{i=1}^{3} PP_i_transition_dev ] dt + w2 * offspec    (1)

where the normalized squared transition deviation of polymer property PP_i from its desired value is defined as:

    PP_i_transition_dev =
        (PP_i(t) - Y2A*PP_A - Y2B*PP_B - Y2C*PP_C - Y2D*PP_D)^2
        / (Y1Init*PP_Init + Y1A*PP_A - Y2A*PP_A + Y1B*PP_B - Y2B*PP_B
           + Y1C*PP_C - Y2C*PP_C + Y1D*PP_D - Y2D*PP_D)^2

subject to:

    f(x'(t), x(t), u(t), t) = 0                          (2)
    y(t) = h(x(t), u(t), t)                              (3)
    x(t0) = x0                                           (4)
    0 <= g(x(t), u(t), y(t), Y1X(t), Y2X(t), t)          (5)

where x, u, y are the state, control and output vectors, respectively. A number of inequality constraints, described by Eqn. (5), stem from the definition of the problem and the need for feasible process operation. Specifically, end-point constraints have been imposed on selected process variables to guarantee that each transition ends at the desired steady-state optimal operating point. Finally, constraints on the binary variables were imposed to ensure the production of all polymer grades in a sequential transition mode.
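The slot-assignment bookkeeping the binary variables encode can be illustrated with plain enumeration. This sketch is not the MILP itself: it simply lists the 4! candidate campaigns and derives the (start grade, end grade) pair of each transition for one candidate, which is what the Y1X/Y2X variables express per time slot.

```python
from itertools import permutations

GRADES = ["A", "B", "C", "D"]

def assignments(sequence, init="Init"):
    """Return the (start grade, end grade) pair for each of the four
    transitions implied by a production sequence."""
    points = [init] + list(sequence)
    return list(zip(points[:-1], points[1:]))

all_sequences = list(permutations(GRADES))
print(len(all_sequences))                    # 24 candidate campaigns
print(assignments(["C", "A", "B", "D"]))     # e.g. the Init->C->A->B->D campaign
```

With 24 candidates and a full dynamic optimisation needed to cost each one, the motivation for avoiding exhaustive enumeration is clear.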


4. Solution Algorithm and Results

The combined structural and operational nature of the problem, where both continuous and discrete decisions are involved, is addressed using a Mixed Integer Dynamic Optimization (MIDO) algorithm (Mohideen et al., 1996; Algor and Barton, 1999; Bansal et al., 2002). Under this approach, the problem is iteratively decomposed into two interacting levels (sub-problems), in consistency with the hierarchical framework applied to scheduling problems (Mah, 1990; Rowe, 1997): an upper level (primal problem), where the operating profile is determined using dynamic optimisation techniques, and a lower level (master problem), where candidate production sequences are developed. The dual information and the value of the objective function transferred from the primal to the master problem, which is solved as a mixed integer linear programming (MILP) problem, are used to update the candidate production structure until the solutions of the two sub-problems converge within a tolerance. The flow rate of the hydrogen feed stream, the ratio of comonomer to monomer flow rate in the make-up feed stream and the binary variables constitute the set of time-varying controls, while the controller parameters and the length of each time interval are time-invariant decisions. The commercial modelling-simulation package gPROMS, in conjunction with the gOPT optimization interface (Process Systems Enterprise Ltd, London), is used for the integration of the DAE system and the dynamic optimization of the grade transition problem. Additionally, the GAMS/CPLEX solver is used for the solution of the MILP problems resulting from the master problem. Four iterations between the primal and master sub-problems were sufficient for the MIDO algorithm to locate the optimal solution. Table 2 presents the sequence Init->C->A->B->D as the optimal production schedule. It also illustrates the remaining three production sequences derived during the four iterations of the algorithm.
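The primal/master interplay can be caricatured in a few lines. This is a deliberately crude toy under stated assumptions: the "primal" below is an invented cost function standing in for the dynamic optimisation of one campaign, and the loop scans candidates exhaustively, whereas the real master is an MILP driven by dual information and the real algorithm converges without visiting every sequence.

```python
from itertools import permutations

def primal(seq):
    """Stand-in for the dynamic optimisation of one transition campaign.
    The per-transition penalties are invented for illustration."""
    penalty = {("C", "A"): 1, ("A", "B"): 1, ("B", "D"): 1}
    return 10 + sum(penalty.get(pair, 5) for pair in zip(seq, seq[1:]))

def mido_toy(grades):
    """Evaluate candidate sequences and keep the incumbent best."""
    best_seq, best_cost, iterations = None, float("inf"), 0
    for seq in permutations(grades):   # real master: MILP proposal, not a scan
        iterations += 1
        cost = primal(seq)             # real primal: a dynamic optimisation
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost, iterations

seq, cost, its = mido_toy(["A", "B", "C", "D"])
print(seq, cost)   # the sequence containing all three cheap transitions wins
```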
A comparison between them in terms of time horizon, objective function and total amount of off-spec product reveals the superiority of the optimal sequence, which results in a 16% reduction of off-spec product compared to the worst scenario. Figures 2-4 display the optimal profiles for PD, MI and p during the transition campaign. It is noticed that the MIDO algorithm advocates as optimal production planning a sequence with monotonic change in polymer density and polydispersity; a simultaneous monotonic change is, however, impossible for MI.

Table 2. Comparison of the proposed sequences.

Sequence            Time horizon   Objective function   Off-spec product
Init->C->A->B->D    27.16 hr       166.28               132 tn
Init->D->A->C->B    30.63 hr       221.038              181 tn
Init->D->B->A->C    34.13 hr       185.963              153 tn

Figure 2. Optimal PD profile under the optimal production planning.

76

0.25-

1

Polymer Ml |

Grade D

Polymer Density (gr/cm') I

0.20-

0.15-

0.10Initial point

Vll

Grade C

0.05-

/ GradeB ^ ^ _ _ ^ zyxwvutsrqponmlkjihgfedcbaZYXWVUTSRQPONMLKJIHGFEDCBA X Grade A 0.00-zyxwvutsrqponmlkjihgfedcbaZYXWVUTSRQPONMLKJIHGFEDCBA '

1

'

1

-1

1

1

15 Time(hr)

Figure 3.: Optimal MI profile under the optimal production planning.

Tiine(hr)

Figure 4: Optimal density profile under the optimal production planning.

5. Conclusions

The production sequence in a gas-phase olefin polymerization plant running a grade transition campaign between four polymer grades has been studied in parallel with the optimal transition profile to switch the process from one grade to another. Both the optimal production schedule and the operating profiles for the optimal transition between the polymer grades have been found using a Mixed Integer Dynamic Optimization algorithm. The reduction of the off-spec production and of the total transition time during the campaign highlights the economic benefits for the polymerization plant resulting from the integrated approach to the problem.

6. References

Allgor, R.J., Barton, P.I., 1999, Comput. Chem. Eng., 23, 567.
Bansal, V., Perkins, J.D. and Pistikopoulos, E.N., 2002, Ind. Eng. Chem. Res., 41, 760.
Chatzidoukas, C., Perkins, J.D., Pistikopoulos, E.N. and Kiparissides, C., 2002, submitted to Chemical Engineering Science.
Mah, R.S.H., 1990, Chemical Process Structures and Information Flows, Butterworths Series in Chemical Engineering, Chap. 6.
Mohideen, M.J., Perkins, J.D., Pistikopoulos, E.N., 1996, AIChE J., 42, 2251.
Rowe, A.D., 1997, PhD Thesis, Imperial College, University of London.

7. Acknowledgements

The authors gratefully acknowledge the financial support provided for this work by DGXII of the EU under the GROWTH Project "PolyPROMS" G1RD-CT-2000-00422.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Environmentally-Benign Transition Metal Catalyst Design using Optimization Techniques

Sunitha Chavali(1), Terri Huismann(1), Bao Lin(2), David C. Miller(2) and Kyle V. Camarda(1)

(1) Department of Chemical and Petroleum Engineering, The University of Kansas, 1530 W. 15th Street, 4006 Learned Hall, Lawrence, KS 66045, USA
(2) Department of Chemical Engineering, Rose-Hulman Institute of Technology, 5500 Wabash Avenue, Terre Haute, IN 47803, USA

Abstract Transition metal catalysts play a crucial role in many industrial applications, including the manufacture of lubricants, smoke suppressants, corrosion inhibitors and pigments. The development of novel catalysts is commonly performed using a trial-and-error approach which is costly and time-consuming. The application of computer-aided molecular design (CAMD) to this problem has the potential to greatly decrease the time and effort required to improve current catalytic materials in terms of their efficacy and biological effects. This work applies an optimization approach to design environmentally benign homogeneous catalysts, specifically those which contain transition metal centers. Two main tasks must be achieved in order to perform the molecular design of a novel catalyst: biological and chemical properties must be estimated directly from the molecular structure, and the resulting optimization problem must be solved in a reasonable time. In this work, connectivity indices are used for the first time to predict the physical properties of a homogeneous catalyst. The existence of multiple oxidation states for transition metals requires a reformulation of the original equations for these indices. Once connectivity index descriptors have been defined for transition metal catalysts, structure-property correlations are then developed based on regression analysis using literature data for various properties of interest. These structure-property correlations are then used within an optimization framework to design novel homogeneous catalyst structures for use in a given application. The use of connectivity indices which define the topology of the molecule within the formulation guarantees that a complete molecular structure is obtained when the global optimum is found. The problem is then reformulated to create a mixed-integer linear program. 
To solve the resulting optimization problem, two methods are used: Tabu search (a stochastic method) and outer approximation (a deterministic approach). The solution methods are compared using an example involving the design of an environmentally-benign homogeneous catalyst containing molybdenum.

1. Introduction

Transition metal catalysts play a crucial role in many industrial applications, including the manufacture of lubricants, smoke suppressants, corrosion inhibitors and pigments. The development of novel catalysts is commonly performed using a trial-and-error approach which is costly and time-consuming. The application of computer-aided molecular design


(CAMD) to this problem has the potential to greatly decrease the time and effort required to improve current catalytic materials in terms of their efficacy and biological effects. This work applies an optimization approach to design environmentally benign homogeneous catalysts, specifically those which contain transition metal centers. The use of optimization techniques coupled with molecular design, along with property estimation methods, allows the determination of candidate molecules matching a set of target properties. For example, it has been reported (Hairston, 1998) that a computational algorithm has been successfully implemented to design a new pharmaceutical which fights cancer. This work employs connectivity indices, numerical values which describe the electronic structure of a molecule, to characterize the molecule and to correlate its internal structure with physical properties of interest. Kier and Hall (1976) report correlations between connectivity indices and many key properties of organic compounds, such as density, solubility and toxicity. The correlations used to compute the physical properties are combined with structural constraints and reformulated into an MINLP, which is then solved via various methods to generate a list of near-optimal molecular structures. Raman and Maranas (1998) first employed connectivity indices within an optimization framework, and Camarda and Maranas (1999) used connectivity indices to design polymers with prespecified values of specific properties. An application of connectivity indices to the computational molecular design of pharmaceuticals was described by Siddhaye et al. (2000). In earlier molecular design work, group contribution methods were used to estimate the values of physical properties, as in Gani et al. (1989), Venkatasubramanian et al. (1994) and Maranas (1996).
The connectivity indices, however, have the advantage that they take into account the internal molecular structure of a compound. The property predictions generated from these indices are thus more accurate than those from group contributions; furthermore, when a molecular design problem is solved using these indices, a complete molecular structure results, and no secondary problem must be solved to recover the final molecular structure.

2. Property Prediction via Connectivity Indices

The basis for many computational property estimation algorithms is a decomposition of a molecule into smaller units. Topological indices are defined over a set of basic groups, where a basic group is defined as a single non-hydrogen atom in a given valency state bonded to some number of hydrogen atoms. Table 1 gives the basic groups used in this work, along with the atomic connectivity indices for each type of group. In this table, the δ values are the simple atomic connectivity indices for each basic group, and refer to the number of bonds which can be formed by a group with other groups. The δ^v values are atomic valence connectivity indices, which describe the electronic structure of each basic group, including lone-pair electrons and electronegativity. For basic groups involving carbon, oxygen and halogen atoms, the definitions of these indices are from the work of Bicerano (1996). However, these indices assume the non-hydrogen atom can have only one valency state. For transition metals, which can assume multiple valency states, the definition of δ^v must be extended. We have defined δ^v based on the number of electrons participating


in the bonding, instead of those present in the outer shell. The resulting values of δ^v for the molybdenum groups are listed in Table 1, along with values for other groups from Bicerano (1996). Note that atomic connectivity indices can be defined for any basic group; the small table of groups used here is merely for illustrative purposes.

Table 1: Basic Groups and their Atomic Connectivity Indices.

Group    δ    δ^v
-CH3     1    1
-CH2-    2    2
>CH-     3    3
Mo       …    …

The density is estimated from a regressed polynomial correlation in the zeroth-, first- and second-order simple and valence connectivity indices of the molecule.

Since connectivity indices are defined in a very general way, they are capable of describing any molecule, and thus correlations based on them tend to be widely applicable and fairly accurate over a wide range of compounds. Using such correlations, an optimization problem has been formulated which has as its optimal solution a molecule which most closely matches a set of target property values for a molybdenum catalyst.
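As a concrete illustration of how the zeroth- and first-order indices follow from the δ and δ^v values, the sketch below computes them for a molecule given as a list of basic groups and bonds. Only the simple carbon groups are tabulated, and the function and variable names are ours, not the paper's.

```python
import math

# Sketch: zeroth- and first-order connectivity indices (Kier and Hall, 1976)
# from basic-group delta values. For these alkane groups the simple and
# valence deltas coincide.
DELTA = {"-CH3": (1, 1), "-CH2-": (2, 2), ">CH-": (3, 3)}

def chi_indices(groups, bonds):
    """Return (chi0, chi1, chi0v, chi1v) for a molecule given as a list of
    basic-group labels and a list of (i, j) bonded-pair indices."""
    d = [DELTA[g][0] for g in groups]
    dv = [DELTA[g][1] for g in groups]
    chi0 = sum(1.0 / math.sqrt(x) for x in d)
    chi0v = sum(1.0 / math.sqrt(x) for x in dv)
    chi1 = sum(1.0 / math.sqrt(d[i] * d[j]) for i, j in bonds)
    chi1v = sum(1.0 / math.sqrt(dv[i] * dv[j]) for i, j in bonds)
    return chi0, chi1, chi0v, chi1v

# n-butane: CH3-CH2-CH2-CH3
chi0, chi1, chi0v, chi1v = chi_indices(
    ["-CH3", "-CH2-", "-CH2-", "-CH3"], [(0, 1), (1, 2), (2, 3)])
# chi1 = 2/sqrt(2) + 1/2 = 1.914, the textbook value for n-butane
```

Extending the scheme to molybdenum amounts to adding entries to the δ^v table, exactly as the reformulated definition in the text prescribes.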


3. Problem Formulation

The optimization problem which determines the best molecule for a given application uses an objective function which minimizes the difference between the target property values and the estimated values of the candidate molecule. This can be written as

min φ = Σ_{m∈R} | (P_m - P_m^target) / P_m^scale |

where R is the set of all targeted properties, P_m is the estimated value of property m, P_m^scale is a scale factor used to weight the importance of one property relative to another, and P_m^target is the target value for property m. The molecule is represented mathematically using two sets of binary variables. The first is a partitioned adjacency matrix with elements a(i,j,k), which are one if basic groups i and j are bonded with a bond of multiplicity k, and zero otherwise. In the example presented here, the basic groups can only form single bonds, and thus the index k will be dropped. This matrix is partitioned such that specific rows are preassigned to different basic groups, so that it can be determined a priori which δ_i and δ_i^v values should be used for each basic group i in the molecule. Since we do not know how many of each type of group will occur in the final optimal molecule, the partitioned adjacency matrix will have many rows which do not correspond to a basic group. The binary variable w_i is set to one if the i-th group in the adjacency matrix exists in the molecule, and is zero otherwise. In order to store the existence of a triplet in the molecule (needed to compute the second-order connectivity index), a new binary variable y(i,j,l) is defined: an element y(i,j,l) is equal to one if group i is bonded to group j, and group j is bonded to group l. These three sets of variables provide sufficient information to compute the connectivity indices, and thus estimate molecular properties. These data structures are then included within the equations for the connectivity indices to allow the structure of the molecule to be used to estimate physical properties. Along with these definitions, property correlations using the connectivity indices must also be included in the overall formulation. Finally, structural feasibility constraints are needed to ensure that a chemically feasible molecule is derived.
In order to guarantee that all the groups in the molecule are bonded together as a single unit, we include the constraints from a network flow problem into the formulation. A feasible solution to a network flow problem across the bonds of a molecule is a necessary and sufficient condition for connectedness, and the constraints required are linear and introduce no new integer variables. Other constraints include bounds on the variables and property values. The problem written in this form is an MINLP, which then must be solved to obtain the desired structures.
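The network-flow argument can be made concrete: a feasible flow from one existing group to all others exists exactly when the bonded groups form a single connected unit. In the MILP this is enforced with linear flow-balance constraints; the sketch below merely checks the equivalent condition on a candidate solution (the function and variable names are ours, not the paper's).

```python
from collections import deque

def is_connected(exists, adj):
    """True when all existing groups form one bonded unit.
    exists: list of 0/1 values (the w_i variables);
    adj: symmetric 0/1 adjacency matrix (the a(i, j) variables)."""
    nodes = [i for i, w in enumerate(exists) if w]
    if not nodes:
        return True
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:                     # breadth-first search over bonds
        i = queue.popleft()
        for j in nodes:
            if j not in seen and adj[i][j]:
                seen.add(j)
                queue.append(j)
    return len(seen) == len(nodes)

# Three groups bonded in a chain: connected.
chain = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
ok = is_connected([1, 1, 1], chain)
# Same bonds but the middle group absent: the two ends are disconnected.
bad = is_connected([1, 0, 1], chain)
```

The advantage of the flow formulation over such a procedural check is that it expresses the same condition algebraically, so it can sit directly inside the mixed-integer program.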

4. Solution Methods

In this work, two types of solution methods have been tested: the deterministic method known as outer approximation (Duran and Grossmann, 1986), and the stochastic algorithm Tabu search (Glover, 1986; Glover and Laguna, 1997). While outer approximation guarantees that the global optimum will be found within a finite number of steps for a convex MINLP, the formulation


as listed here is nonconvex. Linear reformulations of the equations for y and of the objective function have been implemented which leave the property constraints as the only nonlinear equations. The Tabu search algorithm is a meta-heuristic approach which guides a local search procedure and is capable of escaping local minima. Many issues must be addressed when applying Tabu search to molecular design problems. Tabu search avoids local minima by storing a memory list of previous solutions, and the length of this list must be set. Furthermore, strategies for determining when a more thorough search of a local region is needed must also be determined. A discussion of these issues is given in Lin (2002). Other applications of Tabu search within chemical engineering are described in Lin and Miller (2000, 2001).
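A minimal version of the memory mechanism can be sketched as follows. This is a generic Tabu search skeleton over a toy group-count representation with an invented linear "property" target; it is not the implementation of Lin (2002), and all numbers are hypothetical.

```python
def tabu_search(start, neighbours, cost, iters=100, tabu_len=10):
    """Minimal Tabu search: always move to the best non-tabu neighbour,
    even uphill, and keep a short-term memory of visited solutions."""
    current = best = start
    tabu = [start]
    for _ in range(iters):
        cands = [n for n in neighbours(current) if n not in tabu]
        if not cands:
            break
        current = min(cands, key=cost)
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)              # forget the oldest visited solution
        if cost(current) < cost(best):
            best = current
    return best

# Toy design space: counts of three basic groups; hit a linear property target.
TARGET, COEFF = 17.0, (2.0, 3.0, 5.0)

def cost(x):
    return abs(sum(c * n for c, n in zip(COEFF, x)) - TARGET)

def neighbours(x):
    out = []
    for i in range(3):
        for d in (-1, 1):
            y = list(x)
            y[i] = max(0, y[i] + d)
            out.append(tuple(y))
    return out

best = tabu_search((0, 0, 0), neighbours, cost)
```

The tabu list is what lets the search climb out of the local minima that a greedy descent would get stuck in, which is the property exploited for the molecular design example below.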

5. Example

The example presented here produces a potential molecular structure for a homogeneous molybdenum catalyst for epoxidation reactions. The possible basic groups in the molecule are those listed in Table 1, and the maximum number of basic groups allowed in the molecule is 15. A target value was set for the density, and all structural feasibility constraints were employed. The problem was solved using outer approximation, accessed through the GAMS modeling language on a Sun Ultra 10 workstation. A resource limit of 20 minutes was set, and no guaranteed optimal solution was found. The best integer solution found is shown in Figure 1. The value of the density for this molecule is 4382 kg/m³, which is far from the target value of 4173 kg/m³.

Figure 1: Candidate catalyst molecule found using DICOPT.

When Tabu search is applied to this example, near-optimal structures are found in a much shorter time. 100 runs of the code were made, each of 90 seconds duration. The best structure, found in most of the runs (80%), is presented in Figure 2. This structure has a density of 4172 kg/m³, which deviates only slightly from the target. Note that many near-optimal structures were also found; these can be combined into a list from which a catalyst designer could choose a candidate for synthesis and experimental testing.


Figure 2: Candidate catalyst molecule found by Tabu search.


6. Conclusions

This work has focused on the use of optimization techniques within a molecular design application to derive novel catalyst structures. The use of connectivity indices to relate internal molecular structure to physical properties of interest provides an efficient way both to estimate property values and to recover a complete description of the new molecule after an optimization problem is solved. The optimization problem has been formulated as an MINLP, and the fact that it can be solved in a manner which is not computationally expensive (using Tabu search) gives rise to the possibility that the synthesis route for such a molecule could be derived and evaluated along with the physical properties of that molecule. Further work will include such synthesis analysis, as well as the inclusion of a much larger set of physical properties and basic groups from which to build molecules, and will work toward the design of mixtures and the prediction of mixture properties via connectivity indices.

7. References

Bicerano, J., 1996, Prediction of Polymer Properties, Marcel Dekker, New York.
Camarda, K.V. and Maranas, C.D., 1999, Ind. Eng. Chem. Res., 38, 1884.
Duran, M.A. and Grossmann, I.E., 1986, Math. Prog., 36, 307.
Gani, R., Tzouvars, N., Rasmussen, P. and Fredenslund, A., 1989, Fluid Phase Equil., 47, 133.
Glover, F., 1986, Comp. and Op. Res., 5, 533.
Glover, F. and Laguna, M., 1997, Tabu Search, Kluwer Academic Publishers, Boston.
Hairston, D.W., 1998, Chem. Eng., Sept., 30.
Kier, L.B. and Hall, L.H., 1976, Molecular Connectivity in Chemistry and Drug Research, Academic Press, New York.
Lin, B. and Miller, D.C., 2000, AIChE Annual Meeting, Los Angeles, CA.
Lin, B. and Miller, D.C., 2001, AIChE Annual Meeting, Reno, NV.
Lin, B., 2002, Ph.D. Thesis, Michigan Technological University.
Maranas, C.D., 1996, Ind. Eng. Chem. Res., 35, 3403.
Raman, V.S. and Maranas, C.D., 1998, Comput. Chem. Eng., 22, 747.
Siddhaye, S., Camarda, K.V., Topp, E. and Southard, M.Z., 2000, Comp. Chem. Eng., 24, 701.
Venkatasubramanian, V., Chan, K. and Caruthers, J.M., 1994, Comp. Chem. Eng., 18, 833.



Complete Separation System Synthesis of Fractional Crystallization Processes

L.A. Cisternas(1), J.Y. Cueto(1) and R.E. Swaney(2)

(1) Dept. of Chemical Engineering, Universidad de Antofagasta, Antofagasta, Chile
(2) Dept. of Chemical Engineering, University of Wisconsin-Madison, Madison, WI, USA

Abstract

A methodology is presented for the synthesis of fractional crystallization processes. The methodology is based on the construction of four networks. The first network is based on the identification of feasible thermodynamic states; in this network the nodes correspond to multiple saturation points, intermediate solutes, process feeds and end products. The second network is used to represent the variety of tasks that can be performed at each multiple saturation point. These tasks include cooling crystallization, evaporative crystallization, reactive crystallization, dissolution and leaching. Heat integration is included using a heat exchanger network which can be regarded as a transhipment problem. The last network is used to represent filtration and cake washing alternatives. The cake wash and task networks are modelled using disjunctive programming and then converted into a mixed integer program. The method is illustrated through the design of a salt separation example.

1. Introduction

There are two major approaches to the synthesis of crystallization-based separations. In one approach, the phase equilibrium diagram is used for the identification of separation schemes (for example, Cisternas and Rudd, 1993; Berry et al., 1997). While these procedures are easy to understand, they are relatively simple to implement only for simple cases. For more complex systems, such as multicomponent systems and multiple temperatures of operation, the procedure is difficult to implement because the graphical representation is complex and because there are many alternatives to study. The second strategy is based on simultaneous optimization using mathematical programming based on a network flow model between feasible thermodynamic states (Cisternas and Swaney, 1998; Cisternas, 1999; Cisternas et al., 2001; Cisternas et al., 2003). In crystallization and leaching operations, filtration, washing and drying are often required downstream to obtain the product specifications. For example, the filter cake must usually be washed to remove residual mother liquor, either because the solute is valuable or because the cake is required in a semiclean or pure form. These issues have been discussed by Chang and Ng (1998), who utilized heuristics for design purposes. The objective of this study is to address these issues using mathematical programming. This work constitutes part of our overall effort on the synthesis of fractional crystallization processes. Drying is not included in this method because, as it normally does not involve a recycle stream, the dryer can be considered as a stand-alone operation.


2. Model Development

2.1. Networks for fractional crystallization

The model proposed in this paper is composed of four networks: (1) the thermodynamic state network, (2) the task network, (3) the heat integration network, and (4) the cake wash network. The first three networks have been described in our previous work; therefore, emphasis here is given to the cake wash network. The first network is based on the detection of feasible thermodynamic states. Using equilibrium data for a candidate set of potential operating point temperatures, a thermodynamic state network flow model is created to represent the set of potential separation flowsheet structures that can result. This representation was presented by Cisternas and Swaney (1998) for two-solute systems, by Cisternas (1999) for multicomponent systems, and by Cisternas et al. (2003) for metathetical salts. Figure 1 shows the thermodynamic state network representation for a two-solute system at two temperatures. The structure contains feeds, two multiple saturation points, and products. The second network, which is also shown in Figure 1, is the task network (Cisternas et al., 2001). Each multiple saturation state can be used for different tasks depending on the conditions/characteristics of the input and output streams. For example, if solvent is added to an equilibrium state, the task can be: (1) a leaching step, if the feed is solid; (2) a cooling crystallization step, if the feed is a solution with a higher temperature; or (3) a reactive crystallization step, if the feed is a crystalline material that decomposes at this temperature or in the solution fed to this state (for example, the decomposition of carnallite to form potassium chloride). The third network, a heat exchange network, can be regarded as a transhipment problem as in Papoulias and Grossmann (1983). This transhipment problem can be formulated as a linear programming problem.
In this representation hot streams and cold streams correspond to the arcs in the thermodynamic state network. The fourth network is the cake wash network. Cake washing can be accomplished by two methods: (a) the cake may be washed prior to removal from the filter by flushing it with wash liquor, which can be done with both batch and continuous filters; (b) the cake may be removed from the filter and then washed in a mixer, after which the wash suspension obtained may be separated with the filter. Figure 2 shows both alternatives for a stage e. Figure 2 shows only one stage, removing the residual mother liquor of concentration y_{l,e-1,i}, but washing may be performed in one or several stages on either batch or continuous filters. In this work countercurrent washing is not considered. As a result, the first stage provides the most concentrated solution and the last stage provides the least. If operation states are near-equilibrium states, then the mother liquor concentration in the cake is substantially that of a saturated solution at the final temperature in the process.

2.2. Mathematical formulation

Having derived the networks for the separation problem, a mathematical programming formulation is presented for each network to select the optimum flowsheet alternative of the separation sequence.
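The effect of staging can be illustrated with a simple model in which each stage removes a fixed fraction of the difference between the cake liquor and the wash liquid. This is a sketch under our own simplifying assumptions; the efficiency and concentration values are hypothetical, and the notation only loosely follows this section.

```python
def wash_stages(y0, wash_conc, eff, n_stages):
    """Residual mother-liquor concentration profile over n wash stages.
    Each stage removes a fraction eff of the difference between the cake
    liquor and the wash liquid (eff = 1 corresponds to perfect mixing)."""
    y, profile = y0, [y0]
    for _ in range(n_stages):
        y = y - eff * (y - wash_conc)     # per-stage wash balance
        profile.append(y)
    return profile

# Pure wash liquid, 70% stage efficiency: the impurity level falls
# geometrically with the number of stages.
profile = wash_stages(y0=0.30, wash_conc=0.0, eff=0.7, n_stages=3)
# profile == [0.30, 0.09, 0.027, 0.0081] (to rounding)
```

This is why the first stage produces the most concentrated wash solution and later stages progressively weaker ones, as noted in the text for the non-countercurrent arrangement.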


Figure 1: Thermodynamic state network and task network.

Figure 2: Cake wash network for stage e.

The mathematical formulation for the thermodynamic state network is the same as that developed by Cisternas (1999) and Cisternas et al. (2003); a brief description is given here. First, the set of thermodynamic state nodes is defined as S = {s: all nodes in the system}. This includes feeds, products, multiple saturation (operation) points, and intermediate solute products. The components, solutes and solvents, are denoted by the set I = {i}. The arcs, which denote streams between nodes, are denoted by L = {l}. Each stream l is associated with the positive mass flow rate variable w_l and the parameter x_{i,l} giving the fixed composition of each component in the stream. The constraints that apply are: (a) mass balances for each component around multiple saturation and intermediate product nodes,

Σ_{l∈S_in(s)} w_l x_{i,l} - Σ_{l∈S_out(s)} w_l x_{i,l} - Σ_{l∈Lq∩S_out(s)} w_l h_l x*_{i,l} = 0,   s ∈ S, i ∈ I    (1)

where Lq is the subset of L of solid product streams, h_l is the mass ratio of residual liquid retained in the cake pores to the solid product l, and x*_{i,l} is the concentration of the mother liquor in equilibrium with solid product l. Also, S_in(s) and S_out(s) are the sets of input and output streams to node s. (b) Specifications for the feed flows, Σ_{l∈F(s)} w_l x_{i,l} = C_{s,i}, where s ∈ S_F, i ∈ I_F(s), and C_{s,i} is the desired flow rate of species i in feed s.

i in feed s. The heat integration network follows the approach presented by Papoulias and Grossmann(1983). First, it is considered that there is a set K={k} of temperature intervals that are based on the inlet temperatures of the process streams, highest and lowest stream temperatures, and of the intermediate utilities whose inlet temperatures fall within the range of temperatures of the process streams. The only constraints that apply are heat balances around each temperature interval k: R. -^.-. -lQ:^lQn=l OTGVjt

/iGf/jfe

vv, (C^AT)f, - X w, (C^AT)l l^Hk

keK

(2)

/eCjt

where Q,/ , Qn^ and Rk are positive variables that represent heat load of hot utility m, heat load of cold utility n, and heat residual exiting interval k, respectively. (CpAjfik and (CpATfik are known parameters that represent the heat content per unit mass of hot stream leH^ and cold stream leC^ in interval k. //^ Cjt, Vk and Uk are the hot stream, cold stream, hot utility and cold utility set respectively in interval k. A task network is constructed for each multiple saturation point node s. The mathematical formulation, which is close to that in Cisternas et al. (2001), includes mass and energy balance, logic relations to select the task based on input/output stream properties, and cost evaluations. The formulation use disjunctive progranmiing. A cake wash network is constructed for each solid stream product leLq. Let E(l)={e} define the set of washing/reslurry stages in the solid stream product / e Lq. The variables are defined as follows: y^e^ is the concentration of species i in the residual mother liquor of the solid stream / at the output of wash/reslurry stage e. z^ej and riej are the input and output concentration in the washing liquid for the solid stream /, at stage e. ypwie,h ypr^e.h y^^he.h yf^n.e.h ^^uh zr^e.h f^i,e,h and rr/^,, are the concentration of the internal streams in stage e (see figure 2). The wash efficiency parameter, £w/ ^,, for specie i in solid stream / at the stage e can be defined as £w,^,. = {ymwi^. -ypy^iej)/i^i,e,i ~>'P^/,e.,) ^ r le Lq.ee E{1), ie I. The first two constraints in eq. (3) bellow are the efficiency constraint for the wash and reslurry/filter steps. Note that the efficiency for perfect mixing in the wash mixer is equal to 1. The last two constrains in Eq. (3) are the mass balances for specie / at the stage e of washing solid stream /,. Eri^ej rr^ei - Er^^, ypr^^, - ymr,^. + ypr^^. =0

leLq.eE £(/), / e /
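For a single hot and a single cold utility, the transshipment LP of Eq. (2) reduces to the classic heat cascade: compute each interval's net surplus, cascade the residuals downwards, and lift the cascade with just enough hot utility to make every residual non-negative. The sketch below uses hypothetical interval duties, not data from the example.

```python
def min_utilities(net_heat):
    """Heat cascade for the interval balances of Eq. (2): net_heat[k] is the
    surplus (hot minus cold stream duty) of temperature interval k, top down.
    Returns (minimum hot utility, minimum cold utility, residuals R_k)."""
    residuals, r = [], 0.0
    for q in net_heat:
        r += q
        residuals.append(r)          # cumulative surplus cascading down
    qh = max(0.0, -min(residuals))   # lift so every residual is >= 0
    residuals = [x + qh for x in residuals]
    qc = residuals[-1]               # whatever is left exits to cold utility
    return qh, qc, residuals

qh, qc, res = min_utilities([-60.0, 40.0, -30.0, 70.0])
# qh = 60.0, qc = 80.0, and the interval with R_k = 0 marks the pinch
```

The same result is what the LP would return with utility costs in the objective, which is why the transshipment formulation stays linear and cheap to embed in the overall MILP.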

nw_{l,e} and nr_{l,e} are parameters that represent the mass ratio of wash liquid to residual liquor in the cake used in the wash and reslurry/filter steps, respectively. These ratios are referred to as the wash ratio or number of displacements. This network requires the use of discrete variables, yw_{l,e} and yr_{l,e}, to represent the choices of wash, reslurry/filter, or neither, for each solid product stream l ∈ Lq at stage e. The corresponding logical relations form a three-branch disjunction, Eq. (4): if yw_{l,e} is selected, the stage balances are written with the wash-step internal streams, the wash liquid load is Qw_{l,e} = nw_{l,e} h_l w_l, and the stage costs are the fixed and variable wash costs, Cf_{l,e} = Cfw and Cv_{l,e} = Cvw Qw_{l,e}; if yr_{l,e} is selected, the reslurry/filter load is Qr_{l,e} = nr_{l,e} h_l w_l, with Cf_{l,e} = Cfr and Cv_{l,e} = Cvr Qr_{l,e}; if neither is selected, the liquor concentration passes through unchanged, y_{l,e,i} = y_{l,e-1,i}, and the loads and costs of the stage are zero.    (4)
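Such disjunctions are typically rewritten as mixed-integer linear constraints, either via convex-hull reformulation (Turkay and Grossmann, 1996) or with big-M relaxations. The snippet below is a generic numeric illustration of the big-M device for an equality that must hold only when its binary is active; the bound M and the values are arbitrary, not taken from the paper.

```python
def bigm_equality_holds(y, x, x_branch, M=1e3):
    """Big-M form of an equality active only when the binary y = 1:
        x - x_branch <= M * (1 - y)
        x_branch - x <= M * (1 - y)
    Returns True when both inequalities are satisfied."""
    slack = M * (1 - y)
    return (x - x_branch <= slack) and (x_branch - x <= slack)

# Branch selected (y = 1): the equality is enforced.
on_ok = bigm_equality_holds(1, 5.0, 5.0)
on_violated = bigm_equality_holds(1, 5.0, 7.0)
# Branch not selected (y = 0): the constraint is relaxed by the big-M slack.
off_relaxed = bigm_equality_holds(0, 123.0, 5.0)
```

Writing each branch of Eq. (4) this way keeps every constraint linear, which is what allows the whole synthesis problem to be posed as an MILP.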

This logical relation is rewritten as mixed-integer linear equations. The concentration at the last stage f must satisfy the impurity level IL_{l,i}; that is, y_{l,f,i} h_l ≤ IL_{l,i} for l ∈ Lq, i ∈ I. The objective function is to minimize the venture cost. The following equation can be used as an objective function:

min Σ_{s∈SM} Σ_{t∈T(s)} (FC_{t,s} + VC_{t,s} + c_c Q^c_{t,s} + c_e Q^e_{t,s}) + Σ_{m∈V} c_m Q^H_m + Σ_{n∈U} c_n Q^C_n + Σ_{l∈Lq} Σ_{e∈E(l)} (Cf_{l,e} + Cv_{l,e})    (5)

Eq. (5) represents the total cost, given by the investment and utility costs. In this way, the objective function in Eq. (5), subject to the constraints in Eqs. (1) to (4), defines a mixed integer linear programming problem, whose numerical solution can be obtained with standard algorithms. In Eq. (5), Q^c_{t,s} and Q^e_{t,s} are the heat loads of crystallization or dissolution and the heat loads of evaporation, and VC_{t,s} and FC_{t,s} are the variable and fixed costs for the equipment associated with task t of multiple saturation point s.

3. Illustrative Example

This example considers the production of potassium chloride from 100,000 ton/year of sylvinite (47.7% KCl, 52.3% NaCl). Data are given in Cisternas et al. (2001). The solution found is shown in Figure 3. The problem formulation, with 293 equations and 239 variables (27 binary variables), was solved using OSL2 (GAMS). The optimal solution

divides the feed into two parts. A sensitivity analysis shows that product impurity level and residual liquid retained level in the cake can affect the solution and cost by 20%.

Figure 4: Solution for the example (leaching, wash and reslurry units producing the KCl cake).

4. Conclusions

The objective of this paper has been to present a method for determining the desired process flowsheet for fractional crystallization processes, including cake washing. To achieve this goal, a systematic model was introduced consisting of four networks: the thermodynamic state network, the heat integration network, the task network, and the cake wash network. Once the representation is specified, the problem is modelled as an MILP problem. From the example, we can conclude that the model can be useful in the design and study of fractional crystallization processes. Results from the example indicate that the product impurity level and the level of residual liquid retained in the cake can affect the optimal solution.

5. References

Berry, D.A., Dye, S.R., Ng, K.M., 1997, AIChE J., 43, 91.
Chang, W.C., Ng, K.M., 1998, AIChE J., 44, 2240.
Cisternas, L.A., Rudd, D.F., 1993, Ind. Eng. Chem. Res., 32, 1993.
Cisternas, L.A., Swaney, R.E., 1998, Ind. Eng. Chem. Res., 37, 2761.
Cisternas, L.A., 1999, AIChE J., 45, 1477.
Cisternas, L.A., Guerrero, C.P., Swaney, R.E., 2001, Comp. & Chem. Engng., 25, 595.
Cisternas, L.A., Torres, M.A., Godoy, M.J., Swaney, R.E., 2003, AIChE J., in press.
Papoulias, S.A., Grossmann, I.E., 1983, Comp. & Chem. Engng., 7, 707.
Turkay, M., Grossmann, I.E., 1996, Ind. Eng. Chem. Res., 35, 2611.

6. Acknowledgment The authors wish to thank CONICYT for financial support (Fondecyt project 1020892).

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Mathematical Modelling and Design of an Advanced Once-Through Heat Recovery Steam Generator Marie-Noelle Dumont, Georges Heyen LASSC, University of Liege, Sart Tilman B6A, B-4000 Liege (Belgium) Tel:+32 4 366 35 23 Fax: +32 4 366 35 25 E-mail: [email protected]

Abstract
The once-through heat recovery steam generator design is ideally matched to very high temperatures and pressures, well into the supercritical range. Moreover, this type of boiler is structurally simpler than a conventional one, since no drum is required. A specific mathematical model has been developed, and a thermodynamic model has been implemented to suit very high pressures (up to 240 bar) and sub- and supercritical steam properties. We illustrate the use of the model with a 180 bar once-through boiler (OTB).

1. Introduction
Nowadays combined cycle (CC) power plants have become a good choice for producing energy, because of their high efficiency and the use of low carbon content fuels (e.g. natural gas), which limits greenhouse gas production to a minimum. CC plants couple a Brayton cycle with a Rankine cycle. The hot exhaust gas available at the outlet of the gas turbine (Brayton cycle) is used to produce high-pressure steam for the Rankine cycle. The element where the steam heating takes place is the heat recovery steam generator (HRSG). High efficiency in CC plants (up to 58%) has been reached mainly for two reasons:
• improvements in gas turbine technology (i.e. higher inlet temperature);
• improvements in HRSG design.
We are interested in the second point. The introduction of several pressure levels with reheat in the steam cycle of the HRSG allows more energy to be recovered from the exhaust gas. Exergy losses decrease, due to a better matching of the gas curve with the water/steam curve in the heat exchange diagram (Dechamps, 1998). Going to supercritical pressure with the OTB technology is another way to better match those curves and thus improve the CC efficiency. New improvements are announced for the near future, reaching efficiencies as high as 60%. In the present work we propose a mathematical model for the simulation and design of the once-through boiler. It is not possible to use the empirical equations employed for the simulation of each part of a traditional boiler; general equations have to be used for each tube. Moreover, the evolution of the water/steam flow pattern is more significant, due to the complete water vaporization inside the tubes (in a conventional boiler, the circulation flow is adjusted to reach a vapor fraction between 20% and 40% in the tubes, and the vapor is separated in the drum). Changes of flow pattern induce a modification in the evaluation of the internal heat transfer coefficient as well as in the pressure drop formulation. The right equation has to be selected dynamically according to the flow conditions prevailing in the tube.


The uniform distribution of water among parallel tubes of the same geometry subjected to equal heating is not ensured from the outset but depends on the pressure drop in the tubes. The disappearance of the drum calls for a different understanding of the boiler's behaviour: the effects of the various two-phase flow patterns have to be mathematically controlled, and the stability criteria have changed.

2. Mathematical Model

2.1. Heat transfer
2.1.1. Water
Mathematical models for traditional boilers are usually based on empirical equations corresponding to each part of the boiler: the economizer, the boiler and the superheater. Since those three parts of the boiler are clearly separated, it is not difficult to choose the right equation. In a once-through boiler this separation is not so clear: we first have to estimate the flow pattern in the tube, and then choose the equation to be used. The "liquid single phase" and "vapor single phase" regions are easily located from temperature and pressure data. According to Gnielinski (1993), equation 1 applies for turbulent and hydrodynamically developed flow.

Nu = [(ξ/8)·(Re − 1000)·Pr] / [1 + 12.7·√(ξ/8)·(Pr^(2/3) − 1)]    (1)

with the friction factor ξ = (1.82·log₁₀(Re) − 1.64)⁻².

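As a sketch of how the single-phase coefficient might be evaluated, the correlation of equation 1 can be coded directly; the Reynolds and Prandtl values, the thermal conductivity and the tube diameter below are illustrative assumptions, not data from the paper:

```python
import math

def gnielinski_nu(re: float, pr: float) -> float:
    """Single-phase Nusselt number for turbulent, hydrodynamically
    developed tube flow (equation 1)."""
    if not 3.0e3 < re < 5.0e6:
        raise ValueError("correlation intended for roughly 3e3 < Re < 5e6")
    xi = (1.82 * math.log10(re) - 1.64) ** -2  # friction factor
    return ((xi / 8) * (re - 1000) * pr
            / (1 + 12.7 * math.sqrt(xi / 8) * (pr ** (2 / 3) - 1)))

# Internal film coefficient alpha = Nu * lambda / d_i (assumed values)
nu = gnielinski_nu(re=1.0e5, pr=1.1)
alpha = nu * 0.60 / 0.025  # conductivity 0.60 W/m/K, d_i = 25 mm
```

Because the correlation is selected dynamically according to the flow conditions, a guard on the validity range (as above) is a natural place to switch to a two-phase correlation instead of raising.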
During vaporization different flow patterns can be observed, for which the rate of heat transfer also differs. In the stratified-wavy flow pattern, incomplete wetting affects the heat transfer coefficient, and a reduction can appear for this type of flow. Computing the conditions where a change in flow pattern occurs is therefore useful. A method to establish a flow pattern map in a horizontal tube for given pressure and flow conditions is clearly exposed by Steiner (1993); this method has been used in this study. The different flow patterns in the vaporisation zone of the OTB are given in figure 1. The heat transfer coefficient is estimated from numerous data as a combination of a convective and a nucleate boiling heat transfer coefficient.

[Figure 1. Flow pattern diagram for horizontal flow (VDI, 1993), showing the annular, mist and vapor regions for flow in tubes at 5.06 and 5.166 t/h.]

The acceleration term is defined with equation 13, where α is the volume fraction of vapor (void fraction). It is recommended to discretize the tube into several short sections to obtain more accurate results (figure 4).

ΔP_acc = G² · { [x²/(α·ρ_vap) + (1 − x)²/((1 − α)·ρ_liq)]_out − [x²/(α·ρ_vap) + (1 − x)²/((1 − α)·ρ_liq)]_in }    (13)

2.2.2. Fumes
The pressure drop in a tube bundle is given by equation 14. In this case the number of rows (N_R) plays an important role in the pressure drop evaluation. The coefficient f is more difficult to compute from generalized correlations; the easiest way is, once more, to ask the finned tube manufacturer for an accurate correlation.

ΔP = f · N_R · ρ · v² / 2    (14)
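A minimal sketch of the two pressure-drop terms as they might be evaluated per tube section and per bundle; the helper names and all numerical values (mass flux, qualities, void fractions, densities, friction coefficient) are assumptions for illustration:

```python
def accel_dp(G, x_in, x_out, alpha_in, alpha_out, rho_vap, rho_liq):
    """Two-phase acceleration pressure drop over one tube section (eq. 13)."""
    def momentum(x, a):
        return x ** 2 / (a * rho_vap) + (1 - x) ** 2 / ((1 - a) * rho_liq)
    return G ** 2 * (momentum(x_out, alpha_out) - momentum(x_in, alpha_in))

def bundle_dp(f, rho, v, n_rows):
    """Gas-side pressure drop across a tube bundle (eq. 14)."""
    return f * n_rows * rho * v ** 2 / 2

# Assumed data, roughly representative of high-pressure evaporation
dp_acc = accel_dp(G=700.0, x_in=0.2, x_out=0.4, alpha_in=0.6, alpha_out=0.8,
                  rho_vap=90.0, rho_liq=600.0)          # Pa, water side
dp_gas = bundle_dp(f=0.3, rho=0.6, v=8.0, n_rows=42)    # Pa, fume side
```

Evaluating `accel_dp` section by section over a discretized tube, as the text recommends, simply sums these terms along the tube length.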

3. Stability
A stability calculation is necessary to control the water distribution over parallel tubes of the same form subjected to equal heating in forced circulation HRSGs, and particularly in OTBs. The stability can be described with a stability coefficient S; HRSG manufacturers try to keep the stability coefficient in the range 0.7-2. In the OTB design, inlet restrictions have been installed to increase the single-phase friction in order to stabilize the boiler. Based on the π-criterion (Li Rizhu and Ju Huaiming, 2002), the design has been realized with π about 2. This number should be reduced in the near future once the various flow instabilities have been identified.

S = (relative change in pressure drop) / (relative change in flow rate) = [d(ΔP)/ΔP] · [M/d(M)]

with S > 0 stable and S < 0 unstable.

Figure 5: Stability example (pressure drop characteristic versus mass flow M).


4. An Example
Results have been obtained for an OTB of pilot plant size (42 rows): water 10.25 t/h, Tin = 44°C, Tout = 500°C; fumes 72.5 t/h, Tin = 592°C, Tout = 197°C. In the VALI (Belsim) software, in which the simulation model has been implemented, the simulation of the OTB needs 42 modules, one for each row of tubes. Since VALI implements numerical procedures to solve large sets of non-linear equations, all model equations are solved simultaneously. The graphical user interface allows easy modification of the tube connections and the modelling of multiple pass bundles.
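As a rough consistency check on these pilot-plant figures, the gas-side and water-side duties can be compared with an overall energy balance; the fume heat capacity and the steam enthalpies below are assumed round steam-table values (taken here at roughly 130 bar), not data from the paper:

```python
# Gas side: Q = m_dot * cp * dT, with an assumed mean cp of 1.1 kJ/kg/K
m_fumes = 72.5e3 / 3600.0                    # kg/s
q_fumes = m_fumes * 1.1 * (592.0 - 197.0)    # kW

# Water side: Q = m_dot * (h_out - h_in); enthalpies assumed from steam
# tables: feedwater at 44 C ~ 190 kJ/kg, steam at 500 C ~ 3350 kJ/kg
m_water = 10.25e3 / 3600.0                   # kg/s
q_water = m_water * (3350.0 - 190.0)         # kW

imbalance = abs(q_water - q_fumes) / q_fumes
```

With these assumptions the two duties agree within a few percent, which is consistent with the reported flow rates and temperatures.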

5. Conclusions and Future Work

The mathematical model of the once-through boiler has been used to better understand the behaviour of the boiler. Future mathematical developments are still needed to refine the stability criteria and improve the OTB design. Automatic generation of alternative bundle layouts in the graphical user interface is also foreseen.

6. Nomenclature
A - total area of outer surface (m²)
Ab - bare tube outside surface area (m²)
Afo - fin outside surface area (m²)
Ai - inside surface area (m²)
Afree - free area of tube outer surface (m²)
Am - mean area of homogeneous tube wall (m²)
cp - specific heat capacity at constant pressure (J/kg/K)
di - tube internal diameter (m)
ΔP - pressure drop (bar)
f - pressure drop coefficient
G - mass flux (kg/m²/s)
H - enthalpy flow (kW)
NR - number of rows in the bundle
Nu - Nusselt number
P - pressure (bar)
Pr - Prandtl number
Q - exchanged heat (kW)
Re - Reynolds number
T - temperature (K)
v - fluid velocity (m/s)
x - vapor mass fraction
xi - component flow rate (kg/s)
α - heat transfer coefficient (kW/m²/K)
α(z) - local heat transfer coefficient (kW/m²/K)
λ - thermal conductivity (W/m/K)
ρ - density (kg/m³)
η - dynamic viscosity (Pa·s)
ηf - fin efficiency

7. References
Dechamps, P.J., 1998, Advanced combined cycle alternatives with the latest gas turbines, ASME J. Engrg. Gas Turbines Power, 120, 350-35.
Gnielinski, V., 1993, VDI Heat Atlas, GA, GB, VDI-Verlag, Düsseldorf, Germany.
Li Rizhu, Ju Huaiming, 2002, Structural design and two-phase flow stability test for the steam generator, Nuclear Engineering and Design, 218, 179-187.
Steiner, D., 1993, VDI Heat Atlas, HBB, VDI-Verlag, Düsseldorf, Germany.

8. Acknowledgements This work was financially supported by CMI Utility boilers (Belgium).



Synthesis and Optimisation of the Recovery Route for Residual Products
Joaquim Duque¹, Ana Paula F.D. Barbosa-Póvoa² and Augusto Q. Novais¹
¹DMS, INETI, Est. do Paço do Lumiar, 1649-038 Lisboa, Portugal
²CEG-IST, DEG, I.S.T., Av. Rovisco Pais, 1049-101 Lisboa, Portugal

Abstract The present work describes an optimisation model for the management of the recovery of residual products originated at industrial plants. The mSTN (maximal State Task Network) representation is used as the modelling framework for the general proposed network superstructure where all possible process transformations, storage, transports and auxiliary operations are accounted for. This is combined with the evaluation of a set of environmental impacts (EI), quantified by metrics (for air, water pollution, etc.) through the minimum environment impact analysis (MEIM) methodology and associated with waste generation at utility production and transportation levels. The final model is described as a MILP, which, once solved, is able to suggest the optimal processing and transport routes, while optimising a given objective function and meeting the design and environmental constraints. A motivating example based on the recovery of the sludge obtained from Aluminium surface finishing plants is explored. This aims at maximizing the quantity of sludge processed and reflects the trade-off between the cost for its disposal, processing, transport and storage, while accounting for the limits imposed in the environment pollutants associated.

1. Introduction
Increased awareness of the effects of industrial activities on the environment is leading to the need to provide alternative ways of reducing negative environmental impacts (Pistikopoulos et al., 1994). In the process industry this problem is highly complex, and the potential environmental risks involved force process manufacturers to take special care not only over production impacts but also over waste disposal, steeping costs and soil occupation. Most of the works looking into these problems addressed the case of designing a plant such that a minimisation of the waste produced was obtained (Linninger et al., 1994; Stefanis et al., 1997). A further possible environmental solution, if viable, is the reuse of those waste materials as resources, after total or partial removal of the pollutant content. In this paper, we explore this problem and propose a model for the synthesis and optimisation of a general recovery route for residual products. The modelling of the general network route is based on the mSTN representation (Barbosa-Póvoa, 1994), where all the possible processing, storage and transport operations are considered. A metric for the diverse environmental effects involved is used, based on the generalisation of the MEI methodology as presented in Stefanis et al. (1997).

' Author to whom correspondence should be addressed, e-mail: [email protected], tel:+ 351 1 841 77 29


The model is generic in scope and leads both to the optimal network structure and to the associated operation. The former results from the synthesis of the processing steps, while the latter is described by the complete resource time allocation (i.e. processing, transport and storage scheduling). A motivating example based on the sludge recovery from aluminium surface finishing plants is presented, with an objective function that maximizes the profit for the proposed network over a given time horizon. The maximization of the quantity of sludge processed is obtained and reflects the trade-off between the costs for its disposal before and after processing, while accounting for production and transport environmental impacts and guaranteeing the limits imposed on the environmental pollutants.

2. Problem Definition and Characteristics
The problem of reducing the environmental impact of pollutant products, as addressed in this work, can be defined as follows.
Given:
A recovery network superstructure (STN/flowsheet) characterized by:
• All the possible transformations, their process durations, suitable unit locations, capacities, utility consumptions, materials involved and wastes generated.
• All waste producers, their locations and the quantity of wastes produced, along with their pollutant content.
• All the reuses and landfill disposals, their locations, utility consumptions, capacities and, for the reuses, the wastes generated.
• All the possible transport routes, the associated suitable transports and their durations.
Cost data for:
• Equipment (processing, transport and storage units).
• Reuses and landfill disposal.
• Operations and utilities.
Operational data for:
• Type of operation (cyclic single campaign mode or short-term operation).
• Time horizon/cycle time.
Environmental data (see Table 1; Pistikopoulos et al., 1994):
• Maximum acceptable concentration limits (CTAM, CTWM).
• Long term effect potentials (e.g. GWI, SODI).
Determine:
• The optimal network structure (processing operations, storage locations and transfer routes of materials).
• The optimal operating strategies (scheduling of operations, storage profiles and transfer occurrences).
So as to optimise an economic or environmental performance measure. The former can be defined as a maximum plant profit or a minimum capital expenditure accounting for the environmental impacts involved and their imposed limits; the latter can be the minimisation of the environmental impacts, where all operational and structural network restrictions as well as cost limits are considered. As mentioned before, the mSTN representation is used to model the general network superstructure. This is coupled with a generalization of the MEI methodology so as to account for waste generation at the utility production and transportation levels.
For the transport tasks the environmental impact is calculated from the fuel oil consumption, therefore at the utility level.

Due to the characteristics of the proposed model, where the recovery of pollutant products is addressed, the system frontier for the environmental impacts is defined at the raw materials level, including any type of utilities used. The model has the particularity of considering all possible concurrent transportations and transformations for the same operation (different instances within the superstructure), as well as all raw material producers and re-users. The pollutant content is added up over all the different types of waste. The limits on the total waste production and on the global environmental impacts are introduced in the form of waste and pollution vectors, added to the model as additional restrictions. Those limits derive directly from legal regulations for the pollutants considered. The model also allows limits to be imposed on the final product amounts required - associated with possible auxiliary operations/removals - as well as on the amount of pollutant materials (raw materials) that should be processed, due to environmental impacts.
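The dispose-versus-process trade-off at the heart of this formulation can be illustrated with a toy model. This is not the authors' mSTN MILP: the discrete choices are enumerated by brute force rather than solved by branch-and-bound, and all cost and impact numbers are invented for illustration:

```python
from itertools import product

# Hypothetical lots of residual product: either dispose (cost, high impact)
# or process + transport (cost, recovered-product value, lower impact).
lots = [
    {"dispose_cost": 120, "process_cost": 90, "product_value": 150,
     "impact_dispose": 10.0, "impact_process": 1.5},
    {"dispose_cost": 80, "process_cost": 70, "product_value": 95,
     "impact_dispose": 6.0, "impact_process": 1.0},
]
IMPACT_LIMIT = 8.0  # cap on a CTAM-like indicator (assumed units)

best = None
for choice in product([0, 1], repeat=len(lots)):  # 1 = process, 0 = dispose
    impact = sum(l["impact_process"] if c else l["impact_dispose"]
                 for c, l in zip(choice, lots))
    if impact > IMPACT_LIMIT:
        continue  # environmental restriction violated
    profit = sum(l["product_value"] - l["process_cost"] if c
                 else -l["dispose_cost"] for c, l in zip(choice, lots))
    if best is None or profit > best[0]:
        best = (profit, choice, impact)
```

In this toy instance processing both lots is optimal; with the impact cap tightened, the same loop reproduces the situation where disposal becomes infeasible rather than merely expensive.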

Table 1. Time dependent environmental impact indicators.
CTAM (Critical Air Mass, kg air/h) = pollutant emission mass at interval t (kg pollutant/h) / standard limit value (kg pollutant/kg air)
CTWM (Critical Water Mass, kg water/h) = pollutant emission mass at interval t (kg pollutant/h) / standard limit value (kg pollutant/kg water)
SMD (Solid Mass Disposal, kg/h) = mass of solids disposed at interval t (kg/h)
GWI (Global Warming Impact, kg CO2/h) = mass of pollutant at interval t (kg/h) × GWP (kg CO2/kg pollutant)
POI (Photochemical Oxidation Impact, kg C2H4/h) = mass of pollutant at interval t (kg/h) × POCP (kg ethylene/kg pollutant)
SODI (Stratospheric Ozone Depletion Impact, kg/h) = mass of pollutant at interval t (kg/h) × SODP (kg CFC-11/kg pollutant)
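A sketch of how the Table 1 indicators could be evaluated for the emissions of one time interval; the dictionary layout and the numerical factors are assumptions for illustration, not data from the paper:

```python
def impact_indicators(emissions, factors):
    """Evaluate the Table 1 indicators for one time interval.
    emissions: {waste: mass flow (kg pollutant/h)}
    factors: {waste: {"air_limit": ..., "water_limit": ...,
                      "GWP": ..., "POCP": ..., "SODP": ...}} (assumed layout)"""
    out = {"CTAM": 0.0, "CTWM": 0.0, "GWI": 0.0, "POI": 0.0, "SODI": 0.0}
    for w, mass in emissions.items():
        f = factors[w]
        if f.get("air_limit"):
            out["CTAM"] += mass / f["air_limit"]    # kg air/h
        if f.get("water_limit"):
            out["CTWM"] += mass / f["water_limit"]  # kg water/h
        out["GWI"] += mass * f.get("GWP", 0.0)      # kg CO2/h
        out["POI"] += mass * f.get("POCP", 0.0)     # kg C2H4/h
        out["SODI"] += mass * f.get("SODP", 0.0)    # kg CFC-11/h
    return out

ind = impact_indicators({"w1_T1": 2.0},
                        {"w1_T1": {"air_limit": 0.2, "GWP": 0.03}})
```

Summing these per-interval vectors over the horizon gives the pollution vector that enters the model as an additional restriction.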

3. Recovery Route Example
In order to illustrate the use of the proposed model, a motivating example based on the optimisation of a recovery route for Al-rich sludge is presented. The anodization and lacquering of aluminium are processes that generate significant amounts of waste in the form of Al-rich sludge. As an economical alternative to disposal, this sludge can be treated and employed as coagulant and flocculant for the treatment of industrial and municipal effluents, or used as agricultural and landfill material. As the surface treatment plant location does not coincide, in general, with the water treatment or landfill locations, suitable transports are needed. Based on these characteristics, the recovery route network associated with this problem can be described as follows. Given the raw material differences in pollutant content, two general types corresponding to states S1 and S2 are considered. State S1 sludge needs to be processed by task T1 for a two hour period, originating a high-pollutant material (S3) that is non-storable, in the proportions of 1 to 0.97 input and output mass units respectively, and producing 3% mass units of waste (WT1). State S2 sludge is submitted to a non-aluminium pollutant removal task T2 during two hours, originating a storable intermediate state S4 in the proportions of 2 to 0.98 and originating a 2% waste (WT2), in mass units. This S4 intermediate state is suitable for use as coagulant and flocculant for the treatment of industrial and municipal effluents.


The intermediate materials S3 and S4, at 0.6 and 0.4 (mass unit) proportions respectively, are then submitted to an aluminium removal task T3, lasting four hours and originating the final product S5, with the proportions of 1 to 0.99 and originating a 1% waste (WT3), in mass units. This state is stable and has a low pollutant level, allowing its agricultural disposal or use as landfill material. Finally, the aluminium-rich sludge S4 is used for the treatment of industrial and municipal effluents (T4) at a different geographical location, thus requiring a transportation task (Tr1) of 1 hour duration. Task T4 leads to the final product S6, lasts for two hours, has the proportions of 1 input to 0.98 output and originates a 2% waste (WT4), in mass units. An 8000 tonne consumption is guaranteed for each final product S5 and S6 to be synthesised from S1 and S2, over a production time horizon of 1000 hours with a periodic operation of 10 hours. The STN and the superstructure for the recovery route example are depicted in Fig. 1 and Fig. 2, respectively. The equipment characteristics are presented in Table 2 (raw material and product storage are unlimited), while the impact factors are in Table 3.

Fig. 1. STN network for the recovery route.

Fig. 2. Recovery route superstructure.


The example was solved using the GAMS/CPLEX (v7.0) software running on a Pentium III-E at 863.9 MHz. The model is characterised by 2046 equations and 1307 variables, of which 144 are discrete, and takes an execution time of 0.110 CPU seconds. The final optimal plant structure is presented in Figure 3, with the corresponding operation depicted in Figure 4. The final recovery route (Figure 3) is characterised by 3 processing steps (in units 1b, 1c and 2a), an intermediate storage location (V4) and a transport route (transport 1, Tr1).

Table 2. Unit characteristics.
Units | Suitability | Capacity min-max (tonne) | Fixed cost (10³ c.u.) | Variable cost (c.u./kg)
Unit 1a (1a) | T1, T2 | 20-150 | 50 | 0.5
Unit 1b (1b) | T1, T2 | 20-150 | 50 | 0.5
Unit 1c (1c) | T3 | 30-200 | 50 | 1
Unit 2a (2a) | T4 | 30-200 | 50 | 1
Transport 1 | T5 | 0.5-200 | 50 | 0.05
Vessel 4 (V4) | S4 | 1-100 | 10 | 0.1
c.u. - currency units

Table 3. Impact factors.
Residue | CTAM | CTWM | GWP | SMD | POCP | SODP
w1_T1 | 10 | 0 | 0 | 0.05 | 0 | 0
w1_T2 | 0 | 8 | 0 | 0 | 0 | 0
w1_T3 | 1 | 10 | 0.03 | 0 | 0 | 0
w1_T4 | 0 | 8 | 0.03 | 0 | 0 | 0
w1_T5 | 0 | 0 | 0 | 0 | 0 | 0
w1_U1 | 0 | 10 | 0 | 0 | 0.05 | 0
w1_U2 | 5 | 0 | 0.004 | 0 | 0 | 0.003
w1_U3 | 8 | 0 | 0.005 | 0 | 0 | 0.01
w1_U4 | 2 | 1 | 0.08 | 0 | 0 | 0

Fig. 3. Optimal network recovery route structure.

When comparing the options of disposing of or processing the sludge materials, a value added of 31280 c.u. is obtained against a disposal cost of 32070 c.u. The recovery option translates into a 95% reduction in pollutant material, with maximum environmental impacts (ton/hr) of CTAM = 1.19, CTWM = 5.37, SMD = 0.005, GWI = 0.012 and POI = SODI = 0.

4. Conclusions
A model for the synthesis and optimisation of a general recovery route for residual products is proposed. The modelling of the general network route is made through the use of the mSTN representation, coupled with a metric for the various environmental effects involved, based on the generalization of the MEI methodology. The proposed model leads both to the optimal network structure, accounting for processing and storage locations of materials as well as transport, and to the associated operation. The former results from the synthesis of the recovery steps (processing, storage and transport), while the latter is described by the complete resource time allocation (i.e. scheduling), where the environmental impacts associated not only with the disposal of the materials but also with the utilities and transports are accounted for. In this way the model is able to suggest the optimal processing and transport route, while reflecting the trade-off between processing and transport costs and the environmental worthiness of the modified residual products. It further allows the analysis of the trade-off between the option of disposing of materials, with a high negative effect on the environment, and their re-processing, while accounting for all the capital, operational, transportation and environmental costs involved. As future developments, the model is being extended to account for the treatment of uncertainty in some model parameters; this is being investigated for the availability of residual products as well as at the operational and structural levels of the recovery network.

Fig. 4. Scheduling for the recovery route network.

5. References
Barbosa-Póvoa, A.P.F.D., 1994, Detailed Design and Retrofit of Multipurpose Batch Plants, Ph.D. Thesis, Imperial College, University of London, U.K.
Linninger, A.A., Shalim, A.A., Stephanopoulos, E., Han, C. and Stephanopoulos, G., 1994, Synthesis and Assessment of Batch Processes for Pollution Prevention, AIChE Symposium Series, 90 (303), 46-58.
Pistikopoulos, E.N., Stefanis, S.K. and Livingston, A.G., 1994, A Methodology for Minimum Environmental Impact Analysis, AIChE Symposium Series, 90 (303), 139-150.
Stefanis, S.K., Livingston, A.G. and Pistikopoulos, E.N., 1997, Environmental Impact Considerations in the Optimal Design and Scheduling of Batch Processes, Computers Chem. Engng, 21 (10), 1073-1094.



A New Modeling Approach for Future Challenges in Process and Product Design
Mario Richard Eden, Sten Bay Jørgensen, Rafiqul Gani
CAPEC, Computer Aided Process Engineering Center, Department of Chemical Engineering, Technical University of Denmark, DK-2800 Lyngby, Denmark

Abstract
In this paper, a new technique for model reduction, based on rearranging the part of the model representing the constitutive equations, is presented. The rearrangement of the constitutive equations leads to the definition of a new set of pseudo-intensive variables, where the component compositions are replaced by reduction parameters in the process model. Since the number of components dominates the size of the traditional model equations, a significant reduction of the model size is obtained through this new technique. Some interesting properties of this new technique are that the model reduction does not introduce any approximations to the model, it does not change the physical location of the process variables, and it provides a visualization of the process and operation that otherwise would not be possible. Furthermore, by employing the recently introduced principle of reverse problem formulations, the solution of the integrated process/product design problem becomes simpler and more flexible.

1. Introduction

As the trend within the chemical engineering design community moves towards the development of integrated solution strategies for simultaneous consideration of process and product design issues, the complexity of the design problem increases significantly. Mathematical programming methods are well known, but may prove rather complex and time consuming when applied to large and complex chemical, biochemical and/or pharmaceutical processes. Model analysis can provide the insights that allow decomposition of the overall problem into smaller (and simpler) sub-problems, as well as extending the application range of the original models. In principle, the model equations representing a chemical process and/or product consist of balance equations, constraint equations and constitutive equations (Eden et al., 2003). The nonlinearity of the model is in many cases attributed to the relationships between the constitutive variables and the intensive variables. Since the model selected for the constitutive equations usually represents these relationships, it seems appropriate to investigate how to rearrange or represent the constitutive models.

2. Reverse Problem Formulation Concept
By decoupling the constitutive equations from the balance and constraint equations, the conventional process/product design problems may be reformulated as two reverse


problems. The first reverse problem is the reverse of a simulation problem, where the process model is solved in terms of the constitutive (synthesis/design) variables instead of the process variables, thus providing the synthesis/design targets. The second reverse problem (reverse property prediction) solves the constitutive equations to identify unit operations, operating conditions and/or products by matching the synthesis/design targets. An important feature of the reverse problem formulation is that as long as the design targets are matched, it is not necessary to resolve the balance and constraint equations (Eden et al., 2002).

[Figure 1. Reverse problem formulation: the process model is split into balance and constraint equations (mass, energy, momentum) and constitutive equations (phenomena models, functions of the intensive variables); reverse simulation of the decoupled model identifies the design targets.]

Using an Augmented Property Index (AUP) for each stream s, defined as the summation of the NP dimensionless property operators,

AUP_s = Σ_{j=1..NP} Ω_js

the property cluster for property j of stream s is defined as:

C_js = Ω_js / AUP_s    (3)

The mixture cluster and AUP values can be calculated through the linear mixing rules given by Equations (4) and (5):

C_j,MIX = Σ_{s=1..Ns} β_s · C_js,   with   β_s = x_s · AUP_s / AUP_MIX    (4)

AUP_MIX = Σ_{s=1..Ns} x_s · AUP_s    (5)

In Equation (4), β_s represents the cluster "composition" of the mixture, i.e. a pseudo-intensive variable, which is related to the flow fractions (x_s) through the AUP values. An inherent benefit of the property clustering approach is that, due to the absence of component and unit specifics, any design strategies developed will be generic.
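Equations (3)-(5) can be sketched compactly; the operator values below are hypothetical, and the check that the clusters of any mixture sum to one holds by construction:

```python
def clusters(operators):
    """Property clusters C_j = Omega_j / AUP for one stream (eq. 3)."""
    aup = sum(operators)  # augmented property index, AUP_s
    return [om / aup for om in operators], aup

def mix_clusters(streams, fractions):
    """Cluster mixing rules of equations (4)-(5).
    streams: one list of dimensionless operators Omega_j per stream;
    fractions: flow fractions x_s (summing to one)."""
    data = [clusters(ops) for ops in streams]
    aup_mix = sum(x * aup for x, (_, aup) in zip(fractions, data))  # eq. (5)
    betas = [x * aup / aup_mix for x, (_, aup) in zip(fractions, data)]
    n_props = len(streams[0])
    c_mix = [sum(b * c[j] for b, (c, _) in zip(betas, data))
             for j in range(n_props)]                               # eq. (4)
    return c_mix, aup_mix

# Two hypothetical streams, three dimensionless property operators each
c_mix, aup_mix = mix_clusters([[0.5, 1.0, 1.5], [1.0, 1.0, 1.0]], [0.7, 0.3])
```

Because mixing is linear in the β_s "compositions", lever-arm arguments on the ternary cluster diagram carry over directly, which is what the graphical targeting in the case study below exploits.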

4. Case Study - Recycle Opportunities in Papermaking To illustrate the usefulness of constitutive or property based modeling, a case study of a papermaking facility is presented. Wood chips are chemically cooked in a Kraft digester using white liquor (containing sodium hydroxide and sodium sulfide as main active ingredients). The spent solution (black liquor) is converted back to white liquor via a recovery cycle (evaporation, burning, and causticization). The digested pulp is passed to a bleaching system to produce bleached pulp (fiber). The paper machine employs 100


ton/hr of the fibers. As a result of processing flaws and interruptions, a certain amount of partly and completely manufactured paper is rejected. These waste fibers are referred to as broke, which may be partially recycled for papermaking. The reject is passed through a hydro-pulper followed by a hydro-sieve, with the net result of producing an underflow, which is burnt, and an overflow of broke, which goes to waste treatment.

Figure 2: Schematic representation of the pulp and paper process.

The objective of this case study is to identify the potential for recycling the broke back to the paper machine, thus reducing the fresh fiber requirement and maximizing resource utilization. Three primary properties determine the performance of the paper machine and consequently the quality of the produced paper (Biermann, 1996):
• Objectionable Material (OM) - undesired species in the fibers (mass fraction)
• Absorption coefficient (k) - measure of absorptivity of light into paper (m²/g)
• Reflectivity (R∞) - defined as the reflectance compared to an absolute standard (fraction)
In order to convert raw property data to cluster values, property operator mixing rules are required (Shelley & El-Halwagi, 2002; Eden et al., 2002). The property relationships can be described using the Kubelka-Munk theory (Biermann, 1996). According to Brandon (1981), the mixing rules for objectionable material (OM) and absorption coefficient (k) are linear, while a non-linear empirical mixing rule for reflectivity has been developed (Willets, 1958).

Table 1: Properties of fibers and constraints on paper machine feed.

Operator

Fibers

Broke

Paper machine

Reference

OM (mass fraction)

OM

0.000

0.115

0.00-0.02

0.01

k(m'/g)

k

0.00115-0.00125

0.001

Rso

(R.)^'^

1

Flowrate (ton/hr)

0.0012 0.0013 0.82

0.90

0.80-0.90

100

30

100-105

From these values it is apparent that the target for minimum resource consumption of fresh fibers is 70 ton/hr (100-30) if all the broke can be recycled to the paper machine. The problem is visualized by converting the property values to cluster values using Equations (1)-(3). The paper machine constraints are represented as a feasibility region, which is calculated by evaluating all possible parameter combinations of the property values in the intervals given in Table 1. The resulting ternary diagram is shown in Figure 3, where the dotted line represents the feasibility region for the paper machine feed. The relationship between the cluster values and the corresponding AUP values ensures uniqueness when mapping the results back to the property domain.

Figure 3: Ternary problem representation.

Figure 4: Optimal feed identification.

Since the optimal flowrates of the fibers and the broke are not known, a reverse problem is solved to identify the clustering target corresponding to maximum recycle. In order to minimize the use of fresh fiber, the relative cluster arm for the fiber has to be minimized, i.e. the optimum feed mixture will be located on the boundary of the feasibility region for the paper machine. The cluster target values to be matched by mixing the fibers and broke are identified graphically and represented as the intersection of the mixing line and the feasibility region in Figure 4. Using these results, the stream fractions can be calculated from Equation (5). The resulting mixture is calculated to consist of 83 ton/hr of fiber and 17 ton/hr of broke. Hence direct recycle does NOT achieve the minimum fiber usage target of 70 ton/hr, so the properties of the broke will have to be altered to match the maximum recycle target. Assuming that the feed mixture point is unchanged, and since the fractional contributions of the fibers and the intercepted broke are 70% and 30% respectively, the cluster "compositions" can be calculated from Equation (4). The cluster values for the intercepted broke can then be readily calculated from Equation (4), and the resulting point is shown in Figure 5. This reverse problem identifies the clustering target, which is converted to a set of property targets:

Table 2: Properties of intercepted broke capable of matching maximum recycle target.

Property             Original broke    Intercepted broke
OM (mass fraction)   0.115             0.067
k (m²/g)             0.0013            0.0011
R∞                   0.90              0.879
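The operator-to-cluster conversion and the lever-arm mixing described above can be sketched as follows. The operator and reference values follow Table 1; the non-linear reflectivity exponent (5.92) and the function names are assumptions of this illustration, taken from the general property-clustering literature rather than from this paper.

```python
# Sketch: map raw property values to normalized operators and cluster values,
# then mix two streams by flow fraction. Reference values per Table 1; the
# reflectivity exponent 5.92 is an ASSUMPTION of this sketch.

def clusters(props, refs, operators):
    """Return (cluster values, AUP) for one stream."""
    omega = {p: operators[p](props[p]) / operators[p](refs[p]) for p in props}
    aup = sum(omega.values())                 # Augmented Property index
    return {p: w / aup for p, w in omega.items()}, aup

operators = {
    "OM": lambda v: v,          # linear mixing rule (Brandon, 1981)
    "k":  lambda v: v,          # linear mixing rule
    "R":  lambda v: v ** 5.92,  # assumed non-linear rule (Willets, 1958)
}
refs  = {"OM": 0.01,  "k": 0.001,  "R": 1.0}
fiber = {"OM": 0.000, "k": 0.0012, "R": 0.82}
broke = {"OM": 0.115, "k": 0.0013, "R": 0.90}

c_fiber, aup_f = clusters(fiber, refs, operators)
c_broke, aup_b = clusters(broke, refs, operators)

# Linear mixing of the operators for an 83/17 ton/hr fiber/broke feed:
x_f, x_b = 83 / 100, 17 / 100
mix = {p: x_f * aup_f * c_fiber[p] + x_b * aup_b * c_broke[p] for p in refs}
aup_mix = sum(mix.values())
c_mix = {p: v / aup_mix for p, v in mix.items()}  # mixture cluster point
```

By construction the cluster values of each stream, and of the mixture, sum to one, which is what places every stream on the ternary cluster diagram.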

Note that for each mixing point on the boundary of the feasibility region, a clustering target exists for the intercepted broke, so this technique is capable of identifying all the alternative product targets that will solve this particular problem. Solution of the second reverse problem, i.e. identification of the processing steps required for performing the property interception described by Table 2, is not presented in this work. Most processes for altering or fine-tuning paper properties are considered proprietary material; however, the interception can be performed chemically and/or mechanically (Biermann, 1996; Brandon, 1981).


Figure 5: Identification of property interception targets.

5. Conclusions
Decoupling the constitutive equations from the balance and constraint equations allows a conventional forward design problem to be reformulated as two reverse problems. First the design targets (constitutive variables) are identified, and subsequently the design targets are matched by solving the constitutive equations. Employing recent property clustering techniques enables visualization of the constitutive (property) variables. A case study illustrating the benefits of these methods has been developed.

6. References
Biermann, C.J., 1996, Handbook of Pulping and Papermaking, Academic Press.
Brandon, C.E., 1981, Pulp and Paper Chemistry and Chemical Technology, 3rd Edition, Volume III, James P. Casey Ed., John Wiley & Sons, New York, NY.
Eden, M.R., Jørgensen, S.B., Gani, R. and El-Halwagi, M.M., 2003, Chemical Engineering and Processing (accepted).
Gani, R. and Pistikopoulos, E.N., 2002, Fluid Phase Equilibria, 194-197.
Michelsen, M.L., 1986, Ind. Eng. Chem. Process Des. Dev., 25.
Shelley, M.D. and El-Halwagi, M.M., 2000, Comp. & Chem. Eng., 24.
Willets, W.R., 1958, Paper Loading Materials, TAPPI Monograph Series, 19, Technical Association of the Pulp and Paper Industry, New York, NY.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Solving an MINLP Problem Including Partial Differential Algebraic Constraints Using Branch and Bound and Cutting Plane Techniques

Stefan Emet and Tapio Westerlund*

Department of Mathematics, Åbo Akademi University, Fänriksgatan 3 B, FIN-20500 Åbo, Finland (email: [email protected])
*Process Design Laboratory, Åbo Akademi University, Biskopsgatan 8, FIN-20500 Åbo, Finland (email: [email protected])

Abstract
In the present paper a chromatographic separation problem is modeled as a Mixed Integer Nonlinear Programming (MINLP) problem. The problem is solved using the Extended Cutting Plane (ECP) method and a Branch and Bound (BB) method. The dynamics of the chromatographic separation process is modeled as a boundary value problem which is solved, repeatedly within the optimization, using a relatively fast and numerically robust finite difference method. The stability and robustness of the numerical method were of high priority, since the boundary value problem was solved repeatedly throughout the optimization. The obtained results were promising: it was shown that, for different purity requirements, the production planning can be done efficiently, such that all the output of a system can be utilized. Using an optimized production plan, it is thus possible to use existing complex systems, or to design new systems, more efficiently and to reduce the energy costs, or the costs in general.

1. Introduction
The problem of efficiently separating the products of a multicomponent mixture is a challenging task that arises in many industries. The objective is to, within reasonable costs, separate the products as efficiently as possible while retaining the preset purity requirements. The modeling of different chromatographic separation processes has been addressed in, for example, Saska et al. (1991), Ching et al. (1987) and Guiochon et al. (1994). The optimization of separation processes has been addressed in the pertinent literature, for example, in Strube et al. (1997) and in Dünnebier and Klatt (1999). A chromatographic separation problem was recently modeled and solved as an MINLP problem by Karlsson (2001). Comparisons of solving the MINLP problem in Karlsson (2001) using the ECP method and the BB method were carried out by Emet (2002). In the present paper, the formulations in Karlsson (2001) and in Emet (2002) are further studied.


2. Formulation of the Model

Figure 1 shows the different possibilities in a two-column system with two components. At the inlet of a column, it is possible to feed the mixture to be separated (e.g. molasses), the eluent (e.g. water), or the outflow from some other column. At the outlet of a column, one can collect the products or re-use the outcome for further separation. These decisions are modeled using binary variables, y_ki, y_kij and x_lik, as illustrated in Figure 1. The times when these decisions are made are denoted by t_0, t_1, ..., t_T, where t_0 = 0. The number of intervals is denoted by T and the length of the period by τ = t_T. The index i denotes which binary variables are valid during the time interval [t_{i-1}, t_i]. The indices k and l denote the columns and the index j the component. The main questions are, thus, what decisions should be made and at which times in order to retain as much separated product as possible.

y_ki: feed into column k.
y_kij: collect product j from column k.
x_lik: recycle the outflow from column l to column k.

Figure 1: A two-column system with two components.

2.1 Dynamic response model
The concentration of component j at time t ≥ 0 and at the height-position z within column k is denoted by c_kj(t, z). The height of a column is denoted by Z_H, and hence 0 ≤ z ≤ Z_H. The responses of the concentrations within each column were modeled with the following system of PDEs (Guiochon et al., 1994):

\[ \left(1 + F\beta_j\right)\frac{\partial c_{kj}}{\partial t} + u\,\frac{\partial c_{kj}}{\partial z} + F\sum_{i=1}^{C}\beta_{ji}\left(c_{ki}\,\frac{\partial c_{kj}}{\partial t} + c_{kj}\,\frac{\partial c_{ki}}{\partial t}\right) = D_j\,\frac{\partial^2 c_{kj}}{\partial z^2} \qquad (1) \]

where j = 1, ..., C and k = 1, ..., K. The estimates of the parameters β_j and β_{ji} that were given in Karlsson (2001) were used here. The feed and the recycling decisions provide the following boundary conditions (at the inlet of column k):

\[ c_{kj}(t, 0) = y_k^F(t)\,c_j^F + \sum_{l=1}^{K} x_{lk}(t)\,c_{lj}(t, Z_H) \qquad (2) \]

The logical functions y_k^F(t) and x_{lk}(t) in (2) are modeled using the following stepwise-linear functions:

\[ y_k^F(t) = \sum_{i=1}^{T} y_{ki}\,\delta_i(t) \qquad (3) \]

\[ x_{lk}(t) = \sum_{i=1}^{T} x_{lik}\,\delta_i(t) \qquad (4) \]

where the δ_i(t)-function is defined as

\[ \delta_i(t) = \begin{cases} 1 & \text{if } t \in [t_{i-1}, t_i],\; i = 1, \ldots, T \\ 0 & \text{otherwise.} \end{cases} \qquad (5) \]
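The stepwise decision functions above can be sketched as follows; the time points and coefficient values used here are hypothetical.

```python
# Sketch of the stepwise-linear decision functions (3)-(5): the binary
# decisions (e.g. y_ki) are held constant over each interval [t_{i-1}, t_i]
# via the indicator function delta_i(t).

def delta(t, i, tpts):
    """delta_i(t) = 1 if t lies in [t_{i-1}, t_i], else 0 (Eq. 5)."""
    return 1 if tpts[i - 1] <= t <= tpts[i] else 0

def stepwise(t, coeffs, tpts):
    """Evaluate e.g. y_k^F(t) = sum_i y_ki * delta_i(t) (Eqs. 3-4)."""
    return sum(c * delta(t, i, tpts) for i, c in enumerate(coeffs, start=1))

tpts = [0.0, 2.0, 5.0, 8.0]   # t_0 < t_1 < ... < t_T (hypothetical values)
y_ki = [1, 0, 1]              # feed switched on in intervals 1 and 3
```

A query such as `stepwise(3.0, y_ki, tpts)` then returns the decision in force during the interval that contains t = 3.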

The steady state condition of the system (Karlsson, 2001) can be modeled as a periodical boundary condition as follows:

\[ c_{kj}(0, z) = c_{kj}(\tau, z) \qquad (6) \]

That is, the concentrations in a column should be equal at the start and at the end of a period. Conditions on the derivatives can also be formulated in a similar way (Emet, 2002).

2.2 Optimization model
The objective function, to maximize the profit over the period, was formulated as:

\[ \min \;\; \frac{1}{\tau} \sum_{k=1}^{K} \sum_{i=1}^{T} \left( w\,d_{ki} - \sum_{j=1}^{C} p_j\,s_{kij} \right) \qquad (7) \]

where w and p_j are parameters that denote the prices of the input feeds, d_ki, and the collected products, s_kij. Note that the objective function in (7) is pseudo-convex. The volume of the input feed into column k within the interval i is modeled using the variable d_ki with a "big-M" formulation (the feed volume can be nonzero only if the corresponding feed decision is active, i.e. d_ki ≤ M·y_ki). The purity requirements were formulated as:

\[ \frac{\sum_{k=1}^{K}\sum_{i=1}^{T} m_{kij}}{\sum_{k=1}^{K}\sum_{i=1}^{T} q_{kij}} \geq R_j \qquad (14) \]

where R_j ≤ 1 denotes the purity requirement of component j. The q_kij-variables in (14) are used for measuring the volume of all components within interval i if the component j is collected from column k:

\[ \sum_{l=1}^{C} m_{kil} - M\left(1 - y_{kij}\right) \leq q_{kij} \qquad (15) \]

The purity constraints (14) were written as linear constraints as follows:

\[ R_j \sum_{k=1}^{K}\sum_{i=1}^{T} q_{kij} - \sum_{k=1}^{K}\sum_{i=1}^{T} m_{kij} \leq 0 \qquad (16) \]

Linear constraints regarding the order of the timepoints and the binary constraints for the inlet and the outlet were formulated as:

\[ t_{i-1} \leq t_i, \qquad i = 1, \ldots, T \qquad (17) \]

\[ y_{ki} + \sum_{l=1}^{K} x_{lik} \leq 1 \qquad (18) \]

\[ \sum_{j=1}^{C} y_{kij} + \sum_{l=1}^{K} x_{kil} \leq 1 \qquad (19) \]

3. Numerical Methods
An analysis of solving the boundary value problem using orthogonal collocation, neural networks and finite differences was conducted in Karlsson (2001). The finite difference method was reported in Karlsson (2001) to be the most robust one (when applied to the chromatographic separation problem), and was hence applied in the present paper. The periodical behavior of the solution was achieved by solving the PDEs iteratively until the changes in the concentrations of two successive iterations resided within a given tolerance (Emet, 2002). The optimization problem was solved with the ECP method described in Westerlund and Pörn (2002). Comparisons were carried out using an implementation of the BB method for MINLP problems by Leyffer (1999). Whereas the applied BB method requires both gradient and Hessian information, the ECP method only requires gradient information. The derivatives needed in each method were thus approximated using finite differences.
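The outer iteration towards the cyclic steady state can be illustrated with a deliberately simplified model: a single-component advection-dispersion equation marched by an explicit finite-difference scheme, with whole periods repeated until two successive period profiles agree within a tolerance. All parameter values are hypothetical, and the scheme is far simpler than the full model of Eq. (1).

```python
import numpy as np

# Illustrative sketch (NOT the full model (1)): c_t + u c_z = D c_zz on a
# unit-length column, explicit upwind convection + central dispersion, with
# whole periods repeated until the end-of-period profile matches the
# start-of-period profile (the cyclic steady state of Eq. (6)).

def solve_period(c0, inlet, u, D, dz, dt, nsteps):
    c = c0.copy()
    for _ in range(nsteps):
        adv = -u * (c[1:-1] - c[:-2]) / dz                 # upwind convection
        dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dz**2   # central dispersion
        c[1:-1] += dt * (adv + dif)
        c[0] = inlet          # feed boundary condition at z = 0
        c[-1] = c[-2]         # zero-gradient outlet at z = Z_H
    return c

def cyclic_steady_state(inlet, u=1.0, D=1e-3, nz=101, nsteps=400, tol=1e-8):
    dz = 1.0 / (nz - 1)
    dt = 0.4 * min(dz / u, dz**2 / (2 * D))   # stability-limited time step
    c = np.zeros(nz)
    for _ in range(200):                       # outer periodic iteration
        c_new = solve_period(c, inlet, u, D, dz, dt, nsteps)
        if np.max(np.abs(c_new - c)) < tol:    # two successive periods agree
            return c_new
        c = c_new
    return c
```

With a constant feed the iteration converges to the flat steady profile, which makes the convergence behavior of the outer loop easy to check.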

4. Numerical Results

The profiles of the concentrations of a solution obtained with the BB method are illustrated in Figure 2. Corresponding values are presented in Table 1. The total number of times the system of PDEs was solved when solving the MINLP problem is also given in Table 1. Note that most of the CPU time used in each method was spent on solving the PDEs and calculating the integrals. The purity requirements, (0.8, 0.9), were well met using a one-column system. A solution to the two-column problem, obtained with the ECP method, is illustrated in Figure 3. In the latter problem the purity requirement was (0.9, 0.9), and hence recycling was needed. There were, however, severe problems in obtaining any solutions to the two-column problem with the BB method because of the high number of function evaluations needed within each NLP subproblem (Emet, 2002).

Table 1: Results of a one-column system with the purity requirement (0.8, 0.9).

               BB             ECP
f*             -12.12         -12.28
purity         (0.81, 0.90)   (0.82, 0.90)
# sub-prob.    124 (NLP)      42 (MILP)
# PDE-solv.    11285          1385
CPU [sec]      1265.4         210.0

Figure 2: Profiles of a one-column problem, by BB.

Figure 3: A solution to the two-column problem, f* = -12.4. (a) Column 1, recycle to column 2. (b) Column 2.


5. Discussion
A chromatographic separation problem was formulated as an MINLP problem and solved with the ECP method and the BB method for MINLP problems. The dynamics of the underlying separation process was formulated as a boundary value problem that was solved using finite differences. It was shown that, for a lower purity demand, all the outflow of a one-column system could be utilized as products. For a higher purity demand, a more complex system with two or more columns was needed in order to enable the recycling of impure outflows for further separation. It was further observed that the advantage of the ECP method was its need for relatively few function evaluations. The main drawback of the applied BB method was its dependency on exact Hessian information. Improvements in the solution of the boundary value problem, in the solution of the MINLP problem and in the modeling of these are interesting future research challenges.

6. References
Ching, C.B., Hidajat, K. and Ruthven, D.M., 1987, Experimental study of a simulated counter-current adsorption system-V. Comparison of resin and zeolite adsorbents for fructose-glucose separation at high concentration, Chem. Eng. Sci., 40, pp. 2547-2555.
Dünnebier, G. and Klatt, K.-U., 1999, Optimal operation of simulated moving bed chromatographic processes, Computers Chem. Engng Suppl., 23, pp. S195-S198.
Emet, S., 2002, A Comparative Study of Solving a Chromatographic Separation Problem Using MINLP Methods, Ph.Lic. Thesis, Åbo Akademi University.
Guiochon, G., Shirazi, S.G. and Katti, A.M., 1994, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, San Diego, CA.
Karlsson, S., 2001, Optimization of a Sequential-Simulated Moving-Bed Separation Process with Mathematical Programming Methods, Ph.D. Thesis, Åbo Akademi University.
Leyffer, S., 1999, User manual for MINLP BB, Numerical Analysis Report, Dundee University.
Saska, M., Clarke, S.J., Mei Di Wu and Khalid Iqbal, 1991, Application of continuous chromatographic separation in the sugar industry, Int. Sugar JNL., 93, pp. 223-228.
Strube, J., Altenhöner, U., Meurer, M. and Schmidt-Traub, H., 1997, Optimierung kontinuierlicher Simulated-Moving-Bed Chromatographie-Prozesse durch dynamische Simulation, Chemie Ingenieur Technik, 69, pp. 328-331.
Westerlund, T. and Pörn, R., 2002, Solving pseudo-convex mixed integer optimization problems by cutting plane techniques, Optimization and Engineering, 3, pp. 253-280.

7. Acknowledgements Financial support from TEKES (Technology Development Center, Finland) is gratefully acknowledged.



Selection of MINLP Model of Distillation Column Synthesis by Case-Based Reasoning

Tivadar Farkas*, Yuri Avramenko, Andrzej Kraslawski, Zoltán Lelkes*, Lars Nyström

Department of Chemical Technology, Lappeenranta University of Technology, P.O. Box 20, FIN-53851 Lappeenranta, Finland, [email protected]
*Department of Chemical Engineering, Budapest University of Technology and Economics, H-1521 Budapest, Hungary

Abstract
A case-based library for distillation column and distillation sequence synthesis using MINLP has been developed. The retrieval algorithm, combining inductive retrieval and nearest neighbor techniques, is presented. The retrieval method and the adaptation of the solution are tested on a heptane-toluene example.

1. Introduction

Distillation is the most widespread separation method in the chemical process industry. Since the equipment and utility costs are very high, the synthesis of distillation columns and distillation sequences is an important task. However, it is very difficult due to the complexity of the structures and equilibrium models. One of the most popular synthesis methods is the hierarchical approach (Douglas, 1988). The other common method of synthesis is mixed integer nonlinear programming (MINLP). MINLP is also used to perform synthesis and system optimization simultaneously (Duran and Grossmann, 1986). The simultaneous design and optimization method has three steps: (a) build a superstructure; (b) generate the MINLP model of the superstructure; (c) find the optimal structure and operation with a proper tool. There are two main difficulties when using MINLP:
a) Generating a working, accurate MINLP model is a complicated task. Usually, every published paper reports a new MINLP model and superstructure according to the problem under consideration. Up to now only one automatic combinatorial method has been reported to generate superstructures (Friedler et al., 1992). However, it is hard to use for cascade systems.
b) Most of the MINLP algorithms provide the global optimum only in the case of a convex objective function and searching space. Rigorous tray-by-tray and equilibrium models in distillation column design usually contain strongly non-convex equations, therefore finding the global optimum is not ensured. In such cases the result may depend on the starting point of the calculations.
In order to overcome these difficulties, earlier experience should be used when solving a new problem. Case-based reasoning (CBR) is an excellent tool for the reuse of experience. In CBR the case most similar to an actual problem is retrieved from a case library, and the solution of this case is used to solve the actual problem. Finally the solution of the problem is stored in the case library for future use (Aamodt and Plaza, 1994; Watson, 1997). The objective of this paper is to present a case-based reasoning method, which for a new distillation problem - an ideal mixture of components that is to be separated into a number of products of specified compositions - provides a proper MINLP model with


superstructure and gives an initial state for the design of a distillation column or distillation sequence. The creation of the case library from the existing MINLP models and results is considered. The library contains 27 cases of separation of ideal mixtures of up to five components.

2. Case-Based Reasoning
Case-based reasoning imitates human reasoning and tries to solve new problems by reusing solutions that were applied to past similar problems. CBR deals with very specific data from previous situations, and reuses results and experience to fit a new problem situation. The central notion of CBR is a case. The main role of a case is to describe and to remember a single event from past experience where a problem or problem situation was solved. A case is made up of two components: problem and solution. Typically, the problem description consists of a set of attributes and their values. Many cases are collected in a set to build a case library (case base). The library of cases must roughly cover the set of problems that may arise in the considered domain of application. The main phases of the CBR activities can be described as a cyclic process. During the first step, retrieval, a new problem (target case) is matched against the problems of previous cases (source cases) by calculating the similarity function, and the most similar problem and its stored solution are found. If the proposed solution does not meet the necessary requirements of the actual situation, the next step, adaptation, is necessary and a new solution is created. The obtained solution and the new problem together build a new case that is incorporated in the case base during the learning step. In this way the CBR system evolves, as the capability of the system is improved by extending the stored experience. One of the most important steps in CBR is the calculation of the similarity between two cases during the retrieval phase.

3. Retrieving Method
During the retrieval, the attributes of the target case and the source cases are compared to find the most similar case. There are two widely used retrieval techniques (Watson, 1997): nearest neighbor and inductive retrieval. The nearest neighbor retrieval simply calculates the differences of the attributes, multiplied by a weighting factor. In inductive retrieval a decision tree is produced, which classifies or indexes the cases. There are classification questions about the main attributes in the nodes of the tree; by answering these questions the most similar case is found. Due to the variety of specifications of the cases, the two retrieval techniques are combined. First, using the inductive method, a set of appropriate cases is retrieved, and then only the cases in the set are considered. Next, the cases in the set are ranked according to their similarity to the target case using the nearest neighbor method.

3.1. Inductive retrieval
The following classification attributes are used in the inductive retrieval:
- Sharp or non-sharp separation.
- Heat integration: according to this classification there are three possibilities: structure without heat integration; structure with heat integration; thermally coupled structure. In a single column configuration only the non-heat-integrated structure is possible.
- Number of products: the number of products can change from 2 to 5. This classification is considered because the single column configurations and models do not contain mass balances for the connection of distillation columns, thus these models cannot be used for problems with three or more products.

115 zyxwvutsrqpo

- Number of feeds: cases with 1, 2 or 3 feeds are considered. This attribute is required because of the dissimilarity between MINLP models with single and multiple feeds.

3.2. Retrieval based on the nearest neighbor method
The similarity between the target case and all the source cases is calculated using the nearest neighbor method. The evaluation of the global similarity between the target and a source case is based on the computation of the local similarities. A local similarity deals with a single attribute, and takes a value from the interval [0;1]. From the local similarities the global similarity can be derived as:

\[ SIM(T, S) = \sum_{i=1}^{k} w_i \, sim_i \Big/ \sum_{i=1}^{k} w_i \qquad (1) \]

where w_i is the weight of importance of attribute i; sim_i is the local similarity between the target (T) and the source case (S); k is the number of attributes. The weight of importance takes an integer value from 1 to 10 according to the actual requirements. Five attributes are used:

Components similarity. This is a non-numeric attribute. The similarity of components is based on their structure. A similarity tree (Fig. 1), where the nodes represent the basic groups of chemical components, was created. To each component group a numeric similarity value was assigned. For two components the similarity value is the value of the closest common node. The more similar the components are, the higher the similarity value between them. For identical components the similarity value is 1.

components (0)
  hydrocarbon (0.6)
    paraffinic (0.8): propane, n-butane, iso-butane, n-pentane, n-hexane, n-heptane, n-octane, n-nonane
    aromatic (0.5): benzene, toluene, o-xylene, diphenyl
    unsaturated (0.8): methylacetylene, trans-2-butene, cis-2-butene
  alcohol (0.7): methanol
  nitrile (0.4): acetonitrile
  keton (0.3): acetone

Figure 1. Similarity tree of components.

The local similarity of the components (sim_c) is defined as the average of the similarity values between the components:
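The closest-common-node lookup on the similarity tree of Figure 1 can be sketched as follows; only a subset of the components is encoded, and the parent-pointer representation is an implementation choice of this illustration.

```python
# Sketch of the component-similarity lookup of Figure 1: each group node
# carries a similarity value, and the similarity of two distinct components
# is the value of their closest common node (1.0 for identical components).

TREE = {  # node -> (parent, node similarity value); leaves carry no value
    "components":  (None, 0.0),
    "hydrocarbon": ("components", 0.6),
    "alcohol":     ("components", 0.7),
    "nitrile":     ("components", 0.4),
    "keton":       ("components", 0.3),
    "paraffinic":  ("hydrocarbon", 0.8),
    "aromatic":    ("hydrocarbon", 0.5),
    "unsaturated": ("hydrocarbon", 0.8),
    "n-heptane":   ("paraffinic", None),
    "benzene":     ("aromatic", None),
    "toluene":     ("aromatic", None),
    "methanol":    ("alcohol", None),
}

def path_to_root(node):
    path = []
    while node is not None:
        path.append(node)
        node = TREE[node][0]
    return path

def component_similarity(a, b):
    if a == b:
        return 1.0
    ancestors = set(path_to_root(a))
    for node in path_to_root(b):
        if node in ancestors:        # first hit = closest common node
            return TREE[node][1]
    return 0.0
```

For example, benzene and toluene share the "aromatic" node (0.5), while n-heptane and toluene only share "hydrocarbon" (0.6 is the value stored at that node in Figure 1).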

\[ sim_c = \sum_{j=1}^{n} x_{c,j} \Big/ n \qquad (2) \]

where x_{c,j} is the similarity value of the components from the similarity tree, and n is the maximal number of components in the compared mixtures.

Boiling point and molar weight of components. These attributes are numeric. The similarity is calculated using a simple distance approach: the shorter the distance between two attribute values, the bigger the similarity. For higher sensitivity not the original values are used, but values normalized to the interval [0;1]. The local similarities for these attributes are defined as:

\[ sim_b = 1 - \sum_{i=1}^{n} \Delta t_{b,i} \Big/ n \qquad (3) \]

\[ sim_m = 1 - \sum_{i=1}^{n} \Delta m_i \Big/ n \qquad (4) \]

where Δt_{b,i} is the difference of the normalized boiling points; Δm_i is the difference of the normalized molar weights; n is the maximal number of components.

Feed and product compositions. These are also numeric attributes, but they are vectors. When comparing vector attributes the distance vector is determined:

\[ T = (T_1, T_2, \ldots, T_n), \quad S = (S_1, S_2, \ldots, S_n), \quad T_i, S_i \in [0;1] \qquad (5) \]

where T is the attribute vector of the target case and S is the attribute vector of the source case. Because there are a number of product composition vectors, the difference vector and the distance are computed for every product pair. The method is the same in the case of multiple feeds. The local similarities of feed compositions (sim_f) and product compositions (sim_p) are defined as:

\[ sim_f = 1 - \frac{1}{g} \sum_{i=1}^{g} \frac{\lVert T_i - S_i \rVert}{\lVert e_n \rVert} \qquad (6) \]

\[ sim_p = 1 - \frac{1}{q} \sum_{i=1}^{q} \frac{\lVert T_i - S_i \rVert}{\lVert e_n \rVert} \qquad (7) \]

where g is the maximal number of feeds; q is the number of products; e_n are the basis vectors in the R^n space (necessary for normalization). Other attributes can also be considered according to the actual requirements, and the weights are taken from 1 to 10.
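Once the local similarities are available, the weighted global similarity of Eq. (1) is a plain weight-normalized sum. A minimal sketch, with hypothetical attribute scores and weights:

```python
# Sketch of Eq. (1): global similarity as a weight-normalized sum of local
# similarities, each in [0, 1], with integer weights of importance in 1..10.

def global_similarity(local_sims, weights):
    """SIM(T, S) = sum_i w_i * sim_i / sum_i w_i."""
    if len(local_sims) != len(weights):
        raise ValueError("one weight per attribute is required")
    return sum(w * s for w, s in zip(weights, local_sims)) / sum(weights)

# Hypothetical scores for the five attributes used in the paper:
# components, boiling point, molar weight, feed comp., product comp.
sims    = [0.8, 0.95, 0.9, 1.0, 0.7]
weights = [10, 5, 3, 8, 8]
score = global_similarity(sims, weights)
```

Because each local similarity lies in [0;1] and the weights are positive, the global score also lies in [0;1], so cases can be ranked directly by it.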


4. Solution
The model consists of a superstructure, the set of variables and parameters, the mass and enthalpy balances and other constraints, but in the original articles usually only the superstructure, the variables and the main equations are detailed. For this reason, instead of descriptions of the MINLP models, the original articles have been included in the case base. In the articles usually only a flowsheet and some general data are reported as the optimum of a case. In the CBR program this flowsheet and its mathematical representation are the solution, and based on these data an initial point can be proposed for the MINLP iteration. The basis of the mathematical representation of a flowsheet (Figure 2) is a mathematical graph (Figure 3). The vertices of the graph are: the feed (F1), the distillation columns (C1, C2, ...), the heat-exchangers (condensers: Con1, ...; and reboilers: Reb1, ...), the mixers/splitters (MS1, MS2, ...) and the products (P1, P2, ...); the branches are the streams between the units. This graph can be represented in matrix form (vertex-vertex matrix): if a_ij = 1 then there is a branch from vertex i to vertex j; if a_ij = 0, then there is no branch. The streams are labeled (S1, S2, ...). There is a set of data describing a stream (temperature, flow rate, main component(s) or mole fractions). These connections are described by a vertex-branch matrix, where the starting and ending vertices of the labeled streams are shown.

Figure 2. Example of flowsheet.

Figure 3. Graph representation of flowsheet.

In the graph only simple columns are used, with a maximum of three inputs and two outputs. The three closest cases are reported as a solution, and according to the actual requirements and engineering experience the most useful model can be selected from among them. Due to the complexity of the distillation problems there is no adaptation of the found MINLP model. The solution of the closest case is proposed as the initial point in the design task.
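The vertex-vertex and vertex-branch matrices described above can be sketched for a small hypothetical single-column flowsheet (the unit and stream names below are illustrative, not taken from Figure 2):

```python
# Sketch: build the vertex-vertex adjacency matrix and the vertex-branch
# (incidence) matrix for a hypothetical flowsheet F1 -> C1 -> products.

vertices = ["F1", "C1", "Con1", "Reb1", "P1", "P2"]
streams = {  # stream label -> (starting vertex, ending vertex)
    "S1": ("F1", "C1"),
    "S2": ("C1", "Con1"),
    "S3": ("C1", "Reb1"),
    "S4": ("Con1", "P1"),
    "S5": ("Reb1", "P2"),
}

idx = {v: i for i, v in enumerate(vertices)}
n = len(vertices)

# Vertex-vertex matrix: a[i][j] = 1 iff a stream runs from vertex i to j.
a = [[0] * n for _ in range(n)]
for src, dst in streams.values():
    a[idx[src]][idx[dst]] = 1

# Vertex-branch matrix: -1 at the starting vertex of each stream,
# +1 at its ending vertex (one column per stream).
b = [[0] * len(streams) for _ in range(n)]
for col, (src, dst) in enumerate(streams.values()):
    b[idx[src]][col] = -1
    b[idx[dst]][col] = 1
```

Each column of the vertex-branch matrix then sums to zero, which is a convenient consistency check that every stream has exactly one start and one end vertex.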


5. Example
An example is used to test the method, including the retrieval, the revision of the model and the solution of the chosen MINLP model. A heptane-toluene mixture is given. The flowrate of the equimolar [0.5, 0.5] feed is 100 kmol/h. The target is to separate the mixture into pure components with a 95% purity requirement at the top and at the bottom. In the inductive retrieval, a set of non-heat-integrated sharp separation cases with one feed and two products was retrieved. The similarity values between our problem and the cases of the set were calculated using the nearest neighbor formulas. The most similar case was a benzene-toluene system (Yeomans and Grossmann, 2000). Using the MINLP model of this case our problem was solved: the optimal solution is a column with 67 equilibrium trays; the feed tray is the 27th from the bottom; the reflux ratio is 5.928; the column diameter is 3.108 m.

6. Summary
A case-based program has been developed which, in the case of a new separation problem, can help to generate a superstructure and an MINLP model for the design of distillation columns or sequences. During retrieval the important design and operational parameters are compared. The method is tested on a heptane-toluene example, which has been solved from the problem statement through the retrieval.

7. References
Aamodt, A. and Plaza, E., 1994, Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches, AI Communications, IOS Press, 7(1), 39-59.
Douglas, J.M., 1988, Conceptual Design of Chemical Processes, McGraw-Hill Chemical Engineering Series, McGraw-Hill, New York.
Duran, M.A. and Grossmann, I.E., 1986, A mixed-integer non-linear programming approach for process systems synthesis, AIChE J., 32(4), 592-606.
Friedler, F., Tarjan, K., Huang, Y.W. and Fan, L.T., 1992, Graph-theoretic approach to process synthesis: axioms and theorems, Chem. Eng. Sci., 47(8), 1973-1988.
Watson, I., 1997, Applying Case-Based Reasoning: Techniques for Enterprise Systems, Morgan Kaufmann Publishers, Inc.
Yeomans, H. and Grossmann, I.E., 2000, Disjunctive programming models for the optimal design of distillation columns and separation sequences, Ind. Eng. Chem. Res., 39(6), 1637-1648.



Discrete Model and Visualization Interface for Water Distribution Network Design

Eric S. Fraga, Lazaros G. Papageorgiou & Rama Sharma
Centre for Process Systems Engineering, Department of Chemical Engineering, UCL (University College London), Torrington Place, London WC1E 7JE, United Kingdom

ABSTRACT
The water distribution network design problem poses challenges for optimization due to the tightly constrained nature of the typical mathematical programming formulation. The optimization of existing water distribution networks has therefore often been tackled through the use of stochastic optimization procedures. However, even these suffer from the need to solve systems of nonlinear algebraic equations. This paper describes the implementation of a hybrid method which combines a fully discrete formulation and visualization procedure with mixed integer nonlinear programming (MINLP) solution methods. The discrete formulation is suitable for solution by stochastic and direct search optimization methods and provides a natural basis for visualization and, hence, user interaction. Visual hints allow a user to easily identify bottlenecks and the aspects of the design that most greatly affect the overall cost. The result is a tool which combines the global search capabilities of stochastic algorithms with the pattern recognition and tuning abilities of the user. The solutions obtained provide good initial points for subsequent optimization by rigorous MINLP solution methods.

1 INTRODUCTION

The design of a water distribution network (Alperovits & Shamir, 1977) involves identifying the optimal pipe network, the head pressures of the individual supply and demand nodes, and the flows between the nodes, including both the amount and the direction of flow. The objective is to find the minimum cost network which meets the specified demands. The water distribution network design problem poses challenges for optimization tools due to the tight nonlinear constraints imposed by the modelling of the relationship between node heads, water flow in a pipe, and the pipe diameter. The objective function is often simple, consisting of a linear combination of pipe diameters and lengths. The optimization of existing water distribution networks has been tackled through a variety of methods including mathematical programming (Alperovits & Shamir, 1977; Goulter & Morgan, 1985) and stochastic procedures (Cunha & Sousa, 1999; Gupta et al., 1999; Savic & Walters, 1997). However, even the latter require an embedded solution of systems of nonlinear algebraic equations, leading to difficulties in initialization and in handling non-convexity.

This paper describes a fully discrete reformulation of the water distribution network problem. This formulation is suitable for directly evaluating the objective function and is particularly appropriate for the use of stochastic and direct search optimization methods. Furthermore, it provides a natural mapping to a graphical display, allowing the use of visualization. Visualization permits the user to interact, in an intuitive manner, with the optimization procedures and helps to gain insight into the characteristics of the problem. The solutions obtained are good initial solutions for subsequent rigorous modelling as a mixed integer nonlinear programme (MINLP). Although the majority of existing work is limited to the optimization of a given network, the new formulation can also generate new network layouts. The visualization and optimization methods have been developed to cater for both the optimization of a given network and the identification of optimal networks. The results presented in this paper, however, are limited to network optimization.

2 THE PROBLEM STATEMENT

The least-cost design problem can be stated as follows. Given the water network superstructure connectivity links for the nodes in the network, the pipe lengths, the demand at each node (assumed to be static) and the minimum flowrate and head requirements, determine the optimal flowrate and direction in each pipe, the head at each node and the diameter of each pipe so as to minimise the total cost of the pipes in the network.

3 THE DISCRETE FORMULATION AND ITS VISUALIZATION

The discrete formulation is based on the modelling of the nodes (both demand and supply) in the network as horizontal lines in a two-dimensional discrete space. The position of each line is represented by (x, y). The y value specifies the head at the node. The length of a horizontal line in this discrete space represents the amount of water through the node, irrespective of the actual location along the x-axis. Transportation of water from one node to another occurs when the lines corresponding to the two nodes overlap (in terms of the x co-ordinates), provided a connection between the two nodes is allowed. The definition of the network is actually a superstructure of allowable pipe connections, pipe diameters and distances between nodes. Given the set of x and y values for the nodes in the network, the allowable connections and the pipe diameter, d (chosen from a discrete set), allocated to each possible connection, the evaluation of the objective function is based on identifying all the matches defined by the positions of the lines in the discrete space. This evaluation is deterministic and enables the identification of the network layout and the direct evaluation of the cost of the water distribution network. This objective function forms the basis of a discrete optimization problem in x, y, and d. The discrete formulation provides a natural basis for visualization and, hence, user interaction. Figure 1 presents an annotated screenshot of the visual interface. The graphical interface employs visual hints to allow the user to identify bottlenecks and the aspects of the design that most greatly affect the overall cost. Specifically, the diameter of each

Figure 1: Water distribution network visualization interface for the Alperovits & Shamir example. (Screenshot; an annotation indicates excess water at a node.)

pipe is indicated by its width in the visual display. The violation of demand requirements for a node, or in fact the excess of water delivered to a node, is represented using a small gauge within the horizontal bar representing each node. Although not necessarily apparent in this manuscript, a red colour is used to indicate a shortfall in the demand for the node and a blue colour an excess. The interface allows the user, through use of the mouse or the keyboard, to manipulate the location of each horizontal bar and the diameter of each pipe. Furthermore, the implementation, based on the Model-View-Control design pattern (Gamma et al., 1995) and written in Java, allows the user to interact directly and easily with the optimization procedures. The user can specify which features to manipulate (e.g. pipe diameters alone) and the specific optimizer to invoke. The choice of optimizer includes a variety of implementations of genetic algorithms and simulated annealing procedures. The user may also export a current configuration as an MINLP model which can be subsequently optimized rigorously, using the GAMS system (Brooke et al., 1998), as described in the next section.

The result is a tool which combines the global search capabilities of stochastic algorithms with the pattern recognition and tuning abilities of the user. The solutions obtained provide good initial points for subsequent rigorous optimization with a mixed integer nonlinear model.

4 MINLP OPTIMIZATION MODEL

The objective function expresses the network cost, which is assumed to be linearly proportional to the pipe length and pipe diameter. It is assumed that pipe diameters

are available at discrete commercial sizes. The objective function is minimized subject to three main sets of mathematical constraints: continuity (flow balance) constraints, energy loss constraints, and bounds and logical constraints. The first set of constraints represents the mass conservation law at each node of the water network. The second set describes the energy (head) losses for each pipe in the network, relating the pressure drop (head loss) due to friction to the pipe flowrate and the diameter, roughness, material of construction, and length of the pipe. In this work, the commonly used Hazen-Williams empirical formula (Alperovits & Shamir, 1977; Cunha & Sousa, 1999; Goulter & Morgan, 1985) is used. The third set of constraints includes bounds on variables such as minimum head or flowrate requirements. This set also includes constraints to ensure that only one diameter can be selected for each pipe (stream), a more realistic representation than a split-pipe design. The above problem corresponds to an MINLP model due to the nonlinearity of the Hazen-Williams correlation. This model is solved using the DICOPT method in the GAMS system (Brooke et al., 1998). DICOPT invokes MILP and NLP solvers iteratively. In this work, we have used the CPLEX 6.5 MILP solver and four different NLP solvers.

5 ILLUSTRATIVE EXAMPLE

An example from the literature (Alperovits & Shamir, 1977) is presented. The results show the improved behaviour, particularly in terms of robustness and consistency, achieved through the combination of the stochastic optimization of a discrete model, user interaction, and rigorous MINLP solution. The problem consists of seven nodes and up to eight pipes. When the viewer is instantiated, the initial values for all the variables default to the mid-point between the lower and upper bounds.
From this starting point, the user may immediately interact directly with the viewer to alter the configuration or may request the application of a stochastic optimization step. At any point, the current configuration may be exported as a GAMS input file and solved using the rigorous MINLP formulation.

Table 1: Sequence of operations for illustrative example (Alperovits & Shamir, 1977).

Step  Operation  Objective Function ($'000)  Infeasibility Measure
1     GA         397                         0.15
2     User       411                         0.13
3     User       463                         0.10
4     User       423                         0.13

A typical sequence of steps is presented in Table 1. For each step, the current objective value (in discrete space) and a measure of its infeasibility (shortfall in demands met, in m^3/s) is obtained; these are collated in Table 1. Due to the highly constrained nature of the discrete formulation, an exactly feasible solution is unlikely to be achieved. However, the aim is not so much to solve the problem directly with the visualization tool but to provide good initial solutions for the rigorous optimization procedure.
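The nonlinear head-loss relation at the core of the model's energy constraints is the Hazen-Williams formula. A minimal sketch in SI units follows; the constant 10.67 and the roughness coefficient C = 130 are common textbook values, not taken from the paper.

```python
# Hazen-Williams head loss in SI units; 10.67 and C = 130 are common
# textbook values (assumptions, not values from the paper).

def hazen_williams_headloss(q_m3s, d_m, length_m, c_hw=130.0):
    """Friction head loss (m): h = 10.67 * L * Q**1.852 / (C**1.852 * D**4.87).
    The fractional exponents are the source of nonlinearity in the MINLP."""
    return 10.67 * length_m * q_m3s ** 1.852 / (c_hw ** 1.852 * d_m ** 4.87)

# The energy-loss constraint between connected nodes i -> j requires
# head_i - head_j to equal the friction loss of the connecting pipe.
h = hazen_williams_headloss(0.1, 0.4, 1000.0)
print(h)
```

Because the diameter appears with the exponent 4.87, halving the diameter increases the head loss by roughly a factor of 29, which is why the discrete diameter choice dominates the feasibility of a candidate network.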

Figure 4. Optimal operation.

Figure 5. Retentate concentration.

For the optimal design (Figure 3), the processing tank volume is 6250 L, the feed pump power is 8.74 kW, the circulation pump power is 33.89 kW and the 961 modules involve a total membrane area of 233.92 m^2. For the optimal operation (Figure 4), each batch is 5.00 h long, comprising 2.97 h of filtration time followed by 2.03 h of cleaning. This cleaning time is usual in the food industry and involves rinsing, acid cleaning and basic cleaning stages. During the filtration time, the work pressure should be linearly decreased from 258 to 210 kPa. The evolution of the retentate concentration can be seen in Figure 5. The permeate flow rate (Figure 6) decreases due to the decrease in the work pressure and, more importantly, due to membrane fouling. This phenomenon is reflected in the increase in the membrane resistance Rm (Figure 7).

Figure 6. Permeate flow.

Figure 7. Membrane resistance.

Of the total hourly cost of $33.1, 12% corresponds to capital and 88% to operating costs. A detailed breakdown of the total capital cost, in total $204 141, is represented in Figure 8, where the membrane cost is highlighted. Operating costs are $146.46 per batch (Figure 9), the cleaning cost being the most significant percentage.
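The quoted cost split can be checked from the figures given in the text; a quick sketch (batch length and costs as stated above; the annualisation of the capital cost is not reproduced here):

```python
# Consistency check of the quoted cost split (numbers taken from the text;
# capital-cost annualisation is not reproduced here).

batch_length_h = 5.00          # batch duration (h)
operating_cost_batch = 146.46  # $ per batch
total_hourly_cost = 33.1       # $ per hour

operating_hourly = operating_cost_batch / batch_length_h
operating_share = 100 * operating_hourly / total_hourly_cost
print(round(operating_hourly, 2), round(operating_share, 1))
```

The operating cost per hour comes out near $29.3, about 88% of the $33.1 total, consistent with the split quoted in the text.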

Figure 8. Capital cost breakdown (tank, feed pump, circulation pump, membranes).

Figure 9. Operating cost breakdown (feed pump, circulation pump, cleaning).

6. Conclusion This work addresses the optimal design and operation of a batch protein ultrafiltration plant. The dynamic optimisation procedure adopted identifies simultaneously the optimal design parameters and operating policy of the installation. It should be emphasised that the approach can be directly applied to other ultrafiltration processes. This is the first work in which a formal dynamic optimisation methodology is applied to batch ultrafiltration.

7. References

Belhocine, D., Grib, H., Abdessmed, D., Comeau, Y., Nameri, N., 1998, J. Membrane Sci., 142, 159.
Cheryan, M., 1998, Ultrafiltration and Microfiltration Handbook, Technomic, Lancaster.
Furlonge, H.I., Pantelides, C.C., Sørensen, E., 1999, AIChE J., 45, 781.
Georgiadis, M.C., Rotstein, G.E., Macchietto, S., 1998, AIChE J., 44, 2099.
Kvamsdal, H.M., Svendsen, H.F., Hertzberg, T., Olsvik, O., 1999, Chem. Eng. Sci., 54, 2697.
Liu, C. and Wu, X., 1998, J. Biotechnol., 66, 195.
Process Systems Enterprise Ltd., 2001, gPROMS Advanced User Guide, London.
US Filter, 2002, Ultrafiltration Systems for Wastewater Treatment, Palm Desert.
Zeman, L.J. and Zydney, A.L., 1996, Microfiltration and Ultrafiltration: Principles and Applications, Marcel Dekker, New York.



Process Intensification through the Combined Use of Process Simulation and Miniplant Technology

Dr. Frank Heimann, GCT/A - L540, BASF AG, 67056 Ludwigshafen, Germany

Abstract Various approaches to process intensification can be taken at various levels. At the lowest level, it may be possible to optimise basic physical, thermodynamic and chemical processes, for example, by changing geometries and surface structures, or using catalysts, etc. At the next level, possibilities include the use of cleverly designed plant items such as spiral heat exchangers or centrifugal reactors. Finally, at the highest or process level, improvements may involve carrying out several unit operations simultaneously in a single piece of equipment or making specific process modifications. This paper uses three examples (extractive distillation, reactive distillation column, steam injection) to demonstrate the breadth of possibilities for intensifying chemical processes. The processes were all developed with the help of process simulation and verified experimentally in laboratory-based miniplants. The information gained was essential to the successful design and operation of production-scale plant.

1. What is Process Intensification?

Often, when examples are given for process intensification, at first only examples are cited in which several unit operations are combined in one piece of equipment. That process intensification is more complex than this is made clear by the definition of Stankiewicz and Moulijn (2000): process intensification is any chemical engineering development that leads to a substantially smaller, cleaner and more energy-efficient technology. Intensification measures can thus be carried out at different levels which differ in complexity. The bandwidth extends from the simplest level, that of the underlying physical and chemical processes (e.g. improving heat transfer by the choice of geometry), to the next, more complex level of equipment and machines (e.g. intensification by an optimum construction design), and on to the most complex level, the process level, at which several unit operations can be combined in one piece of equipment. The examples listed here cover all levels of complexity. Before these examples are treated in detail, what characterises miniplant technology will be clarified.

2. Characteristics of Miniplant Technology (Heimann, 1998)

Miniplants are complete systems for process development at laboratory scale, i.e. typical volumes lie in the range from 0.5 to at most 5 L. The throughputs are correspondingly small, at approx. 100 g/h to at most 1-2 kg/h. It should be noted here that the miniplant does not represent a true-to-scale, miniaturised copy of a production plant. It is much more the case that the functions of the future production plant are simulated in a representative manner. Operation of a miniplant is generally fully automated using a process control system, in which all of the process steps are integrated. It is especially necessary to simulate all important material recycles (e.g. solvents or catalysts). Ultimately, the miniplant provides all the information necessary for scaling up the production process. Another important aspect of miniplant technology is that the construction design of the equipment and machines is selected in such a way that operation can be carried out under defined process engineering conditions. In this way, modelling and simulation performed in parallel with the experiments is possible, and thus the foundation is created for scaling up the equipment. This will be clarified using column packings as an example.


Figure 1. Separation efficiency with the chlorobenzene/ethylbenzene system (number of theoretical stages per metre versus F-factor, in Pa^0.5, at 50, 100 and 400 mbar).

The photo in Fig. 1 shows a packing at miniplant scale with 5 cm diameter. The separation efficiency of these miniplant column packings is measured with defined test mixtures. The thermodynamic properties of these test systems, e.g. the vapour/liquid phase equilibrium, are known exactly. The separating installations can be calibrated with the use of these test mixtures. The graph in Fig. 1 shows a separation efficiency measurement, in which the number of theoretical stages per metre is plotted against the vapour load in the form of the F-factor. When the column packings are used with actual material systems, reference can be made to this calibration. Miniplant columns with calibrated internals then make it possible to scale the equipment up directly from the miniplant scale to the production scale. This offers the advantage of fast and cost-effective process development. Miniplant technology is thus an ideal tool for experimentally verifying process concepts selected with a view to process intensification. This will be demonstrated using three examples.
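The calibration idea can be sketched as follows. The F-factor definition (superficial vapour velocity times the square root of the vapour density, units Pa^0.5) is standard; the calibration points below are hypothetical, invented purely for illustration.

```python
# Sketch of applying a packing calibration curve like the one in Fig. 1;
# the (F-factor, NTSM) points are hypothetical, not measured data.
import math

def f_factor(vapour_velocity_ms, vapour_density_kgm3):
    """F-factor in Pa**0.5: F = u_G * sqrt(rho_G)."""
    return vapour_velocity_ms * math.sqrt(vapour_density_kgm3)

def stages_per_metre(f, calibration):
    """Linear interpolation in a measured (F-factor, NTSM) calibration table."""
    pts = sorted(calibration)
    for (f0, n0), (f1, n1) in zip(pts, pts[1:]):
        if f0 <= f <= f1:
            return n0 + (n1 - n0) * (f - f0) / (f1 - f0)
    raise ValueError("F-factor outside calibrated range")

calib = [(0.5, 8.0), (1.5, 7.0), (2.5, 5.5)]  # hypothetical calibration points
ntsm = stages_per_metre(f_factor(1.0, 1.0), calib)
print(ntsm)  # 7.5
```

With such a curve, a required separation (number of theoretical stages) and the expected vapour load directly fix the packed height, which is the basis for the direct miniplant-to-production scale-up described in the text.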


3. Examples

3.1. Example "Extractive rectification"

Extractive rectification is an example of process intensification at the process level. In a process chain consisting of reaction, precipitation and centrifuging, a mother liquor arises which contains an alcohol, a chlorinated hydrocarbon (abbreviated CKW) and a by-product formed during the reaction. The mother liquor must be separated into its individual components in the simplest possible way by a central processing unit. This means the two solvents, CKW and alcohol, must be recovered at a high degree of purity in order to recycle them into the process. At the same time, the by-product must be separated from the two solvents. First, using simulation calculations, a simple process concept was sought and developed. The table in Fig. 2 shows some thermodynamic data for the alcohol/CKW solvent system. The boiling temperature increases from the by-product, to the alcohol and then to the chlorinated hydrocarbon. At the same time, it must be noted that the alcohol and the chlorinated hydrocarbon form an azeotrope. This azeotrope prevents the two solvents from being separated in a single rectification column.

The table in Figure 2 lists, for the R-OH/CKW solvent system at 1.013 bar, the compositions (x1, x2 in kg/kg) and boiling temperatures of the pure components and azeotropes; the temperatures range from 68 to 121 °C, and three of the four azeotropes are hetero-azeotropes.

Figure 2. Azeotropic compositions and boiling temperatures of the material system.

Extraction comes into play here. By adding another component, in this case water, three additional hetero-azeotropes form. Because the mixture splits into an aqueous and an organic phase, an additional material separating effect is obtained, which can be used for processing the mixture. This becomes clear from the following diagram (see Fig. 3), which shows the rectification column developed on the basis of the process simulation calculations. According to the boiling temperatures of the three hetero-azeotropes in the column, three side streams can be drawn off and each sent to a phase separator, in which each azeotrope breaks down into an aqueous phase and an organic phase. The organic phases consist of the purified solvents and/or the concentrated by-product. The aqueous phases are each returned to the column. At the column bottom, a waste water stream then arises, which is either disposed of or in part returned to the head of the column.


Figure 3. Extractive rectification column.

Extractive rectification offers another advantage. The chlorinated hydrocarbon can hydrolyse, i.e. split off hydrogen chloride, which leads to corrosion in the column. If aqueous sodium hydroxide solution is used instead of water as the extraction medium, it is possible to neutralise the hydrogen chloride which develops. The process concept was confirmed experimentally in a miniplant column and the foundations were laid for scale-up to production scale. Open questions tested in the miniplant related to the correct description of the vapour/liquid phase equilibria in the simulation, testing of the fluid dynamic behaviour in the presence of the two liquid phases, and the corrosion problems. In the meantime, the production column has been put into operation successfully. The specifications required for the alcohol and the chlorinated hydrocarbon are achieved and no corrosion occurs.

3.2. Example "Reaction column"

In an equilibrium reaction, an educt is converted using aqueous hydrochloric acid as solvent. In this process, acetone is formed in addition to the product. The disadvantage of this reaction is that it is an equilibrium reaction in which the equilibrium lies strongly on the side of the educt. What is advantageous is that the equilibrium state is reached quickly. By removal of acetone, the equilibrium can be shifted towards the product side. The fact that no thermal stress can be put on the product also has to be considered with any process concept selected for removal of the acetone. Among the alternative solutions tested, a reaction column is the most elegant possibility. Since the chemical equilibrium is established very quickly in this example, it


mainly offers the advantage that, in parallel to the reaction, distillative separation can also be carried out. It was possible to prove this process concept experimentally using miniplant technology. A bubble tray column, 30 mm in diameter, was used as the miniplant column. The advantage of the bubble tray column is that there is a hold-up on each tray, so the residence time in the column can be varied by varying the feed flow into the column. In this way, the foundation was laid for the scale-up, using thermodynamic and fluid dynamic simulation, to the production column. This has a diameter of 600 mm and was manufactured of glass due to the corrosive nature of the aqueous hydrochloric acid.

Miniplant: bubble tray column, 30 mm diameter, 25 trays, residence time approx. 4 min per tray; feeds of educt, HCl and water, with acetone/HCl overhead and product/HCl at the bottom. Production: bubble tray column of glass with PTFE bubble caps. Tested in the miniplant: residence time; number of trays and energy consumption; optimum position of the feed tray.

Figure 4. Increase in scale of the reaction column from miniplant to production scale.

In this example of process intensification at the level of equipment, too, the technical realisation was completed successfully. The column has been in operation for two years.

3.3. Example "Steam stripping"

The example explained in the following illustrates process intensification at the level of basic chemical and physical processes. Again, it concerns an equilibrium reaction in which the equilibrium lies strongly on the educt side and a slightly volatile component has to be removed from the system to displace the equilibrium. However, equilibrium is not established spontaneously, so the use of a reaction column is not possible. A conflict therefore arises between achieving a high yield and limiting the duration of thermal stress. Different process technology alternatives for removal of the low-boiling fraction were computer-simulated. Direct injection of steam offers the most favourable option which protects the product.

A problem in scaling down from production to miniplant scale arose in simulating the steam injection, since the volume and the cross-sectional area change by different orders of magnitude. This means it is not possible to keep both the steam load and the steam introduction duration constant when scaling down from production to the miniplant. An elegant solution is to carry out separate experiments regarding the influence of thermal stress duration and the influence of fluid dynamic load. In addition, the question of simulating the steam injection at miniplant scale was of central importance. In order to achieve the finest possible distribution of steam with the greatest steam bubble surface, special steam discharge valves were used, which were constructed for the miniplant on the basis of a fluid dynamic simulation (see Fig. 5). In this example, again, it was possible to verify the process concept experimentally using miniplant technology. This production plant is already successfully in operation.

Production plant: 3 valves of 80 mm diameter with 216 holes of 3.5 mm each; 6.3 m^3 reactor. Miniplant: 1 pipe of 10 mm diameter with 5 holes of 3 mm.

Figure 5. Reduction in scale of the steam discharge from production to miniplant scale.

4. Closing Remarks

Worldwide competition and the necessity of protecting natural resources and minimising environmental stress will continue to play a central role in the development of new processes. The examples presented above show that cost-effective as well as sustainable solutions can be found using process intensification. Miniplant technology is an important tool for quickly and cost-effectively verifying, by experiment, solutions which have been proposed with a view to process intensification.

5. References

Heimann, F., 1998, CIT 9, 1192.
Stankiewicz, A. and Moulijn, J., 2000, Chem. Eng. Progress 1, 22.



A Constraint Approach for Rescheduling Batch Processing Plants Including Pipeless Plants

W. Huang and P.W.H. Chung
Department of Computer Science, Loughborough University, Loughborough, Leicestershire, LE11 3TU, UK

Abstract

In the process industries, batch plants are attracting attention because of their suitability for producing small-volume, high-value-added chemicals. Pipeless plants have also been developed and built to increase plant flexibility. Unexpected events, such as the failure of a processing unit, sometimes happen during operations. To avoid risk and to utilise the remaining resources, it is important to reschedule the production operation quickly. The constraint model in BPS has been extended to include constraints for rescheduling. These additional constraints are described in this paper and a case study is used to demonstrate the feasibility of the approach.

1. Introduction

Efficient scheduling of batch plants is required since it harmonizes the entire plant operation to achieve the production goals. The scheduling of batch plants is challenging, especially for pipeless batch plants, where the plant layout has to be considered as well. Many researchers addressing these issues use the Mixed Integer Linear Programming (MILP) approach, in which an elaborate mathematical model is required to describe a problem. Kondili et al. (1993) suggested a general algorithm for short-term batch scheduling formulated in MILP using a discrete time representation. Pantelides et al. (1995) presented a systematic and rigorous approach for short-term scheduling of pipeless batch plants. However, as the complexity of a plant increases, scheduling problems become harder to formulate in MILP. The Constraint Satisfaction Technique (CST) has been used to solve problems ranging from design to scheduling. CST does not require elaborate mathematical formulae but requires a problem to be stated in terms of its constraints. Das et al. (1998) investigated a simple but typical production scheduling problem and found it possible to develop a CST-based scheduling solution within very modest computation time. Huang and Chung (1999) developed a constraint model to represent a common class of traditional batch plant scheduling problems, and a simple scheduling system, Batch Processing Scheduler (BPS), was produced based on this model. Das et al. (2000) compared the approach developed by Huang and Chung (1999) with established mathematical programming approaches and concluded that it is relatively easy to represent complicated batch plant scheduling problems using the constraint-based approach. Huang and Chung (2000) proposed a constraint model to represent scheduling problems in pipeless batch plants and improved the scheduling system BPS accordingly. Unexpected events, such as the failure of a processing unit, sometimes happen during plant operations.
These events will make the original schedule invalid. To avoid risk and to utilise the remaining resources, it is important to reschedule the production operations quickly. However, few papers have reported on the rescheduling of chemical batch plants. Ko et al. (1999) proposed a rescheduling approach for pipeless


plants. Their system can overcome unexpected events by adjusting the starting times of reactions and re-determining the sequence of equipment to be processed. Although the paper took plant layout into account, transportation time was ignored, which means the generated schedule may not be feasible in practice. This paper reports on the rescheduling capability as an extension to BPS. The extended constraint model can be applied to solve rescheduling problems for both traditional and pipeless batch plants, where the transfer time between stations is considered.

2. Constraint Model for Rescheduling

2.1. A typical pipeless plant process

The constraint model for rescheduling is described with the help of a typical pipeless plant process. The production process is shown in Fig. 1. Circles and rectangles denote states and jobs respectively. The process starts with clean vessels being charged with A and B in appropriate amounts. The charged vessels are then taken to a blender, where the content is homogenized in a blending operation to form material AB. Following this, AB reacts with a third raw material C to form another intermediate material, Int. Three final products P1, P2 and P3 are formed by blending Int. with three different additives A1, A2 and A3 respectively. The corresponding products P1, P2 and P3 are discharged through a discharge station and the empty dirty vessels must be cleaned before they can be used again. Finally, the clean vessels move back to the start point to wait for A to be charged. The plant layout is shown in Fig. 2.

Fig. 1: Production processes of a pipeless batch plant.

2.2. Rescheduling constraints

Unexpected events can be formulated as additional constraints on the original problem. The goal of rescheduling is to find a feasible solution such that what has been done before the failure is not changed and the broken-down resource is not used during its down time. In order to achieve this, the failed resource is allocated to a "breakdown" activity in BPS.

F_f → BA,  ST(BA) = T_s and ET(BA) = T_e      (1)

where BA and F_f represent the "breakdown" activity and the failed resource respectively, and T_s and T_e represent the start and end times of the failure period of the broken-down resource. The above formula shows that the failed resource is allocated to the "breakdown" activity to ensure that other jobs cannot use it; the activity's start and end times (ST and ET) are equal to T_s and T_e respectively.

If ET(J_i) < ST(BA) then ST*(J_i) = ST(J_i), ET*(J_i) = ET(J_i) and Sp*(J_i) = Sp(J_i)      (2)

This formula means that if, in the original solution, a job ended before T_s, which is equal to the start time of the "breakdown" activity introduced in rescheduling the problem, the start and end times of the job as well as the selected resource remain unchanged. ST*(J_i) and ET*(J_i) represent the job's start and end times, and Sp*(J_i) the selected resource, in the rescheduled solution. Essentially, the above constraints "freeze" the part of the original schedule that has already happened.
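Constraints (1) and (2) amount to blocking the failed resource and freezing the jobs that finished before the failure. A minimal sketch of this bookkeeping (plain Python, not the actual BPS constraint engine; the schedule representation and all names are hypothetical):

```python
def apply_breakdown(schedule, failed_resource, t_s, t_e):
    """Split an original schedule into a frozen part and jobs to reschedule.

    schedule: list of dicts with keys 'job', 'start', 'end', 'resource'.
    A job that ended before the failure start t_s keeps its start, end and
    resource (constraint 2); every other job must be rescheduled subject to
    the "breakdown" activity occupying failed_resource over [t_s, t_e]
    (constraint 1).
    """
    breakdown_activity = {"job": "BA", "start": t_s, "end": t_e,
                          "resource": failed_resource}
    frozen = [j for j in schedule if j["end"] < t_s]          # unchanged part
    to_reschedule = [j for j in schedule if j["end"] >= t_s]  # re-solved by the model
    return frozen, to_reschedule, breakdown_activity

# Tiny example: resource R1 fails from t=5 to t=8.
orig = [{"job": "J1", "start": 0, "end": 4, "resource": "R1"},
        {"job": "J2", "start": 4, "end": 7, "resource": "R1"},
        {"job": "J3", "start": 2, "end": 6, "resource": "R2"}]
frozen, redo, ba = apply_breakdown(orig, "R1", 5, 8)
# J1 is frozen; J2 and J3 overlap the failure onset and must be re-solved.
```

The jobs in the second list are then re-solved by the constraint model, with the breakdown activity excluding the failed resource during its down time.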

If ST(BA) < ET(J_i) ≤ ET(BA), then either ET(BA) ≤ ST*(J_i), or Sp*(J_i) ≠ F_f.

y_SGA + y_FGA ≤ 1      (10)
y_SGA + y_SGC + y_FGB ≤ 1      (11)
y_SGA + y_SGC + y_S80B ≤ 1      (12)
y_SGA + y_SGC + y_P37 ≤ 1      (13)
y_SGA + y_SGC + y_E211A ≤ 1      (14)

The generation of electricity (y_SGD) includes the condensation turbine (y_tur):

y_SGD − y_tur ≥ 0      (15)
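Constraint (15) is the usual linear encoding of an implication between binary variables: selecting the turbine (y_tur = 1) forces selection of the electricity-generating boiler (y_SGD = 1). A brute-force check over the four binary combinations (plain Python):

```python
from itertools import product

# y_sgd - y_tur >= 0 forbids exactly the combination (y_sgd=0, y_tur=1),
# i.e. it encodes the implication: y_tur = 1  =>  y_sgd = 1.
feasible = [(y_sgd, y_tur)
            for y_sgd, y_tur in product((0, 1), repeat=2)
            if y_sgd - y_tur >= 0]
# feasible -> [(0, 0), (1, 0), (1, 1)]
```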

The MINLP model uses the additional annual profit of heat and power integration as its criterion. The additional annual income of integration sums the savings of fuel, 5 bar steam, 8 bar steam, cooling water and 37 bar steam, plus the electricity production. In the model the existing areas (A_HE,ex) can be used by enlarging them with additional areas (A_HE,add). The additional annual depreciation of enlarged and new areas (A_HE,new) of heat exchangers and piping (Table 1) is multiplied by the payback multiplier (r = 0,2448) to obtain the maximum annual profit of heat and power integration:

Max annual PROFIT = C_fuel·[Φ_FG·(y_FGA + y_FGB) + Φ_S80·(y_S80A + y_S80B)]
  + C_5·[Φ_E10A + Φ_E10B + Φ_E10C + Φ_E5·(y_E5A + y_E5B + y_E5C) + Φ_E6·(y_E6A + y_E6B)]
  + C_8·Φ_E204·y_E204 + C_cw·[Φ_E211·(y_E211A + y_E211B) + Φ_E210A + Φ_E210B] + C_el·P_tur·y_tur
  − [Σ(670·A_HE,add^0,83)·1,8 + Σ(8600 + 670·A_HE,new^0,83)·1,8
  + C_d,tur + C_d,pum + C_pip·(Σ_i(y_i,A + y_i,B + y_i,C) + Σ_j(y_j,A + y_j,B + y_j,C))]·r      (16)

The simultaneous heat and power integration as optimized by MINLP selects the generation of electricity using the high pressure turbine (40 bar, efficiency 82 %, T_tur,in = 500 °C, T_tur,out = 125 °C, p_tur,out = 2 bar) and the boiler MSG-TUR in the retrofitted methanol plant (y_SGD = y_tur = 1; grey heat exchanger in Fig. 1). The condensers K1-FE10 and K2-FE5 transfer their total heat flow to the nonretrofitted formalin plant (y_E10C = y_E5C = 1). The flue gas is integrated with the process stream in the nonretrofitted refinery process (y_FGB = y_S80B = 1; RFG-RS80). New or additional areas of heat exchangers are: MSG-TUR (with 2096 kW) of 8,7 m²; K1-FE10 (with 1464,4 kW, T_K1,out = 117,7 °C) of 85,7 m²; K2-FE5 (with 139,2 kW) of 8,5 m²; RFG-RS80 (with 667,5 kW) of 57,5 m².

The structure enables the generation of 496,6 kW of electricity and savings of 1603,6 kW of 5 bar steam and 2·667,5 kW of fuel. The additional annual depreciation of the high pressure turbine, pump, insulation piping, and new and additional heat exchanger areas is 176,8 kUSD/a. The additional annual income from electricity production and the savings of fuel and 5 bar steam is 480 kUSD/a. The additional profit of the integration is estimated to be 303,2 kUSD/a.
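The reported profit follows directly from the income and depreciation figures; as a quick check (values taken from the text, in kUSD/a):

```python
# Additional annual income: electricity production plus fuel and 5 bar
# steam savings (kUSD/a), as reported in the text.
income = 480.0
# Additional annual depreciation: turbine, pump, piping and heat
# exchanger areas (kUSD/a).
depreciation = 176.8
profit = income - depreciation
# profit is approximately 303.2 kUSD/a, matching the reported figure
```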

Table 1: Cost data for example processes.

Installed cost of heat exchanger (USD)^a: (8 600,0 + 670·A^0,83)·3,6
Cost of electricity (C_el)^b: 17,06 USD/GJ
Cost of 37 bar steam (C_37)^b: 4,17 USD/GJ
Cost of 8 bar steam (C_8)^b: 2,95 USD/GJ
Cost of 5 bar steam (C_5)^b: 2,60 USD/GJ
Cost of cooling water (C_cw)^c: 0,40 USD/GJ
Cost of fuel (C_fuel)^d: 4,43 USD/GJ
Cost of insulation piping (C_pip)^e: 2380,0 USD

^a Ahmad (1985), A = area in m²; ^b Tjoe and Linnhoff (1986); ^c Swaney (1989); ^d Perry (1974); ^e Ciric and Floudas (1989).

4. Conclusions
We have extended the simultaneous integration method to exchange waste heat between processes with the simultaneous generation of electricity using a steam turbine. Simultaneous heat and power integration between processes can be performed using the MINLP algorithm, in which alternatives for heat transfer between several existing nonretrofitted or retrofitted processes can be included. We have carried out simultaneous heat and power integration of four existing, nontrivial plants. The objective function maximised the annual profit, at 303,2 kUSD/a.

5. References
Ahmad, S., 1985. Heat exchanger networks: Cost tradeoffs in energy and capital. Ph.D. thesis, University of Manchester, Manchester, 113-306.
Ahmad, S. and Hui, C.W., 1991. Heat recovery between areas of integrity. Computers chem. Engng, 15, 809-832.
Bagajewicz, M.J. and Rodera, H., 2000. Energy savings in the total site. Heat integration across many plants. Comput. chem. Engng, 24, 1237-1242.
Biegler, L.T., Grossmann, I.E. and Westerberg, A.W., 1997. Systematic methods of chemical process design. Prentice Hall PTR, Upper Saddle River, New Jersey.
Ciric, A.R. and Floudas, C.A., 1989. A retrofit approach for heat exchanger networks. Comput. chem. Engng, 13/6, 703-715.
Kovač Kralj, A., Glavič, P. and Kravanja, Z., 2000. Retrofit of complex and intensive processes II: stepwise simultaneous superstructural approach. Comput. chem. Engng, 24/1, 125-138.
Kovač Kralj, A., Glavič, P. and Krajnc, M., 2002. Waste heat integration between processes. Applied Thermal Engng, 22, 1259-1269.
Perry, R.H., 1974. Chemical engineer's handbook. McGraw-Hill, New York, 25-19.
Rudman, A. and Linnhoff, B., 1995. Process integration: planning your total site. Chemical Technology Europe, January/February.
Swaney, R., 1989. Thermal integration of processes with heat engines and heat pumps. AIChE Journal, 35/6, 1010.
Tjoe, T.N. and Linnhoff, B., 1986. Using pinch technology for process retrofit. Chem. Engng, 28, 47-60.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Integration of Process Modelling and Life Cycle Inventory. Case Study: i-Pentane Purification Process from Naphtha
L. Kulay(1), L. Jimenez(2), F. Castells(2), R. Banares-Alcantara(2) and G.A. Silva(1)
(1) Chem. Eng. Dept., University of Sao Paulo, Av. Prof. Luciano Gualberto tr.3 380, 05508-900 Sao Paulo, Brazil. E-mail: {luiz.kulay, gil.silva}@poli.usp.br
(2) Chem. Eng. Dept., University Rovira i Virgili, Av. Paisos Catalans 26, 43007 Tarragona, Spain. E-mail: {ljimenez, fcastell, rbanares}@etseq.urv.es

Abstract
A framework, based on a life cycle approach, is proposed for the assessment of the environmental damage generated by a process chain. To implement it, a methodology was developed based on the integration of process modelling and environmental damage assessment that considers all the processes of the life cycle. This integration is achieved through an eco-matrix formed by eco-vectors containing the most relevant environmental loads. To verify the methodology, a case study on the deisopentaniser plant of REPSOL-YPF (Tarragona, Spain) has been carried out. The environmental profile of the alternative scenarios improves when co-generation and heat recovery are considered.

1. Introduction
Modern society demands a constant improvement in the quality of life, and one of the duties of the administration is to guarantee a better environment. In this context, the chemical process industries are under increasing pressure to operate cleaner processes. To achieve this goal, environmental aspects and the impact of emissions have to be considered in the design of any project using one of the procedures already developed [ISO, 1997]. Life Cycle Assessment (LCA) is the most common tool for the evaluation of the environmental impact of any industrial activity. LCA is a chain-oriented procedure that considers all aspects related to a product during its life cycle: from the extraction of the different raw materials to its final disposal as a waste, including its manufacture and use. According to ISO 14040 [ISO, 1997], LCA consists of four steps: goal and scope definition, inventory analysis, impact assessment and interpretation. The LCA identifies and quantifies the consumption of material and energy resources and the releases to air, water and soil based upon the Life Cycle Inventory (LCI). The procedure as applied to chemical processes has been described by Aelion et al. (1995). The results from the LCI are computed in terms of environmental impacts, which allow the establishment of the environmental profile of the process. For environmental assessment, the application of potential impacts is restricted to the estimation of global impacts. For example, the amount of CO2 released is used as an indicator of climate change due to its global warming potential. One kilogram of CO2 generated by an industrial process in any of the different stages of a product life cycle


contributes equally to the climate change. However, this is not the case for site-dependent impacts, such as the potential impact of acidification measured as H+ release. Unfortunately, the LCA does not accommodate site-specific information on the different process emissions. To include it, weighting factors across the system boundaries have to be selected, a task which is beyond the objective of this work [Sonnemann et al., 2000]. For this reason, a methodology that includes environmental aspects in the analysis of processes has been developed. Applying the LCA perspective to different scenarios for electricity generation and steam production provides key information to decision makers at a managerial and/or political level.

2. Methodology
This section describes the proposed methodology to evaluate the environmental impact of a chemical industrial process chain in the most accurate way possible. It includes a procedure to compute the LCI based on the concept of eco-vectors [Sonnemann et al., 2000]. Each process stream (feed, product, intermediate or waste) has an associated eco-vector whose elements are expressed as Environmental Loads (EL, e.g. SO2, NOx) per functional unit (ton of main product). All input eco-vectors, corresponding to material or energy streams, have to be distributed among the output streams of the process (or subsystem). In this sense, a balance of each EL of the eco-vector can be stated similarly to the mass balance (input_i = output_i + generation_i). This is the reason why all output streams are labelled as products or emissions. The eco-vector has negative elements for the pollutants contained in streams that are emissions and/or waste. Figure 1 illustrates these ideas for an example of a chain of three processes that produces a unique product. The proposed procedure associates inventory data with specific environmental impacts and helps to understand the effect of those impacts on human health, natural resources and the ecosystem.
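The eco-vector bookkeeping described above behaves like element-wise vector addition along the process chain. A minimal sketch (plain Python; the load names and values are illustrative, not plant data):

```python
def add_ecovectors(*vectors):
    """Element-wise sum of eco-vectors (environmental loads per functional unit)."""
    total = {}
    for vec in vectors:
        for load, value in vec.items():
            total[load] = total.get(load, 0.0) + value
    return total

# Illustrative chain of three processes: each stage adds its own loads
# (per ton of main product) to the eco-vector inherited from upstream.
raw_material = {"CO2": 0.10, "SO2": 0.01}
process_1 = {"CO2": 0.25, "NOx": 0.02}
process_2 = {"CO2": 0.05}
product_vec = add_ecovectors(raw_material, process_1, process_2)
# product_vec["CO2"] is approximately 0.40: loads accumulate like a mass balance
```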

3. Problem Statement
The methodology has been applied to the debutaniser and depentaniser columns processing a naphtha mixture in the REPSOL-YPF refinery (Tarragona, Spain). The process PFD is shown in Figure 2. The first column is fed with a naphtha stream rich in C4 (≈ 28.3 ton/h). This unit is a debutaniser and removes n-butane and lighter components (≈ 0.50 ton/h). Perfect separation is not achieved, since capital investment must be balanced against operating costs to arrive at an acceptable economic payout. As a result, it is more convenient to think of the debutaniser as having a cut point between n-butane and i-pentane, the latter being removed as the top product of the second column (≈ 16.3 ton/h). The intermediate naphtha input stream (C5-rich naphtha, ≈ 71.5 ton/h) comes from another plant in the same refinery. Production under design conditions is 83.0 ton/h. Proper understanding of recovery in both columns can improve refinery economics, due to the downstream effects of light components. The plant has four heat exchangers, two of which (HX-1 and HX-3) recover process heat. Both condensers are air cooled, and thus the plant utilities are electricity and steam. The production of these two utilities consumes additional natural resources and generates additional releases to the environment, and thus they were included.
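As a quick consistency check, the stream figures quoted above close the overall mass balance of the two-column train (plain Python; values taken from the text, in ton/h):

```python
# Feeds to the debutaniser/depentaniser train (ton/h)
c4_rich_naphtha = 28.3
c5_rich_naphtha = 71.5
# Products (ton/h): debutaniser tops, depentaniser tops (i-pentane),
# and the naphtha production under design conditions.
debutaniser_tops = 0.50
i_pentane_tops = 16.3
naphtha_product = 83.0

feed_total = c4_rich_naphtha + c5_rich_naphtha
product_total = debutaniser_tops + i_pentane_tops + naphtha_product
# feed_total and product_total are both about 99.8 ton/h: the quoted
# design figures are consistent with an overall mass balance.
```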

4. Results
The LCI was computed using process simulation as a support tool. This approach is appropriate both for the design of new processes and for the optimisation of existing ones. The use of process simulators to obtain the LCI guarantees a robust approach that

allows LCA to exploit their advantages in terms of availability of information, and reduces the uncertainty associated with data in the early phases of design. However, we can expect that, on a long-term perspective, relative and uncertain values are valid when comparing among alternatives. The models for the naphtha plant, the electricity generation, the steam production and the heat recovery system were developed using Hysys.Plant®, and were validated using plant data. To build accurate models for all alternatives is not practical, and thus the models were reused for the different alternatives considered. The key simulation results were transferred to a spreadsheet (Microsoft® Excel) through macros programmed in Visual Basic™. Despite the fact that emissions are produced at different locations (e.g. those related to extraction, transport and refining), the eco-vector has a unique value for each stream, i.e. it does not consider site-dependent impacts. The eco-vectors associated to all the inputs and outputs of the process are computed per ton of product (i-pentane). The aspects included in the eco-vector were divided into two categories:
> Generated waste: in air (CO2, SO2, NOx, and VOC, estimated as fugitive emissions), wastewater (chemical oxygen demand, COD) and solid wastes (particulate matter and solids).

Figure 1. Life cycle inventory analysis according to the eco-vector principle.

Figure 2. Simplified PFD of the REPSOL-YPF plant.

> Consumption of natural resources: depletion of fossil fuels (fuel oil, gas oil, carbon, natural gas and oil), consumption of electricity and water.
The plant consumes medium-pressure steam, while electricity generation and steam

188

production may use high or low pressure steam. The eco-vectors that correspond to these streams are also considered. The environmental loads of the process inputs were retrieved from the ETH Report [Frischknecht et al., 1996] and the TEAM™ database [TEAM, 1998]. The use of different scenarios allows the comparison among alternatives. The scenarios were chosen based on the source of steam and the generation of electricity (Table 1). Three of them focus on the environmental impacts of the original process (scenarios VI, VII and VIII), where changes related to the production of steam were compared. All other cases compare alternatives for a possible future implementation, e.g. those considering co-generation to produce electricity. For each of the scenarios the eco-vector was divided into the three different processes: steam production, electricity generation and naphtha plant. As an example, the eco-vectors of scenario III are shown in Table 2. The results indicate that:
• To reduce the CO2 and the COD we have to focus on the production of steam. For scenarios VI, VII and VIII the electricity generation also has a certain impact (≈ 3 to 29%).
• To decrease the SO2, changes should be made in the production of steam and/or in the generation of electricity (Figure 3). The scenarios that include co-generation radically minimise this value.
• NOx, VOC and solid wastes are produced completely by the generation of electricity.
• H2O consumption is mainly due to steam production. As expected, heat integration allows the reduction of this amount by 91%.
Results (Figure 4a) show that scenarios VII, VIII and, to some extent, scenario V concentrate most of the consumption of fossil fuels, while the best alternatives in terms of water consumption are scenarios III and IV. As expected, heat recovery has a great impact on the results of scenarios III, IV, VI and, to a lesser extent, scenario VIII.
If cases III and VIII are compared, the impact of co-generation on the ELs is easily detected. Concerning the consumption of natural resources, the best alternative is scenario III (co-generation, downgrading of steam and heat recovery). In terms of atmospheric releases (Figure 4b), the best options are scenarios III and IV. On the contrary, the most significant impacts were observed in scenarios VII and VIII. Nevertheless, the releases of NOx and SO2 (scenario V) and VOCs (scenario VIII) must be highlighted.

Table 1. Main characteristics of the scenarios considered.

Scenario   Electricity generation             Steam production
I          Co-generation                      Generation of steam
II         Co-generation                      Expansion of steam
III        Co-generation                      Generation + heat recovery
IV         Co-generation                      Expansion + heat recovery
V          Expansion of steam in a turbine    Fuel oil & fuel gas burning
VI         Spanish energy grid                Fuel oil & fuel gas burning + heat recovery
VII        Spanish energy grid                Fuel oil & fuel gas burning
VIII       Spanish energy grid                Generation + heat recovery


Table 2. Eco-vectors for scenario III (per ton of i-C5), split among steam production, electricity generation and plant operation. Inputs: natural gas, water, electricity, high and medium pressure steam; outputs: electricity, high and medium pressure steam; atmospheric emissions: CO2, SO2, NOx and VOC; particulate matter; liquid effluents: COD.

Figure 3. Comparison of the SO2 generation. (a) Scenario VI; (b) Scenario VII; (c) Scenario VIII.

With respect to wastewater generation, there are a few scenarios with low impact (III, IV, VI and VIII) while the rest exhibit very similar values. If all aspects are analysed simultaneously, the best alternatives are scenarios III and IV, while the worst one is scenario VII. It is noteworthy that all environmental loads considered in the eco-vector have to be balanced to reach a compromise, as their impacts on the ecosystem and human health differ widely. Also, note that some of the impacts are local (e.g. steam production), while others are distributed over different regions (e.g. extraction, external electricity generation), even though the LCA approach does not allow differentiation among them.

Figures 4a and 4b. Percentage of the impact on different Environmental Loads for each scenario. (a) Raw materials consumed; (b) Emissions.

5. Conclusions Significant progress in the integration of environmental aspects with technical and economic criteria has been achieved to date, although limitations still exist due to the uncertainty of the available data. The proposed methodology shows that the use of process simulators to obtain the LCI guarantees a robust approach. Furthermore, the methodology provides valuable information to compare alternatives for future implementation by assessing and preventing environmental impacts. This study will be extended with the application of models to predict the damage on human health, natural resources and the ecosystem. For the case study two different types of environmental profile can be identified (scenarios I-IV and scenarios V-VIII). The use of co-generation to produce electricity decreases the total damage, as its relative impact is lower than the one resulting from the use of the Spanish electricity grid.

6. References
Aelion, V., Castells, F. and Veroutis, A., 1995, Life cycle inventory analysis of chemical processes, Environ. Prog., 14 (3), 193-195.
Frischknecht, R., Bollens, U., Bosshart, S. and Ciot, M., 1996, ETH report, Zurich, Switzerland.
ISO 14040, 1997, Environmental management. Life cycle assessment. Principles and framework, ISO, Geneva, Switzerland.
Sonnemann, G.W., Schuhmacher, M. and Castells, F., 2000, Framework for environmental assessment of an industrial process chain, J. Haz. Mat., 77, 91-106.
TEAM®, 1998, Ecobilan Group, Paris, France.

7. Acknowledgements One of the authors (L. Kulay) wishes to thank CAPES (Ministry of Education of Brazil) for the financial support. We also acknowledge the cooperation of REPSOL-YPF, and Hyprotech (now part of Aspentech) for the use of an academic license of Hysys.Plant®.



Superstructure Optimization of the Olefin Separation Process
Sangbum Lee, Jeffery S. Logsdon*, Michael J. Foral* and Ignacio E. Grossmann
Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; *BP, Naperville, IL 60563, USA

Abstract
The olefin separation process involves handling a feed stream with a number of hydrocarbon components. The objective of this process is to separate each of these components at minimum cost. We consider a superstructure optimization for the olefin separation system that includes several technologies for the separation tasks, as well as compressors, pumps, valves, heaters, coolers and heat exchangers. We model the major discrete decisions for the separation system as a generalized disjunctive programming (GDP) problem. The objective function is to minimize the annualized investment cost of the separation units and the utility cost. The GDP problem is reformulated as an MINLP problem, which is solved with the Outer Approximation (OA) algorithm available in DICOPT++/GAMS. The solution approach for the superstructure optimization is discussed and numerical results of an example are presented.

1. Introduction
The olefin process involves a number of steps for producing and separating hydrocarbon components consisting of hydrogen and C1-C5 components. We address the optimization of the separation system, where the goal is to select a configuration of separation tasks and their corresponding units, as well as pressure and temperature levels in order to perform heat integration. The objective is to minimize the total annualized cost of the separation system. Figure 1 shows the superstructure of the olefin separation system. There are a number of states and separation tasks. The white boxes represent sharp split separations and the shaded boxes represent non-sharp split separations. We consider 8 components in the separation system: hydrogen, methane, and the C2-C5 components. Since we are mainly concerned with the recovery of ethylene and propylene, we assume that the C4 mixture and the C5 mixture can each be treated as a single component. As shown in Figure 1, there are 25 states, including final products, and 53 separation tasks. Non-sharp split separations have intermediate components which appear in both the top and bottom products. For each separation task, there is a subset of available technologies. Table 1 shows the 7 separation technologies considered in the separation process. The dephlegmator is a separation unit where heat exchange and mass transfer take place at the same time. The cold box is a cryogenic separation unit based on the Joule-Thomson effect. Each separation task can be performed by a number of separation technologies, which are selected based on the components involved in the feeds.


Figure 1: Superstructure of separation system.

Table 1: Separation technologies.
T1  Distillation column
T2  Physical absorption tower
T3  Membrane separator
T4  Dephlegmator
T5  Pressure Swing Adsorption (PSA)
T6  Cold box
T7  Chemical absorption tower

2. GDP Model
We propose a generalized disjunctive programming model for optimizing the superstructure of the separation system shown in Figure 1 (see Yeomans and Grossmann, 1999a, 1999b). The first level in the embedded disjunction corresponds to the selection of the separation task. Once the separation task is selected, the second-level disjunction selects the separation technology. For example, if a distillation column is chosen, then the mass and energy balances for the distillation column are enforced and the corresponding cost term is considered. An additional disjunction handles heat integration for the distillation columns, and another handles compression, pumping or pressure reduction of each state. For the separation units, simple mass/energy balances are used. The assumptions for modeling the separation system are as follows:
1) Vapor pressure of the stream is calculated with Raoult's law and the Antoine equation (Reid et al., 1977)
2) Utility (cooling water/hot steam) cost is given as a function of temperature (see Figure 2)
3) Investment cost is given by concave cost functions (Douglas, 1988)
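Assumption 1 (vapor pressure from the Antoine equation, combined with Raoult's law for the mixture) can be sketched as follows. The Antoine coefficients below are illustrative placeholders, not values from the paper or from Reid et al. (1977):

```python
def antoine_psat(T, A, B, C):
    """Antoine vapor pressure: log10(Psat) = A - B / (T + C)."""
    return 10.0 ** (A - B / (T + C))

def bubble_pressure(x, coeffs, T):
    """Raoult's law bubble pressure of an ideal mixture: P = sum_i x_i * Psat_i(T)."""
    return sum(xi * antoine_psat(T, *c) for xi, c in zip(x, coeffs))

# Illustrative two-component mixture (hypothetical Antoine coefficients;
# T in K, pressure in whatever unit the coefficients imply).
coeffs = [(4.0, 1000.0, -40.0), (4.2, 1200.0, -50.0)]
x = [0.4, 0.6]
P = bubble_pressure(x, coeffs, 300.0)
# P lies between the two pure-component vapor pressures at 300 K
```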

Based on these assumptions, the following nonconvex GDP model is constructed:

Indices
i   states
k   distillation columns
s   separation tasks
st  separation technologies

Sets
I     states
K     distillation columns
S_i   separation tasks s for state i
ST_s  separation technologies st for task s

Parameters
EMAT  minimum temperature difference
CR_U  upper bound for compression ratio
CR_L  lower bound for compression ratio

Variables
T_i      temperature of state i
P_i      pressure of state i
x_i      flowrate of state i
IC_i     investment cost for separation of i
CC_i     compressor cost for state i
UC_i     utility cost for separation of i
YS_i,s   selection of separation task s for state i
YT_s,st  selection of separation technology st for task s
YZ_i,k   selection of heat integration of state i with column k
YC_i     selection of compression for state i
RT_i     top recovery ratio of state i
RB_i     bottom recovery ratio of state i
QEX_i,k  heat transferred from state i to distillation column k
Q_i      heat generated or consumed by state i
T_i^c    condenser temperature in the distillation column for i
T_k^r    reboiler temperature in distillation column k

Model Olefin 1:
a) Minimize the annualized cost of capital investment, compression and utility:

min Z = Σ_i (IC_i + CC_i + UC_i)

b) Overall mass balances:

s.t.  A·x = 0

c) Pressure and temperature calculation by the Antoine equation:

P_i = f_a(T_i),  ∀i ∈ I

d) Embedded disjunction for the separation task:

∨_{s∈S_i} [ YS_i,s :
  x_i^top = RT_i·x_i^feed,  x_i^bot = RB_i·x_i^feed ;
  ∨_{st∈ST_s} [ YT_s,st :
    mass balance: f_m(x_i) = 0
    energy balance: f_e(x_i, T_i, P_i, Q_i) = 0
    cost function: (IC_i, UC_i) = f_c(x_i, T_i, P_i, Q_i) ] ],  ∀i ∈ I

e) Disjunction for the heat integration:

[ YZ_i,k : T_i^c ≥ T_k^r + EMAT,  QEX_i,k ≥ 0 ]  ∨  [ ¬YZ_i,k : QEX_i,k = 0 ],  ∀i ∈ I, ∀k ∈ K

(IC_i, UC_i) = f(x_i, T_i, P_i, QEX_i,k)

f) Disjunction for the compressors/pumps:

[ YC_i : (T_i, P_i)_out = f_p((T_i, P_i)_in),  CR_L ≤ P_i,out / P_i,in ≤ CR_U ]

...and finding a point with the property of isovolatility with x_B = ε. Then the corresponding F/V ratio can be determined from this point lying on some extractive profile [y = (F/V + 1)x − (F/V)x_F], thus (F/V)_min = [y*_A(x) − x_A]/[x_A − x_F,A]. In this way the SN criterion (y = y*) is not used. Both methods were tried for x_D = [0.94; 0.025; 0.035], P = 760 torr, V = 48 mmol/s, with different trial values of ε, giving the same results: F_min = 6.133 mmol/s (ε = 0.01).

4.2. Minimum stage numbers (N_min,extr and N_min,rect)
(x_D,A)_min should be specified for determining the minimum stage numbers. A stage number N_min is minimal at R = ∞ and given F/V if x_D,A ≥ (x_D,A)_min at N_min and x_D,A

...φ > 0 is some positive function, while d = 1 and s = 1,...,n for the upstream, and d = 2 and s = 1,...,m for the downstream units. The amount of the material transferred by the s-th unit during an operation period is

Fig. 1. Characteristic time intervals of the process with equipment failures: moments of the 1st and 2nd failures, and the 1st and 2nd failure cycles.

Figure 5a. STN. Figure 5b. m-STN. Figure 5c. RTN.

Solving the motivating example using the three representations explored above and the data in Table 1, the results obtained are shown in Table 2. These correspond to a 0% margin of optimality and to an objective function of 1248,54 × 10³ monetary units, where units R1, V3, C1 and C3, each with 80 units of capacity, were chosen.

Table 1. Capacities and equipment cost.

Unit                       V1     V2     V3       R1       C1     C2     C3
Capacity [u.m] max:min     unl.   unl.   unl.     200:0    200:0  200:0  200:0
Cost [10³ c.u.] fix;var    0      0      1;10⁻³   10;10⁻³  10⁻³   10⁻³   10⁻³

(c.u. = currency units; u.m = mass units; unl. = unlimited)

Analysing the model statistics, it can be seen that the m-STN representation results in a smaller problem, both in terms of variables and constraints, followed by the STN and the RTN. As for the general versus the adapted STN, the latter results in a smaller model. The model statistics influence the CPU times, as can be seen in Table 2. The m-STN is solved most quickly (0.234 s), followed by the STN (0.312 s) and finally the RTN (0.359 s). However, the differences are not very marked.

Table 2. Computational data.

Methodology     N° Variables   N° Binary   N° Constraints   CPU time (s)   LPs
STN - general        137           48           227             0.313        6
STN - adapted         98           36           176             0.312        5
m-STN                 75           24           122             0.234        7
RTN                  364          127           511             0.359       16
LPs 6 5 7 16

STN - general (dedicated storage model as a task); adapted (state/unit allocation) 2.2. Example 2 Using an example proposed by Barbosa-Povoa and Macchietto (1994), the above representations are again explored. In here, a plant must be designed at a maximum profit so as to produce three final products, S4, S5 and S6, with production capacities between [0:80] ton for S4 and S5, and [0:60] for S6, from two raw materials, SI and S2. The process operates in a non-periodic mode over a time horizon of 12 hours. The results in terms of model statistics are shown in Table 3. The problem modelled through the mSTN presents the smallest statistics followed by the STN-adapted and finally the RTN. The same behaviour is observed when analysing the computational times associated.


These facts indicate that, again, the m-STN representation appears to be the most adequate for the modelling and solution of the detailed design of batch plants.

Table 3. Computational data.

Methodology     N° Variables   N° Binary   N° Constraints   CPU time (s)   LPs
STN - general        974          358          1743            8.734       570
STN - adapted        794          298          1503            4.469       523
m-STN                600          116           918            4.187       522
RTN                 2615         1018          3662          123.765      5029
3. Conclusions
This paper discusses the applicability of the STN, m-STN and RTN representations to the detailed design of batch plants, assuming a discretisation of time and a non-periodic operation mode. The main differences identified concern three related aspects: the need to explicitly consider storage tasks, which account for the continuous availability of material and the usage of suitable equipment; the need to consider the different locations of material in the plant and, consequently, to define material transfers with the associated equipment; and, finally, the instantaneous character of each of these tasks. These aspects result in larger, and consequently harder to solve, models when using the RTN methodology. The STN also yields larger models than the m-STN representation. In conclusion, within the scope of the problem characteristics covered, the m-STN appears to be the most adequate representation for the detailed design of batch plants, since it exploits the problem characteristics, reducing the need for auxiliary instances in the representation as well as the size of the associated mathematical formulation. Thus, the choice of an adequate representation for the solution of a given problem should, as much as possible, exploit the problem's intrinsic characteristics. However, the work presented should be explored further and more examples should be solved so as to confirm this conclusion. Other problem characteristics, such as set-up dependencies, cleaning needs and connectivity suitability, amongst others, should also be studied. This is now ongoing research by the authors.

4. References
Barbosa-Povoa, A.P.F.D. and Macchietto, S., 1994, Detailed design and retrofit of multipurpose batch plants. Computers Chem. Engng, 18, 11/12, 1013-1042.
Barbosa-Povoa, A.P. and Pantelides, C.C., 1997, Scheduling and design of multipurpose plants using the resource-task network unified framework. Computers Chem. Engng, 21b, S703-S708.
Kondili, E., Pantelides, C.C. and Sargent, R.W.H., 1988, A general algorithm for scheduling batch operations. In Proc. of 3rd Int. Symp. on Process Systems Engineering, pages 62-75, Sydney, Australia.
Pantelides, C.C., 1994, Unified framework for optimal process planning and scheduling. In D.W.T. Rippin and J. Hale, editors, Proc. Second Conf. on Foundations of Computer Aided Operations, CACHE Publications, pages 253-274.
Shah, N., 1992, Efficient scheduling, planning and design of multipurpose batch plants. Ph.D. Thesis, Imperial College, University of London, U.K.

European Symposium on Computer Aided Process Engineering- 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Generalized Modular Framework for the Representation of Petlyuk Distillation Columns
P. Proios and E.N. Pistikopoulos*
Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College, London SW7 2BY, U.K.

Abstract In this paper the Generalized Modular Framework (Papalexandri and Pistikopoulos, 1996) is used for the representation of the Petlyuk (Fully Thermally Coupled) column. The GMF Petlyuk representation, which avoids the use of common simplifying assumptions while keeping the problem size small, is validated for a ternary separation, by a direct comparison of its results to those obtained by a rigorous distillation model.

1. Introduction
The Petlyuk column (Petlyuk et al., 1965) is an energy-efficient distillation system which, along with its thermodynamically equivalent Dividing Wall Column (Wright, 1949), has been reported to achieve energy savings of up to 40% compared with conventional simple column arrangements (Glinos and Malone, 1988 and Schultz et al., 2002). The importance of this complex distillation column has compelled the development of numerous methods for its design and analysis. These methods can be classified into two main categories, namely those using simplified (shortcut) models and those using rigorous (detailed) models. Petlyuk et al. (1965) used shortcut calculations for the minimum reflux based on constant relative volatilities and internal flowrates. Cerda and Westerberg (1981) developed a shortcut model for the minimum reflux assuming sharp separations for the Petlyuk column. In Fidkowski and Krolikowski (1986) the Petlyuk column was studied for ternary mixtures and sharp separations through a shortcut model for the minimum vapour flowrate based on the Underwood method. Glinos and Malone (1988) and Nikolaides and Malone (1988) designed the Petlyuk column using shortcut calculations under constant relative volatilities and equimolar flowrates. Carlberg and Westerberg (1989) and Triantafyllou and Smith (1992) used a three-simple-column approximation of the Petlyuk column. The former proposed a shortcut model for minimum vapour flowrate for nonsharp separations, whilst the latter based their design on the Fenske-Underwood-Gilliland shortcut techniques. Halvorsen and Skogestad (1997) used a dynamic shortcut model based on assumptions of equimolar flowrates and constant relative volatilities for their Petlyuk/Dividing Wall Column model. Agrawal and Fidkowski (1998) used Underwood's method for their Petlyuk design, and Fidkowski and Agrawal (2001) proposed a shortcut method for the separation of quaternary and higher mixtures in

*To whom correspondence should be addressed. Tel.: (44) (0) 20 7594 6620, Fax: (44) (0) 20 7594 6606, E-mail: [email protected]


Petlyuk arrangements, extending the Fidkowski and Krolikowski (1986) method. Shah and Kokossis (2001) designed Petlyuk columns in their framework based on the Triantafyllou and Smith (1992) shortcut procedure. Finally, Amminudin et al. (2001) proposed a shortcut method for the design of Petlyuk columns based on the equilibrium stage composition concept. It must be noted that the above methods provide fast and simple ways of designing and analysing the performance of the Petlyuk column. However, the fact that they are based on simplifying assumptions can place a limitation on their accuracy and applicability, notably in cases where these assumptions do not hold. This limitation can be overcome through the use of rigorous methods, which do not rely on simplifying assumptions. Chavez et al. (1986) examined the multiple steady states of the Petlyuk column through a detailed tray-by-tray model under fixed design, which was solved with a differential arc-length homotopy continuation method. Dünnebier and Pantelides (1999) designed Petlyuk columns using a detailed tray-by-tray distillation model based primarily on the rigorous MINLP distillation model of Viswanathan and Grossmann (1990). Also based on the latter, Yeomans and Grossmann (2000) proposed a disjunctive programming model for the design of distillation columns including Petlyuk arrangements. These methods are based on detailed and accurate models with general applicability. However, they generate considerably larger nonlinear programming problems, which lead to an increase in the computational effort.
The scope of the presented work is twofold: a) to provide a valid method for representing and analysing the performance of the Petlyuk column with respect to its energy efficiency potential at a conceptual level and b) based on this, to lay the foundations for the extension of the method to the synthesis level, that is, for the generation and evaluation of all column arrangements for this separation problem, involving simple as well as (partially) thermally coupled columns. These will be realised in an integrated way, from a process synthesis point of view, without generating a large optimization problem (as the rigorous methods do), while avoiding the common limiting assumptions characteristic of the shortcut methods.

2. The Generalized Modular Framework
In this work the Petlyuk column is represented through the Generalized Modular Framework (GMF) (Papalexandri and Pistikopoulos, 1996), which is an aggregation framework for process synthesis/representation. The GMF is based on the fact that, since a large number of process operations are characterized by mass and heat transfer phenomena (for instance the mass and heat exchange between liquid and vapour streams in distillation), these operations can be systematically represented in a compact and unified way using a generalized method for capturing those phenomena. The GMF, through its generalized mass and heat exchange modelling, aims in that direction. In brief, the GMF is a superstructure optimization method and, like most of the methods belonging to this class, consists of a Structural Model, responsible for the generation of the (structural) process alternatives, and a Physical Model, responsible for the evaluation of the latter's performance/optimality.


Figure 1: GMF Building Blocks (Ismail et al., 2001).

The Structural Model consists of: (i) the GMF building blocks and (ii) their interconnection principles. The GMF building blocks (Figure 1) are representations of higher levels of abstraction and lower dimensionality where mass/heat or pure heat exchange takes place. The existence of the building blocks is denoted mathematically through the use of binary (0-1) variables. The Interconnection Principles define the way the various building blocks should be connected to each other for the generation of physically meaningful alternative units and the resulting flowsheets. The mathematical translation of these principles is realised through a set of mixed and pure integer constraints, which define the backbone of the GMF structural model. The GMF Physical Model is employed for the representation of the underlying physical phenomena of the generated structures. Each building block is accompanied by its physical model, which is based on fundamental (and thus general) mass and heat exchange principles at the blocks' boundaries, consisting of mass and energy balances, molar fraction summation constraints and appropriate Phase Defining and Driving Force Constraints governing the mass and heat transfer. The complete GMF mathematical model, as a combination of the structural and physical models, is a Mixed Integer Nonlinear Programming (MINLP) problem, and can be found in detail in Papalexandri and Pistikopoulos (1996) and Ismail et al. (2001). For the GMF representation of the Petlyuk column a minimum number of 6 mass/heat and 2 pure heat modules are employed (Figure 2). The connectivities of the building blocks are appropriately arranged so that the complex structure of the Petlyuk column is obtained. This is done by fixing the corresponding binary variables to 0 or to 1 for the respective nonexistence or existence of building blocks and their interconnections.
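The idea of encoding a fixed structure by fixing existence binaries can be illustrated with a minimal sketch. The module names, the particular connections and the consistency check below are illustrative assumptions, not the actual GMF formulation:

```python
# Hypothetical sketch: a fixed flowsheet structure encoded with 0-1
# existence variables, in the spirit of the GMF structural model.
# Module names and connections are illustrative only.

# Existence binaries: 1 = building block present in the structure.
modules = {f"MH{i}": 1 for i in range(1, 7)}   # six mass/heat modules
modules.update({"COOLER": 1, "HEATER": 1})     # two pure heat modules

# Illustrative interconnections (directed pairs of existing modules).
connections = {("MH1", "MH2"), ("MH2", "MH3"), ("MH3", "HEATER"),
               ("MH4", "MH5"), ("MH5", "MH6"), ("MH1", "COOLER")}

def structure_is_consistent(modules, connections):
    """A connection may only exist between modules that themselves exist
    (the kind of pure-integer constraint used in the structural model)."""
    return all(modules.get(a, 0) == 1 and modules.get(b, 0) == 1
               for a, b in connections)

print(sum(modules.values()))                          # 8 blocks in total
print(structure_is_consistent(modules, connections))  # True
```

In an actual superstructure optimization these binaries would be decision variables; fixing them, as described above, reduces the MINLP to an NLP over the chosen structure.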
For the Petlyuk column representation, each mass/heat module represents a column section (an aggregation of trays), where a separation task takes place. The pure heat exchange modules represent the condenser (cooler) and the reboiler (heater) of the Petlyuk column. It must be noted that, since a tray-by-tray model is not employed, the equilibrium constraints are replaced by Driving Force Constraints at the two ends of each of the six mass/heat modules, according to the type of contact (countercurrent for distillation). These constraints, along with the Phase Defining Constraints and the conservation law constraints, ensure mass and heat transfer feasibility and define the distribution of the components in the existing building blocks. However, the main motivation for representing the Petlyuk column through the GMF lies in the latter's main representational advantages, which for the examined case are summarised below: (i) the GMF physical model captures efficiently the underlying

mass/heat transfer phenomena, since it is not based on simplifying and limiting assumptions such as sharp splits, equimolar flowrates or constant volatilities, and it does not involve any shortcut calculations; (ii) the GMF physical model can accommodate any thermodynamic model; (iii) the GMF structural and physical models allow the representation of the Petlyuk column in an aggregated way, leading to a smaller and easier to solve optimization problem; (iv) the framework can potentially be extended to the evaluation of other (distillation) systems through a superstructure based on the existing six mass/heat modules and by allowing more interconnections. In the following section the above advantages and the framework's validity and representational merit will be demonstrated through a GMF/Petlyuk column case study.

Figure 2: Petlyuk Column (Conventional and GMF representation).

3. Numerical Results - Validation
The GMF representation of the Petlyuk column is employed for the separation of the ternary mixture of benzene, toluene and o-xylene. The problem data were taken from Chavez et al. (1986) and involve the separation of a saturated liquid feed of 211.11 mol/s, with molar fractions of benzene, toluene and o-xylene of 0.2, 0.4 and 0.4, respectively, into three product streams with molar fractions of 0.95, 0.9 and 0.95 in the above components. The objective is the minimization of the utility cost. For a fixed (Petlyuk) structure, the corresponding GMF mathematical problem is a nonlinear programming (NLP) problem, which was solved in GAMS (Brooke et al., 1992) using the solver CONOPT2. Due to the inherent stream mixing and splitting terms the problem is nonconvex and is therefore solved only to local optimality. However, a systematic procedure has been employed, with appropriate initial guesses and bounds for the stream flowrates, temperatures and molar fractions, in order to find a local optimal point which represents the potential (energy consumption levels) of the examined Petlyuk column. From the optimization runs for the mixture and feed composition examined, the GMF provided the energy consumption levels (heater duty of 9,026.3 kW) and the operating conditions of the Petlyuk column, using the mass/heat exchange principles of the GMF physical model.

However, since the GMF physical model is an aggregated (and thus nonconventional) model, the validity of the GMF results for the Petlyuk representation was evaluated by comparing these results quantitatively and qualitatively to those derived from a conventional tray-by-tray model. For this purpose, the rigorous model of Viswanathan and Grossmann (1990) was used for the minimization of the operating cost, with the problem definition and the column design also taken from Chavez et al. (1986). From the results of the optimization, the two models are found to be in quantitative agreement, since the reboiler heat duty in the rigorous model was 10,530 kW, which is very close to that of the GMF heater, indicating that the GMF model predicted correctly the energy consumption of the Petlyuk column. The small divergence between the two is possibly due to the fact that in the GMF the bottoms product stream is removed before the heater (with less liquid entering it, Figure 2). However, such a quantitative agreement needs to be the product of a qualitative agreement (that is, in the components' distribution over the various column sections). Since the GMF does not provide information at the tray level, in order to enable a comparison of the composition and temperature profiles of the two models, the points of the feeds, interconnections and side streams of the GMF representation were placed on the corresponding points (tray locations) of the tray-by-tray model, on a common x-axis. Figure 3 shows the profiles of the toluene composition and of the temperature in the main column of the Petlyuk arrangement. From these it is apparent that the two models are also in qualitative agreement in the main column (similar results were derived for the prefractionation column, as well).
This qualitative agreement shows that the GMF provided insights into the performance of the Petlyuk column with respect to its energy consumption, based on a sound physical model capable of capturing efficiently the mass and heat transfer phenomena of the examined system. Another important point is related to the size of the generated optimization problem. Due to the aggregated nature of the GMF representation (where variables and equations are accounted for only at the building blocks' boundaries and not at the tray level, as in the rigorous models), a size reduction of 75% in the number of variables and constraints has been noted when using the GMF instead of the trayed model (depicted

Figure 3: Qualitative Comparison of GMF and Rigorous Models (Petlyuk Column). (x-axis: Trays/Modules.)


in Figure 3 by the fewer GMF points), with direct effects on the computational effort. Of course, as can be observed in Figure 3, the GMF does not provide detailed results and profiles, as the rigorous model does, but this is beyond the aim of the framework, which is not a simulation tool but a synthesis/representation tool at a conceptual level.

4. Conclusions

As shown, the GMF provides a sound and useful tool for the representation and evaluation of the Petlyuk column and its underlying physical phenomena, providing valid information about the energy consumption levels of the fully thermally coupled column. Moreover, the GMF results, which were derived using an aggregated physical model and thus a significantly reduced optimization problem, were evaluated for their consistency and validity through comparison with a well-established rigorous distillation model. Finally, having validated the GMF physical model for the examined system, the complete GMF model, with its physical model as used in the presented work, and with its full structural model (without a fixed structure, but with an adequate number of building blocks and their interconnections to be determined by the optimizer), can now be used for the synthesis problem, i.e. the generation and evaluation of all the alternatives of interest (simple and complex) for the examined separation problem, which is the scope of our current research.

5. References
Agrawal, R. and Fidkowski, Z.T., 1998, Ind. Eng. Chem. Res., 37, 3444.
Amminudin, K., Smith, R., Thong, D. and Towler, G., 2001, Trans IChemE, 79(A), 701.
Brooke, A., Kendrick, D. and Meeraus, A., 1992, GAMS - A User's Guide, Scientific Press, Palo Alto.
Carlberg, N.A. and Westerberg, A.W., 1989, Ind. Eng. Chem. Res., 28, 1386.
Cerda, J. and Westerberg, A.W., 1981, Ind. Eng. Chem. Process Des. Dev., 20, 546.
Chavez, R.C., Seader, J.D. and Wayburn, T.L., 1986, Ind. Eng. Chem. Fundam., 25, 566.
Dünnebier, G. and Pantelides, C.C., 1999, Ind. Eng. Chem. Res., 38, 162.
Fidkowski, Z.T. and Agrawal, R., 2001, AIChE J., 47(12), 2713.
Fidkowski, Z.T. and Krolikowski, L., 1986, AIChE J., 32(4), 537.
Glinos, K. and Malone, M.F., 1988, Chem. Eng. Res. Des., 66, 229.
Halvorsen, I.J. and Skogestad, S., 1997, Comp. Chem. Eng., 21, S249.
Ismail, S.R., Proios, P. and Pistikopoulos, E.N., 2001, AIChE J., 47(3), 629.
Nikolaides, I.P. and Malone, M.F., 1988, Ind. Eng. Chem. Res., 27(5), 811.
Papalexandri, K.P. and Pistikopoulos, E.N., 1996, AIChE J., 42, 1010.
Petlyuk, F.B., Platonov, V.M. and Slavinskii, D.M., 1965, Int. Chem. Engng, 5(3), 555.
Schultz, M.A., Stewart, D.G., Harris, J.M., Rosenblum, S.P., Shakur, M.S. and O'Brien, D.E., 2002, CEP, 98(5), 64.
Shah, P.B. and Kokossis, A.C., 2001, Comp. Chem. Eng., 25, 867.
Triantafyllou, C. and Smith, R., 1992, Trans IChemE, 70(A), 118.
Viswanathan, J. and Grossmann, I.E., 1990, Comp. Chem. Eng., 14(7), 769.
Wright, R.O., 1949, Fractionation Apparatus, U.S. Patent 2,471,134.
Yeomans, H. and Grossmann, I.E., 2000, Ind. Eng. Chem. Res., 39, 4326.



A Multi-Modelling Approach for the Retrofit of Processes
A. Rodriguez-Martinez, I. Lopez-Arevalo, R. Banares-Alcantara* and A. Aldea
Department of Chemical Engineering and Department of Computer Engineering and Mathematics, Universitat Rovira i Virgili, Tarragona, Spain.

Abstract The retrofit of an existing process is a complex and lengthy task. Therefore, a tool to support the retrofit by reasoning about the existing process and the potential areas of improvement could be of great help. A proposal for a retrofit approach based on a multimodelling knowledge representation is presented in this paper. The use of structural, behavioural, functional and teleological models allows the designer to work with a combination of detailed and abstract information depending on the retrofit step. The proposed retrofit process consists of four steps: data extraction, analysis, modification and evaluation. The HEAD and AHA! prototype systems were implemented for the two initial steps. These systems have been applied in a case study to the ammonia production process.

1. Introduction
Industrial processes require periodic evaluations to verify their correct operation, both in technical and economic terms. These evaluations are necessary due to changes in the markets and in safety and environmental legislation. In order to satisfy these demands it is necessary to investigate process alternatives that allow the optimal use of existing resources with the minimum possible investment. The retrofit of processes is a methodology for the analysis and evaluation of possible changes to an existing process in order to improve it with respect to some metric (economic, environmental, safety, etc.). Historically, the retrofit of processes has been largely centred on energy savings. In recent decades significant advances in this area were obtained through the use of the pinch methodology (Linnhoff and Witherell, 1986) and mathematical programming techniques (Grossmann and Kravanja, 1995). Other systems, such as the one proposed by Fisher et al. (1987), combine heuristic rules with decision hierarchies. These methods generate process alternatives based on the modification of the process structure or the dimensions of the items of equipment. A possible improvement to these approaches would be the reduction of the complexity originated by the use of detailed information. As an alternative approach we propose the use of multiple models (structural, behavioural, functional and teleological) to represent detailed and abstract knowledge for the retrofit of artifacts in general and chemical processes in particular.

To whom correspondence should be sent. Email: [email protected]


2. Methodology
The proposed methodology for retrofit consists of four steps and the use of a multi-model knowledge representation.

2.1. Retrofit process
Our proposed retrofit process is shown in Fig. 1.

Fig. 1. The retrofit process based on a multi-model knowledge representation.

The main steps of the retrofit process are:
• Data Extraction. Information about the artifact is extracted from an initial representation (the simulation output from HYSYS™ in our case). The HEAD system performs the data extraction, see Section 3.
• Design Analysis. The extracted information is abstracted at several levels based on a set of hierarchical functions and precedence rules. This abstracted information is analysed to identify promising sections for retrofit. The AHA! system performs the design analysis, see Section 3.
• Design Modification. Alternatives are generated based on the application of new specifications to the original artifact.
• Design Evaluation. The generated alternatives are evaluated with respect to their specifications. If an alternative does not satisfy the specifications, the Design Modification step is repeated until they are satisfied. The RETRO system is being implemented for the design modification and evaluation steps (see Section 4).

2.2. Knowledge representation
We propose a multi-modelling approach for the representation of knowledge, as suggested by Chittaro et al. (1993). In our approach, a unit (i.e. the building block of an artifact; in the case of a chemical process it corresponds to an item of equipment or a section of the process) is represented by the following types of models:
• Structural, i.e. the class of a unit and its connectivity.

• Behavioural, i.e. how the unit works.
• Functional, i.e. the role of the unit within the artifact.
• Teleological, i.e. the objective and justification of the unit.

Depending on the retrofit step and the abstraction level, we can use detailed information (structural and behavioural models) or abstract information (functional and teleological models) to reason about a unit.
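The four-model representation of a unit can be sketched as a simple data structure. This is an illustrative assumption (the paper does not show the internal data structures of HEAD/AHA!); the field contents below are hypothetical:

```python
# Minimal sketch of the multi-model unit representation: one unit,
# four models (structural, behavioural, functional, teleological).
from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    structural: dict   # class of the unit and its connectivity
    behavioural: dict  # how the unit works (input/output comparison)
    functional: str    # role of the unit within the artifact
    teleological: str  # objective and justification of the unit

# Hypothetical reactor unit from an ammonia-type process.
reactor = Unit(
    name="PFR-100",
    structural={"class": "reactor", "inlets": ["feed"], "outlets": ["effluent"]},
    behavioural={"T_in_K": 650.0, "T_out_K": 720.0, "conversion": 0.25},
    functional="Reaction",
    teleological="produce NH3 from the H2/N2 feed",
)
print(reactor.functional)  # Reaction
```

A retrofit step that needs only abstract reasoning would read `functional`/`teleological`, while detailed analysis would use `structural`/`behavioural`.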

3. Application of the Methodology and Results
For the data extraction step we have implemented the HEAD system (HYSYS ExtrAction Data). HEAD is programmed in MS Visual Basic™ and its goal is to extract information from a process flow diagram (PFD), taking advantage of the Programmability features of HYSYS™. The extracted information is then sent to AHA! (Automatic Hierarchical Abstraction tool), a Java-based prototype system that generates different levels of abstraction from the initial PFD in order to identify sections where retrofit can be applied. In the near future, the output results of AHA! will be used by RETRO (Reverse Engineering Tool for Retrofit Options). RETRO (now being developed in Java) will generate and evaluate process alternatives.

3.1. Generation of meta-units
Initially, the information extracted by HEAD from HYSYS™ is used by AHA! to generate Units (process blocks). A Unit consists of four models: structural, behavioural, functional and teleological. The models of a Unit are built as follows: the behavioural model is obtained by comparing its input and output values. The type of Unit and its connectivity constitute the structural model. Furthermore, each Unit is associated with a functional model. Finally, the teleological model defines in an abstract manner the goal and purpose of a Unit inside an artifact. Units are abstracted by means of inference mechanisms. During this process, Meta-unit(s) are generated as a result of abstracting two or more Unit(s) and/or Meta-unit(s). These inference mechanisms are implemented as a rule-based system based on (a) the Douglas methodology (Douglas, 1988); (b) the identification of generic blocks (Turton et al., 1998); and (c) the application of a hierarchy of functions (Teck, 1995). A reduced version of the hierarchy of functions is shown in Table 1. These functions are prioritised according to the precedence shown in Fig. 2. The abstraction process trail can be interpreted as an inverse record of a plausible design history.

Reaction → Separation → Temperature Change → Pressure Change → Flow Change
(in decreasing order of precedence)

Fig. 2. Functional precedence in AHA!


Table 1. Hierarchy of Functions.

General Function      Associated operations
Reaction              Reaction
Separation            Decantation, extraction, distillation, absorption, stripping, adsorption, crystallisation, leaching, drying, and membranes
Temperature_change    Heating, cooling
Pressure_change       Pressure_decrement, Pressure_increment
Flow_change           Mixing, splitting

3.2. Case study
We have applied HEAD and AHA! to the ammonia production process, see Fig. 3. In this process, a hydrogen/nitrogen stream is fed to three catalytic reactors in series. The NH3 produced is fed to the separation section (V-100, V-101) to obtain a 95% pure product stream. Two heat exchangers are used for energy recovery and two coolers are used to obtain flash conditions.

Fig. 3. (HYSYS flowsheet of the ammonia production process; image not reproduced.)
The number of the total thermodynamic equivalent partially coupled (TEPC) configurations for an n-component mixture is

S_TEPC(n) = 2^(n-2) · (2(n-1))! / (n!(n-1)!)                    (3)

Table 1 illustrates the number of the original partially coupled configurations, as well as the number of the total thermodynamic equivalent thermally coupled configurations generated from the conventional simple column configurations for feed mixtures with different numbers of components.

Table 1. The number of thermodynamic equivalent thermally coupled schemes generated from the simple column configurations for an n-component mixture.

No. of components   No. of SC        No. of OPC       No. of total TEPC
                    configurations   configurations   configurations
3                   2                2                4
4                   5                5                20
5                   14               14               112
6                   42               42               672
7                   132              132              4,224
8                   429              429              27,456
9                   1,430            1,430            183,040
10                  4,862            4,862            1,244,672
11                  16,796           16,796           8,599,552

Obviously, the thermodynamic equivalent partially coupled column configurations formulate a unique search space of the possible thermally coupled alternatives for the optimal design of distillation systems for multicomponent separations. Owing to space limitations, the optimal design of the partially thermally coupled systems among all of the TEPC configurations for specific multicomponent mixtures will be presented in future publications.
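The counts in Table 1 can be cross-checked numerically against Eq. (3). In this sketch the number of simple-column (SC) configurations is taken as the standard (2(n-1))!/(n!(n-1)!) count of sharp-split sequences, and the TEPC closed form 2^(n-2)·SC(n) is inferred from the table entries:

```python
# Cross-check of the configuration counts in Table 1.
from math import factorial

def n_simple_column(n):
    """Number of simple column (SC) configurations for n components:
    (2(n-1))! / (n! (n-1)!)."""
    return factorial(2 * (n - 1)) // (factorial(n) * factorial(n - 1))

def n_tepc(n):
    """Total thermodynamic equivalent partially coupled configurations,
    Eq. (3): 2^(n-2) times the SC count."""
    return 2 ** (n - 2) * n_simple_column(n)

# Spot-check against Table 1 entries.
table = {3: (2, 4), 4: (5, 20), 5: (14, 112),
         6: (42, 672), 11: (16796, 8599552)}
for n, (sc, tepc) in table.items():
    assert n_simple_column(n) == sc and n_tepc(n) == tepc
print(n_tepc(7))  # 4224
```

The check reproduces every row of the table, e.g. 4,224 TEPC configurations for a 7-component mixture.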

4. Conclusions
In this work, the synthesis of the partially thermally coupled column configurations for multicomponent distillations has been studied with regard to their thermodynamic equivalent structures. A complete space of the possible thermodynamic equivalent alternatives of the partially coupled configurations for multicomponent mixtures has been formulated. A formula is presented to calculate the number of all the partially coupled schemes for any n-component mixture. The formulated alternatives of all the possible arrangements of PC configurations provide a complete search space for the optimal design of multicomponent distillation systems, not only with respect to economics but also to column equipment design. This can help designers to find the final optimal thermally coupled distillation systems with concern for both economics and equipment design.

5. References
Agrawal, R., 1996, Ind. Eng. Chem. Res., 35, 1059.
Carlberg, N.A. and Westerberg, A.W., 1989, Ind. Eng. Chem. Res., 28, 1386.
Christiansen, A.C., Skogestad, S. and Lien, K., 1997, Comput. Chem. Eng., 21, S237.
Petlyuk, F.B., Platonov, V.M. and Slavinskii, D.M., 1965, Int. Chem. Eng., 5, 555.
Rong, B.-G., Kraslawski, A. and Nystrom, L., 2001, Comput. Chem. Eng., 25, 807.
Rong, B.-G. and Kraslawski, A., 2002, Ind. Eng. Chem. Res., 41, 5716.
Rong, B.-G. and Kraslawski, A., 2003, AIChE J., 49, xxx.
Sargent, R.W.H. and Gaminibandara, K., 1976, In Optimization in Action; L.W.C. Dixon, Ed.; Academic Press: London, p. 267.
Thompson, R.W. and King, C.J., 1972, AIChE J., 18, 941.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


A Multicriteria Process Synthesis Approach to the Design of Sustainable and Economic Utility Systems

Zhigang Shang, Department of Process & Systems Engineering, Cranfield University, Cranfield, MK43 0AL, UK
Antonis Kokossis, Department of Chemical Engineering & Process Engineering, University of Surrey, Guildford, Surrey GU2 5XH, UK

Abstract

The proper design criteria for a modern utility plant should include both environmental and economic requirements: not only the capital and operating costs of a utility plant but also the corresponding utility wastes must be minimised. This paper presents a systematic multicriteria process synthesis approach for designing sustainable and economic utility systems. The proposed approach enables the design engineer to systematically derive optimal utility systems that are both sustainable and economic by embedding Life Cycle Assessment (LCA) principles within a multiple-objective optimisation framework. It combines the merits of total site analysis, LCA, and multi-objective optimisation techniques.

1. Introduction

In the process industries, large amounts of gaseous emissions are generated by the combustion processes associated with utility systems. These emissions can have many impacts on the surrounding environment. As a result of serious concerns about environmental problems in recent years, the development of process synthesis methods for waste reduction has become a research issue of growing importance. Thus, the proper design criteria for a modern utility plant should include both environmental and economic requirements: not only the capital and operating costs of a utility plant but also the corresponding utility wastes must be minimised. Many applications have been presented previously to address the problem of synthesis and design of utility systems (Papoulias and Grossmann, 1983; Colmenares and Seider, 1989; Bruno et al., 1998; Wilkendorf et al., 1998; Rodriguez-Toral et al., 2001). It should be noted that all of the studies mentioned above addressed the utility system design problem only on the basis of economic considerations, and none of them adopted waste minimisation as a design criterion. Research in the latter area has not received much attention until recently. Smith and Delaby (1991) sought to establish minimum targets for the flue gas emissions of a utility system. Linnhoff (1994) proposed an approach to the minimisation of environmental emissions through improved process integration, i.e. pinch technology. However, these approaches were not able to put a cost against emissions. As the impact of a process on the environment depends on its structure and design characteristics, environmental and economic issues should be considered simultaneously as an integral part of process synthesis and design (Friedler et al., 1994; Linninger et al., 1994). This invariably requires trade-offs between these issues. A mathematical programming approach should, in general, be more comprehensive and better suited to trading off these issues, as long as all essential engineering insights are formulated in the mathematical models. To include environmental impact considerations in process design, Life Cycle Assessment (LCA) is gaining wider acceptance as a method for identifying more sustainable design options. Recently, LCA has started to be coupled with multi-objective optimisation to provide a framework for process design that simultaneously optimises environmental, economic and other criteria (Stefanis et al., 1997; Azapagic, 1999). These developments are still underway. The multi-objective optimisation techniques used in these works can only obtain Pareto-optimal solutions, which provide an infinite number of options for the optimal design; other multicriteria decision-making (MCDM) techniques are therefore required to identify the best compromise solutions. Furthermore, few works have been reported that generate utility system designs based on the integration of LCA and multi-objective optimisation. Here we present a systematic multicriteria process synthesis technology for designing sustainable and economic utility systems. The technology generates the best compromise solutions by simultaneously optimising environmental, economic and other criteria, rather than producing Pareto-optimal solutions that provide an infinite number of options.

2. Multicriteria Process Synthesis

The proposed multicriteria process synthesis technology enables the design engineer to systematically derive optimal utility systems that are both sustainable and economic by embedding LCA principles within a multiple-objective optimisation framework. It combines the merits of total site analysis, LCA, and multi-objective optimisation techniques, following a four-step procedure:
(i) Identification of design candidates using total site analysis;
(ii) Environmental impact assessment using LCA principles;
(iii) Formulation of a multi-objective optimisation model that incorporates environmental impact criteria as design objectives together with economics;
(iv) Multi-objective optimisation using goal programming techniques.

Step 1: Identification of design candidates using total site analysis

The first step in the formulation of the utility system synthesis problem is to consider systematically many alternative configurations by including them in a superstructure. In this step, the technology screens various utility units and identifies the efficient ones to be implemented in a superstructure from which the optimum design will be selected. There is an enormous number of utility units that can be employed in a utility system, namely boilers, back-pressure and condensing turbines, gas turbines, electric motors, steam headers at different pressure levels, condensers, auxiliary units, and all of their different combinations. If all of them were included in a superstructure, it would be too large to be solved. In this approach, the Total Site Profiles (TSP) (Dhole and Linnhoff, 1992) are used to locate the feasible utility units, in the context of a total site, that may be used to satisfy the heat and power demands of a production site. TSP give a simultaneous view of heat surplus and heat deficit for all the processes on the site and reveal the cogeneration potential of the whole site. Thus the TSP can be used as a conceptual tool to screen and target feasible utility units for the site, such as the location of the steam headers and cogeneration units. The Thermodynamic Efficiency Curve (TEC) (Shang and Kokossis, 2001) is then employed to identify the efficient utility units by screening among the feasible ones; these efficient units form the superstructure. The TEC tool compares the efficiencies of utility units, and only the units with promising efficiencies are included. Therefore, the superstructure derived by the proposed approach is much smaller than a general superstructure that includes all possible units.
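The screening step above can be caricatured numerically. The sketch below keeps only candidate units whose efficiency clears a threshold; the unit names, efficiency values and the cut-off are illustrative assumptions, not data from the paper (the actual TEC is a graphical targeting tool, not a simple threshold).

```python
# Simplified efficiency-based screening of candidate utility units,
# in the spirit of the TEC step described above.
# All names and numbers below are illustrative assumptions.

def screen_units(candidates, min_efficiency=0.75):
    """Keep only units whose efficiency meets the threshold."""
    return {name: eff for name, eff in candidates.items()
            if eff >= min_efficiency}

candidates = {
    "boiler_gas": 0.90,
    "boiler_coal": 0.82,
    "back_pressure_turbine": 0.78,
    "condensing_turbine": 0.55,   # below threshold, excluded
    "gas_turbine": 0.80,
}

# Units surviving the screen form the (smaller) superstructure.
superstructure = screen_units(candidates)
```

The surviving units would then be assembled into the superstructure passed to the optimisation step.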

Step 2: Environmental impact assessment using LCA principles

The second step involves carrying out an LCA study of the superstructure. LCA principles are used to estimate the environmental impact of each candidate unit included in the superstructure. The LCA study considers a broad system covering not only the utility system but also all processes associated with raw material extraction and imported electricity generation. Raw materials such as fuels and water are assumed to be available at no environmental penalty. In this approach, a typical coal-fired power plant is included to generate the electricity that needs to be imported by the utility system, as shown in Figure 1. The advantage of the broad system is that the input wastes (to the utility system) associated with imported electricity can be accounted for together with the output emissions (from the utility system). Next, the LCA study estimates the amount and type of each waste leaving the system boundary. Once the inventory has been determined, the impact of each waste on the surrounding environment is quantified. Here we use the widely accepted approach described by Heijungs (1992), in which the wastes are grouped according to the part of the environment on which they will impact. Impacts related to global warming, ozone depletion, acidification, nitrification, photochemical oxidation, resource depletion and human toxicity are considered. The advantage of using such environmental impacts is that the information provided is directly linked to the impact on the environment rather than, for instance, to mass flowrates of waste materials.
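The impact-quantification step described above amounts to multiplying each inventoried emission by a characterisation factor and summing per impact category. The sketch below shows this aggregation; the factors and mass flows are illustrative placeholders, not Heijungs' actual characterisation data.

```python
# Minimal sketch of LCA impact quantification: each waste's mass flow is
# weighted by a characterisation factor and summed per impact category.
# Factors and flows below are illustrative, not Heijungs (1992) data.

CHARACTERISATION = {
    "global_warming": {"CO2": 1.0, "CH4": 21.0},   # CO2-equivalents
    "acidification":  {"SO2": 1.0, "NOx": 0.7},    # SO2-equivalents
}

def impact_scores(emissions, factors=CHARACTERISATION):
    """emissions: mass flow per substance, e.g. {"CO2": 1000.0} (kg/h)."""
    return {
        category: sum(fac * emissions.get(sub, 0.0)
                      for sub, fac in subs.items())
        for category, subs in factors.items()
    }

scores = impact_scores({"CO2": 1000.0, "CH4": 2.0, "SO2": 5.0, "NOx": 10.0})
# global_warming = 1000*1.0 + 2*21.0 = 1042.0
# acidification  = 5*1.0 + 10*0.7   = 12.0
```

Each category score then becomes one of the minimisation objectives in the optimisation model of Step 3.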


Figure 1. The broad system boundary.


Step 3: Formulation of the multi-objective optimisation model

Having developed the superstructure for the utility system, one can formulate a mathematical program for its synthesis. In order to consider environmental criteria as distinct objectives together with economics, a multi-objective optimisation formulation is used to select the most sustainable and economic utility system from the superstructure by minimising all the environmental impacts of the utility system simultaneously, while minimising its total cost subject to the given set of utility demands. The numerical values of the environmental impacts and costs depend on the design characteristics of the utility system. Therefore, the environmental impact criteria identified and quantified in the LCA step and the total cost of the utility system are treated as independent, distinct minimisation functions in the multi-objective optimisation model. The cost objective function is the sum of annualised capital and operating costs; the former includes the fixed and variable costs of all system units, while the latter consists of the costs of fuels, fresh water and purchased electricity. The material and energy balance equations associated with every unit in the superstructure are included as equality constraints of the optimisation problem. In addition to the balance equations, models of gas emissions and environmental impacts are also integrated into the optimisation model. Binary variables signify the existence or non-existence of units in the superstructure. The resulting multi-objective optimisation problem is formulated as an MINLP model.
The decisions to be made by the multi-objective optimisation model include the configuration of the utility system, the operating pressures and temperatures of the steam headers, the types of fuels used by the units, and all stream flowrates.

Step 4: Multi-objective optimisation using goal programming techniques

Both structural and parameter optimisation of the superstructure are performed for the multi-objective MINLP model on all environmental and cost objective functions, to locate the utility systems with minimal environmental impact and the desired economic performance. The multi-objective MINLP model is solved with goal programming (GP) techniques so as to provide the optimal configuration from a superstructure that embeds many feasible utility systems. By being able to trade off incommensurable objectives, e.g. environmental impacts and economic requirements, the GP methods avoid well-known difficulties such as the weighting of objectives and the infinite number of non-inferior solutions. In this approach, the objectives are ranked and then minimised lexicographically using non-Archimedean GP to identify the best compromise solution. The best performance of each criterion over the specified operating ranges is used as the goal for the multi-objective optimisation problem. Rather than attempting to achieve optimality for single-objective problems, the GP approach finds the best compromise solution that comes as close as possible to satisfying the design goals.
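The lexicographic idea above can be illustrated with a deliberately tiny linear stand-in for the MINLP: first minimise the deviation from the emissions goal, then minimise cost without worsening the achieved emissions level. All coefficients, the demand, and the goal value are invented for illustration; the paper's actual model is a mixed-integer nonlinear program solved with non-Archimedean GP.

```python
# Two-stage lexicographic goal programming sketch with two continuous
# unit loads x1, x2. All numbers are illustrative assumptions.
from scipy.optimize import linprog

cost = [1.0, 3.0]    # cost per unit load of units 1 and 2
emis = [0.5, 0.2]    # emissions per unit load
demand = 100.0       # total load to be met
emis_goal = 30.0     # emissions goal

# Stage 1: minimise the emissions overshoot p above the goal.
# Variables: x1, x2, p >= 0; constraints: x1 + x2 = demand,
# emis . x - p <= emis_goal.
res1 = linprog(c=[0, 0, 1],
               A_ub=[[emis[0], emis[1], -1]], b_ub=[emis_goal],
               A_eq=[[1, 1, 0]], b_eq=[demand],
               bounds=[(0, None)] * 3)
overshoot = res1.fun  # best achievable emissions overshoot

# Stage 2: minimise cost, holding emissions at goal + overshoot.
res2 = linprog(c=cost,
               A_ub=[[emis[0], emis[1]]], b_ub=[emis_goal + overshoot],
               A_eq=[[1, 1]], b_eq=[demand],
               bounds=[(0, None)] * 2)
```

Here the emissions goal is attainable (overshoot of zero), so stage 2 simply returns the cheapest load split that still meets the emissions goal.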


3. Case Study

The methodology is illustrated through its application to an industrial complex. The case study considers the design of a site utility system for the complex. Figure 2 shows the superstructure for the utility system to be designed to satisfy the utility demands of the industrial complex: VHP, HP, MP and LP steam as well as power. The superstructure consists of three main boilers (B1, B2, B3) which use different fuels (natural gas, coal and oil), one gas turbine boiler (GT boiler), two local boilers (P1 and P2), six steam turbines (T1 to T6), two gas turbines (GT1 and GT2) which use natural gas and oil respectively, one BFW pump and a deaerator. There are five steam levels (VHP, HP, MP, LP and VLP) and one vacuum level. Steam can be generated at two levels: very high pressure (B1, B2, B3, GT boiler and P2) and high pressure (P1). Letdown steam from higher levels is also available. The utility system is interconnected with the utility grid; the connection allows electricity to be imported when needed, and there are options to export excess electricity.

Figure 2. The superstructure of a utility system.

The problem is formulated as a multi-objective optimisation model and solved using goal programming techniques. The optimal solution includes one oil boiler (B3), one natural gas turbine (GT2), one gas turbine boiler (GT boiler), two local boilers (P1, P2) and four steam turbines (T1, T3, T4 and T6).

4. Conclusions

A systematic multicriteria synthesis technology for the design of sustainable and economic utility systems has been developed. The proposed technology enables the design engineer to systematically derive optimal utility systems that are both sustainable and economic by embedding Life Cycle Assessment (LCA) principles within a multiple-objective optimisation framework. The best design is the utility system that incurs the minimum environmental impact together with minimum capital and operating costs.

5. References

Azapagic, A. and Clift, R., 1999, The application of life cycle assessment to process optimisation, Computers & Chemical Engineering, 23.
Bruno, J.C., Fernandez, F., Castells, F. and Grossmann, I.E., 1998, A rigorous MINLP model for the optimal synthesis and operation of utility plants, Chemical Engineering Research & Design, 76.
Colmenares, T.R. and Seider, W.D., 1989, Synthesis of utility systems integrated with chemical processes, Ind. Eng. Chem. Res., 28.
Dhole, V.R. and Linnhoff, B., 1992, Total site targets for fuel, co-generation, emissions and cooling, Computers & Chemical Engineering, 17.
Friedler, F., Varga, J.B. and Fan, L.T., 1994, Algorithmic approach to the integration of total flowsheet synthesis and waste minimisation, American Institute of Chemical Engineers Symposium Series, 90.
Heijungs, R. et al., 1992, Environmental Life Cycle Assessment of Products - Background and Guide, Leiden: Centre of Environmental Science.
Linnhoff, B., 1994, Use pinch analysis to knock down capital costs and emissions, Chem. Engng Prog., 90.
Linninger, A.A., Ali, S.A., Stephanopoulos, E., Han, C. and Stephanopoulos, G., 1994, Synthesis and assessment of batch processes for pollution prevention, American Institute of Chemical Engineers Symposium Series, 90.
Papoulias, S.A. and Grossmann, I.E., 1983, A structural optimization approach in process synthesis - I: Utility systems, Computers & Chemical Engineering, 7.
Rodriguez-Toral, M.A., Morton, W. and Mitchell, D.R., 2001, The use of new SQP methods for the optimization of utility systems, Comp. Chem. Engng., 25.
Shang, Z.G. and Kokossis, A.C., 2001, Design and synthesis of process plant utility systems under operational variations, ESCAPE-11, Denmark.
Smith, R. and Delaby, O., 1991, Targeting flue gas emissions, Trans IChemE, 69.
Stefanis, S.K., Livingston, A.G. and Pistikopoulos, E.N., 1997, Environmental impact considerations in the optimal design and scheduling of batch processes, Computers & Chemical Engineering, 21.
Wilkendorf, F., Espuna, A. and Puigjaner, L., 1998, Minimization of the annual cost for complete utility systems, Chemical Engineering Research & Design, 76.



A Decision Support Database for Inherently Safer Design

R. Srinivasan^1,*, K.C. Chia^1, A.-M. Heikkilä^2 and J. Schabel^2
^1 Department of Chemical & Environmental Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260
^2 VTT Industrial Systems, P.O. Box 1306, Tampere, Finland

Abstract

An inherently safer process relies on naturally occurring phenomena and robust design to eliminate or greatly reduce the need for instrumentation or administrative controls. Such a process can be designed by applying inherent safety (IS) principles such as intensification, substitution, attenuation, limitation of effects and simplification throughout the design process, from conception to completion. While the general principles and benefits of IS are well known, a searchable collection of inherently safer designs that have been implemented in industry has not been reported. Such a database of inherently safer design (ISD) examples would assist the process designer in the early stages of the design lifecycle, when critical design decisions are made. In addition to examples of IS design which have been successfully carried out, the database that we have developed contains process incidents which could have been averted by the application of ISD. In this paper, details of the database, the query engine, and potential applications are presented.

1. Introduction

Inherent safety is the pursuit of designing hazards out of a process, as opposed to using engineering or procedural controls to mitigate risk. This is usually achieved through intensification, substitution, attenuation, limitation of effects, simplification, avoiding knock-on effects, making incorrect assembly impossible, making status clear, tolerance of misuse, ease of control and computer control (Kletz, 1998). Using the above principles, a more robust plant can be designed in which departures from normal conditions are tolerated without serious consequences for safety, production, or efficiency. Despite the obvious importance of ISD, there has been only limited work on developing tools that support the assessment of IS. INSET was developed to promote IS principles and contains a set of tools which support the adoption of IS principles in process development and design (Malmen et al., 1995; van Steen, 1996; Turney et al., 1997). Recently, an expert system that supports ISD by identifying safety issues and proposing inherently safer alternatives was reported (Palaniappan et al., 2002a; 2002b). One important criticism of toolkits and expert systems is that, due to their 'generic' nature and the need to be applicable to a variety of processes, they cannot account for the subtle

* Corresponding Author. Tel: +65 67732671; Fax: +65 67791936; e-mail: [email protected]


nuances and special cases that occur during process design. Another issue relates to the links between safety, health, environmental aspects, economics, and the operability of a chemical plant (Palaniappan et al., 2002c). Since safety is rarely considered in isolation, there can be many synergies and trade-offs between the different facets. Again, it is not easy to foresee all the trade-offs, and judgement calls are required. To overcome these shortcomings, IS toolkits and expert systems can be complemented by a knowledge base of design examples describing scenarios where IS principles have been used. Such a database would also help the process designer by illustrating possible synergies and trade-offs between safety and other aspects during practical plant design. Such a decision-support database, called iSafeBase, is presented in this paper. The remainder of this paper is organised as follows: in the next section, the conception and implementation of iSafeBase are described. Two case studies are used to illustrate the use of iSafeBase in Section 3. Conclusions and directions for future work are presented in Section 4.

2. Database Design and Development

The following were some key considerations during the design of iSafeBase:
• Expandable: A database is useful only if it has a sufficiently large set of examples. To enable this, it should be easy to enter new examples into the database, not only for a designer familiar with the internal details of the system, but for any user, by means of a simple interface.
• Customisable: As mentioned above, safety is related to numerous aspects of process design, not all of which can be pre-enumerated. The design of the database should allow new classes of information to be added easily.
• Open architecture: It should have an open and flexible architecture that permits the exchange of information with other design support tools such as flowsheeting packages, CAD systems, or safety evaluation systems. Examples of ISD would then be available while working with those systems.

After comparing various database development software packages (including FileMaker, Microsoft FoxPro, Corel Paradox, Microsoft SQL Server and Oracle), Microsoft Access was selected as the preferred platform because of its ubiquitous availability and ease of use. Two distinct steps were needed to develop a structure that met these objectives: designing the data structures and constructing the relationships between them. These are described below.

2.1. Data structures

The following major classes of information are important:
1. Material properties - such as toxicity, corrosivity, reactivity, explosiveness, and flammability.
2. Design-related information - including design stage (chemistry route selection, chemistry route detailed evaluation, process design optimisation, process plant design, etc.), chemistry, and equipment.
3. Safety-related information - including hazards and IS principles.
4. Design alterations - involving chemistry, material, or equipment modifications.
5. Accident-related information.

Tables are used to organise the above data in iSafeBase.
Each table comprises a number of fields which store the various attributes necessary for that class. Table 1 shows some example tables and their fields. Note that references are provided for each design example and accident, to enable the designer to explore further.


Table 1: Database tables and their fields.

Table            Fields
IS Design        Description, Illustration, Design Stage, Equipment, Reference
Accidents        Outcome, Initiating Event, Contributing Factors, Consequences, Description, Equipment, Reference
IS Principles    Principle, Suggestion
Modification     Modification, Cost Savings
Type of Hazard   Type of Hazard, Properties, Role, Unit Operations

2.2. Relationships

Once the types of data have been specified, the relationships between the tables must be defined. The primary data tables provide a unique identifier (ID) for each record. Linking tables were created to relate records from different tables; these links use the identifiers to reference data across tables and enable one-to-one, one-to-many, many-to-one and many-to-many relationships. For example, a substance can have more than one hazardous property, and a hazardous property can be present in many substances. A many-to-many relationship would be described for a substance (say with ID=1) that is toxic (ID=1) and flammable (ID=3): this would be captured through one entry in the Materials table, two entries in the Properties table, and two rows in a material-properties link table (where the field 'Material' would have the value 1, and the field 'ID-Properties' would have the values 1 and 3 respectively). A simplified representation of the various relationships in iSafeBase is shown in Figure 1.
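The many-to-many pattern in the substance example above can be sketched in SQL. The sketch below uses SQLite for portability (iSafeBase itself is built in Microsoft Access), and the table and column names follow the text's example; the actual iSafeBase schema may differ.

```python
# Many-to-many link table, following the text's Materials/Properties
# example. SQLite stands in for Access; schema names are assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE Materials  (ID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Properties (ID INTEGER PRIMARY KEY, Hazard TEXT);
    CREATE TABLE MaterialProperties (
        Material      INTEGER REFERENCES Materials(ID),
        ID_Properties INTEGER REFERENCES Properties(ID)
    );
""")
db.execute("INSERT INTO Materials VALUES (1, 'substance_1')")
db.executemany("INSERT INTO Properties VALUES (?, ?)",
               [(1, 'toxic'), (2, 'corrosive'), (3, 'flammable')])
# Substance 1 is toxic (1) and flammable (3): two rows in the link table.
db.executemany("INSERT INTO MaterialProperties VALUES (?, ?)",
               [(1, 1), (1, 3)])

hazards = [row[0] for row in db.execute(
    "SELECT p.Hazard FROM Properties p "
    "JOIN MaterialProperties mp ON mp.ID_Properties = p.ID "
    "WHERE mp.Material = 1 ORDER BY p.ID")]
```

The join resolves the link table back into the list of hazards for the substance, exactly the lookup a designer's query would need.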

Figure 1: Relationships in iSafeBase.

2.3. Querying the database

Once the examples have been collated, they need to be retrieved. A key consideration for the acceptability of a database is the ease with which it can be queried. Queries have been implemented in iSafeBase to allow searches by specific equipment, hazard, substance, IS principle, modification, design stage, or outcome. Free-text searches, which look through every field in the database, can also be performed. Additionally, the functionality to browse all the cases related to a specific category, through a hierarchical interface, has also been implemented.

2.4. Graphical user interface

Developing the graphical user interface (GUI) was the last step in producing a functional database. Figure 2 shows the GUI for the two ways of querying iSafeBase described above.
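A free-text search of the kind described above can be implemented generically by scanning every column of every table with a LIKE pattern. The sketch below shows one way to do this over a SQLite stand-in; the two tables and their rows are invented examples, and iSafeBase's actual Access schema is richer.

```python
# Generic free-text search across all fields of all tables, mimicking
# the query style described in the text. Schema and rows are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE ISDesign  (Description TEXT, Equipment TEXT);
    CREATE TABLE Accidents (Description TEXT, Outcome TEXT);
""")
db.execute("INSERT INTO ISDesign VALUES ('Substitute hazardous route', 'reactor')")
db.execute("INSERT INTO Accidents VALUES ('Runaway reaction', 'explosion')")

def free_text_search(db, keyword):
    """Return (table, row) pairs where any text field contains keyword."""
    hits = []
    tables = [r[0] for r in db.execute(
        "SELECT name FROM sqlite_master WHERE type='table' "
        "AND name NOT LIKE 'sqlite_%'")]
    for table in tables:
        cols = [r[1] for r in db.execute(f"PRAGMA table_info({table})")]
        where = " OR ".join(f"{c} LIKE ?" for c in cols)
        pattern = [f"%{keyword}%"] * len(cols)
        hits += [(table, row) for row in
                 db.execute(f"SELECT * FROM {table} WHERE {where}", pattern)]
    return hits

# 'react' matches 'reactor' in ISDesign and 'reaction' in Accidents.
results = free_text_search(db, "react")
```

This is the same behaviour as the paper's keyword query: a single search term surfaces both design examples and accident records from all contexts.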

Figure 2: Querying iSafeBase by (a) Keyword search, and (b) Category specific browsing.

3. Case Study: Keyword 'React'

The current version of iSafeBase has forty design examples and accidents. Figure 3 lists the number of examples in each design stage, while Table 2 lists the different sources from which the design and accident examples were selected.

Figure 3: Design examples in each design stage (chemistry route selection, chemistry route detailed evaluation, process design optimisation, process plant design).

Table 2: Sources of design examples.

Source | Cases
Kletz, T.A., Process Plants: A Handbook for Inherently Safer Design, 1998, Taylor & Francis. | 22
Chementator, Chemical Engineering (journal). | 8
Proceedings of the International Conference & Workshop on Process Safety Management and Inherently Safer Processes, October 8-11, 1996, Orlando, AIChE. | 7
Bollinger, R.E. et al., Inherently Safer Chemical Processes - A Life Cycle Approach, 1996, AIChE. | 3
Total | 40

A query on the keyword 'react' is used to illustrate the different facets of the program. Twenty-eight design examples and thirteen accidents were returned for this query across all contexts. Two such design examples are outlined in Tables 3 and 4.

Table 3: Design example summary from Case study 1.

Hazardous Scenario: MIC reacted with alpha-naphthol to make carbaryl; large inventories of MIC were kept in the plant.
IS Principle: Substitution.
Suggestion: Use another process route that involves less hazardous materials or conditions.
Design Stage: Chemistry route selection.
Example of Modification: A different sequence of reactions: alpha-naphthol and phosgene are reacted together to give an ester, which is then reacted with methylamine to yield the same product; no MIC is produced. (Pilot tested by Makhteshim, an Israeli company.)
Reference: Kletz, T.A. (1998). Process Plants: A Handbook for Inherently Safer Design, p. 68.

4. Conclusions

While the importance of inherently safer design of chemical plants is widely accepted, it has not been widely practised, partly because of the lack of support tools. A database of examples of inherently safer designs has been reported in this paper. The software quickly retrieves cases of design modifications and related accidents for a given scenario. By making it possible to retrieve specific examples of ISD through a simple query process, it is hoped that this tool will guide plant designers in their efforts to develop safer chemical plants. It should also promote IS in the mindset of management, since concrete examples of what has been successfully implemented, and the associated rewards, can easily be presented.


Table 4: Design example summary from Case study 2.

Hazardous Scenario: Reaction runaway.
IS Principle: Ease of control.
Suggestion: Use physical principles instead of other measures that may fail or be neglected.
Design Stage: Chemistry route detailed evaluation.
Suggested Modification: Use another catalyst.
Example of Modification: ICI Chemicals & Polymers has developed oxy-anion promoted catalysts in which the selectivity promoter is adsorbed onto the catalyst to activate it. Any temperature excursion in the reactor results in desorption of the activator; thus, the reaction runaway potential has been eliminated.
Reference: Hawksley, J.L. and M.L. Preston (1996). "Inherent SHE: 20 Years of Evolution."

5. References

Kletz, T., 1998, Process Plants: A Handbook for Inherently Safer Design, Philadelphia: Taylor & Francis, pp. 1-19, 152-180.
Malmen, Y., Verwoerd, M., Bots, P.J., Mansfield, D., Clark, J., Turney, R. and Rogers, R., 1995, Loss Prevention by Introduction of Inherent SHE Concepts, SLP Loss Prevention Conference, December 1995, Singapore.
Palaniappan, C., Srinivasan, R. and Tan, R., 2002a, Expert System for Design of Inherently Safer Processes - Part 1: Route Selection Stage, Industrial and Engineering Chemistry Research, Vol. 41(26), pp. 6698-6710.
Palaniappan, C., Srinivasan, R. and Tan, R., 2002b, Expert System for Design of Inherently Safer Processes - Part 2: Flowsheet Development Stage, Industrial and Engineering Chemistry Research, Vol. 41(26), pp. 6711-6722.
Palaniappan, C., Srinivasan, R. and Halim, I., 2002c, A Material-Centric Methodology for Developing Inherently Safer and Environmentally Benign Processes, Computers & Chemical Engineering, Vol. 26(4/5), pp. 757-774.
Turney, R., Mansfield, D., Malmen, Y., Rogers, R., Verwoerd, M., Suokas, E. and Plasier, A., 1997, The INSIDE Project on inherent SHE in process development and design - The Toolkit and its application, IChemE Major Hazards XIII, April 1997, Manchester, UK.
van Steen, J., 1996, Promotion of inherent SHE principles in industry, IChemE - 'Realising an integrated management system', December 1996, UK.



Using Design Prototypes to Build an Ontology for Automated Process Design*

I.D. Stalker^1, E.S. Fraga^1, L. von Wedel^2, A. Yang^2
^1 Centre for Process Systems Engineering, Department of Chemical Engineering, UCL, London WC1E 7JE, UK
^2 Lehrstuhl für Prozesstechnik, RWTH Aachen, 52056 Aachen, Germany
E-mail: i.stalker@ucl.ac.uk

Abstract
Recently there has been increased interest in agent-based environments for automated design and simulation (Garcia-Flores et al. 2000). In such environments, responsibility for decision making is partly transferred from the engineer to the underlying agent framework. Thus, it is vital that pertinent knowledge is embedded within this framework. This motivates the development of an ontology (Gruninger & Lee 2002) for the particular domain. An important first step is a suitable organisation of the knowledge in a given domain.

1. Introduction
Automated process design is a complex task that typically makes use of an array of computational tools, for example thermophysical packages. Agent based systems, such as COGents (Braunschweig et al. 2002), offer a potential solution to the dynamic access and configuration of such tools. To realise this potential, an automated design agent requires both process design domain knowledge — that is, an ontology — and appropriate know-how to apply this domain knowledge. This paper describes the use of design prototypes to organise domain knowledge as a first step towards the development of an ontology for process design and the mechanisms needed to invest a design agent with the domain knowledge.

2. A Design Prototype for Conceptual Process Design Design prototypes arose in mechanical engineering but the ideas apply to generic design processes. The conceptual basis is the Function-Behaviour-Structure (FBS) framework (Gero 1990) which is motivated by the following:

[...] the metagoal of design is to transform function F (where F is a set) into a design description D in such a way that the artefact being described is capable of producing these functions. (Gero 1990)

*Work funded by Project COGENTS, Agent-Based Architecture For Numerical Simulation, funded by the European Community under the Information Society Technologies Programme (IST), under contract IST-200134431.

Figure 1. The Function-Behaviour-Structure (FBS) framework. Legend: F = function; Be = expected behaviour; S = structure; Bs = actual behaviour; D = design documentation.

This design description represents an artefact's elements, and since there is generally no function in structure nor structure in function, the transformation from function to description proceeds in stages. Gero (1990) introduces the FBS framework, Figure 1, to elaborate these stages, working on the premise that "it is function, structure, behaviour and their relationships which form the foundation of the knowledge which must be represented" (Gero 1990). The goal of conceptual process design is to generate and select good process designs, usually represented by a flowsheet with design parameters and often supplemented with design rationale (Banares-Alcantara 1997). This is the Design Artefact. A design problem begins with the desired products, reactions of interest, available processing technologies, raw materials and a set of criteria for ranking. We seek a process which will derive the desired products from the raw materials: this is the function F of our design artefact. Employing the FBS framework allows us to model process design as a combination of the following activities: formulation, in which a sequence of expected behaviours, Be, such as separation, reaction, etc., is formulated to realise F; synthesis, in which the expected behaviours are used to synthesise an appropriate structure S; analysis, in which the structure is analysed for cost and actual behaviours, Bs; evaluation, in which the actual behaviours are compared with the expected behaviours (ideally the actual behaviours will be an acceptable superset of the expected behaviours); reformulation, in which, since design problems are typically underdefined and the first few drafts of a flowsheet are likely to be incomplete (Laing & Fraga 1997), the expected behaviours and function are reformulated; and documentation, in which, finally, the final design artefact is fully documented in D. Examples of FBS for a generic prototype for conceptual design are shown in Table 1.

A Design Prototype is a knowledge representation schema which abstracts "all requisite knowledge appropriate to that design situation" (Gero 1990). Symbolically, a prototype proforma is expressed as P = (F, B, S, D, K, C)
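The activity cycle of the FBS framework (formulate, synthesise, analyse, evaluate, reformulate, document) can be sketched as a simple loop with a superset check at the evaluation step. The rule tables and behaviour names below are invented for illustration and are not taken from the paper:

```python
# Illustrative sketch of the FBS design cycle: formulate -> synthesise ->
# analyse -> evaluate -> reformulate -> document. The toy mapping from
# behaviours to structure elements is hypothetical.
SYNTHESIS_RULES = {"separation": "distillation column", "reaction": "reactor"}

def formulate(function):
    """Derive expected behaviours Be from the function F (a set of goals)."""
    behaviours = set()
    if "isolate pure product" in function:
        behaviours.add("separation")
    if "convert raw materials" in function:
        behaviours.add("reaction")
    return behaviours

def synthesise(expected):
    """Synthesise a structure S from the expected behaviours Be."""
    return [SYNTHESIS_RULES[b] for b in sorted(expected)]

def analyse(structure):
    """Analyse the structure for its actual behaviours Bs."""
    inverse = {unit: behaviour for behaviour, unit in SYNTHESIS_RULES.items()}
    return {inverse[unit] for unit in structure}

def design(function, max_iterations=5):
    expected = formulate(function)
    for _ in range(max_iterations):
        structure = synthesise(expected)
        actual = analyse(structure)
        if expected <= actual:             # evaluation: Bs should cover Be
            return {"structure": structure,
                    "documentation": sorted(actual)}  # documentation D
        expected |= actual                 # reformulation
    raise RuntimeError("no acceptable design found")

result = design({"convert raw materials", "isolate pure product"})
```

In a real agent the synthesis and analysis steps would invoke external tools rather than table lookups; the loop structure, however, mirrors the reformulation cycle described above.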

where K = (Kr, Kq, Kc, Kct, KR) is a tuple of, respectively: relational knowledge, Kr, which provides and makes explicit the dependencies between the variables in the function, behaviour and structure categories; qualitative knowledge, Kq, which provides information on the effects of modifying values of structure variables on behaviour and function; computational knowledge, Kc, which specifies mathematical and symbolic relationships between variables in the function, behaviour and structure categories; constraints or contextual knowledge, Kct, which identifies exogenous variables for a design situation; and reflexive knowledge, a pair KR = (T, P) comprising, respectively, the typology, which identifies the broad class to which the prototype belongs, and a partition, representing the subdivision of the concept represented by the prototype. Examples are shown in Table 1. C denotes the context in which the design activity is taking place. In our case, this is the context of process engineering and does not need further elaboration.

Two common approaches to developing a flowsheet for a given engineering process are a hierarchical approach (Douglas 1988) and an algorithmic approach, typically through mixed integer nonlinear programming (MINLP) (Grossmann et al. 1999). These are different mechanisms to transform the expected behaviours identified in the prototype into a suitable structure. Both begin with the statement of the function of the final process. Accordingly, the approaches refine the same base prototype in different directions. The hierarchical approach refines the prototype in small steps: starting with the coarse-grained top level information of process type, it applies a number of heuristics to derive the additional (refined) information; this approach emphasises qualitative knowledge.
The algorithmic approach refines the prototype in large steps: a minimum of required information is developed and this is used to develop a number of sections of the prototype by appealing to external search mechanisms; this approach emphasises computational knowledge. The two approaches are largely complementary, with minimal overlap. Accordingly, to ensure broad applicability, we have extracted, organised and collated into a single prototype design knowledge from a representative of each approach (Douglas 1988, Fraga et al. 2000).
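The prototype proforma P = (F, B, S, D, K, C) and the knowledge tuple K lend themselves to a straightforward record encoding. The following sketch, with field names and example values of our own choosing (drawn loosely from Table 1), is one hypothetical way a design agent might hold a prototype in memory:

```python
from dataclasses import dataclass

# Hypothetical encoding of the prototype proforma P = (F, B, S, D, K, C)
# and the knowledge tuple K = (Kr, Kq, Kc, Kct, KR); field names are ours.

@dataclass
class ReflexiveKnowledge:              # KR = (T, P)
    typology: str                      # broad class of the prototype
    partition: list                    # subdivision of the concept

@dataclass
class Knowledge:
    relational: dict                   # Kr: dependencies between F/B/S variables
    qualitative: list                  # Kq: effects of structure changes
    computational: dict                # Kc: models, correlations
    contextual: dict                   # Kct: exogenous variables
    reflexive: ReflexiveKnowledge      # KR

@dataclass
class DesignPrototype:
    function: str                      # F
    behaviour: list                    # B
    structure: list                    # S
    knowledge: Knowledge               # K
    documentation: str = ""            # D
    context: str = "process engineering"  # C

proto = DesignPrototype(
    function="convert raw materials to desired products",
    behaviour=["separation", "reaction", "recycle"],
    structure=["reactor", "distillation column"],
    knowledge=Knowledge(
        relational={"isolate pure product": "separation"},
        qualitative=["recycle structure required"],
        computational={"cost": "cost correlations"},
        contextual={"amortisation period": "plant data"},
        reflexive=ReflexiveKnowledge("process flowsheet",
                                     ["separation", "reaction"]),
    ),
)
```

Keeping the knowledge categories as separate fields mirrors the point made above: each category can be refined independently by a hierarchical or an algorithmic mechanism.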

3. Towards an Ontology, OntoCAPE
An ontology may be defined as "an explicit specification of a conceptualisation" (Gruber 1993). The underlying FBS framework provides natural categories for an ontology of process design. Ontologies were originally motivated by the need for sharable and reusable knowledge bases. However, the reuse and sharing of ontologies themselves is still very limited. Those seeking to reuse a particular ontology do not always share the same model as those who built it: thus, it is often difficult to discover tacit assumptions underpinning the ontology and to identify the key distinctions within it (Gruninger & Lee 2002). The use of a prototype to develop an ontology circumvents these problems: the key distinctions derive from the framework of the prototype; and if a prototype has been fully developed, then all assumptions are made explicit in the knowledge categories. OntoCAPE specifies a conceptualisation of process modelling, simulation, and design. A skeleton ontology has been developed in which the major categories of COGents concepts


Table 1. Generic prototype for conceptual process design.

Function: to convert raw materials to desired products subject to specified constraints: inputs -> outputs

Behaviour:
  Behaviours: separation, reaction, mixing, heating, cooling, recycle, etc.
  Variables: recoveries, rates, duties, etc.

Structure:
  Elements: flash, distillation column, reactor, mixer, heater, etc.
  Variables: number of units, volume of reactor, heights of distillation columns, reaction temperature, operating pressure, etc.
  Properties: component thermophysical properties, thermal conductivity, tray efficiency, etc.

Kr (relational knowledge):
  Function to behaviour: if the function is to isolate pure product, the required/expected behaviour would be "separation".
  Behaviour to behaviour variables: recovery specification for separation units.

Kq (qualitative knowledge): recycle structure required; economic potential has an inverse relationship with raw material costs; etc.

Kc (computational knowledge): unit models; cost correlations; product specifications; reaction equilibria; etc.

Kct (contextual knowledge): plant data, such as amortisation period; site constraints; ambient conditions; etc.

KR (reflexive knowledge):
  Typology: process flowsheet
  Partition: separation, reaction, reaction-separation, with recycle, without recycle, etc.

Figure 2. Top-level structure of OntoCAPE. Application specific concepts: process design, process modelling, process simulation. Common concepts: chemical process system (processing subsystem, with realisation, function and behaviour aspects; processing material) and software system.

are identified and key concepts given for each category. The resulting top level structure of OntoCAPE is illustrated in Figure 2. The full ontology is currently being developed and will provide more detailed class hierarchies, class definitions, attributes, relations and constraints. OntoCAPE comprises a number of relatively independent partial models. In particular, there are partial models common to different CAPE applications and those peculiar to specific applications. The processing subsystem in the skeleton OntoCAPE has three distinctive aspects: realisation, function, and behaviour. This corresponds naturally to the FBS framework of design prototypes. Accordingly, design prototypes, especially concrete examples, provide a suitable organisation of material for use in refining the concepts and relations of the skeleton ontology relevant to process design. Moreover, the full ontology, in turn, can be used to provide a more formal specification of design prototypes. The formal specification of the full ontology will be expressed in DAML+OIL (http://www.daml.org).
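Before the DAML+OIL specification exists, the skeleton's top level can be prototyped as a simple nested structure. The rendering below follows the categories of Figure 2, but the dictionary encoding and helper function are our own, not the project's formal specification:

```python
# Hypothetical rendering of the skeleton OntoCAPE top level as nested
# dictionaries; leaf lists hold the distinctive aspects of a concept.
ONTOCAPE_SKELETON = {
    "application specific concepts": [
        "process design", "process modelling", "process simulation",
    ],
    "common concepts": {
        "chemical process system": {
            # the three aspects mirroring the FBS framework
            "processing subsystem": ["realisation", "function", "behaviour"],
            "processing material": [],
        },
        "software system": {},
    },
}

def aspects(skeleton, subsystem="processing subsystem"):
    """Return the aspects of a subsystem of the chemical process system."""
    return skeleton["common concepts"]["chemical process system"][subsystem]
```

Such a lightweight structure is only a scaffold; the class definitions, attributes and constraints mentioned above require a proper ontology language.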

4. From Design Prototype to Design Agent
To function in an agent based system, the design agent must supplement domain knowledge, knowledge of what is, with problem solving knowledge, know-how. To this end, ontologies and problem solving mechanisms (PSMs), also called problem solving methods or generic task models, go hand-in-hand (van Heijst 1995): ontologies capture domain knowledge; PSMs capture the task-level application of the domain knowledge. Since the FBS framework separates knowledge from the computational processes which operate upon it, a design prototype provides a basis from which to develop a systematic approach to identifying PSMs. The transformations broadly embrace the computational processes through which one category of knowledge is developed into another. We apply PSMs to function to formulate expected behaviour; to behaviour to synthesise structure; to structure to analyse for actual behaviour; and to expected and actual behaviour to evaluate the design.

Thus, well-developed prototypes are invaluable in developing a design agent: an ontology is derivable from the prototypes; the transformation processes of the FBS framework provide us with a basis for a systematic approach to discovering PSMs.
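One way to keep PSMs pluggable, as the ontology/PSM separation suggests, is a registry keyed by FBS transformation. The registry pattern below is a sketch of ours; the two method bodies are trivial placeholders, not COGents components:

```python
# Sketch of a PSM registry: each transformation between FBS knowledge
# categories is served by an interchangeable problem solving method.
PSM_REGISTRY = {}

def psm(transformation):
    """Decorator registering a PSM under a named FBS transformation."""
    def register(fn):
        PSM_REGISTRY[transformation] = fn
        return fn
    return register

@psm("function -> expected behaviour")      # formulation
def simple_formulation(function):
    # placeholder heuristic: a purity goal implies a separation behaviour
    return {"separation"} if "pure" in function else {"reaction"}

@psm("expected + actual -> evaluation")     # evaluation
def superset_evaluation(expected, actual):
    # actual behaviours should be an acceptable superset of expected ones
    return expected <= actual

be = PSM_REGISTRY["function -> expected behaviour"]("isolate pure product")
verdict = PSM_REGISTRY["expected + actual -> evaluation"](
    be, {"separation", "reaction"})
```

Because each transformation is looked up by name, a design agent could swap in, say, a hierarchical or an MINLP-backed synthesis method without touching the rest of the cycle.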

5. References
Banares-Alcantara, R. (1997), 'Design support for process engineering III. Design rationale as a requirement for effective support', Computers and Chemical Engineering 21, 263-276.
Braunschweig, B.L., Fraga, E.S., Guessoum, Z., Paen, D. & Yang, A. (2002), COGents: Cognitive middleware agents to support e-cape, in B. Stanford-Smith, E. Chiozza & M. Edin, eds, 'Proc. Challenges and Achievements in E-business and E-work', pp. 1182-1189.
Douglas, J.M. (1988), Conceptual Design of Chemical Processes, McGraw-Hill International Editions.
Fraga, E.S., Steffens, M.A., Bogle, I.D.L. & Hind, A.K. (2000), An object oriented framework for process synthesis and simulation, in M.F. Malone, J.A. Trainham & B. Carnahan, eds, 'Foundations of Computer-Aided Process Design', Vol. 323 of AIChE Symposium Series, pp. 446-449.
Garcia-Flores, R., Wang, X.Z. & Goltz, G.E. (2000), 'Agent-based information flow for process industries supply chain modelling', Computers chem. Engng 24, 1135-1142.
Gero, J.S. (1990), 'Design prototypes: A knowledge representation schema for design', AI Magazine Winter, 26-36.
Grossmann, I.E., Caballero, J.A. & Yeomans, H. (1999), 'Mathematical programming approaches to the synthesis of chemical process systems', Korean J Chem Eng 16(4), 407-426.
Gruber, T.R. (1993), 'A translation approach to portable ontology specifications', Knowledge Acquisition 5(2), 199-220.
Gruninger, M. & Lee, J. (2002), 'Ontology applications and design: Introduction', Communications of the ACM 45(2), 39-41.
Laing, D.M. & Fraga, E.S. (1997), 'A case study on synthesis in preliminary design', Computers & Chemical Engineering 21(Suppl.), 53-58.
van Heijst, G.A.C.M. (1995), The role of ontologies in knowledge engineering, Thesis, University of Amsterdam.



Engineer Computer Interaction for Automated Process Design in COGents*

I.D. Stalker¹, R.A. Stalker Firth², E.S. Fraga¹
¹Centre for Process Systems Engineering, Department of Chemical Engineering, UCL, London WC1E 7JE, UK
²Summertown Solutions Ltd, Suite 140, 266 Banbury Road, Oxford OX2 7DL, UK
E-mail: i.stalker@ucl.ac.uk

Abstract We identify those interaction issues necessary to foster creativity in automated process design. We apply the key distinctions of Engineer Computer Interaction (Stalker & Smith 2002) to ensure that these are included in the development of a process design agent within the COGents framework (Braunschweig et al. 2002, COGents n.d.). The formalism is used to develop a blueprint for interactivity between a designer and a design agent which fosters creativity in design.

1. Automated Process Design in COGents

A process design problem begins with the desired products, reactions of interest, available processing technologies, raw materials and a set of criteria for ranking. The result of process design is a flowsheet supplemented with design rationale: this is our Design Artefact (Banares-Alcantara 1997). This is a complex task and benefits greatly from the use of automated tools. Recently, agent based systems (Ferber 1999) have received increased interest for application to automated design and simulation (Garcia-Flores et al. 2000). COGents is a European project to use cognitive agents to support dynamic, opportunistic interoperability of CAPE-OPEN compliant software over the internet (COGents n.d., Braunschweig et al. 2002). It is essentially a proof of concept for numerical simulation using agent technology, software components and web repositories, with the chosen context being computer aided process engineering. Part of this work involves the development of a process design agent which will make use of an automated design tool, Jacaranda (Fraga et al. 2000), in coordination with other agents. The current usage scenario for the Jacaranda System is typical of design tools. The user input is comprehensive: the user sets up the system; fully defines the problem; defines the nature of the solution space through the units available for a problem; defines the granularity of the solution space through discretisations of continuous parameters; and provides cost

*Work funded by Project COGENTS, Agent-Based Architecture For Numerical Simulation, funded by the European Community under the Information Society Technologies Programme (IST), under contract IST-200134431.


Figure 1. Use case diagram for the current usage scenario.

models, material components and raw material specifications. We summarise a use case analysis of the current usage scenario in Figure 1. In COGents, we seek to remove the onus from the user through the use of agents. As a necessary step we have identified how to redistribute the use cases appropriately among the design agent, a design tool and the additional COGents framework. We illustrate the anticipated final distribution in Figure 2. The design agent prepares the problem definition with minimum input from the user, obtaining information from other agents in the COGents platform and employing its own knowledge to undertake appropriate decisions.

2. Interaction Issues
Advantages of an agent based approach to process design include the automation of routine tasks, access to up-to-date information, access to new technologies and access to an increased range of solution mechanisms. Reducing the burden on the designer allows him to focus on more creative aspects, increasing the likelihood of truly novel designs. However, an agent based approach not only removes the onus from the user, it also removes a certain amount of control. Consider Figure 1: the user controls the information employed by the design tool, through the level of discretisation, values for variables, the constants used, and so forth. He can make use of the design tool for preliminary explorations of a given solution space, a key to successful design (Navinchandra 1991, Smithers 1998); for example, through the use of partial solutions (Fraga 1998, Fraga et al. 2000). In Figure 2 the level of automation seems to prevent this creative use of the design tool: the designer must either accept the results of the system without question or seek an alternative; should a design problem remain unsolved, there is no indication of nearness to a solution, nor of those constraints which may have restricted particular design alternatives. Thus, there is no information available to guide a reuse of the system or to take on board when preferring an alternative design tool.


Figure 2. Use case diagram for the anticipated final usage scenario.

We seek to realise the full potential of an agent based approach by using the technology to reduce the burden and including mechanisms through which to re-introduce the designer into the loop. One way is to allow a choice of responsibility, from the current situation of Figure 1 to the final situation of Figure 2. This returns control but also returns the burden. A preferable way is to promote an increased interactivity, allowing the designer to supervise the design agent. This returns control without the burden. Engineer Computer Interaction (ECI) is a methodology for coordinating aspects of HCI with domain specific knowledge to facilitate the development of more appropriate software systems to support engineers. ECI developed in Structural Engineering (Stalker & Smith 2002). Application of ECI to a given discipline requires the development of three elements:

Organisational Schema: A representation of the important stages in the life cycle of a design artefact which can be translated into a software structure for computer implementation.

Task Decomposition: A decomposition of the generic tasks in developing the artefact through its life cycle. The decomposition available for each task in the original ECI blueprint offers the following modules: Data Management, to examine the input information for fitness for use; Model Selection, to offer a choice of underlying assumptions; Model Use, to allow appropriate revisions and tuning of models; Viewpoints, to encourage exploration of the space of solutions from different perspectives; and Comparison of

Figure 3. The Function-Behaviour-Structure (FBS) framework. Legend: F = function; Be = expected behaviour; S = structure; Bs = actual behaviour; D = design documentation.

multiple interpretations.

Engineer Identikit: A set of generic engineering representations to facilitate the development of domain specific user system interaction.

3. Automated Process Design with Engineer Computer Interaction
Organisational Schema. We employ the Function-Behaviour-Structure (FBS) framework (Gero 1990), illustrated in Figure 3. The function F of our design artefact is to represent a process which will derive the desired products from raw materials. To realise this function, a sequence of required behaviours, such as separation, reaction, etc. (expected behaviours, Be), is formulated; these are used to synthesise an appropriate structure S; the structure is analysed for cost and actual behaviours, Bs; as design problems are typically underdefined, we are likely to find that the first few drafts of a flowsheet are incomplete (Laing & Fraga 1997), and so the expected behaviours and function are reformulated. Finally, the final design artefact is documented in D.

Task Decomposition. Of particular interest to process design are:

Model Selection and Use. Appropriate model selection and use are vital to synthesis and analysis tasks. For process design in COGents we have access to models within our design tool and also from the additional COGents framework. Access to model parameters is essential: these are often problem specific, for example, amortisation periods, selectivity, conversion, recoveries; a designer often makes a number of choices of discretisation during preliminary explorations (Fraga 1998, Laing & Fraga 1997).

Viewpoints and Comparison. Results from a number of different models are extremely useful for evaluation and reformulation of behaviours. We compare the actual behaviours of the synthesised structure with the formulated behaviours; we compare full and partial solutions generated in order to maximise insight. For example, the primary design tool in COGents, Jacaranda (Fraga et al. 2000), generates the best N solutions, as requested by the user.

Engineer Identikit. Generic engineering representations identified for process design are:

Classification: such as physical properties databases and thermophysical packages; ontologies and information technology based data models (Bayer et al. 2001); subproblem classifications, such as the subproblem dependencies, qualifiers and solution hierarchies in (Fraga 1998); and cost tables.
Procedure and Sequence: such as the necessary order of unit operations; procedural information subsists in computational methods.
Graphical Representations: including flowsheet readers such as HYSYS; simple tools for reading tables of subproblems, for example (Laing & Fraga 1997); and traditional sketches of graphs.
Formulae: including mathematical formulae; reaction equations; and potentially clauses of logic programs to capture the design heuristics of hierarchical approaches, such as developed in (Douglas 1988).
Symbols: depicting the various unit operations and the flowsheets themselves.
Customs and Practice: including standards, guidelines and other information observed as general practice by process designers and engineers.
Tables and Lists: of physical properties; unit specifications and constants; subproblem listings with status measures (Fraga 1998). We note, for example, that applying dynamic programming techniques to process design is based on the use of cost tables (Fraga 1998).
Natural Language: to enlarge upon or provide a commentary to the information in the other categories.
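As a small, concrete illustration of the Viewpoints and Comparison module discussed above, a tool that, in the style of Jacaranda, returns the best N solutions could be compared by ranked cost. The candidate flowsheets and cost figures below are invented for the sketch:

```python
import heapq

# Toy "best N by cost" comparison over candidate flowsheets; each
# candidate is a (cost, description) pair with invented values.
def best_n(candidates, n):
    """Return the n cheapest candidate designs as (cost, name) pairs."""
    return heapq.nsmallest(n, candidates)

candidates = [
    (4.2, "reactor + column + recycle"),
    (3.1, "reactor + flash"),
    (5.0, "reactor + two columns"),
    (3.7, "reactor + column"),
]
top = best_n(candidates, 2)
```

In practice the comparison would span full and partial solutions and multiple models, but the ranking step is essentially this.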

4. Discussion
Agent based systems offer enormous benefits to automated process design, reducing the burden of effort, increasing access to information, models and solution techniques, and so forth. However, it is imperative that we provide interactivity to ensure that the designer has a creative input; that he retains control and has access to partial solutions to foster a systematic search of the design space and computational efficiency. We have applied the key distinctions of ECI to ensure that development of the process design agent accommodates these needs. Enhancing the potential interactivity with a design agent invested with design expertise encourages a less expert user to employ the system in a creative manner similar to that of a more experienced designer. The impact of the inclusion of ECI on the development of a process design agent is minimal. It does not affect the progression suggested by the differences in Figures 1 and 2. Rather, we are increasing the interactivity of the final system, and it is only in light of this system that we can properly determine whether the desirable interaction issues are best served through extending the functionality of the design agent, or through the introduction of a personal assistant agent (Ferber 1999). Notwithstanding, there are ontological implications: we must ensure that our design ontology embraces relevant additional concepts such as partial solutions, cost tables, preliminary exploration, coarseness of discretisation, subproblem dependencies, dependency qualifiers, solution status, and similar.


5. References

Banares-Alcantara, R. (1997), 'Design support for process engineering III. Design rationale as a requirement for effective support', Computers and Chemical Engineering 21, 263-276.
Bayer, B., Krobb, C. & Marquardt, W. (2001), A data model for design data in chemical engineering - information models, Technical Report LPT-2001-15, Lehrstuhl fuer Prozesstechnik, RWTH Aachen.
Braunschweig, B.L., Fraga, E.S., Guessoum, Z., Paen, D. & Yang, A. (2002), COGents: Cognitive middleware agents to support e-cape, in B. Stanford-Smith, E. Chiozza & M. Edin, eds, 'Proc. Challenges and Achievements in E-business and E-work', pp. 1182-1189.
COGents (n.d.), 'The COGents Project: Agent-based Architecture for Numerical Simulation', http://www.cogents.org.
Douglas, J.M. (1988), Conceptual Design of Chemical Processes, McGraw-Hill International Editions.
Ferber, J. (1999), Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence, Addison Wesley.
Fraga, E.S. (1998), 'The generation and use of partial solutions in process synthesis', Chemical Engineering Research and Design 76(A1), 45-54.
Fraga, E.S., Steffens, M.A., Bogle, I.D.L. & Hind, A.K. (2000), An object oriented framework for process synthesis and simulation, in M.F. Malone, J.A. Trainham & B. Carnahan, eds, 'Foundations of Computer-Aided Process Design', Vol. 323 of AIChE Symposium Series, pp. 446-449.
Garcia-Flores, R., Wang, X.Z. & Goltz, G.E. (2000), 'Agent-based information flow for process industries supply chain modelling', Computers chem. Engng 24, 1135-1142.
Gero, J.S. (1990), 'Design prototypes: A knowledge representation schema for design', AI Magazine Winter, 26-36.
Laing, D.M. & Fraga, E.S. (1997), 'A case study on synthesis in preliminary design', Computers & Chemical Engineering 21(Suppl.), 53-58.
Navinchandra, D. (1991), Exploration and Innovation in Design: Towards a Computational Model, Springer-Verlag.
Smithers, T. (1998), Towards a knowledge level theory of design process, in J.S. Gero & F. Sudweeks, eds, 'Artificial Intelligence in Design '98', Kluwer, pp. 3-21.
Stalker, R. & Smith, I. (2002), 'Structural monitoring using engineer-computer interaction', Artificial Intelligence for Engineering Design, Analysis and Manufacturing 16(5). Special Edition: Human-Computer Interaction in Engineering Contexts.



Developing a Methanol-Based Industrial Cluster

Rob M. Stikkelman, Paulien M. Herder, Remmert van der Wal, David Schor
Delft University of Technology, The Netherlands
Interduct, Delft University Clean Technology Institute
Faculty of Technology, Policy and Management, Energy & Industry Section

Abstract We have conducted a study in collaboration with the Port of Rotterdam in which we explored possibilities for developing a methanol-based industrial cluster in that area. The study had two main goals. The first goal was to develop a realistic methanol-based industrial cluster, supported by technical and economic data. For our cluster we have considered plants and processes from the entire production chain. The second goal of the study was to bring together various actors in the field of our proposed methanol cluster. In order to create a common language among the actors and to get the actors involved actively, we developed a virtual prototype of the cluster. During a workshop with the actors, we used the virtual prototype as a vehicle to initiate discussions concerning technical and economic issues and improve upon the proposed cluster. The key actors that are needed to bring about innovative changes are expected to continue the discussions and explorations in this field together in the future.

1. Introduction
The Rotterdam port area in The Netherlands is the main hub in the world-wide methanol infrastructure. About 1 million tonnes of methanol is imported, stored and sold each year. The importance of methanol in the Rotterdam port area is expected to increase, and possibly double, in the future, as the application of methanol in fuel cells may be a promising option for improving the sustainability of the road transportation sector. The world-wide transportation sector currently depends on oil for roughly 98% of its operations. These oil-based fuels contribute considerably to urban air pollution in the form of emissions of CO2, ground-level ozone precursors (NOx), carbon monoxide (CO) and particulate matter (PM). The application of these fuels in conventional combustion engines is also a source of noise pollution. The application of methanol in fuel cells, however, increases energy efficiency and decreases noise and emission levels compared to the conventional combustion engine. When methanol is to be applied broadly in the transportation sector, the current methanol demand will increase far beyond the current world production levels for the downstream production of fuel additives and adhesives. In order to produce the required amounts of methanol, new, sustainable production routes are being explored and developed world-wide (e.g., Herder and Stikkelman, 2003). Accordingly, the importance of existing methanol hubs in the world is expected to increase significantly.


We have conducted a study in collaboration with the Port of Rotterdam, in which we explored futuristic, and sometimes unusual, possibilities for developing a methanol-based industrial cluster based upon the existing methanol infrastructure in that area. The study had two main goals. The first goal was to develop a realistic methanol-based industrial cluster, supported by technical and economic data. The second goal of the study was to bring together the various actors in the field of our proposed methanol cluster, and to create support for the envisaged transformation.

2. Theoretical Background
2.1. Cluster modelling
A number of approaches have been reported in the literature that deal with the modelling of a cluster of industrial processes. A conventional systems engineering approach to modelling clusters, using mass and energy balances for the chain and its subsystems, was introduced by Radgen et al. (1998). It was reported to be a valuable way of modelling and analysing production networks and chains. The authors used existing process simulators, using mass and energy balance calculations to build and analyse chains. In this work, however, we decided to develop a dedicated tool, based on spreadsheets, in order to simplify the building of the virtual cluster. Some other studies aim at optimising an entire cluster with respect to economic and/or ecological objectives. In our study we did not yet aim at obtaining an optimised cluster, but merely at identifying the design space of the methanol cluster. The functional approach as suggested by Dijkema and Reuter (1999) and Dijkema (2001) was used in this study to identify and explore the design space for designing our methanol cluster in the Rotterdam port area. The functional approach can deal effectively with system complexity as it focuses on system functionality instead of system contents, and the functional characteristics of a system are technology-free. A technology-free, functional design of a methanol cluster provided us with the necessary structure for the definition of the cluster design space without compromising or going into detail of the wide array of technical solutions.

2.2. Transition management
The theoretical development of a methanol cluster is of no use if the actors that will have to invest in the new cluster are not involved from the very beginning. These actors can enrich the design space of the methanol cluster with new ideas and alternative plants and processes.
The transformation to a methanol-based cluster will likely be gradual. We therefore used the transition management body of knowledge (e.g., Rotmans et al., 2001) to build our theoretical framework with respect to creating involvement of the various actors in the change processes. Transitions are modelled as S-curves divided into four phases: a pre-development phase is followed by a take-off phase; then the acceleration phase takes place, which is concluded by a phase of stabilisation. Transition management concepts can help to create involvement, to expose barriers to change and to support the taking down of those barriers. An important tool offered by transition management theories is the design of a transition agenda that

Table 1. Overview of subgoals and research methods.

Subgoal                                                   Research method
1. To develop a theoretical framework                     Literature survey
2. To explore and map the design space broadly            Functional modeling
3. To bound the design space                              Interviews and literature survey
4. To quantify the design space                           Virtual Prototype
5. To design a viable methanol-based industrial cluster   Workshop with relevant actors
6. To design a viable transition process                  Workshop with relevant actors

would indicate which stage the transition process is in, and would give an indication of how to reach the next stages by creating a long-term vision and short-term actions.

3. Research Approach

In order to achieve our goals we divided the study into a number of subgoals, and we used a different research approach for each step. The subgoals and associated research methods are summarised in Table 1. We conducted a literature survey in order to build a manageable and useful theoretical framework. This framework was described in the previous section. Second, we developed a functional design of a methanol cluster, using the approach described by Dijkema, and using our current knowledge and expertise regarding the developments in the Rotterdam port area. This functional design was used to identify which actors should be approached if this cluster were to be realised. Through interviews and further literature study we were able to identify the most relevant actors and consult with them in order to obtain a realistic design space. We also used the interviews to get a quantitative feel for the cluster, by asking the various actors about their long-term vision with respect to the developments concerning methanol in the broadest sense. We then turned these interview results into a quantitative model, the Virtual Prototype, describing our design space of alternative methanol-based clusters and allowing users to modify the cluster and get an impression of the viability of alternative cluster designs. Finally, we will use the Virtual Prototype in a workshop with relevant actors as a means to further the transition process. The intended results of the workshop are a well thought out methanol-based industrial cluster in the Rotterdam port area, and the start of a platform or community of actors who need and want to get involved in developing such a cluster.

4. Results

4.1. The methanol cluster
For the functional design of our cluster we have considered plants and processes from the entire production chain, ranging from fossil and renewable fuels to methanol derivatives, such as fiber board plants that make use of formaldehyde. In addition, the cluster includes industries that process or use by-products, such as hydrogen and platinum. The cluster comprises five main functional areas. For each of these functional areas we have made an inventory of possible interactions, flows and subsystems:

1. power production
2. waste processing
3. transportation fuels
4. methanol and derivatives
5. spin-off processes

The functional design of the cluster is shown in Figure 1.

[Figure 1 depicts the cluster as a network linking electricity, biomass, imports, organic waste and fossil fuels to methanol, its derivatives (fiber board, furniture), ICE and FC cars, H2, FC production, Pt recycling and airplane fuel.]

Figure 1. Functional design of methanol cluster.

Power production
Power plants currently use fossil fuels as their main feedstock, but are interested in supplementing their feedstock with biomass. A quick calculation, however, shows that under the current market conditions the application of biomass in a power plant is not economically viable. The economic variable cost margin of the conversion of biomass into electricity would be 10% at most. This is too small a margin to justify the use of biomass in electricity production at this moment. Biomass can, however, be used more economically for the production of synthesis gas by gasification, which can be turned into methanol. Roughly, 1 ton of biomass can be converted into 1.2 tonnes of methanol, rendering the variable cost margin at a more attractive 60%.

Waste processing
The presence of a gasification unit opens up possibilities for the gasification of all kinds of organic wastes, such as solid waste, plastics, sludge, rubber, wood and household waste (Schwarze Pumpe, 2002).

Transportation fuels
Fossil fuels are practically the sole provider of energy for the transportation of goods over the road infrastructure, in the form of natural gas, petrol and diesel. The application of methanol as a replacement fuel in conventional internal combustion engines (ICE), however, is promising. Only strict economic considerations hold back a large-scale introduction of methanol into ICE cars. The use of methanol in cars powered by fuel cells has a brighter future, as methanol can be a convenient and safe hydrogen carrier. The viability of implementing a methanol fuel cell in cars has been demonstrated, among others, by DaimlerChrysler (2002), which has developed a series of demonstration models (NECAR).

Methanol and derivatives
The supply of methanol to the area is expected to grow in the future. This will attract new large-scale installations that process methanol into derivatives, such as olefins through the Methanol to Olefins (MTO) process, and it will cause expansion of the production of formaldehyde. In turn, formaldehyde can be used in the production of fiber board, a key ingredient for the furniture industry. In addition to the import of biomass for gasification purposes, imported wood chips can be used in the production of fiber board.

Spin-off processes
Finally, we introduced a subsystem of spin-off processes to capture any processes that are not directly linked to methanol production or processing, but may come to play a significant role in the future. Generally the life span of fuel cells is shorter than the life span of cars, so we introduced a platinum recycling industry in order to process used fuel cells. In addition, we added an extreme example of using the hydrogen surplus as an aeroplane fuel, since the energy-mass ratio of hydrogen is 3 times higher than that of kerosene. This scenario, however, may well be realised only in the very far future.

4.2. Actor involvement
The relevant actors come from a very wide range of industries. In order to create a common language among the actors and to get them actively involved, we developed a virtual prototype of the cluster, based upon our functional design and the interview results.
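As a rough illustration of how such variable cost margins are computed, the following sketch reproduces the 10% versus 60% comparison of Section 4.1. Only the 1 ton biomass to 1.2 tonnes methanol yield is taken from the text; all prices and costs are hypothetical placeholder figures, not data from the study.

```python
# Illustrative variable-cost-margin comparison. All monetary figures are
# assumed placeholders; only the 1 t biomass -> 1.2 t methanol yield is
# taken from the paper.

def variable_cost_margin(revenue, variable_cost):
    """Margin = (revenue - variable cost) / revenue."""
    return (revenue - variable_cost) / revenue

# Route A: biomass -> electricity (assumed revenue/cost per ton of biomass)
elec_revenue, elec_var_cost = 50.0, 45.0
margin_elec = variable_cost_margin(elec_revenue, elec_var_cost)

# Route B: biomass -> methanol via gasification
methanol_yield = 1.2                 # t methanol per t biomass (from the paper)
methanol_price = 100.0               # assumed price per t methanol
meoh_revenue = methanol_yield * methanol_price
meoh_var_cost = 48.0                 # assumed variable cost per t biomass
margin_meoh = variable_cost_margin(meoh_revenue, meoh_var_cost)

print(f"electricity route margin: {margin_elec:.0%}")   # 10%
print(f"methanol route margin:    {margin_meoh:.0%}")   # 60%
```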
Some key conclusions and trends that could be extracted from these interviews were:
• a main obstacle for methanol cluster development is the high initial investment
• relatively inexpensive natural gas inhibits wide-scale research into biomass applications
• there is a need for research into a large-scale biomass gasifier
• there is a lot of tacit knowledge within companies concerning future developments

During a workshop to be held with the actors, we will discuss and detail our ideas and proposals for a methanol cluster, and we will use the virtual prototype as a vehicle to initiate discussions concerning technical and economic issues of the cluster, and to extract the tacit knowledge present in the actors. The workshop will comprise a panel of representatives of the actors considered in the Virtual Prototype. Sessions will include surveys, hypothetical scenarios and free exchange of ideas to refine our methanol cluster model and to develop a consensus on necessary developments along a transition path. As an example, a hypothetical scenario may take as fact near-term, significant and enduring cost increases in petroleum. Under such supposed conditions, the panel's thinking with regard to creating and operating a methanol cluster at Rotterdam will be captured through survey instruments.


5. Discussion and Conclusions

The preliminary results of our study support many of our ideas about the possibilities for a methanol cluster. The functional design of the cluster proved to be useful in identifying a wide array of possible processes and actors. Secondly, many of the key actors that are needed to bring about such innovative industrial clusters have been interviewed and have indicated that they are very willing to be brought together in a workshop. These actors are expected to further their discussions and explorations in this field by means of several other transition management initiatives that are currently being deployed by the Dutch Ministry of Economic Affairs. We trust that this research contributes to the body of knowledge concerning the development of industrial clusters, as well as to a healthy and competitive methanol-based Rotterdam port area.

6. References

Daimler-Chrysler, 2002, Study Cites Low Cost for Methanol Refueling Stations, methanol.org, March.
Dijkema, G.P.J. and Reuter, M.A., 1999, Dealing with complexity in material cycle simulation and design, Computers and Chemical Engineering, 23 Supplement, pp. S795-S798.
Dijkema, G.P.J., 2001, The Development of Trigeneration Concepts, Proc. 6th World Congress of Chemical Engineering, Melbourne, Australia.
Herder, P.M. and Stikkelman, R.M., 2003, Decision making in the methanol production chain: A screening tool for exploring alternative production chains, International Conference on Process Systems Engineering 2003, Kunming, China.
Radgen, P., Pedernera, E.J., Patel, M. and Reimert, R., 1998, Simulation of Process Chains and Recycling Strategies for Carbon Based Materials Using a Conventional Process Simulator, Computers and Chemical Engineering, 22 Supplement, pp. S137-S140.
Rotmans, J., Kemp, R., van Asselt, M.B.A., Geels, F., Verbong, G., Molendijk, K.G.P. and van Notten, P., 2001, Transitions & Transition Management: The Case for a Low Emission Energy Supply, ICIS BV, Maastricht, The Netherlands.
Schwarze Pumpe, 2002, Sekundärrohstoff-Verwertungszentrum Schwarze Pumpe (SVZ), http://www.svz-gmbh.de/.

7. Acknowledgements This study benefited from the support and expertise of the municipal authority of the Port of Rotterdam, and the authors would like to thank Pieter-Jan Jongmans and Anne van Delft for their co-operation. The authors would also like to acknowledge the valuable contributions of Hugo Verheul (Delft University of Technology) to the study, specifically in the area of transition management.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B. V. All rights reserved.


Risk Premium and Robustness in Design Optimization of Simplified TMP Plant

Satu Sundqvist*, Elina Pajula, Risto Ritala
KCL Science and Consulting, P.O. Box 70, FIN-02150 Espoo, Finland

Abstract
This paper illustrates issues related to optimal design under uncertainty in a simplified TMP (thermomechanical pulp) plant design case. Uncertainty in the case study is due to four dynamic scenarios of the paper machine pulp demand serviced by the designed TMP plant. Both a risk premium approach and a multi-objective optimization technique were employed. In the latter, the worst-case scenario (representing the highest cost) was taken as the robustness measure of the design, and the design parameters were determined as a trade-off between the optimum of the mean cost model (i.e. the stochastic model) and that of the worst-case scenario. The TMP model is a general example of an industrial case having parallel on/off production units and time-variant production costs. Therefore, the design case may also be interesting for fields of the chemical industry other than paper manufacturing, and the optimization procedures can be applied to risk premium and robustness studies in general dynamic optimization cases.

1. Introduction

In papermaking, the TMP (thermomechanical pulp) plant has to satisfy the pulp demand of the paper machine. Design optimization of the simplified TMP plant includes the number of refiners (N_Ref) and the storage tank volume (V_tank) as design parameters. The optimization is genuinely a dynamic problem: paper machine demand and production costs, and thus, when optimally operated, also the number of active refiners, vary in time. In the TMP plant design, the optimum of the total costs is found via a subtask of minimizing the capital costs and the production costs in operations and scheduling optimization. The TMP design optimization is a MINLP (mixed-integer non-linear programming) problem, since it has both a discrete design parameter, N_Ref, and a continuous one, V_tank. The operational optimization subproblem has integer decision variables (number of active refiners in time) affecting the continuous state of the intermediate tank volume through the process dynamics. The tank volume is constrained to stay between a minimum and a maximum volume. In the operational optimization, the task is to schedule startups and shutdowns of refiners in order to minimize the production cost when the demand of the paper machine and the price of electricity are known over a given time horizon.


2. Optimization Procedure

2.1. Operations and scheduling optimization
In general, the operations optimization task is to find suitable set point trajectories for the controllers. As the controllers are omitted from our simplified TMP system model, no setpoint optimization is included in the study. However, the refiner scheduling optimization can also be considered as operations optimization, with the refiner activity set point trajectory as a binary-valued (on/off) function of time. In this case, the operations optimization over a time horizon of some one hundred decision time intervals took approximately one minute using a low-end PC, the Matlab environment and the simulated annealing algorithm (Otten and van Ginneken, 1987).

2.2. Design optimization
The MINLP problem in the TMP case is simple in that the NPV (net present value) per capital employed can be determined by first treating both design parameters (N_Ref and V_tank) as discrete and then interpolating a continuous cost function Cost = f(V_tank) for the optimal number of refiners. Consequently, no advanced MINLP solvers are needed.
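As a rough illustration of the scheduling subproblem (not the authors' Matlab implementation), the on/off refiner scheduling by simulated annealing can be sketched as follows. All rates, prices, penalties and the demand profile are assumed values; the cost function penalizes electricity use, start-ups and tank-bound violations as described in the text.

```python
import math
import random

# Sketch of refiner scheduling by simulated annealing. All numbers below
# (rates, prices, penalties, demand) are illustrative assumptions, not the
# values of the study.

N_REF = 3          # refiners installed (design parameter)
RATE = 1.0         # pulp produced per active refiner per interval
V_MAX = 6.0        # tank volume bound (design parameter)
HOURS = 24
demand = [1.5 if 8 <= t < 20 else 0.5 for t in range(HOURS)]   # assumed demand
price = [30.0 if 8 <= t < 20 else 10.0 for t in range(HOURS)]  # assumed el. price
K_UP = 50.0        # assumed start-up penalty

def cost(n):
    """Electricity + start-up cost, with a soft penalty on tank bounds."""
    c, v = 0.0, V_MAX / 2
    for t in range(HOURS):
        v += n[t] * RATE - demand[t]
        if not 0.0 <= v <= V_MAX:
            c += 1e4                       # infeasible tank trajectory
        c += price[t] * n[t]
        if t > 0 and n[t] > n[t - 1]:
            c += K_UP * (n[t] - n[t - 1])  # start-up cost
    return c

def anneal(iters=20000, temp=100.0, cooling=0.9995, seed=0):
    rng = random.Random(seed)
    n = [1] * HOURS
    best, best_cost = n[:], cost(n)
    cur_cost = best_cost
    for _ in range(iters):
        cand = n[:]
        cand[rng.randrange(HOURS)] = rng.randint(0, N_REF)  # perturb one interval
        cc = cost(cand)
        if cc < cur_cost or rng.random() < math.exp((cur_cost - cc) / temp):
            n, cur_cost = cand, cc
            if cc < best_cost:
                best, best_cost = cand[:], cc
        temp *= cooling
    return best, best_cost

schedule, best_cost = anneal()
print(schedule, best_cost)
```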

2.3. Objective function
With a given scenario of the paper machine TMP demand, the production schedule can be optimized, and with a given probability distribution of all scenarios (p_s), the operational costs as a function of n(t) and V(t) can be calculated. By adding the capital costs, the optimal values for the decision-making amongst the studied design alternatives (N_Ref, V_tank) are obtained.

DESIGN LEVEL:

  min_{N_Ref, V_tank}  Σ_s p_s C_operation(s, N_Ref, V_tank) + C_capital(N_Ref, V_tank)    (1)

subject to OPERATIONS LEVEL:

  n_opt(t; N_Ref, V_max) = argmin_{n(t)} g{n(t)}    (2)

  g{n(t)} = Σ_t c_el(t) n(t) + K_up n_up + K_down n_down

  subject to  dV/dt = n(t) f_ref − f_dem(t),   0 ≤ V(t) ≤ V_max,   0 ≤ n(t) ≤ N_Ref

[Table: IC ($/yr) cost figures for three design alternatives: 298 000 / 140 000; 307 000 / 142 000; 218 000 / 134 000. Row and column labels not recoverable.]
With m measurements y, n inputs u and k disturbances d, where m ≥ n + k, and by assuming independent inputs and disturbances, it follows that rank(F) = k. The fundamental theorem of linear algebra (Strang, 1988) tells that the left null space of F, N(F^T), has dimension m − r, where r = rank(F). Since H ∈ N(F^T) we have dim N(F^T) = m − k, and by assuming that the number of controlled variables must be equal to the number of inputs we get rank(H) = n, so that

  m − k ≥ n   ⇒   m_min = n + k    (7)

so that #y = #d + #u, i.e. the minimum number of measurements needed is equal to the number of inputs plus the number of disturbances. We then have:

Theorem 2.1. Assume we have n unconstrained independent variables u, k independent disturbances d, and m measurements y, of which at least n + k are independent. It is then possible to select measurement combinations

  Δc = H Δy    (8)

such that HF = 0, where F = ∂y_opt/∂d. Keeping c constant at its nominal optimal value then gives zero loss when there are small disturbances d. The matrix H is generally not unique.

In summary, the main idea is to select the selection matrix H such that Δc_opt = H Δy_opt = 0, using m = n + k independent measurements. If the number of available measurements exceeds the number of inputs and major disturbances, there is some freedom in choosing them so as to reduce the implementation error and to maximize the observability of the disturbances in the measurements; see Alstad & Skogestad (2002) for further details.
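Numerically, a valid H can be obtained from the left null space of F, e.g. via the singular value decomposition. The following sketch assumes an arbitrary example sensitivity matrix F (not data from the paper) with n = 1 input, k = 2 disturbances and m = n + k = 3 measurements.

```python
import numpy as np

# Numerical sketch of the null-space selection HF = 0 (Theorem 2.1), with
# n = 1, k = 2, m = 3. F below is an arbitrary assumed example, not the
# gas-lift data.

F = np.array([[ 1.0, 0.5],
              [ 0.8, 1.2],
              [-0.3, 0.4]])          # m x k optimal sensitivity, F = dy_opt/dd

# The left-singular vectors beyond the k columns of F span the left null
# space of F, so their transpose is an (m - k) x m = n x m selection matrix H.
U, s, Vt = np.linalg.svd(F)
H = U[:, F.shape[1]:].T

assert np.allclose(H @ F, 0.0)      # HF = 0: zero loss for small disturbances

# Delta_c = H Delta_y stays at zero when the measurements move optimally
# with the disturbance (Delta_y_opt = F Delta_d).
dy_opt = F @ np.array([0.01, -0.02])
print(H @ dy_opt)                   # ~ [0.]
```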

3. Example: Gas-Lift Allocation Optimization

In many oil/gas fields the production of oil, gas and water is constrained by the processing capacity and other process constraints, such as the available flow-line transportation capacity. Wang et al. (2002) point out that the available literature does not provide robust procedures on how to formulate and solve typical optimization problems for such systems. Often, the "optimization" considers the constraints sequentially, or only subproblems are considered (e.g. by not including the transportation system to the processing facility). Dutta-Roy & Kattapuram (1997) considered the effect of including process constraints for a two-well case that shares a common transportation line to the process. They found that failing to include the process constraints (in this case the transportation line) gave a sub-optimal solution of the problem. Here, we focus on how to implement the optimal operation in the presence of low-frequency disturbances. In typical oil/gas producing systems there are large uncertainties (e.g. reservoir properties, models) and few measurements, so methods that can help operate the process optimally when disturbances occur are of great value. In this paper we consider the gas-lift structure in Figure 1 with the data given in Table 2. The model used is a distributed pseudo one-phase flow model (Taitel 2001) assuming black-oil compositional PVT behavior (Golan & Whitson 1996). The valves are modeled as one-phase with a linear characteristic. The flow model represents a two-point boundary value problem, and the partial differential equations are discretized using orthogonal collocation. The two wells (W1 and W2) are connected to a common transport line (T). We assume that the system is dynamically stable. Gas is injected through valves (CV6 and CV7) to increase the production from the reservoir by reducing the static pressure (head).
The operating objective is to maximize the profit, J = Σ_{i∈{o,g,gi}} p_i m_i, where the indices o, g and gi denote oil, gas and injected gas respectively, p_i is the price for phase i and m_i is the mass rate for phase i. We have neglected water in this analysis. The inputs in this case are u = [V1 V2 V3 V4 V5 V6 V7]^T, where V_i is the valve position for valve i. We assume that the level and pressure of the separator are controlled at their setpoints using CV4 and CV5 respectively. These setpoints can not be manipulated, thus removing 2 DOF. In typical offshore systems, the ratio of oil and gas (GOR, the ratio of stock-tank gas mass to stock-tank oil mass) from each well is not exactly known, so we assume that the low-frequency disturbance is the gas-oil ratio in the reservoir, d = [GOR1 GOR2]^T [kg/kg], where the GOR is given at reservoir properties. The available measurements are the pressures upstream of the valves for the wells (P_V1 and P_V2) and the injection gas mass rates (m_gi,1 and m_gi,2). It is assumed that there is an upper limit on the gas processing capacity

due to compressor limitations in the process. The optimally active constraints (for all disturbances) are [m_g,tot V1 V2 V3], so we have 7 − 2 − 4 = 1 unconstrained DOF. Since it is optimal to control the total gas mass flow at the constraint, we can reformulate the objective to consider only the cost of injecting the gas into the well (J = Σ_{i∈{o,gi}} p_i m_i). In this case we have assumed that p_o = 0.17 [$/kg] and p_gi = −0.05 [$/kg], corresponding to an oil price of $20 per barrel. The cost for recycling gas in the system has been assumed to be half the sale price of natural gas, which was assumed to be 0.1 $/Sm3. Following the procedure in Section 2 we have that m = n + k = 1 + 2 = 3, so we need three measurements. We select P_V1, P_V2 and m_gi,2 as measurements. The optimal sensitivity function (F) is calculated by imposing the above constraints, and requiring HF = 0 results in the controlled variable c_uc = Hy = [0.76 −0.65 0.09][P_V1 P_V2 m_gi,2]^T. The loss is calculated for several structures and is given in Table 1. We see that controlling c = c_uc has good self-optimizing properties, with the lowest average and worst-case loss. A constant setpoint policy for the other controlled variables gives a higher loss.

Table 1. Loss for the alternative controlled variables for the gas optimization case (loss in million $/year).

Rank   c        GOR1: 0.03→0.06   GOR2: 0.10→0.13   GOR1 & GOR2 both   Average
1      c_uc     0.0               0.0               0.16               0.05
2      P_V1     1.0               1.5               1.9                1.5
3      m_gi,2   4.5               2.0               0.5                2.3
4      m_gi,1   1.4               5.3               0.8                2.5
5      P_V2     4.1               2.7               3.2                3.3
Figure 1. Figure of the well network.

Table 2. Data for the gas lift allocation example.

Parameter   Value     Unit        Comment
L_W1,W2     1500      m           Length well 1 and 2
D_W1,W2     0.12      m           Diameter well 1 and 2
L_T         300       m           Length transportation line
D_T         0.25      m           Diameter transportation line
P_res,1     150       bara        Pressure reservoir well 1
P_res,2     155       bara        Pressure reservoir well 2
PI_res,1    1E-7      m3/(s Pa)   Production index well 1
PI_res,2    0.98E-7   m3/(s Pa)   Production index well 2
P_sep       50        bara        Pressure separator
ρ_1         750       kg/m3       Black oil density reservoir 1
ρ_2         800       kg/m3       Black oil density reservoir 2
M_g         20        kg/kmole    Molecular weight gas
GOR_1       0.03      kg/kg       Nominal gas oil ratio well 1
GOR_2       0.10      kg/kg       Nominal gas oil ratio well 2
m_g,tot     15        kg/s        Maximum gas capacity


4. Conclusion

We have derived a new method for selecting controlled variables as linear combinations of the available measurements that, from a linear point of view, have perfect self-optimizing properties if we neglect the implementation error. The idea is to calculate the optimal sensitivity function (Δy_opt = F Δd) and to select controlled variables as linear combinations of the measurements, c = Hy, such that HF = 0. The method has been illustrated on a gas-lift allocation example. The example illustrates that in a constant set-point control structure, selecting the right controlled variables is of major importance.

5. References

Alstad, V. & Skogestad, S. (2002), 'Combinations of measurements as controlled variables; application to a Petlyuk distillation column', Submitted to ADCHEM 2003.
Dutta-Roy, K. & Kattapuram, J. (1997), A new approach to gas-lift allocation optimization, in 'Paper SPE 38333 presented at the 1997 SPE Western Regional Meeting, Long Beach, California'.
Golan, M. & Whitson, C.H. (1996), Well Performance, 2nd edn, Tapir.
Morari, M., Stephanopoulos, G. & Arkun, Y. (1980), 'Studies in the synthesis of control structures for chemical processes, part I: Formulation of the problem, process decomposition and the classification of the controller task, analysis of the optimizing control structures', AIChE Journal 26(2), 220-232.
Morud, J. (1995), Studies on the Dynamics and Operation of Integrated Plants, PhD thesis, Norwegian Institute of Technology, Trondheim.
Skogestad, S. (2000a), 'Plantwide control: the search for the optimal control structure', J. Proc. Control 10, 487-507.
Skogestad, S. (2000b), 'Self-optimizing control: the missing link between steady-state optimization and control', Comp. Chem. Engng. 24, 569-575.
Skogestad, S., Halvorsen, I. & Morud, J. (1998), 'Self-optimizing control: The basic idea and Taylor series analysis', In AIChE Annual Meeting, Miami, FL.
Strang, G. (1988), Linear Algebra and its Applications, 3rd edn, Harcourt Brace & Company.
Taitel, Y. (2001), Multiphase Flow Modeling: Fundamentals and Application to Oil Producing Systems, Department of Fluid Mechanics and Heat Transfer, Tel Aviv University. Course held at Norwegian University of Science and Technology, September 2001.
Wang, P., Litvak, M. & Aziz, K. (2002), Optimization of production from mature fields, in 'Proceedings of the 17th World Petroleum Congress', Rio de Janeiro, Brazil.

6. Acknowledgements Financial support from the Research Council of Norway, ABB and Norsk Hydro is gratefully acknowledged.



Integrating Budgeting Models into APS Systems in Batch Chemical Industries

Mariana Badell, Javier Romero and Luis Puigjaner
Chem. Engng. Dept., UPC, E.T.S.E.I.B., Diagonal 647, E-08028 Barcelona, Spain.
e-mails: [javi.romero, mariana.badell, luis.puigjaner]@upc.es

Abstract
This work proposes a new conceptual approach to be included in enterprise management systems. It addresses the simultaneous integration of supply chain outflows with scheduling and budgeting in short-term planning in batch chemical process industries. A cash flow and budgeting model coupled with an advanced planning and scheduling (APS) procedure is the key of the advance. The model developed in this paper and the results suggest that a new conceptual approach in enterprise management systems, consisting of the integration of enterprise finance models with the company operations model, is a must to improve overall earnings.

1. Introduction

The new economy sets more challenges to overcome in industry, especially in batch chemical processes, where the inherent flexibility of batch processing is to be exploited to obtain maximum enterprise-wide earnings. This flexibility translates into complexity for cash flow management, due to the large product portfolio and the assortment of material resources, equipment, manpower and so on. Hence, in this work we propose that the financial level take advantage of the scheduling and planning level to improve overall enterprise earnings. The accounting systems that have been used up until today are insufficient and will not withstand the contest from the increasingly efficient capital market. Consequently, the value added by transactional-based enterprise systems and their metrics is in doubt. As a first-aid answer, current trends give more weight to treasury and budget than to accounting and planning. Indeed, the reinforcement of cash flow models is a way to put investors in the focus and to take into account the factors that affect them. The majority of companies cannot drive cash flows by matching them in the best manner with the operations and other functional decision tasks that provoke them, due to the lack of integrated cross-functional links. This means that almost no company has an automatic solution equivalent to an MRP for the resource "liquidity". If companies cannot drive liquidity they cannot control it, insolvency being the first step on the bankruptcy path of enterprises. Hence, it is most important to create an integrated tool for simultaneous optimal solutions.

2. Financial Problem Formulation and Previous Work

System integration has received considerable attention but lacks prediction, which reflects the same lack of theoretical background as happened with MRP systems in the sixties. Today, the lack of adequate enterprise computer-aided systems capable of optimally managing the working capital forces CFOs to make decisions using out-of-date, estimated


or anecdotal information. It is like driving by looking through the rear-view mirror: it is not possible to see where the entity is going, only where it has already been, without knowing whether damage was provoked. The common way to begin a budget is to ask about the sales in order to place the cash inflows. That is the difference with our approach: a simultaneous solution of the production demanded is determined in unison, following the guidelines and constraints of the CFO. A review of the previous theoretical work in the literature reveals that, while in the area of deterministic models of cash management most models were developed focusing on individual financial decision types, on the stochastic side of cash management two basic approaches were developed. Baumol's model (1962) had an inventory approach assuming certainty. On the contrary, Miller and Orr (1966) assumed uncertainty. In the sixties, linear programming was introduced to the area of finance, allowing the consideration of intertemporal aspects. Orgler (1969) used an unequal-period LP model to capture the day-to-day aspect of the cash management problem, minimising the budget cost over the planning horizon subject to constraints involving decision variables. Commercial off-the-shelf financial budgeting software uses toolboxes to add forecasts and other facilities, but no budgeting model functionally integrated to work on line at the enterprise level was found in the literature or on the market.

3. The Integrated Model

The proposed framework is shown through a specific case study. This case study consists of a batch specialty chemical plant with two different batch reactors. Here, each production recipe basically consists of the reaction phase. Hence, raw materials are assumed to be transferred from stock to the reactor, where several substances react, and, at the end of the reaction phase, products are directly transferred to lorries to be transported to different customers. The plant product portfolio is assumed to comprise around 30 different products using up to 10 different raw substances. Production times are assumed to range from 3 to 30 hours. Product switch-over basically depends on the nature of the substances involved in the preceding and following batch. Cleaning times range from 0 up to 6 hours, and some sequences are not permitted at all.

Scheduling & Planning Model
The problem to solve covers a 13-week period. Equations 1 to 23 show the scheduling and planning model. The first week is planned with known product demands and the others with known (regular) and estimated (seasonal) demands. Here, orders to be produced are scheduled considering set-up times (cleaning times). In this way, the sequence of orders to satisfy customer requirements, and the assignment of equipment units to orders that minimises the overall required cleaning time, is calculated for the first week. For the rest of the weeks, sudden demands are known just one week in advance, but as their overall number can be estimated, the model leaves enough idle time to accommodate these 'unknowns'. Besides, it is assumed that sudden orders may be accepted as a function of the actual plant capacity. In this way, the model is to be rerun every week as forecasts develop into real orders. The weeks after the first one are not exact. Indeed, they probably won't be executed as calculated, but their planning is useful to know whether there will be enough room to accommodate coming orders.
Here, no exact sequence is calculated, and so no set-up times are considered. The amount of raw materials and final products stored in every week-period is also monitored as a function of the amount stored in the preceding period, the amount bought or produced and the amount consumed or


sold in that period. With this, the model is able to decide when to order raw materials, considering a minimum possible order size or a relationship between unitary raw material cost and order size.

[Equations (1)-(23) define the scheduling and planning model: the total weekly production time TPH as the sum of order production and cleaning times (1), bounded by TP_w ≤ 168 h of week production time (2); sequence-dependent cleaning times CT_o for each order o > 1 (3), with CT_{o=1} = 0 as the initial required cleaning time (4); order-to-equipment assignment constraints, under which products of type R2 cannot be assigned to reactor e1 and products of type R1 cannot be assigned to reactors e2 or e3 (15)-(16); demand satisfaction for firm orders, satisfaction_i = 1; product and raw-material stock balances linking each week's stock to the preceding week's stock plus production/purchases minus sales/consumption, P_Stock_{p,k} and R_Stock_{r,k}; and nonnegativity, P_Stock_{p,k} ≥ 0 and R_Stock_{r,k} ≥ 0 (22).]
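The weekly stock bookkeeping described above can be sketched in a few lines. The function below is a toy illustration only (not the authors' GAMS model); all names are hypothetical, and the per-week inflow/outflow vectors stand in for the model's production, sales, purchase and consumption variables.

```python
# Illustrative sketch of the product and raw-material stock balances:
# Stock_k = Stock_{k-1} + inflow_k - outflow_k, with nonnegativity enforced.
# Names (roll_stocks, p_stock0, ...) are hypothetical.

def roll_stocks(p_stock0, r_stock0, produced, sold, bought, consumed, weeks=13):
    """Propagate product and raw-material stocks week by week over the horizon."""
    p_stock, r_stock = [p_stock0], [r_stock0]
    for k in range(weeks):
        p = p_stock[-1] + produced[k] - sold[k]
        r = r_stock[-1] + bought[k] - consumed[k]
        if p < 0 or r < 0:
            raise ValueError(f"infeasible stock level in week {k + 1}")
        p_stock.append(p)
        r_stock.append(r)
    return p_stock, r_stock
```

In the full model these balances are constraints of an optimisation problem rather than a forward simulation, but the recursion is the same.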

Budgeting Model
Short-term budgeting decisions can be taken every week-period. Weekly production expenses consider an initial stock of raw materials and products. An initial working capital cash is considered, beneath which a short-term loan must be requested. The minimum net cash flow allowed is determined by the CFO considering its variability.


Production liabilities incurred in every week-period are assumed to be due to the purchase of raw materials, and exogenous cash inflows in every week to the sale of products. A short-term financing source is represented by a constrained open line of credit. Under agreement with banks, loans can be obtained at the beginning of any period and are due after one year at a monthly interest rate (e.g. 5%). This interest rate might be a function of the minimum cash. The portfolio of marketable securities held by the firm at the beginning of the first period includes several sets of securities with known face values in monetary units (mu) and maturity week-period k' incurred at week-period k. All marketable securities can be sold prior to maturity at a discount or loss for the firm. Introducing these equations into the budgeting model gives an integrated model for production scheduling and planning and enterprise budgeting.

WCash_k ≥ Min_Cash   (24)

R_Liability_k = Σ_r qb_{r,k} · CostRaw_r   (25)

Exogenous_cash_k = Σ_{i | D_i = k} satis_i · q_i · SaleP_i   (26)

Debt_k ≤ Max_debt   (27)

Debt_k = Debt_{k-1} + Borrow_k − Out_Debt_k + F · Debt_{k-1}   (28)

MS_net_cashflow_k = −Σ_{k' > k} (MSinv_{k,k'} − MSsale_{k,k'}) + Σ_{k' < k} (e_{k',k} · MSinv_{k',k} − e'_{k',k} · MSsale_{k',k})

With this, the cash balance is as follows:

Exogenous_cash_k − R_Liability_k + Borrow_k − Out_Debt_k + MS_net_cashflow_k + WCash_{k-1} + others_k = WCash_k   (29)

Objective function: for m = 3, 6, 9 and 12, cash is withdrawn from the system in the form of shareholder dividends. The objective function consists of maximising these dividends as follows:

others_m = −share_div_I,  m = 3, 6, 9, 12;  I = 1, 2, 3, 4

O.F. = max Σ_I a_I · share_div_I   (30)
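The cash-balance logic of the budgeting model can be illustrated with a toy forward recursion. This is not the authors' LP: it simply enforces the minimum-cash constraint (24) by borrowing on the credit line only when needed, charging interest on outstanding debt; marketable securities are omitted, and all names and rates are made up.

```python
# Toy cash-balance recursion in the spirit of constraints (24), (27) and (29).
# MIN_CASH, MAX_DEBT and the 10% annual rate are illustrative assumptions.

MIN_CASH = 20000.0
MAX_DEBT = 100000.0

def weekly_cash(wcash0, inflows, liabilities, rate=0.10 / 52):
    """Return (cash, debt) week-by-week; borrow the minimum amount that keeps
    WCash_k >= MIN_CASH, paying weekly interest on the outstanding debt."""
    cash, debt = wcash0, 0.0
    cash_hist, debt_hist = [], []
    for rev, bill in zip(inflows, liabilities):
        cash += rev - bill - rate * debt        # cash balance, cf. eq. (29)
        if cash < MIN_CASH:                     # minimum-cash rule, cf. eq. (24)
            borrow = MIN_CASH - cash
            if debt + borrow > MAX_DEBT:        # credit-line limit, cf. eq. (27)
                raise ValueError("credit line exhausted")
            debt += borrow
            cash += borrow
        cash_hist.append(cash)
        debt_hist.append(debt)
    return cash_hist, debt_hist
```

The actual model decides borrowing, security sales and dividends simultaneously by solving the LP, rather than greedily as here.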

4. Results of Integration of Models
The model is run for the plant product portfolio described above: 30 different products using up to 10 different raw substances, production times ranging from 3 to 30 hours, switch-overs depending on the substances involved in the preceding and following batch, and cleaning times from 0 to 6 hours, with some sequences not permitted.

Case Study Planning Results
The proposed model has been implemented in GAMS/CPLEX and solved on a 1 GHz machine; the optimal solution is reached in 190 CPU seconds. Figure 1-a shows the stock profiles of raw materials and final products over the three-month period, and Figure 1-b shows the number of batches of each product to be produced in each week.

Figure 1 (a and b). Planning results when solving the planning & scheduling model.

Case Study Budget Results
For the first run of the budgeting model, it is assumed that no marketable securities are held at the beginning of the period, the initial cash equals the minimum cash (20,000 u.), an open line of credit is available at 10% annual interest, and marketable securities yield 5% annual interest. Over the 12-month horizon, cash is withdrawn for dividend payment at periods (months) 4, 8 and 12. The proposed LP problem is then solved; Figure 2 shows the overall marketable securities held and the cash borrowed during the first 3 months. Cash withdrawn over the year is 185,588 u.

Figure 2. Budgeting results when solving the sequential procedure (marketable securities and debt).

Integrated Model
Figures 3 and 4 show the results of the integration of the models.

Figure 3. Planning results when solving the integrated model.

Figure 4. Budgeting results when solving the integrated model (marketable securities and debt).

The overall cash withdrawn during the year, when using the integrated framework, is 203,196 u.

5. Conclusions
The concept behind improved enterprise resource planning systems is the overall integration of the whole enterprise functionality through financial links. The proposed framework is able to support optimal schedules and budgets in real time. A difference of 9.5% in earnings/year is achieved in the case study when the integrated approach is used. Armed with up-to-the-minute information on the overall budget status, costs and schedules, allocation of resources, reschedules and cost of capital, the enterprise is ready to respond efficiently to events as they arise.
The financial support from the Generalitat de Catalunya (CIRIT), EC-VIPNET (G1RDCT2000-003181) and GCO (BRPRCT989005) projects is acknowledged.

6. References
Badell, M., Puigjaner, L., "Discover a Powerful Tool for Scheduling in ERM Systems", Hydrocarbon Processing, 80(3), 160, 2001.
Badell, M., Puigjaner, L., "A New Conceptual Approach for ERM Systems", FOCAPO, AIChE Symposium Series No. 320, Vol. 94, pp. 217-223, 1998.
Baumol, W.J., "The Transactions Demand for Cash: An Inventory Theoretic Approach", Quarterly Journal of Economics, 66(4), 545-556, 1952.
Miller, M.H., Orr, R., "A Model of the Demand for Money by Firms", The Quarterly Journal of Economics, 80(3), 413-435, 1966.
Orgler, Y.E., "An Unequal-Period Model for Cash Management Decisions", Management Science, 20(10), 1350-1363, 1970.
Srinivasan, V., "Deterministic Cash Flow Model", Omega, 14(2), 145-166, 1986.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


A System for Support and Training of Personnel Working in the Electrochemical Treatment of Metallic Surfaces Athanassios F. Batzias and Fragiskos A. Batzias Laboratory of Simulation of Industrial Processes, Industrial Management Dept. University of Piraeus, Karaoli & Dimitriou 80, Piraeus 185 34, Greece

Abstract
Fuzzy multicriteria analysis is used for decision making in a network of procedures that describes a complete electrochemical finishing plant. The decision alternatives result by means of fault tree analysis and neuro-fuzzy reasoning; the criteria are categorized as objective and subjective. The training of the technical staff is achieved in a cooperative environment by playing with 'what if' scenarios based on real and simulated data.

1. Introduction
Productivity, safety, reliability, liability and quality conditions require a significant degree of skill from the technical personnel. To face lack of skill and/or unexpected events in operation and product quality problems, special cooperative procedures in the domain of Computer Aided Process Engineering (CAPE) should be developed. The basic idea is to change the operational space by means of a Knowledge Based System (KBS), allowing (a) a human operator to interact with the process via less knowledge-intensive rules and (b) the process/quality/knowledge engineers to select the relevant data and to design/develop/implement a neuro-fuzzy mechanism producing these rules in cooperation with the operator. Such cooperation is indispensable when several persons from different human user classes are involved within the same computerised system (Johannsen, 1997; Johannsen and Averukh, 1993). This work deals with a KBS which provides rules and guidance to technical personnel working in the electrochemical treatment of metallic surfaces, also creating a cooperative environment between members of the staff belonging to different hierarchical levels. The same system creates/enriches a local knowledge base and performs Fault Tree Analysis (FTA) when a critical defect has been detected, increasing traceability according to the ISO 9000 series of standards. The KBS consists of the discrete sub-systems CIS, EIS and TIS, and the envelope system FDS. The CIS (Chemical Interactive Sub-system) provides rules to the operator concerning the conditions he must keep in order to obtain the product within an allowed region as regards defects, according to specifications. The CIS is suitable for a chemical process that takes place in a homogeneous bath and is based on a neuro-fuzzy classification algorithm.
The EIS (Electrochemical Interactive Sub-system) provides rules to the operator concerning the conditions he must keep in order to obtain both a defect-free surface and quality according to the specifications set by the client or the market, with minimal cost. The EIS is suitable for an electrochemical process that takes place in a non-homogeneous bath and


is based on a neuro-fuzzy approximation algorithm performing in combination with an external heuristic procedure. The TIS (Topological Interactive Sub-system), hierarchically under the EIS, provides prohibitive rules and offers consultation to the operator concerning the arrangement of jigs and racks within the tank of electrochemical processing, to avoid defects and ensure the desired quality. This is a very difficult task, described in technical manuals as being rather an art than a technology, demanding continuous feedback from the operators and the quality control laboratory. The FDS (Fault Diagnosis System) is an envelope system which contains the above-described sub-systems and the necessary procedures for complete FTA.

2. Methodology
The methodology is heavily based on fuzzy multicriteria analysis, performed by the technical personnel twice in the computer-aided integration of procedures described subsequently in this chapter and depicted in Figure 1 (21 steps interconnected with 8 decision nodes). In step 2, application of a neurofuzzy network predicts product quality (output) from treatment conditions (input); subsequently, a given output, in the form of a vector of accepted interval values defined by the client or the market demand, determines input vectors in clusters by means of the input-output mapping constructed in the learning section of the neurofuzzy network; last, clusters are filtered through the minimal accepted width of values of the input variables, and the remaining ones form a set of alternatives among which the best is chosen by means of multicriteria analysis, applying a fuzzy modification of PROMETHEE (Geldermann et al., 2000) with the following criteria: fixed cost, f1; energy cost, f2; physical productivity (or treatment rate, dependent mainly on current density, with consequences on surface structure and defect appearance), f3; rest variable or operation cost, f4; environmental impact, f5; contribution to inter-lot convenience, dependent mainly on the number, size, quality requirements and priority of the lots programmed, f6; contribution to intra-lot convenience, dependent mainly on the ranges of the treatment control variables (voltage, current density, anodizing time, electrochemical efficiency, concentration, temperature) allowed by production specifications, in relation to the production facilities available, f7.
In step 10, the kind of defect observed is put as the 'top event' at the root of a fault tree, where each cause-effect link is quantified by a fuzzy index of significance of this causal relation, given by experts on the basis of (i) the relative frequency of occurrence in the past and (ii) relevance to scientific theory and experimental data; consequently, the leaves of the tree are the suggested ultimate causes, which must be examined experimentally in order to find the real cause and subsequently make the proper remedial proposal. The order of the tests used for this experimentation is determined by applying a fuzzy multicriteria method like the one mentioned above, with the following criteria: test significance, supported by FTA, g1; equipment availability, g2; reliability, based on analysis of variance (ANOVA) of experimental results obtained under similar treatment conditions in the past, g3; cost, g4; ratio of time required to time available due to production constraints/commitments, g5; expected contribution to explainability, i.e. relation to the corresponding scientific background, g6. It is worth noting that criteria f1, f2, f4, g2, g3, g4 are rather objective, while criteria f3, f5, f6, f7, g1, g5, g6 are rather subjective. The row elements of the multicriteria matrix

used as input, which correspond to subjective criteria, are evaluated by six members of the technical staff (2 engineers/managers, 2 scientists working in the quality laboratory, 2 operators), according to a 3-stage DELPHI method incorporated within the integrated KBS. More specifically, the 2 operators evaluate (assign grades to) the elements corresponding to criteria f6, f7, g5, while the other 4 members of the staff evaluate the elements corresponding to the rest of the subjective criteria f3, f5, g1, g6. All 6 members of the staff evaluate the elements of the weight vector used as input. The whole KBS can be used for both support and training, even by an isolated operator, as all variables and parameter values are provided by the System in real time, depicting current conditions. Similarly, the operator can 'play' with one of the past representative cases saved in the System. During a training session, the trainee activates the KBS by introducing values/choices/situations and receives the System's response; the steps where the initiative belongs to the trainee or the System are symbolized with t or s, respectively.
1t. Input of (i) product requirements/specifications set by the client or the market and (ii) raw material or semi-finished product quality assessment which took place during the previous stages of production/treatment.
2t. Application of CIS or EIS if the process is chemical or electrochemical, respectively, to determine the best conditions for production/treatment by means of fuzzy multicriteria analysis.
3t. Application of TIS, which is necessary in the case of EIS.
4s. Chemical or electrochemical treatment; registration of (i) changes in conditions and (ii) observations of any unexpected event or failure occurring during processing.
5s. Visual inspection of the product, accompanied by simple measurements in situ.
6t. Post-treatment remedial actions for eliminating recognizable light defects.
7s. Separation of defected articles.
8s. Sampling by the Quality Control Committee (QCC).
9s. Offline product quality control in the Laboratory.
10t. Application of (i) FTA to suggest the ultimate cause of the observed defect and (ii) fuzzy multicriteria choice of the best experimental route among candidate alternatives, to confirm or reject the suggestion.
11s. Realization of confirmation testing via the chosen experimental route.
12t. Rejection of defected articles by the QCC; decision on recycle or disposal.
13t. Realization of special surface treatment to bring the articles back to their initial condition, according to the remedial directive issued by the Laboratory.
14t. Implementation of special treatment chosen among recommended practices, e.g. local plating/anodizing, on condition that it is acceptable to the client.
15s. Transient storing of additionally treated defective articles till the issue of the Laboratory testing results.
16t. Sampling among apparently good items according to standard or recommended or agreed practices and dispatch to the Laboratory for offline testing.
17s. Transient storing of apparently good items till Laboratory testing.
18s. Knowledge processing for support and training.

Figure 1: Flow Chart of procedures constituting a complete process in an anodizing/electroplating plant, according to the 21-step CAPE plan described in the text (t: trainee's initiative and demand; s: System's response and supply). Decision nodes: P. Are there defected articles after remedy? Q. Is the suggestion confirmed? R. Is there another alternative experimental route? T. Is oxide stripping feasible? U. Are the quality testing results acceptable? S. Is surface restoration feasible? W. Was the initial fault caused by human mistake? Z. Are there defective articles?


19t. Sensitivity analysis for changing weight values of subjective criteria in the multicriteria input vector.
20t. Sensitivity analysis for changing parameter values of the generalized preference function in the special multicriteria method adopted.
21s. Storing of defected articles.

3. Implementation and Specimen Results

The methodology described in the previous chapter has been applied successfully by the authors to a complete aluminium anodizing plant, which consists of the following processes: cleaning, etching, polishing, electrolytic brightening, sulphuric acid anodizing, dyeing, sealing, finishing. A specimen run concerning the application of step 2 (see Figure 1) to the process of sulphuric acid anodizing of aluminium is presented herein, based on data provided by the Hellenic Aerospace Industry S.A. The input vector consists of the following six variables: voltage, current density, anodizing time, electrochemical efficiency, electrolyte concentration and bath temperature, which take values within the ranges (9-21 V), (0.8-7.6 A/dm²), (10-80 min), (80-90%), (5-30 g H2SO4/L) and (10-28 °C), respectively. The output vector consists of two variables, thickness of oxide and porosity of the anodic layer, which take the values 12±0.5 μm and 11±1% respectively, as set by the client in the specimen run under consideration. The input-output mapping resulting after learning gave 252 six-to-two input-output combinations satisfying the specifications set by the client. These combinations were clustered into 10 groups, which were reduced to 5 after filtering, constituting the set of alternatives Ai (i = 1, 2, ..., 5). The criteria weight vector used in the fuzzy PROMETHEE was f1: 18, 2, 1; f2: 10, 1, 2; f3: 15, 1, 3; f4: 8, 1, 1; f5: 6, 1, 3; f6: 16, 1, 4; f7: 27, 2, 4, where each triadic fuzzy number appears in the usual L,R form. The generalized preference function used was the linear one, with two parameters: q for defining the indifference region (lower threshold) and p for defining the end of linearity in preference change (upper threshold).
The results shown in Figure 2 are (a) at high sensitivity level with low p, q values (p = 0.50, q = 0.25) and (b) at low sensitivity level with medium p, q values (p = 1.0, q = 0.50). These diagrams reveal the possibility for trainees/operators to influence the choice of the best alternative by changing the weight values corresponding to the subjective criteria f3, f5, f6, f7; this possibility is significant only at the high sensitivity level. On the contrary, the influence of the rest of the staff members who participate in training and operating is expressed rather through monitoring the parameter values of the generalized preference function. In this way, all participants, although belonging to different technological cultures and hierarchical levels, learn to cooperate closely during operation and training, as they determine together the conditions of real or simulated production via the KBS. One of the problems that may appear in implementing the present training and support System is that technical personnel belonging to different hierarchical levels, with different cultures, sometimes use linguistic terms with varying contextual meaning, e.g. in describing/evaluating defects, like the ones presented by Batzias and Batzias (2002).
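As a rough illustration of the crisp core of this multicriteria method, the sketch below computes PROMETHEE II net outranking flows with the linear generalized preference function and its thresholds q and p. The paper's fuzzy extension and triadic weights are omitted, and the score matrix and weights in the usage are invented.

```python
# Crisp PROMETHEE II sketch (assumption: fuzzy aspects of the paper's variant
# are dropped). All example data are hypothetical.

def linear_pref(d, q, p):
    """Linear generalized preference: 0 below q, 1 above p, linear between."""
    if d <= q:
        return 0.0
    if d >= p:
        return 1.0
    return (d - q) / (p - q)

def promethee_net_flows(scores, weights, q=0.25, p=0.50):
    """scores[a][c]: value of alternative a on criterion c (higher is better).
    Returns net flows phi(a) = phi_plus(a) - phi_minus(a)."""
    n = len(scores)
    total_w = sum(weights)
    flows = [0.0] * n
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # weighted aggregated preference of a over b
            pi_ab = sum(w * linear_pref(scores[a][c] - scores[b][c], q, p)
                        for c, w in enumerate(weights)) / total_w
            flows[a] += pi_ab / (n - 1)   # contributes to the leaving flow of a
            flows[b] -= pi_ab / (n - 1)   # and to the entering flow of b
    return flows
```

The alternative with the highest net flow is the proposed one; sweeping q and p reproduces the sensitivity experiments of steps 19-20.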

Figure 2. Results at (a) high sensitivity level with low p, q values and (b) low sensitivity level with medium p, q values; the contribution of the trainee/operator to resolution increase for distinguishing the proposed alternative may prove decisive.

A solution to this problem might be the creation of an ontological communication protocol. A similar technique has been suggested by Batzias and Marcoulaki (2002) for the creation of local knowledge bases in the fields of anodizing and electroplating, but we have not yet incorporated this technique into the System described herein.

4. Conclusions
CAPE, in the form of a network of procedures/decisions, can effectively include the human factor to achieve both technical support and staff training. We designed/developed/implemented a Knowledge Based System (KBS) which uses, inter alia, fuzzy multicriteria analysis to determine (i) optimal conditions for chemical or electrochemical surface treatment of metals and (ii) preferable experimental routes for investigating the causes of failure in production. The specimen run of the corresponding software presented herein, based on data supplied by the aluminium anodizing department of a large industrial firm, shows how members of the technical personnel, belonging to different technological cultures and hierarchical levels, can learn cooperatively throughout a computer-integrated production system.

5. References
Batzias, A.F. and Batzias, F.A., 2002, Computer-Aided Chem. Engineering, 10, 433.
Batzias, F.A. and Marcoulaki, E.C., 2002, Computer-Aided Chem. Engineering, 10, 829.
Geldermann, J., Spengler, T. and Rentz, O., 2000, Fuzzy Sets and Systems, 115, 45.
Johannsen, G., 1997, Control Eng. Practice, 5(3), 349.
Johannsen, G. and Averukh, E.A., 1993, Proc. IEEE Int. Conf. on Systems, Man and Cybernetics, Le Touquet, 4, 397.

6. Acknowledgements Aluminium anodizing data supply from the Hell. Aerospace Ind. S.A. and financial support provided by the Research Centre of the Piraeus Univ. are kindly acknowledged.



Sensor-Placement for Dynamic Processes
C. Benqlilou^1, M. J. Bagajewicz^2, A. Espuña^1 and L. Puigjaner^1,*
^1 Universitat Politècnica de Catalunya, Chemical Engineering Department, E.T.S.E.I.B., Diagonal 647, E-08028 Barcelona (Spain), Tel.: +34-93-401-6733 / 6678, Fax: +34-93-401-0979. ^2 University of Oklahoma, 100 E. Boyd T-335, Norman, OK 73019, USA; on sabbatical stay at E.T.S.E.I.B. *Corresponding author.

Abstract This article presents a methodology to design instrumentation networks for dynamic systems where Kalman filtering is the chosen monitoring technique. Performance goals for Kalman filtering are discussed and minimum cost networks are obtained.

1. Introduction

In general, the optimal location of measurement points should take into account aspects that improve plant performance, such as process variable accuracy, process reliability, resilience and gross-error detectability (Bagajewicz, 2000). These performance indicators (e.g. estimation precision, reliability) of a sensor network represent the constraints in the sensor placement design problem. Among different monitoring techniques, such as Kalman filters and various Data Reconciliation schemes, the Kalman filter presents good variance reduction, estimation of process variables and better tracking of dynamic changes in the process (Benqlilou et al., 2002). This performance, however, varies with the position and quality (variance) of the sensors. This paper focuses on the determination of the optimal sensor placement for the use of Kalman filtering.

2. Kalman Filter Algorithm
A linear, discrete, state-space model of a process is usually described by the following equations:

x_i = A x_{i-1} + B u_{i-1} + v_{i-1}   (1)

y_i = C x_i + w_i   (2)

x_i being the n_x-dimensional state vector at instant i (representing time instant t = iT), T the sampling period, u the n_u-dimensional known input vector, v the (unknown) zero-mean white process noise with covariance Q = E[v_i v_i^T], and w the unknown zero-mean white measurement noise with known covariance R = E[w_i w_i^T].

In this work it is assumed that the coefficients of the A, B and C matrices are known at all times and do not change with time; that is, the resulting model is a Linear Time-Invariant (LTI) system model. Given a set of measurements (y_i), it is desired to obtain

the optimal estimators of the state variables x_i. These estimates (x̂_{i/i}) are obtained using all measurements from time t = 1 ... i. By using all the measurements from the initial time onwards to derive the estimates, one is automatically exploiting temporal redundancy in the measured data. The Kalman filter (Narasimhan and Jordache, 2000) starts by assuming an initial estimator of the state variables and an estimator of its error covariance matrix P:

x̂_0 = E[x_0]   (3)

Cov[x_0] = P_0   (4)

These quantities are used for the prediction of the state variable (no control input is considered, that is u = 0) and of the error covariance matrix P of the state estimate as follows:

x̂_{i/i-1} = A x̂_{i-1/i-1} + B u_{i-1}   (5)

P_{i/i-1} = A P_{i-1/i-1} A^T + Q   (6)

The next phase is the updating of the state estimate and its error covariance matrix by using the process measurements:

x̂_{i/i} = x̂_{i/i-1} + k_i (y_i − C x̂_{i/i-1})   (7)

P_{i/i} = (I − k_i C) P_{i/i-1}   (8)

where k_i is the Kalman filter gain given by:

k_i = P_{i/i-1} C^T (C P_{i/i-1} C^T + R)^{-1}   (9)

The corrected values are obtained by formally minimising the sum of squares of the differences between the estimates and the true values of the state variables, and this is thus an extension of the well-known deterministic least-squares objective function.
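The predict/update recursion of equations (5)-(9) can be sketched for the scalar case in a few lines. The A, B, C, Q, R values and the measurement sequence below are illustrative assumptions, not taken from the paper's case study.

```python
# Scalar Kalman filter step, mirroring eqs. (5)-(9). Example parameter values
# (A, B, C, Q, R) are assumptions for illustration.

def kalman_step(x_prev, P_prev, u_prev, y, A=1.0, B=0.0, C=1.0, Q=0.01, R=0.25):
    # Prediction, eqs. (5)-(6)
    x_pred = A * x_prev + B * u_prev
    P_pred = A * P_prev * A + Q
    # Gain, eq. (9)
    k = P_pred * C / (C * P_pred * C + R)
    # Update, eqs. (7)-(8)
    x_upd = x_pred + k * (y - C * x_pred)
    P_upd = (1.0 - k * C) * P_pred
    return x_upd, P_upd

# With constant Q and R the error variance P settles towards its asymptotic
# value, which section 3 proposes as a performance measure.
x, P = 0.0, 0.25          # practical initialisation: P0 = R
for y in [1.02, 0.98, 1.05, 0.97, 1.01]:
    x, P = kalman_step(x, P, 0.0, y)
```

In the multivariable case the same recursion holds with matrix products and a matrix inverse in the gain.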

3. Instrumentation Performance Measure
If the Kalman filter is to be used as the monitoring paradigm, then it is necessary to choose or develop the desired performance measures. We define the performance K_{Perf,j} of the estimation of variable j by averaging its error variance element [P_{i/i}]_{jj} over the entire time horizon:

K_{Perf,j} = (1/n) Σ_{i=1}^{n} [P_{i/i}]_{jj}   (10)

P_{i/i} has been selected as the basis for the evaluation of the performance index since it can be determined a priori, without any previous knowledge of the measurements. The parameters required for its computation are R, Q and P_0. The measurement error covariance R is given by the quality of the sensors, while the process noise covariance Q is generally more difficult to determine because one does not have the ability to observe the process directly. If an ideal process is assumed, where all variability sources are included in the model, Q = 0. Finally, the value of P_0 is selected equal to R (a practical initialisation for the filtering process). Under conditions where Q and R are constant, both the error covariance P_{i/i} and the Kalman gain k_i stabilise quickly and then remain constant. Therefore, the asymptotic value of P_{i/i} can also be used as a performance measure. In fact, when the Kalman filter is applied to a system that is continuous and dynamic, the latter is preferred, whereas when conditions reflect short-lived batch systems the former is more appropriate. It is clear that performance can be constructed for any set of sensors if and only if the variables are observable. Thus, any design model needs to be able to guarantee observability, either independently or through the model equations.

One possible global performance index can be constructed by comparing the performance of the measuring system with that of the same system in which all variables are measured. When only a few variables are of interest, only these will be considered, S being the set of variables of interest:

K_{Perf,1} = Σ_{s∈S} k_0 · (K_{s,Perf_Current} / K_{s,Perf_Optimum})   (11)

k_0 being a smoothing value. However, an alternative performance index can be constructed by adding the absolute values of the indices of the variables included in S:

K_{Perf,2} = Σ_{s∈S} abs(K_{s,Perf_Current})   (12)
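The per-variable index of equation (10) can be sketched for a scalar LTI system by averaging the filtered error variance over the horizon; the P recursion below mirrors equations (6) and (8)-(9) with A = C = 1, and the Q and R values used in practice are whatever the sensors and process dictate.

```python
# Sketch of the performance measure of eq. (10): average [P_{i/i}] over the
# horizon. Scalar case with A = C = 1; Q, R, P0 and n are inputs.

def k_perf(P0, Q, R, n):
    """Average of P_{i/i} over i = 1..n for a scalar LTI system."""
    P, total = P0, 0.0
    for _ in range(n):
        P_pred = P + Q                      # eq. (6) with A = 1
        k = P_pred / (P_pred + R)           # eq. (9) with C = 1
        P = (1.0 - k) * P_pred              # eq. (8)
        total += P
    return total / n
```

Evaluating k_perf for the sensor variances R of a candidate network versus those of the fully measured system gives the kind of Current/Optimum comparison used in the global indices (11)-(12).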

4. Observability
Given the topology of the process and the placement of the sensors, the variable classification procedure aims to classify measured variables as redundant or non-redundant, and unmeasured variables as observable or unobservable. This is an

important task for the performance of DR, since the presence of unobservable or non-redundant variables may generate a singular matrix that could lead to the failure of the DR procedure. Bagajewicz (2000) and other authors have considered the classification procedure in the case of linear DDR or linearised DDR.

This procedure allows obtaining the set of redundant variables, which are introduced into the Kalman filter algorithm via the matrix C; that is, for each measured variable a value of one is introduced in the corresponding diagonal element of C. Once the Kalman filter returns the variance-covariance matrix of the adjusted variables, the variance-covariance matrix of the unmeasured but observable variables is obtained by using the observability model produced by the variable classification procedure. In this way one can obtain the variance-covariance of all variables.

5. Design Model
The proposed minimum-cost model is:

min N_s

subject to:  K_Perf >= f*   (13)
             required observability

where f* is a given threshold. For a given number of sensors N_s, the optimal sensor placement is found by determining the diagonal elements of the observability matrix C (if C_ii = 1 the variable i is measured; if C_ii = 0 it is not). To obtain the best performance, matrix C is varied. One difficulty with this formulation is that the threshold values f* are difficult to assess. It is possible to substitute the required observability constraints with required variances of the state variable estimators; the unobservable variables are then represented by a variable with a very high variance. Since the optimisation problem includes binary variables, the solution is obtained by enumeration. The optimisation strategy is as follows:
1. Determine the optimum performance, given by the case where every process variable involved in the dynamic mass balance is measured, i.e. the observability matrix is the identity matrix.
2. Eliminate one sensor and obtain the list of sensor networks, from the total set of combination alternatives, that satisfy the constraints.
3. Obtain the system performance by selecting the minimum value of the objective function over the list obtained in step 2.
4. Repeat steps 2 and 3 until N_s equals the minimum number of sensors that preserves system observability.
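The enumeration over sensor subsets can be sketched as follows. This is a toy illustration: the performance function (sum of inverse sensor variances over the measured set) and the threshold are stand-ins of ours, not the paper's Kalman-filter-based index, and the search runs upward in network size rather than downward from the fully instrumented case:

```python
import itertools

def best_network(sensor_variances, f_star):
    """Enumerate sensor subsets of increasing size; return the smallest
    network whose best achievable performance meets the threshold f_star."""
    n = len(sensor_variances)
    for n_s in range(1, n + 1):
        best_perf, best_combo = -1.0, None
        for combo in itertools.combinations(range(n), n_s):
            # Stand-in performance index: precision contributed by each sensor.
            perf = sum(1.0 / sensor_variances[i] for i in combo)
            if perf > best_perf:
                best_perf, best_combo = perf, combo
        if best_perf >= f_star:
            return best_combo, best_perf
    return None, None

combo, perf = best_network([1.0, 0.5, 2.0], f_star=2.5)
```

Both search directions visit the same candidate sets; the paper's downward variant additionally stops when removing a sensor would destroy observability.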


Only when the performance index can be expressed in the same units as cost can one construct a true cost-minimisation algorithm. Until then, one needs to look at a spectrum of solutions and decide the best trade-off of performance vs. cost. In any case, the Pareto-optimal space over the different objectives can be determined.

6. Case Study
Figure 1 shows the process network used as a case study to evaluate the proposed sensor placement methodology, taken from Darouach and Zasadzinski (1991): it is formed by eight streams and four nodes. Simulated measured data were generated from true values that obey the balance relations, with the addition of normally distributed random errors with zero mean and known variance-covariance matrix.

Figure 1. Process Network.

In this case study, both level and flow-rate sensors are considered, with no more than one sensor per variable (multi-observation is not considered). Figure 2.a shows the behaviour of the system performance based on comparison with the asymptotic performance (equation 11).

Figure 2.a. Correlation between Ns and the system performance using Eq. 11.
Figure 2.b. Correlation between Ns and the system performance using Eq. 12.

In Figure 2.b the system performance is based on the sum of individual performances (equation 12). The unmeasured variables are emulated by giving them an initial guess with a very high variance. These figures show the results of both approaches and the advantage of these procedures for improving sensor network design decision making. The first approach shows a significant jump in performance when going from 9 to 10 sensors, and suggests that a good choice is to use 10 sensors, because the improvements afterwards are marginal. Figure 2.b, however, does not detect this feature.

7. Conclusions
A new approach to the design of instrumentation networks for dynamic systems, based on Kalman filtering techniques, is presented. Different performance measures are proposed and compared through a case study. The optimisation problem is solved by enumeration. Observability was tackled by two approaches; the same results can be obtained with either, and the difference between them mainly affects the mathematical characteristics of the resulting models (and the required solution procedures), as well as the need for a classification based on observability.

8. References
Bagajewicz, M., 2000, Design and Upgrade of Process Plant Instrumentation, CRC (formerly published by Technomic Publishing Company), ISBN 1-56676-998-1 (http://www.techpub.com).
Benqlilou, C., Bagajewicz, M.J., Espuña, A. and Puigjaner, L., 2002, A Comparative Study of Linear Dynamic Data Reconciliation Techniques, 9th Mediterranean Congress of Chemical Engineering, Barcelona, Nov. 26-30.
Darouach, M. and Zasadzinski, M., 1991, Data Reconciliation in Generalised Linear Dynamic Systems, AIChE J., 37(2), 193.
Narasimhan, S. and Jordache, C., 2000, Data Reconciliation and Gross Error Detection: An Intelligent Use of Process Data, Gulf Publishing Co., Houston, TX.

9. Acknowledgements
Support from the Ministry of Education of Spain for Dr. Bagajewicz's sabbatical stay at E.T.S.E.I.B. is acknowledged.

European Symposium on Computer Aided Process Engineering - 13
A. Kraslawski and I. Turunen (Editors)
© 2003 Elsevier Science B.V. All rights reserved.

Chaotic Oscillations in a System of Two Parallel Reactors with Recirculation of Mass
Marek Berezowski, Daniel Dubaj
Cracow University of Technology, Institute of Chemical Engineering and Physical Chemistry, 31-155 Krakow, ul. Warszawska 24, Poland, e-mail: [email protected]

Abstract
The paper analyses the dynamics of a system of two non-adiabatic reactors operating in parallel. The effect of the recycle ratio and of the division of feedstock on the generation of temperature-concentration chaotic oscillations in the system is investigated.

1. The Model
The system of two independent chemical reactors operating in parallel is presented in Fig. 1.

Fig. 1. Conceptual scheme of the system.

A feed stream with normalised flowrate equal to 1-f is introduced at the inlet of the system, while the inlets of the individual reactors receive streams according to the division q and 1-q. The outlet streams of the products are mixed with each other, yielding the resulting degree of conversion and temperature according to the relation:

α_s = q α_1 + (1 - q) α_2;   Θ_s = q Θ_1 + (1 - q) Θ_2   (1)

The whole system operates with recirculation, which enables recovery of the non-reacted mass and of the heat evolved in the reactors. For tank reactors the corresponding balances are:

Mass balance of reactor 1:
dα_1/dτ = q (f α_s - α_1) + φ_1(α_1, Θ_1)   (2)

Heat balance of reactor 1:
dΘ_1/dτ = q (f Θ_s - Θ_1) + β φ_1(α_1, Θ_1) + σ_1 (Θ_H - Θ_1)   (3)

Mass balance of reactor 2:
dα_2/dτ = (1 - q) (f α_s - α_2) + φ_2(α_2, Θ_2)   (4)

Heat balance of reactor 2:
dΘ_2/dτ = (1 - q) (f Θ_s - Θ_2) + β φ_2(α_2, Θ_2) + σ_2 (Θ_H - Θ_2)   (5)

The kinetics of the reactions in the respective reactors are described by Arrhenius-type relations:

φ_1(α_1, Θ_1) = (1 - f) Da (1 - α_1)^n exp( γ Θ_1 / (1 + β Θ_1) )   (6)

φ_2(α_2, Θ_2) = (1 - f) δ Da (1 - α_2)^n exp( γ Θ_2 / (1 + β Θ_2) )   (7)

As is well known, each of the reactors can generate autonomous temperature-concentration oscillations (Sheintuch and Luss, 1987; Razon and Schmitz, 1987; Zukowski and Berezowski, 2000). Their period depends on the values of the reactor parameters. If the reactors differ from a technological viewpoint, the generated oscillations may differ in frequency. In this situation the mixing of the two outlet streams may lead to time series of multiperiodic or quasiperiodic character. It turns out that if a part of the resulting stream is mixed with the feed, such a system may generate chaotic changes of concentration and temperature in the individual reactors. This has been proved by numerical analysis. In Fig. 2 the Feigenbaum diagram is presented, which illustrates the character of the dynamics of the system under consideration as a function of the recirculation coefficient f.

Fig. 2. Feigenbaum diagram; q = 0.5.

On the vertical axis of the diagram the extreme values of the conversion degree α_s are indicated. Fig. 2 clearly shows intervals of periodic oscillations (lines), of quasiperiodic oscillations (shaded areas generated at the point f = 0), and intervals of chaotic oscillations (shaded areas preceded by a period-doubling scenario). The single steady state is marked by the broken line; it is unstable. The character of the individual intervals has been confirmed both by the sensitivity of the model to initial conditions and by the corresponding Poincaré sections. In Fig. 3 an analogous Feigenbaum diagram is presented, illustrating the character of the dynamics of the system as a function of the feed stream division degree q.
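The qualitative behaviour can be reproduced with a simple explicit-Euler integration of balances of the form above. This is a hedged sketch only: the heat-exchange terms are omitted, and the step size and some parameter values are illustrative choices of ours, not the paper's:

```python
import math

def simulate(f=0.1, q=0.5, Da=0.02, n=1.5, beta=2.7, gamma=15.0,
             delta=1.1, dt=0.05, steps=4000):
    """Euler integration of two parallel tank reactors with recycle.

    State: conversions a1, a2 and dimensionless temperatures th1, th2.
    The mixed outlet (a_s, th_s) is recycled into the feed of both units.
    """
    a1 = a2 = th1 = th2 = 0.0
    for _ in range(steps):
        a_s = q * a1 + (1.0 - q) * a2            # eq. (1): stream mixing
        th_s = q * th1 + (1.0 - q) * th2
        # Arrhenius-type rates; the clamp keeps (1 - a)**n real under Euler overshoot.
        r1 = (1 - f) * Da * max(0.0, 1 - a1) ** n \
            * math.exp(gamma * th1 / (1 + beta * th1))
        r2 = (1 - f) * delta * Da * max(0.0, 1 - a2) ** n \
            * math.exp(gamma * th2 / (1 + beta * th2))
        a1 += dt * (q * (f * a_s - a1) + r1)
        th1 += dt * (q * (f * th_s - th1) + beta * r1)
        a2 += dt * ((1 - q) * (f * a_s - a2) + r2)
        th2 += dt * ((1 - q) * (f * th_s - th2) + beta * r2)
    return a1, a2, th1, th2

state = simulate()
```

Sweeping f (or q) and recording the extreme values of α_s after a transient is the standard way to build Feigenbaum-type diagrams such as Figs. 2 and 3.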


Fig. 3. Feigenbaum diagram; f = 0.1.

Three areas are seen in it. On the left side of the figure there is an area of triple steady states; in the bottom part of this area the unstable branch is seen (broken lines). The middle part of the figure includes an area of stable (continuous line) and unstable (broken line) single steady states. The unstable steady states generate both periodic solutions (single lines) and chaotic solutions (shaded area). On the right side of the figure there is an area of triple steady states, whose bottom unstable fragments (broken line) generate stable periodic solutions (continuous lines). A Poincaré set confirming the chaotic character of the solution is presented in Fig. 4. All the calculations have been performed for the following parameter values: Da = 0.02, n = 1.5, β = 2.7, γ = 15, Φ = 1.5, δ_H = 0.01, σ_1 = σ_2 = 1.1, δ = 1.1.

Fig. 4. Poincaré set; f = 0.1, q = 0.525.

2. Conclusions
It is interesting that two reactors operating in parallel and coupled by a recirculation loop can generate chaotic oscillations. Joining in parallel two units fed by streams of the same composition and identical flowrates amounts to nothing more than an increase of the volume in which the reaction occurs; thus two reactors operating in parallel would seem, from the modelling point of view, equivalent to one larger reactor. Likewise, the application of recycle around a single apparatus does not introduce qualitative changes in the system. This is indeed the case when the system operates at steady state, where all inertial constants are of no importance. The situation is different for unsteady states: different reactor volumes, and thus different residence times in the individual reactors, may generate various types of dynamics. In the example presented, the constant δ = 1.1 is connected with the ratio of the volume of reactor 2 to the volume of reactor 1. In conclusion, we point out that, since a tubular reactor with axial dispersion may be modelled by a cascade of tank reactors, the qualitative results obtained in this study can be transposed to systems of parallel tubular reactors. This means that a parallel heterogeneous tubular system with recycle may also generate temperature-concentration quasiperiodic and chaotic oscillations, even though a single apparatus offers single-periodic oscillations only.

3. Symbols
Da - Damköhler number
f - recycle coefficient
n - order of reaction
q - partition coefficient
α - conversion degree
β - coefficient related to enthalpy of reaction
σ - dimensionless heat transfer coefficient
δ - ratio of volume of reactor 2 to volume of reactor 1
γ - dimensionless number related to activation energy
Φ - dimensionless capacitance of reactor
Θ - dimensionless temperature

Subscripts
1, 2 - refers to reactor 1 or 2
s - refers to outlet from the system
H - refers to heat exchanger temperature

4. References Razon, L.F. and Schmitz, R.A., 1987, Chem. Engng Sci., vol. 42, 1005: Multiplicities and instabilities in chemically reacting systems - a review. Sheintuch, M. and Luss, D., 1987, Chem. Engng Sci., vol. 42, 41: Identification of observed dynamic bifurcations and development of qualitative models. Zukowski, W. and Berezowski, M., 2000, Chem. Engng Sci., vol. 55, 339: Generation of chaotic oscillations in a system with flow reversal.

5. Acknowledgements This work was supported by the State Committee for Scientific Research (KBN-Poland) under grant number PBZ/KBN/14/T09/99/01d.


Control Structure Selection for Unstable Processes Using Hankel Singular Value
Yi Cao* and Prabikumar Saha
School of Engineering, Cranfield University, Bedford MK43 0AL, UK

Abstract
Control structure selection for open-loop unstable processes is the main theme of this paper. The Hankel singular value is used as a controllability measure for input-output selection. The method ensures feedback stability of the process with minimal control effort, and it provides a quantitative justification for controllability. Simulation results with the Tennessee Eastman test-bed problem support the proposed theory.

1. Introduction
One of the most important issues in control structure selection is choosing an appropriate screening criterion, viz. a controllability measure, for input and output combinations. I/O selection is performed based on a plant model and a proposed set of candidate actuators and sensors; a reason for not using all available devices is the reduction of control system complexity. Various controllability measures are available in the literature, often founded on the singular value decomposition, such as singular vectors, the relative gain array, I/O effectiveness factors, etc. However, few of them address the combinatorial issue involved in I/O selection from a large number of candidates, particularly for open-loop unstable processes. Sometimes it may be desirable to perform I/O selection for an open-loop unstable plant that is already equipped with devices used to control certain outputs, so as to ensure feedback stability prior to further design of the control system (McAvoy & Ye, 1994). However, such decisions have been made solely on the basis of engineering understanding. The aim of the present work is to find a quantitative measure that can be used to select among a large number of candidate inputs and outputs for open-loop unstable processes.

2. Theoretical Background
Glover (1986) studied the robust stabilisation of a linear multivariable open-loop unstable system modelled as (G+Δ), where G is a known rational transfer function and Δ is a perturbation (or plant uncertainty). G is decomposed as G1+G2, where G1 is antistable and G2 is stable (Figure 1). The controller and the output of the feedback system are denoted by K and y respectively. G1 is strictly proper and K is proper. Glover (1986) argued that the stable projection G2 does not affect the stabilisability of the system, since it can be exactly cancelled by feedback. The necessary and sufficient condition for G to be robustly stabilised is to stabilise its antistable projection G1.
* Author for correspondence ([email protected])


Figure 1: Closed-loop system subject to plant uncertainty Δ.

In what follows, RL∞ denotes the space of proper rational transfer functions with no poles on s = jω, with norm denoted by ||·||∞; RH∞ denotes the subspace of RL∞ with no poles in the closed right half-plane; A* is the conjugate transpose of a matrix A, whereas for a rational function of s, G1* denotes [G1(-s)]*. The above feedback system is internally stable iff

S, KS, SG1, I - KSG1 ∈ RH∞   (1)

det(I - G1K)(∞) ≠ 0   (2)

S := (I - G1K)^(-1)   (3)

where S is the sensitivity matrix. The objective is to find a controller K that stabilises (G1+Δ) for all allowable perturbations Δ; in other words, the problem is to achieve feedback stability with minimal control effort, i.e. to minimise ||KS||∞. Francis (1986) argues that for technical reasons it is assumed that G1 has no poles on the imaginary axis; thus G1 belongs to RL∞, but not RH∞. In that case, the minimum value of ||KS||∞ over all stabilising K equals the reciprocal of the minimum Hankel singular value of G1*.
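For a first-order antistable plant the quantities involved have closed forms, which makes the bound easy to sketch. This is our own illustrative example, not from the paper: for G1(s) = cb/(s - a) with a > 0, the stable mirror [G1(-s)]* has realisation (-a, b, c), its gramians are P = b^2/(2a) and Q = c^2/(2a), so its single Hankel singular value is sqrt(PQ) = |bc|/(2a), and the minimal control effort is min ||KS||∞ = 2a/|bc|:

```python
def hankel_sv_first_order(a, b, c):
    """Hankel singular value of the stable mirror of G1(s) = c*b/(s - a), a > 0.

    The mirror's gramians solve the scalar Lyapunov equations
    -2aP + b^2 = 0 and -2aQ + c^2 = 0.
    """
    assert a > 0, "G1 must be antistable"
    P = b * b / (2.0 * a)   # controllability gramian
    Q = c * c / (2.0 * a)   # observability gramian
    return (P * Q) ** 0.5

sigma = hankel_sv_first_order(a=1.0, b=2.0, c=3.0)
min_effort = 1.0 / sigma    # Glover/Francis bound on ||KS||_inf
```

A plant with a fast unstable pole or weak gain (large a, small |bc|) has a small Hankel singular value and therefore demands large control effort, which is exactly the screening information exploited for I/O selection.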


Fig. 7. Methodology of wavelet packet PCA.

(1) For each column in the data matrix, select the wavelet packet function ψ_j,m(t) and the wavelet packet decomposition level L, and compute the wavelet packet decomposition coefficients {W_L,0, W_L,1, ..., W_L,2^L-1};

(2) For each variable, use the same best full wavelet packet basis algorithm to process the wavelet packet decomposition tree and find the best wavelet packet decomposition coefficients;
(3) Select these coefficients as column vectors to build the wavelet packet coefficient matrices for the different tree nodes {X_L,0, X_L,1, ..., X_L,2^L-1}; the row number of these matrices is n/2^L and the column number is m;
(4) For these coefficient matrices, use conventional PCA to determine the number of retained principal components and compute the principal component score matrices {T_L,0, T_L,1, ..., T_L,2^L-1} and loading matrices {P_L,0, P_L,1, ..., P_L,2^L-1};
(5) Use the retained score and loading matrices to rebuild the wavelet packet coefficient matrices {X'_L,0, X'_L,1, ..., X'_L,2^L-1};

(6) For each column in {X'_L,0, X'_L,1, ..., X'_L,2^L-1}, combine the corresponding column vectors to get the rebuilt wavelet packet coefficients;
(7) Process these coefficients with the wavelet packet de-noising threshold method to obtain de-noised coefficients;
(8) Use the wavelet packet reconstruction algorithm to recover the samples of each variable {x*_1, x*_2, ..., x*_m};
(9) Build the new data matrix X* and use PCA to select the number of retained principal components and compute the score matrix T and loading matrix P.

The WPPCA combines the ability of PCA to de-correlate the variables by extracting a linear relationship with the ability of wavelet packet analysis to extract auto-correlated measurements. To combine the PCA method and wavelet packet analysis efficiently, each measurement variable, i.e. each column vector of the original data matrix, is decomposed into a wavelet packet coefficient column vector using the same best full wavelet packet basis. That is, the original data matrix X is transformed to WpX, in which Wp is an n×n standard orthogonal matrix whose entries contain the filter coefficients.

Here WP_L,i (i = 0, 1, ..., 2^L-1) denotes the wavelet packet filter coefficient matrix. The relations between principal component analysis of X and of WpX are given by the following two theorems:
Theorem 1: The principal component loading vectors of WpX are the same as those of X, and the principal component score vectors of WpX are the wavelet packet transforms of the principal component score vectors of X.
Proof: Because each column of X is analysed with the wavelet packet using the same transform matrix Wp, the correlations among the columns of WpX and the columns of X are unchanged. From
(WpX)^T (WpX) = X^T Wp^T Wp X = X^T X
we see that when the number of sampling points is large enough, the column means of WpX equal the column means of the original data matrix X, and their covariance matrices are the same. The above equation shows that after the transformation the principal component loading vectors of WpX are the same as those of the original data matrix X. From PCA we have X = T P^T, so WpX = (WpT) P^T. This shows that the principal component score vectors of WpX are the wavelet packet transforms of the principal component score vectors of X. Done.
Theorem 2: When no principal component is discarded at any scale and no wavelet packet coefficient is eliminated by the threshold value, the result of WPPCA equals the result of PCA.
Proof: From the properties of wavelet packet transformation and reconstruction, if the wavelet packet coefficients are not treated, the reconstructed signals are the same as the original signals. When all principal components are retained, the reconstructed data matrix is the same as the original data matrix. Done.
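Theorem 1 rests on the orthogonality of the transform, Wp^T Wp = I, so that (WpX)^T (WpX) = X^T X. A quick numerical check with a 4-point orthonormal Haar matrix and a small made-up data matrix (both are illustrative stand-ins of ours):

```python
def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(row) for row in zip(*A)]

s = 2 ** -0.5
# Orthonormal 4x4 Haar matrix: rows are scaled scaling/wavelet filters.
W = [[0.5, 0.5, 0.5, 0.5],
     [0.5, 0.5, -0.5, -0.5],
     [s, -s, 0.0, 0.0],
     [0.0, 0.0, s, -s]]

X = [[1.0, 2.0],
     [0.0, 1.0],
     [3.0, -1.0],
     [2.0, 0.0]]

G_raw = matmul(transpose(X), X)              # X^T X
WX = matmul(W, X)
G_transformed = matmul(transpose(WX), WX)    # (WpX)^T (WpX)
```

The two 2×2 Gram matrices agree to rounding error, so the covariance structure, and hence the PCA loadings, is preserved by the transform.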

3. Algorithm Performance Simulation
To test the performance of wavelet packet principal component analysis in monitoring a dynamic process with noise, a 2-input 2-output third-order dynamic system with noise is employed. The mathematical model is:

458 zyxwvutsrqponmlkjihgfedcbaZYXWVUTSRQPONMLKJIHGFEDCBA z{k)-.

0.087 -0.102 0.118 -0.191 z{k-\) + zyxwvutsrqponmlkjihgfedcbaZYXWVUTSRQPONMLKJIHGFED z(k-2) 0.847 0.264 0.638 0.147 zyxwvutsrqponmlkjihgfedcbaZYXWVUTSRQPON [0.053

-0.091

[0.476

0.128

z(k-3) +

1

2

3

-4

luik-V)-^

0.8 0.8

a(k-l)

y{k) = z{k)+v{k)-\-0.%o{k) where u is the input variable.

u(k) =

0.811

-0.226

0.477

0.415

M(A:-1) +

-0.328

0.143

0.096

0.188

0.455

-0.162

0.135

0.242

\u(k-3) +

\u(k-2)

0.193

0.689

-0.320

-0.749

\8(k-l)-\-

0.8 0.8

cr(k-l)

where f'is a random white noise which mean value is 0 and variance is 1, v is a random white noise which mean value is 0 and variance is 0.1, (J is a random white noise which mean value is 0 and variance is 1. 1000 normal samples are selected, and a MSPCA model and a WPPCA model are established respectively. The SPE and T^ and control limitation are calculated. When the confidence region is 95%, the control limitations are: 5P^«=1.7564, 7^«=7.0356.
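The SPE and T² statistics behind these control limits can be sketched for the simplest case of one retained principal component. The data and sample points below are illustrative stand-ins of ours, not the simulated system above:

```python
import math

def pca_1pc(data):
    """Fit a one-component PCA to zero-mean 2-D data: returns (loading, variance)."""
    n = len(data)
    a = sum(x * x for x, _ in data) / (n - 1)
    c = sum(y * y for _, y in data) / (n - 1)
    b = sum(x * y for x, y in data) / (n - 1)
    theta = 0.5 * math.atan2(2 * b, a - c)      # rotation to the principal axis
    p = (math.cos(theta), math.sin(theta))      # loading vector
    lam = a * p[0] ** 2 + 2 * b * p[0] * p[1] + c * p[1] ** 2  # score variance
    return p, lam

def t2_spe(x, p, lam):
    """Hotelling T^2 and squared prediction error (SPE) of a new sample x."""
    t = x[0] * p[0] + x[1] * p[1]               # score
    xhat = (t * p[0], t * p[1])                 # reconstruction from the model
    spe = (x[0] - xhat[0]) ** 2 + (x[1] - xhat[1]) ** 2
    return t * t / lam, spe

data = [(1, 1), (2, 2), (3, 3), (-1, -1), (-2, -2), (-3, -3)]
p, lam = pca_1pc(data)
t2_in, spe_in = t2_spe((2.0, 2.0), p, lam)      # sample on the model plane
t2_out, spe_out = t2_spe((1.0, -1.0), p, lam)   # sample off the model plane
```

Samples are flagged when either statistic exceeds the control limit estimated from in-control data (the SPE_α and T²_α values above).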

Fig. 2. SPE and T² plots with variance error: (a) WPPCA, (b) MSPCA.

200 real-time samples are monitored using the proposed model. To test the monitoring performance, a variance-error disturbance is introduced at the 160th sampling time and cancelled at the 162nd sampling time. The plots of SPE and T² are shown in Fig. 2. In Fig. 2(a), only a few sampling points exceed the control limits before the 160th sample, because of the noise, and the disturbance is detected successfully. This shows that the WPPCA detects the abnormal status well and can monitor a noisy dynamic process efficiently. Fig. 2(b) is the result of MSPCA based on wavelet analysis: neither the SPE plot nor the T² plot detects the disturbance occurring at the 160th sampling time. The results show that the WPPCA outperforms MSPCA based on wavelet multi-frequency analysis in process monitoring.

4. Case Study
In this section, the proposed wavelet packet PCA approach is applied to the monitoring of the Tennessee Eastman (TE) process. The Tennessee Eastman process, developed by Downs and Vogel (1993), consists of five major unit operations: a reactor, a condenser, a vapour-liquid separator, a recycle compressor, and a product stripper. A set of disturbances is programmed for studying the characteristics of the control system, listed in Table 1.

Table 1. Process disturbances for the Tennessee Eastman process.

Case     Disturbance                                     Type
IDV(1)   A/C feed ratio, B composition constant          Step
IDV(2)   B composition, A/C ratio constant               Step
IDV(3)   D feed temperature                              Step
IDV(4)   Reactor cooling water inlet temperature         Step
IDV(5)   Condenser cooling water inlet temperature       Step
IDV(6)   A feed loss                                     Step
IDV(7)   C header pressure loss - reduced availability   Step
IDV(8)   A, B, C feed composition                        Random variation
IDV(9)   D feed temperature                              Random variation
IDV(10)  C feed temperature                              Random variation
IDV(11)  Reactor cooling water inlet temperature         Random variation
IDV(12)  Condenser cooling water inlet temperature       Random variation
IDV(13)  Reaction kinetics                               Slow drift
IDV(14)  Reactor cooling water valve                     Sticking
IDV(15)  Condenser cooling water valve                   Sticking
IDV(16)  Unknown                                         Unknown

The reference set contains 1000 samples from normal operation with a sampling interval of 3 min. For a sufficient representation of the normal status of the TE process, these data are sampled in 10 groups, and their mean values are taken as the normal operation data. A WPPCA model is developed from these data. Nine principal components are selected, capturing 90.32% of the variation in the reference set. The control limits shown in each plot correspond approximately to the 95% confidence region, determined using the methodology presented by Nomikos and MacGregor (1994): SPE_α = 7.1509, T²_α = 75.0008. The simulation is run under the first disturbance, IDV(1), which is loaded at the 300th time step. The SPE and T² plots are shown in Figure 4. The disturbance is quickly detected, although before the 200th sample a few SPE and T² points exceed the control limits, giving false alarms. After the 300th time step, the SPE and T² plots are far above the control limits. Figure 5 shows the scores plot; it clearly illustrates that the process projection points move away from the normal situation. This result shows that the WPPCA performs well in monitoring the TE process.

Fig. 4. SPE and T² plots of WPPCA for IDV(1).

Fig. 5. Scores plot of WPPCA for IDV(1).

5. Conclusions
In this paper a wavelet packet PCA is proposed, which integrates the de-noising capability of wavelet packet analysis with that of PCA. The WPPCA algorithm and two theorems were presented. A dynamic 2-input 2-output third-order process with noise was employed to compare the WPPCA with MSPCA based on wavelet analysis; the results show that the WPPCA performs better. Finally, the WPPCA was used for state monitoring of the TE process, and the application shows that the proposed approach is effective.

6. References
Bakshi, B.R., 1998, Multiscale PCA with application to multivariate statistical process monitoring, AIChE J., 44(7), 1596-1610.
Downs, J.J. and Vogel, E.F., 1993, A plant-wide industrial process control problem, Computers and Chemical Engineering, 17(3), 245-255.
Jaffard, S. and Laurencot, P., 1992, Orthonormal wavelets, analysis of operations and applications to numerical analysis, Proceedings of the 3rd China-France Conference on Wavelets.
Nomikos, P. and MacGregor, J.F., 1994, Monitoring batch processes using multiway principal component analysis, AIChE Journal, 40(8), 1361-1375.
Qin, J.S., Lin, W. and Yue, H., 1999, Recursive PCA for adaptive process monitoring, IFAC Congress'99, July 5-9, Beijing, China.
Wise, B.M. and Ricker, N.L., 1991, Recent advances in multivariate statistical process control: improving robustness and sensitivity, Proceedings of IFAC ADCHEM Symposium, 125.

7. Acknowledgments Financial support from the National Natural Science Foundation of China (No. 29976015), the China Excellent Young Scientist Fund, China Major Basic Research Development Program (G20000263), and the Excellent Young Professor Fund from the Education Ministry of China are gratefully acknowledged.


Information Criterion for Determining the Time Window Length of Dynamic PCA for Process Monitoring
Xiuxi Li, Yu Qian*, Junfeng Wang, S. Joe Qin^
School of Chemical Engineering, South China University of Technology, Guangzhou 510640, China
^ Department of Chemical Engineering, The University of Texas, Austin TX 78712, USA

Abstract
Principal component analysis (PCA) is based on, and suited to, the analysis of stationary processes. When it is applied to dynamic process monitoring, the moving time window approach is used to construct the data matrix to be analysed. However, the length of the time window and the moving width between time windows are usually tested and selected empirically. In this paper, a criterion for determining the time window length is proposed for dynamic process monitoring, and a new dynamic monitoring algorithm using this criterion is presented. Finally, the proposed approach is successfully applied to a two-input two-output process and the Tennessee Eastman process for dynamic monitoring.

1. Introduction
For the safety and product quality of a process plant, it is important to monitor dynamic process operation and to detect upsets, abnormalities, malfunctions, or other unusual events as early as possible. Since first-principles models of complicated chemical processes are in many circumstances difficult to develop, data-based approaches have been widely used for process monitoring. Among them, principal component analysis (PCA), which extracts a number of independent components from highly correlated process data, has been applied successfully for process monitoring. A limitation of PCA-based monitoring is that the PCA model is built from stationary data, while most real industrial processes are dynamic; when PCA is used to monitor a process with dynamic characteristics, it is hard to distinguish faults from normal dynamic variation. A number of efforts have been made to improve the performance of PCA-based monitoring techniques (Ku, 1995; Qian, 2000; Kano, 2001; Qin, 2001). However, these articles have not given criteria for selecting the time window length. In this paper we propose a criterion for determining the time window length, using an information criterion (IC) to compare dynamic monitoring characteristics under different window lengths. Equipped with the proposed window length selection criterion, a new dynamic monitoring algorithm is presented.

2. Augmenting the Data Matrix with the Moving Time Window Approach
The moving time window approach uses a window of fixed length and moves it along the time axis with a fixed step. With this approach, local dynamic characteristics of data series can be captured and variations of the data series can be analysed in a timely fashion. The two major parameters of the approach are the window length and the moving width between windows. Selection of these two parameters is critical and not easy. For example, if the window length is too large, the computational complexity is augmented enormously; on the other hand, if it is too small, certain dynamic characteristics might be lost. The key to choosing the moving width is to balance computational complexity against the risk of missed alarms; in practice, it is chosen from operating experience of the industrial process.
* Corresponding author. Tel: +86 (20) 87112046; e-mail: [email protected].

Let the raw data matrix be $X \in \Re^{n \times m}$. In dynamic PCA models, the augmented matrix $X' \in \Re^{n' \times m'}$ is depicted as follows:

$$
X' = \begin{bmatrix}
x_1(t) & \cdots & x_m(t) & x_1(t-1) & \cdots & x_m(t-l) \\
\vdots & & & & & \vdots \\
x_1(t+n'w-w) & \cdots & x_m(t+n'w-w) & x_1(t+n'w-w-1) & \cdots & x_m(t+n'w-w-l)
\end{bmatrix}_{n' \times m'}
\tag{1}
$$

where $l$ and $w$ are the time window length and the moving width between windows, and $n'$ and $m'$ are given by

$$
n' = \operatorname{int}\big((n-l)/w\big) + 1, \qquad m' = (l+1)m
\tag{2}
$$
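As a concrete illustration, the window augmentation of Eqs. (1)-(2) can be sketched in a few lines (a sketch only; the function and variable names are ours, not the authors'):

```python
import numpy as np

def augment(X, l, w):
    """Build the augmented matrix X' of Eq. (1): each row stacks the newest
    sample of a window with its l time-lagged copies (m' = (l+1)m columns),
    and successive rows are shifted by the moving width w."""
    n, m = X.shape
    n_prime = (n - l - 1) // w + 1        # Eq. (2), 0-based indexing
    rows = []
    for k in range(n_prime):
        t = l + k * w                     # newest sample index in window k
        rows.append(np.concatenate([X[t - j] for j in range(l + 1)]))
    return np.asarray(rows)
```

For example, n = 10 samples of m = 2 variables with l = 2 and w = 3 yield an n' x m' = 3 x 6 augmented matrix.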

We use AR models to describe the time-lagged variables within the $X'$ matrix:

$$
x_i(t) + a_{i1} x_i(t-1) + \cdots + a_{il} x_i(t-l) = e_i(t), \qquad i \in [1, m]
\tag{3}
$$

where $l$ is the order of the AR model, equivalent to the time window length. Thus we select the time window length based on the approach used to determine the order of an AR model.

3. Information Criterion for Determining the Time Window Length

In Section 2 we introduced model identification to interpret dynamic PCA, and found that the selection of the time window length can accordingly be solved with the approach used to determine the order of an AR model. Identification of a system model may be regarded as confirming the probability distribution of a stochastic process. From this viewpoint, the principle of information criteria is to select the model structure that maximizes the closeness between the real probability distribution and the probability distribution estimated from observation data. Shannon entropy is commonly used to measure this closeness; it is given by:

$$
H_x = -\int f(x) \ln f(x)\, dx
\tag{4}
$$

where $x$ is a continuous stochastic variable.

Let the stochastic variables $x$ and $y$ be the output variable and the observation variable of the system to be identified, respectively. Let $f(x)$ be the real probability density of the stochastic process, and $g(x|y)$ be the probability density estimated from the observation variable $y$. The difference between the entropy of the real probability distribution and the entropy of the estimated probability distribution measures the closeness of the two distributions. It is given by:

$$
I(f, g) = -\int f(x) \ln f(x)\, dx - \left(-\int f(x) \ln g(x|y)\, dx\right) = -\int f(x) \ln\!\left[\frac{f(x)}{g(x|y)}\right] dx
\tag{5}
$$

Taking into account the model parameter vector $a$, which is a function of the observation variable $y$, we have $g(x|y) = g[x|a(y)]$. The information criterion is then defined as: select the model order $l$ that maximizes the function $J(g)$, given by

$$
J(g) = E_x\{\ln g(x|a(y))\}
\tag{6}
$$

Based on the same observation data, a different estimated parameter vector $a_l$ is determined for each candidate model order $l$, and accordingly a different probability density function is calculated. The model order that maximizes $J(g)$ is taken as the most appropriate order. Let $a_0$ denote the estimated parameter vector that maximizes $E_x\{\ln g[x|a(y)]\}$, and let $a_{ML}$ be the maximum likelihood estimate in the neighbourhood of $a_0$. Neyman and Pearson proposed an important asymptotic relation:

$$
2 \ln \frac{g[y|a_{ML}]}{g[y|a_0]} \sim \chi_r^2
\tag{7}
$$

That is:

$$
2 \ln g[y|a_{ML}] - 2 \ln g[y|a_0] \sim \chi_r^2
\tag{8}
$$

where $g[y|a_{ML}]$ is the maximum likelihood function of the observation variable $y$, and $r$ is the number of degrees of freedom of the $\chi^2$ distribution. Akaike proved that:

$$
E_x\{2 \ln g[x|a_0] - 2 \ln g[x|a_{ML}]\} \sim \chi_r^2
\tag{9}
$$

From Eqns. 8 and 9, we have

$$
-2 E_y\{E_x[\ln g(x|a_{ML})]\} = -2 E_y\{\ln g[y|a_{ML}]\} + 2r
\tag{10}
$$

Let

$$
IC = -2 \ln g[y|a_{ML}] + 2r
\tag{11}
$$

then computation of the maximal value of $J(g)$ is transformed into computation of the minimal value of the IC. The logarithm of the maximum likelihood function, $\ln g[y|a_{ML}]$, satisfies

$$
\ln g[y|a_{ML}] = -\frac{n}{2} \ln S_e^2
\tag{12}
$$

where $n$ is the sample number and $S_e^2$ is the variance of the residual $e$. Jenkins and Watts proved that

$$
S_e^2 = \frac{1}{n - 2l - 1} \sum_i e_i^2
\tag{13}
$$

For dynamic PCA, the residual variance can be inferred as follows:

$$
S_e^2 = \frac{\sum_{i=1}^{n} \sum_{j=1}^{m} (x_{ij} - \hat{x}_{ij})^2}{m(n - 2l - 1)}
\tag{14}
$$

where $X$ is the raw data matrix and $\hat{X}$ is the data matrix reconstructed by the conventional PCA. Then, from Eqns. 11, 12, 13 and 14, we obtain the information criterion as follows: select the moving time window length $l$ that makes

$$
IC = n \ln \frac{\sum_{i=1}^{n} \sum_{j=1}^{m} (x_{ij} - \hat{x}_{ij})^2}{m(n - 2l - 1)} + 2r
\tag{15}
$$

minimal, where $r$ is the number of estimated parameters, equal to $l$ in the AR model.
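A sketch of how Eq. (15) can be evaluated for a set of candidate window lengths (illustrative code of our own: the PCA reconstruction residual of the augmented matrix stands in for the AR residual variance, and r = l):

```python
import numpy as np

def ic_value(X, l, w, n_pc):
    """Information criterion of Eq. (15) for a candidate window length l."""
    n, m = X.shape
    # augmented matrix of Eq. (1): rows shifted by w, lags 0..l
    Xa = np.array([np.concatenate([X[t - j] for j in range(l + 1)])
                   for t in range(l, n, w)])
    Xa = Xa - Xa.mean(axis=0)             # mean-centre before PCA
    _, _, Vt = np.linalg.svd(Xa, full_matrices=False)
    P = Vt[:n_pc].T                       # loadings of the retained PCs
    sse = np.sum((Xa - Xa @ P @ P.T) ** 2)
    return n * np.log(sse / (m * (n - 2 * l - 1))) + 2 * l

# the window length is the candidate minimising the IC, e.g.
# l_best = min([1, 2, 3], key=lambda l: ic_value(X, l, w=4, n_pc=17))
```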

4. A Dynamic Monitoring Scheme Using the Moving Time Window

To make the proposed parameter selection algorithm useful in real plants, we propose a dynamic monitoring scheme that addresses the issue of steady operating conditions varying with time. With the tolerance limits Q_α and T²_α, a complete procedure for dynamic process monitoring, including the moving time window approach and the parameter selection criterion, is implemented in real time as follows:
1) Use a variable correlation analysis approach to determine the raw data matrix X.
2) Calculate P, T and X̂.
3) Depending on operational experience, select the moving width w between time windows.
4) Select a time window length set [l_1, l_2, ..., l_s] and calculate the corresponding augmented matrices [X'_1 ... X'_s].
5) Calculate [P'_1 ... P'_s], [T'_1 ... T'_s] and [X̂'_1 ... X̂'_s].
6) Calculate [IC_1 ... IC_s] from Eqn. 15. Select the minimal value IC_min from [IC_1 ... IC_s]; the corresponding l, X', P', T' and X̂' are selected.
7) For real-time data, calculate Q_i and T²_i. If Q_i > Q_α or T²_i > T²_α, further identify which variable causes the abnormal situation using the variable contribution plot.
8) Otherwise, set i = i + 1 and go to step 7.
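Step 7 of the procedure can be sketched as follows (illustrative code with names of our own choosing: the Q (SPE) statistic of one augmented sample is checked against its limit, and the per-variable squared residuals are returned for the contribution plot):

```python
import numpy as np

def spe_check(x_new, P, q_limit):
    """Q (SPE) statistic of one augmented, mean-centred sample against the
    DPCA loading matrix P (columns = retained PCs), plus the squared
    per-variable residuals used in the contribution plot."""
    residual = x_new - P @ (P.T @ x_new)   # part unexplained by the model
    q = float(residual @ residual)
    return q, q > q_limit, residual ** 2
```

With orthonormal loadings, a sample lying entirely inside the model subspace gives q = 0; a large q flags an abnormal situation, and the largest entries of the contribution vector point to the responsible variables.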

5. Application to dynamic process monitoring

In this section, the proposed dynamic monitoring scheme is applied to the monitoring problem of the Tennessee Eastman process. The Tennessee Eastman process, developed by Downs and Vogel (1993), consists of five major unit operations: a reactor, a condenser, a vapor-liquid separator, a recycle compressor, and a product stripper. The process has 41 measurements, including 22 continuous process measurements and 19 composition measurements, and 12 manipulated variables. A number of disturbances are programmed for studying the characteristics of the control system, listed in Table 1.

Table 1. Process disturbances for the Tennessee Eastman process.

Case     Disturbance                                      Type
IDV(1)   A/C feed ratio, B composition constant           Step
IDV(2)   B composition, A/C ratio constant                Step
IDV(3)   D feed temperature                               Step
IDV(4)   Reactor cooling water inlet temperature          Step
IDV(5)   Condenser cooling water inlet temperature        Step
IDV(6)   A feed loss                                      Step
IDV(7)   C header pressure loss - reduced availability    Step
IDV(8)   A, B, C feed composition                         Random variation
IDV(9)   D feed temperature                               Random variation
IDV(10)  C feed temperature                               Random variation
IDV(11)  Reactor cooling water inlet temperature          Random variation
IDV(12)  Condenser cooling water inlet temperature        Random variation
IDV(13)  Reaction kinetics                                Slow drift
IDV(14)  Reactor cooling water valve                      Sticking
IDV(15)  Condenser cooling water valve                    Sticking
IDV(16)  Unknown                                          Unknown

The reference set contains 1000 samples from normal operation with a sampling interval of 3 min. Considering the performance of PCA on multivariate processes, we select the time window length set as [1 2 3]. From Eqn. 15, we compute the IC values for the different l; based on the proposed IC approach, l is chosen as 3. The window moving width w is chosen as 4. Thus the data matrix is a (250x64) array, where 250 is the number of available data windows and 64 is the number of variables. A linear DPCA model is developed from this data matrix. Seventeen principal components are selected, capturing 97.7% of the variation in the reference set. The control limits shown in every plot correspond approximately to the 95% confidence region, determined using the methodology presented by Nomikos and MacGregor. To verify the effectiveness of the proposed IC approach, four comparative simulations for the DPCA model and the CPCA model are made. In the first case, the disturbance IDV(1) is introduced at sample 200. The result is shown in Fig. 1. Using the CPCA, many SPE values exceed the control limit even though the process is normal; the DPCA, however, detects the disturbance well. The second case involves the disturbance IDV(2), and the result is shown in Fig. 2. In general, the performance of dynamic PCA with appropriate l and w should be better than that of conventional PCA.
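The choice of seventeen principal components capturing 97.7% of the variation corresponds to a simple cumulative-explained-variance rule, which can be sketched as follows (our own illustrative helper, not the authors' code):

```python
import numpy as np

def n_components_for(Xa, target=0.977):
    """Smallest number of PCs whose cumulative explained variance of the
    (mean-centred) data matrix reaches the target fraction."""
    Xc = Xa - Xa.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)       # singular values
    ratio = np.cumsum(s ** 2) / np.sum(s ** 2)    # cumulative variance share
    return int(np.searchsorted(ratio, target) + 1)
```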

Fig. 1. SPE plots of data characterizing IDV(1) projected onto the DPCA model and the CPCA model.

Fig. 2. SPE plots of data characterizing IDV(2) projected onto the DPCA model and the CPCA model.

6. Conclusions

Inspired by the principle of identifying a dynamic time-series model by determining its order, we present in this paper an Information Criterion (IC) approach to select the time window length in dynamic PCA. A new dynamic monitoring algorithm based on the proposed IC approach is then presented. Application to the TE process showed that the proposed IC approach is effective. Dynamic PCA with an appropriate time window length and window moving width is able to detect and identify faults and abnormal events better than the conventional PCA approach.

7. References
Downs, J.J. and Vogel, E.F., 1993, A plant-wide industrial process control problem. Computers and Chemical Engineering, 17, 245-255.
Jenkins, G.M. and Watts, D.G., 1968, Spectral Analysis and its Applications. San Francisco: Holden-Day.
Kano, M., Hasebe, S., Hashimoto, I. and Ohno, H., 2001, A new multivariate statistical process monitoring method using principal component analysis. Computers and Chemical Engineering, 25, 1103-1113.
Ku, W., Storer, R. and Georgakis, C., 1995, Disturbance detection and isolation by dynamic principal component analysis. Chemometrics and Intelligent Laboratory Systems, 30, 179-196.
Li, W. and Qin, S.J., 2001, Consistent dynamic PCA based on errors-in-variables subspace identification. Journal of Process Control, 11, 661-678.
Lin, W., Qian, Y. and Li, X.X., 2000, Nonlinear dynamic principal component analysis for on-line process monitoring and diagnosis. Computers and Chemical Engineering, 24, 423-429.

8. Acknowledgments
Financial support from the National Natural Science Foundation of China (No. 29976015), the China Excellent Young Scientist Fund, and the China Major Basic Research Development Program (G20000263) is gratefully acknowledged.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Tendency Model-based Improvement of the Slave Loop in Cascade Temperature Control of Batch Process Units

Janos Madar, Ferenc Szeifert, Lajos Nagy, Tibor Chovan, Janos Abonyi
University of Veszprem, Department of Process Engineering, Veszprem, P.O. Box 158, H-8201, Hungary, [email protected]

Abstract
The dynamic behaviour of batch process units changes with time, which makes their precise control difficult. The aim of this paper is to highlight that the slave process of batch process units can have more complex dynamics than the master loop, and very often this is the reason for unsatisfactory control performance. Since the slave process is determined by the mechanical construction of the unit, the above problem can be effectively handled by a model-based controller designed using an appropriate nonlinear tendency model. The paper presents the structure of the tendency model of typical slave processes and a case study where real-time control results show that the proposed methodology gives superior control performance over the widely applied cascade PID control scheme.

1. Introduction
In the pharmaceutical and food industries, high-value-added products are manufactured mainly in batch process units. The heart of these units is generally a jacketed stirred tank in which not only chemical transformations but also distillation, crystallisation, etc. can be performed. As the aim of the pharmaceutical industry is to produce high-quality and high-purity products, the optimisation and control of operating conditions is the most efficient approach to produce efficiently, as well as to reach specific final conditions of the product in terms of quality and quantity (Cezerac, 1995). The S88.01 batch control standard defines three types of control needed for batch processing (ECES, 1997):
• Basic control comprises the control dedicated to establishing and maintaining a specific state of the equipment and process. Hence, basic control includes regulatory control, interlocking, monitoring, and exception handling.
• Procedural control directs equipment-oriented actions to take place in an ordered sequence in order to carry out a process-oriented task. Hence, procedural control is characteristic of batch processes.
• Co-ordination control directs, initiates and/or modifies the execution of procedural control and the utilisation of equipment entities. Examples of co-ordination control are algorithms for supervising the availability of equipment capacity, allocating equipment to batches, etc.
When these control types are applied to the equipment, the resulting equipment entities provide suitable process functionality and control capability. However, many theoretical and practical problems have to be solved for a successful implementation.


Recently, research has focused on the higher hierarchical levels (co-ordination control such as scheduling and optimisation, and procedural control) (Garcia, 1995; Terwiesch, 1998; Book, 2000), and basic control is assumed to be the same as the control of continuous processes. However, in the batch environment there may be higher requirements on the performance of basic control (Fisher, 1990). Due to the complexity of the chemical synthesis and the difficulty of estimating reactant compositions on-line, the control of reactors remains basically a temperature control problem, commonly performed directly or indirectly through heat exchangers with a heat transfer fluid circulating in the jacket surrounding the reactor. For this purpose, simple cascade control systems are generally used (Cezerac, 1995), and usually not much attention is paid to the tuning of the slave control loop (i.e. in practice simple proportional controllers are used). However, according to our industrial experience, the performance of the complex and hierarchical solutions is primarily constrained by the performance of the split-range controller in the slave loop. Hence, this paper focuses on slave-loop control and presents a detailed case study where the impact of the slave loop is illustrated by the temperature control of a pilot plant presented in Section 2. The tendency model of the jacket of the reactor is given in Section 3, while in Section 4 this model is utilized in a model-based control algorithm. Based on the real-time control results also presented in that section, conclusions are drawn in Section 5.

2. Process Description: Prototype of Pharmaceutical Process Systems
To study the control strategies of multipurpose production plants, a prototype of pharmaceutical process systems was conceptualized. The physical model of this system was designed and installed in our laboratory. The P&I diagram of the process is shown in Figure 1. As depicted in Figure 1, the central element of the process unit is a 50 litre stirred autoclave with a heating-cooling jacket. The unit contains feeding, collector and condenser equipment.

Figure 1. The prototype of the pilot process unit.

Through the jacket, the direct heating-cooling system allows cooling with chilled water or heating with steam in steam or hot water mode. In the hot water heating mode the water is circulated by a pump, the steam is introduced through a mixer, and the excess water is removed through an overflow. Since the system is multipurpose and operates in batch regime, precise temperature control is not a simple problem. In industrial practice, a cascade structure is used to decompose the problem into two parts. Contrary to the classical PID-PID cascade scheme, in this paper model-based controllers are applied at both the master and slave levels. The manipulated variable at the master level is the jacket temperature; the disturbance (which can be indirectly determined in model-based solutions) is the heat flux of the processes taking place in the reactor. The manipulated variables of the slave process are the valve signals (V1: steam valve, V2: cooling water valve); the disturbance is the reactor temperature; the output is the characteristic jacket temperature. The latter can be defined as the inlet, outlet or average temperature of the jacket, and this choice influences the dynamics of both the slave and master loop processes. Based on dynamic simulation, experimental results and experience gained on other industrial systems, we assigned the inlet temperature of the jacket as the output of the slave process. Applying the above decomposition, the master process can be modelled as first order plus time delay (FOPTD), while the slave process should be modelled as a more complex nonlinear system, as presented in the following section.

3. The Model of the Jacket
The jacket and the circulating liquid can be described using the common lumped-parameter enthalpy or heat balances given in the chemical engineering literature. In the model, zero-volume distributors and mixers are applied, and the overflow as well as the feed of the steam and the fresh cooling water are taken into account. The resulting simplified first-principles model can be regarded as the tendency model (Filippi-Bossy, 1989) of the most important phenomena. Its scheme is given in Figure 2, where TH is the temperature of the cooling water coming from the environment and TF is the equivalent steam temperature calculated from the boiling temperature, the latent heat and the specific heat. The valve characteristics are given in the form of second-order polynomials. The two first-order transfer functions are obtained from the lumped-parameter model of the jacket and the model of the thermometer at the jacket inlet. The K1, K2 and K3 constants can be calculated from the parameters of the first-principles model of the process or can be identified using experimental data. For this purpose, open-loop experiments were conducted over the whole operating range of the composite manipulated variable u (because of the split-range control, V1 and V2 are not opened at the same time; u=100% means that V1 is fully opened, u=0% means that V2 is fully opened). For comparison, beside the proposed tendency model (M1), a FOPTD model (M2) was identified based on the data shown in Figure 3. The significant deviation between the two models supports our practical experience that controller design based on linear models cannot provide good control performance in the slave loop. To prove this assumption, real-time control experiments are presented in the following section, where the tendency model is applied in the model-based temperature control of the batch process unit.
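As an illustration of the model structure described above (second-order valve polynomials feeding a zero-volume mixer, followed by two first-order lags), a minimal simulation sketch is given below. All numerical values (TH, TF, time constants, polynomial coefficients) are placeholders of our own, not the identified K1-K3 parameters of the paper, and the time delay DT is omitted for brevity:

```python
def jacket_step(u, Tj, Tm, dt, TH=15.0, TF=110.0, T1=40.0, T2=8.0):
    """One explicit Euler step of a Hammerstein-type tendency model sketch.
    u in [0, 1] is the split-range input: u=1 -> steam valve V1 fully open,
    u=0 -> cooling water valve V2 fully open (V1, V2 never open together)."""
    v1, v2 = u, 1.0 - u
    f1 = 0.4 * v1 + 0.6 * v1 ** 2          # second-order valve polynomials
    f2 = 0.4 * v2 + 0.6 * v2 ** 2
    T_in = (f1 * TF + f2 * TH) / max(f1 + f2, 1e-9)  # zero-volume mixer
    Tj += dt / T1 * (T_in - Tj)            # first-order lag of the jacket
    Tm += dt / T2 * (Tj - Tm)              # first-order lag of the thermometer
    return Tj, Tm
```

Integrating with u held at 1 drives the measured jacket inlet temperature towards TF, while intermediate u values settle at a nonlinear blend of TF and TH — the static nonlinearity that a single FOPTD model (M2) cannot capture.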

Figure 2. Schematic model of the reactor jacket (T1, T2: time constants; DT: time delay; K: gain; KAR: nonlinear valve characteristic).

Figure 3. Comparison of slave models and the process.

4. Temperature Control
The proposed simple but structurally transparent model can easily be utilised in several model-based control algorithms. We chose the so-called Predictor-Corrector Controller (PCC) scheme, which was developed and tested by us in many pharmaceutical applications in the previous decade (Abonyi et al., 1999). The basic scheme of the PCC algorithm is shown in Figure 4. Similarly to the Internal Model Control (IMC) scheme, the model is modified at every time instant according to the measured process variables (correction). The control signal is calculated analytically based on this corrected tendency model, without solving any optimisation problem. The operation of the proposed slave-loop controller is demonstrated as part of the complete reactor temperature control, where the master model-based controller was based on an FOPTD model. The parameters of the models were determined using the experimental data given in Figure 3. For comparison, a classical PID-PID cascade controller was also designed for the process; its parameters were optimized on the models of the master and slave processes and fine-tuned experimentally. The performance of this cascade PID-PID controller in a heat-up operation is shown in Figure 5 (the manipulated variables are constrained because of the physical limitations of the system). The same experiment was conducted using the model-based PCC-PCC controller scheme, and the

result is given in Figure 6. This result shows that the proposed methodology gives superior control performance over the widely applied cascade PID control scheme. As the master process is a simple FOPTD system that is easy to control, we have realized that it is mainly the slave controller that determines the overall control performance.

Figure 4. Scheme of the PCC algorithm.

Figure 5. Reactor temperature control using a PID controller.

Figure 6. Reactor temperature control applying a PCC controller.

This is because the proposed model-based slave-loop controller is able to effectively handle the nonlinearities and the constraints of the valves and accurately describes the dynamics of the jacket of the reactor. This observation confirms the practical experience we gained during the installation and tuning of model-based controllers in the Hungarian pharmaceutical industry over the last ten years. The advantages of PCC are the following: superior performance over PID control, effective constraint handling (no windup), controller parameters that can be easily determined by simple process experiments, and a controller complexity comparable to that of a well-equipped PID controller. Furthermore, the analysis of the modelling error offers a possibility for process monitoring.

5. Conclusions
The reason for unsatisfactory control performance of batch process units is very often the slave process, which can have more complex dynamics than the master loop. As the slave process is determined by the mechanical construction, it is straightforward to design a model-based controller based on a nonlinear tendency model of the slave process. It has been shown that the parameters of the model-based slave controller (namely the parameters of the tendency model) can be easily determined by simple process experiments, and that the complexity of the controller is comparable to that of a well-equipped PID controller. Real-time control results showed that the proposed controller effectively handles the constraints (no windup) and gives superior control performance.

6. References
Cezerac, J., Garcia, V., Cabassud, M., Le Lann, M.V. and Casamatta, G., 1995, Comp. Chem. Eng., 19, S415.
European Committee for Electrotechnical Standardisation, 1997, S88.01, EN 61512-1.
Garcia, V., Cabassud, M., Le Lann, M.V., Pibouleau, L. and Casamatta, G., 1995, The Chem. Eng. Biochem. Eng. J., 59, 229.
Terwiesch, P., Ravemark, D., Schenker, B. and Rippin, D.W.T., 1998, Comp. Chem. Eng., 22, 201.
Book, N.L. and Bhatnagar, V., 2000, Comp. Chem. Eng., 24, 1641.
Fisher, T.G., 1990, Batch Control Systems - Design, Application and Implementation, Instrument Society of America.
Abonyi, J., Chovan, T., Nagy, L. and Szeifert, F., 1999, Comp. Chem. Eng., 23, S221.
Filippi-Bossy, C., Bordet, J., Villermaux, J., Marchal-Brassely, S. and Georgakis, C., 1989, Comp. Chem. Eng., 13, 35.

7. Acknowledgements
The authors would like to acknowledge the support of the Cooperative Research Center (VIKKK) (projects 2001-1-7 and II-1B), and funding from the Hungarian Ministry of Education (FKFP-0073/2001 and 0063/2000) and from the Hungarian Research Fund (OTKA T037600). Janos Abonyi is grateful for the financial support of the Janos Bolyai Research Fellowship of the Hungarian Academy of Sciences.



Consistent Malfunction Diagnosis Inside Control Loops Using Signed Directed Graphs

Mano Ram Maurya^a, Raghunathan Rengaswamy^b and Venkat Venkatasubramanian^a,*
^a Laboratory for Intelligent Process Systems, School of Chemical Engineering, Purdue University, West Lafayette, IN 47907, USA.
^b Department of Chemical Engineering, Clarkson University, Potsdam, NY 13699-5705, USA.

Abstract

Though signed directed graphs (SDG) have been widely used for modeling control loops, due to a lack of adequate understanding of SDG-based steady-state process modeling, special and cumbersome methods have been used to analyze control loops. In this article, we discuss a unified SDG model for control loops (proposed by Maurya et al. (2002b)), in which both disturbances (sensor bias, etc.) and structural faults (sensor failure, etc.) can be easily modeled. Various fault scenarios such as external disturbances, sensor bias and controller failure are discussed. Two case studies are presented to show the utility of the SDG model for fault diagnosis. A tank-level control system is used as the first case study; the second deals with fault diagnosis of a multi-stream controlled CSTR.

1. Introduction
Various types of faults occurring in a chemical plant can be broadly categorized as (1) faults originating outside control loops and (2) faults originating inside control loops. A properly functioning controller (emulating integral control action) masks faults, since the effect of disturbances is passed to the manipulated variable even though the error signal is zero. Thus, the presence of control loops makes fault diagnosis more challenging. Among many models, such as fault trees and completely numerical models, signed directed graphs (SDG) have been widely used for modeling control loops. Iri et al. (1979) were the first to use SDG for modeling a chemical process. Later, Oyeleye and Kramer (1988) discussed SDG-based steady-state analysis and the prediction of inverse response (IR) and compensatory response (CR). Recently, Chen and Howell (2001) presented fault diagnosis of controlled systems where SDG has been used to model control loops. Previous researchers used special methods to deal with control loops since SDG-based steady-state modeling was not fully explored. Recently, Maurya et al. (2002c) presented a systematic framework for SDG-based process modeling and analysis. We also proposed a unified SDG model for control loops in which both disturbances (e.g.

* Author to whom all correspondence should be addressed. E-mail: [email protected], phone: (765) 494 0734, fax: (765) 494 0805.


sensor bias) as well as structural faults (e.g. sensor failure) can be easily modeled and analyzed (Maurya et al., 2002b). In this article, we explore the fault diagnostic applications of the proposed model. This article is organized as follows. In the next section, we present a brief overview of our SDG-related work. In Section 3, we discuss the unified SDG model for control loops; various fault scenarios are also analyzed. In Section 4, two case studies are presented to show the diagnostic efficiency of the proposed framework for fault diagnosis of systems with control loops. The article concludes with a discussion of future work.

2. Overview of SDG-based Process Modeling and Analysis
A brief discussion of SDG-based process modeling and analysis is presented here; a detailed discussion can be found in Maurya et al. (2002c). A SDG for a process can be developed from the model equations of the process. A process can be described by differential equations (DE), algebraic equations (AE) or differential-algebraic equations (DAE). The initial response of a DE system can be predicted by propagation through the shortest paths from the fault node to the process variables. The initial response of a DAE system with only one perfect matching (or only negative cycles) in the algebraic part can be predicted by propagation through shortest paths, assuming that the arc lengths of differential and algebraic arcs are 1 and 0, respectively. A perfect matching between the equations and the dependent variables is a complete matching in which each equation is matched with a variable and no variable or equation is left unmatched. The response of an AE system can be predicted by analyzing the corresponding qualitative equations or by propagation through the SDG (provided that the AE system has only one perfect matching or there are no positive cycles in the SDG of the AE system). The steady-state response of a dynamic system can be predicted by using the SDG for the corresponding steady-state equations. Maurya et al. (2002a) have shown that the qualitative equations ensure correct prediction of the steady-state behavior of systems that exhibit CR. This leads to considerable simplification in SDG-based steady-state analysis of control loops. A succinct discussion is presented next; for a detailed discussion, see Maurya et al. (2002a).

3. Control Loops: a Unified Framework
Most of the control loops used in industry are PI control loops (as far as their behavior is concerned). All control loops considered in this section are assumed to be PI control loops unless otherwise stated. Oyeleye and Kramer (1988) analyzed the PI control loop as a system exhibiting CR. Since it has already been shown that CR is implicitly handled by the steady-state equations, no special method is needed to analyze control loops (Maurya et al., 2002b). Depending upon the qualitative state of the error signal, appropriate changes in the perfect matching handle the underlying scenario.

3.1. Model and preliminary analysis of a control loop
The fault variables considered are the bias in the measurement, the controller signal and the valve position, the set point (in the sense that the set point can be changed by an external agent), and failures of the measurement, the controller signal and the valve position. The model for a PI controller is given below.

Figure 1. SDG for a PI control loop: (a) information flow for e ≠ 0; (b) information flow for e = 0.

$$
X_m = \delta(X_{m,fail}) \cdot X + X_{m,bias}
\tag{1}
$$

$$
e = X_{sp} - X_m
\tag{2}
$$

$$
CS_P = k_c \cdot e
\tag{3}
$$

$$
\frac{dCS_I}{dt} = \frac{k_c}{\tau_I} \cdot e
\tag{4}
$$

$$
CS = \delta(CS_{fail}) \cdot (CS_P + CS_I) + CS_{bias}
\tag{5}
$$

$$
VP = \delta(VP_{fail}) \cdot k_v \cdot CS + VP_{bias}
\tag{6}
$$

In the above equations, X, CS and VP refer to the controlled variable, controller signal (controller output) and valve position (or manipulated variable), respectively. The subscripts m, P and I refer to measurement, P control and I control, respectively. k_c and k_v are the controller gain and the valve gain, respectively. Failure inside the control loop is modeled by introducing non-zero deviations in the corresponding failure and bias variables. For example, a sensor failing high is modeled as X_m,fail = 1 and X_m,bias = '+', etc. The effect of VP and external variables (say X_f') on X is modeled as:

X = k·VP + Σ_f a_f·X_f'   (7)

Usually the above equation is matched with X in the absence of a control loop, and k represents the process gain (k_p). The system described by equations 1-7 has two perfect matchings. One of the SDGs of this DAE system for [k_c] = [k] = [k_v] = '+' is shown in Figure 1(a). There is only a negative feedback cycle in the AE part of the SDG and hence propagation is valid (Maurya et al., 2002c). Now we discuss the initial and steady state response of the PI controller. Initial response: Figure 1(a) shows the SDG for the DAE system. The initial response can be predicted by propagation through the shortest paths in this SDG. The controller effectively behaves as a P controller; thus equation 4 and CS_I can be eliminated from the analysis. The remaining equations constitute the model of a P controller. For non-zero disturbances or bias, the error e ≠ 0 (corresponding to imperfect control). Information flow is as expected. The SDG also shows the interaction between the control loop and the external system.

Steady state response: Figure 1(b) shows the SDG for the corresponding AE. Ideally speaking, steady state refers to the state when all the derivatives vanish, i.e. e = 0 in equation 4. The arc (e → CS_P) becomes ineffective. Now X_sp = X_m and, as observed in the previous section, the perfect matching changes (the AE system has only one perfect matching) and the information flow is exactly opposite to that of the previous case. Perfect control is achieved under such conditions. Figure 1(b) also shows the interaction of the control loop with the external system in steady state. Notice the signs of the arc X_f' → X (sign is [a_f]) in Figure 1(a) and the arc X_f' → VP (sign is [-a_f/k]) in Figure 1(b). In another situation, the controller and (or) valve opening saturate(s). When X has reached a constant value, in a loose sense one can say that the system has reached steady state; still, e ≠ 0. For all practical purposes, controller saturation can be modeled through P control action with a suitably modified gain (corresponding to the ratio of the saturated value of (CS_P + CS_I) to the final value of e). This situation corresponds to imperfect control and is explained through Figure 1(a).

Thus the concept of perfect matching handles everything within the proposed framework, guaranteeing completeness. Further, no spurious solutions are generated and such a framework is useful for large-scale applications. Now a number of fault (disturbance) and failure scenarios are discussed with respect to perfect control and imperfect control.
3.2. Perfect control
The controller can be modeled as an integral controller (Figure 1(b)) because e = 0. Various faults are discussed below. Changes in external disturbances: Integral action keeps the controlled variable at its set point. The information flow is:

X_sp → X_m → X → VP and X_f' → VP → CS → CS_I.

Faults inside the control loop: Three types of faults are:
• Sensor bias: The propagation is X_m,bias → X. Qualitatively, [X] = -[X_m,bias].
• CS bias: The propagation is CS_bias → CS_I. Further, CS = '0' and X = '0'.
• VP bias: [CS] = -[VP_bias] and VP = X = '0'.
3.3. Imperfect control
Imperfect control is exhibited due to large external faults or control loop failure. The error signal is non-zero (Figure 1(a)). CS builds up till the controller saturates. Two scenarios are: (i) large external disturbance, set point change or sensor bias, in which the controller behaves as a P controller, and (ii) failure inside the control loop, in which the control loop is open. Three types of failure are:
• Sensor failure: The arc X → X_m is cut. Propagation yields [X] = -[X_m,bias].
• Controller failure: The paths from e to CS are cut and CS = CS_bias.
• Control valve failure: The arc CS → VP is cut and X = VP = VP_bias.

4. Case Studies Case study 1 deals with fault diagnosis in a tank level-control system. Case study 2 deals with fault diagnosis of a CSTR system.


Proof. Note that w_a^T p_a = 1 and

w_a^T S_a = w_a^T (S_{a-1} - d_a p_a p_a^T) = d_a p_a^T - d_a (w_a^T p_a) p_a^T = 0.

Suppose that b > a. Since p_b = S_{b-1} w_b / d_b, we write S_{b-1} as

S_{b-1} = S_{b-2} - d_{b-1} p_{b-1} p_{b-1}^T = S_{b-2} (I - w_{b-1} p_{b-1}^T) = ... = S_a U_1.

Here U_1 is some matrix that is not used further. This gives

w_a^T p_b = w_a^T S_{b-1} w_b / d_b = w_a^T S_a U_1 w_b / d_b = 0.

This completes the proof.

The important property of the algorithm is given next.

Proposition 2. The matrices P = (p_1, ..., p_K) and R = (r_1, ..., r_K) satisfy R^T P = D^{-1}, where D = diag(d_1, ..., d_K).

Proof. The vectors r_a are defined by p_a = S r_a. If a = 1 we get p_1 = S_0 w_1 / d_1 = S w_1 / d_1, or r_1 = w_1 / d_1. This gives p_1^T r_1 = p_1^T w_1 / d_1 = w_1^T S w_1 / d_1^2 = 1/d_1. For a = 2 we get

p_2 = S_1 w_2 / d_2 = (S_0 - d_1 p_1 p_1^T) w_2 / d_2 = S (I - w_1 p_1^T) w_2 / d_2, or r_2 = (I - w_1 p_1^T) w_2 / d_2.

This gives

p_1^T r_2 = (p_1^T w_2 - (p_1^T w_1)(p_1^T w_2)) / d_2 = (p_1^T w_2 - p_1^T w_2) / d_2 = 0,
p_2^T r_2 = (p_2^T w_2 - (p_2^T w_1)(p_1^T w_2)) / d_2 = (p_2^T w_2) / d_2 = 1/d_2,

since p_2^T w_1 = 0 from Proposition 1. For higher values of the indices a and b we proceed in a similar way as in Proposition 1.

Proposition 3. The weight vectors w_a are mutually orthogonal, w_b^T w_a = 0 for b ≠ a.

Proof. Suppose that a > b. Note that X_{a-1}^T Y Y^T X_{a-1} w_a = λ_a w_a. It gives

λ_a w_a^T w_b = w_a^T X_{a-1}^T Y Y^T X_{a-1} w_b.

From the definition of X_{a-1} we get

X_{a-1} = X_{b-1} - (t_b p_b^T + ... + t_{a-1} p_{a-1}^T).

From Proposition 1 we get

X_{a-1} w_b = (X_{b-1} - (t_b p_b^T + ... + t_{a-1} p_{a-1}^T)) w_b = X_{b-1} w_b - t_b (p_b^T w_b) = t_b - t_b = 0.

This shows that the weight vectors are mutually orthogonal.
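The three propositions can be checked numerically. The sketch below implements the deflation scheme implied by the proofs, taking w_a as the dominant eigenvector of X_{a-1}^T Y Y^T X_{a-1}, with d_a = w_a^T S_{a-1} w_a, p_a = S_{a-1} w_a / d_a and r_a defined through p_a = S_0 r_a; the random data and dimensions are arbitrary illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m, K = 50, 6, 2, 3
X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, m))

Xa = X.copy()
S = X.T @ X                          # S_0
W, P, R, d = [], [], [], []
proj = np.eye(p)                     # running product prod_{c<a} (I - w_c p_c^T)

for a in range(K):
    # w_a: dominant eigenvector of X_{a-1}^T Y Y^T X_{a-1}
    M = Xa.T @ Y @ Y.T @ Xa
    _, eigvec = np.linalg.eigh(M)    # eigh sorts eigenvalues ascending
    w = eigvec[:, -1]
    da = w @ S @ w                   # d_a = w_a^T S_{a-1} w_a
    pa = S @ w / da                  # p_a = S_{a-1} w_a / d_a, so w_a^T p_a = 1
    ra = proj @ w / da               # r_a with p_a = S_0 r_a
    t = Xa @ w                       # score vector t_a
    Xa = Xa - np.outer(t, pa)        # deflate X
    S = S - da * np.outer(pa, pa)    # deflate S (consistent: S_a = X_a^T X_a)
    proj = proj @ (np.eye(p) - np.outer(w, pa))
    W.append(w); P.append(pa); R.append(ra); d.append(da)

W, P, R = np.array(W).T, np.array(P).T, np.array(R).T
gram = W.T @ W                       # should be diagonal (Proposition 3)
residual = P.T @ R - np.diag(1.0 / np.array(d))  # should vanish (Proposition 2)
```

Both `gram` (off-diagonal) and `residual` come out at numerical zero, in line with Propositions 2 and 3.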

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Data Based Classification of Roaster Bed Stability
Björn Saxén^a and Jens Nyberg^b
^a Outokumpu Research Oy, P.O. Box 60, FIN-28101 Pori, Finland, bjorn.saxen@outokumpu.com
^b Outokumpu Zinc Oy, P.O. Box 26, FIN-67101 Kokkola, Finland, jens.nyberg@outokumpu.com

Abstract
An on-line application of a self-organizing map (SOM) has been developed for detecting and predicting instability of a fluidised bed furnace. In the application, which has been in use at the roaster of the Outokumpu Kokkola zinc plant for over one year, the SOM is utilised for compressing multi-dimensional process measurement data and for visualising bed stability changes as a path on a two-dimensional map. Instability, which normally causes operational problems and lowered production, has been detected quite reliably. A rule-based system for proposing correcting actions is being developed as an extension to the SOM.

1. Introduction
Roasting of zinc concentrates at the Outokumpu Kokkola plant is carried out in two fluidised bed furnaces. Occasionally, the bed of a furnace moves into instability, which leads to operational problems and lowered production. Reasons, indicators and remedies for bed instability have been investigated during the last few years. Generally, instability is a consequence of changes in the chemical or physical properties of the concentrate feed, but the causal connections are complex. There is a fairly large amount of real-time data available at the roaster, e.g. temperature, flow and pressure measurements, and also less frequent off-line analyses of the chemical composition of the streams. Although a period of instability can be recognised from history data, real-time interpretation of the high-dimensional data is difficult and there is a need for a data refining and compressing tool. The self-organizing map (SOM) is a method for visualisation, clustering and compressing of data (Kohonen, 2001). The SOM can effectively refine multi-dimensional process data, as reported by e.g. Alhoniemi et al. (1999), and has been shown suitable for process monitoring and fault diagnosis also in mineral and metal processing (Rantala et al., 2000; Jamsa-Jounela et al., 2001; Laine et al., 2000). In addition to detection of roaster bed instability, there is also a need for identification of the underlying reasons. This task requires expert knowledge, since there are many factors to be considered. Rule based systems represent a straightforward approach for applying a priori knowledge in supervision and fault detection (Isermann, 1997).

2. The Roasting Process
Roasting is an essential part of a zinc electrowinning process. In the Kokkola zinc plant, the process contains departments for roasting, leaching and purification, electrowinning, and melting & casting. There are two roasting furnaces, both of fluid bed type (Lurgi), with a grid area of 72 m². The mix of zinc concentrates is fed to the furnace by rapid slinger belts, and air is fed from the bottom of the furnace. Around 22 t/h of concentrate and around 42 000 Nm³/h of air are fed to each furnace. The reaction between sulphides in the concentrate and oxygen is exothermic and heat is generated; the furnace temperature is kept at about 920-950 °C by cooling. The products are solid oxide material, called calcine, and sulphur dioxide gas. The gas, which also contains solids, is led to a waste heat boiler, cyclones and electrostatic precipitators before it is cleaned in a mercury removal step and in a sulphuric acid plant. Along with the roasting, some of the concentrates are directly leached. This enables higher flexibility in the acquisition of concentrates; some concentrates are more suitable for roasting and others for direct leaching.
2.1. Challenges and development
Roasting is in principle a simple process, but there are many influencing variables and sometimes it is very difficult to control the furnace. The main difficulty is that every concentrate behaves differently because of its specific mineralogy. The move to concentrates with finer grain size influences the furnace behaviour, and impurities like Cu and Pb have a great impact. A high impurity level can lead to sintering of the bed, i.e. molten phases and sulphates are formed. Another problem is that the bed sometimes becomes very fine (no coarsening occurs) and this hinders the fluidisation. To master the process, it is essential to maintain a stable bed with good fluidising properties and good heat transfer.
During the last years many plant test runs have been carried out with the aim to better understand the roasting mechanism and to find optimal run conditions (Metsarinta et al., 2002). Among the tested parameters are impurity levels (Cu, Pb), concentrate particle size, water injection to different spots, oxygen use, etc. The number of measurements has been increased, which has brought more information about the state of the furnace. New control strategies and advising systems have been developed by utilising knowledge gained theoretically and through tests, but also by data based studies of the process behaviour.
2.2. Process control
Basic measurements and control loops of one furnace line are shown in Figure 1. The furnace control can roughly be divided into three levels:
1. The Conventional level includes standard controllers for flows, pressures, etc.


2. The Advanced level includes sophisticated use of the large amount of measurement data: fuzzy temperature control by concentrate feed, oxygen enrichment control by O2 feed, and furnace top temperature control by water addition. This level also includes process monitoring by means of the SOM, as well as an advisory system based on expert rules (under development).
3. The Ultimate level implies changing the concentrate mix into a "safe" region, i.e. a composition with coarse particles and little impurities.

Figure 1. Flow chart and basic instrumentation of one furnace line.

3. Data Based Methods
To support the investigations on the chemical and physical mechanisms of the roasting process, plant data have been analysed mathematically. This has resulted in tools for process control, especially for bed stability monitoring and management. To verify earlier observations and to serve as a base for the development of control methods, a correlation analysis was carried out with process data from one year of operation. Variables included were measurements of flow, temperature and pressure, origins of the concentrates in the feed mix, chemical analyses of feed and product compositions, and grain size distribution of the product. In addition, some calculated quantities used in the furnace operation were included. The study was carried out using linear correlation analysis and time series plots of selected variables. Although the bed stability was the main focus of the analysis, the correlations were analysed in general and no single response (output) variable was selected. Separate analyses were carried out for both furnace lines.

Along with the correlation analysis, the SOM was used as a tool for data mining and correlation exploration. The SOM algorithm quantizes the observations into a set of prototype vectors and maps them on a two-dimensional grid. The updating method forces the prototype vectors of neighbouring map units in the same direction so that they will represent similar observations. The number of prototype vectors is chosen to be essentially smaller than the number of observations in the training data. Thus, the algorithm generalizes correlations in the data and clusters the observations, being thereby suitable for data visualisation.
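For illustration (this is not the SOM Toolbox implementation the authors used), a minimal on-line SOM with a shrinking Gaussian neighbourhood can be written as follows. The 9x6 grid and 20-dimensional, unit-variance-scaled input match the setup described in this paper, but the toy data, learning rate and schedules are invented:

```python
import numpy as np

def train_som(data, rows=9, cols=6, epochs=20, sigma0=2.0, lr0=0.5, seed=0):
    """Minimal on-line SOM: prototypes on a rows x cols grid, updated with a
    Gaussian neighbourhood that shrinks over time (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    protos = rng.standard_normal((rows * cols, dim)) * 0.1
    # grid coordinates of each map unit, used by the neighbourhood function
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    steps, t = epochs * n, 0
    for _ in range(epochs):
        for x in data[rng.permutation(n)]:
            frac = t / steps
            lr = lr0 * (1 - frac)                  # decaying learning rate
            sigma = sigma0 * (1 - frac) + 0.5      # decaying neighbourhood width
            bmu = np.argmin(((protos - x) ** 2).sum(axis=1))  # best-matching unit
            # pull the BMU and its grid neighbours toward the observation
            g = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
            protos += lr * g[:, None] * (x - protos)
            t += 1
    return protos, grid

# toy data: two clusters standing in for "stable" / "unstable" observations
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, (300, 20)), rng.normal(4, 1, (300, 20))])
data = (data - data.mean(0)) / data.std(0)         # linear scaling to unit variance
protos, grid = train_som(data)
```

Because neighbouring units are updated together, prototype vectors close on the grid end up representing similar observations, which is what makes the two-dimensional map readable.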

4. Results
Based on process data studies with the SOM, an on-line tool for detecting unstable furnace behaviour from process measurements was developed. The software was developed and implemented in MATLAB® using version 2.0beta of SOM Toolbox (2002). Models for both furnace lines were made. The measurement variables were concentrate feed, air feed, oxygen feed, oxygen coefficient, water feed, windbox pressure, furnace and boiler temperatures at different spots, boiler steam production, boiler offgas oxygen content, calcine composition (S²⁻, Na, Si, K, Cu, Pb) and the fraction of fine particles in the calcine. The temperature measurement signals were pre-treated by mean value calculation and a rule-based exclusion of non-representative signals. One-day mean values from around two years of operation were used for training. Observations from process shut-downs were excluded from the data, occasional erroneous values (due to probe or data transfer failures) were labelled as non-available, and the variables were normalised by linear scaling to unit variance. The dimension of the data vector fed to each SOM was 20. The SOM grid size was set to 9x6, i.e. 54 prototype vectors were set to represent around 600 observations in the data set. In the training data, the algorithm clustered most periods of instability close to each other and a rough classification into stable and unstable areas of the map could be made. The classification was based on the knowledge that low concentrate feed, low windbox pressure and a large fraction of fine particles in the calcine correlate with instability. The SOM component planes in Figure 2 show how these variables are represented in the prototype vectors. It should be noted that although these variables correlate with bed stability, none of them alone could be used as a stability indicator. For the on-line interface, the map units correlating with instability were coloured red, the units close to this area yellow, and the other units green.

[Component planes: windbox pressure (mbar), concentrate feed (t/h), and fraction of fine particles.]

Figure 2. Component planes for three important variables in the SOM for furnace 2.
In the on-line application, the feed data consists of 8-hour mean values obtained from the history database at the plant. The interface shows changes in bed stability by a five-day path (5x3 observations) of the best-matching unit (BMU) on the map. The BMU is the unit representing the prototype vector with the shortest distance to the input vector. The application also outputs a plot of the quantization error (the Euclidean distance between the BMU and the input vector) for the same period, which can be used as an indicator of model reliability. The on-line SOM tool has been in use for over one year, and has detected bed instability tendencies quite reliably. The BMU path on the map is easy to interpret, and gives a quick overview of the situation in the furnace. Figure 3 shows the SOM interpretation of the stability of furnace 2 during five days in September 2002. During this period, the bed was moving from instability back to normal behaviour. The quantization error plot in Figure 3 indicates that the explanation of the first observations in the period is unreliable.

[BMU path 18-Sep-2002 14:00 to 23-Sep-2002 06:00; map areas: Normal (green), Dubious (yellow), Unstable (red); quantization error plot.]

Figure 3. SOM visualisation of a bed stability path of furnace 2; the smallest circle represents the match of the first observation and the star shows the latest match. A plot of the quantization error for the same period is given to the right.
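The on-line step (finding the BMU of a new observation and reporting the quantization error) can be sketched as follows; the prototype values and unit colouring below are random placeholders, whereas in the real application they come from the trained map and its expert classification:

```python
import numpy as np

def classify(x, protos, unit_class):
    """One on-line monitoring step: find the best-matching unit (BMU) for a
    new (normalised) observation and report the quantization error, which
    serves as a model-reliability indicator as described in the text.

    unit_class: per-unit label, e.g. 'green' / 'yellow' / 'red'.
    """
    dists = np.linalg.norm(protos - x, axis=1)
    bmu = int(np.argmin(dists))
    return bmu, unit_class[bmu], float(dists[bmu])

# illustrative use with a random 9x6 map of 20-dimensional prototypes
rng = np.random.default_rng(0)
protos = rng.standard_normal((54, 20))
unit_class = ['red' if i < 10 else 'yellow' if i < 20 else 'green'
              for i in range(54)]
bmu, label, qe = classify(rng.standard_normal(20), protos, unit_class)
```

Plotting successive BMUs over a five-day window then gives exactly the kind of path shown in Figure 3, and a large `qe` warns that the observation is poorly represented by the map.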


5. Conclusions and Further Work
Detection of fluidised bed instability requires multidimensional data and appropriate methods for its analysis. The SOM application at the roaster performs data analysis which, in contrast to a human observer, is systematic and consistent. The application reliably monitors bed stability and gives valuable support for operation.

However, most of the process variables clearly correlating with instability show only consequences, and some of them are manipulated variables in control loops. The underlying reasons for a particular instability period may be so nested that they are hard to detect. Hence, the development of a rule based system for isolating instability reasons and proposing correcting actions has been started. The rules are based on metallurgical and practical process know-how. Known inappropriate combinations of feed composition and process parameters are checked, and when such a combination is found, the system gives advice on correcting actions. For instance, one rule is: IF calcine Cu > 0.6% AND oxygen coefficient < 1.2 THEN Increase oxygen coefficient! Further work will include refining and tuning of the rules based on upcoming process situations.
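Rules of the quoted IF-THEN form can be represented as condition/advice pairs evaluated against the current process state. The first rule below is the one quoted in the text; the second rule and its threshold are purely hypothetical illustrations:

```python
def advise(state, rules):
    """Return the advice of every rule whose condition holds for the
    current process state (a dict of measured quantities)."""
    return [advice for cond, advice in rules if cond(state)]

# the rule quoted in the text, plus one hypothetical example rule
rules = [
    (lambda s: s['calcine_Cu'] > 0.6 and s['oxygen_coeff'] < 1.2,
     'Increase oxygen coefficient!'),
    (lambda s: s['fines_fraction'] > 0.4,          # hypothetical threshold
     'Check concentrate grain size distribution.'),
]
msgs = advise({'calcine_Cu': 0.8, 'oxygen_coeff': 1.1, 'fines_fraction': 0.2},
              rules)   # → ['Increase oxygen coefficient!']
```

Keeping the rules as data makes the planned "refining and tuning" a matter of editing thresholds rather than code.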

6. References
Alhoniemi, E., Hollmen, J., Simula, O. and Vesanto, J., 1999, Process Monitoring and Modeling using the Self-Organizing Map, Integrated Computer Aided Engineering, vol. 6, no. 1, pp. 3-14.
Isermann, R., 1997, Supervision, fault-detection and fault-diagnosis methods - an introduction, Control Engineering Practice, vol. 5, no. 5, pp. 639-652.
Jamsa-Jounela, S-L., Kojo, I., Vapaavuori, E., Vermasvuori, M. and Haavisto, S., 2001, Fault Diagnosis System for the Outokumpu Flash Smelting Process, Proceedings of 2001 TMS Annual Meeting, New Orleans, USA, pp. 569-578.
Kohonen, T., 2001, Self-Organizing Maps, volume 30 of Springer Series in Information Sciences, Springer, Berlin, Heidelberg.
Laine, S., Pulkkinen, K. and Jamsa-Jounela, S-L., 2000, On-line Determination of the Concentrate Feed Type at Outokumpu Hitura Mine, Minerals Engineering, vol. 13, no. 8-9, pp. 881-895.
Metsarinta, M-L., Taskinen, P., Jyrkonen, S., Nyberg, J. and Rytioja, A., 2002, Roasting Mechanisms of Impure Zinc Concentrates in Fluidized Beds, accepted for Yazawa International Symposium on Metallurgical and Materials Processing, March 2003, California, USA.
Rantala, A., Virtanen, H., Saloheimo, K. and Jamsa-Jounela, S-L., 2000, Using principal component analysis and self-organizing map to estimate the physical quality of cathode copper, Preprints of IFAC workshop on future trends in automation in mineral and metal processing, Helsinki, Finland, pp. 373-378.
SOM Toolbox, 2002, http://www.cis.hut.fi/projects/somtoolbox [18 October 2002].


A Two-Layered Optimisation-Based Control Strategy for Multi-Echelon Supply Chain Networks
P. Seferlis and N. F. Giannelos
Chemical Process Engineering Research Institute (CPERI), PO Box 361, 57001 Thessaloniki, Greece, email: [email protected], [email protected]

Abstract

A new two-layered optimisation-based control approach is developed for multi-product, multi-echelon supply chain networks. The first layer employs simple feedback controllers to maintain inventory levels at all network nodes within pre-specified targets. The feedback controllers are embedded as equality constraints within an optimisation framework that incorporates model predictive control principles for the entire network. The optimisation problem aims at adjusting the resources and decision variables of the entire supply chain network to satisfy the forecasted demands with the least required network operating cost over a specified receding operating horizon. The proposed control strategy is applied to a multi-product supply chain network consisting of four echelons (plants, warehouses, distribution centres, and retailers). Simulated results exhibit good control performance under various disturbance scenarios (stochastic and deterministic demand variation) and transportation time lags.

1. Introduction
A supply chain network is commonly defined as the integrated system encompassing raw material vendors, manufacturing and assembly plants, and distribution centres. The network is characterised by procurement, production, and distribution functions. Leaving aside the procurement function (purchasing of raw materials), the supply chain network becomes a multi-echelon production/distribution system (Figure 1). The operational planning and direct control of the network can in principle be addressed by a variety of methods, including deterministic analytical models, stochastic analytical models, and simulation models, coupled with the desired optimisation objectives and network performance measures (Beamon, 1998; Riddalls et al., 2000). Operating network cost, average inventory level, and customer service level are commonly employed performance measures (Thomas and Griffin, 1996; Perea et al., 2001). In the present work, we focus on the operational planning and control of integrated production/distribution systems under product demand uncertainty. For the purposes of our study and the time scales of interest, a discrete time difference model is developed. The model is applicable to networks of arbitrary structure. To treat demand uncertainty within the deterministic supply chain network model, a receding horizon, model predictive control approach is suggested. The two-level control algorithm relies on a decentralised safety inventory policy, coupled with the overall optimisation-based control approach.

Figure 1. Multi-echelon supply chain network.

2. Supply Chain Model
Let DP denote the set of desired products (or aggregated product families) of the system. These can be manufactured at plants, P, by utilising various resources, RS. The products are subsequently transported to and stored at warehouses, W. Products from warehouses are transported upon customer demand, either to distribution centres, D, or directly to retailers, R. Retailers receive time-varying orders from different customers for different products. Satisfaction of customer demand is the primary target in the supply chain management mechanism. Unsatisfied demand is recorded as back-orders for the next time period. A discrete time difference model is used to describe the supply chain network dynamics. The duration of the base time period depends on the dynamic characteristics of the network. The inventory balance equation, valid for warehouses and distribution centres, is:

y_{i,k}(t) = y_{i,k}(t-1) + Σ_{k'} x_{i,k',k}(t - L_{k',k}) - Σ_{k''} x_{i,k,k''}(t)   ∀ k ∈ {W,D}, t ∈ T, i ∈ DP   (1)

y_{i,k} is the inventory of product i stored in node k. x_{i,k',k} and x_{i,k,k''} denote the amounts of the i-th product transported through routes (k',k) and (k,k''), respectively, where nodes k' supply k and nodes k'' are supplied by k. L_{k',k} denotes the transportation lag for route (k',k). The transportation lag is assumed to be an integer multiple of the base time period. For retailer nodes, the inventory balance considers the actual delivery of product i attained, denoted by d_{i,k}:

y_{i,k}(t) = y_{i,k}(t-1) + Σ_{k'} x_{i,k',k}(t - L_{k',k}) - d_{i,k}(t)   ∀ k ∈ R, t ∈ T, i ∈ DP   (2)


The balance equations for unsatisfied demand (e.g., back-orders) take the form:

BO_{i,k}(t) = BO_{i,k}(t-1) + R_{i,k}(t) - d_{i,k}(t) - LO_{i,k}(t)   ∀ k ∈ R, t ∈ T, i ∈ DP   (3)

where R_{i,k}(t) denotes the demand for product i at retailer k and time period t, and LO_{i,k}(t) denotes the amount of cancelled back-orders (lost orders). At each node capable of carrying inventory (nodes of type W, D, and R), capacity constraints are in effect that account for a maximum allowable inventory level:

Y_k(t) = Σ_i α_i y_{i,k}(t) ≤ V_k^max   ∀ k ∈ {W,D,R}, t ∈ T   (4)

where Y_k denotes the actual inventory of the node, α_i the storage volume factor for each product, and V_k^max the maximum capacity of the node. A maximum allowable transportation capacity, T_{k,k'}^max, is defined for each permissible transportation route within the supply chain network:

Σ_i α_i x_{i,k,k'}(t) ≤ T_{k,k'}^max   ∀ (k,k'), t ∈ T   (5)
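The balance equations above define linear discrete-time dynamics that are straightforward to simulate. The following single-product, single-route sketch is illustrative only: the shipping policy, transportation lag and all numbers are invented, and cancelled back-orders LO are taken as zero:

```python
from collections import deque

def simulate(periods, demand, ship_policy, lag=2, y0=50.0):
    """Simulate the warehouse/retailer balances for one product on one
    warehouse -> retailer route: warehouse inventory y_w, retailer
    inventory y_r, back-orders BO. ship_policy(t, y_r, BO) decides the
    shipped amount x(t); shipments arrive after `lag` periods."""
    y_w, y_r, BO = y0, y0, 0.0
    pipeline = deque([0.0] * lag, maxlen=lag)   # goods in transit (lag L)
    hist = []
    for t in range(periods):
        x = min(ship_policy(t, y_r, BO), y_w)   # cannot ship more than stock
        y_w -= x
        arriving = pipeline.popleft()           # x(t - L) reaches the retailer
        pipeline.append(x)
        d = min(demand[t] + BO, y_r + arriving) # actual delivery attained
        y_r = y_r + arriving - d                # retailer inventory balance
        BO = BO + demand[t] - d                 # back-order balance, LO = 0
        hist.append((y_w, y_r, BO))
    return hist

# constant demand, shipments tracking demand plus a fraction of back-orders
hist = simulate(10, demand=[8.0] * 10,
                ship_policy=lambda t, y_r, BO: 8.0 + 0.5 * BO)
```

Stacking many such nodes and routes, and replacing the fixed policy by decision variables in an optimiser, gives the network model the paper's control strategy acts on.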

Assume the following training data set and input patterns:

Ω = {ω_i, i = 1, ..., N},   ω_i = (y_i, y_{i-1}, ..., y_{i-n_y}, u_i, u_{i-1}, ..., u_{i-n_u})   (4)-(5)

The concept of the Parzen-Rosenblatt probability density function (Haykin, 1999) is used and extended as an index to measure the reliability of the model prediction. The Parzen-Rosenblatt density estimate of a new event, ω_new, based on the training data set, Ω, is defined as:

f_Ω(ω_new) = (1/(N·h^m0)) Σ_{i=1}^{N} K((ω_new - ω_i)/h)   (6)

where the smoothing parameter, h, is a positive number called the bandwidth, which controls the span size of the kernel function, K, and m0 is the dimensionality of the event set, Ω. Various kernel functions are possible; however, both theoretical and practical considerations limit the choice. A well-known and widely used kernel is the multivariate Gaussian distribution:

K(ω) = (1/(2π)^{m0/2}) exp(-ω^T ω / 2)   (7)

Once ω_new is close to some ω_i, the corresponding kernel functions give higher values, and those ω_i which are not in the neighborhood give lower values in the summation. The above probability density function (6) is denoted as the regional knowledge index.
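A sketch of the regional knowledge index of eqs (6)-(7) with a Gaussian kernel; the dimensions, toy data and bandwidth below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def regional_knowledge_index(w_new, W_train, h=0.5):
    """Parzen-Rosenblatt density estimate of a new event w_new given the
    N x m0 matrix of training patterns W_train, using the multivariate
    Gaussian kernel of eq (7). High values mean w_new lies in a region
    that is well covered by the training data."""
    N, m0 = W_train.shape
    u = (w_new - W_train) / h                    # scaled distance to each pattern
    k = np.exp(-0.5 * (u ** 2).sum(axis=1)) / (2 * np.pi) ** (m0 / 2)
    return k.sum() / (N * h ** m0)

rng = np.random.default_rng(0)
W_train = rng.normal(0, 1, (200, 3))             # illustrative training events
inside = regional_knowledge_index(np.zeros(3), W_train)
outside = regional_knowledge_index(np.full(3, 6.0), W_train)  # far from data
```

A query inside the cloud of training patterns scores much higher than one far outside it, which is exactly the behaviour the coordinator exploits below.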

Figure 1. Architecture of the robust model predictive control (RMPC). [Block diagram: the NAC scheme (neural adaptive controller) and the MPC scheme (ANN model and optimizer) both feed a coordinator, which sends the control action to the process.]

2.2. Model predictive control
The proposed robust model predictive control shown in Figure 1 reduces to a standard model predictive control if we set u = u_MPC in the coordinator.
2.3. Neural adaptive control
If we set u = u_NAC in the coordinator, Figure 1 reduces to the neural adaptive controller of Krishnapura and Jutan (2000), shown in Figure 2.

Figure 2. Neural adaptive controller structure.

The whole system works by updating all four connecting weights in the network to minimize the deviation (E) of the process output from its set-point value at the current time instant k:

E_k = ½ (y_d,k - y_k)²   (8)

This error signal is generated at the output of the plant, is passed backward to the neural network controller through the plant, and is minimized with the steepest descent method.
2.4. Coordinator
In the proposed architecture shown in Figure 1, the model predictive controller and the neural adaptive controller run in parallel. A coordinator is designed to make the final decision based on the outputs of the above two controllers. As a preliminary test, the following equation is used to combine the outputs of the MPC and the NAC:

u = ψ·u_MPC + (1 - ψ)·u_NAC   (9)

where ψ is a model-reliability index that weights the control actions from the model predictive controller and the neural adaptive controller. For simplicity, the following linear form of the decision factor ψ is implemented in this work:

ψ = (f_Ω(ω_new) - a) / (b - a)   for a < f_Ω(ω_new) < b   (10)

where a and b are constants; ψ = 0 for f_Ω(ω_new) ≤ a and ψ = 1 for f_Ω(ω_new) ≥ b, as shown in Figure 3.

Figure 3. ψ vs. f_Ω(ω_new).
537 zyxwvutsrqp

3. Example (pH Control)

The simulated pH control system is adopted from Palancar (1998). There are two inlet streams to the continuous stirred tank reactor (CSTR): the acid flow, Q_A, an aqueous solution of acetic acid and propionic acid, and the base flow, Q_B, an aqueous solution of sodium hydroxide. A primary test against a step change of the set point from pH = 7 up to pH = 10 at the 700th second, and a further test against a sequence of step changes in set point, are performed. The results are depicted in Figures 4, 5 and 6.

Figure 4. RMPC against set point change (solid line: response; dotted line: set point).

Figure 5. Zoom-in of the control actions (MPC control action, NAC control action, and real control action).

Figure 6. RMPC against a sequence of step changes in set point.

4. Conclusion
Incompleteness and inaccuracy of artificial neural network models generally exist and deteriorate the performance of the above control scheme. To address this problem, regional knowledge analysis is proposed in this study and applied to analyze artificial neural network models in process control. A novel control approach is proposed that combines the neural model based MPC technique and NAC. The regional knowledge index in the coordinator determines the weights by considering the present state of the controlled process. This approach is particularly useful for an ANN dynamic model with rough training and low accuracy. Excellent control performance is observed in this highly nonlinear pH system.

5. References
Haykin, S., 1999, Neural Networks: A Comprehensive Foundation, 2nd edition. Prentice Hall International, Inc.
Krishnapura, V.G. and Jutan, A., 2000, Chemical Engineering Science, 55, 3803.
Leonard, J.A., Kramer, M.A. and Ungar, L.H., 1992, Computers & Chemical Engineering, 16, 819.
Lin, J.S. and Jang, S.S., 1998, Ind. Eng. Chem. Res., 37, 3640.
Palancar, M.C., Aragon, J.M., Miguens, J.A. and Torrecilla, J.S., 1998, Ind. Eng. Chem. Res., 37, 2729.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Optimisation of Automotive Catalytic Converter Warm-Up: Tackling by Guidance of Reactor Modelling
J. Ahola¹, J. Kangas¹, T. Maunula² and J. Tanskanen¹
¹ University of Oulu, Department of Process and Environmental Engineering, P.O. Box 4300, FIN-90014 University of Oulu, Finland
² Kemira Metalkat Oy, P.O. Box 171, FIN-90101 Oulu, Finland

Abstract
In this paper, the ability of the developed reactor model to predict the converter's performance is evaluated against experimental data. The data are obtained from full-scale tests and from European driving cycle vehicle tests. The focus is on the warm-up period of catalytic exhaust gas converters and especially on the prediction of catalyst light-off.

1. Introduction
Understanding the dynamic behaviour of exhaust gas catalytic converters is becoming more important as environmental regulations tighten. A large portion of the total emissions is formed during the first few minutes of a drive, when the catalytic converter is still cold and the reactions are relatively slow and kinetically controlled. Thus, it is essential to design the converter in such a way that the warm-up occurs optimally, leading to high performance in the purification of unwanted emissions such as oxides of nitrogen, hydrocarbons and carbon monoxide. A mathematical model that describes the different physico-chemical phenomena during the warm-up period accurately enough would be invaluable for the design process of a catalytic converter. The importance of catalytic converter modeling is well recognised in the current literature (e.g. Koltsakis, Konstantinidis & Stamatelos, 1997; Koltsakis & Stamatelos, 1999; Brandt, 2000; Lacin & Zhuang, 2000; Mukadi & Hayes, 2002). In this paper, the ability of a previously built warm-up model to describe these phenomena is investigated by comparing its predictions with experimental results. Furthermore, the influences of different design constraints are inspected.

2. Model

The purpose of the developed model is to describe the 3-way catalyst warm-up behaviour by predicting time-dependent NOx, THC and CO conversions and temperature profiles. The model has been presented in an earlier paper (Kangas et al., 2002). The converter is assumed to be adiabatic with uniform radial temperature and flow rate distributions. Thus, the profiles of the whole converter are obtained by modeling one channel of the monolith. The channel model equations consist of gas phase mass and energy balances, a solid phase energy balance and a heat transfer model


between these two phases. The laminar flow in the small channels is approximated using a plug flow model with axial dispersion. Accumulation of heat and mass in the gas phase is taken into account. The solid phase model includes the accumulation of heat and axial heat conduction. The chemical reactions take place only on the surface of the solid phase. The reactor model has been implemented in the MATLAB® programming language. The system of partial differential equations is converted to ordinary differential equations by applying the numerical method of lines (Schiesser, 1991). The resulting ODEs are solved with MATLAB®'s ode15s function, which is a quasi-constant step size implementation of the NDFs in terms of backward differences (Shampine & Reichelt, 1997).
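The discretisation strategy can be illustrated with a toy problem. The sketch below applies the numerical method of lines to a single convected scalar in one channel and hands the resulting ODEs to SciPy's stiff BDF integrator, which plays the role of MATLAB's ode15s here; the channel data and the transported quantity are illustrative, not the authors' model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines sketch: one convected scalar c(z, t) with axial
# dispersion in a single monolith channel,
#   dc/dt = -u * dc/dz + D * d2c/dz2,
# discretised on n axial nodes (assumed values throughout).
n, L = 50, 0.12          # nodes, channel length [m]
u, D = 1.0, 1e-4         # velocity [m/s], dispersion coefficient [m2/s]
dz = L / (n - 1)
c_in = 1.0               # step change in the inlet concentration

def rhs(t, c):
    dcdt = np.empty_like(c)
    # first-order upwind convection + central-difference dispersion
    dcdt[0] = -u * (c[0] - c_in) / dz
    dcdt[1:-1] = (-u * (c[1:-1] - c[:-2]) / dz
                  + D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dz**2)
    dcdt[-1] = -u * (c[-1] - c[-2]) / dz   # outflow boundary
    return dcdt

# BDF is SciPy's stiff multistep method, analogous in role to ode15s.
sol = solve_ivp(rhs, (0.0, 1.0), np.zeros(n), method="BDF")
```

After one second (several residence times L/u) the inlet step has propagated through the channel, so the outlet value approaches the inlet value.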

3. Experimental
Different types of experimental data have been exploited during the model construction and its performance evaluation. In particular, bench-scale converter experiments using synthetic exhaust gas streams, full-scale experiments where the converter is mounted in the exhaust gas stream of an engine, and New European Driving Cycle (NEDC) vehicle tests have been carried out. An underfloor converter, which is quite far from the engine, has been employed as a demonstrative catalytic aftertreatment system in the vehicle tests. The inlet gas temperature is therefore relatively low, leading to difficulties in meeting the tightest emission limits, and modifications of the whole aftertreatment system might be necessary. The effects of structural changes on the performance of the aftertreatment system have been evaluated by testing ten structurally different converters. The most significant characteristics of the converters are shown in Table 1.

The warm-up of a converter can be accelerated by structural changes such as increasing the proportion of the active component or decreasing the mass to be heated. Thermal mass can simply be decreased by using shorter catalyst monoliths or lower cell densities, but to obtain the same performance other structural changes are needed to compensate for the loss of geometrical surface area. As another consequence, shorter residence times of the components in the reactor would reduce the attainable conversions, which has to be compensated by increasing the loading of the active component in the substrate. However, the increase in loading is always limited by the growing costs. The mass of the active components Pd and Rh (7:1), the thickness of the washcoat and the diameters of the converters were kept constant in the ten tested prototypes.

These selections give rise to the following features: the prices of the converters are approximately the same, pore diffusion does not vary between the converters, and the inlet gas flow distribution is the same at the converter inlet. In this study fast warm-up has been selected as the most important design criterion. The converters are compared by the time needed to achieve 50 percent conversion in the NEDC test. The criterion is fulfilled only if the conversion continues to rise after it reaches 50 percent; thus, the criterion excludes cases where the 50 percent conversion is only temporarily exceeded. Light-off for catalyst #2 is complicated: in the beginning the conversion oscillates between zero and over 80 percent for several seconds until it stabilises. The results are shown in Table 2.
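The sustained-light-off criterion described above can be stated compactly in code. A sketch (the function name and the sampled-record representation are illustrative):

```python
def time_to_sustained_conversion(times, conversion, target=0.5):
    """Return the first time from which conversion stays at or above
    `target` for the rest of the record, or None if never sustained.

    This mirrors the rating criterion in the text: a temporary
    excursion above 50 % (as for catalyst #2) does not count.
    """
    t_hit = None
    for t, x in zip(times, conversion):
        if x >= target:
            if t_hit is None:
                t_hit = t          # candidate light-off time
        else:
            t_hit = None           # excursion ended; reset the candidate
    return t_hit
```

A record that dips back below 50 percent resets the candidate time, so only the final sustained crossing is reported.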

Table 1. The most significant characteristics of the converters used in the evaluation.

Catalyst                          1     2     3     4     5     6     7     8     9    10
Catalyst volume [dm³]          0.85  0.85  0.85  0.85  0.85  0.85  0.85  0.85  0.5   0.3
Monolith length [mm]            120   120   120   120   120   120   120   120    58    28
Thickness of metal foil [µm]     50    50    30    50    80    30    50    30    50    50
Cell density [cpsi]             260   419   735   705   664  1125  1073  1687   705   705
GSA [m²/dm³]                   2.93  3.59  4.39  4.2   3.95  5.0   4.76  5.0   4.2   4.2
PGM on washcoat mass [%]       0.93  0.85  0.63  0.65  0.69  0.55  0.58  0.47  1.4   2.8
Active component mass [g]      1.35  1.35  1.35  1.35  1.35  1.35  1.35  1.35  1.6   1.7
Washcoat mass [g]               146   159   216   207   195   245   234   285   122    61
Metal foil mass [g]             511   566   493   780  1159   579   908   713   459   230
Inner shell + pins mass [g]     458   458   458   458   458   458   458   458   222   105
Total mass [g]                 1116  1184  1167  1445  1812  1282  1600  1456   802   396

Table 2. The time to achieve 50 percent conversion in the NEDC vehicle test.

Converter #    Time to 50% conversion [s]    Rating
                  CO         HC
 1                94         90                 4
 2              93-99      91-95                5
 3                80         88                 2
 4                95         95                 7
 5                95         96                 9
 6                91         89                 5
 7                95         96                 9
 8                95         95                 7
 9                80         90                 3
10                73         76                 1

Prototype converters can also be evaluated with moderate-cost engine bench tests. A commonly used rating criterion is the light-off temperature, i.e., the temperature at which a certain conversion, typically 50 percent, is achieved. Such engine bench tests have been done for four converters. The light-off temperatures of the selected converters are shown in Table 3.

Table 3. Light-off temperatures of CO, HC and NOx for converters in the engine test.

Converter #    T50,CO [°C]    T50,HC [°C]    T50,NOx [°C]
2                 304            313             322
4                 300            305             313
6                 300            305             316
9                 313            320             344

The light-off temperature in the engine bench test does not correlate with the time to light-off in the NEDC tests, which is clearly seen by comparing Tables 2 and 3. Thus, the light-off temperature does not give any direct evidence of fast warm-up. However, the engine bench tests can be exploited in the construction of the converter model. Note that these experiments have been carried out with a slow input temperature rise, where near steady-state conditions prevail.

4. Results
The ability of the model to predict the converter's performance has been evaluated against the experimental data, which are obtained from the full-scale tests where the converter has been mounted in the exhaust stream of an engine, and from the NEDC vehicle tests. In this work the values of the reaction kinetic parameters have been estimated using part of the engine bench data. The obtained model gives good agreement with the measurements, including the bench test data not used in the estimation. The model has been used to predict the behaviour of the converter during the first few minutes of the NEDC vehicle test. Fast variations in the input concentrations are difficult to handle numerically. Especially in so-called fuel-cut conditions, where an extremely lean or rich exhaust gas is suddenly fed into the converter, the solving time of the model increases markedly and the solution for a near-complete conversion becomes unstable.

Simplified inlet stream dynamics have also been studied by simulation. Firstly, a stepwise change from ambient temperature (298 K) to 650 K has been assumed while the other inlet stream variables are kept constant. Secondly, a stepwise change to 600 K is applied 60 s after a stepwise change from 298 K to 440 K in the inlet gas temperature. This stream temperature dynamics mimics the main features of the temperature dynamics of the NEDC vehicle test. Finally, the inlet stream dynamics of the NEDC test is modified in such a way that the concentrations are kept constant and approximately the same as in the engine bench tests, while the inlet stream temperature and flow rate vary as in the NEDC vehicle test. In Table 4 the rating of the converters based on these simulations is shown. The simulations with simplified inlet dynamics give valuable guidance for the design of the catalytic converter structure. Practically the same warm-up is predicted as obtained from the NEDC vehicle test.

The differences between the times to light-off predicted by the modified NEDC and by the double stepwise inlet stream dynamics and the measured ones are relatively small. The oxygen storage components of the catalysts reduce the effect of concentration variations, which might be the reason why the simulations assuming constant inlet concentrations give good results.

Table 4. Simulated time to reach 50 percent conversion with single and double stepwise temperature changes as well as with the modified NEDC input dynamics.

               Stepwise               Double stepwise        Modified NEDC
Converter #  t50,CO [s]  t50,HC [s]  t50,CO [s]  t50,HC [s]  t50,CO [s]  t50,HC [s]
 1             16.8        21.3         85          89          93          93
 2             17.2        22.0         86          90          93          93
 3             15.5        20.3         84          88          89          93
 4             20.4        26.6         91          96          93          95
 5             27.0        35.2        100         107          97         106
 6             17.0        22.4         86          90          93          93
 7             22.2        29.4         94          99          96          99
 8             19.6        26.2         90          95          93          96
 9             12.0        15.4         79          82          84          92
10              7.3         9.2         72          74          74          78

The thermal mass has the most significant influence on the warm-up. Thus, the two heaviest converters (#5 and #7) warm up most slowly, whereas the shortest and lightest converter, #10, has the fastest warm-up. The structures of the next lightest converters, #9 and #3, differ from each other: converter #9 is shorter, whereas converter #3 was made of thinner metal foil. Both have approximately the same overall warm-up behaviour, but especially the results of converter #3 indicate a disadvantage of a fast thermal response: the converter not only heats up fast but also cools down fast. The impact of this can be seen in the late HC light-off in the modified NEDC simulation and in the oscillation of conversion during the NEDC vehicle test. The same effect can be seen when the results of converters #4 and #8 are compared. In the stepwise simulations the lighter converter #8 is better, but in the modified NEDC simulation and in the NEDC vehicle test the converters behave equally well. Once again, the reason for losing the better performance of converter #8 seems to be the larger heat transfer area, which gives a fast response to the inlet temperature variations. In the demonstrative aftertreatment system the inlet gas temperature is in the catalytic light-off region, i.e. the reaction rate is very sensitive to temperature during the warm-up of the catalytic converter. Boosting of the exothermic reactions is needed to move to higher operation temperatures. Thus, the converters are sensitive to temperature variations and heat transfer rates. In some other applications the inlet gas temperature might be higher, and a temporarily decreasing temperature then has a slighter effect on the catalyst light-off. If the converter can be mounted closer to the engine, a higher inlet gas temperature naturally follows. Thus, the converter structure should be optimised separately for each application and, if possible, together with a careful selection of the converter position.


5. Conclusions
In this work, reactor modeling and experimentation have been combined to study the warm-up period of exhaust gas catalytic converters. The influence of structural modifications can be effectively studied numerically, provided that an accurate model of the catalytic converter is available. Such a model can guide the converter design provided that the description of the chemical reaction kinetics is tuned for the catalyst at hand. Clearly, the thermal mass has the most significant influence on the catalytic converter warm-up. The heat transfer area between the gas and solid phases also affects the warm-up; this is most crucial when the inlet gas temperature is in the catalyst light-off region. The real gas stream has several input variables that change simultaneously in a complicated way, which leads to numerical problems. Thus, the presented simplified alternatives are attractive for the preliminary rating of catalytic converters. For the studied converters, a good prediction of warm-up has been obtained even when the concentration variations at the converter inlet are ignored.

6. References
Brandt, E.P., Wang, Y. & Grizzle, J.W. (2000) Dynamic modelling of a three-way catalyst for SI engine exhaust emission control. IEEE Transactions on Control Systems Technology, 8, 767.
Kangas, J., Ahola, J., Maunula, T., Korpijarvi, J. & Tanskanen, J. (2002) Automotive exhaust gas converter model for warm-up conditions. 17th International Symposium on Chemical Reaction Engineering, Hong Kong, China.
Koltsakis, G., Konstantinidis, P. & Stamatelos, A. (1997) Development and application range of mathematical models for 3-way catalytic converters. Applied Catalysis B: Environmental, 12, 161-191.
Koltsakis, G.C. & Stamatelos, A.M. (1999) Modeling dynamic phenomena in 3-way catalytic converters. Chemical Engineering Science, 54, 4567-4578.
Lacin, F. & Zhuang, M. (2000) Modeling and Simulation of Transient Thermal and Conversion Characteristics for Catalytic Converters. SAE Technical Paper Series 2000-01-0209.
Mukadi, L.S. & Hayes, R.E. (2002) Modelling the three-way catalytic converter with mechanistic kinetics using the Newton-Krylov method on a parallel computer. Computers and Chemical Engineering, 26, 439-455.
Schiesser, W.E. (1991) The Numerical Method of Lines. Academic Press, London.
Shampine, L.F. & Reichelt, M.W. (1997) The MATLAB ODE suite. SIAM Journal on Scientific Computing, 18, 1-22.



Gas-Liquid and Liquid-Liquid System Modeling Using Population Balances for Local Mass Transfer
Ville Alopaeus¹, Kari I. Keskinen¹,², Jukka Koskinen¹ and Joakim Majander³
¹ Neste Engineering Oy, POB 310, FIN-06101 Porvoo, Finland
² Helsinki University of Technology, Laboratory of Chemical Engineering and Plant Design, POB 6100, FIN-02015 HUT, Finland
³ Enprima Engineering Oy, POB 61, FIN-01601 Vantaa, Finland

Abstract
Gas-liquid and liquid-liquid stirred tank reactors are frequently used in the chemical process industries. The design and operation of such reactors are very often based on empirical design equations and heuristic rules derived from measurements over the whole vessel. This makes it difficult to use a more profound understanding of the process details, especially of local phenomena, and the scale-up task often fails when the local phenomena are not taken into account. By using CFD, the fluid flows can be examined more closely. Rigorous submodels can be implemented into commercial CFD codes to calculate local two-phase properties. These models are: population balance equations for the bubble/droplet size distribution, mass transfer calculation, chemical kinetics and thermodynamics. Simulation of a two-phase stirred tank reactor proved to be a feasible task. The results revealed details of the reactor operation that cannot be observed directly. It is clear that this methodology is also applicable to multiphase process equipment other than reactors.

1. Introduction
The computational fluid dynamics (CFD) approach has become a standard tool for analyzing various situations where fluid flow has an effect on the studied processes. Numerous studies using CFD in the chemical process industry have also been reported. Mostly, they have been simple cases where the system is non-reacting, contains only one phase (liquid or gas), or the physical properties are assumed constant. When dealing with multiphase systems such as gas-liquid or liquid-liquid systems, we must take into account phenomena that are not important for one-phase systems. The vapor-liquid or liquid-liquid equilibrium is one of these and is needed in order to model the system. In addition, mass and heat transfer between the phases must generally be taken into account, and the two-phase characteristics of the fluid flow need to be considered in the CFD models. Because CFD originates outside the field of chemical engineering or reaction engineering, the CFD program packages as such are normally not particularly well suited for modeling complex chemical reactions or rigorous thermodynamics. Fortunately, the CFD program vendors have noticed this as they have provided the


possibility to include user code with their flow solver. In some cases, however, this is not quite a straightforward task.

2. Population Balances
In the population balances, the local bubble size distribution is modeled. In practice, this means that the numbers of bubbles of various sizes are counted. The bubble size distribution is discretized into a number of size categories, and the number of bubbles belonging to each size category is counted in each CFD volume element. The dispersed phase is here referred to as bubbles, but it may consist of liquid droplets or solid precipitates as well. The source terms for the bubble numbers are due to breakage and coalescence of bubbles, and to mass-transfer-induced size change. Other sources (such as formation of small bubbles through nucleation mechanisms) were neglected in this study. The discretized population balance equation can then be written in the following form:

dY_i/dt = Σ_{j=i+1}^{NC} g(a_j) β(a_i, a_j) Δa_i Y_j − g(a_i) Y_i + (1/2) Σ_{j=1}^{i−1} F(a_j, a_{i−j}) Y_j Y_{i−j} − Y_i Σ_{j=1}^{NC} F(a_i, a_j) Y_j        (1)

Various models for bubble breakage and coalescence rates are presented in the literature. These rates usually depend on physical properties, such as densities, viscosities and surface tension, and on turbulence properties, most commonly the turbulent kinetic energy dissipation rate. To calculate local bubble size distributions, local physical properties and the local turbulence level should also be used. This can be done via CFD (Alopaeus et al., 1999, 2002; Keskinen and Majander, 2000). For the breakage frequency, the following function was used:

g(a_i) = C₁ ε^{1/3} erfc( √[ C₂ σ / (ρ_c ε^{2/3} a_i^{5/3}) + C₃ µ_c / (√(ρ_c ρ_d) ε^{1/3} a_i^{4/3}) ] )        (2)

For the daughter number distribution, β, the beta function is used:

β(a_i, a_j) = 90 (a_i² / a_j³) (a_i³ / a_j³)² (1 − a_i³ / a_j³)²        (3)

For the bubble coalescence rate, the following function is used:

F(a_i, a_j) = C₄ ε^{1/3} (a_i + a_j)² (a_i^{2/3} + a_j^{2/3})^{1/2} λ(a_i, a_j)        (4)


In the population balance equations, the number density of the bubbles is counted. This approach has been used in the simulation of two-phase processes in flowsheet simulators and in the testing of the population balance models. However, in the CFD code, the bubbles are divided into size categories according to mass fractions. Thus an additional interface code is needed between the population balance subroutines used in a flowsheet simulator and those used in CFD. To calculate local mass transfer rates, the local mass transfer area is obtained from the bubble size distributions. Mass transfer fluxes are calculated in a separate subroutine, and the mass transfer rate is obtained by multiplying the mass transfer area by the mass transfer fluxes.
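The last step, obtaining the volumetric transfer rate as interfacial area times flux, can be sketched numerically. The class diameters, number densities and flux value below are illustrative placeholders, not data from the paper; only the formulas (the Sauter mean from the symbol list, and rate = area × flux) follow the text.

```python
import numpy as np

# Discretised bubble number distribution (illustrative values):
a = np.array([0.5e-3, 1.0e-3, 2.0e-3, 4.0e-3])   # class diameters [m]
Y = np.array([4.0e6, 2.0e6, 5.0e5, 1.0e5])       # number densities [1/m3]

# Sauter mean diameter a32 = sum(Y a^3) / sum(Y a^2)
a32 = np.sum(Y * a**3) / np.sum(Y * a**2)        # [m]

# Interfacial area per unit volume from the number distribution
area = np.pi * np.sum(Y * a**2)                  # [m2/m3]

# Multiply by a mass transfer flux (here an assumed constant) to get
# the volumetric mass transfer rate
N_flux = 1.0e-4                                  # [mol/(m2 s)], assumed
rate = area * N_flux                             # [mol/(m3 s)]
```

The Sauter mean falls between the smallest and largest class diameters by construction, which is a quick sanity check on the distribution arithmetic.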

3. Example 1: Gas-Liquid Stirred Tank Reactor

In the first example, a stirred tank reactor with a gaseous reactant feed was modeled. In the following figures, a detail of the stirred tank reactor is shown. The reactor is modeled with CFD, and user subroutines are implemented for the population balance calculation, local mass transfer flux calculation, chemical kinetics and thermodynamics. All these were solved simultaneously with the CFD code.


Figure 1. Gas volume fraction distribution (left) and Sauter mean bubble diameter (right) in a gas-liquid stirred tank reactor.

4. Example 2: Liquid-Liquid Stirred Tank for Drop Size Measurements In the second example, a stirred tank with two liquid phases was modeled. The tank was used to measure liquid drop size distributions for fitting of the population balance parameters. The tank was modeled with CFD in order to examine the flow patterns and turbulence levels in the tank.



Figure 2. Sauter mean drop diameters for a liquid-liquid system. Measuring probe is located at the front.

5. Conclusions
CFD has become a standard tool for analyzing flow patterns in various situations related to chemical engineering. In many cases involving multiphase reactors, mass transfer limits the overall chemical reaction. In these cases the accurate calculation of local mass transfer rates is of utmost importance. This is best done with the population balance approach, where local properties are used to model bubble or droplet breakage and coalescence phenomena. It has been shown that these rigorous models, along with other multiphase and chemistry-related models, can be implemented in a CFD code and solved simultaneously with the fluid flows.


6. Symbols
Δa         width of droplet class (m)
a          drop diameter (m)
a32        Sauter mean diameter, a32 = Σ ai³ / Σ ai² (m)
C1...C4    empirical constants
F(ai, aj)  binary coalescence rate between droplets ai and aj in unit volume (m³ s⁻¹)
g(ai)      breakage frequency of drop size ai (s⁻¹)
Yi         number concentration of drop class i (m⁻³)
ε          turbulent energy dissipation (per unit mass) (m² s⁻³)
λ(ai, aj)  collision efficiency between bubbles ai and aj
µ          viscosity (Pa s)
ρ          density (kg m⁻³)
σ          interfacial tension (N m⁻¹)

7. References
Alopaeus, V., Koskinen, J., Keskinen, K.I. and Majander, J., Simulation of the Population Balances for Liquid-Liquid Systems in a Nonideal Stirred Tank, Part 2: Parameter fitting and the use of the multiblock model for dense dispersions, Chem. Eng. Sci. 57 (2002), pp. 1815-1825.
Alopaeus, V., Koskinen, J. and Keskinen, K.I., Simulation of the Population Balances for Liquid-Liquid Systems in a Nonideal Stirred Tank, Part 1: Description and Qualitative Validation of the Model, Chem. Eng. Sci. 54 (1999), pp. 5887-5899.
Keskinen, K.I. and Majander, J., Combining Complex Chemical Reaction Kinetics Model and Thermophysical Properties into Commercial CFD Codes, presentation at the AIChE 2000 Annual Meeting, Los Angeles, CA.




Robust Optimization of a Reactive Semibatch Distillation Process under Uncertainty
H. Arellano-Garcia, W. Martini, M. Wendt, P. Li, G. Wozny
Institute for Process and Plant Technology, Technische Universität Berlin, D-10623 Berlin, Germany

Abstract
Deterministic optimization has been the common approach for batch distillation operation in previous studies. Since uncertainties exist, the results obtained by deterministic approaches may carry a high risk of constraint violations. In this work, we propose to use a stochastic optimization approach under chance constraints to address this problem. A new scheme for computing the probabilities and their gradients, applicable to large-scale nonlinear dynamic processes, has been developed and applied to a semibatch reactive distillation process. The kinetic parameters and the tray efficiency are considered to be uncertain. The product purity specifications are to be ensured with chance constraints. A comparison of the stochastic results with the deterministic results is presented to indicate the robustness of the stochastic optimization.

1. Introduction
In the chemical industry, operation policies of batch processes are mostly determined by heuristic rules. In previous studies, deterministic optimization approaches have been used (Low et al., 2002; Li et al., 1998; Arellano-Garcia et al., 2002), based on a model with constant parameters. Since the operation policy developed is highly sensitive to the model parameters and boundary conditions, product specifications may often be violated when implementing it in the real plant. For a reactive batch distillation, the chemical reaction kinetic parameters in the Arrhenius equation are usually considered as uncertain parameters, since they are often determined from a limited number of experimental data. The tray efficiency is another uncertain parameter which is important for the batch operation. Furthermore, the amount and composition of the initial charge are also uncertain, since they are mostly product outputs of a previous batch. To achieve a robust and reliable operation policy, a stochastic optimization approach has to be considered. This work is focused on developing robust optimal operation policies for a reactive semibatch distillation process, taking into account the uncertainties of the model parameters. Under the uncertainties, the product specifications are to be satisfied with a predefined confidence level. This leads to a chance constrained dynamic nonlinear optimization problem. Wendt et al. (2002) proposed an approach to nonlinear chance constrained problems, in which a monotone relation between one uncertain input and the constrained output is exploited. We extend this approach to solve dynamic nonlinear problems under uncertainty and apply it to the batch distillation optimization problem.


2. Problem Description

We consider an industrial reactive semibatch distillation process. A trans-esterification of two esters and two alcohols takes place in the reboiler. A limited amount of educt alcohol is fed to the reboiler to increase the reaction rate. The product alcohol is distilled from the reboiler to shift the reaction in the product direction. In the main-cut period the product alcohol is accumulated with a given purity specification. In the off-cut period, the reaction proceeds to the end of the batch and results in a mixture of the product ester and the educt alcohol in the reboiler. The composition of the educt ester is required to be smaller than a specified value, so that a difficult separation step can be avoided. The aim of the optimization is to minimize the batch time. The independent variables of the problem are the feed flow rate F of the educt alcohol and the reflux ratio Rv. The deterministic and the stochastic nonlinear dynamic optimization problems can be formulated as follows:

min t, (F (t),R^(t), t,, t J

s.t. the model equation system and

s.t. the model equation system and

Dj>Dr"

(PI)

Di>D7

^ A,NST ( t f ) ^ X A,NST

" l ^ A,NST (^ f ) — ^ A,NST J — ^ 2

rF(t)dt=Mi

rF(t)dt = M,
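A chance constraint of the form P{x(t_f) ≤ x_max} ≥ α can be illustrated with a plain Monte Carlo check. The toy output model and the parameter distribution below are assumptions for the sketch only; the paper computes these probabilities with a dedicated scheme exploiting monotonicity, not by sampling.

```python
import random

random.seed(1)

def end_composition(k):
    # Toy monotone map from an uncertain kinetic parameter k to the
    # constrained end-of-batch composition (assumed linear here).
    return 0.02 + 0.01 * k

def chance_constraint_satisfied(x_max=0.05, alpha=0.9, n=10000):
    """Estimate P{end_composition(k) <= x_max} by sampling the
    uncertain parameter, and compare it with the confidence level."""
    hits = 0
    for _ in range(n):
        k = random.gauss(1.0, 0.5)      # uncertain kinetic parameter
        if end_composition(k) <= x_max:
            hits += 1
    return hits / n >= alpha

ok = chance_constraint_satisfied()
```

Tightening `x_max` lowers the satisfied probability, so the same policy can pass a loose purity bound and fail a tight one; this is exactly the trade-off the chance constraints encode.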

Image 2: r = 1. Image 3: r = 2. Image 4: r = 3. Image 5: r = 4. Image 6: r = 4 (increased continuity).

Figure 1: Scan lines (directions NW, N, NE, SW, S, SE).

2.2. Border thinning
The thickness of the borders identified by the local intensity minima method needs to be reduced, with care taken not to lose the connectivity and continuity of the borders. An iterative algorithm, with a series of conditions, is developed for this procedure to clearly mark the edge points. Each edge point in an image has 8 neighboring pixels, which are numbered from 1 to 8 as shown in Figure 2. Two values N and S, which are used in the conditions, are defined as the number of edge points and the number of edge/non-edge (and vice versa) transitions in the ordered sequence of the neighboring pixels, respectively. Every edge point in the image which satisfies all conditions in the first series is marked/flagged. Once the whole image has been checked, the flagged pixels are removed. The second stage of the procedure is similar to the first but with a different condition series. These two stages are repeated iteratively until no further pixel satisfies the conditions, in other words until no further pixel may be removed. With the suggested modifications, the border thinning process achieved the real skeleton image with all non-edge points deleted. The resultant image is given in Image 7.
2.2.1. First condition series
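The two neighbourhood quantities N and S can be computed directly from the 8-neighbour sequence. A sketch, assuming a circular (e.g. clockwise) ordering of the neighbours; the exact numbering convention of Figure 2 is not reproduced here:

```python
def neighbour_counts(p):
    """Given the 8 neighbours of a pixel in circular order
    (1 = edge point, 0 = non-edge point), return (N, S):
    N - number of edge-point neighbours,
    S - number of edge/non-edge transitions (either direction) in the
        circular sequence, as used by the thinning condition series.
    """
    N = sum(p)
    S = sum(1 for i in range(8) if p[i] != p[(i + 1) % 8])
    return N, S
```

An alternating neighbourhood gives the maximum S of 8, while a single solid run of edge points gives S = 2, which is the kind of distinction the condition series uses to protect connectivity.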


Figure 3. Pareto optimal solutions obtained for problem 1 using the five-lump model.


4. Conclusions
Two different kinetic lumping models are tuned in order to simulate an industrial FCC unit. Operational insights are developed by performing a multiobjective optimisation study using the non-dominated sorting genetic algorithm. Pareto optimal solutions are obtained for the different objective functions and constraints considered, which are expected to help process engineers locate a favoured solution.

5. References
Ancheyta, J.J., Lopez, I.F., Aguilar, R.E., Moreno, M.J., 1997, A Strategy for Kinetic Parameter Estimation in the Fluid Catalytic Cracking Process, Ind. Eng. Chem. Res., 36, 5170-5174.
Avidan, A.A., Shinnar, R., 1990, Development of Catalytic Cracking Technology. A Lesson in Chemical Reactor Design, Ind. Eng. Chem. Res., 29, 931-942.
Dave, D.J. and Saraf, D.N., 2002, A model suitable for rating and optimization of industrial FCC units, selected for publication in Indian Chemical Engineer.
Deb, K. and Srinivas, N., 1995, Multiobjective optimization using nondominated sorting in genetic algorithms, Evol. Comput., 2, 106-114.
Gary, J.H. and Handwerk, G.E., 1993, Petroleum Refining, Technology and Economics, 3rd ed., Marcel Dekker.
Gupta, S.K., 1995, Numerical Methods for Engineers, Wiley Eastern/New Age Intl.
Jacob, S.M., Gross, B., Voltz, S.E., Weekman, V.M., Jr., 1976, A Lumping and Reaction Scheme for Catalytic Cracking, AIChE J., 22(4), 701-713.
Krishna, A.S. and Parkin, E.S., 1985, Modeling the Regenerator in Commercial Fluid Catalytic Cracking Units, Chem. Eng. Prog., 81(4), 57-62.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.

629

Novel Operational Strategy for the Separation of Ternary Mixtures via Cyclic Operation of a Batch Distillation Column with Side Withdrawal

D. Demicoli and J. Stichlmair
Lehrstuhl für Fluidverfahrenstechnik, Technische Universität München, Boltzmannstr. 15, D-85748, Germany, email: [email protected]

Abstract
In this paper we introduce a novel operational policy for the purification of an intermediate boiling component via batch distillation. The novel operational policy is based on feasibility studies of a cyclic distillation column provided with a side withdrawal. The process is validated via computer-based simulations. Furthermore, the effects of the most important process parameters are investigated.

1. Introduction

Batch distillation is a very efficient and advantageous unit operation for the separation of multicomponent mixtures into pure components. Due to its flexibility and low capital costs, batch distillation is becoming increasingly important in the fine chemicals and pharmaceutical industries. Nevertheless, there are intrinsic disadvantages associated with conventional batch processes: long batch times, high temperatures in the charge vessel and complex operational strategies. Hence, alternative processes and operating policies with the potential to overcome these disadvantages are being extensively investigated. Sørensen and Skogestad (1996) compared the operation of regular (fig. 1a) and inverted (fig. 1b) batch distillation columns for the separation of binary mixtures. In a later work, Sørensen and Prenzler (1997) investigated the cyclic, or closed, operation for the separation of binary mixtures. Warter et al. (2002) presented simulations and experimental results for the separation of ternary mixtures in the middle vessel column (fig. 1c); the cyclic operation was applied in this case as well. Multicomponent mixtures can be separated in the multi-vessel distillation column, which may also be operated in closed operation (Wittgens et al., 1996). In this paper we introduce a novel process for the separation of ternary mixtures via cyclic operation of a batch distillation column provided with a side withdrawal (fig. 1d). This consists of a distillation column equipped with sump and distillate vessels, to which the charge is loaded at the beginning of the process, and a liquid withdrawal section placed in the middle of the column.

Fig. 1: Different column types: (a) regular, (b) inverted, (c) middle vessel and (d) novel cyclic batch distillation column with side withdrawal.

2. Feasibility
The column shown in figure 1d can be visualised as an inverted batch distillation column placed on top of a regular batch column, the two being connected at the withdrawal stage. Hence, feasibility studies for the regular and inverted batch distillation columns may be applied to the novel process, provided that the concentration of the withdrawal tray lies on the column's profile. Therefore, pure intermediate-boiling product b can be obtained from an infinite column operated at infinite reflux ratios only if the distillate and sump vessels contain the binary mixtures a-b (light-intermediate boilers) and b-c (intermediate-heavy boilers), respectively.

3. Process
The charge was initially distributed equally between the sump and distillate vessels. The column was then operated in a sequence of two process steps:
a) Closed operation mode. During this step, the light and heavy boilers were accumulated in the distillate and sump vessels, respectively (fig. 2a, b). The column was operated at total reflux, with no side-product withdrawal, until the concentration of the high boiler in the distillate vessel and that of the low boiler in the sump were sufficiently low.
b) Open operation mode. During this step the withdrawal stream divided the column into an inverted (top) and a regular (bottom) batch column. The reflux ratio of the lower column was used to control the heavy boiling impurity c in the withdrawal stream, and the reboil ratio of the inverted column was analogously used to control the light boiling impurity.

The internal reflux and reboil ratios are related to the flow rate of the withdrawal stream through the mass balance around the withdrawal stage:

W = L_U − L_L = V · (1/R_B − R_L)    (1)

At the end of the process, the internal reflux and reboil ratios were equal to unity and the flow of the withdrawal stream was equal to zero, i.e. R_L = R_B = 1; W = 0 (fig. 2d).


Fig. 2: Hold-up in (a) distillate vessel, (b) sump, (c) side product accumulator and (d) internal reflux and reboil ratios.
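The mass balance of equation (1) can be illustrated numerically; the flow and ratio values in the sketch below are assumed for illustration only, not taken from the paper:

```python
def withdrawal_flow(V, RB, RL):
    """Mass balance around the withdrawal stage, eq. (1):
    W = L_U - L_L = V * (1/RB - RL)."""
    return V * (1.0 / RB - RL)

# During open operation the internal ratios differ, so a side stream is drawn
# (assumed values: vapour flow 10 mol/s, RB = 0.8, RL = 0.9):
W_open = withdrawal_flow(V=10.0, RB=0.8, RL=0.9)

# At the end of the process RL = RB = 1, so the withdrawal flow vanishes:
W_end = withdrawal_flow(V=10.0, RB=1.0, RL=1.0)
```

The second call reproduces the stated end-of-process condition W = 0.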

4. Composition of the Charge
To study the effect of the composition of the charge, equal amounts of feeds of different compositions were processed in the same column operated in closed loop. The separation was carried out in the shortest time when the charge was rich in the intermediate boiling component (fig. 3a). Furthermore, even though the duration of the start-up step increased with decreasing concentration of b in the feed, its effect was of minor importance compared with the increase in the duration of the production step (open operation mode). This was because, for feeds rich in b, both the light/intermediate and the intermediate/heavy separations could be carried out at low reflux and reboil ratios at the beginning of the process (fig. 3b). On the other hand, if the charge contained low amounts of the intermediate boiler, both the light/intermediate and the heavy/intermediate separations required high reflux and reboil ratios. This agrees with the results obtained by Sørensen and Skogestad (1996) in their comparative studies on the regular and inverted batch columns. For feeds containing low amounts of b, the recovery dropped significantly; hence, the process time decreased for very low concentrations of b in the charge. Therefore, our investigations were limited to the case in which the feed was much richer in b than in a and c. In such cases the relative content of a and c played a minor role and influenced mainly the duration of the start-up of the process, i.e. the closed operation mode.


Fig. 3: Effect of composition of charge on (a) duration of the process, (b) internal reflux and inverse of the internal reboil ratios.

5. Effects of the Geometric Parameters
The geometric parameters of the process were identified as the total number of stages and the position of the withdrawal tray.
5.1. Number of stages
The total number of stages was varied while the position of the withdrawal tray was kept in the middle of the column and the composition controllers were placed two stages


Fig. 4: Effect of total number of stages on (a) recovery and (b) purity of the products.

below and two stages above the withdrawal tray. The set-points of the two controllers were not varied during this investigation. Hence, the concentration profile around the withdrawal tray was fixed by the two control loops, and the composition of the intermediate boiler was independent of the number of stages. With an increasing number of stages, lower reflux ratios were required to achieve high purity b; hence, the recovery rate of the intermediate boiler increased and the process time decreased. The concentration of b in the top and sump vessels at which the process became infeasible decreased with increasing number of stages. Hence, the recovery of b (and the purity of the light and heavy boilers) increased with the number of stages (fig. 4).
5.2. Position of withdrawal tray
The position of the withdrawal stage determined the relative size of the two column sections. By shifting the withdrawal tray upwards, the upper column section became smaller, so the purity of the light boiling product decreased while that of the heavy boiler increased, and vice-versa. Since the control loops fixed the concentration profile around the withdrawal stage, the purity of the intermediate boiler was not affected. On the other hand, as the withdrawal tray was moved away from the middle of the column, the recovery rate of the intermediate boiler decreased.

6. Termination Criteria for the First Process Step
Increasing the duration of the first process step reduced the concentration of the light boiler present in the sump of the column, and that of the heavy boiler in the top vessel, at the beginning of the second step (fig. 5b). Hence, with increasing duration of the first process step, the concentration of b in the column at the beginning of the second process step increased. This led to an increased concentration of the middle boiling product and to an increased recovery of the light and heavy boiling products (fig. 5).


Fig. 5: Effect of the duration of the start-up on (a) purity and recovery and (b) moles of c in the distillate vessel at the end of the start-up phase.


7. Set-Point to Composition Controllers
The control loop of the upper column section controlled the composition of the low boiler a in the liquid phase two stages above the withdrawal stage, while the lower control loop controlled the composition of the high boiling impurity c two stages below the withdrawal stage. Hence, the concentration of impurities in the withdrawal stream increased with increasing set-points. The duration of the process increased with decreasing set-point, because higher reflux and reboil ratios were required to reach the lower set-points, i.e. high purity b. Set-points lower than the concentration reachable at infinite reboil and reflux ratios were infeasible.

8. Conclusion
In this paper we have introduced a novel operational policy for the purification of an intermediate boiling component via the cyclic operation of a batch distillation column with a side withdrawal. The feasibility of the process was investigated by considering the novel column configuration as an inverted batch distillation column placed over a regular batch column. A novel operating strategy, based on the feasibility studies, was developed and verified by computer-aided simulations. Furthermore, the influence of the most important parameters on the performance of the process was systematically investigated.

9. Notation
a = low boiling component; b = intermediate boiling component; c = high boiling component
B = bottom fraction; D = distillate fraction; M = middle vessel fraction; SP = side product accumulator
Ḃ = flow rate of bottom product [mol/s]; Ḋ = flow rate of distillate product [mol/s]; W = flow rate of withdrawal stream [mol/s]
L = liquid flow rate [mol/s]; V = vapour flow rate [mol/s]
RL = reflux ratio; RB = reboil ratio
x = molar fraction
σ = recovery [mol/mol]; σ̇ = recovery rate [mol/s]

10. References
Sørensen, E., Skogestad, S., 1996, Comparison of regular and inverted batch distillation, Chem. Engng. Sci., Vol. 51, No. 22, 4949-4962.
Sørensen, E., Prenzler, M., 1997, A cyclic operating policy for batch distillation: theory and practice, Comp. Chem. Engng., Vol. 21, Suppl., S1215-S1220.
Warter, M., Demicoli, D., Stichlmair, J., 2002, Batch distillation of zeotropic mixtures in a column with a middle vessel, Comp. Aided Chem. Engng., Vol. 10, 385-390.
Wittgens, B., Litto, R., Sørensen, E., Skogestad, S., 1996, Total reflux operation of multivessel batch distillation, Comp. Chem. Engng., Vol. 20, Suppl., S1041-S1046.


Modelling and Optimisation of a Semibatch Polymerisation Process

Ludwig Dietzsch*, Ina Fischer, BTU Cottbus, Lehrstuhl Prozesssystemtechnik, Postfach 101344, D-03013 Cottbus
Stephan Machefer, BTU Cottbus, Lehrstuhl Chemische Reaktionstechnik, D-03013 Cottbus
Hans-Joachim Ladwig, BASF Schwarzheide GmbH, D-01986 Schwarzheide

Abstract
This paper focuses on the modelling and optimisation of an industrial semibatch distillation process with a polymerisation reaction taking place in the reboiler. The dynamic model presented here is implemented in CHEMCAD and validated with experimental data from the industrial plant. An approach to optimising the economic performance of the semibatch process is discussed. As a result of the work so far, the control structures and operating policies were improved, and it was shown that further optimisation is worthwhile.

1. Introduction
Because of the increasing trend toward the production of low-volume/high-cost materials, batch and semibatch processes are becoming more and more important. In today's competitive markets, this implies the need for consistently high quality and improved performance. Over the last few years there has been growing interest in techniques for the determination of optimal operating policies for batch processes. Dynamic simulation has become a widely used tool in analysis, optimisation, control structure selection and controller design. Some of the most recent work has been concerned with the mathematical optimisation of batch process performance (Li, 1998; Li et al., 1998). In this paper an industrial semibatch polymerisation process is considered. Tightly controlled reaction conditions are necessary to guarantee the product quality. The general aim of this work is to ascertain optimal state and control profiles and to develop a model-based control scheme. As a first step, this paper introduces the dynamic model, which is validated with experimental data, and describes the optimisation approach. One aim of the work is to assess the capabilities of the commercial flowsheet simulator CHEMCAD in optimising the performance of semibatch polymerisation processes. Finally, the formulation of the mathematical optimisation problem, solution strategies and their implementation in CHEMCAD are discussed.

* To whom correspondence is to be addressed. Fax: ++49 355 691130. Phone: ++49 355 691119. E-mail: [email protected]


2. Process Description
The industrial process (Figure 1) consists of a reactor (acting as the reboiler), a packed column, a total condenser and two distillate vessels. The polymer is manufactured through reversible linear polycondensation, or step-growth polymerisation. The overall reaction can be characterised by the following scheme:

dialcohol (A) + dicarboxylic acid (B) ⇌ polyester (P) + water (C)

Actually, the reaction mechanism is much more complex (Section 3.1) and leads to a polymer chain length distribution. The polyesterification is an exothermic reaction. At the beginning, dialcohol and dicarboxylic acid are charged to the reactor. The reactor is then heated up to operating temperature. A further amount of dialcohol is fed to the reactor during the batch. Water is distilled from the reboiler, and an excess of dialcohol is used to shift the reaction equilibrium to the product side. In the first period the pressure is kept constant. The distillate, nearly pure water, is accumulated in the first vessel. As the reaction progresses it becomes more difficult to remove the condensate. Hence, the pressure is reduced in the second period to evaporate the remaining water. The concentration of dialcohol in the distillate increases, and in this period the distillate is accumulated in the second vessel. The end of the batch is reached when the product shows the required acid value, carboxyl number and viscosity. Temperatures, pressures and flow rates are measured on-line (Figure 1). Furthermore, the reaction is followed by on-line determination of viscosity, acid value and carboxyl number. The water content in the liquid polymer is found by off-line analysis. The major costs arise from the raw materials and the hourly costs of energy and wages. Thus a reduction of the batch time and of the loss of dialcohol through the distillate is desirable. In addition, better control should provide stable operation and less variable batch times.

Figure 1. The semibatch process.


3. Modelling and Simulation

3.1. Rigorous modelling
The model is built in CHEMCAD with the add-ons CC-DColumn and CC-Reacs. Different control loops are implemented. The characterisation of the complex kinetics of the polymerisation reaction is a very important part of the modelling.

Reaction kinetics
According to Flory (1937, 1939, 1940), self-catalysed polyesterifications follow third-order kinetics with a second-order dependence on the carboxyl group concentration and a first-order dependence on the hydroxyl group concentration. Experimental verifications show deviations for conversions below 80%; the reaction then follows second-order kinetics with a first-order dependence on both the carboxyl group and the hydroxyl group concentration, indicating a bimolecular reaction between one carboxyl group and one hydroxyl group. A simplified approach is chosen for the dynamic CHEMCAD model. Following Flory's investigations, the polyesterification is described by a consecutive parallel reaction scheme:

(I)   ν_A^I A + ν_B^I B ⇌ ν_O^I O + ν_C^I C    (1)

(II)  ν_O^II O + ν_A^II A ⇌ ν_P^II P + ν_C^II C    (2)

Two model components are introduced, an oligomer (O) as intermediate product and the polymer (P) as final product. The chain length distribution is not considered in the model. The oligomer and the polymer are characterised by the average molecular weight and chain length. The following rate equations are considered for the polyesterification:

λ^I = k_f^I · [A]^(a_A^I) · [B]^(a_B^I) − k_r^I · [O]^(a_O^I) · [C]^(a_C^I)    (3)

λ^II = k_f^II · [O]^(a_O^II) · [A]^(a_A^II) − k_r^II · [P]^(a_P^II) · [C]^(a_C^II)    (4)
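The two-reaction lumped scheme with rates (3) and (4) can be integrated as a small ODE system. The sketch below uses hypothetical rate constants and takes all reaction orders as 1 (the paper fits its own values from literature and plant data), so it only illustrates the structure of the model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rate constants for illustration only.
kf1, kr1 = 5e-3, 1e-4   # reaction I:  A + B <-> O + C
kf2, kr2 = 2e-3, 1e-4   # reaction II: O + A <-> P + C

def rates(t, y):
    A, B, O, P, C = y
    r1 = kf1 * A * B - kf1 * 0 - kr1 * O * C   # eq. (3), all orders taken as 1
    r2 = kf2 * O * A - kr2 * P * C             # eq. (4)
    # Stoichiometry: A is consumed by both steps, O is made then consumed,
    # and one molecule of water C is released per esterification event.
    return [-r1 - r2, -r1, r1 - r2, r2, r1 + r2]

y0 = [5.0, 4.0, 0.0, 0.0, 0.0]                 # initial concentrations, mol/kg
sol = solve_ivp(rates, (0, 5000), y0, rtol=1e-8)
A, B, O, P, C = sol.y[:, -1]
```

Two invariants follow from the stoichiometry and serve as a consistency check: B + O + P stays at its initial value (every acid unit is acid, oligomer, or polymer), and A + C stays at its initial value.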

The kinetic parameters of the polyesterification were determined from the literature (Beigzadeh and Sajjadi, 1995; Chen and Hsiao, 1981; Chen and Wu, 1982; Kuo and Chen, 1989) and from process data.

Thermodynamic model
The vapour-liquid equilibrium is described by the NRTL equation, considering a non-ideal liquid phase. The NRTL parameters for the system dialcohol/water are taken from Gmehling (1991, 1998). Because experimental data are missing for the carboxylic acid systems and, of course, for the model components (the oligomer and the polymer), UNIFAC with different modifications (Gmehling and Wittig, 2002; Larsen and Rasmussen, 1987; Torres-Marchal and Cantalino, 1986) is used to predict the vapour-liquid equilibrium and to determine NRTL parameters from it. Since there are considerable differences between the prediction methods (Figures 2-3), this choice has an important effect on the simulation results.

Figure 2. Vapour-liquid equilibrium oligomer-dialcohol at 5 kPa.

Figure 3. Vapour-liquid equilibrium oligomer-dialcohol at 101 kPa.

3.2. Simulation results The model was validated with experimental data from the industrial site. Figures 4-5 show selected simulation results in comparison with the measured profiles. The results are satisfactory.


Figure 4. Simulated and measured concentration profiles in the reactor.


Figure 5. Simulated and measured amount of distillate.

The dynamic model is employed to analyse the batch performance, to investigate different control loops and to identify potential improvements. Suggestions for improving the control structures and the operating policies can be derived from the dynamic simulation, leading to better performance and shorter, less variable batch times.

4. Optimisation Approach
The objective of the optimisation is to minimise the batch time. Feed and reflux ratio profiles are considered as decision variables within the optimisation problem. Constraints to be taken into account are the product specifications (acid value, carboxyl number, viscosity, water content), the feed amount and the limitation of the feed flow rate. The model DAEs are discretised, and the resulting algebraic system is optimised with an NLP algorithm (e.g. an SQP solver). The objective function and the constraints can be defined as VBA macros and then computed by CHEMCAD. Present work is concerned with the implementation of the optimisation algorithm.

5. Conclusions
In this paper a dynamic model for a semibatch polymerisation process was presented. It was validated with experimental data from the industrial site and used for simulating the process. The simulation results show that the model can adequately describe the process and it therefore constitutes the basis for the optimisation. The flowsheet simulator CHEMCAD has proved an efficient and powerful tool for the modelling, simulation and optimisation of semibatch polymerisation processes. The findings gained from the dynamic simulation have already allowed the batch operating time to be shortened and its variation to be reduced, improving the economic performance of the industrial process. A mathematical optimisation approach is now being implemented to determine optimal operating policies. Future work will deal with the implementation of the optimal trajectories considering process disturbances, as well as with on-line optimisation.

6. Nomenclature
[...] = concentration, mol/kg
A = dialcohol; B = dicarboxylic acid; C = condensate (water); O = oligomer; P = polymer
k = reaction rate constant
t = time, s
a = reaction order

Greek letters
λ = reaction rate, mol/(kg·s)
ν = stoichiometric coefficient

Subscripts
f = forward reaction; r = reverse reaction

Superscripts
I = first reaction, equation (1); II = second reaction, equation (2)

7. References
Beigzadeh, D. and Sajjadi, S., 1995, J. Polym. Sci. Part A: Polym. Chem., 33, 1505.
Chen, S.A. and Hsiao, J.C., 1981, J. Polym. Sci. Part A: Polym. Chem., 19, 3123.
Chen, S.A. and Wu, K.C., 1982, J. Polym. Sci. Part A: Polym. Chem., 20, 1819.
Flory, P.J., 1937, JACS, 59, 466.
Flory, P.J., 1939, JACS, 61, 3334.
Flory, P.J., 1940, JACS, 62, 2261.
Gmehling, J., 1991, Vapor-Liquid Equilibrium Data Collection, Vol. 1: Aqueous Organic Systems, Chemistry Data Series, DECHEMA, Frankfurt.
Gmehling, J., 1998, Vapor-Liquid Equilibrium Data Collection, Vol. 1a: Aqueous Organic Systems, Chemistry Data Series, DECHEMA, Frankfurt.
Gmehling, J. and Wittig, R., 2002, Ind. Eng. Chem. Res., 28, 445.
Kuo, C.T. and Chen, A.S., 1989, J. Polym. Sci. Part A: Polym. Chem., 27, 2793.
Larsen, B.L. and Rasmussen, P., 1987, Ind. Eng. Chem. Res., 26 (11), 2274.
Li, P., Garcia, H.A., Wozny, G. and Renter, E., 1998, Ind. Eng. Chem. Res., 37 (4), 1341.
Li, P., 1998, Entwicklung optimaler Führungsstrategien für Batch-Destillationsprozesse, VDI Verlag, Düsseldorf.
Logsdon, J.S. and Biegler, L.T., 1993, Ind. Eng. Chem. Res., 32 (4), 692.
Reid, R., Prausnitz, J.M., Poling, B.E., 1987, The Properties of Gases and Liquids, McGraw-Hill, New York.
Torres-Marchal, C. and Cantalino, A.L., 1986, Fluid Phase Equilibria, 29, 69.


A Global Approach for the Optimisation of Batch Reaction-Separation Processes

S. Elgue (a), M. Cabassud (a), L. Prat (a), J.M. Le Lann (a), J. Cezerac (b)
(a) Laboratoire de Génie Chimique, UMR 5503, CNRS/INPT(ENSIACET)/UPS, 5 rue Paulin Talabot, B.P. 1301, 31106 Toulouse Cedex 1, France
(b) Sanofi-Synthelabo, 45 Chemin de Meteline, B.P. 15, 04201 Sisteron Cedex, France

Abstract
Optimisation of fine chemistry syntheses is often restricted to a dissociated approach to the process, in which the optimal conditions of each operating step are determined separately. In this paper, a global approach to synthesis optimisation is presented. Focusing on the propylene glycol synthesis, this study highlights the benefits and the limits of the proposed methodology compared with a classical one.

1. Introduction
The synthesis of fine chemicals or pharmaceuticals, widely carried out in batch processes, involves many successive reaction and separation steps. Thus, synthesis optimisation is often restricted to the determination of the optimal operating conditions of each step separately. This approach is based on the use of reliable optimisation tools and has motivated the development of various optimal control studies in reaction and distillation (Toulouse, 1999; Furlonge, 2000). Nevertheless, such an approach does not necessarily lead to the optimal conditions for the global synthesis. For instance, optimising the conversion of a reaction whose desired product is harder to separate from the by-products than from the reactants will incur an important operating cost, due to further difficulties in the separation scheme. Thus, the necessity to integrate all the process steps simultaneously in a single global optimisation approach clearly appears. Recent advances in dynamic simulation and optimisation have been exploited to accomplish this goal, and optimisation works based on a global approach have recently appeared in the literature (Wajge and Reklaitis, 1999). Because of the process configuration considered (e.g. reactive distillation), the modelling simplifications and the optimisation procedure, these works do not allow the benefits linked to a global approach to be fully grasped. The purpose of the present study lies in the comparison between a classical and a global optimisation approach, by means of a global synthesis optimisation framework. Applied to a standard reaction-separation synthesis of propylene glycol production, this comparison emphasises the characteristics of each approach.


2. Optimisation Framework
The present work is based on the use of an optimisation framework dedicated to the optimal control of global syntheses (Elgue, 2001). This framework combines an accurate simulation tool with an efficient optimisation method. Because of the step-by-step structure of global syntheses, the simulation tool is based on a hybrid model: the continuous part represents the behaviour of the batch equipment, and the discontinuous part the train of the different steps occurring during the synthesis. A non-linear programming (NLP) technique is used to solve the problems resulting from synthesis optimisation. This NLP approach involves transforming the general optimal control problem, which is of infinite dimension (the control variables are time-dependent), into a finite-dimensional NLP problem by means of control vector parameterisation. According to this parameterisation technique, the control variables are restricted to a predefined form of temporal variation, often referred to as a basis function: Lagrange polynomials (piecewise constant, piecewise linear) or exponential-based functions. A successive quadratic programming method is then applied to solve the resultant NLP.

3. Propylene Glycol Production
Industrially, propylene glycol is obtained by hydration of propylene oxide to glycol. In addition to the monoglycol, smaller amounts of di- and triglycols are produced as by-products, according to the following reaction scheme:

C3H6O + H2O → C3H8O2
C3H6O + C3H8O2 → C6H14O3
C3H6O + C6H14O3 → C9H20O4

Water is supplied in large excess in order to favour propylene glycol production. The reaction is catalysed by sulfuric acid and takes place at room temperature. In order to dilute the feed and to keep the propylene oxide soluble in water, methanol is also added. The reaction is carried out in a 5-litre stirred jacketed glass reactor. The initial conditions described by Furusawa et al. (1969) have been applied: an equivolumic feed mixture of propylene oxide and methanol is added to the reactor, initially charged with water and sulfuric acid, for a propylene oxide concentration of 2.15 mol/L. In agreement with previous works reported in the literature, the kinetic parameters of the reaction, modelled by an Arrhenius law, are summarised in Table 1.

Table 1: Kinetic model of propylene glycol formation.

Reaction   Pre-exponential factor (L·mol⁻¹·s⁻¹)   Activation energy (kcal·mol⁻¹)   Heat of reaction (kcal·mol⁻¹)
1          1.22 10^                               18.0                             −20.52
2          1.39 10^                               21.1                             −27.01
3          9.09 10^                               23.8                             −25.81


Table 2: Component separation characteristics.

Component                          Bubble point
Propylene oxide (reactant)         34 °C
Methanol (solvent)                 65 °C
Water (reactant)                   100 °C
Propylene glycol (product)         182 °C
Dipropylene glycol (by-product)    233 °C
Tripropylene glycol (by-product)   271 °C

According to the component bubble points (Table 2), the distillation involves separating the methanol and the remaining reactants (for the most part water) from the reaction mixture. Propylene glycol and the by-products are then recovered from the boiler. The overhead batch distillation column is a packed column of 50 cm in length and 10 cm in diameter. A condenser equipped with a complex controlled-reflux device completes the process. A heat transfer fluid supplies the reactor jacket, with a temperature varying from 10 to 170 °C according to the operating step.

4. Reaction Optimisation

Optimal control of a reaction generally involves two contradictory criteria: the operating time and the conversion. In this paper, the study amounts to determining the optimal profiles of temperature and reactant addition for an operating-time criterion with an acid conversion constraint set at 95.5%. In the context of an industrial reactor, the temperature considered is the heat transfer fluid temperature. Two different optimal control problems have been studied, with and without a production constraint on the by-products: a by-products amount below 3.5% of the total production. In these problems, the temperature profile of the heat transfer fluid is discretised into five identical time intervals; a piecewise-constant parameterisation of the temperature has therefore been adopted. The reactant addition flow rate has also been discretised into five intervals, but only the last four have the same size. The time of the first interval and the values of the piecewise constants thus constitute the optimisation variables of the feed flow rate. The results associated with an optimal reaction carried out with a by-products constraint are given in Figure 1.
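The layout of the decision vector described above can be sketched as follows (a hypothetical helper, not the authors' code): five temperature levels on identical intervals, plus a feed profile whose first-interval length is itself an optimisation variable while the remaining four intervals share the residual time equally.

```ruby
# Unpack an NLP decision vector x into the two piecewise-constant
# profiles used in this study (names and layout are assumptions):
#   x[0..4]  five heat-transfer-fluid temperature levels
#   x[5]     length t1 of the first feed interval
#   x[6..10] five feed flow-rate levels
def unpack(x, t_final)
  temps = x[0, 5]
  t1    = x[5]
  flows = x[6, 5]
  temp_grid = (0..5).map { |k| k * t_final / 5.0 }   # identical intervals
  dt_rest   = (t_final - t1) / 4.0                   # last four equal
  flow_grid = [0.0] + (0..4).map { |k| t1 + k * dt_rest }
  { temps: temps, temp_grid: temp_grid, flows: flows, flow_grid: flow_grid }
end

prof = unpack([20, 30, 40, 50, 60, 12.0, 1, 1, 2, 2, 3], 60.0)
prof[:flow_grid]  # => [0.0, 12.0, 24.0, 36.0, 48.0, 60.0]
```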

[Figure 1: optimal heat transfer fluid temperature and feed flow-rate profiles, with and without the by-products constraint]

B • [C1, C2, ..., Cn] ⇒
    Ci.class ∈ B.patch          ∀i ∈ {1, 2, ..., n}
    Ci.class = Cj.class         ∀i, j ∈ {1, 2, ..., n}
    ∪_{i=1,n} Ci.cmp = B.cmp
    Σ_{i=1,n} Ci.cmp ≐ Σ_{i=1,k} B.cmp          (3)

where class is a built-in Ruby function returning the class name of the object, and chain and patch are user-defined methods returning a set of friend classes for the + and • operations, respectively. Finally, cmp is a user-defined method which returns the component set of the object. The + operator adds the results calculated by object B to a similar data structure in object A. Example: A = ideal and B = excess properties. The • operator spawns the results calculated by object Ci into a dissimilar data structure in object B. This allows for structural changes in the calculated results, but the left operand B must of course know exactly how to perform the patch. Example: B = Helmholtz energy and Ci = standard state of component i. The power operator ^ will be used to modify the behavior of a particular function, e.g. to specify which of the many existing m-correlations is to be used in a cubic EOS. Note the difference between = (set equality) and ≐ (list equality) as used in the last two comparisons. The first test ensures that the union of the component sets of C_{i=1,n} equals the component set of B, while the second test ensures that each component is repeated k ∈ {1, 2, ..., n} times in the cumulated list. For pure component standard states k = 1, for geometric activity models like e.g. the Kohler expression (Bertrand, Acree & Burchfield 1983) k = 2, and finally for a full mixture contribution k = n.
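A minimal Ruby sketch of these rules (hypothetical classes; the real hierarchy is much richer) shows how the patch operator can verify the friend-class and component-set conditions of Eq. (3) before combining results:

```ruby
# Minimal sketch of the semantic checks behind the patch operator *:
# each node carries a component set (cmp) and a list of "friend"
# classes (patch) it may patch; the operator enforces the set test of
# Eq. (3) that the union of the operand component sets equals cmp.
class Node
  attr_reader :cmp

  def initialize(cmp)
    @cmp = cmp
  end

  def patch
    []  # friend classes for *, overridden in subclasses
  end

  def *(operands)
    operands = [operands] unless operands.is_a?(Array)
    raise "illegal patch" unless operands.all? { |o| patch.include?(o.class) }
    union = operands.flat_map(&:cmp).uniq.sort
    raise "component mismatch" unless union == cmp.sort
    self  # a real implementation would merge the calculation results here
  end
end

class StandardState < Node; end

class Helmholtz < Node
  def patch
    [StandardState]
  end
end

mix = %w[N2 O2 Ar]
a = Helmholtz.new(mix) * mix.map { |c| StandardState.new([c]) }  # k = 1, ok
```

A patch with an incomplete component set (e.g. only N2) would raise "component mismatch", which is exactly the kind of semantic control the algebra is meant to provide.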

3. Results
Two examples are provided to illustrate the benefits of an algebraic description of thermodynamic frameworks. The first example is based on a traditional cubic equation of state approach:

A^mix = A° + A^ig + A^{SRK, m_Soave}          (4)

A° = Σ_{i=1}^{n} n_i ( ∫_{T°}^{T} c_p,i dT − T ∫_{T°}^{T} (c_p,i / T) dT ) + Σ_{i=1}^{n} n_i ( h_i° − T s_i° )          (5)

The annotations refer to a collection of thermodynamic classes needed for the semantics control: Helmholtz energy (A), standard state (T), ideal gas (I), equation-of-state residual (R), heat capacity integral (C), and enthalpy and entropy reference (H). The number of these classes is (perhaps disappointingly) large, but one must realize that thermodynamics is an old subject and that some of the conventions made over the past 100 years are quite inadequate for computer implementation. In addition to the physically reasoned classes, one extra class S is needed to facilitate a general interface to the Gibbs energy and Helmholtz energy objects (and maybe grand canonical potential objects, etc.). This root class represents a mathematical quantity called a surface, which is responsible for wrapping the canonical state variables of Gibbs and Helmholtz energy objects into a completely generic thermodynamic state function. Similarly, an extra class M is used to provide a layer between the Helmholtz class and the model classes (the need for this class depends somewhat on the implementation details). The algebraic representation of the Helmholtz model can now be written as

S • A • (T • C • H + M • I + M • R^m)          (6)

where each symbol S, A, ... represents an instance of the classes S, A, ... (the function modifier ^m tells srk to use Soave's m-factor correlation). The syntax of the expression is easily verified, but it is harder to prove that the semantics is correct as well. This would require: a) a sound mathematical basis for the algebra, or b) a full discussion of all the participating classes. It can be appreciated, however, that provided the semantics is under control, the algebraic expression is a compact representation of the Helmholtz model. Note, by the way, that one further simplification is possible if R is implemented as a chain "friend" of I:

S • A • (T • C • H + M • (I + R^m))          (7)

A grand overview of the model structure can be obtained by viewing the expressions as data trees (Eq. 6 to the left and Eq. 7 to the right):

[Data trees of Eqs. 6 and 7: S patches A, which patches T • C • H + M • I + M • R^m (left) or T • C • H + M • (I + R^m) (right), with chain operands drawn vertically and patch operands horizontally]          (8)

Here, the chains are arranged vertically and the patches horizontally. This makes it easier to remember that the chains add information at the same level of complexity (standard state plus equation of state contributions), while the patches add information at an increasing level of complexity when going from left to right (the surface being more general than the Helmholtz function, which is more general than either the standard state or the equations of state, and so on). The two algebraic expressions are completely equivalent in the sense that the computed results will be the same, but the actual calculation paths will of course differ. At this point it is of interest to see what the Ruby code looks like. Analytical conciseness does not guarantee a simple computer code, but in this case the code snippet is seen to be very close to the analytical expression:

mix = ['Nitrogen', 'Oxygen', 'Argon']
gas = Surface.new(mix) * (
        Helmholtz.new(mix) * (
          StandardState.new(mix) * (
            MuT_cp.new(mix, :poly3, 'ig') * (
              MuT_hs.new(mix, :h0s0, 'ig'))) +
          EquationOfState.new(mix) * (
            ModTVN_ideal.new(mix, :idealgas) +
            ModTVN.new(mix, :srk).tell(:m_soave))))

Here, new is the standard Ruby constructor (overloaded in each class). The function names poly3, h0s0, idealgas, srk and m_soave all refer to actual model implementations, while ig is a phase tag used for the database binding. If no function is specified in the constructor then anonymous will be called. The role of this function (which is also overloaded in each class) is simply to patch the calculation results from the underlying nodes without adding any private functionality. Finally, the ^ operator is not implemented as such but rather as a call to the method tell (again due to implementation details). The second example is made more complex to show how the patch operator can be employed to obtain quite sophisticated model arrangements:

[Eq. (10): the Gibbs energy analogue of eqs. (4)-(5): G^mix = G° + G^ig + G^{E, Kohler}, where the standard state includes Poynting integrals P_i for the components i = 2, 3 and the Kohler excess term combines the binary contributions E_{1,2}, E_{1,3} and E_{2,3}]          (10)

The algebraic equivalent is

S • G • (T • [V_{2,3} • H_{2,3}, V_1 • C_1 • H_1] + M • I + M • E^kohler • [E_{1,2}, E_{1,3}, E_{2,3}])          (11)

where V_{i,j} is an instance of the Poynting integral class V applied to the components i and j. Similar interpretations hold for instances of the excess Gibbs energy class E, the heat capacity integral class C and the standard state class H. Note that V_1 lacks a concrete implementation but that it is still needed in order to fulfill the semantics. However, since no function name is specified, the properties of water will be calculated without a Poynting contribution (the anonymous function takes care of the data propagation).

4. Discussion
The possibility of using algebraic operators to build thermodynamically consistent model frameworks has been addressed. By defining a thermodynamic class hierarchy in Ruby, where each class knows about its "friend" classes and how to deal with their calculation results, it is shown that a set of rules can be established to make the framework description both flexible and consistent. The outcome of the description is a Ruby object that can be linearized into XML format before it is sent to a calculation engine written in C++. A second possibility (not illustrated) is the export of Matlab code. In this case a standalone Matlab function is produced which encapsulates the entire framework, including all the model parameters, into one black-box state function returning the function value, the gradient and the Hessian of both Gibbs energy and Helmholtz energy models. A third possibility (under development) is the export of LaTeX code for self-documenting model descriptions.
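The XML linearization step can be pictured with a small recursive tree walk (a hypothetical node structure, not the actual implementation):

```ruby
# Sketch of linearizing a model tree into XML-like text, as suggested
# for handing the framework description to a C++ calculation engine.
# The Node structure and the tag names are illustrative assumptions.
Node = Struct.new(:name, :children) do
  def to_xml
    return "<#{name}/>" if children.empty?
    "<#{name}>" + children.map(&:to_xml).join + "</#{name}>"
  end
end

tree = Node.new("Surface", [
  Node.new("Helmholtz", [
    Node.new("StandardState", []),
    Node.new("EquationOfState", [])
  ])
])

tree.to_xml
# => "<Surface><Helmholtz><StandardState/><EquationOfState/></Helmholtz></Surface>"
```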

5. References
Bertrand, G.L., Acree, Jr., W.E. & Burchfield, T.E. (1983), 'Thermochemical excess properties of multicomponent systems: Representation and estimation from binary mixing data', J. Solution Chem. 12(5), 327-346.
Braunschweig, B.L., Pantelides, C.C., Britt, H.I. & Sama, S. (2000), 'Process modeling: The promise of open software architectures', Chem. Eng. Prog. 96(9), 65-76.
Castier, M. (1999), 'Automatic implementation of thermodynamic models using computer algebra', Comput. Chem. Eng. 23(9), 1229-1245.
Rudnyi, E.B. & Voronin, G.F. (1995), 'Classes and objects of chemical thermodynamics in object-oriented programming. 1. A class of analytical functions of temperature and pressure', CALPHAD 19(2), 189-206.
Thomas, D. & Hunt, A. (2001), Programming Ruby: The Pragmatic Programmer's Guide, Addison-Wesley, Boston.
Uhlemann, J. (1995), 'An object-oriented environment for queries and analysis of thermodynamic properties of compounds and mixtures', Comput. Chem. Eng. 19(Suppl.), 715-720.
Uhlemann, J. (1996), 'An object oriented software environment for accessing and analysing thermodynamic pure substance and mixture data', Chem.-Ing.-Tech. 68(6), 695-698.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Short-Term Scheduling in Batch Plants: A Generic Approach with Evolutionary Computation
Jukka Heinonen and Frank Pettersson
[email protected], [email protected]
Heat Engineering Laboratory, Department of Chemical Engineering, Åbo Akademi University, Biskopsgatan 8, FIN-20500 Åbo, Finland

Abstract
This paper presents a program that uses a graphical user interface through which an end-user can describe a batch process using simple building blocks and then develop schedules for it. By allowing the end-user to build schematics of the target process hardware and write product recipes, we can evade the all too common need for a software vendor to build specifically targeted releases for different batch processes; processes that might use common batch-specific hardware which is just connected and/or used in a different manner. Using genetic algorithms as an instrument, a software package is presented that is able to target and solve large scheduling problems.

1. Introduction
In the chemical engineering industry, for example, manufacturers face ever-growing demands on profitability. Using multi-purpose batch-mode production is one effort to introduce more profit through increased process flexibility. Expanding and renewing processes through new hardware acquisitions is another common solution, yet it is of utmost importance to use existing hardware as efficiently as possible. That is where efficient scheduling and utilization of the available hardware enter. Scheduling problems occurring in batch-mode production have been studied extensively. Methods based on mixed integer linear programming (e.g. Kondili et al. 1993), in either pure or hybrid MILP models, are popular, but these lack some edge when it comes to NP-hard problems. Since an industrial-sized scheduling problem typically exhibits an exponentially growing number of candidate solutions, this leaves such approaches somewhat wanting. Probabilistic optimisation and scheduling techniques like simulated annealing (e.g. Kirkpatrick et al. 1983) or genetic algorithms (e.g. Goldberg 1999) are good alternatives, and in this paper the genetic algorithm is the preferred approach for tackling large-scale multi-purpose batch processes and their scheduling.

2. Methods
Figure 1 shows an overview of the scheduling procedure. There are three main steps: specifying the available hardware, telling the system how it is used when manufacturing various products, and then generating the schedule and presenting it to the user. The initial states of the hardware can be entered, i.e. storage tank levels, reactor states, etc.; alternatively, default (empty) values can be used. An interesting question is whether the program can be made to automatically get the initial values from, for instance, an automation system or a database located on a LAN server. At present, however, the values are entered by hand.

[Flow diagram: hardware specification → hardware database; recipe specification → recipe database; initial values for hardware → evolutionary computation phase ⇄ simulation phase]

Figure 1: Overview of the scheduling procedure.

2.1. Hardware specification
The first step is to specify the hardware found at the manufacturing plant. This is done by using a graphical user interface and a simple select-and-drop mechanism. An object is selected and placed on the drawing board. By opening an "object properties" page one can further refine and shape the object. For instance, if the user wants to specify a storage tank, the object is given a volume and preferred safety limits for the cistern profile. Next, a general method of input for the cistern is defined (on-demand, continuous feed, etc.), which helps the simulation part of the program to understand the behaviour of the cistern. If several similar cisterns are needed, it is a simple matter to select the defined cistern and make copies of it. The underlying mechanisms are completely object-oriented, so the copies inherit all properties of the parent. A principle behind the program is to keep the number of building blocks small. At this stage, typical hardware included in the program comprises reactors, storage tanks, concentrators, mixers, splitters and filters. Development continues, so more hardware will be included in the future. The specified hardware can be saved and reloaded from a database for future use.

2.2. Specifying a recipe
The second step is to specify how different products are manufactured using the declared hardware. The resulting model is called a "recipe", and tells us how much to take of the necessary substances, where to find them, what to do with them and how long process steps in e.g. a reactor will take. Eventual recycles and feeds are also defined at

this stage. The input procedure for a recipe is through a GUI, with the needed information entered through drop-down menus, edit boxes and other common controls. A recipe is written for each different product manufactured. Different recipes are collected in a database for quick access and modification. Having written a recipe, the user can get a connection-based view of the hardware needed for the defined product and verify that the recipe is written correctly.

When planning a schedule, the objective function must be specified. That is, in a multiproduct plant, do we want to meet some production criterion like "produce 10 units of A and 25 units of B and minimise makespan or setup times", or do we want to make sure all customer orders are manufactured and delivered within given due-dates, i.e. to prioritize production slightly differently depending upon due-date penalties?

2.3. Building a schedule
With all the necessary information present (including initial states for the hardware), a schedule can be calculated. The mechanism is divided into an evolutionary computation phase, where the order of events in the system is sought, and a simulation phase where, using the current recipes and the information encoded in the chromosomes, the cistern profiles and reactor states are simulated. With information from the simulation step, chromosome fitnesses can be calculated. These phases continue to loop until an end criterion is met. The criterion might be e.g. "loop until the system converges", "loop for a specific time", "loop for a specific number of generations", or the user can simply stop the calculations, check the resulting schedules and continue the calculations if they are found lacking. During runtime, the state of the current population is presented to the user by means of average population fitness curves as well as fitness distribution plots, along with various other helpful information.
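The alternation between the two phases can be pictured with this schematic loop. Dummy fitness and evolution operators stand in for the real simulation and GA phases; all names and the toy fitness measure are illustrative:

```ruby
# Schematic of the schedule-building loop: an evolutionary phase
# proposes event orders (permutation chromosomes) and a simulation
# phase scores them; here both are replaced by toy stand-ins.
POP, GENES, GENERATIONS = 20, 10, 50

# Simulation-phase stand-in: fitness = number of adjacent ascending pairs.
simulate = ->(chrom) { chrom.each_cons(2).count { |a, b| a < b } }

population = Array.new(POP) { (0...GENES).to_a.shuffle }
GENERATIONS.times do
  scored   = population.sort_by { |c| -simulate.call(c) }  # best first
  parents  = scored.first(POP / 2)                         # keep elites
  children = parents.map do |c|   # evolution stand-in: swap mutation
    m = c.dup
    i, j = rand(GENES), rand(GENES)
    m[i], m[j] = m[j], m[i]
    m
  end
  population = parents + children
end

best = population.max_by { |c| simulate.call(c) }
```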

2.4. The GA engine
From previous trials (Heinonen and Pettersson 2002), the permutation encoding approach was found to be suitable. Exponential ranking was chosen as the selection method, i.e. an exponential expression is used in calculating the parental selection probabilities. Individuals with larger fitness values receive a proportionally larger probability of being selected as parents. The following expression by Blickle (1995) is used to calculate the selection probabilities:

p_i = c^(N−i) / Σ_{j=1}^{N} c^(N−j)

where p_i is the selection probability for chromosome i. The number of chromosomes in the population is N. Note that 0 < c < 1, and the closer c is to 0, the higher the "exponentiality" of the method and the higher the selective pressure. A c-value of 0.65 is used here. After trials, two-point crossover was chosen as the crossover operation with a crossover rate of 60%. Mutation was kept at 1%.
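The expression can be checked numerically with a small sketch (the rank convention, with rank N the fittest, is an assumption here):

```ruby
# Exponential ranking selection probabilities after Blickle (1995),
# p_i = c^(N-i) / sum_j c^(N-j). With 0 < c < 1, higher-ranked (fitter)
# chromosomes receive larger probabilities, and a smaller c means a
# stronger selective pressure.
def exp_ranking_probs(n, c)
  weights = (1..n).map { |i| c**(n - i) }
  total   = weights.sum
  weights.map { |w| w / total }
end

probs = exp_ranking_probs(5, 0.65)
probs.sum                                  # ≈ 1.0
probs.each_cons(2).all? { |a, b| a < b }   # increasing with rank
```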

Since some multiproduct batch plants can be very large and contain many different products, a question that arises is calculation time. The genetic algorithm is very suitable for distributed networking, and a mechanism is to be tested where a client-server structure is used to distribute the simulation workload. The calculation clients can run on any PC running NT, and the main idea is that the PC user can allocate how much of their CPU time is used for client-side calculations. Alternatively, more efficient chromosome structures might allow for the scheduling of some very large problems.

3. Example Problem
Figure 2 shows a process with two different continuous raw-material feeds into storage tanks (note that the picture shows only the specified hardware, not the connections or process flow, which are defined by the recipe). Two concentrators concentrate the chosen material into a batch-operated reactor battery. A concentrator can only accept one type of material stream until the concentration task is completed. The reactors are made up of a prereactor and a mainreactor which form a pair. Processing times in the reactors vary, depending upon the chosen raw materials. Once a prereactor has finished, the batch is transferred to the mainreactor where it undergoes further processing. After a finished reaction the batch is separated (thus yielding product) and some of the unused material is transported back as a recycle through a continuously operating reactivation step. The separation step can only accept one batch at any given time, and the time of separation varies depending upon which type of raw material was used in the batch. The reactors have different volumes and configurations, and the concentrators have different capacities. The storage tanks are given minimum and maximum safety limits. In this particular example, 4 reactor pairs are locked to use only raw material A, and 6 pairs to use only material B. Cleaning a used reactor takes 30 minutes; no other setup times occur. There are no due-dates, since the objective is to maximise production while fulfilling the hardware constraints. The scheduling task is to calculate a one-month schedule, manufacturing as much product as possible from the different feeds without violating any cistern safety limits.


[Screenshot: storage tank A (25.00 m³, constant flow, limits 10-90%, current 16.00%), storage tank B (50.00 m³, constant flow, max 90.00%), storage tank C (35.00 m³, on-demand, current 38.50%), concentrators 1-3 (dry-content converters), the reactor battery, the reactivation step/splitter and the separation step/splitter]

Figure 2: A screenshot of the example process specified using the available building blocks.

4. Results
In Figure 3, a resulting Gantt chart can be seen. The concentrators, reactors, storage tanks and the separation step all have their separate schedules, which can be viewed, saved and/or printed. Utilization figures are presented, which makes the identification of process bottlenecks easier. Figure 4 shows the cistern profiles for the storage tanks, and as can be seen, they stay well between the defined safety limits of 10% and 90%, respectively. Computational time was around 15 minutes on a 600 MHz PC for the complete one-month schedule.


Figure 3: A resulting one-month schedule for the reactor pairs in the example process. Similar charts are constructed for every piece of hardware used.


Figure 4: Cistern profiles for the three storage tanks in the example process.

5. Conclusions
The results look promising. Due to the GUI and the simple building blocks, it is easy to define a variety of common batch-type processes and construct schedules that give a good solution relatively fast. With some further tuning and the introduction of more hardware, this has the potential to become a very helpful tool for scheduling purposes and/or bottleneck identification. In future work, some classic scheduling examples from the literature are to be used for benchmarking purposes.

6. References
Blickle, T. and Thiele, L., ETH Zurich, 1995.
Goldberg, D.E., 1999, Genetic Algorithms in Search, Optimization & Machine Learning, Addison-Wesley, California.
Heinonen, J. and Pettersson, F., 2002, Scheduling a batch process with a genetic algorithm: comparing different selection methods and crossover operations, submitted to European Journal of Operational Research.
Kirkpatrick, S., Gelatt, C.D. and Vecchi, M.P., Science, 1983, 220, 671-680.
Kondili, E., Pantelides, C.C. and Sargent, R.W.H., Comput. Chem. Eng., 1993, 17, 211.

7. Acknowledgment The financial support from TEKES, the Finnish Technology Agency, is gratefully acknowledged.



Model of Burden Distribution in Operating Blast Furnaces
Jan Hinnela and Henrik Saxen
Faculty of Chemical Engineering, Åbo Akademi University, Biskopsgatan 8, FIN-20500 Åbo, Finland
E-mail: [email protected], [email protected]

Abstract
A model for estimation of the burden distribution is developed for blast furnaces with bell-type charging equipment. The model uses a radar measurement of the local burden level, combined with equations for the dump volume and the repose angle of the charged material, which are solved by least squares. The burden surface is described by two half-lines intersecting at the point where the trajectory of the falling dump hits the burden surface. The model is tuned and applied to data from a Finnish blast furnace, and the results are in general agreement with findings reported in the literature.

1. Introduction
The blast furnace is the main process in the world used to produce iron for primary steel production. Preprocessed ores, in the form of sinter or pellets, are charged with coke in alternate layers into the furnace top, and the iron-bearing burden is reduced and smelted by the hot gases formed in the combustion that is maintained lower down in the furnace by injecting hot blast (preheated air). The ore melting and the coke combustion and gasification make the bed descend slowly, and a new layer is charged as the stock descends below a level set-point. The radial distribution of the burden materials plays a significant role in the operation of the process (Poveromo 1995, 1996). The coke and ore layers exert considerably different resistance to gas flow, so variations in the layer thicknesses in the radial direction affect the gas flow distribution and the pressure loss in the dry part of the process, thereby affecting both thermal and chemical phenomena in the shaft. The distribution of the coke layers also determines the size of the "coke windows" in the softening-melting (cohesive) zone, through which the gas enters from the lower regions (Omori 1987). Devices have been developed for estimation of the burden distribution, by either contact (Iizuka et al. 1980, Iwamura et al. 1982, Kajiwara et al. 1983) or non-contact (Iwamura et al. 1982, Mawhinney et al. 1993, Grisse 1991) techniques, but such equipment is very expensive and generally requires considerable maintenance effort. Models for indirect estimation of the burden distribution have been proposed (Iizuka et al. 1980, Kajiwara et al. 1983, Nicolle et al. 1987, Itaya et al. 1982, Nikus 2001, Saxen et al. 1998, Hinnela and Saxen 2001). This paper presents a model that estimates the burden distribution from a single radar measurement of the vertical level of the burden surface (stock level), in combination with conditions for the shape of the stock line and the volume of the charged dump.


2. The Model and Its Numerical Solution
Inspired by the findings from burden distribution experiments reported in the literature, and by the approach of Kajiwara et al. (1983), the surface of the charged dump is approximated by two main linear segments (cf. Fig. 1):

z_i(r) = a_1,i r + a_2,i          (1)
z_i(r) = a_3,i r + a_4,i          (2)

where z is the vertical distance from a reference level (often taken to be the lowest position of the edge of the large bell) to the stock line, r_c,i is the radial coordinate of a (possible) crest, R_T is the throat radius of the furnace, and i is the running number of the dump. If the stock level is known at the moment when the dump is charged, one may solve the falling trajectory (Omori 1987) of the burden and use the intersection of the trajectory with the burden surface as the radial coordinate of the crest. In case movable armors are applied, the trajectory after hitting the armors can be calculated from the armor position and angle, assuming a certain deflection from the plates.

Fig. 1: Schematic picture of a charged layer with a crest at r = r_c,i and a bending point of the layer surface at r = r_b, where the angle of the surface is reduced. The burden surface before charging is depicted by the thick solid line.

In order to determine the parameters, a_i = (a_1,i, a_2,i, a_3,i, a_4,i)^T, of eqs. (1) and (2), the following equations are stated: Eq. (3) expresses continuity of the stock line at r = r_c,i, and eq. (4) that the radar (subscript r) measurement, z_r = z(r_r), should be satisfied after the dump has entered (superscript a). The volume of the charged dump should satisfy eq. (5), where superscript p denotes the conditions prior to the dump, m is the mass and ρ is the bulk density of the material. Finally, the angle of the burden surface given by the center-side half-line should agree with a given value, α, determined by the angle of repose of the material (cf. eq. (6)).

a_1,i r_c,i + a_2,i = a_3,i r_c,i + a_4,i          (3)

z_i^a(r_r) = z_r,i^a          (4)

V_i = m_i / ρ_i = ∫₀^2π ∫₀^R_T (z_i^a(r) − z_i^p(r)) r dr dθ = 2π ∫₀^R_T (z_i^a(r) − z_i^p(r)) r dr          (5)

α_i = −atan(a_3,i)          (6)

The level and shape of the burden surface prior to the dump, z_i^p(r), can be determined from the previous solution, z_(i−1)^a(r), and the local stock level measured by the radar after the previous dump, z_r,i−1^a, under certain assumptions (Saxen and Hinnela 2002). In burden distribution experiments, the surfaces of the dumps on the wall side of the crest and the coke-layer surfaces in the very central part of the furnace have been found to exhibit angles clearly smaller than the corresponding repose angles. The former observation has been accounted for by adding condition (7), where β is a constant, while the latter has been accomplished by modifying eq. (2) through the additional condition (8), where 0 < ξ < 1 and r_b < r_c,i (cf. Fig. 1). Finally, to deal with the possibility that a heavier material is charged on top of a lighter material in the same dump, as is done to implement center-coke charging (Uenaka et al. 1988), the crest of the first material (coke) is shifted by a quantity, Δr_c, towards the furnace center (cf. eq. (9)).

β = atan(a_1,i)          (7)

z_i(r) = ξ a_3,i r + ã_4,i   if r < r_b          (8)

r′_c,i = r_c,i − Δr_c          (9)

The model has been implemented in Matlab, solving equations (4)-(8) by least squares but treating the condition of continuity at the crest (eq. (3)) as an equality constraint. Initial guesses, a^(0), of the unknown variables are readily obtained from the vertical level of the stock measured by the radar and from the desired surface angles (Saxen and Hinnela 2002).
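The volume condition of eq. (5) can be checked numerically for a two-line surface. This is an illustrative sketch, not the authors' Matlab code; the surface coefficients below are made up, chosen only so that the two lines are continuous at the crest:

```ruby
# Numerical check of eq. (5): V = 2*pi * Int_0^RT (z_a(r) - z_p(r)) r dr,
# evaluated with the trapezoidal rule for an illustrative two-line
# surface after the dump (crest at r = 1.0 m) over a flat prior surface.
R_T = 3.15                                      # throat radius, m
z_p = ->(_r) { 0.0 }                            # surface prior to the dump
z_a = ->(r)  { r < 1.0 ? 0.2 * r + 0.3 : -0.1 * r + 0.6 }

n = 10_000
h = R_T / n
integral = 0.0
(0..n).each do |k|
  r = k * h
  w = (k == 0 || k == n) ? 0.5 : 1.0            # trapezoidal end weights
  integral += w * (z_a.call(r) - z_p.call(r)) * r
end
integral *= h

volume = 2 * Math::PI * integral   # dump volume for these coefficients
```

For these made-up coefficients the analytical value of the integral is about 1.885 m³, so the computed volume is close to 11.84 m³.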

3. Tuning and Results
The model has been applied to data from blast furnace No. 1 of Rautaruukki Steel in Raahe, Finland. This medium-sized bell-top furnace is equipped with movable armors with ten possible positions (MA = 1, ..., 10), has a throat radius of R_T = 3.15 m and a radar that measures the burden level 0.6 m from the furnace throat wall. The furnace burden consists of sinter (S), pellets (P) and coke (C). The data evaluated are from two distinct periods where the furnace was operated with ten-dump charging sequences. The average burden-layer thicknesses, Δz_r, estimated from the radar signals, as well as the main characteristics of the charging programs, are given in Table 1.


3.1. Tuning

Table 1. Main characteristics of the charging programs of the two periods.

                Period 1                        Period 2
Dump   Type   MA   V/m³    Δz_r/m      Type   MA   V/m³    Δz_r/m
1      S      4    9.56    0.429       S      2    11.3    0.397
2      P      10   3.15    0.216       C+P    10   5.73    0.158
3      C      2    13.0    0.515       C      7    11.4    0.495
4      P+S    6    13.0    0.317       P+S    2    14.1    0.442
5      C      2    13.0    0.471       C      2    14.0    0.497
6      S      7    9.56    0.324       S      2    11.5    0.358
7      P      10   3.14    0.247       C+P    10   5.69    0.144
8      C      2    13.0    0.483       C      7    11.6    0.518
9      P+S    5    13.0    0.359       P+S    2    14.2    0.445
10     C      2    13.0    0.520       C      2    14.0    0.456

Before the model can be applied, the values of some open model parameters have to be set. In accordance with findings reported in the literature (Kajiwara et al. 1983, Michelsson 1995), the change in inclination of the coke surface was set as ζ = 0.3 (cf. eq. (8)), and the effect of the radial position of the bend point, r_b, was determined by minimization of the sum of squared residuals over an 80-dump data set from Period 1 (cf. Table 1). The results showed that the error was very insensitive to the position of the

bending point, with a flat minimum at r_b = 0.8 m, in general agreement with the findings of other authors (Kajiwara et al. 1983, Michelsson 1995). The effect of the coke-push parameter of eq. (9) was studied by a similar procedure applied on a uniform 200-dump data segment from Period 2, where two coke dumps in the sequence were center-charged. The analysis showed a minimum error at Δr_c = 0.3 m.

3.2. Evaluation

In the analysis of the model it was found to produce a reasonable description of the layered structure of the burden in the shaft. Figure 2 shows the estimated burden distributions in the upper part of the shaft for segments of Periods 1 and 2, where coke is depicted by light, ore by gray and pellets by dark layers. In estimating the descent of the charged layers in the shaft, a procedure described in Saxen and Hinnela (2002) was applied. The radial distributions of the layer thicknesses, presented in the lower panels of the figure, illustrate that the charging program of Period 1 enhances peripheral gas flow by putting more coke towards the wall (e.g., to clean the wall from accretions), while the charging program of Period 2 corresponds to those applied in normal efficient furnace operation. In the latter period, the center-charged coke dumps are seen to give rise to a pronounced "chimney" in the central part of the shaft. This distribution is in general agreement with results reported in the literature (Kajiwara et al. 1983).
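The tuning step described above, minimising the sum of squared residuals over a range of candidate bend-point radii, can be sketched as a simple grid search. The function names and the toy error surface below are illustrative assumptions, not the authors' code; the shallow minimum near r_b = 0.8 m mimics the insensitivity reported in the text.

```python
import numpy as np

def tune_bend_point(r_candidates, fit_error):
    # Evaluate the model-fit error for each candidate bend-point radius
    # and return the radius with the smallest sum of squared residuals.
    errors = [fit_error(rb) for rb in r_candidates]
    best = int(np.argmin(errors))
    return r_candidates[best], errors[best]

# Toy stand-in for the 80-dump fit error: a shallow quadratic bowl
# centred at r_b = 0.8 m (hypothetical numbers).
toy_error = lambda rb: 1.0 + 0.05 * (rb - 0.8) ** 2

rb_best, err_best = tune_bend_point(np.linspace(0.2, 1.6, 15), toy_error)
```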



4. Discussion and Conclusions

A model for studying the formation of burden layers in the ironmaking blast furnace has been developed on the basis of a single-point measurement of the stock level by radar. The model, which furthermore makes use of the geometrical conditions of the problem at hand, has been kept conceptually simple so that it can be applied to track the burden distribution in an operating blast furnace. The model has been tuned to data from a


Fig. 2 Estimated distribution of the burden layers in the shaft (upper panels) and the corresponding distribution of the materials (lower panels) for Period 1 (left) and Period 2 (right). Light regions refer to coke, gray to sinter and dark to pellets.


Finnish blast furnace by adjusting a set of model parameters, and the results of the tuned model are in general agreement with results presented in the literature and with findings from pilot-scale models. In the future, the model will be evaluated on more extensive data sets, and the relatively large variation in the measured layer thicknesses from sequence to sequence will be analyzed. An off-line version of the model will be used in the design of novel charging programs.

5. References
Grisse, H.J., 1991, Steel Times International, Nov. 1991, 612.
Hinnela, J. and Saxen, H., 2001, ISIJ International 41, 142.
Hinnela, J., Saxen, H. and Pettersson, F., 2002, Modeling of the Blast Furnace Burden Distribution by Evolving Neural Networks, submitted manuscript.
Iizuka, M., Kajikawa, S., Nakatani, G., Wakimoto, K. and Matsumura, K., 1980, Nippon Kokan Technical Report, Overseas 30, 13.
Itaya, H., Aratani, F., Kani, A. and Kiyohara, S., 1982, Rev. Met., CIT 79, 443.
Iwamura, T., Sakimura, H., Maki, Y., Kawai, T. and Asano, Y., 1982, Trans. ISIJ 22, 764.
Kajiwara, Y., Jimbo, T. and Sakai, T., 1983, Trans. ISIJ 23, 1045.
Mawhinney, D.D., Presser, A. and Koselke, T.G., 1993, Proceedings of Ironmaking Conf., ISS, Vol. 52, 563.
Michelsson, M., 1995, "Investigation of the effect of movable armors and the falling trajectory of coke", Internal report 9501, Fundia Wire Oy Ab, Koverhar, Finland (in Swedish).
Nicolle, R., Thirion, C., Pochopien, P. and Le Scour, M., 1987, Proceedings of Ironmaking Conf., ISS, Vol. 46, 217.
Nikus, M., 2001, A set of models for on-line estimation of burden and gas distribution in the blast furnace, Doctoral dissertation, Heat Engineering Laboratory, Åbo Akademi University, Finland.
Omori, Y., Ed., 1987, Blast Furnace Phenomena and Modelling, The Iron and Steel Institute of Japan, Elsevier, London, UK.
Poveromo, J.J., 1995-1996, Burden distribution fundamentals, Iron & Steelmaker 22-23, No. 5/95-3/96.
Saxen, H., Nikus, M. and Hinnela, J., 1998, Steel Research 69, 406.
Saxen, H. and Hinnela, J., 2002, Model for burden distribution tracking in the blast furnace, accepted for Mineral Processing and Extractive Metallurgy Review, Special Issue on Advanced Iron Making.
Uenaka, T., Miyatani, H., Hori, R., Noma, F., Shimizu, M., Kimura, Y. and Inaba, S., 1988, Proceedings of Ironmaking Conf., ISS, Vol. 47, 589.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Environmental Impact Minimisation through Material Substitution: A Multi-Objective Optimisation Approach

A. Hugo, C. Ciumei, A. Buxton and E.N. Pistikopoulos*
Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College, London, SW7 2BY, U.K.

Abstract

This paper presents an approach for identifying improved business and environmental performance of a process design. The design task is formulated as a multi-objective optimisation problem where consideration is given not only to the traditional economic criteria, but also to the multiple environmental concerns. Finally, using plant-wide substitution of alternative materials as an example, it is shown how potential step-change improvements in both life cycle environmental impact and process economics can be achieved.

1. Introduction Over the past decade, the awareness created within the chemical processing industry to design production facilities that operate within acceptable environmental standards has increased considerably. In their comprehensive review of approaches that include environmental problems as part of the process design task, Cano-Ruiz and McRae (1998) identify the need to formulate problems with environmental concerns as design objectives rather than constraints on operations. They argue that the traditional approaches where pollutant flows in waste streams are bounded by regulatory limits do not capture the underlying environmental concerns. Instead designers should balance environmental objectives against the economic ones and should formulate suitable objective functions that accurately represent the environmental performance of the process. Subsequently, the design task is often formulated as a multi-objective optimisation problem where the trade-off between environmental and economic criteria can be explored. Although the value of multi-objective optimisation to environmental process design has been widely recognised, most of the applications only consider the process in isolation from its supply chain. The majority of approaches also use so-called burden or physical indicators based upon the mass of waste and emissions released instead of quantifying the potential environmental impact of pollutants. Within this context Life Cycle Assessment (LCA) provides a more holistic approach for quantifying environmental * To whom correspondence should be addressed. Tel.: (44) (0) 20 7594 6620, Fax: (44) (0) 20 7594 6606, E-mail: [email protected]


performance measures (BS EN ISO, 2000). Accordingly, the Methodology for Environmental Impact Minimization (MEIM) was developed to incorporate the environmental impact assessment principles of LCA into a formal multi-objective process optimisation framework (Pistikopoulos et al., 1994). Since its original formulation the methodology has also been successfully extended to include molecular design techniques for the design of substitute solvents (Pistikopoulos and Stefanis, 1998; Buxton et al., 1999).

In this paper, the MEIM is revisited and applied to an illustrative example to highlight its potential for identifying step-change improvements in both process economics and life cycle environmental performance.

2. Illustrative Example - Process Description

The industrial manufacturing facility for the production of 152 tons per day of crude acrylic acid (95% pure on a weight basis) using propylene, air and steam as raw materials acts as the illustrative example (Figure 1). The process consists of a reactor followed by a cooler, an absorption column (using demineralised water as absorbent), a solvent extraction column (using di-isopropyl ether as the base case solvent) and a distillation column for solvent recovery. Each unit is modelled using established design equations and assumptions as presented in various chemical engineering process design handbooks.


Figure 1. Illustrative Example Process Flow Diagram.

3. System Boundary and Inventory Analysis

The first step in the methodology is the expansion of the foreground system (the acrylic acid plant) to include up-stream/input processes within the boundaries of the background system. This allows the traditional waste minimization techniques to be extended by considering a more complete description of the environmental impact of the process. Since a constant production rate is used as a design basis, downstream (product use) stages are not accounted for and the study is, therefore, a "cradle-to-gate" assessment. The scope of the life-cycle study considers the following processes associated with the production of acrylic acid:

• Propylene production and delivery
• Steam generation
• Thermal energy generation
• Electricity generation.

For the chosen background processes, emissions inventories are taken from standard literature sources. The mass and energy balance relationships of the various unit operations provide the emissions inventory for the acrylic acid plant (foreground system). Grouping all of these wastes from both the fore- and background systems together into a global waste vector represents the total environmental burden of the entire life cycle. This burden vector typically consists of a large number of polluting species, and transformation of the pollutant mass flows into corresponding environmental impacts allows the vector to be aggregated into a more manageable system of lower dimensionality.

4. Impact Assessment - The Eco-Indicator 99 Methodology

Although a number of methods exist for performing the transformation of burdens into impacts, very few account for the temporal and spatial behaviour of pollutants. Instead of considering only the point-source conditions of each emission, it is necessary to consider a larger region. One recently developed method that addresses this issue is the Eco-Indicator 99 (Pre Consultants, 2000). A damage-oriented approach is adopted to assess the adverse environmental effects on a European scale according to three main damage categories: Human Health, Ecosystem Quality and Resources. The result is the classification of the impacts according to 12 indicators - some of which are concerns common to most impact assessment methods. From the complete set of 12 indicators, the following 9 were found to be most relevant to chemical process design:

• Damage to Human Health: Carcinogenic, Respiratory Effects (Organic), Respiratory Effects (Inorganic), Climate Change, Ozone Layer Depletion,
• Damage to Ecosystem Quality: Ecotoxic Emissions, Acidification and Eutrophication,
• Damage to Resources: Extraction of Minerals, Extraction of Fossil Fuels.

Next, the three main categories are put on a common basis through normalisation and aggregated into the three main headings. Finally, a score of relative importance is assigned to each category to arrive at the single Eco-Indicator 99 value. The developers of the Eco-Indicator 99 method stress that this step of impact category valuation is naturally a subjective exercise that largely depends on cultural perspectives. Three different value systems are, therefore, presented according to social types.
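The normalisation-and-weighting step can be illustrated numerically. All numbers below (damage values, normalisation references, weights) are invented placeholders; the real Eco-Indicator 99 factors depend on the chosen cultural perspective.

```python
# Hypothetical damage scores for the three main categories, their
# normalisation references, and perspective-dependent weights.
damages   = {"human_health": 2.1e-6, "ecosystem_quality": 0.15, "resources": 8.0}
norm_refs = {"human_health": 1.5e-2, "ecosystem_quality": 5.1e3, "resources": 8.4e3}
weights   = {"human_health": 0.4,    "ecosystem_quality": 0.4,  "resources": 0.2}

def eco_indicator(damages, norm_refs, weights):
    # Normalise each damage category by its reference, then aggregate
    # the weighted contributions into a single eco-indicator score.
    return sum(weights[k] * damages[k] / norm_refs[k] for k in damages)

score = eco_indicator(damages, norm_refs, weights)
```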

5. Multi-Objective Process Design Model

In the most general case, the environmentally conscious process design problem addressed by the MEIM involves both continuous and discrete decision variables and


can be formulated as a multi-objective Mixed Integer Nonlinear Programming (moMINLP) problem. For the illustrative example, though, the topology of the production process is known and no discrete decisions are considered. The problem, therefore, reduces to a multi-objective Nonlinear Programming (moNLP) problem:

min_x  { f1(x), f2(x) }
s.t.   h(x) = 0

where f1(x) = Total Annualised Cost and f2(x) = Eco-Indicator 99.

The goal of multi-objective optimisation is to obtain the set of efficient solutions (non-inferior or Pareto optimal solutions). By definition, a point is said to be efficient when it is not possible to move feasibly so as to decrease one objective without increasing at least one other objective. In the MEIM, this set of efficient solutions is obtained by reformulating the original multi-objective optimisation problem as a parametric programming problem. Discretisation of the parameter space into sufficiently small intervals then allows the application of the ε-constraint method (Miettinen, 1999). However, since the ε-constraint method can guarantee neither feasibility nor efficiency, both of these conditions need to be verified once a complete set of solutions has been obtained. An algorithm for detecting efficiency based upon the definition of domination sets is, therefore, included as a final processing step.
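The ε-constraint sweep can be sketched on a toy bi-objective problem. The quadratic objectives below are stand-ins (not the acrylic acid model), chosen so the efficient set is easy to verify; a grid search replaces the NLP solver for the sake of a self-contained example.

```python
# Toy stand-ins for the two objectives: "cost" is lowest at x = 1,
# "impact" is lowest at x = -1, so the efficient set lies between them.
f1 = lambda x: (x - 1.0) ** 2   # total annualised cost (toy)
f2 = lambda x: (x + 1.0) ** 2   # environmental impact (toy)

def epsilon_constraint(eps_values, xs):
    # For each discretised epsilon, minimise f1 subject to f2 <= eps
    # over a candidate grid, tracing out candidate efficient points.
    front = []
    for eps in eps_values:
        feasible = [x for x in xs if f2(x) <= eps]
        x_best = min(feasible, key=f1)
        front.append((f1(x_best), f2(x_best)))
    return front

grid = [i / 100.0 - 2.0 for i in range(401)]        # x in [-2, 2]
front = epsilon_constraint([0.5, 1.0, 2.0, 3.0, 4.0], grid)
```

Relaxing ε lowers the attainable cost only at the price of a higher impact, which is exactly the trade-off a Pareto curve displays; the efficiency check mentioned in the text remains necessary because the ε-constraint method may return weakly efficient points.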

6. Base Case Results

As the set of efficient solutions highlights (Figure 2), a conflict exists between a design achieving minimum cost and a design achieving minimum environmental impact. It shows that no improvement in the environmental performance can be achieved unless the economic performance is sacrificed and more money is invested. The only way to improve both the economic and environmental performance is to structurally modify the process topology through equipment and/or material substitutions. Owing to the high environmental burden resulting from the operation of the liquid-liquid extractor and its downstream distillation column, substitution of the organic solvent offers an ideal opportunity for modifying the process in search of step-change improvements. The task is, therefore, to identify potential candidates that can be used as substitutes for the existing solvent, di-isopropyl ether (DIPE).

7. Single Solvent Identification - Separation Task Level

A wealth of techniques exists for designing molecules with desired characteristics. Despite this plethora of approaches, most focus on designing materials with the required processing properties without explicitly considering the plant-wide implications. In an attempt to address this limitation, Buxton (2002) proposed that only with an expanded process boundary can the selection of materials lead to consistently cost-optimal and environmentally benign improvements. Within this context, he developed a procedure for designing solvents using process and plant-wide environmental objectives.


Binary variables are used to represent the occurrence of molecular structural groups (e.g. -CH3, -CHO, -OH ...) found in the group contribution correlations. This allows molecules to be generated according to a set of structural and chemical feasibility constraints. In addition, a variety of pure component physical and environmental property prediction equations, non-ideal multi-component vapour-liquid equilibrium equations (UNIFAC), process operational constraints and an aggregated process model form part of the overall procedure. Finally, the solvent identification task is solved as a mixed integer non-linear programming (MINLP) problem (Buxton et al., 1999). Applying the procedure to the illustrative example results in 43 structurally feasible solvents being identified as possible substitutes. 21 of these had to be excluded, though, owing to the existence of an azeotrope in the extract mixture, while another 9 failed to achieve the desired level of extraction. Analysing the 13 remaining feasible candidates, it is interesting to note that the two best performing solvents, diethyl ether and methyl isopropyl ether, are structurally very similar to the base case solvent, di-isopropyl ether. For illustrative purposes, the solvent with the fourth best performance during the identification task, n-propyl acetate (NPA), is taken into the expanded process boundary to verify its behaviour in terms of the multiple performance criteria.

Figure 2. Base Case Efficient Set of Solutions (DIPE as Solvent).

8. Plant-Wide Verification - Step-Change Improvements

Substituting the existing solvent with one of the potential candidates, NPA, and re-solving the multi-objective process design problem results in a shift in the set of efficient solutions, as shown in Figure 3. Changing the solvent used in the liquid-liquid extractor from DIPE to NPA can potentially result in an optimal plant-wide design requiring 14.8% less total annual cost while simultaneously achieving a 9.3% reduction in environmental impact.


9. Conclusions

A multi-objective optimisation methodology is presented for the design of a chemical plant with consideration being given not only to the business incentives, but also to the environmental concerns. Application to an illustrative example demonstrated how the substitution of alternative materials can potentially shift the set of trade-off solutions, resulting in step-change improvements in both the economic and life cycle environmental performance.


Figure 3. Shifting the Pareto Curve through Solvent Substitution (horizontal axis: f2 = Eco-Indicator 99 [Points/hr]).

10. References
BS EN ISO, 2000, ISO 14040, British Standards Institution, London.
Buxton, A., 2002, Solvent Blend and Reaction Route Design for Environmental Impact Minimization, PhD Thesis, Imperial College, London.
Buxton, A., Livingston, A.G. and Pistikopoulos, E.N., 1999, AIChE J., 45(4), 817-843.
Cano-Ruiz, J.A. and McRae, G.J., 1998, Annu. Rev. Energy Environ., 23, 499-536.
Miettinen, K.M., 1999, Nonlinear Multiobjective Optimization, Kluwer, Netherlands.
Pistikopoulos, E.N. and Stefanis, S.K., 1998, Comp. & Chem. Eng., 22(6), 717-733.
Pistikopoulos, E.N., Stefanis, S.K. and Livingston, A.G., 1994, AIChE Symposium Series, 90(303), 139-150.
Pre Consultants, 2000, The Eco-Indicator 99, 2nd ed., Amersfoort, Netherlands.

11. Acknowledgements The authors are grateful for the financial and administrative support of the British Council, the Commonwealth Scholarship Commission and BP p.l.c.



Genetic Algorithms as an Optimisation Tool for a Rotary Kiln Incineration Process

Inglez de Souza, E.T.; Maciel Filho, R.; Victorino, I.R.S.
State University of Campinas - Chemical Engineering College, Campinas, SP, Brazil
e-mail: [email protected]

Abstract

Since the incineration process has shown great importance in chemical industries, a number of developments have been published. This work can be categorised in two different aspects: control strategy and steady state optimisation. The former acts after the process parameters are properly set, with the mission of keeping the variables around a set point. On the other hand, steady state optimisation seeks the parameter values that are used in the control strategies. In many real processes, especially with rotary kiln equipment, the above mentioned aspects do not work in harmony. Into this gap the present paper directs its efforts, in order to achieve process integration in real time, at least at a simple level, amalgamating steady state optimisation and a control strategy to enhance process performance. The main tool used as the optimiser is a genetic algorithm, and the control strategy has been formulated with the same concept as predictive control.

1. Introduction

This work has, as its main goal, to offer new options for enhancing the performance of such processes. One seeks to optimise temperature profiles along the combustion chamber according to pre-established conditions. Discrete internal and outlet flow positions have been included in the objective function for further optimisation, the results of which were used with an on-line optimiser and in a predictive control strategy. The incineration process consists of a rotating cylinder modelled by a 2D deterministic model, simulating heat transfer - including radiation (Sparrow and Cess, 1970) - mass transfer, and species generation and consumption due to combustion (Inglez de Souza, 2000; Tomas, 1998). Since it has not been easy to use the entire deterministic model, because of its expensive computation time, in association with a genetic algorithm, a factorial design was proposed. This statistical tool has been applied to obtain a reduced model for the optimisation procedure. Temperature has been calculated with a 2^4 factorial design, and optimisation is based on a pre-determined spatial profile along the central axis of the kiln. In this study the phenomenological model operates as if it were the real plant, while the statistically derived one acts with the on-line genetic algorithm optimiser to produce the best profile, suitably setting the inlet flows. Elaborated with the same approach, a predictive control strategy has been formulated. The central idea remains identical: an analytical expression obtained with the above-mentioned technique foresees the set

points. The control structure receives information from the process and sends it to the controller, which manipulates the operational parameters in order to minimise the deviation of the controlled output variable from the set point. Not only the optimiser but also the control strategy has been optimised with a genetic algorithm model, which is described below. Fundamental theory about factorial design can be found in Bruns et al. (1995). Based on genetic and evolutionary principles, GAs work by repeatedly modifying a population of artificial structures (chromosomes) through the application of selection, crossover, and mutation operators. The evaluation of the optimisation happens with the adaptation (fitness) function, which is represented by the objective function of the problem in study (involving the mathematical model of the analysed system), and determines the process performance.

2. Methods

2.1. Factorial design

As mentioned above, a factorial design was applied, with the mathematical model used as if it were the real plant. The basic idea is to obtain an analytical expression, statistically representative of the plant, to foresee plant outputs. Once the desired output and its respective input influences have been chosen, a cluster of 16 data points has been collected according to the following scheme.

Table 1. Factorial design for 4 input variables - 2^4 scheme.

Sample   Variable 1   Variable 2   Variable 3   Variable 4
1        -            -            -            -
2        +            -            -            -
3        -            +            -            -
4        +            +            -            -
5        -            -            +            -
6        +            -            +            -
7        -            +            +            -
8        +            +            +            -
9        -            -            -            +
10       +            -            -            +
11       -            +            -            +
12       +            +            -            +
13       -            -            +            +
14       +            -            +            +
15       -            +            +            +
16       +            +            +            +

The symbol "+" means that the output of the studied variable has been computed at its upper reference value, while "-" is the opposite: the variable is calculated at its lower reference value. The lower and upper limits are fixed according to the area of interest, because this range of values determines the region in which it is possible to apply the model. Equation (1) represents the factorial design mathematical model.

y = ȳ + a1·x1 + a2·x2 + a3·x3 + a4·x4 + a5·x1·x2 + a6·x1·x3 + ... + a_n1·x1·x2·x3 + a_n2·x1·x2·x4 + ... + a_n3·x1·x2·x3·x4   (1)

y - output variable
ȳ - mean output
x1, x2, x3, x4 - input variables

In the above equation, all variable influences and the interactions among them have been considered. These parameters, plus the mean value, are calculated with the following equations. In order to make the further procedure clear, an extra column has to be added to Table 1, containing the "y" values; for calculation purposes, and from now on, these values are labelled (y1, y2, ..., y16).

ȳ = (1/16)·Σ_{i=1..16} y_i   (2)

The remaining parameters (a1, a2, ..., a_n3) are calculated as functions of their influences on the output result. If one is computing the influence of variable "1", the signs of the first column are used; likewise, for variables 2, 3 and 4, columns 2, 3 and 4 are adopted. Equations (3) and (4) elucidate this:

a_j = (Σ_{i=1..16} sign(x)_i·y_i) / 8,   j = 1, ..., n3   (3)

sign(x) = +1 if "+",   sign(x) = -1 if "-"   (4)

If the parameter is being calculated for terms with more than one variable, then a sign analysis must be carried out. A "+" with "+" combination, as well as "-" with "-", results in a "+" general sign; opposite signs, "+" with "-", result in a "-" general sign. An example calculation for a1 leads to:

a1 = (-y1 + y2 - y3 + y4 - y5 + y6 - y7 + y8 - y9 + y10 - y11 + y12 - y13 + y14 - y15 + y16) / 8   (5)
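Equations (2)-(5) amount to signed column sums over the design table. The sketch below builds the standard-order 2^4 sign matrix and recovers the effects of a synthetic response; the response model is a made-up check case, not the kiln model.

```python
from itertools import product
import numpy as np

# Standard-order 2^4 design matrix: rows are (x1, x2, x3, x4) in {-1, +1},
# with x1 alternating fastest, matching the sign pattern of eq. (5).
X = np.array([[s1, s2, s3, s4] for s4, s3, s2, s1 in product([-1, 1], repeat=4)])

def factorial_fit(y):
    # Mean output (eq. (2)) and the four main-effect coefficients,
    # each a signed sum over the 16 responses divided by 8 (eq. (3)).
    y = np.asarray(y, dtype=float)
    return y.mean(), X.T @ y / 8.0

# Synthetic response with known behaviour: the mean at the "+" level of x1
# exceeds the mean at its "-" level by 4, and by -4 for x3.
y = 10.0 + 2.0 * X[:, 0] - 2.0 * X[:, 2]
y_mean, a_main = factorial_fit(y)
```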

2.2. Genetic algorithms

The results, which will be presented further on, have been optimised with a binary-coded genetic algorithm. This kind of representation codifies the variables in a sequence of "0"s and "1"s, named a chromosome, that expresses a real value. A population, over which the chromosomes are distributed, evolves towards an optimum established by the process configuration. A binary chromosome is presented in equation (6):

(001100 ... 001)   (6)

First of all, a selection algorithm is necessary to proceed with the genetic algorithm; in this work, tournament selection has been used. This technique compares chromosomes drawn at random until the best-fitted organism has been selected. The other genetic operators, besides selection, are crossover and mutation. A crossover operation, as performed in this work, interchanges genetic information between two chromosomes, which are "crossed over" at a random point. This genetic operation is executed according to equation (7):

v1 = (0011001000|11001)     v1' = (0011001000|01011)
v2 = (1111001000|01011)     v2' = (1111001000|11001)   (7)

The last operation, mutation, is performed along the chromosome: each gene (binary digit) is selected in proportion to the mutation probability and has its value flipped, "0" to "1" or vice-versa.
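The three operators described above can be sketched on bit-string chromosomes. The helper names are illustrative assumptions, and the crossover call reproduces the tail swap of eq. (7).

```python
import random

def tournament_select(population, fitness, k=2):
    # Draw k chromosomes at random and keep the best-fitted one.
    return max(random.sample(population, k), key=fitness)

def one_point_crossover(v1, v2, point):
    # Swap the tails of the two chromosomes after `point`, as in eq. (7).
    return v1[:point] + v2[point:], v2[:point] + v1[point:]

def mutate(chromosome, p_mut):
    # Flip each gene ("0" <-> "1") independently with probability p_mut.
    return "".join(("1" if g == "0" else "0") if random.random() < p_mut else g
                   for g in chromosome)

# Reproducing the crossover example of eq. (7) at crossover point 10:
c1, c2 = one_point_crossover("001100100011001", "111100100001011", 10)
```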

3. Results

The operational parameters used to evaluate the proposed technique are depicted in Table 2.

Table 2. Operational parameters used in the simulations.

Zones number: 10
Conductivity (insulation): 0.61 W/m/K
Solid residence time: 1800 s
Internal and external diameters: 1.40/1.25 m (primary), 1.40/1.25 m (secondary)
Kiln slope: 2.0 degrees
Chamber distance: 12.0 m (primary)
Rotational velocity: 0.033 rpm
Inlet flow rates (kg/s): residue 0.20, primary air 0.85, secondary air 3.65, fuel 0.10
Inlet temperatures (K): residue 303, primary air 333, secondary air 333, fuel 333
Reference and ambient temperatures: 298 K


The objective functions used to generate the results are quite simple in essence. Every prescribed internal profile point, compared with the factorial design prediction, gives a deviation error; the sum of these deviations completes the objective function:

e = y_m - y_p   (8)

e - deviation error
y_m - output from the fundamental mathematical model
y_p - predicted output from the factorial design model

In this case, the outputs are represented by the combustion chamber temperature profile. The number of discrete selected points is three. Thus, the objective function is represented as follows:

f = Σ_{i=1..3} |T_i^ref - T_i^p| / T_i^ref   (9)

T^ref - desired spatial profile.

The final part of this work is devoted to a predictive control strategy. It makes use of a factorial design equation in order to predict the exit gas temperature. Then, after detecting a perturbation, the genetic algorithm is called to minimise its effect by calculating a new value for the manipulated variable. This strategy is presented in Figure 1.
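The profile objective of eq. (9) is a sum of relative deviations over the three discrete points. A minimal sketch follows; the temperature values are invented placeholders, not data from the paper.

```python
# Hypothetical reference profile and factorial-model predictions (K)
# at the three discrete points along the kiln axis.
T_ref  = [1150.0, 1300.0, 1100.0]
T_pred = [1120.0, 1325.0, 1090.0]

def profile_objective(t_ref, t_pred):
    # Sum of relative deviations between the desired profile and the
    # reduced-model prediction at each discrete point (eq. (9)).
    return sum(abs(tr - tp) / tr for tr, tp in zip(t_ref, t_pred))

f_obj = profile_objective(T_ref, T_pred)
```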

Figure 1. Control Strategy Scheme (GA.C: Genetic Algorithm Control).

The input analysed variables are, respectively: primary air, secondary air, fuel and solid residue. Their non-optimised initial values are 0.85 kg/s, 2.65 kg/s, 0.10 kg/s and 0.20 kg/s, and the upper and lower bounds are +10% and -10% of these values. After the genetic algorithm computations a new operational parameter profile has been achieved, which is shown in Table 3.

Table 3. Optimised operational parameters.

Parameter               New value
Primary air (kg/s)      0.77
Secondary air (kg/s)    2.385
Fuel (kg/s)             0.0925
Solid residue (kg/s)    0.183

In control situations, equation (9) is reduced to just one discrete point, the controlled variable. In this case the objective function is set by the perturbation, as well as by the non-controlled variables, and the genetic algorithm predictive control is performed. Table 4 presents a series of simulations for some common perturbations, with the newly calculated values of the manipulated variable. The reference values used for these calculations are those presented prior to the steady state optimisation. The controlled variable is the temperature, while the manipulated one is the fuel flow.

Table 4. Updated values of the manipulated variable after step perturbations.

Variable         Step order (%)   Updated manipulated variable (kg/s)   Numerical processing time (s)
Solid residue    +10%             0.0968                                4
Secondary air    +10%             0.102                                 4
Secondary air    -5%              0.0943                                4
Solid residue    -10%             0.0970                                4

4. Conclusions

The results have shown that the proposed procedure, coupling a factorial design reduced model and a genetic algorithm as an optimisation tool, is very suitable for dealing with real time process integration. It has been possible to drive the process to idealised conditions, never dismissing actual and practical constraints, while keeping the process in tight control.

5. References Bruns, et al., 1995, Planejamento e Otimização de Experimentos, UNICAMP, Campinas. Inglez de Souza, E.T., 2000, Mathematical Modelling, Control and Rotary Kiln Process Incineration with Post Combustor, M.Sc. Dissertation, Chemical Engineering College, UNICAMP, Campinas. Michalewicz, Z., 1996, Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag. Michalewicz, Z., Schoenauer, M., 1996, Evolutionary Algorithms for Constrained Parameter Optimisation Problems, Evolutionary Computation, 4(1), 1-32. Tomas, E., 1998, Mathematical Modelling and Simulation of a Rotary Kiln Incinerator for Hazardous Wastes in Steady and Non-Steady State, Ph.D. Thesis, Chemical Engineering College, UNICAMP, Campinas. Sparrow, E.M., Cess, R.D., 1970, Radiation Heat Transfer, Brooks/Cole.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Dynamic Simulation of an Ammonia Synthesis Reactor N. Kasiri, A.R. Hosseini, M. Moghadam CAPE Lab, Chem. Eng. Dept., Iran Univ. of Sci. & Tech., Narmak, Tehran, Iran, 16844 [email protected]

Abstract In the ammonia synthesis process, as in most other processes, the main processing unit, and the one that attracts most attention from a control point of view, is the reactor. In an ammonia production plant there exist two reactors: the methanation reactor and the synthesis reactor. The methanation reactor is of much less importance, as little reaction takes place in it, while the synthesis reactor is of utmost importance. In this paper a four-bed ammonia synthesis catalytic reactor is first simulated in a dynamic environment. The simulation is then used to analyze the effect of sudden changes in feed pressure and of variations in the feed distribution over the different beds on process parameters such as temperature, pressure, flow rate and concentration throughout the process.

1. Introduction Automatic and computer aided control of process plants has been in practice for many years. This is due to the positive effects of computer control on a production line from a production engineer's point of view. It enables on-line, fast and precise control of processes, and is valued even more where only a fine controlling practice can be effective and feasible. Automatic control of processes has been advancing in the technologies being used. The main tool on which computer aided process control is based is the dynamic model of the plant. A controlling practice can only be as effective in application as the model used for prediction is precise and exact. With a dynamic model the behavior of a process in time is predicted as a result of a wanted or an unwanted change in a process parameter. A dynamic process model may also be used for other purposes, such as the design of start-up and shut-down procedures. With all this in mind, the ammonia synthesis reactor has been simulated here and the simulation used for an in-depth analysis of the process.

2. Kinetics of the Ammonia Synthesis Reaction The ammonia synthesis reaction is an equilibrium reaction between hydrogen, nitrogen and ammonia in the presence of magnetic iron oxide. The conversion is a function of pressure, temperature and the reactant ratio. Raising the reaction pressure increases the equilibrium conversion, therefore increasing the heat of reaction produced inside the catalyst bed. This causes an increase in temperature, resulting in a rise in space velocity. This results in a reduction in the required reactor volume compared to a reactor operating at lower pressure. The equilibrium reaction is defined by:

    1/2 N2 + 3/2 H2 ⇌ NH3,   ΔH(130 °C) = -11.0 kcal,   ΔH(290 °C) = -13.3 kcal        (1)


The reaction is exothermic and the equilibrium constant is given by:

    Kp = P_NH3 / (P_N2^(1/2) × P_H2^(3/2))                                              (2)

To describe the kinetics of this reaction in the gas phase, the equation type is selected to be equilibrium and reversible, and its stoichiometry is defined using standard equations. The equilibrium and rate constants as functions of temperature are given by:

    Ln K = 2.6899 − 5.5192×10⁻⁵ T + 2001.6/T − 2.6911 Ln T                              (3)

    k = 2.45×10' exp(−1.635×10⁵ / RT)

where K is the equilibrium constant and k the rate constant.
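Equation (3) can be evaluated directly. As a quick check that the reconstructed correlation behaves as an exothermic equilibrium should (K falling as temperature rises); the coefficients are transcribed from the source, so treat them as indicative:

```python
import math

def ln_K(T):
    """ln of the ammonia synthesis equilibrium constant at temperature T (K),
    per Eq. (3); coefficients transcribed from the paper's correlation."""
    return 2.6899 - 5.5192e-5 * T + 2001.6 / T - 2.6911 * math.log(T)

K_673 = math.exp(ln_K(673.0))  # ~400 C
K_773 = math.exp(ln_K(773.0))  # ~500 C
# Exothermic reaction: the equilibrium constant decreases as T rises,
# which is why high pressure is needed to keep the conversion up.
```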

3. Dynamic Simulation of the Ammonia Synthesis Reactor

As stated before, a four-bed catalytic fixed-bed ammonia synthesis reactor has been simulated here in a process simulator environment. Four fixed beds were chosen to ensure the simulation is capable of providing an analysis tool for as complicated a reactor as exists in this process. The simulation is initially developed in a steady-state environment, using the SRK thermodynamic package. The equations and the related constants are defined as expressed before. The instrumentation and control devices and the related parameters were then designed and installed on each of the streams and pieces of equipment. The simulation developed thus far is transferred into the dynamic environment. Fig. 1 demonstrates the scheme in which the four-bed reactor was simulated. As observed, the main feed gas is divided into two streams, one passing through the control valve (VLV-100) and the other bypassing it. The two join and enter the shell side of the preheater exchanger, entering the top of bed 1 after being heated up. The rest of the feed is sub-divided into four quenching streams. Each of the quench gas streams passes through a separate control valve and is fed to the top of one of the four catalytic beds to control process operation. The final product exiting bed 4 is passed through the tube side of the feed preheater.

Fig. 1: The reactor scheme simulated in the simulation environment.


4. Results and Discussions


Using the dynamic model developed, some cases of interest are studied. They are reviewed below.
4.1. Case 1: The effect of sudden changes in feed pressure on process parameters
As observed in fig. 2, a sudden increase in feed pressure of 5 bar initially results in an increase in the feed flow rate passing through the 5 control valves. The feed flow rate increase through each of the valves is proportional to the initial flow passing through each under steady-state conditions. The controllers gradually close the valves to control the flow rate passing through, resulting in the gradual reduction in the flow rates demonstrated in fig. 2. A trend opposite to that described above is demonstrated in fig. 3 for a reduction of the feed pressure by 5 bar. The changes in feed pressure affect the reactor pressure, and as the reaction takes place in the gas phase, the ammonia produced is affected. The 5 bar increase raises the reactor pressure, raising the reaction rate and therefore the ammonia production rate, as demonstrated in fig. 4. The controlling effect of the valves again reduces the molar ammonia flow rate. The opposite effect is observed in fig. 5 for the case of a sudden reduction in feed pressure of 5 bar. The smaller change shown in fig. 5 compared to fig. 4 indicates that the reaction is less affected by pressure reductions than by pressure increases. One of the process parameters of importance in controlling the reactor operation is the temperature of the streams leaving each of the four catalytic beds. In this simulation the exit temperatures of the four beds are recorded and the plots are shown in fig. 6 and fig. 7. The temperature variations are very slight and therefore not of much significance in their effect on the overall operation. As expected, the bed exit temperatures show an increase for the 5 bar pressure increase in the feed and a reduction for the reverse case. The larger temperature changes shown at the bed 1 exit are due to the fact that a larger portion of the reaction takes place in this bed compared to the other 3 beds.
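The valve dynamics described for Case 1 (a flow jump proportional to the pressure step, followed by gradual closing by the flow controllers) can be sketched with a minimal PI flow loop. The valve coefficient, gains and time constants are illustrative, not taken from the simulation:

```python
import math

def simulate_flow_controller(p_feed, f_set=100.0, kc=0.002, tau_i=60.0,
                             dt=1.0, t_end=600.0):
    """Flow through a valve, F = Cv * x * sqrt(P), under PI flow control.
    All numbers are illustrative; units are arbitrary."""
    x = 0.5            # valve opening (0..1), nominal position
    cv = 40.0          # illustrative valve coefficient
    integral = 0.0
    flows = []
    for _ in range(int(t_end / dt)):
        flow = cv * x * math.sqrt(p_feed)
        err = f_set - flow
        integral += err * dt
        # Position-form PI: negative error (too much flow) closes the valve.
        x = min(1.0, max(0.0, 0.5 + kc * (err + integral / tau_i)))
        flows.append(flow)
    return flows

base = simulate_flow_controller(p_feed=25.0)   # steady feed pressure
step = simulate_flow_controller(p_feed=30.0)   # +5 bar feed pressure step
# The flow first jumps in proportion to sqrt(P); the PI controller then
# closes the valve until the flow returns towards its set point.
```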

[Figures 2 and 3: mass flow rate vs. time (hours) for streams 2, 5, 8, 11 and 14.]

Fig. 2: The effect of a 5 bar pressure increase on the mass flow rate of feed entering the reactor beds.

Fig. 3: The effect of a 5 bar pressure decrease on the mass flow rate of feed entering the reactor beds.



13   C2H2* + 3O* → 2CO* + H2O + 2*                R13 = k13 L_M θ_C2H2* θ_O*³
14   C2H2*** + 3O* → 2CO* + H2O + 4*              R14 = k14 L_M θ_C2H2*** θ_O*³
15   C2H2 + O* ⇌ C2H2O*                           R15 = k15 L_M p_C2H2 θ_O* − k-15 L_M θ_C2H2O*
16   C2H2O* + 2O* → 2CO* + H2O + *                R16 = k16 L_M θ_C2H2O* θ_O*²
17   C2H2** + 3O* → 2CO* + H2O + 3*               R17 = k17 L_M θ_C2H2** θ_O*³

18   C2H4 + 2* ⇌ C2H4**                           R18 = k18 L_M p_C2H4 θ*² − k-18 L_M θ_C2H4**
19   C2H4 + * ⇌ C2H4*                             R19 = k19 L_M p_C2H4 θ* − k-19 L_M θ_C2H4*
20   C2H4** + 6O* → 2CO2 + 2H2O + 8*              R20 = k20 L_M θ_C2H4** θ_O*⁶
21   C2H4* + 6O* → 2CO2 + 2H2O + 7*               R21 = k21 L_M θ_C2H4* θ_O*⁶
22   C2H4 + O* ⇌ C2H4O*                           R22 = k22 L_M p_C2H4 θ_O* − k-22 L_M θ_C2H4O*
23   C2H4O* + 5O* → 2CO2 + 2H2O + 6*              R23 = k23 L_M θ_C2H4O* θ_O*⁵

24   NO + * ⇌ NO*                                 R24 = k24 L_M p_NO θ* − k-24 L_M θ_NO*
25   NO* + * → N* + O*                            R25 = k25 L_M θ_NO* θ*
26   NO* + N* → N2O* + *                          R26 = k26 L_M θ_NO* θ_N*
27   N2O* → N2O + *                               R27 = k27 L_M θ_N2O*
28   N2O* → N2 + O*                               R28 = k28 L_M θ_N2O*
29   N* + N* → N2 + 2*                            R29 = k29 L_M θ_N*²
30   NO + O* ⇌ NO2*                               R30 = k30 L_M p_NO θ_O* − k-30 L_M θ_NO2*
31   NO2* ⇌ NO2 + *                               R31 = k31 L_M θ_NO2* − k-31 L_M p_NO2 θ*

The reaction subsystems for CO, C2H2, C2H4 and NOx are separated by lines. For the values of the kinetic parameters cf. Harmsen et al. (2000, 2001a,b) and Mukadi and Hayes (2002).

An integration method for stiff systems with an internally generated full Jacobian (mf=22) has been used for the dynamic simulations.
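The cited option (mf=22 in LSODE: BDF with an internally generated full Jacobian) corresponds to a stiff BDF solver that approximates the Jacobian by finite differences. A minimal sketch on a toy Langmuir-Hinshelwood surface system, with illustrative rate constants spanning several orders of magnitude:

```python
from scipy.integrate import solve_ivp

# Toy stiff surface-kinetics system: CO and O coverages on catalytic sites.
# Rate constants are illustrative only, chosen to make the system stiff.
def coverage_odes(t, y):
    th_co, th_o = y
    th_free = max(0.0, 1.0 - th_co - th_o)   # free-site fraction
    r_ads_co = 1.0e3 * th_free               # CO adsorption
    r_des_co = 1.0e1 * th_co                 # CO desorption
    r_ads_o = 5.0e2 * th_free**2             # dissociative O2 adsorption
    r_react = 1.0e5 * th_co * th_o           # fast surface reaction CO* + O* -> CO2
    return [r_ads_co - r_des_co - r_react,
            2.0 * r_ads_o - r_react]

# BDF with an internally approximated Jacobian, analogous in spirit to mf=22.
sol = solve_ivp(coverage_odes, (0.0, 10.0), [0.0, 0.0],
                method="BDF", rtol=1e-8, atol=1e-10)
theta_co, theta_o = sol.y[:, -1]
```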


Table 2. Inlet gas composition used in the simulations (balance N2).

  CO     1.22 % (vol.)        C2H2   280 ppm (vol.)
  C2H4   380 ppm (vol.)       CO2    12.2 % (vol.)
  NO     1130 ppm (vol.)      O2     0.6-0.8 % (vol.)

3. Results The existence of multiple steady states, observed when simulating the ignition/extinction of the CO oxidation reaction on a Pt/γ-Al2O3 catalyst (Eqs. 1-6 in Table 1) by increasing/decreasing the inlet gas temperature, is illustrated in Fig. 1, left. Isothermal and adiabatic courses of the reactions are compared - the hysteresis region is wider and shifted to lower temperature in the adiabatic case, as it reflects the temperature rise and the heat capacity of the reactor. However, the multiplicity is preserved in the isothermal case, hence it follows from the kinetic scheme used. Fig. 1, right, represents spatial profiles of the surface coverages of CO oxidation intermediates - in this case a non-monotonous, abrupt change from zero to full coverage occurs in the center of the washcoat. In addition to multiple steady states, the existence of oscillations of various types has also been observed in the model. Continuation methods (Kubicek and Marek, 1983) can be used to locate the positions of limit points (multiple solutions), Hopf bifurcation points (origin of oscillations) and period doubling bifurcation points. Fig. 2 shows an example of the results of such computations, using the continuation software CONT (Kohout et al.,

[Figure panels: coverages θ_CO*, θ_O* (0-1.2 scale) vs. washcoat depth r (μm, 0-20), and outlet CO concentration vs. inlet temperature T^in (K, 300-550), for temperature ramps up and down, isothermal and adiabatic.]

Figure 1. Hysteresis of CO conversion. CO oxidation by O2 on Pt/γ-Al2O3 catalyst, δ=20 μm, L_NM=50 mol.m⁻³, y_O2^in=0.61 %. Left: outlet CO concentrations for the temperature ramp ±1 K/s. Right: concentration profiles of the components on catalytic Pt-centers; steady state with the higher CO conversion (cf. isothermal T_down curve) for T^in=450 K.

Figure 2. Dependence of the solution (y_CO^out) on the inlet concentration of oxygen y_O2^in (%) in CO oxidation, obtained by continuation. sSS - stable steady state, uSS - unstable steady state, sP - stable periodic oscillations (minimum and maximum values), Hopf BP - Hopf bifurcation point; unstable periodic solutions are not presented. T=630 K (isothermal), δ=20 μm, L_NM=50 mol.m⁻³, no diffusional resistance in the washcoat.

[Figure panels: outlet CO concentration y_CO^out (%) vs. y_O2^in (0.61-0.68) for parameter ramps up (top) and down (bottom); the bottom panel also shows the trajectory obtained with the implicit integrator at its default maximum step size.]

Figure 3. CO oxidation by O2 - evolution diagrams in the inlet concentration of oxygen. The inlet concentration of O2 is ramped at a constant rate. Other parameters are taken from Figure 2.


Figure 4. Complex oscillatory behaviour in TWC operation. Left: outlet HC concentration. Right: spatiotemporal profile of the C2H2*** surface concentration. T=630 K (isothermal), δ=20 μm, L_NM=50 mol.m⁻³, L_OSC=100 mol.m⁻³, inlet concentrations are given in Table 2, y_O2^in=0.74 %.

2002) applied to the isothermal system for CO oxidation on a Pt/Ce/γ-Al2O3 catalyst (Eqs. 1-10 in Table 1) with no internal diffusion resistance. In that case the PDEs in the model can be replaced by ODEs, so that the dimension of the resulting system is smaller (here 11 ODEs). The corresponding evolution diagram for the distributed system with a finite diffusion coefficient is given in Fig. 3. Comparison of Figs. 2 and 3 confirms again that the observed nonlinear phenomena follow from the kinetic scheme used, and the introduction of internal diffusion effects only modifies the behaviour - we can observe the alternating existence of single and period-doubled oscillations. More complex oscillations have been found when the full TWC microkinetic model (Eqs. 1-31 in Table 1) has been used in the computations, cf. Fig. 4. The complex spatiotemporal pattern of the oxidation intermediate C2H2*** (Fig. 4, right) illustrates that the oscillations result from the composition of two periodic processes with different time constants. For another set of parameters the coexistence of doubly periodic oscillations with stable and apparently unstable steady states has been found (cf. Fig. 5). Even though the LSODE stiff integrator (Hindmarsh, 1983) has been successfully employed in the solution of approx. 10^ ODEs, in some cases the unstable steady state has been stabilised by the implicit integrator, particularly when the default value for the maximum time-step (h_max) has been used (cf. Fig. 5 right and Fig. 3 bottom). Hence it is necessary to take care in controlling the step size used, otherwise false conclusions on the stability of steady states can be reached.
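The role of continuation in locating limit points can be illustrated on a toy steady-state equation. This naive parameter sweep with root counting is a simple stand-in for the pseudo-arclength methods implemented in CONT:

```python
import numpy as np

def real_steady_states(p, tol=1e-7):
    """Real roots of the toy steady-state equation x**3 - x + p = 0."""
    roots = np.roots([1.0, 0.0, -1.0, p])
    return sorted(r.real for r in roots if abs(r.imag) < tol)

# Sweep the parameter and record how many steady states coexist.
multiplicity = {p: len(real_steady_states(p))
                for p in (-1.0, -0.2, 0.0, 0.2, 1.0)}
# Three coexisting states inside |p| < 2/(3*sqrt(3)) ~ 0.385, one outside;
# the boundaries of that window are the limit (turning) points.
```

A pseudo-arclength continuation would instead follow the folded solution branch through the turning points, which a plain parameter sweep cannot do.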



Figure 5. Dynamic simulation of TWC operation. T=630 K (isothermal), δ=10 μm.

Condition I: When Cj≠0 for all components, adsorption and desorption are always possible for all components: rj = k(nj* - nj)

Condition II: When Ck=0 and Cj≠k≠0, adsorption of the j-components on the particles is possible, but adsorption of the k-component is impossible: rj = k(nj* - nj), rk = k(nk* - nk)

where m_max is the number of combinations selecting two components among N_comp without order:

    m_max = N_comp (N_comp - 1) / 2                                                     (3)

The above kernel fulfills conditions I/II/IV with tolerable negative concentrations of -δi and -δj. The positive thickness (δj) for the sum kernel is used in order to make sure that φj^general ≥ 0 at Cj=0 for all j (condition III). The negative thickness (-δi and -δj) for the product kernel is to guarantee φ_product ≥ 0 at Cj=0 and Ck≠j>0 (condition II).
2.3. Generalized rate model
To satisfy all four conditions at the same time, the exchange probability kernel must be a logical sum of the sum and product kernels. For species exchange problems, a generalized adsorption rate equation is therefore expressed as follows:

    φj^general = φ_sum · φ_product,  j = 1...N_comp - 1

    φ_Ncomp^general = Σ(j=1..N_comp-1) φj^general / (N_comp - 1)                        (4)


Here, the parameters (a and δj) must be determined properly, so that they are not sensitive to the problems considered. For instance, we use in our numerical study δj = max(Cj,feed, j=1...N_comp)×10⁻⁴. Once the concentration thickness is determined, the sigmoid parameter, a, is calculated on the basis of an expected value (φ0) at Cj=δj.
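The effect of the concentration thickness and sigmoid parameter can be sketched as follows; the logistic form and the numbers are illustrative stand-ins for the paper's kernels:

```python
import math

def sigmoid_kernel(c, delta, a):
    """Smooth switch: ~0 when the liquid concentration c is near zero,
    ~1 once c exceeds the concentration thickness delta."""
    return 1.0 / (1.0 + math.exp(-a * (c - delta)))

def adsorption_rate(c, n, n_eq, k=1.0, delta=1e-4, a=1e5):
    """LDF-type rate r = k*(n_eq - n), damped by the kernel so that the
    uptake rate vanishes when the species is absent from the liquid."""
    return sigmoid_kernel(c, delta, a) * k * (n_eq - n)

r_active = adsorption_rate(c=0.5, n=0.1, n_eq=0.4)    # ample liquid-phase solute
r_inactive = adsorption_rate(c=0.0, n=0.1, n_eq=0.4)  # solute exhausted
```

With the kernel in place the driving force k(n_eq − n) is switched off continuously as C → 0, which is what keeps the simulated concentrations from going negative.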

[Figure panels: liquid concentration vs. column position N for the production columns.]

Fig. 2. Liquid concentration profiles for the binary system according to the kernels in the 3 production columns after nine shiftings (dashed line: CA, solid line: CB).


[Figure panels: liquid concentration vs. column number; panel (c) at t=5.0.]

Fig. 5. Liquid concentration distribution for the ternary system over the 15 columns within one cycle time at 3.5 shiftings (circle: CA, solid line: CB, cross: CC).

4. Conclusion For ion-exchange packed-bed chromatographic adsorption problems, a generalized adsorption rate model is proposed for multi-component systems without losing the generality of conventional rate models. The new model, with the exchange probability kernels, can describe both active and inactive zones of the chromatographic column. The time-continuous kernels based on the LCC are developed in two respects: 1) an adsorption rate becomes zero when adsorbents are not present in the liquid phase, 2) concentrations are not less than zero. The sum kernel addresses the former and the product kernel the latter situation. Consequently, this model is considered a concentration-dependent rate model. The generalized rate model satisfying the LCC yields reliable results, in that negative concentrations are controlled within 1% of the maximum concentration. The new model will be useful to simulate the start-up of chromatographic processes before reaching the cyclic steady state, since the active and inactive zones are mixed in chromatographic columns during start-up. Furthermore, the new model is needed for an SMB operation involving a washing step to rinse the column.

5. References Carta, G. and Lewus, R.K., Adsorption, 6 (2000), 5-13. Lim, Y.I. and Jorgensen, S.B., J. Chromatogr. A, 2002, submitted. Marcussen, L., Superfos-DTU internal report, Dept. of Chemical Engineering, DTU, Denmark, 1985. Smith, R.P. and Woodburn, E.T., AIChE J., 24 (1978), 577-587.


Comparison of Various Flow Visualisation Techniques in a Gas-Liquid Mixed Tank Tatu Miettinen, Marko Laakkonen, Juhani Aittamaa Laboratory of Chemical Engineering, Helsinki University of Technology, P.O. Box 6100, FIN-02015 HUT, Finland, email: [email protected], [email protected], [email protected]

Abstract Computational Fluid Dynamic (CFD) and mechanistic models of gas-liquid flow and mass transfer at turbulent conditions are useful for studying local inhomogeneities and operation conditions of gas-liquid stirred tanks. They are also applicable as scale-up and design tools for gas-liquid stirred tank reactors and other gas-liquid contacting devices, with greater confidence compared to purely heuristic design methods. Experiments are needed for the development and the verification of these models. Various flow visualisation techniques have been utilised to obtain experimental results on local gas hold-ups and bubble size distributions (BSD) in a gas-liquid mixed tank. Particle Image Velocimetry (PIV), Phase Doppler Anemometry (PDA), Capillary Suction Probe (CSP), High-Speed Video Imaging (HSVI) and Electrical Resistance Tomography (ERT) techniques have been applied. The applicability of the various techniques depends on the location of the measurement, the physical properties of the gas-liquid flow, the gas hold-up and the size of the tank. Local characteristics of the gas-liquid flow have been measured for an air-water dispersion in a baffled 13.8 dm³ mixed tank at various gas feeds and impeller rotational speeds. BSDs have been measured in the tank using the CSP, PIV and PDA techniques. CSP, PIV and ERT have been used for the determination of local gas hold-ups. HSVI has been applied for the visualisation of the breakage, the coalescence and the shapes of the bubbles. Results from the applied techniques have been compared with each other, and their advantages, disadvantages and limitations are discussed.

1. Introduction Gas-liquid mixed tanks are used for various operations in industrial practice. The design of gas-liquid mixing units and reactors is still done by empirical correlations, which are usually valid only for specific components, mixing conditions and geometries. Computational Fluid Dynamic (CFD) techniques have been used successfully for single-phase flow, but gas-liquid flow calculations are still tedious for computers. Therefore, simpler and more accurate multiphase models are needed. In order to verify multiphase CFD calculations and to fit unknown parameters in the multiphase models, experimental local bubble size distributions and flow patterns are needed. Bubble breakage and coalescence functions can be fitted against the local, time-averaged BSDs. In this way, a generalised model for the mass transfer area that includes


the dependence on the local dissipation of mixing energy and the physical properties of the dispersion can be developed.

2. Visualisation Techniques
Capillary suction probe technique (CSP). The CSP technique (Barigou and Greaves, 1992; Genenger and Lohrengel, 1992) is a single-point invasive method, which has been used to measure bubble size distributions (BSD) and gas volume fractions (Tabera et al., 1990). In the photoelectric suction probe technique bubbles are sucked through a capillary, where they are transformed into cylindrical slugs of equivalent volume. The measuring probe, which encloses the capillary, consists of lamps and phototransistors. The electrical resistance of a phototransistor changes every time a bubble passes the sensor. The sizes of the bubbles are calculated utilising the distance between the detectors, the times between changes in the resistance of consecutive detectors, and the diameter of the capillary. The CSP technique is useful for opaque and dense dispersions that are beyond the applicability of most optical techniques. Probes are also inexpensive relative to most optical methods. CSP does not apply to very small vessels, since the continuous sample stream reduces the volume of dispersion and disturbs the flow pattern. Furthermore, bubbles might break on colliding with the funnel-shaped edge of the capillary, causing error in the BSDs.
Electrical Impedance Tomography (EIT). During the last years tomography has received intensive research attention for characterising multiphase flows (Fransolet et al., 2001). EIT is a non-invasive technique that applies to opaque dispersions. In EIT experiments, resistivities are measured between electrodes that cover part of the walls of the vessel. The continuous phase must be conductive, and the difference in conductivity between the continuous phase and the dispersed phase must be distinct. The resistivity distributions are reconstructed to produce three-dimensional images of the resistivity field.
Tomography techniques are relatively slow compared to the time scale of the flow in a mixed tank, so they are not suitable for the determination of BSDs.
Phase Doppler Anemometry (PDA). Laser Doppler Velocimetry (LDV) (Joshi et al., 2001) and PDA (Schafer et al., 2000) are optical techniques that have been used to determine BSDs, gas hold-up and flow patterns. Detectors observe the Doppler shift and phase difference when bubbles pass through the volume of the intersection of two laser beams. The Doppler effect is related to the velocities of the bubbles, and the phase difference is related to their sizes.
Particle Image Velocimetry (PIV). PIV (Deen et al., 2002) also takes advantage of laser light. A pulsing laser light scatters from the bubbles and illuminates part of the dispersion. The illuminated volume is imaged using CCD digital cameras. The local displacements of bubbles between two laser pulses are measured from the recorded pictures. Displacement vectors can also be measured for the liquid phase by adding scattering particles. Therefore PIV can be used to determine local BSDs and relative velocities (slip) between the dispersed and the continuous phase simultaneously. Furthermore, the gas hold-up is obtained from the PIV results when the depth and area of the PIV pictures are known.

Imaging techniques. Planar imaging techniques like conventional photography and high-speed video imaging (HSVI) (Takahashi et al., 1992) have been used to visualise multiphase flows in mixed tanks. Observations of the mechanisms of bubble breakage, coalescence and wake effects can be used for the development of mechanistic bubble functions (Takahashi and Nienow, 1993). HSVI requires plenty of well-directed light. PDA, PIV and HSVI are non-invasive optical techniques, which apply only to transparent solutions. High concentrations of bubbles hamper the visibility of the measurement volume and attenuate the intensity of light. Because of this, optical techniques can only be used to investigate small-sized vessels at low bubble concentrations.
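The slug-to-bubble conversion described for the CSP amounts to a velocity-times-duration calculation; the detector spacing and timings below are hypothetical example values:

```python
import math

def bubble_diameter(d_capillary, detector_gap, t_travel, t_slug):
    """Equivalent-sphere bubble diameter from capillary suction probe signals.

    d_capillary : inner capillary diameter (m)
    detector_gap: distance between two phototransistors (m)
    t_travel    : time for the slug front to pass between the detectors (s)
    t_slug      : time the slug takes to pass a single detector (s)
    """
    velocity = detector_gap / t_travel          # slug velocity in the capillary
    slug_length = velocity * t_slug             # cylindrical slug length
    volume = slug_length * math.pi * d_capillary**2 / 4.0
    return (6.0 * volume / math.pi) ** (1.0 / 3.0)

# Hypothetical signal: detectors 10 mm apart, 10 ms travel, 2 ms slug passage,
# with the 1.2 mm capillary used in this work.
d_eq = bubble_diameter(d_capillary=1.2e-3, detector_gap=0.01,
                       t_travel=0.01, t_slug=0.002)  # ~1.6 mm bubble
```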

Table 1. The comparison of the various techniques.

  Method   Applicability                    Bubble size      Gas hold-up     Physical properties             Other notes
  CSP      BSD, gas hold-up                 >0.5 mm          less than 25%   low-viscosity dispersion        inexpensive, simple
  PDA      BSD, gas hold-up, flow pattern   30 μm - 1.2 mm   less than 5%    transparent dispersion          modelling & calibration needed
  EIT      gas hold-up                      -                0-99%           conductive continuous phase     calibration experiments needed
  PIV      BSD, gas hold-up, flow pattern   0.1- mm          less than 4%    transparent dispersion          expensive, tedious data processing
  HSVI     visualisation                    not limited      less than 1%    transparent dispersion          -

3. Experimental Experiments were carried out in a flat-bottomed cylindrical glass vessel (0.0138 m³), which was equipped with a four-bladed radial impeller and four baffles. Gas was fed through a 0.66 mm (inner diameter) single-tube nozzle, which was located in the middle of the vessel, 30 mm from the bottom of the tank (Figure 1). Experiments were carried out for the air-tap water system at atmospheric pressure and a room temperature of 22 °C. A surface tension of 69 mN/m was measured with a Sigma 70 tensiometer. Gassing rates and stirring speeds were varied between 0.1-1.0 dm³/min and 300-600 rpm. The locations of the experiments are presented in Figure 1. The locations of the experiments with the various techniques do not coincide: it was impossible to carry out measurements under the impeller with the capillary, and bubbles were too large everywhere but near the impeller for PDA. The baffles also set some restrictions on the PDA and PIV techniques.


Figure 1. Dimensions of the stirred tank and locations of the experiments in the tank.

4. Results and Discussion
4.1. Bubble size distributions
BSDs were calculated from 1000 to 5000 bubbles per measured location with CSP and PDA. 4000 to 70000 bubbles were used in the PIV experiments, depending on the mixing conditions and the location. In the PIV technique the smallest detectable bubble size was 0.10 mm, due to the spatial resolution of the CCD camera, and the largest observed bubbles were approximately 8.5 mm. With PDA, bubbles from 0.03 mm to 1.3 mm were observed. The inner diameter of the capillary was 1.2 mm, and the detected bubbles ranged from 0.8 mm to 6 mm. Smaller bubbles were out of range, since they did not form slugs inside the capillary. On the other hand, larger bubbles were not observed, because they did not exist or because they broke into smaller ones during the sampling. The overall volume of bubbles in one experiment was determined by collecting the bubbles into a measuring burette that was supplied with a pressure meter. The BSDs were calibrated using the total volume of the collected bubbles and the pressure difference between the burette and atmospheric pressure. The capillary and PDA results were in close agreement. The peaks of the BSDs were around 1 mm, which is close to the limit of both techniques, so it was not possible to see the overall bubble size range either with the capillary or with PDA. The same peaks were close to 0.2 mm in the PIV experiments, which differs a lot from the results with the capillary and PDA. It was observed later that some of the large bubbles were identified as groups of small bubbles; this seems to be the reason for the deviation.
4.2. Gas hold-ups
The local bubble concentration, i.e. the gas hold-up, is related to the ability of bubbles to coalesce. Therefore, local gas hold-ups are required for the development of the models. Local gas hold-ups were determined at positions A-F with the PIV and capillary techniques. Position A was not accessible with the capillary due to the impeller.
PIV gas hold-ups were determined from the depth, width and height of the PIV pictures. The width and the height of the PIV pictures were determined by the optical settings of the camera. The depth of the illuminated plane in the dispersion was obtained from calibration experiments with a bubble gel. A sensitivity analysis indicated that the local gas hold-up determined from the PIV results is relatively insensitive to the depth of the illuminated


plane. In the capillary technique gas hold-ups were measured from the volume of the sucked gas and liquid. A problem of this method is the selection of sampling rate that gives correct local gas hold-ups. Isokinetic sampling can be reached when the sampling rate of bubbles is equal to the arriving rate of bubbles at the tip zone of capillary (Greaves and Kobbacy, 1984). Larger gas hold-ups were obtained with capillary than with PIV. Local PIV gas holdups were certainly too small due to problems in bubble identification algorithm. Another reason for the differences between the PIV and capillary might be a false sampling rate of capillary. The problem of isokinetic sampling arises partially from the fact that bubbles rise up relative to liquid due to buoyancy. Since the capillary probe was located vertically in the dispersion, gas hold-up become overestimated in the experiments. Absolute gas hold-up values in the performed experiments were low and therefore the absolute differences in the values obtained with the capillary and the PIV technique are relatively low. Gas hold-ups were determined from the EIT reconstructions by using the resistivity distribution of continuous phase as a reference. Gas-liquid resistivity distributions were compared to the reference and three-dimensional images were formed.If resistivity is assumed to depend linearly on the gas hold-up, relative differences in gas hold-up are obtained from EIT results between the various locations. Actually, the relation between the conductivity and the gas hold-up is slightly non-linear (Mwambela and Johansen 2001) and therefore calibration experiments are needed to determine gas volume density distributions. Due to the fluctuating nature of gas-liquid flow some abnormal resistivity distributions were obtained with EIT and the averaging of several experiments is necessary to get accurate resistivity fields from the mixed tank. 
Abnormal resistivity distributions were also found at boundaries such as the liquid surface and the bottom of the tank.

5. Conclusions

The applicability of various flow visualisation techniques was tested in a mixed tank. CSP was used to measure bubble size distributions and gas hold-ups. In order to provide meaningful data, the calibration, suction speed and size of the capillary have to be determined carefully. To carry out experiments with the PIV technique, a reliable image-processing algorithm is needed to recognise bubbles in the images. PDA was observed to detect a very narrow bubble size range, from 0.03 mm to 1.3 mm, which limited its applicability to the vicinity of the impeller in the tank. EIT is a promising technique for the determination of gas volume density distributions in a mixed tank, but due to the fluctuating nature of gas-liquid flow, averaging of several experiments seems to be necessary to obtain reliable resistivity distributions in the vessel. To obtain the relation between the gas hold-up and resistivity, calibration experiments are needed. The imaging of gas-liquid flow was useful for the detection of phenomena that were not observed with the other experimental techniques. Imaging also revealed the complexity of two-phase flow, which partially explains the differences in the results obtained with the applied techniques. Every technique has its limitations and disadvantages, and therefore the visualisation of multiphase flow in stirred tanks is still a challenging task. Further research and improvements of measurement techniques are therefore needed.


6. References

Barigou, M., Greaves, M., Bubble-size distributions in a mechanically agitated gas-liquid contactor, Chem. Eng. Sci. 47 (1992), 2009-2025.

Deen, N.G., Westerweel, J., Delnoij, E., Two-phase PIV in bubbly flows: Status and trends, Chem. Eng. Technol. 25 (2002), 97-101.

Fransolet, E., Crine, M., L'Homme, G., Toye, D., Marchot, P., Analysis of electrical resistance tomography measurements obtained on a bubble column, Meas. Sci. Technol. 12 (2001), 1055-1060.

Genenger, B., Lohrengel, B., Measuring device for gas/liquid flow, Chem. Eng. Proc. 31 (1992), 87-96.

Greaves, M., Kobbacy, K.A.H., Measurement of bubble size distribution in turbulent gas-liquid dispersions, Chem. Eng. Res. Des. 62 (1984), 3-12.

Joshi, J.B., Kulkarni, A.A., Kumar, V.R., Kulkarni, B.D., Simultaneous measurement of hold-up profiles and interfacial area using LDA in bubble columns: predictions by multiresolution analysis and comparison with experiments, Chem. Eng. Sci. 56 (2001), 6437-6445.

Mwambela, A.J., Johansen, G.A., Multiphase flow component volume fraction measurement: experimental evaluation of entropic thresholding methods using an electrical capacitance tomography system, Meas. Sci. Technol. 12 (2001), 1092-1101.

Schafer, M., Wachter, P., Durst, F., Experimental investigation of local bubble size distributions in stirred vessels using Phase Doppler Anemometry, 10th European Conference on Mixing, 2000, 205-212.

Tabera, J., Local gas hold-up measurement in stirred fermenters. I. Description of the measurement apparatus and screening of variables, Biotechnol. Tech. 4(5) (1990), 299-304.

Takahashi, K., McManamey, W.J., Nienow, A.W., Bubble size distributions in impeller region in a gas-sparged vessel agitated by a Rushton turbine, J. Chem. Eng. Jpn. 25(4) (1992), 427-432.

Takahashi, K., Nienow, A.W., Bubble sizes and coalescence rates in an aerated vessel agitated by a Rushton turbine, J. Chem. Eng. Jpn. 26(5) (1993), 536-542.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


A Hybrid Optimization Technique for Improvement of P-Recovery in a Pellet Reactor

L. Montastruc, C. Azzaro-Pantel, A. Davin, L. Pibouleau, M. Cabassud, S. Domenech

Laboratoire de Génie Chimique, UMR CNRS/INP/UPS 5503, ENSIACET, 118 route de Narbonne, 31077 Toulouse Cedex, France. Mail: [email protected]

Abstract

Emphasis in recent years has been placed on improving processes that lead to enhanced phosphate recovery. This paper studies the precipitation features of calcium phosphate in a fluidized bed reactor in a concentration range between 50 and 4 mg/L and establishes the conditions for optimum phosphate removal efficiency. For this purpose, a hybrid optimization technique based on Simulated Annealing (SA) and Quadratic Programming (QP) is used to optimize the efficiency of the pellet reactor. The efficiency is computed by coupling a simple agglomeration model with a combination of elementary systems representing basic ideal flow patterns (perfectly mixed flow, plug flow, ...). More precisely, the superstructure represents the hydrodynamic conditions in the fluidized bed. The "kinetic" constant is obtained for each combination. The two levels of the resolution procedure are as follows: at the upper level, SA generates different combinations, and at the lower level, the set of parameters is identified by a QP method for each combination. The results show that a simple combination of ideal flow patterns is involved in the pellet reactor modeling, which seems interesting for future control.

1. Introduction

Phosphorus recovery from wastewater accords with the demands of sustainable development of the phosphate industry and stringent environmental quality standards. In this context, the past decade has seen a number of engineering solutions aiming to address phosphorus recovery from wastewater by precipitation of calcium phosphates in a recyclable form (Morse et al., 1998). An advanced alternative is to apply the so-called pellet reactor (Seckler, 1994). The purpose of the study presented in this paper is to develop a methodology based on modeling for optimizing the efficiency of the pellet reactor. The article is divided into four main sections: first, the process is briefly described. Then, the basic principles of modeling are recalled. Third, the hybrid optimization strategy is presented. Finally, typical results are discussed and analyzed.

2. Process Description

The process is based on the precipitation of calcium phosphate obtained by mixing a phosphate solution with calcium ions and a base. More precisely, it involves a fluidized bed of sand continuously fed with aqueous solutions. Calcium phosphate precipitates upon the surface of the sand grains. At the same time, small particles, i.e., "fines", leave the bed with the remaining phosphate not recovered in the reactor. A layer of agglomerated fines is observed in the upper zone of the fluidized bed. The modeling of fines production involved amorphous calcium phosphate (ACP) for the higher pH values and both ACP and DCPD (DiCalcium Phosphate Dihydrate) for the lower pH values tested, as suggested elsewhere (Montastruc et al., 2002b). Both total and dissolved concentrations of phosphorus, pH and temperature were measured at the outlet stream. In order to measure the dissolved concentrations, the upper outlet stream was filtered immediately over a 0.45 μm filter. The sample of total phosphorus was pretreated with HCl in order to dissolve any suspended solid. The phosphate removal efficiency (η) of the reactor and the conversion of phosphate from the liquid to the solid phase (X) are defined as:

η = (W_P,in − W_P,tot) / W_P,in   (1)

X = (W_P,in − W_P,sol) / W_P,in   (2)

where W_P,in represents the flowrate of the phosphorus component at the reactor inlet, W_P,tot gives the total flowrate of phosphorus, both as dissolved and as fines, at the reactor outlet, and W_P,sol is the flowrate of dissolved P at the reactor top outlet. If η_agg is the agglomeration rate, that is, the ratio between the phosphorus in the bed and in the inlet stream, the following relation can be deduced:

η = η_agg X   (3)

The phosphate-covered grains are removed from the bottom of the bed and replaced intermittently by fresh sand grains. In most studies reported in the literature (Morse et al., 1998), the phosphate removal efficiency of a single-pass reactor, even at industrial scale, is only of the order of 50%. Let us recall that the pellet reactor efficiency depends not only on pH but also on the hydrodynamic conditions (Montastruc et al., 2002a).
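The efficiency relations above can be sketched numerically. The function below assumes the standard single-pass definitions η = (W_P,in − W_P,tot)/W_P,in and X = (W_P,in − W_P,sol)/W_P,in, so that eq. (3) follows with η_agg = η/X; the function name and the flowrate values are illustrative, not from the paper.

```python
def pellet_efficiencies(w_p_in, w_p_tot, w_p_sol):
    """Return (eta, X, eta_agg) from the phosphorus flowrates.

    w_p_in  -- P flowrate at the reactor inlet
    w_p_tot -- total P flowrate (dissolved + fines) at the outlet
    w_p_sol -- dissolved P flowrate at the top outlet
    """
    eta = (w_p_in - w_p_tot) / w_p_in    # phosphate removal efficiency
    X = (w_p_in - w_p_sol) / w_p_in      # liquid-to-solid conversion
    eta_agg = eta / X                    # agglomeration rate, so eq. (3) holds
    return eta, X, eta_agg
```

With, e.g., 100 mg/s of P at the inlet, 40 mg/s total and 20 mg/s dissolved at the outlet, this gives η = 0.6, X = 0.8 and η_agg = 0.75.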

3. Modeling Principles

Two models are successively used to compute the reactor efficiency. In the first level (see Figure 1), the thermochemical model determines the quantity of phosphate in both the liquid and solid phases as a function of pH, temperature and calcium concentration. Moreover, this model quantifies the amounts of ACP and DCPD produced as a function of the initial conditions (Montastruc et al., 2002b). The second step would involve an agglomeration model requiring (Mullin, 1993) the density of the calcium phosphate that has precipitated in the pellet reactor and also the fines diameter. Moreover, the agglomeration rate depends on the hydrodynamic conditions,

particularly the eddy sizes. These values are difficult to obtain and require many assumptions that are difficult to verify in practice.

Figure 1. Principles of pellet reactor modeling: a thermodynamic model for precipitation (inputs: influent [P], pH, T, [Ca]) supplies [P]solid, [P]liquid and the sand amount to a reactor network model, which, together with the flow rate Q, yields the effluent [P]grain, [P]fines and [P]total.

4. Reactor Network Model

To solve the problem, another alternative is used to compute the pellet reactor efficiency: the identification of the pellet reactor as a reactor network involving a combination of elementary systems representing basic ideal flow patterns (perfectly mixed flows, plug flows, ...) (see Figure 2).

The combination of elementary systems representing basic ideal flow patterns is described by a superstructure (Floquet et al., 1989). This superstructure contains 4 perfectly mixed flows arranged in series, 2 plug flows, 1 by-pass, 2 dead volumes and 1 recycling flow, and represents the different flow arrangements (integer variables) that are likely to take place in the fluidized bed. Let us recall that more than four perfectly mixed flows in series produce the same effect as a plug flow. The precipitation phenomenon is treated as agglomeration, which is represented by Smoluchowski's equation (Mullin, 1993):

dN_i/dt = −k N_i N_j   (i = fines, j = grains)   (4)

which can easily be rewritten as:

dC_i/dt = −K C_i N_j   (5)

where N is the particle number concentration (m⁻³) and C is the mass concentration (mg/m³). K and k represent kinetic constants (m³ s⁻¹). The two concentrations are related through the particle volume:

C = ρ (4/3) π r³ N   (6)
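For a fixed grain number density, eq. (5) is a linear decay of the fines concentration and integrates to C(t) = C₀ exp(−K N_j t). The short sketch below, with made-up values, checks an explicit-Euler integration against this closed form.

```python
import math

def fines_concentration(c0, K, n_grains, t, steps=10000):
    """Explicit-Euler integration of eq. (5), dC/dt = -K*C*N_j,
    assuming the grain number density N_j stays constant."""
    c, dt = c0, t / steps
    for _ in range(steps):
        c -= K * c * n_grains * dt
    return c

def fines_exact(c0, K, n_grains, t):
    """Closed-form solution of eq. (5): C(t) = C0 * exp(-K * N_j * t)."""
    return c0 * math.exp(-K * n_grains * t)
```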

The bed porosity ε is calculated with a modified Kozeny-Carman equation:

ε³ / (1 − ε) = 130 (v_sup ν / (g r_j²)) ρ_l / (ρ_s − ρ_l)   (7)

where r_j is the grain radius (m), v_sup the superficial velocity (m/s), ν the kinematic viscosity (m²/s), g the gravitational acceleration (m/s²) and ρ the density (kg/m³). The continuous variables are the "kinetic" constant (K), the flowrates and the reactor volumes. The goal is to obtain the same combination for different flowrate conditions in the pellet reactor. In fact, the superstructure represents the hydrodynamic conditions in the fluidized bed. The "kinetic" constant is obtained for each combination. The problem solution depends only on the flowrate and the sand amount. A summary of the global methodology used to compute the efficiency is presented in Figure 1.
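Since ε appears nonlinearly, eq. (7) (as reconstructed here; the exact grouping in the printed equation is uncertain) can be solved for the porosity by bisection, exploiting that ε³/(1−ε) is monotonically increasing on (0, 1). The property values below are illustrative placeholders.

```python
G = 9.81  # gravitational acceleration, m/s^2

def kozeny_carman_rhs(v_sup, nu, r_j, rho_l, rho_s):
    """Right-hand side of eq. (7) as reconstructed above."""
    return 130.0 * v_sup * nu / (G * r_j**2) * rho_l / (rho_s - rho_l)

def bed_porosity(rhs):
    """Solve eps**3 / (1 - eps) = rhs for eps in (0, 1) by bisection."""
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid**3 / (1.0 - mid) < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```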

Figure 2. Superstructure detail (plug flows, mixed cells, by-pass and recycle, from the feed Q to the fines outlet).

5. System Resolution

At the upper level of the procedure, the scheduling of the basic structures is first optimized by a Simulated Annealing (SA) algorithm. The dynamic management of the different constraints generated by the structures induced by the stochastic simulated annealing algorithm is then handled by Quadratic Programming (QP package from the IMSL library). At the lower level, the set of parameters is identified for a given structure by QP. The objective function for QP is to minimize the squared distance between the experimental and the computed points. The simulated annealing procedure mimics the physical annealing of solids, that is, the slow cooling of a molten substance, which redistributes the arrangement of the crystals (Kirkpatrick et al., 1983). In rapid cooling, or quenching, the final result would be a metastable structure with higher internal energy. The rearrangements of crystals follow probabilistic rules. In the annealing of solids, the goal is to reach atomic configurations that minimize the internal energy. In SA, the aim is to generate feasible solutions of an optimization problem with a given objective function. As careful annealing leads to the lowest internal energy state, the SA procedure can lead to a global minimum. As rapid cooling generates a higher-energy metastable state, the SA procedure avoids being trapped in a local minimum. The Simulated Annealing algorithm implemented in this study involves the classical procedures. For SA, the criterion is based on minimization of the QP function with a penalty term proportional to the complexity of the tested structure. The SA parameters are the length of the cooling stage (Nsa), the initial structure and the temperature reduction factor (α). The usual values for Nsa are between 8 and 2 times the chromosome length, whereas for α the values are between 0.70 and 0.95. The Nsa and α values used throughout this study are 7 and 0.7, respectively.
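The upper level can be sketched as follows. The lower-level QP identification is mocked here by a user-supplied `fit_error` function (in the paper it is an IMSL QP fit of the parameters for the proposed structure); the bit vector, penalty weight and cooling schedule are illustrative.

```python
import math
import random

def anneal(fit_error, n_bits, penalty=0.01, t0=1.0, alpha=0.7, n_sa=7, seed=0):
    """Upper-level SA over binary structure vectors (1 = element active)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]

    def cost(s):
        # QP fit error plus a penalty proportional to structure complexity
        return fit_error(s) + penalty * sum(s)

    best, best_c, t = x[:], cost(x), t0
    for _ in range(100):                      # cooling stages
        for _ in range(n_sa):                 # moves per stage (Nsa)
            y = x[:]
            y[rng.randrange(n_bits)] ^= 1     # flip one structural element
            d = cost(y) - cost(x)
            if d < 0 or rng.random() < math.exp(-d / t):
                x = y
                if cost(x) < best_c:
                    best, best_c = x[:], cost(x)
        t *= alpha                            # temperature reduction factor
    return best, best_c
```

A toy `fit_error` with a known optimum, e.g. `lambda s: sum((a - b)**2 for a, b in zip(s, [1, 0, 1, 0, 1, 0]))`, is recovered by the loop above.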

6. Results and Discussion

In this study, two cases are presented as a function of different penalty terms for two values of the total flowrate of the solution to be treated.

Table 1. Comparison between the experimental results and the modeling results.

                                        Case 1      Case 2
Penalty term                            10⁴         0
Experimental η_agg   for 50 L/h         0.742       0.742
                     for 90 L/h         0.523       0.523
Total reactor volume for 50 L/h         1.9 L       1.9 L
                     for 90 L/h         1.3 L       1.3 L
Modeling η_agg       for 50 L/h         0.7396      0.7423
                     for 90 L/h         0.5242      0.5231
Error                                   0.2%        0.01%
Kinetic constant                        4.830       4.451

The results obtained show that the combination differs as a function of the penalty term. On the one hand, it is interesting to notice that if the penalty term is very low or equal to zero, the resulting error is also low, but the combination is more complicated than the one obtained with a higher penalty term (Table 1). On the other hand, the higher penalty term induces a larger error between computed and experimental results, thus suggesting that the method is sensitive to the required precision. For 100 runs of SA, the CPU time is the same for the two cases, i.e., 7 min (4.2 s for each SA) on a PC architecture.

Figure 3. The best combination obtained for the two values of the penalty term (Case 1 and Case 2).

7. Conclusions

In this paper, a hybrid optimization technique combining Simulated Annealing and a QP method has been developed for the identification of a reactor network representing the pellet reactor for P-recovery, viewed as a mixed integer programming problem. Two levels are involved: at the upper level, SA generates different combinations, and at the lower level, the set of parameters is identified by a QP method. The results show that, for the two values of the total flowrate of the solution to be treated, a simple combination of ideal flow patterns is found, which seems interesting for the future control of the process.

8. References

Floquet, P., Pibouleau, L., Domenech, S., 1989, Identification de modèles par une méthode d'optimisation en variables mixtes, Entropie, Vol. 151, pp. 28-36.

Kirkpatrick, S., Gelatt, C.D., Vecchi, M.P., 1983, Optimization by simulated annealing, Science, Vol. 220, pp. 671-680.

Montastruc, L., Azzaro-Pantel, C., Cabassud, M., Biscans, B., 2002a, Calcium phosphate precipitation in a pellet reactor, 15th International Symposium on Industrial Crystallization, Sorrento (Italy), 15-18 September.

Montastruc, L., Azzaro-Pantel, C., Biscans, B., Cabassud, M., Domenech, S., 2002b, A thermochemical approach for calcium phosphate precipitation modeling in a pellet reactor, accepted for publication in Chemical Engineering Journal.

Morse, G.K., Brett, S.W., Guy, J.A., Lester, J.N., 1998, Review: Phosphorus removal and recovery technologies, The Science of the Total Environment, Vol. 212, pp. 69-81.

Mullin, J.W., 1993, Crystallization, Third Edition, Butterworth-Heinemann.

Seckler, M.M., Bruinsma, O.S.L., van Rosmalen, G.M., 1996, Phosphate removal in a fluidized bed - 2. Process optimization, Water Research, Vol. 30, No. 7, pp. 1589-1596.



Modelling of Crystal Growth in Multicomponent Solutions

Yuko Mori, Jaakko Partanen, Marjatta Louhi-Kultanen and Juha Kallas

Department of Chemical Technology, Lappeenranta University of Technology, P.O. Box 20, FIN-53851 Lappeenranta, Finland

Abstract

A crystal growth model was derived from the Maxwell-Stefan equations for the diffusion-controlled growth regime. As a model system, the ternary potassium dihydrogen phosphate (crystallizing substance) - water (solvent) - urea (foreign substance) system was employed. A thermodynamic model for the present system was successfully derived by the Pitzer method and allowed calculation of the activity coefficients of each component. The resulting activity-based driving force on each component and other solution properties (mass transfer coefficients, concentration of each component and solution density) were introduced into the Maxwell-Stefan equations. The crystal growth rates were then determined by solving the Maxwell-Stefan equations. The model was evaluated from single crystal growth measurements. The urea concentration, supersaturation level and solution velocity were varied. The results showed that experimental and predicted growth rates are in acceptable agreement.

1. Introduction

In industrial crystallization processes crystals are usually grown in a multicomponent system. The crystal growth rate in a multicomponent system may differ significantly from the crystal growth rate in a pure binary system; thus, it is worth understanding the growth kinetics in multicomponent solutions where foreign substances, other than the crystallizing substance and the solvent, are present. In general, the growth process is simply described by the diffusion layer model, in which growth units diffuse to the crystal-solution interface (mass transfer) and are then incorporated into the crystal lattice (surface integration) (Myerson, 1993). According to this model, the slowest step in the growth process determines the crystal growth rate. If mass transfer is the controlling resistance, the crystal growth rate can be determined from the mass transfer process alone. In the present study, a crystal growth model for multicomponent solutions was derived on the basis of the Maxwell-Stefan equations (Wesselingh and Krishna, 2000). The model was applied to the growth process from the ternary potassium dihydrogen phosphate (KDP) - water - urea system. KDP was considered to be the crystallizing substance and urea the foreign substance. In this study relatively high urea concentrations were employed in order to emphasise the diffusion of urea species. The non-ideal properties of the multicomponent solutions in the model were estimated by applying a simple thermodynamic model to the system. The Pitzer method was used to model the activity coefficients of the KDP solute and the urea molecule. The parameters in the model were estimated using binary and ternary equilibrium data. The resulting activity-based driving force on each component and other solution properties (mass transfer coefficients, concentration of each component and solution density) were introduced into the Maxwell-Stefan equations.
The crystal growth rates were determined by solving the Maxwell-Stefan equations.

In addition, growth experiments on a single KDP crystal were carried out to verify the growth rate model.

2. Crystal Growth Model Using Maxwell-Stefan Equations

Let us consider a KDP crystal exposed to a supersaturated KDP solution in aqueous urea. Due to the gradient of chemical potential, the KDP species diffuse to the crystal surface and integrate into the crystal lattice, so that the crystal grows. At the same time, the urea and water species diffuse when their chemical potential gradients exceed the friction with the other components. Figure 1 describes the concentration profiles of the above components adjacent to a growing crystal. Here the film theory was applied; thus, the concentration and activity gradients are linear in the film. When steady state is achieved, in the case of rapid reaction, x_1i is nearly x_1β. In this study it was assumed that x_1i equals x_1β. The ternary diffusion is generalized using the linearized Maxwell-Stefan equations (Wesselingh and Krishna, 2000). The difference equations for the mass transport of each component are:

KDP (1):   −Δa_1/ā_1 = (x̄_2 N_1 − x̄_1 N_2)/(c̄ k_1,2) + (x̄_3 N_1 − x̄_1 N_3)/(c̄ k_1,3)   (1)

Urea (2):  −Δa_2/ā_2 = (x̄_1 N_2 − x̄_2 N_1)/(c̄ k_1,2) + (x̄_3 N_2 − x̄_2 N_3)/(c̄ k_2,3)   (2)

Water (3): −Δa_3/ā_3 = (x̄_1 N_3 − x̄_3 N_1)/(c̄ k_1,3) + (x̄_2 N_3 − x̄_3 N_2)/(c̄ k_2,3)   (3)

where x̄_i and ā_i are the average mole fraction [-] and activity [mol/kg-solvent] of component i, respectively, c̄ the average solution concentration [mol/m³], N_i the flux of component i [mol/m² s] and k_i,j the mass transfer coefficient between components i and j. It should be remarked that only two of the above equations are independent. Apart from extremely high growth rates, inclusion of the water and urea species into a KDP crystal does not take place. Thus, the following bootstrap relations are obtained:

N_2 = N_3 = 0   (4)

After applying eq. (4) to eqs. (1) and (2) and rearranging, eqs. (1) and (2) become:

(a_1α − a_1β)/ā_1 = (x̄_2/k_1,2 + x̄_3/k_1,3) N_1/c̄   (5)

(a_2β − a_2α)/ā_2 = x̄_2 N_1/(c̄ k_1,2)   (6)

where the averages are taken between the bulk (α) and interface (β) sides of the film, e.g. x̄_i = (x_iα + x_iβ)/2 and c̄ = (c_α + c_β)/2. Additionally the following constraints are satisfied:

x_1α + x_2α + x_3α = 1   (7a)

x_1β + x_2β + x_3β = 1   (7b)
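Equations (5)-(6) with constraints (7a)-(7b) form a small nonlinear system in N_1 and x_2β. The sketch below solves it by fixed-point iteration under the simplifying assumption of ideal activities, a_i = x_i (the paper uses Pitzer activities instead, Section 2.1); all numerical inputs are illustrative.

```python
def solve_ms_growth(x1a, x1b, x2a, c, k12, k13, tol=1e-12):
    """Solve eqs. (5)-(6), as reconstructed above, for (N1, x2b).

    Subscripts: 1 = KDP, 2 = urea, 3 = water; a = bulk side, b = interface.
    Ideal activities a_i = x_i are assumed for this sketch.
    """
    x2b = x2a                              # initial guess: no urea pile-up
    n1 = 0.0
    for _ in range(200):
        x3a, x3b = 1.0 - x1a - x2a, 1.0 - x1b - x2b   # eqs. (7a)-(7b)
        xb2, xb3 = 0.5 * (x2a + x2b), 0.5 * (x3a + x3b)
        ab1 = 0.5 * (x1a + x1b)
        # eq. (5): KDP flux from its activity driving force
        n1 = (x1a - x1b) / ab1 * c / (xb2 / k12 + xb3 / k13)
        # eq. (6) rearranged as a fixed point for the interface urea fraction
        r = xb2 * n1 / (c * k12)
        x2b_new = x2a * (1.0 + 0.5 * r) / (1.0 - 0.5 * r)
        if abs(x2b_new - x2b) < tol:
            x2b = x2b_new
            break
        x2b = x2b_new
    return n1, x2b
```

The growth rate then follows from eq. (8) as G = N_1/ρ_c.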

Each activity was calculated from its concentration (see Section 2.1). Thus, if x_1α, x_1β and x_2α are given, the unknown variables reduce to two, x_2β and N_1, which can be solved using eqs. (5) and (6). When N_1 is determined, the growth rate of a KDP crystal follows from:

G = N_1 / ρ_c   (8)

where G is the growth rate [m/s] and ρ_c the crystal density [mol/m³].

Figure 1. The concentration profile of each component (KDP, urea, water) adjacent to a growing crystal, from the bulk solution across the boundary layer Δz to the crystal-solution interface.

2.1. Calculation method of activities
The activity is defined by the activity coefficient γ and the molality-based concentration m [mol solute/kg solvent] as:

a = γ m   (9)

The activity coefficients of the KDP ions and the urea molecule in the KDP-water-urea system were modelled using the Pitzer method. The equations for the activity coefficients γ of the KDP ions and the urea molecule in the ternary KDP - urea - water system can be described as:

ln(γ_K+) = f^γ + 2B_K,A m_K + f(B') + 2λ_urea,K m_urea   (10)

ln(γ_A−) = f^γ + 2B_K,A m_A + f(B') + 2λ_urea,A m_urea   (11)

ln(γ_urea) = 2λ_urea,K m_K + 2λ_urea,A m_A + 2λ_urea,urea m_urea   (12)

where λ_urea,K or A is the ion-molecule interaction coefficient and λ_urea,urea the molecule-molecule interaction coefficient, respectively. The quantities in eqs. (10) and (11) are described in the literature (Covington and Ferra, 1994). Using solubility data for the binary KDP-water system and for the ternary KDP-water-urea system, (λ_urea,K + λ_urea,A) was estimated to be 0.017122 (Mori et al., 2002). The mean ionic activity coefficient of KDP is calculated as:

ln(γ_±) = f^γ + B_K,A (m_K + m_A) + f(B') + 0.034244 m_urea   (13)

On the other hand, the coefficient λ_urea,urea was estimated by the isotonic method (Scatchard et al., 1938). The equilibrium state of aqueous solutions of potassium chloride and urea is expressed as:

ν_KCl m_KCl φ_KCl = m_urea φ_urea   (14)

where φ is the osmotic coefficient. φ_KCl was calculated by the Pitzer equation for the osmotic coefficient (Pitzer and Mayorga, 1973) and φ_urea was derived from the Gibbs-Duhem equation as:

φ_urea = 1 + λ_urea,urea m_urea   (15)

After introducing the equilibrium concentration data, the calculated φ_KCl and eq. (15) into eq. (14), the error of eq. (14) was minimised by the least squares method with respect to λ_urea,urea. The estimated value of λ_urea,urea is −0.02117. Finally, the activity coefficient of urea is calculated as:

ln(γ_urea) = 0.034244 m_K − 0.04234 m_urea   (16)
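With the two fitted interaction sums quoted above, eqs. (9) and (16) give the urea activity directly; a minimal evaluation (the molality values are illustrative):

```python
import math

def gamma_urea(m_k, m_urea):
    """Eq. (16): ln(gamma_urea) = 0.034244*m_K - 0.04234*m_urea."""
    return math.exp(0.034244 * m_k - 0.04234 * m_urea)

def activity_urea(m_k, m_urea):
    """Eq. (9): a = gamma * m."""
    return gamma_urea(m_k, m_urea) * m_urea
```

At infinite dilution both terms vanish and γ_urea = 1, as expected; at the urea concentrations used in this study the urea activity is noticeably below its molality.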

2.2. Estimation of mass transfer coefficient
The Maxwell-Stefan equations contain the mass transfer coefficients. In the film theory the mass transfer coefficient is obtained from:

k_i,j = Đ_i,j / Δz   (17)

where Đ_i,j is the Maxwell-Stefan diffusion coefficient for the component pair i and j [m²/s] and Δz the boundary layer thickness [m]. The ternary diffusion coefficients depend strongly on the solution concentration. In order to calculate accurate mass transfer coefficients, experimental diffusion coefficient data at the concentrations and temperatures of interest are necessary. However, since such data are not available at the concentrations and temperature used in the present study, it was assumed that the ternary diffusion coefficients were equal to the binary diffusion coefficients. The binary diffusion coefficients of the KDP-water pairs and the urea-water pairs were taken from the literature (Mullin and Amatavivadhana, 1967; Cussler, 1997). The values were transformed into Maxwell-Stefan diffusivities using the thermodynamic correction factor. The Maxwell-Stefan diffusivity of the KDP-urea pairs in the ternary system was approximated by the limiting diffusivity, since the mole fraction of water is close to 1, and estimated by the following model proposed by Wesselingh and Krishna (2000):

Đ_1,2 = (Đ_1,3⁰ Đ_2,3⁰)^(1/2)   (18)

where Đ_1,3⁰ and Đ_2,3⁰ are the Maxwell-Stefan binary diffusivities of the KDP-water pairs and of the urea-water pairs at infinite dilution, respectively. The boundary layer thickness is only a function of the flow conditions, and it was determined from growth rate experiments in the binary system at different solution velocities.
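The film-theory relation (17) and the geometric-mean estimate (18), as reconstructed here, reduce to a few lines; the diffusivity and film-thickness values below are placeholders, not the paper's data.

```python
def mass_transfer_coeffs(d13, d23, dz):
    """Return k_ij = D_ij/dz (eq. 17), with the KDP-urea pair diffusivity
    taken as the geometric mean of the infinite-dilution KDP-water (d13)
    and urea-water (d23) diffusivities (eq. 18)."""
    d12 = (d13 * d23) ** 0.5
    return {"1,2": d12 / dz, "1,3": d13 / dz, "2,3": d23 / dz}
```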

3. Experimental Procedure

Growth rate measurements were performed in a growth cell on a single KDP crystal. The experimental setup consists of a 1-litre jacketed vessel, a peristaltic pump and two heat exchangers in addition to the growth cell. An optical microscope equipped with a digital camera was mounted above the cell and allowed taking images of the growing crystal at regular intervals. Saturated solutions of KDP in water and in urea solutions of 1.0, 2.5 and 5.0 m were prepared in the vessel. Two levels of activity-based supersaturation, Δa/a = 0.022 and 0.037, were employed for all solutions. Two additional levels of Δa/a, 0.045 and 0.052, were applied only for the pure solution. Before each run, the solution temperature was increased to 50°C. The solution was pumped to the flow cell through a glass heat exchanger, by which it was cooled to the crystallization temperature of 30°C. After passing the flow cell, the solution returned to the mother liquor vessel via a heating jacket in which it was heated back to 50°C. The supersaturation and the solution velocity were kept constant during each run. The solution velocity was varied from 0.00165 m/s to 0.05 m/s. When the solution had reached thermal equilibrium, a seed crystal with dimensions of about 2.5 × 2.0 × 1.0 mm³ was introduced into the cell. After the operating conditions had stabilised, the first image of the growing crystal was registered, and further images were taken at 10-minute intervals over a duration of 6 h. All the images of the crystals were analysed using the image analysis software AnalySIS in order to determine the normal growth rates of the (101) face.

4. Results and Discussion

4.1. Growth rate of KDP in the binary system at different flow velocities
The growth rate of a KDP crystal in a pure solution was measured as a function of flow velocity at constant activity-based supersaturation, Δa/a = 0.037. At each condition a steady level of growth was achieved during the experiments. The experimental results indicate that the growth process is diffusion-controlled at flow velocities lower than 0.033 m/s. Thus, the flow velocity v = 0.005 m/s was chosen for the growth measurements in the ternary system. The mass transfer coefficients were determined by applying the binary Maxwell-Stefan equations to the measured growth rates, and subsequently the boundary layer thickness was obtained at different flow velocities.

4.2. Growth rate of KDP in the ternary system at different supersaturations and urea concentrations
The growth rate of a KDP crystal in the ternary KDP-water-urea system was measured at different activity-based supersaturations and urea concentrations. For a urea concentration of 1.0 m, the growth rate reached a steady level. However, for urea concentrations of 2.5 m and 5.0 m, the crystal growth first stabilized at the same level as the growth in a pure solution at the same supersaturation. After a period of time, the growth declined slowly and stabilized at a second steady level. Finally, after some time the growth levelled down slowly. It is interpreted that at the first stage urea enhanced the growth of the KDP crystal (Kuznetsov et al., 1998). However, it is difficult to discuss the role of urea in the initial growth promotion in the diffusion-controlled process of the present study. The observed results at the final stage can be understood as the diffusion coefficient decreasing due to aging of the solution. This phenomenon was also observed in the KDP-water and the KDP - 1.0 m urea solution systems. It was shown that the effect of solution aging is more significant as the urea concentration increases. The second steady level of growth was considered to be the growth rate in the present system. Figure 2 shows the growth rate computed from the Maxwell-Stefan equations (5)-(6) in the ternary system compared with the experimental data. In Fig. 2 the computed values accord with the experiments reasonably well. The deviation might be decreased

when the concentration dependence of the diffusivity is taken into account.

Figure 2. Growth rate computed from the Maxwell-Stefan equations in the ternary system (lines) compared to the experimental data (symbols) for pure KDP and for KDP + 1.0 m, 2.5 m and 5.0 m urea solutions.

5. Conclusions

In the present study the diffusion-controlled growth process from the ternary system was modelled by the Maxwell-Stefan equations. The estimation methods for the required model parameters were shown. The model was evaluated from single crystal growth measurements in the ternary system. The results showed that experimental and predicted growth rates were in acceptable agreement.

6. References
Covington, A.K. and Ferra, M.I.A., 1994, A Pitzer mixed electrolyte solution theory approach to assignment of pH to standard buffer solutions, J. Solution Chem., 23, 1.
Cussler, E.L., 1997, Diffusion: Mass Transfer in Fluid Systems, 2nd ed., Cambridge University Press, Cambridge.
Kuznetsov, V.A., Okhrimenko, T.M. and Rak, M., 1998, Growth promoting effect of organic impurities on growth kinetics of KAP and KDP crystals, J. Crystal Growth, 193, 164.
Mori, Y., Partanen, J., Louhi-Kultanen, M. and Kallas, J., 2002, The influence of urea on the solubility and crystal growth of potassium dihydrogen phosphate, Proceedings of ISIC 15, September 15-18, Italy, 1, 353.
Mullin, J.W. and Amatavivadhana, A., 1967, Growth kinetics of ammonium- and potassium-dihydrogen phosphate crystals, J. Appl. Chem., 17, 151.
Myerson, A.S., 1993, Handbook of Industrial Crystallization, 1st ed., Butterworth-Heinemann, Stoneham.
Pitzer, K.S. and Mayorga, G., 1973, Thermodynamics of electrolytes. II. Activity and osmotic coefficients for strong electrolytes with one or both ions univalent, J. Phys. Chem., 77(19), 2300.
Scatchard, G., Hamer, W.J. and Wood, S.E., 1938, Isotonic solutions. I. The chemical potential of water in aqueous solutions of sodium chloride, sulfuric acid, sucrose, urea and glycerol at 25°, J. Am. Chem. Soc., 60, 3061.
Wesselingh, J.A. and Krishna, R., 2000, Mass Transfer in Multicomponent Mixtures, 1st ed., Delft University Press, Delft.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Towards the Atomistic Description of Equilibrium-Based Separation Processes. 1. Isothermal Stirred-Tank Adsorber
J. P. B. Mota
Departamento de Quimica, Centro de Quimica Fina e Biotecnologia, Faculdade de Ciencias e Tecnologia, Universidade Nova de Lisboa, 2829-516 Caparica, Portugal

Abstract A new molecular simulation technique is developed to solve the perturbation equations for a multicomponent, isothermal stirred-tank adsorber under equilibrium controlled conditions. The method is a hybrid between the Gibbs ensemble and Grand Canonical Monte Carlo methods, coupled to macroscopic material balances. The bulk and adsorbed phases are simulated as two separate boxes, but the former is not actually modelled. To the best of our knowledge, this is the first attempt to predict the macroscopic behavior of an adsorption process from knowledge of the intermolecular forces by combining atomistic and continuum modelling into a single computational tool.

1. Introduction Process modelling is a key enabling technology for process development and design, equipment sizing and rating, and process optimization. However, its success is critically dependent upon accurate descriptions of thermodynamic properties and phase behavior. Molecular simulation has now developed to the point where it can be useful for quantitative prediction of those properties (Council for Chemical Research, 1999). Although there are several molecular simulation methodologies currently available, bridging techniques, i.e. computational methods used to bridge the range of spatial and temporal scales, are still largely underdeveloped. Here, we present a new molecular simulation method that bridges the range of spatial scales, from atomistic to macroscale, and apply it to solve the perturbation equations for a multicomponent, isothermal stirred-tank adsorber under equilibrium controlled conditions.

2. Problem Formulation
Consider an isothermal stirred-tank adsorber under equilibrium-controlled conditions. ε is the bulk porosity (volumetric fraction of the adsorber filled with fluid phase), ε_p is the porosity of the adsorbent, F_i > 0 is the amount of component i added to the adsorber in the inlet stream, and W_i > 0 is the corresponding amount removed in the outlet stream; both F_i and W_i represent amounts scaled with respect to the adsorber volume. The differential material balance on the ith component of an m-component mixture in the adsorber yields

ε dc_i + (1 − ε)ε_p dq_i = dF_i − dW_i,

(1)

where c_i and q_i are the concentrations in the fluid and adsorbed phases, respectively. Since the fluid phase is assumed to be perfectly mixed,

dW_i = y_i dW = c_i dG,

(2)

where y_i is the mole fraction of component i in the fluid phase and dG is the differential volume of fluid (at the conditions prevailing in the adsorber) removed in the outlet stream, scaled by the adsorber volume. Substitution of Eq. (2) into Eq. (1) gives

ε dc_i + (1 − ε)ε_p dq_i = dF_i − c_i dG.

(3)

When Eq. (3) is integrated from state n − 1 to state n, the following material balance is obtained:

ε Δc_i^(n) + (1 − ε)ε_p Δq_i^(n) = ΔF_i^(n) − c_i^(n−1/2) ΔG^(n),   where Δφ^(n) ≡ φ^(n) − φ^(n−1).

Approximating c_i^(n−1/2) by c_i^(n), this can be rearranged to

(ε + ΔG^(n)) c_i^(n) + (1 − ε)ε_p q_i^(n) = ε c_i^(n−1) + (1 − ε)ε_p q_i^(n−1) + ΔF_i^(n).

(7)

Given that the inlet value ΔF_i^(n) is an input parameter, the terms on the r.h.s. of Eq. (7) are known quantities. To simplify the notation, the r.h.s. of Eq. (7) is condensed into a single parameter denoted by w_i and the superscripts are dropped. Eq. (7) can then be written in shorthand notation as

(ε + ΔG) c_i + (1 − ε)ε_p q_i = w_i.

(8)
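To make the role of the shorthand balance concrete, the following is a minimal continuum sketch of solving Eq. (8) for the bulk concentration c_i; here the adsorbed-phase closure is a hypothetical Langmuir isotherm (the paper instead closes the balance with the molecular simulation described below), and the isotherm parameters q_s and K are illustrative:

```python
# Illustrative (not the paper's method): solve the shorthand balance
# (eps + dG)*c + (1 - eps)*eps_p*q(c) = w for c, closing it with an
# assumed Langmuir isotherm q(c) = q_s*K*c/(1 + K*c).

def solve_balance(w, dG, eps=0.45, eps_p=0.6, q_s=5.0, K=0.8):
    """Return the bulk concentration c satisfying the material balance."""
    def q(c):                      # assumed Langmuir isotherm closure
        return q_s * K * c / (1.0 + K * c)

    def residual(c):
        return (eps + dG) * c + (1.0 - eps) * eps_p * q(c) - w

    lo, hi = 0.0, w / eps          # residual(lo) < 0 <= residual(hi)
    for _ in range(100):           # bisection: residual is monotone in c
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

c = solve_balance(w=2.0, dG=0.1)
```

The porosity values ε = 0.45 and ε_p = 0.6 are those used later in the application example.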

This equation requires a closure condition, which consists of fixing the value of either ΔG or the pressure P at the new state. Here we show that Eq. (8), together with the conditions of thermodynamic equilibrium for an isothermal adsorption system (equality of chemical potentials between the two phases¹), can be solved using the Gibbs ensemble Monte Carlo (GEMC) method in the modified form presented in the next section.

3. Simulation Method
In the GEMC method (Panagiotopoulos, 1987) the two phases are simulated as two separate boxes, thereby avoiding the problems with the direct simulation of the interface between the two phases. The system temperature is specified in advance, and the number of molecules of each species i in the adsorbed phase, N_iP, and in the bulk, N_iB, may vary subject to the constraint N_iB + N_iP = N_i, where N_i is fixed. If Eq. (8) is rewritten in terms of N_iB and N_iP, the following expression is obtained:

N_iB + N_iP = C_i = [N_AV V_P / ((1 − ε)ε_p)] w_i,

(9)

¹When one of the phases is an adsorbed phase, equality of pressure is no longer a condition of thermodynamic equilibrium. This is because the pressure within the pore of an adsorbent is tensorial in nature, whereas in the bulk fluid the pressure is a scalar.


where N_AV is Avogadro's number and V_P is the volume of the box simulating the adsorbed phase. In Eq. (9) the value of C_i is expressed as a function of V_P instead of the volume V_B of the box simulating the bulk fluid. The reason for this is that V_P is always fixed, whereas, as we shall show below, V_B must be allowed to fluctuate during the simulation when the pressure is an input parameter. Obviously, for Eq. (9) to be valid the values of V_B and V_P must be chosen in accordance with the relative dimensions of the physical problem, i.e.

V_B / V_P = (ε + ΔG) / ((1 − ε)ε_p).

(10)

Since the GEMC method inherently conserves the total number of molecules of each species, Eq. (9) is automatically satisfied by every sampled configuration provided that each C_i turns out to be an integer number. This is why GEMC is the natural ensemble to use when solving Eq. (9). Unfortunately, in general it is not possible to size V_B and V_P according to Eq. (10) so that each C_i is an integer number. To overcome this problem, Eq. (9) is satisfied statistically by allowing N_i to fluctuate around the target value C_i so that the ensemble average gives

⟨N_i⟩ = C_i.

(11)

This approach is different from that employed in a conventional GEMC simulation, where N_i is fixed. When ΔG is an input parameter, the sizes of the two simulation boxes are fixed and their volumes are related by Eq. (10). On the other hand, when the pressure of the bulk fluid is imposed, the volume V_B must be allowed to fluctuate during the simulation so that on average the fluid contained within it is at the desired pressure. Once the ensemble average ⟨V_B⟩ is determined, the value of ΔG follows from Eq. (10):

ΔG = (1 − ε)ε_p ⟨V_B⟩/V_P − ε.   (12)

It is shown in detail elsewhere (Mota, 2002) that if an equation of state for the fluid phase is known, the bulk box does not have to be explicitly modelled: computations on the bulk box amount to just updating the values of the N_iB as the configuration changes. Thermodynamic equilibrium between the two boxes is achieved by allowing them to exchange particles and by changing the internal configuration of volume V_P. The probability of acceptance of the latter moves (molecule displacement, rotation, or conformational change) is the same as for a conventional canonical simulation:

min{1, exp(−β ΔU)},

(13)

where β = 1/(k_B T), with k_B the Boltzmann constant, and ΔU is the internal energy change resulting from the configurational move. The transfer of particles between the two boxes forces equality of chemical potentials. The probability of accepting a trial move in which a molecule of type i is transferred to or from volume V_P is, respectively,

acc(N_iP → N_iP + 1; N_iB → N_iB − 1) = min{1, [V_P f_i(N_B, 0) / ((N_iP + 1) k_B T)] exp(−β[U(s^{N_P+1}) − U(s^{N_P})])},   (14)

acc(N_iP → N_iP − 1; N_iB → N_iB + 1) = min{1, [N_iP k_B T / (V_P f_i(N_B, 1))] exp(−β[U(s^{N_P−1}) − U(s^{N_P})])},

(15)

where U(s^{N_P+1}) is the internal energy of configuration s^{N_P+1} in volume V_P, N_B = [N_1B, ..., N_mB], and f_i(N_B, k) is the fugacity of species i in a gas mixture at temperature T and mole-fraction composition

( N_1B/(N_B + k), ..., (N_iB + k)/(N_B + k), ..., N_mB/(N_B + k) ),   (16)

where N_B = Σ_i N_iB denotes the total number of bulk molecules.

These acceptance rules imply that a box is first chosen with equal probability, then a species is chosen with a fixed (but otherwise arbitrary) probability, and finally a molecule of that species is randomly selected for transfer to the other box. How the equation of state is actually employed to compute f_i depends on the type of problem being solved. If ΔG is an input parameter, V_B is fixed throughout the simulation and the gas mixture is further specified by its number density ρ = (N_B + k)/V_B. If, on the other hand, the pressure is fixed, its value defines the state of the mixture. The statistical mechanical basis for Eqs. (14) and (15) is discussed elsewhere (Mota, 2002). All that remains to complete the simulation procedure is to generate trial configurations whose statistical average obeys Eq. (11). Let us consider how to do this. First, note that the maximum number of molecules of species i that may be present in the simulation system without exceeding the material balance given by Eq. (9) is obtained by truncating C_i to an integer number, which we denote by int(C_i). The remainder δ_i (0 ≤ δ_i < 1), which must be added to int(C_i) to get C_i, is

δ_i = C_i − int(C_i).

(17)

To get the best statistics, N_i = N_iB + N_iP must fluctuate with the smallest amplitude around the target value C_i, which is the case when N_i can only take the two integer values int(C_i) or int(C_i) + 1. It is straightforward to derive that for Eq. (11) to hold, the probability density of finding the system in one of the two configurations must be

N{N_i → int(C_i)} ∝ 1 − δ_i,   N{N_i → int(C_i) + 1} ∝ δ_i.

(18)

In order to sample this probability distribution, a new type of trial move must be performed, which consists of an attempt to change the system to a configuration with int(C_i) or int(C_i) + 1 particles. This move should not be confused with the particle exchange move given by Eqs. (14) and (15); here, a particle is added to or removed from the system according to the probability given by Eq. (18). It is highly recommended that the box for insertion/removal of the molecule always be the bulk box (except for the rare cases in which N_iB becomes zero). This choice is most suited for adsorption from the gas phase where, in general, the bulk phase is much less dense than the adsorbed phase and, therefore, more permeable to particle insertions. Furthermore, given that the bulk box is not actually modelled, the molecule insertion/removal amounts to just updating the value of N_iB.
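The particle-number trial move can be sketched as follows; this is a minimal illustration of the sampling rule of Eq. (18), not the full GEMC code:

```python
import random

# Sketch of the particle-number trial move of Eq. (18): the total count
# N_i is resampled between int(C_i) and int(C_i) + 1 with probabilities
# (1 - delta_i, delta_i), so that its ensemble average equals C_i.

def adjust_total(C_i):
    """Return a total molecule count whose average over many calls is C_i."""
    base = int(C_i)
    delta = C_i - base              # remainder, 0 <= delta < 1
    return base + (1 if random.random() < delta else 0)

random.seed(0)
samples = [adjust_total(137.3) for _ in range(200000)]
mean = sum(samples) / len(samples)  # fluctuates only between 137 and 138
```

Because N_i only takes the two neighbouring integers, the fluctuation amplitude around C_i is the smallest possible, as required for good statistics.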

4. Application Example
Due to lack of space, the few results presented here are primarily intended to demonstrate the applicability of the proposed method. The pore space of the adsorbent is assumed to consist of slit-shaped pores of width 15 Å, with parameters chosen to model activated carbon. The porosity values are fixed at ε = 0.45 and ε_p = 0.6. The feed stream is a ternary gas mixture of CH4 (30%) / C2H6 (50%) / H2 (20%). The vapor-phase fugacities were computed from the virial equation to second order, using coefficients taken from Reid et al. (1987).

Table 1: Input parameters and output variables employed in each phase of the simulated operation of the adsorber

Phase | Input parameters | Output variables | # steps
I | ΔG^(n) = 0, ΔF^(n) > 0 (a) | P^(n) | 10
II | P^(n) = 10 bar | ΔG^(n) | 36
III | ΔF^(n) = 0, ΔG^(n) = 100/c^(n−1) (b) | P^(n) | 21

Notes: c = Σ_i c_i; (a) nearly equivalent to setting ΔF^(n) = 400 mol/m³; (b) nearly equivalent to setting ΔW^(n) = 90 mol/m³.

CH4 and C2H6 were modelled using the TraPPE (Martin and Siepmann, 1998) united-atom potential. The Lennard-Jones parameters for H2 were taken from Turner et al. (2001). The potential cutoff was set at 14 Å, with no long-range corrections applied. The interactions with the carbon walls were accounted for using the 10-4-3 Steele potential (Steele, 1974), with the usual parameters for carbon. The simulations were equilibrated for 10^4 Monte Carlo cycles, where each cycle consists of N = N_B + N_P attempts to change the internal configuration of volume V_P (equally partitioned between translations, rotations and conformational changes) and N/3 attempts to transfer molecules between boxes. Each particle-transfer attempt was followed by a trial move to adjust the total number of molecules of that type according to Eq. (18). The production periods consisted of 3 × 10^4 Monte Carlo cycles. Standard deviations of the ensemble averages were computed by breaking the production runs into five blocks. The simulation reported here consists of the following sequential operating procedure applied to an initially evacuated adsorber: (I) charge up to P = 10 bar, (II) constant-pressure feed, and (III) discharge down to P = 1.25 bar. This example encompasses the major steps of every cyclic batch adsorption process for gas separation, in which regeneration of the bed is accomplished by reducing the pressure at essentially constant temperature, as is the case in pressure swing adsorption. The input parameters and output variables for each phase are listed in Table 1. Figure 1 shows the simulated pressure profile plotted as a function of either F or W for each phase. The corresponding gas-phase mole fraction profiles are plotted in Figure ??. During charge the adsorbed phase is enriched in C2H6 and CH4, which are the strongly adsorbed components.
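A minimal sketch of the 10-4-3 Steele wall potential used for the slit pore follows; the interlayer spacing and carbon density are the usual graphite values, while the solid-fluid parameters eps_sf and sig_sf are illustrative Lorentz-Berthelot estimates for a TraPPE CH4 site against carbon, not values quoted in the paper:

```python
import math

# 10-4-3 Steele potential for one graphite wall (Steele, 1974), with the
# usual carbon parameters: interlayer spacing Delta = 3.35 A and carbon
# number density rho_s = 0.114 A^-3.  Energies are in kelvin.

DELTA = 3.35        # A, graphite interlayer spacing
RHO_S = 0.114       # A^-3, carbon number density

def steele_10_4_3(z, eps_sf, sig_sf):
    """Wall-fluid energy (K) at distance z (A) from one wall."""
    pref = 2.0 * math.pi * RHO_S * eps_sf * sig_sf**2 * DELTA
    r = sig_sf / z
    return pref * (0.4 * r**10 - r**4
                   - sig_sf**4 / (3.0 * DELTA * (z + 0.61 * DELTA)**3))

def slit_pore(z, H, eps_sf, sig_sf):
    """Total energy in a slit of width H: both walls contribute."""
    return (steele_10_4_3(z, eps_sf, sig_sf)
            + steele_10_4_3(H - z, eps_sf, sig_sf))

# Illustrative Lorentz-Berthelot mixing: TraPPE CH4 (eps/k = 148 K,
# sigma = 3.73 A) against carbon (eps/k = 28 K, sigma = 3.40 A).
eps_sf = math.sqrt(148.0 * 28.0)
sig_sf = 0.5 * (3.73 + 3.40)
u_mid = slit_pore(7.5, 15.0, eps_sf, sig_sf)   # pore centre, attractive
```

At the centre of the 15 Å pore the two wall contributions overlap, so the energy is attractive, which is what drives the preferential adsorption of CH4 and C2H6 over H2.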


Figure 1: Simulated pressure profile. (I) charge; (II) const-pressure feed; (III) discharge.


Figure 3. Distribution of the computational domain among the two solvers: (a) physical domain in two-dimensional axially-symmetric cylindrical coordinates; (b) computational domain handled by ANGTANK; (c) computational domain handled by FLUENT.

routines and an efficient staggered-corrector sensitivity algorithm. Furthermore, the additional information required by DSL48S (notably the sparse Jacobian matrix, analytical partial derivatives with respect to model parameters, and sparsity information) is generated automatically with other DAEPACK components. To render the hybrid solution procedure computationally more efficient, the numerical grids employed by FLUENT and ANGTANK do not have to match at their interface. To allow for this flexibility, data are exchanged between grids using an interpolation procedure that is consistent with the control-volume-based technique employed by both solvers. The two codes run as independent processes and communicate through shared memory. The software interface, which has been implemented as a user-defined function in FLUENT, dynamically suspends or activates each process as necessary, and is responsible for data exchange and process synchronization. This strategy leads to an optimal allocation of CPU usage. The two codes interact with each other as follows. Suppose that we wish to advance the solution from the current time t_n to time t_{n+1}. At the start of the step FLUENT is active whereas ANGTANK is suspended. Before computing the new solution, FLUENT first updates its boundary conditions. To do this, it provides ANGTANK with the wall-temperature profile data, T_w^(n). These data are defined on the boundary interface between the two computational domains, which is identified as 'internal wall' in Figure 3. Once this has been done, FLUENT is suspended and ANGTANK is reactivated. The latter interpolates the data from FLUENT's grid to its own grid.
It then advances the solution in the adsorbent bed from t_n to t_{n+1} using the newly received T_w^(n) data. Before being suspended again, ANGTANK computes the packed-bed-side wall heat flux data, −n_w · k_e ∇T^(n+1), and interpolates them from its grid to FLUENT's grid. It also updates the value of the exhaust gas flow rate, which is another input to FLUENT, and then sends the new boundary data to FLUENT. Now, the CFD code can compute the solution at t_{n+1}. The data exchange between ANGTANK and FLUENT ensures continuity of temperature and heat flux along the outer surface of the cylinder wall, and is the mechanism by which the two solutions are synchronized. An algorithm describing the software interface is provided in Figure 4; the function calls given there are specific to the Windows 2000 operating system and handle the notification of events across process boundaries.


Figure 4. Algorithm of the software interface managing data exchange and synchronization of ANGTANK and FLUENT.
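The Figure 4 handshake can be sketched with ordinary event objects; this is a simplified, single-variable illustration in which a dict stands in for shared memory and `threading.Event` for the Win32 events (the toy heat-flux law is an assumption, not the paper's model):

```python
import threading

# Simplified sketch of the Figure 4 handshake: FLUENT publishes the wall
# temperature, waits for ANGTANK's heat-flux reply, and both repeat.

shared = {}
tw_ok = threading.Event()    # "wall temperature ready"  (cf. hTwOK)
hfw_ok = threading.Event()   # "wall heat flux ready"    (cf. hHFwOK)
N_STEPS = 3

def fluent():
    for n in range(N_STEPS):
        shared["Tw"] = 300.0 + n           # face-centre wall temperatures
        tw_ok.set()                        # SetEvent(hTwOK)
        hfw_ok.wait(); hfw_ok.clear()      # WaitForSingleObject(hHFwOK)
        shared.setdefault("log", []).append(shared["qw"])

def angtank():
    for n in range(N_STEPS):
        tw_ok.wait(); tw_ok.clear()        # WaitForSingleObject(hTwOK)
        Tw = shared["Tw"]                  # interpolate to local grid ...
        shared["qw"] = -0.1 * (Tw - 290.0) # toy packed-bed wall heat flux
        hfw_ok.set()                       # SetEvent(hHFwOK)

t1 = threading.Thread(target=fluent)
t2 = threading.Thread(target=angtank)
t1.start(); t2.start(); t1.join(); t2.join()
```

Because each solver blocks until the other has published its boundary data, exactly one process is active at a time, which is the CPU-allocation property described in the text.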

3. Results and Discussion
The hybrid solution procedure described in the previous section is computationally more demanding than one that does not rely on the CFD package to predict the heat transfer from the exhaust gas. In fact, this simpler approach was adopted in the early stages of the project, when the heat transfer process was modelled using a mean heat transfer coefficient estimated from correlations for convective heat transfer in annuli. However, it was soon realized that this method has a high degree of uncertainty when the heat transfer process takes place under unsteady-state conditions and when the thermal entry length spreads over an appreciable extent of the domain. These conditions are always met in the application under study. The heat capacity, viscosity, and flow rate of the exhaust gas can be related to the methane discharge flow rate F by a simple combustion model. Fuel (methane) and oxidant (air) are presumed to combine in a single step to form products:

CH4 + α(2O2 + 7.52N2) = CO2 + 2H2O + 2(α − 1)O2 + 7.52αN2,

(1)

where α = 1.2 is the air-fuel ratio. Assuming that this model is a good approximation to the real combustion process, the Reynolds number for flow of the exhaust gas in the annular space of the jacket is given by

Re_e = [M_CO2 + 2M_H2O + 2(α − 1)M_O2 + 7.52α M_N2] F / (π(R + e_w) μ_e),   (2)

where R is the cylinder radius, e_w is the wall thickness, μ_e is the exhaust-gas viscosity, and M_CO2, ..., M_N2 are the molecular weights. Equation (2) shows that Re_e is independent of the thickness of the annular space of the jacket, and further analysis reveals that under normal discharge conditions the flow in the jacket is laminar. The fact that turbulence modelling is unnecessary reinforces our confidence in the accuracy of the heat transfer data obtained using FLUENT. The hybrid computational tool has been successfully employed to size, simulate and optimize the new tank design. As an illustration of the results obtained, we compare in
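As a quick numerical check of Eq. (2), the sketch below evaluates the exhaust-gas Reynolds number; the discharge rate, geometry, and viscosity values are illustrative assumptions, not values taken from the paper:

```python
import math

# Numerical check of Eq. (2): exhaust-gas Reynolds number in the jacket
# annulus for a given methane discharge flow rate F (mol/s).

ALPHA = 1.2                       # air-fuel ratio
M = {"CO2": 44.01e-3, "H2O": 18.02e-3,
     "O2": 32.00e-3, "N2": 28.01e-3}   # molecular weights, kg/mol

def reynolds(F, R, e_w, mu):
    """Re of the exhaust gas in the jacket annulus, per Eq. (2)."""
    m_dot = (M["CO2"] + 2 * M["H2O"] + 2 * (ALPHA - 1) * M["O2"]
             + 7.52 * ALPHA * M["N2"]) * F     # exhaust mass flow, kg/s
    return m_dot / (math.pi * (R + e_w) * mu)

# Assumed values: F = 0.01 mol/s, R = 10 cm, e_w = 5 mm,
# mu ~ 2.2e-5 Pa.s for the hot exhaust gas.
Re = reynolds(F=0.01, R=0.10, e_w=0.005, mu=2.2e-5)
```

For these assumed conditions Re comes out well below the laminar-turbulent transition (~2300), consistent with the laminar-flow observation in the text.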


Figure 5. Comparison of the temperature field for a conventional cylinder (left) and that for a jacketed tank (right) with the same geometry, during a three-hour discharge.

Figure 5 the temperature field for a conventional storage cylinder (left) with that for the new design incorporating a jacket (right). The exhaust gas inlet temperature is 80°C. At the end of the discharge the mean temperature in the standard cylinder has dropped by 27°C, whereas in the jacketed cylinder it has increased by 8°C above its initial value. Given that the jacket takes up space that in a conventional storage tank can be filled with adsorbent, the performance of the proposed prototype should be compared with those of two standard tanks: one having the same volume as the prototype and the other having the same weight. This leads to two different values of the dynamic efficiency, respectively η_V and η_W, for the same exhaust gas inlet temperature. These two cases are suitable benchmarks for mobile applications in which the limiting constraint is, respectively, volume (η_V) or weight of storage (η_W). Figure 6 shows the influence of discharge duration on the exhaust temperature required for increasing the net deliverable capacity of the storage cylinder to the isothermal performance level (η = 1). The results presented are for the two benchmark cases and refer to two different values, e_c = 2 mm and e_c = 5 mm, of the thickness of the annular space of the jacket. As expected, higher exhaust temperatures are necessary to attain the equivalent of isothermal performance when the comparison is made on a volume basis than on a weight basis. If the discharge duration is increased, which is equivalent to decreasing the fuel discharge flow rate, then isothermal performance can be reached with lower exhaust temperatures. Decreasing e_c improves the performance of the storage cylinder because heat transfer to the carbon bed is enhanced.
This increase in performance is more pronounced when the comparison between prototype and regular cylinder is made on a volume basis. The energy demand of a city vehicle, equipped with 3 cylinders like the one considered

Figure 6. Required exhaust temperature to attain isothermal performance (η = 1) as a function of discharge duration. R/L = 10/74; e_w = 5 mm; thermal conductivity of adsorbent bed k_e = 2×10^-3 W cm^-1 K^-1.

in this work and travelling at cruising speed, gives a discharge duration of about 3 hours. Figure 6 shows that, in this case, the required exhaust temperatures to attain the isothermal performance level are in a perfectly feasible range (80-100°C).

4. Conclusions The case study presented here shows that computational fluid dynamics and process simulation technologies are highly complementary, and that there are clear benefits to be gained from a close integration of the two.




Online HAZOP Analysis for Abnormal Event Management of Batch Process Fangping Mu, Venkat Venkatasubramanian* Laboratory for Intelligent Process Systems, School of Chemical Engineering Purdue University, West Lafayette, IN 47907, USA

Abstract
Hazard and operability analysis (HAZOP) is a widely used process hazard analysis method for batch processes. However, even though HAZOP analysis considers various potential accident scenarios and produces results that contain the causes, consequences and operator options for these scenarios, these results are not generally available or used when such emergencies occur in the plant. In this work, we describe an approach that integrates multivariate statistical process monitoring and HAZOP analysis for abnormal event management. The framework includes three major parts: process monitoring and fault detection based on multiway principal component analysis, an automated online HAZOP analysis module, and a coordinator. A case study is given to illustrate the features of the system.

1. Introduction
Batch and semi-batch processes play an important role in the chemical industry. They are widely used in the production of many chemicals such as biochemicals, pharmaceuticals, polymers and specialty chemicals. A variety of approaches to safe batch processing have been developed. Process Hazard Analysis (PHA) and Abnormal Event Management (AEM) are two different, but related, methods that are used by the chemical industry to improve the design and operation of a process. Hazard and operability (HAZOP) analysis is a widely used PHA method. AEM involves diagnosing the abnormal causal origins of adverse consequences, while PHA deals with reasoning about adverse consequences from abnormal causes. When an abnormal event occurs during plant operation, the operator needs to find the root cause of the abnormality. Since a design-stage safety analysis methodology, such as HAZOP analysis, overlaps with many of the issues faced by monitoring and diagnostic systems, it seems reasonable to expect some re-use of information. Henio et al. (1994) provided a HAZOP documentation tool to store safety analysis results and make the results relevant to the monitored situation available to operators. Dash and Venkatasubramanian (2000) proposed a framework that uses the offline HAZOP results of the automatic HAZOP tool HAZOPExpert in the assessment of abnormal events. In all of these works, off-line HAZOP results are used in the assessment of abnormal events. This approach has two main drawbacks. Firstly, it suffers from problems related to the management of HAZOP results and the updating of HAZOP results

Corresponding author. Tel: 1-765-494-0734. E-mail: [email protected]

when the plant is changed. Secondly, the worst-case scenario is considered during offline HAZOP analysis. During on-line application, when an abnormal event occurs, many on-line measurements are available, and these measurements can be used to adapt the hazard models for efficient abnormal event management. The approach based on offline generated HAZOP results ignores online measurements. In this work, we describe an approach to integrate multivariate statistical process monitoring and online HAZOP analysis for abnormal event management of batch processes. The framework consists of three main parts: process monitoring and fault detection, an automated online HAZOP analysis module, and a coordinator. Multiway PCA is used for batch process monitoring and fault detection. When an abnormal event is detected, a signal-to-symbol transformation technique based on variable contributions is used to transform quantitative sensor readings into qualitative states. Online HAZOP analysis is based on PHASuite, an automated HAZOP analysis tool, to identify the potential causes, adverse consequences and potential operator options for the identified abnormal event.

2. Multiway Principal Component Analysis (PCA) for Batch Process Monitoring

2.1. Multiway PCA (MPCA)
Monitoring and control are crucial tasks in the operation of a batch process. Multivariate Statistical Process Monitoring (MSPM) methods, such as multiway PCA, have become popular in recent years for monitoring batch processes. The data from a historical database of past successful batch runs generate a three-dimensional array X(I×J×K), where I is the number of batches, J is the number of variables and K is the number of sampling times in a given batch. The array X is unfolded and properly scaled to a two-dimensional matrix X(I×JK). PCA is applied to generate the scores T, loading matrix P and residuals E as X = TPᵀ + E (Nomikos and MacGregor, 1994). This model can also be used to monitor process performance online. At each sample instance during the batch operation, X_new(J×K) is constructed by using all the data collected up to the current time, and the remaining part is filled up assuming that the future deviations from the mean trajectories will remain at their current values for the rest of the batch duration. X_new is scaled and unfolded to x_new(1×JK). The scores and residuals are generated as t_new = x_new P and e_new = x_new − t_new Pᵀ. Two statistics, namely the T²- and SPE-statistic, are used for batch process monitoring. The T²-statistic is calculated from the scores, while the SPE-statistic is computed from the residuals. When an abnormal situation is detected by the MPCA model, contribution plots (Nomikos, 1996) can be used to determine the variable(s) that are no longer consistent with normal operating conditions. The contribution of process variables to the T²-statistic can be negative, which can be confusing. In this paper, we propose a new definition of the variable contribution to the T²-statistic which avoids the negativity problem. Given that

T² = tᵀS⁻¹t = ||S^(−1/2)t||² = ||S^(−1/2)Pᵀx||² = ||Σ_j S^(−1/2)P_(jK×R)ᵀ x_(jK)||²,

we can define the variable contribution to the T²-statistic as

Con_j^(T²) = ||S^(−1/2)P_(jK×R)ᵀ x_(jK)||².

Using Box's approximation (Box, 1954), its confidence limit can be estimated as

Con_(j,α)^(T²) = g_j^(T²) · χ²_α(h_j^(T²)),

where g_j^(T²) = trace(b²)/trace(b), h_j^(T²) = {trace(b)}²/trace(b²) and b = cov(X_(jK)) P_(jK×R) S⁻¹ P_(jK×R)ᵀ. X is the data set used to obtain the model. At time instance k, the contribution of variable j to the SPE-statistic can be defined as

Con_j^(SPE) = e((k−1)J + j)²,

and its confidence limit can be calculated from the normal operating data as

Con_(j,α)^(SPE) = (v_(kj) / (2 m_(kj))) · χ²_α(2 m_(kj)² / v_(kj)),

where m_(kj) and v_(kj) are the mean and variance of the contribution of variable j to SPE obtained from the data set used for the model developed at time instant k, and α is the significance level.
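The unfold-fit-monitor sequence described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the autoscaling choices, component count and data shapes are assumptions, and control limits are omitted.

```python
import numpy as np

def fit_mpca(X, n_components):
    """Fit an MPCA model on normal batch data X (I batches x J variables x K samples):
    batch-wise unfolding to (I x JK), autoscaling, then PCA via the SVD."""
    I, J, K = X.shape
    Xu = X.reshape(I, J * K)                    # batch-wise unfolding
    mu, sd = Xu.mean(axis=0), Xu.std(axis=0) + 1e-12
    Z = (Xu - mu) / sd
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T                     # loadings, (JK x R)
    T = Z @ P                                   # scores of the normal batches
    S = np.atleast_2d(np.cov(T, rowvar=False))  # score covariance, used for T^2
    return {"mu": mu, "sd": sd, "P": P, "S": S}

def monitor(model, x_new):
    """T^2, SPE and per-element squared residuals for one unfolded trajectory
    (already filled in up to the end of the batch, as described in the text)."""
    z = (x_new - model["mu"]) / model["sd"]
    t = model["P"].T @ z                        # scores of the new batch
    e = z - model["P"] @ t                      # residuals
    T2 = t @ np.linalg.solve(model["S"], t)
    SPE = e @ e
    return T2, SPE, e**2                        # e**2: SPE contributions per element

# Illustrative dimensions only: 50 batches, 8 variables, 150 samples per batch
rng = np.random.default_rng(0)
X_normal = rng.normal(size=(50, 8, 150))
model = fit_mpca(X_normal, n_components=5)
T2, SPE, spe_contrib = monitor(model, X_normal[0].reshape(-1))
```

Control limits for T² and SPE, and the contribution limits discussed above, would be estimated from the normal-batch statistics.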

2.2. Signal-to-symbol transformation
A knowledge-based system such as PHASuite takes as inputs qualitative deviation values such as 'high', 'low' and 'normal'. Signal measurements can be transformed into symbolic information based on the variable contributions and the shift direction of each process variable at the current sample. If the T²-statistic indicates the process to be out of limits at time interval k, the qualitative state of process variable j can be set as

Q_(k,j)^(T²) = high, if Con_(k,j)^(T²) > Con_(j,α)^(T²) and x_(k,j) > 0;
               low, if Con_(k,j)^(T²) > Con_(j,α)^(T²) and x_(k,j) < 0;
               normal, otherwise.

If the SPE-statistic is out of limits at time interval k, the qualitative state Q_(k,j)^(SPE) of the process variables can be set similarly. If both the T²- and the SPE-statistic are out of limits, the two can be combined as

Q_(k,j) = high, if Q_(k,j)^(T²) = high or Q_(k,j)^(SPE) = high;
          low, if Q_(k,j)^(T²) = low or Q_(k,j)^(SPE) = low;
          normal, otherwise.

Note that it is not possible for Q_(k,j)^(T²) = high while Q_(k,j)^(SPE) = low, or for Q_(k,j)^(T²) = low while Q_(k,j)^(SPE) = high, according to the above definition.

2.3. Multistage batch processes
Many industrial batch processes are operated in multiple stages, defined by the batch recipe. For example, in a batch reaction the first stage can be a heating stage and the second a holding stage. Usually the correlation structures of the batch variables differ between stages. For multistage batches it is therefore natural to use different models for the different stages in order to achieve better results. In this work, separate MPCA models are used for each stage. For online monitoring, one needs to shift from one model to the next when one stage ends and the next begins.
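The signal-to-symbol rules of Section 2.2 can be written down directly. A minimal sketch; the function names and the example contributions and limits are hypothetical:

```python
def qualitative_state(con, con_limit, x):
    """Qualitative state of one variable at the current sample, given its
    contribution to an out-of-limit statistic, the contribution's confidence
    limit, and the scaled deviation x whose sign gives the shift direction."""
    if con > con_limit:
        return "high" if x > 0 else "low"
    return "normal"

def combine(q_t2, q_spe):
    """Merge the T^2- and SPE-based states when both statistics are out of limit."""
    if "high" in (q_t2, q_spe):
        return "high"
    if "low" in (q_t2, q_spe):
        return "low"
    return "normal"

# Hypothetical (contribution, limit, deviation) triples for two measured variables
readings = {"T_reactor": (6.2, 1.9, -0.8), "P_jacket": (0.4, 1.9, 0.5)}
states = {v: qualitative_state(*r) for v, r in readings.items()}
```

By construction `combine` can never be asked to merge a 'high' with a 'low', matching the note above.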


3. Online HAZOP Analysis

3.1. PHASuite - an integrated system for automated HAZOP analysis
PHASuite is an integrated system consisting of HAZOPExpert, a model-based, object-oriented, intelligent system for automating HAZOP analysis of continuous processes; BHE, a model-based intelligent system for automating HAZOP analysis of batch processes, based on HAZOPExpert; and iTOPs, an intelligent tool for procedure synthesis. In this system, colored Petri nets are chosen to represent the HAZOP analysis as well as batch and continuous chemical processes. Operation-centered analysis and equipment-centered analysis are integrated through abstraction of the process into two levels based on a functional representation. Causal relationships between process variables are captured in signed directed graph (digraph) models for operations and equipment. Rules for local causes and consequences are associated with digraph nodes. Propagation within and between digraphs provides the potential causes and consequences for a given deviation. PHASuite has been successfully tested on a number of processes from chemical and pharmaceutical companies (Zhao, 2002).


Figure 1. Software components of the proposed online HAZOP analysis system.

3.2. Online HAZOP analysis module
Based on PHASuite, this module provides the capability to reason about the potential causes and consequences of an abnormal event identified by the process monitoring and fault detection module. For online HAZOP analysis, digraph nodes are classified as measured or unmeasured according to the sensor settings. When the process monitoring and fault detection module detects an abnormal event, the qualitative states of the measured digraph nodes are determined through the signal-to-symbol transformation. Starting from each measured process variable whose state is not 'normal', the simulation engine qualitatively propagates backward/forward from the corresponding digraph node to determine the states of the unmeasured digraph nodes for causes/consequences. The propagation is depth-first. The backward search detects the causes of


the abnormal situation, while the forward search generates potential consequences. After all measured process variables are scanned, the rules for causes and consequences are applied to each digraph node to generate the potential causes and consequences for the detected deviations. This is a conservative design choice that favors completeness at the expense of resolution. Pure qualitative reasoning can generate ambiguities and many infeasible situations. Quantitative filtering can be used to eliminate some of these infeasible situations: when an abnormal event is detected, the process sensors provide quantitative information, which is sent to the online HAZOP analysis module to set the states of the corresponding process variables and is used for filtering when the online HAZOP analysis results are generated.
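The depth-first backward/forward propagation can be illustrated on a toy signed digraph. PHASuite's internal data structures are not given in the paper, so the dictionary-of-edges representation, the ±1 encoding of 'high'/'low', the edge signs and the node names below are assumptions for illustration only:

```python
def propagate(edges, start, state, visited=None):
    """Depth-first propagation of a qualitative state (+1 high, -1 low)
    through a signed digraph; each node is assigned a state once."""
    if visited is None:
        visited = {}
    visited[start] = state
    for neighbour, sign in edges.get(start, []):
        if neighbour not in visited:
            propagate(edges, neighbour, state * sign, visited)
    return visited

# Toy digraph: jacket heat transfer drives reactor temperature, which drives conversion
forward = {"U_jacket": [("T_reactor", +1)],
           "T_reactor": [("conversion", +1)]}
backward = {}
for src, outs in forward.items():
    for dst, sign in outs:
        backward.setdefault(dst, []).append((src, sign))

# 'Low reactor temperature' detected: backward search gives candidate causes,
# forward search gives candidate consequences
causes = propagate(backward, "T_reactor", -1)
consequences = propagate(forward, "T_reactor", -1)
```

In the full system, cause/consequence rules attached to the visited nodes would then be fired, and quantitative sensor data used to filter infeasible branches.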

4. Integrated Framework for AEM Using HAZOP Analysis
The overall structure of the proposed framework is shown in Figure 1. A client-server structure is used, in which PHASuite is built as a server and the process monitoring module acts as a client; PHASuite can therefore be used offline or online depending on the situation. The complete system has been developed in C++ running under Windows, using object-oriented programming techniques.

5. Illustrative Example
This example involves a two-stage jacketed exothermic batch chemical reactor based on a model published by Luyben (1990). The reaction system involves two consecutive first-order reactions, A → B → C. The desired product is component B. The batch duration is 300 min, and the safe startup time is 100 min. Measurements of eight variables are taken every 2 minutes. By introducing typical variations in initial conditions and reactor conditions, 50 normal batches, which define the normal operating condition data, are simulated.

5.1. Results
According to the batch recipe, this process is operated in different stages: the first stage is a heating stage and the second is a holding stage. Usually the correlation structure of the batch variables differs between stages. Figure 2 gives the variance captured for the whole process by 5 principal components. The two stages are clearly visible, and we can define the first 100 minutes as the heating stage and the next 200 minutes as the holding stage. Two multiway PCA models are built, one for each stage.

Case 1: Fouling of the reactor walls
This fault is introduced from the beginning of the batch. The T²-statistic, which is not shown here, cannot detect the fault. Figure 3 shows the SPE-statistic with its 95% and 99% control limits for the heating stage. The SPE-statistic identifies the fault at 12 minutes; the variable contribution plot for SPE at that time is shown in Figure 4.


Variable 3, the reactor temperature, shows the major contribution to the abnormal event. Its qualitative state is set to 'low' by the signal-to-symbol transformation formula, and the qualitative states of all other measured variables are 'normal'. Online HAZOP analysis is then performed; the results are given in Table 1.


Figure 2. Cumulative percent of explained variance.

Figure 3. SPE-statistic for heating stage.

Figure 4. Variable contributions to the SPE-statistic at sample 6.

Table 1. Online HAZOP analysis results.

Deviation: Low temperature
Causes: 1) agitator operated at low speed; 2) fouling-induced low heat transfer coefficient; 3) cold weather, external heat sink, or lagging loss
Consequences: 1) incomplete reaction

6. Conclusions
This paper presents a framework integrating multivariate statistical process monitoring and PHASuite, an automated HAZOP analysis tool, for abnormal event management of batch processes. Multiway PCA is used for batch process monitoring and fault detection. After an abnormal event is detected, a signal-to-symbol transformation technique based on contribution plots translates the signal measurements into symbolic information, which is input to PHASuite. PHASuite is then used to identify the potential causes, adverse consequences and potential operator options for the abnormal event.

7. References
Box, G.E.P., 1954, The Annals of Mathematical Statistics, 25, 290-302.
Heino, P., Karvonen, I., Pettersen, T., Wennersten, R. and Andersen, T., 1994, Reliability Engineering & System Safety, 44(3), 335-343.
Dash, S. and Venkatasubramanian, V., 2000, Proc. ESCAPE-10, Florence, Italy, 775-780.
Luyben, W.L., 1990, Process Modeling, Simulation and Control for Chemical Engineers, McGraw-Hill, New York.
Nomikos, P. and MacGregor, J.F., 1994, AIChE Journal, 40(8), 1361-1375.
Nomikos, P., 1996, ISA Transactions, 35, 259-266.
Zhao, C., 2002, Knowledge Engineering Framework for Automated HAZOP Analysis, PhD Thesis, Purdue University.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Analysis of Combustion Processes Using Computational Fluid Dynamics - A Tool and Its Application

Christian Mueller, Anders Brink and Mikko Hupa
Åbo Akademi Process Chemistry Group, Åbo Akademi University, 20500 Åbo/Turku, Finland

Abstract
Numerical simulation by means of Computational Fluid Dynamics (CFD) has developed over recent years into a valuable design tool in engineering science. Initially applied mainly to fluid dynamic questions, it is nowadays capable of predicting in detail the conditions in various complex technical processes. State-of-the-art commercial CFD codes are almost always set up as multi-purpose tools suitable for a wide variety of applications, from the automotive industry to chemical processes and power generation. However, since they are not highly specialized in every possible field of application, CFD codes should be seen as a collection of basic models that can be compiled and extended into individual tools for special investigations rather than as readily applicable tools. In power generation, CFD is used extensively for the simulation of combustion processes in systems such as utility boilers, industrial furnaces and gas turbines. The purpose of these simulations is to analyze the processes, to optimize them with regard to efficiency and safety, and to develop novel techniques. Since combustion processes have long been a target of CFD software, the standard models available in the codes are of high quality as long as conventional combustion systems are modelled. However, as soon as the characteristics of novel combustion systems or fuels, or detailed effects within a certain process, are of interest, the limits of these standard models are reached easily. At this point, extension of the standard models with process-specific knowledge is required. This paper presents some of the opportunities CFD offers when applied to the analysis of different combustion systems. The practical examples presented are ash deposition predictions on heat exchanger surfaces and walls of a bubbling fluidised bed furnace, and detailed nitrogen oxide emission predictions for the same furnace type. Furthermore, the extension of a standard model using process-specific data is presented for the fuel conversion process in a black liquor recovery furnace.

1. Introduction
Computational Fluid Dynamics (CFD) has grown over the years from a plain mathematical description of simple mass and heat transfer problems into a powerful simulation tool applicable in almost any technical branch. It is nowadays commonly accepted as a research tool, and its potential for industrial design and development work has been discovered. Of the various opportunities this tool offers, two are


outstanding: firstly, the possibility to predict physical and chemical phenomena in technical systems that cannot easily be evaluated with experimental techniques, such as the processes in industrial furnaces; and secondly, the cost efficiency and speed with which insight into these processes is obtained compared to experimental procedures. The latter becomes especially obvious when parametric studies on different conditions towards the optimum solution are performed.

1.1. Combustion system analysis using computational fluid dynamics
Detailed analysis of combustion processes, especially large-scale industrial ones, is a complicated matter due to the high temperatures of up to 2000 K, which produce an extremely unfriendly environment for experimental investigations. For such processes, numerical simulation by means of CFD is an excellent alternative investigation method. As long as the combustion of standard fuels in pulverised-fuel-fired or fluidised bed units is concerned, current multi-purpose CFD software gives very good insight into the process. The turbulent flow field, the conversion of particles and gaseous species, and the heat transfer are well described by standard models and allow an accurate description of the phenomena, e.g. in a process furnace or the combustion chamber of a power boiler (Knaus et al., 2001). However, next to these general phenomena, which are most relevant for the overall design of the combustion process, more specific aspects become interesting when processes need to be optimised for certain operational conditions. Here the focus may be, for example, on low emission levels of certain species, which requires a substantial improvement of the chemical approaches currently available in most CFD codes. On the other hand, the purpose of the investigation may be an increase in boiler availability, taking into consideration alternative fuels, design characteristics and the resulting operational effects. An even bigger challenge is the adjustment of existing CFD codes to novel combustion processes and fuels that involve new physical and chemical phenomena. For those cases, established modelling approaches need to be significantly extended.

2. Computational Fluid Dynamics in Combustion Processes: Examples of Problem-Specific Modelling Approaches
In the following, three examples are presented of the application of CFD to the analysis of advanced combustion systems. Each example covers a specific technical problem and shows how standard CFD models need to be adjusted to address individual questions. The first example deals with increasing boiler availability by reducing ash deposition on furnace walls and superheater surfaces. The second addresses the reduction of nitrogen oxide emissions from a bubbling fluidised bed combustor, and the last example presents a novel model for black liquor droplet combustion.

2.1. Ash deposition
A recent trend in boiler operation is the use of alternative fuels such as biomasses and biomass mixtures instead of fossil fuels. Biomass is known to lead to ashes with a wide melting range starting at low temperatures, and ash-related operational problems therefore rank very high on the list of reasons for significant reductions in boiler availability. Ash-related problems strongly depend on fuel-specific aspects such


as the mineral matter distribution in the fuel, on aspects specific to the combustion technique used, and on design aspects unique to the combustion chamber of any operating unit. The overall goal of biomass-combustion-related research is therefore the prediction of potential operational problems originating from the fuel and oxidiser entering the combustion chamber, and of those problems originating from the design of individual furnaces. Hence, an advanced ash behaviour prediction tool for biomass combustion in fluidised bed combustors has been developed, combining computational fluid dynamic (CFD) calculations with chemical fractionation analysis and multi-component, multi-phase equilibrium calculations (Mueller et al., 2002a). From the advanced fuel analysis, the ash-forming elements of the fuel are identified, their melting behaviour is calculated under furnace conditions, and a stickiness criterion as a function of ash particle temperature is defined for each individual fuel. In the CFD calculations, this stickiness criterion is applied by checking the particle temperature at impaction on a wall or superheater surface. If the particle temperature is above the stickiness criterion, the ash particle sticks to the wall and the location is recorded as a possible deposition location. On the other hand, if the particle temperature is below the criterion, the particle rebounds into the furnace and continues its flight. Figure 1 shows a deposition map for the back wall of a bubbling fluidised bed freeboard. The coloured dots show the locations of particle hits at the specified temperature on the wall and clearly indicate the areas of possible deposition in this furnace. The picture on the left of the figure shows the deposit situation in the real furnace and serves as validation of the applicability of the tool.
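The impact check described above reduces to a temperature comparison at each recorded wall hit. A minimal sketch; the stickiness temperature and the impact data below are invented for illustration, not values from the paper:

```python
def sticks(particle_temp_K, sticky_temp_K):
    """Fuel-specific stickiness criterion: a particle arriving at the wall
    above its stickiness temperature deposits; otherwise it rebounds."""
    return particle_temp_K >= sticky_temp_K

# (temperature at impact [K], wall coordinates) -- illustrative values only
impacts = [(1120.0, (0.2, 3.1)), (980.0, (0.4, 2.7)), (1260.0, (0.1, 4.0))]
STICKY_T = 1050.0  # hypothetical fuel-specific threshold from the fuel analysis
deposit_map = [point for T, point in impacts if sticks(T, STICKY_T)]
```

Collecting the retained points over all tracked particles yields a deposition map of the kind shown in Figure 1; rebounding particles would be returned to the particle tracker.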

[Figure 1 legend: particle temperature at impact - 1050-1150 K, 1150-1250 K, 1250-1350 K; grid and air inlets marked]

Figure 1. Visual validation of ash deposit prediction in the freeboard of a bubbling fluidised bed furnace.

2.2. Nitrogen oxide (NOx) emissions
Nitrogen oxides are mainly formed through three paths. In the fuel-N path, nitrogen-containing species in the fuel can form NO or N2. The two other paths involve the fixation of N2 from the air. One of these is the well-known thermal-NO path, in which radicals react at high temperatures with N2 to form NO. The other is the so-called prompt-NOx path, in which hydrocarbon radicals react with N2. For most of these paths, global reaction models are available (Mitchell and Tarbell, 1982; De Soete, 1974; Bowman, 1979) and can also be found in most current CFD codes. If a certain path is


dominating the formation of NOx, it might be possible to use these standard models for quantitative NOx predictions. In general, however, the only description detailed enough to guarantee high-quality predictions is one based on a detailed reaction mechanism. For a simple hydrocarbon, such a mechanism typically consists of more than 50 species and several hundred reversible reactions. Unfortunately, there are only a few turbulence-chemistry interaction models that can account for such a mechanism. One such model is the Eddy Dissipation Concept (EDC) by Magnussen (1989). Here, results are presented for NOx emissions from a peat-fired bubbling fluidised bed furnace, obtained using the EDC together with a skeletal mechanism, i.e., a mechanism in which only the most relevant reactions of a detailed mechanism are retained. Before the simulations can be started, however, a number of processes present in the full boiler need to be described or simplified for the model. For example, calculation of the dense bubbling bed is at present not possible, or too time consuming; hence, the computational domain focuses on the freeboard region and starts above the bed surface. Another difficulty is the accurate modelling of the fuel supply. In the present case the fuel is peat. It is assumed that 90% of the peat is pyrolysed in flight before arriving at the bed (Lundmark, 2002); the remaining 10% of the fuel is assumed to be fully oxidized when entering the freeboard from the bed surface. At present, there are no detailed models available to determine the composition of the pyrolysis gas with respect to nitrogen-containing species. The values have to be assigned based on experience and, naturally, on the nitrogen content of the fuel. The same uncertainty exists for the determination of the composition of the main pyrolysis gas. In this case, the simplification has been made that the pyrolysis gas consists of CH4 and H2O only, while approximately retaining the correct heating value as well as the flue gas composition.


Figure 2. a) Left: outline of the grid used in the CFD simulation; b) right: NO mass fraction.


Figure 2a shows the outline of the grid used in the CFD calculation. It can be seen that there are a number of different inlets: six fuel inlets, four start-up burners, six secondary air openings, four coal burners and six tertiary air openings. Some of these openings are divided into an inner non-swirling part and an outer swirling part. In the present case, data for the air supply can be taken directly from the operating system. Figure 2b shows the calculated NO mass fraction. According to the measurements for the present case, the NO concentration is 160 mg/m³, which corresponds to a mass fraction of approximately 1.3·10⁻⁴. In the calculation, the predicted NO levels are almost twice as high. However, taking into consideration the uncertainties in the composition of the pyrolysis gas as well as of the primary gas coming from the bed, the agreement is satisfactory. Earlier attempts to achieve this agreement in a similar case with standard models have failed (Brink et al., 2001).
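The quoted equivalence between 160 mg/m³ and a mass fraction of roughly 1.3·10⁻⁴ can be checked by dividing by the flue gas density; the density used below is an assumed round value, since the paper states neither the density nor the reference conditions:

```python
no_concentration = 160e-6   # measured NO, kg per m^3 of flue gas
rho_flue_gas = 1.25         # assumed flue gas density, kg/m^3 (illustrative)
mass_fraction = no_concentration / rho_flue_gas   # ~1.3e-4, as quoted in the text
```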

2.3. Black liquor combustion
The black liquor combustion process is unique from the process as well as from the fuel point of view. It starts with the generation of droplets as the liquor is sprayed into the furnace, continues with the thermal conversion of the droplets and the burnout of the char carbon in flight and on a char bed at the bottom of the furnace, and ends with the recovery of the chemical compounds contained in the liquor. This sequence makes it obvious that the quality of an overall simulation of the process depends strongly on an accurate droplet combustion model. However, the description of the droplet conversion is a challenging task due to the special characteristics of the fuel: its high water content, ranging up to 40%, and the almost even split of the solid part of the fuel into combustible species and low-melting inorganic compounds originating from the pulping process. In addition to this unique fuel composition, the burning behaviour of black liquor is strongly liquor dependent and is characterised by significant liquor-specific swelling of the droplet during devolatilisation.

Figure 3. Experimental setup with muffle furnace, quartz glass reactor, video system and online gas analysers. The plots show the change in diameter during conversion of a 2.47 mm droplet at 900 °C in 3% oxygen: comparison of experimental data (left) and modelling results (right) (Mueller et al., 2002b).

Starting from earlier work by Frederick and Hupa (1993), a new simplified black liquor droplet model has been developed to replace the standard droplet model in CFD simulations of black liquor recovery furnaces. Liquor-specific input data obtained from single-droplet experiments are incorporated into the new droplet model. The model is implemented in a commercial CFD code, and simulations are performed in an environment that represents well the experimental setup of the single-droplet furnace (Figure 3). In this way, model expressions for droplet swelling during devolatilisation, and carbon release curves during devolatilisation and char carbon conversion, can be validated. After this validation procedure the model can be used for full-scale recovery furnace simulations.

3. Conclusions
Multi-purpose CFD codes are nowadays a frequently used and well-accepted tool in academia and industry. The available standard codes must already be regarded as powerful tools that can be successfully applied in various technical disciplines, including combustion processes. In this field, the real value of CFD calculations at present lies in predicting the trends that occur when operational conditions are changed. This holds for the ash deposition predictions presented above as well as for the NOx emission predictions, and is supported in both cases by experimental data. In the future, however, the real power of CFD codes lies in the possibility to extend and adjust them with process-specific data into tailor-made tools for addressing individual technical problems and specific questions. The successfully developed and validated simplified black liquor droplet combustion model presented in this paper supports this assessment.

4. References
Brink, A., Bostrom, S., Kilpinen, P. and Hupa, M., 2001, The IFRF Combustion Journal, ISSN 1562-479X, Article Number 200107.
Bowman, C.T., 1979, Prog. Energ. Combust. Sci., Vol. 1, p. 35.
De Soete, G.G., 1974, 15th Symp. (Int.) on Combustion, p. 1093.
Frederick, W.J. and Hupa, M., 1993, Report 93-3, Combustion and Materials Chemistry Team, Åbo Akademi University, Turku, Finland.
Knaus, H., Schnell, U. and Hein, K.R.G., 2001, Prog. in Comput. Fluid Dynamics, Vol. 1, No. 4, pp. 194-207.
Lundmark, D., 2002, Diploma Thesis, Åbo Akademi University, Turku, Finland.
Magnussen, B.F., 1989, 18th Int. Congress on Combustion Engines, Tianjin, China.
Mitchell, J.W. and Tarbell, J.M., 1982, AIChE J., 28(2), p. 302.
Mueller, C., Skrifvars, B.-J., Backman, R. and Hupa, M., 2002a, Progress in Computational Fluid Dynamics, to appear.
Mueller, C., Eklund, K., Forssen, M. and Hupa, M., 2002b, Finnish-Swedish Flame Days, Vaasa, Finland.

5. Acknowledgement
This work has been supported by the Academy of Finland as a part of the Åbo Akademi Process Chemistry Group, a National Centre of Excellence.



Modelling of the Free Radical Polymerization of Styrene with Benzoyl Peroxide as Initiator

K. Novakovic, E.B. Martin and A.J. Morris
Centre for Process Analytics and Control Technology, University of Newcastle, Newcastle upon Tyne, NE1 7RU, England
[email protected]; [email protected]; katarina.novakovic@ncl.ac.uk

Abstract This paper demonstrates, through the use of a polymerization example, how mechanistic models can be built and used prior to carrying out an experimental study. Using knowledge available from the literature, it is shown that parameter ranges can be calculated within which comparable experimental results can be expected. The system chosen was the free radical polymerization of styrene with benzoyl peroxide as initiator. This polymer-initiator system was selected since a model was not already available in the literature. The model was developed in the programming language gPROMS and was validated using data obtained from a laboratory batch polymerization.

1. Introduction
The traditional approach to the modelling of any chemical or biochemical process, such as polymerization, is to first undertake experimental work and then to estimate the model parameters from the data (e.g. Villermaux and Blavier, 1984; Lewin, 1996; Ghosh, Gupta et al., 1998; Krajnc, Poljansek et al., 2001). This paper proposes an alternative approach. It demonstrates how useful information can be gained from building a mechanistic model that can be used to influence the experimental study. Once the initial conditions and/or the ranges in which the conditions are expected to lie for the experimental study have been identified, theoretical modelling can be performed. By using knowledge available from the literature, in this case for a polymerization process, it is shown that the parameter ranges can be predicted within which comparable results can be expected. In this way, a better understanding of the relationship between the operating conditions of reactors and the quality of the polymer produced can be established prior to carrying out laboratory experiments. In this article, the term polymer quality is defined as the set of structural characteristics of macromolecules, such as the number average and weight average molecular weights and the polydispersity (the ratio of the weight average to the number average molecular weight). In this study the overall kinetics of chain polymerization (Odian, 1991) were used, with the steady-state assumption applied to eliminate the total concentration of all free radicals. In addition, the overall rate of monomer growth in the polymerization mixture, and the number average and weight average molecular weights, were calculated using the first and second order moments for dead polymers (Villermaux and Blavier, 1984).
Modifications were made to account for the assumptions relating to possible transfer reactions and to deal with the termination mechanism, for which it was not possible to determine whether it occurs through disproportionation or coupling. The modelling of free radical

816

polymerization of styrene with benzoyl peroxide as the initiator was selected as the demonstrator process, which was then validated using laboratory data. The prediction of conversion and the ranges in which the values of the number average and weight average molecular masses and of the polydispersity are expected to lie are presented. In addition, a comparison of the model results with the experimental data for the chosen polymer-initiator system is described. Finally, the influence of benzoyl peroxide as the initiator in the polymerization of styrene can be compared with the influence of other initiators, such as azo-bis-isobutyronitrile (AIBN) and bis-4-t-butylcyclohexyl peroxydicarbonate (Perkadox 16), reported by other researchers (Villermaux and Blavier, 1984). The nomenclature for all relationships in the following three sections is given in the 'Nomenclature' chapter.

2. Modelling Isothermal Batch Polymerization

The polymerization of unsaturated monomer, in this case styrene, by chain polymerization is first discussed. The mechanism consisting of initiation, linear propagation and termination by combination and/or disproportion, as presented in many textbooks (Odian 1991), is adopted in this study. Thus based on the defined mechanism, the rate of decomposition of initiator can be presented as: zyxwvutsrqponmlkjihgfedcbaZYXWVUTSRQ

-dA/dt = kd·A    (1)

The rate of monomer disappearance, which is synonymous with the rate of polymerization, is given as the sum of the rates of initiation and propagation. Since the number of monomer molecules reacting in the initiation step is much smaller than the number involved in the propagation step, the initiation step can be neglected and the rate of monomer disappearance set equal to the rate of propagation. In addition, since the rate constants for all the propagation steps are the same (Odian 1991), the rate of propagation can be defined as:

Rp = kp·M*·M    (2)

Equation (2) is not usable as it stands because it contains the total concentration of all free radicals, M*, a quantity that is difficult to measure. To eliminate M* from the analysis, the steady-state assumption is made, i.e. the concentration of radicals increases initially but almost instantaneously attains a constant, steady-state value. This means that the rates of initiation and termination of the radicals are equal. Accordingly, the quasi-steady concentration of free radicals is given by:

C = (2·f·kd·A / kt)^(1/2)    (3)

The kinetic chain length can then be calculated according to the following equation:

L = Rp / Ri = kp·M·C / (2·f·kd·A)    (4)

Processes in which dead macromolecules are produced are termination by coupling and/or disproportion, and transfer to another molecule, i.e. monomer. In this case it is assumed that there is no transfer to other molecules. Since it is not known which termination mechanism (coupling or disproportion) dominates, and since only one termination rate constant can be calculated, two extreme cases are considered: the calculated termination rate constant is assumed to be equal either to the rate constant for termination by coupling or to that for termination by disproportion, as presented below:

dP/dt = ktc·C²    and    dP/dt = 2·ktd·C²    (5)

The extent of reaction (conversion) is calculated according to:

X = (M₀ − M) / M₀    (6)

with the overall rate of growth of polymerized monomer in the polymerization mixture (Villermaux and Blavier 1984) being given by:

dμ₁/dt = kp·M·C    (7)

Assuming no transfer to monomer is present, and because only one termination rate constant can be provided, two cases are considered:

- Termination occurs only by coupling: dP/dt = ktc·C²    (8)
- Termination occurs only by disproportion: dP/dt = 2·ktd·C²    (9)

The number average molecular weight can be represented as:

Mn = m·μ₁ / P    (10)

and the weight average molecular weight is given by:

Mw = m·μ₂ / μ₁    (11)

Polydispersity is then calculated as:

PD = Mw / Mn    (12)
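Equations (10)-(12) reduce to simple ratios of the distribution moments. A small sketch, with hypothetical moment values rather than results from this study:

```python
def molecular_averages(m, P, mu1, mu2):
    """Eqs. (10)-(12): number and weight average molecular weights and
    polydispersity from the macromolecule concentration P and the first
    and second moments (mu1, mu2) of the dead-polymer distribution."""
    Mn = m * mu1 / P        # Eq. (10)
    Mw = m * mu2 / mu1      # Eq. (11)
    return Mn, Mw, Mw / Mn  # Eq. (12): PD = Mw/Mn

# Hypothetical moments for styrene (monomer molecular weight m = 104 g/mol):
Mn, Mw, PD = molecular_averages(104.0, P=1.0e-3, mu1=0.2, mu2=80.0)
```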


The expressions for the kinetic rate constants and the value of the initiator efficiency, f, appropriate for styrene polymerization with benzoyl peroxide as initiator have been taken from the literature (Biesenberger and Sebastian 1983; Berger and Meyerhoff 1989; Buback 1995; Moad and Solomon 1995):

kd = 6.378·10¹⁴ · exp(−29700/RT),  f = 0.8    (13)

kp = 10^7.630 · exp(−7740/RT)    (14)

kt = 1.255·10⁹ · exp(−1675/RT)    (15)

Values for kd, kp and kt were calculated at the temperature set in the batch reactor.
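Putting the rate expressions and the Arrhenius constants together, conversion for the two termination extremes can be simulated with a simple explicit-Euler loop. This is only a sketch: the kd pre-exponential exponent is partly illegible in the source and is assumed to be 10¹⁴ here, and the step size and simulation time are illustrative:

```python
import math

R_GAS = 1.986      # cal/(mol K), as in the Nomenclature
T = 363.15         # K, i.e. the 90 degC batch temperature

f  = 0.8
kd = 6.378e14 * math.exp(-29700.0 / (R_GAS * T))    # Eq. (13), exponent assumed
kp = 10.0**7.630 * math.exp(-7740.0 / (R_GAS * T))  # Eq. (14)
kt = 1.255e9 * math.exp(-1675.0 / (R_GAS * T))      # Eq. (15)

def simulate(M0, A0, t_end, dt=0.5, coupling=True):
    """Explicit-Euler integration of initiator decay (Eq. 1), propagation with
    quasi-steady radicals (Eqs. 2-3) and dead-polymer production (Eq. 5) for
    one of the two termination extremes; returns conversion X (Eq. 6) and P."""
    M, A, P, t = M0, A0, 0.0, 0.0
    while t < t_end:
        C = math.sqrt(2.0 * f * kd * A / kt)               # Eq. (3)
        M += -kp * M * C * dt                              # Eq. (2)
        A += -kd * A * dt                                  # Eq. (1)
        P += (1.0 if coupling else 2.0) * kt * C * C * dt  # Eq. (5)
        t += dt
    return (M0 - M) / M0, P                                # Eq. (6)

X_coup, _ = simulate(7.28, 0.051, t_end=3600.0, coupling=True)
```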

3. Comparison of Model and Experimental Results

The proposed model was then validated with data obtained from a laboratory batch polymerization reactor (Boodhoo 1999). The polymerization system consisted of styrene as monomer at an initial concentration of 7.28 mol/dm³, benzoyl peroxide as initiator at an initial concentration of 5.1·10⁻² mol/dm³, and toluene as solvent at an initial concentration of 1.567 mol/dm³. The batch temperature was set at 90°C and the agitation speed was 500 rpm. The model results for the cases of termination only by coupling and termination only by disproportion are compared with the experimental results in Figs. 1, 2, 3 and 4.

Fig. 1. Conversion (model and experimental results) as a function of polymerization time. (Legend for Figs. 1-4: experiment; model with kt = ktc; model with kt = ktd.)

Fig. 2. Number average molecular weight (model and experimental results) as a function of polymerization time.

Fig. 3. Weight average molecular weight (model and experimental results) as a function of polymerization time.

Fig. 4. Polydispersity (model and experimental results) as a function of polymerization time.

4. Discussion and Conclusions

As can be seen from Fig. 1, conversion is well predicted by the proposed model; the model results agree with the experimental results within a confidence interval of ±5%. Fig. 2 presents the results for number average molecular weight as a function of time. The experimental data lie, for the whole of the polymerization process, between the two extreme cases, termination only by coupling and termination only by disproportion. At the beginning of the polymerization the experimental data lie exactly between the two extreme cases, but after 60 minutes they drift toward termination only by coupling, reaching this extreme mechanism 140 minutes after the start of the polymerization. Fig. 3 presents the weight average molecular weight as a function of polymerization time. For the first 50 minutes of the process the main termination mechanism is disproportion. Between 50 and 120 minutes the experimental results again lie between the two modelled extremes, and as the reaction approaches its last stage, after 120 minutes, the main mechanism becomes termination by coupling. The experimental results for polydispersity agree best with coupling as the only termination mechanism, as can be seen from Fig. 4.

Comparing the influence of benzoyl peroxide (BPO) as the initiator in the polymerization of styrene with the influence of other initiators, such as azo-bis-isobutyronitrile (AIBN) and bis-4-t-butylcyclohexyl peroxydicarbonate (Perkadox 16) reported by other researchers (Villermaux and Blavier 1984), it can be concluded that the results achieved with BPO as initiator show the same trends as when AIBN is used. The pre-experimental modelling approach proposed can be used to provide initial predictions of conversion and to help determine the interval in which the molecular weights will occur. This could be very useful in future experiments, since the model is able to indicate what to expect under given experimental conditions. However, to predict molecular weights more accurately it would be necessary to determine both termination rate constants.


5. Nomenclature

A - Initiator concentration, mol/dm³
C - Quasi-steady concentration of free radicals, mol/dm³
f - Initiator efficiency
kd - Initiator decomposition rate constant, s⁻¹
kp - Propagation rate constant, dm³·mol⁻¹·s⁻¹
kt - Termination rate constant, dm³·mol⁻¹·s⁻¹
ktc - Termination by combination rate constant, dm³·mol⁻¹·s⁻¹
ktd - Termination by disproportion rate constant, dm³·mol⁻¹·s⁻¹
L - Kinetic chain length
M - Monomer concentration, mol/dm³
M* - Concentration of free radicals, mol/dm³
m - Monomer molecular weight, g/mol
M₀ - Monomer concentration at the beginning of polymerization, mol/dm³
Mn - Number average molecular weight, g/mol
Mw - Weight average molecular weight, g/mol
μ₁ - First-order moment of the dead polymer distribution
μ₂ - Second-order moment of the dead polymer distribution
P - Macromolecule concentration, mol/dm³
PD - Polydispersity
R - Universal gas constant, 1.986 cal/(mol·K)
Ri - Rate of initiation, mol/(dm³·s)
Rp - Rate of propagation, mol/(dm³·s)
T - Temperature in reactor, K
X - Extent of reaction

6. References

Berger, K.C. and Meyerhoff, G., 1989. Propagation and Termination Constants in Free-radical Polymerization. In: Polymer Handbook. New York, Wiley-Interscience: II/67-II/79.
Biesenberger, J.A. and Sebastian, D.H., 1983. Principles of Polymerization Engineering. New York, John Wiley.
Boodhoo, K.V.K., 1999. Spinning Disc Reactor for Polymerization of Styrene. Chemical and Process Engineering. Newcastle upon Tyne, University of Newcastle.
Buback, M. et al., 1995. Critically Evaluated Rate Coefficients for Free-radical Polymerization. I. Propagation Rate Coefficient for Styrene. Macromol. Chem. Phys. 196: 3267-3280.
Ghosh, P., Gupta, K.S., Saraf, D.N., 1998. An Experimental Study on Bulk and Solution Polymerization of Methyl Methacrylate with Responses to Step Changes in Temperature. Chemical Engineering Journal 70: 25-35.
Krajnc, M., Poljansek, J., Golob, J., 2001. Kinetic Modeling of Methyl Methacrylate Free-Radical Polymerization Initiated by Tetraphenyl Biphosphine. Polymer 42: 4153-4162.
Lewin, D.R., 1996. Modelling and Control of an Industrial PVC Suspension Polymerization Reactor. Computers Chem. Engng 20: S865-S870.
Moad, G. and Solomon, D.H., 1995. The Chemistry of Free-Radical Polymerization. Oxford, Elsevier Science.
Odian, G.G., 1991. Principles of Polymerization. New York, John Wiley & Sons.
Villermaux, J. and Blavier, L., 1984. Free Radical Polymerization Engineering - II. Modeling of Homogeneous Polymerization of Styrene in a Batch Reactor, Influence of Initiator. Chemical Engineering Science 39(1): 101-110.

7. Acknowledgements KN would like to thank the UK ORS Scheme and CPACT for providing funding for her PhD studies.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Combining First Principles Modelling and Artificial Neural Networks: a General Framework

R. Oliveira
Department of Chemistry - Centre for Fine Chemistry and Biotechnology, Faculty of Sciences and Technology, Universidade Nova de Lisboa, P-2829-516 Caparica, Portugal, Tel: +351-21-2948303, Fax: +351-21-2948385, E-mail: [email protected]

Abstract

In this work a general hybrid model structure for stirred-tank bioreactors is proposed. The general structure combines first principles modelling with artificial neural networks: the bioreactor system is described by a set of mass balance equations, and the cell population system is represented by an adjustable mixture of neural network and mechanistic representations. The identification of unknown parameters from measurements is studied in detail. The sensitivity equations are derived for the general model, enabling the analytical calculation of the Jacobian matrix. The identification procedure consists of a least squares optimisation that employs a large-scale Sequential Quadratic Programming (SQP) algorithm. The methodology is outlined with simulation studies.

1. Introduction

Hybrid modelling has been recognised as a valuable methodology for increasing the benefit/cost ratio of bioprocess modelling (Schubert et al. (1994), Preusting et al. (1996)). The main design concept is that the a priori mechanistic knowledge is not viewed as the only relevant source of knowledge; other sources, like heuristics or information hidden in databases, are considered valuable complementary (not alternative) resources for model development. The application of hybrid modelling to chemical and biochemical reactors has been exemplified in several works. The most widely adopted hybrid structure is based on the mass balance equations, as in the traditional first principles approach, but the reaction kinetics are modelled with artificial neural networks (ANNs) (Psichogios and Ungar (1992), Schubert et al. (1994), Montague and Morris (1994), Feyo de Azevedo et al. (1997), Chen et al. (2000)). Unfortunately, even for such simple hybrid structures, there are many theoretical issues, such as identifiability and stability, that are not well characterised; in fact, most of the reported studies are eminently problem-oriented. In the current work, some theoretical aspects related to stability and identifiability in hybrid modelling are studied. The problem is tackled by formulating a general dynamic hybrid structure valid for a wide class of problems. The resulting dynamical system is then studied from a systems engineering perspective. The methodology is outlined for the

secreted protein production process described in Park and Ramirez (1988) with simulation studies.

2. Theoretical developments

2.1. General dynamic hybrid model

As discussed previously, a principal design issue in hybrid modelling is that it should allow several different sources of knowledge to be incorporated. The first step in the present study is to define a system structure that is flexible enough to incorporate different forms of knowledge, yet simple enough that it can be characterised in terms of identifiability, stability and other important properties. With this main concern, the following system structure is proposed:

dc/dt = K·H(c)·ρ − D·c + u    (1a)
ρ = N(c, W)    (1b)

where c is a vector of n concentrations, K an n×m yield coefficient matrix, H(c) an m×r matrix of known kinetic expressions, ρ(c) a vector of r unknown kinetic functions, D the dilution rate, u a vector of input volumetric rates, N(·) a network function and W a vector of nw parameters. The main idea is to insert all the a priori first-principles knowledge in Eq. (1a), whereas all other sources of knowledge are inserted in Eq. (1b). Eq. (1a) is the general dynamical model proposed by Bastin and Dochain (1990). Eq. (1b) states that the term ρ is computed by a network function. This network function refers to connectionist systems in general; not only the usual neural networks but also fuzzy or statistical networks may be considered. With this mathematical formalism, first priority is given to mechanistic knowledge, while other types of knowledge may also be activated in the model through Eq. (1b). Three important properties of system (1) should be pointed out: i) the representation of the kinetic rates in Eq. (1) is rather generic, both for chemical and for biological reaction catalysis (e.g. Bastin and Dochain (1990), Dochain et al. (1991)); ii) the framework introduced by this expression enables other modelling techniques to be used for establishing ρ: instead of a single neural network, m neural nets, a fuzzy system or several combinations thereof are possible; iii) provided that all functions in N(c,W) are continuous, differentiable and bounded, the Bounded Input Bounded Output (BIBO) stability results presented in Bastin and Dochain (1990) also apply to system (1), and, very importantly, parameter sensitivities may be computed.

2.2. Identification

Equation (1b) establishes a parametric (or semi-parametric) non-linear relationship between ρ and c in which a set of nw parameters W is involved. These parameters must be identified from measurements.
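As a concrete, dependency-free illustration of the structure (1a)-(1b), the sketch below wires a one-input network rate into a two-component mass balance. The yield coefficient, the network shape and the parameter values are assumptions made for this example, not taken from the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def net_rho(S, W):
    # Eq. (1b): a minimal one-input, one-output network standing in for N(c, W)
    w1, b1, w2, b2 = W
    return w2 * sigmoid(w1 * S + b1) + b2

def hybrid_rhs(X, S, W, D, S_in, y=2.0):
    # Eq. (1a) for two components: biomass X grows at the network-supplied
    # rate and consumes substrate S with an assumed yield coefficient y.
    mu = net_rho(S, W)
    dX = mu * X - D * X
    dS = -y * mu * X + D * (S_in - S)
    return dX, dS

W = (1.0, 0.0, 0.5, 0.0)   # illustrative parameter vector
dX, dS = hybrid_rhs(X=1.0, S=1.0, W=W, D=0.1, S_in=10.0)
```

Identifying W then means tuning the network so that simulated concentrations match measured ones, which is exactly the problem treated next.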
Irrespective of the type of relationship defined in Eq. (1b), the goal of the identification procedure is to obtain the parameter vector W that minimises the deviation between the model and real process outputs. The real process reaction kinetics cannot be measured directly; only the concentrations can be measured, using adequate measuring devices. By definition, the reaction rates can be calculated using Eq. (1a). In practice, only a partition of r equations is required:


ρ = [Ka·H(c)]⁻¹ · (dca/dt + D·ca − ua)    (2)

where index a denotes a partition of r state variables of Eq. (1a). From Eq. (2) an important condition for the identifiability of model (1) arises: model (1) is identifiable if and only if the r×r matrix Ka·H(c) is non-singular. Two possible strategies may be adopted. Method I is a two-step procedure: in the first step the unknown kinetics are estimated, for instance using Eq. (2); in the second step an optimisation algorithm minimises the errors between the estimated and modelled reaction rates. The application of Method I is exemplified in Chen et al. (2000). The main drawback of this methodology is that the concentrations are often measured off-line with high sampling times, yielding inaccurate reaction rate estimates. Method II is more common in the context of hybrid modelling and consists in minimising the deviation between the measured and estimated concentrations:

W* = argmin_W { E = (1/2)·Σi (c′i − ci)ᵀ·(c′i − ci) }    (3)
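Method I above estimates the unknown rates directly from Eq. (2). For a 2×2 partition this is a single linear solve, and the identifiability condition (Ka·H(c) non-singular) appears as a determinant check; the numbers below are hypothetical:

```python
def estimate_rates(KaH, dca_dt, D, ca, ua):
    """Eq. (2): rho = (Ka H(c))^-1 (dca/dt + D*ca - ua), written out for a
    2x2 partition; raises if Ka H(c) is singular (rates not identifiable)."""
    (a, b), (c, d) = KaH
    det = a * d - b * c
    if abs(det) < 1e-12:
        raise ValueError("Ka H(c) is singular: rates not identifiable here")
    r0 = dca_dt[0] + D * ca[0] - ua[0]
    r1 = dca_dt[1] + D * ca[1] - ua[1]
    return [(d * r0 - b * r1) / det, (-c * r0 + a * r1) / det]

# Hypothetical partition with a diagonal Ka H(c):
rho = estimate_rates([[1.0, 0.0], [0.0, 2.0]],
                     dca_dt=[0.2, 0.4], D=0.1, ca=[1.0, 2.0], ua=[0.0, 0.0])
```

In practice the weak point is dca/dt: with sparse off-line samples the finite-difference derivatives are noisy, which is the drawback of Method I noted above.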

This method requires that the model equations (1) are integrated numerically between measurements. The numerical integration may be time consuming, especially when many measurements are available. Psichogios and Ungar (1992) applied this strategy for training ANNs embedded in mass balance equations. They suggested employing the sensitivities method for calculating parameter gradients; the evaluation of gradients is then less time consuming than the numerical alternative. For the particular case of hybrid model (1), the sensitivity equations may also be derived, provided that the functions N(c,W) are continuous and differentiable. Differentiation of E with respect to W results in:

∂E/∂W = Σi (∂E/∂c)i·(∂c/∂W)i = −Σi eiᵀ·(∂c/∂W)i    (4)

with ei = (c′i − ci). The matrix ∂c/∂W must be computed for each measured point. This can be accomplished through the sensitivity equations, which are obtained by differentiating Eqs. (1a-b) with respect to W. After some manipulation the following equations are obtained:

d/dt(∂c/∂W) = [K·∂(H(c)·ρ)/∂c − D·I]·(∂c/∂W) + K·H(c)·(∂N/∂W)    (5)

The set of equations (5) must be integrated simultaneously with Eqs. (1a-b). The initial condition for Eq. (5) should be (∂c/∂W)t=0 = 0, because the initial value of the state variables is independent of the parameters W.
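The sensitivities method can be checked on a toy scalar model: for dc/dt = −w·c the sensitivity s = ∂c/∂w obeys ds/dt = −c − w·s with s(0) = 0, the one-dimensional analogue of Eq. (5). This sketch (a toy model, not the bioreactor system itself) integrates both and compares against the analytic sensitivity:

```python
import math

def integrate_with_sensitivity(w, c0=1.0, t_end=1.0, dt=1.0e-4):
    """Euler integration of dc/dt = -w*c together with its parameter
    sensitivity s = dc/dw, which satisfies ds/dt = -c - w*s, s(0) = 0."""
    c, s = c0, 0.0
    for _ in range(int(round(t_end / dt))):
        # tuple assignment so both updates use the state at time t
        c, s = c + (-w * c) * dt, s + (-c - w * s) * dt
    return c, s

c, s = integrate_with_sensitivity(0.5)
# analytic solution: c = c0*exp(-w*t), hence dc/dw = -t*c0*exp(-w*t)
s_exact = -1.0 * math.exp(-0.5)
```

The integrated sensitivity matches the analytic value to the Euler truncation error, which is the consistency check one would also run on Eq. (5) before using it inside the SQP optimisation.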


3. Results and discussion

The model described in Park and Ramirez (1988) for fed-batch production of a recombinant protein will serve as an example to outline the proposed methods. The mass balance equations take the following state-space format:

d/dt [X; S; Pt; Pm] = [1 0 0; −7.3 0 0; 0 1 0; 0 0 1] · [μ(S)·X; fp(S)·X; φ(S)·(Pt − Pm)] − D·[X; S − S₀; Pt; Pm]    (6a)

where X is the biomass concentration, S the glucose concentration, Pt the total protein concentration, Pm the secreted protein concentration, D the dilution rate (D = F/V, with F the input feed rate and V the medium volume inside the bioreactor) and S₀ the substrate concentration in the input stream. The true kinetic expressions are the following:

μ(S) = 21.87·S / ((S + 0.4)·(S + 62.5)),  fp(S) = S·e^(−5S) / (S + 0.1),  φ(S) = 4.75·μ(S) / (0.12 + μ(S))    (6b)
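The three kinetic functions of Eq. (6b), assuming the forms reconstructed here (the exact constants should be checked against Park and Ramirez (1988)), are straightforward to code:

```python
import math

def mu(S):
    # specific growth rate as a function of glucose concentration S
    return 21.87 * S / ((S + 0.4) * (S + 62.5))

def fp(S):
    # specific total-protein production rate
    return S * math.exp(-5.0 * S) / (S + 0.1)

def phi(S):
    # protein secretion rate, saturating in mu(S)
    return 4.75 * mu(S) / (0.12 + mu(S))

rates_at_1 = (mu(1.0), fp(1.0), phi(1.0))
```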

Two simulations were carried out with a process simulation time of 16 h. The sampling times were 1 min for on-line measurements (F and V) and 15 min for off-line measurements (X, S, Pt and Pm). The two resulting datasets had 960 data records each. In order to excite the process and to obtain wide variations in S, the feed rate was the control output of a glucose on-off controller, with the feed rate varying between 0.01-0.2 L/h and glucose between 10-0.1 g/L. The glucose concentration in the inlet feed was S₀ = 40 g/L. The initial X and S were chosen randomly from uniform distributions in the intervals 0-2 g/L and 0-0.5 g/L, respectively. The initial concentrations for total and secreted protein were Pt(0) = 0 and Pm(0) = 0, respectively. Gaussian errors were added to X, S, Pt and Pm with standard deviations of 0.25, 0.25, 0.025 and 0.025, respectively.

A hybrid model was developed considering that the mass balance equations (Eq. (6a)) are known. The only part of the process considered to be unknown, in a mechanistic sense, is the set of 3 kinetic expressions (6b). As such, the matrix of known kinetic expressions was H = diag([X, X, (Pt − Pm)]). The 3 unknown rate expressions were modelled with a BP neural network with one input (glucose concentration), 8 hidden nodes and 3 outputs. The hybrid model thus consists of Eq. (6a) and the additional equation:

[μ, fp, φ]ᵀ = diag([μmax, fp,max, φmax])·s(W₂·s(W₁·S + B₁) + B₂)    (7)

where W₁, B₁, W₂, B₂ are parameter matrices associated with connections between nodes in the neural net, and s(·) is the sigmoid function. The parameter vector W is a vectored form of W₁, B₁, W₂, B₂ and comprises in this case 42 scalar parameters.
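A forward pass of the 1-8-3 network of Eq. (7) can be sketched as below; the output scaling values (maximum rates) and the random parameters are placeholders, since the trained weights and the exact scaling are not given in the paper:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hybrid_kinetics(S, W1, B1, W2, B2, scale=(0.6, 0.1, 5.0)):
    """Eq. (7): rates = diag(scale) * s(W2 s(W1*S + B1) + B2) for a network
    with one input, 8 hidden nodes and 3 outputs; scale holds assumed
    maximum rates mapping the sigmoid outputs to physical units."""
    hidden = [sigmoid(W1[j] * S + B1[j]) for j in range(8)]
    return [scale[k] * sigmoid(sum(W2[k][j] * hidden[j] for j in range(8))
                               + B2[k]) for k in range(3)]

random.seed(0)
W1 = [random.uniform(-1.0, 1.0) for _ in range(8)]
B1 = [random.uniform(-1.0, 1.0) for _ in range(8)]
W2 = [[random.uniform(-1.0, 1.0) for _ in range(8)] for _ in range(3)]
B2 = [random.uniform(-1.0, 1.0) for _ in range(3)]
rates = hybrid_kinetics(2.0, W1, B1, W2, B2)
```

Because every output passes through a sigmoid, each rate is automatically bounded between zero and its assumed maximum, which keeps the hybrid model BIBO-stable in the sense discussed in Section 2.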

Figure 1. Hybrid model simulation results for the test dataset: (a) biomass, (b) secreted protein. Full lines represent measured values; dashed lines represent modelling results.

The first study was to identify the parameter vector W using Method I. It was impossible to obtain good estimates of the kinetic rates because the data were too noisy and the sampling time was not sufficiently low to resolve the process dynamics. The same unsatisfactory results were obtained with spline least-squares fitting, with Euler discretisation and with mid-point discretisation. The off-line sampling and the highly dynamic behaviour preclude the application of Method I. The results obtained with Method II were, however, very promising. The algorithm employed was a large-scale SQP optimisation. Only one dataset was used for identification. The simulation results for the test dataset (not used for parameter identification) are plotted in Figs. (1a-b). The mean square error for the test dataset was 5×10⁻³ (with concentrations scaled to their average value). The prediction capabilities of this model, as measured by the test dataset modelling errors, seem to be rather satisfactory. In Figs. (2a-b) the identified kinetic functions are plotted together with the true curves (Eqs. (6b)) as functions of S. In this particular example, only one process experiment was sufficient to identify the specific growth rate (Fig. 2a) and the specific total protein production rate (not shown in the figure). A more careful analysis of Fig. (2a) shows that the modelling accuracy of the specific growth rate degrades for glucose concentrations higher than 10 g/L; the reason is that no measurements are available in this concentration range, as may be seen in Fig. (1b). In the case of the specific total protein production rate, the modelling results are not good for very low glucose concentrations, because only few measurements are available in this range. In contrast with Fig. (2a), Fig. (2b) shows that the modelling results for the protein secretion rate φ(S) are very poor. It was verified (not shown in the figures) that the known kinetic function h₃₃ = (Pt − Pm) is most of the time very small or even zero. This fact renders φ(S) unidentifiable, because h₃₃ cannot be inverted. Still, the product h₃₃·φ(S) is identifiable, and the identification algorithm managed to produce good secreted-protein prediction results (Fig. 1b).

Figure 2. Kinetics modelling results: (a) specific growth rate, (b) protein secretion rate. Full lines represent the true kinetics and dashed lines the modelling results.

4. References

Bastin, G. and Dochain, D., 1990, On-line Estimation and Adaptive Control of Bioreactors, Elsevier, Amsterdam.
Chen, L., Bernard, O., Bastin, G., Angelov, P., 2000, Control Eng. Practice, 8, 821-827.
Dochain, D., Perrier, M., Ydstie, B.E., 1991, Chem. Eng. Sci., 47, 4167-4178.
Feyo de Azevedo, S., Dahm, B., Oliveira, F.R., 1997, Comp. Chem. Engng, 21, 751-756.
Montague, G., Morris, J., 1994, Trends Biotechnol., 12, 312-324.
Park, S. and Ramirez, W.F., 1988, AIChE Journal, 34(9), 1550-1558.
Preusting, H., Noordover, J., Simutis, R., Lubbert, A., 1996, Chimia, 50(9), 416-417.
Psichogios, D.D. and Ungar, L.H., 1992, AIChE Journal, 38(10), 1499-1511.
Schubert, J., Simutis, R., Doors, M., Havlik, I. and Lubbert, A., 1994, Chem. Eng. Technol., 17, 10-20.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Classifying and Proposing Phase Equilibrium Methods with Trained Kohonen Neural Network

S. Oreski¹, J. Zupan² and P. Glavic¹
¹Faculty of Chemistry and Chemical Engineering, PO Box 219, SI-2000 Maribor, Slovenia, emails: [email protected], [email protected]
²National Institute of Chemistry, Hajdrihova 19, PO Box 30, SI-1000 Ljubljana, Slovenia, email: [email protected]

Abstract

The Kohonen neural networks were chosen to prepare a relevant model for fast selection of the most suitable phase equilibrium method(s) to be used in efficient vapor-liquid chemical process design and simulation. They were trained to classify the objects of the study (the known physical properties and parameters of samples) into none, one or more possible classes (possible methods of phase equilibrium) and to estimate the reliability of the proposed classes (adequacy of different methods of phase equilibrium). Of the several architectures trained, the Kohonen network yielding the best separation of clusters was chosen. Besides the main Kohonen map, maps of physical properties and parameters, and phase equilibrium probability maps were obtained from horizontal intersections of the neural network. A proposition of phase equilibrium methods is represented with the trained neural network.

1. Introduction

A proper selection of phase equilibrium methods is a critical factor for efficient process design and simulation, but among the many different phase equilibrium methods it is very difficult to choose the most appropriate ones. Therefore much effort has been put into building advisory systems for the selection of appropriate phase equilibrium methods. In the past the advisory systems were expert systems, which advised engineers through a sequence of questions and answers, i.e. CONPHYDE (Bañares-Alcantara et al., 1985), TMS (Nielsen et al., 1991) and PHYP (Oreski and Glavic, 1997). Artificial neural networks can be used for the same purpose. When trained, neural networks are capable of quick response, and the results obtained are better than, or at least of the same quality as, results gained with other methods. An additional advantage of neural networks is that they can give results even in cases where classical methods cannot. In chemical engineering and the chemical industry the diversity and number of neural network applications has increased dramatically in the last few years. Neural networks are used in fault detection, diagnosis, process identification and control, and process design and simulation; the applications have been discussed by Bulsari (1995) and Renotte et al. (2001). Neural networks are also used as criterion functions for optimisation with a known mathematical model and unknown process parameters (Dong et al., 1996). In the field of phase equilibria, the neural networks are used in


vapor/liquid equilibrium prediction. Neural network applications form part of (Alvarez et al., 1999) or complete (Sharma et al., 1999; Buenz et al., 1999) vapor/liquid or physical property predictive tools. Except in our work (Oreski and Glavic, 2001 and 2002), artificial neural networks have so far been used in the field of phase equilibria for prediction only, not for classification.

2. Method

When determining the neural network model to solve the classification problem, four main characteristics of the problem were identified:
- A large number of data exists, represented by objects consisting of diverse combinations of physical properties and parameters.
- The domain of phase equilibrium methods is not covered by all mathematically possible combinations of physical properties and parameters.
- The classification is to be made by neural networks.
- The reliability of the proposed phase equilibrium methods must be estimated.
According to the nature of the problem, the Kohonen neural network was chosen from among several different neural networks as the one with the most appropriate architecture and learning strategy.

Kohonen neural network
In this application the Kohonen network is based on a single layer of neurons arranged in a two-dimensional plane. A matrix presentation of the network is chosen, because the matrix description shows very clearly the relation between the input data and the planes (Figure 1).

Figure 1: A matrix representation of the two-dimensional Kohonen neural network layout.

The aim of Kohonen learning is to map similar signals to similar neuron positions. The learning procedure is unsupervised competitive learning: in each cycle the neuron c is found whose weights are most similar to the input signal,

c: min_j [ Σi (x_si − w_ji)² ],  j = 1, 2, ..., n    (2)

The weights of the winning neuron c and of its topological neighbours are then adjusted towards the input.

The next object is input and the process repeated (Zupan and Gasteiger, 1999).
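The cycle just described — find the winning neuron, then adjust it and its neighbours towards the input — can be sketched as follows; the Gaussian neighbourhood function and the learning rate are common textbook choices, not details taken from this paper:

```python
import math

def winner(x, weights):
    # Eq. (2): the winner c minimises the squared distance to the input
    dists = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in weights]
    return dists.index(min(dists))

def train_step(x, weights, grid, eta=0.5, radius=1.0):
    """One cycle of unsupervised competitive (Kohonen) learning: the winner
    and its topological neighbours on the 2-D grid are pulled towards x."""
    c = winner(x, weights)
    cx, cy = grid[c]
    for j, (gx, gy) in enumerate(grid):
        h = math.exp(-((gx - cx) ** 2 + (gy - cy) ** 2) / (2.0 * radius ** 2))
        weights[j] = [w + eta * h * (xi - w) for w, xi in zip(weights[j], x)]
    return c

# Tiny 3x3 map with 2-dimensional inputs:
grid = [(i, k) for i in range(3) for k in range(3)]
weights = [[0.1 * (i + 1), 0.1 * (k + 1)] for i, k in grid]
x = [0.9, 0.2]
c = train_step(x, weights, grid)
```

Repeating this over all objects, with the learning rate and radius shrinking over the epochs, yields the trained self-organised map used in the next section.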

3. Research Results

3.1. Data preprocessing
The combinations of physical properties and parameters briefly represent different chemical processes. They describe chemical bonds, structure of the components, working conditions, further calculations desired, accuracy of the methods, simplicity and speed of calculations, data availability and the exact definition of the applicability of phase equilibrium methods in the vapor-liquid and liquid-liquid regions. The combinations of phase equilibrium methods represent one or more phase equilibrium methods that are appropriate for designing and simulating such chemical processes. Fifteen methods usually used in practice were chosen: the Soave-Redlich-Kwong, Peng-Robinson, Benedict-Webb-Rubin, Starling, Lee-Kesler-Plocker and virial equations of state, and the Margules-1, Margules-2, van Laar, Wilson, NRTL, UNIQUAC, ASOG, UNIFAC and regular solution activity coefficient methods. The data were collected from experts and from the literature and expressed as more than 7000 data objects of the form X(y, x1, ..., x11). The variables xi represent eleven different physical properties and parameters; the variable y represents one appropriate phase equilibrium method out of the fifteen possible. With the preprocessing procedure, 4228 learning objects were constructed from the data objects in the form of 46-dimensional vectors Xs(xs1, xs2, ..., xs46) having a distributed presentation of all variables: the first 31 variables represent the 11 different physical properties and parameters, and the last 15 variables represent a target vector over all fifteen phase equilibrium methods.

3.2. Training of Kohonen networks and resulting maps
According to the number of learning objects, several Kohonen neural networks of sizes from 50×50 to 70×70, with 46 weights wji on neuron j, were trained for different numbers of epochs.
Of these, the 70×70 neural network trained with all learning objects through 900 epochs was chosen for further analysis as the architecture yielding the best separation of clusters. The main Kohonen map of the neural network consists of about 1800 evenly distributed grouped labels 'V', 'S', 'L' and '1', indicating different regions (vapor, vapor-liquid, liquid and liquid-liquid), and empty spaces (Figure 2). Labeled neurons were activated by one or more learning objects; empty spaces were not activated by any of them. With horizontal intersections of the trained neural network, 46 single maps were obtained: maps of physical properties and probability maps of phase equilibrium methods. The first 31 maps represent physical properties (Figure 3 represents the map for the physical property temperature). The last 15 maps represent phase equilibrium probability maps (Figure 4 represents the probability map for the UNIFAC method). When inspecting the main Kohonen map and all 46 maps by overlapping, a transparent and expected correlation was found among them.



Figure 2: The main Kohonen map of the 70x70 neural network trained through 900 epochs.


Figure 3: Map representing the physical property temperature.

Figure 1. Sampling in a unit square by using (a) HSS and (b) Monte Carlo techniques.

4. The Approach and Its Computational Implementation In this work we compare the performance of the SD algorithm when the HSS and the MC sampling techniques are used to sample the random variables. The computational implementation of the algorithm involves a framework that integrates the GAMS modeling environment (Brooke et al., 1998), the sampling code (FORTRAN) and a C++ program which generates the appropriate LP problems for each SD iteration. The implementation is shown in Figure 2.
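The HSS points of Figure 1 are commonly built from radical-inverse (van der Corput) digit reversals. The sketch below is a textbook variant of the Hammersley construction and may differ in details (index offsets, scrambling) from the exact implementation of Kalgnanam and Diwekar (1997).

```python
import numpy as np

def radical_inverse(i, base):
    """Reflect the base-`base` digits of the integer i about the radix point."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

def hammersley(n, dim=2, primes=(2, 3, 5, 7, 11)):
    """n Hammersley points in the dim-dimensional unit cube.

    The first coordinate is the equispaced i/n; the remaining coordinates
    are van der Corput sequences in successive prime bases.
    """
    pts = np.empty((n, dim))
    for i in range(n):
        pts[i, 0] = i / n
        for d in range(1, dim):
            pts[i, d] = radical_inverse(i, primes[d - 1])
    return pts
```

For example, in base 2 the indices 1, 2, 3 map to 0.5, 0.25, 0.75, which is why the points fill the unit square much more uniformly than pseudo-random draws.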

[Figure 2 shows the implementation scheme: a C++ code coordinates the FORTRAN sampling code and two GAMS-OSL steps: (1) generation of an approximation to Q(x), with sampling and the generation and solution of multiple LPs; (2) addition of an optimality cut and solution of the first-stage problem.]

Figure 2. Computational implementation of the SD algorithm.

5. Chemical Engineering Case-Study
Our case-study corresponds to a stochastic version of the boiler/turbo generator system problem presented by Edgar et al. (2001). The system may be modeled as a set of linear constraints and a linear objective function. The demands on the resources are considered uncertain variables in the problem. The distributions used for the demands are shown in Table 1, and the plant is shown in Figure 3. To produce electric power, this system contains two turbo generators: Turbine 1 is a double-extraction turbine and Turbine 2 is a single-extraction turbine. To meet the electric power demand, additional electric power may be purchased. The resulting SLP with recourse (SLPwR) was solved using the SD algorithm with the MC and HSS sampling techniques. For simplicity, we do not show the constraints of the model (see Edgar et al., 2001). The results are described in the following section.

Table 1. Probability distributions of the uncertain demands.

Resource                          Demand                        Distribution
Medium-pressure steam (195 psig)  [267,000 : 274,000] lbm/h     Uniform
Low-pressure steam (62 psig)      [97,000 : 103,000] lbm/h      Uniform
Electric power                    [22,000 : 26,000] kW          Uniform

[Figure 3 (flowsheet): a 635 psig steam header feeds Turbine 1, Turbine 2 and pressure-reducing valves; the turbines deliver powers P1 and P2, purchased power supplements the generated electric power, condensate is returned, and the 195 psig and 62 psig steam headers are supplied.]

Figure 3. Case-study: boiler/turbo generator system (Edgar et al., 2001).

6. Results and Conclusions
The values obtained for the objective function with the MC and HSS sampling techniques are shown in Figure 4a. Figure 4b shows the error of those values with respect to the convergence value of the objective function. It can be observed that the error obtained with HSS sampling is lower than that obtained with MC sampling. After the solution of several other SLPs, the reduction in the number of iterations and in the error appears to be a general advantage of HSS over MC and other sampling techniques. Current research efforts focus on using a fractal approach to characterize the error obtained with each of the techniques; we are also working on an extension of the analysis to stochastic mixed-integer linear programs. Since every node of a branch-and-bound algorithm can be seen individually as an SLP, the number of iterations and the computer time when using HSS should be dramatically reduced.
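The boiler/turbo-generator SLP is too large to reproduce here, but the sampling effect can be illustrated on a one-dimensional stand-in with a closed-form expectation: the expected recourse cost E[max(0, d - x)] for a uniform demand d, which equals (1 - x)^2/2 for d ~ U(0, 1). The demand distribution, the first-stage value x and the sample size below are assumptions for illustration only; stratified (low-discrepancy) points estimate the expectation far more accurately than plain Monte Carlo at equal sample size.

```python
import numpy as np

def recourse_cost(x, demands):
    """Average second-stage cost: pay one unit per unit of unmet demand."""
    return float(np.mean(np.maximum(0.0, demands - x)))

n, x = 1000, 0.4
exact = (1.0 - x) ** 2 / 2.0          # E[max(0, d - x)] for d ~ U(0, 1)

# Plain Monte Carlo sample versus a stratified (midpoint) sample of equal size.
mc = recourse_cost(x, np.random.default_rng(0).random(n))
strat = recourse_cost(x, (np.arange(n) + 0.5) / n)
```

The midpoint sample integrates each piecewise-linear segment exactly, so its error is essentially rounding noise, while the MC error decays only as O(n^-1/2); the same mechanism underlies the iteration-count reduction reported above for HSS.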

Figure 4. (a) Objective value for the case-study using SD with the MC and HSS techniques; (b) error of each iteration with respect to the convergence value of the objective.

7. References
Birge, J.R. and Louveaux, F., 1997, Introduction to Stochastic Programming, Springer-Verlag, New York.
Bouza, C., 1993, Stochastic programming: the state of the art, Revista Investigacion Operacional, 14(2).
Brooke, A., Kendrick, D., Meeraus, A. and Raman, R., 1998, GAMS - A User's Guide, GAMS Development Corporation, Washington, D.C., USA.
Edgar, T.F., Himmelblau, D.M. and Lasdon, L.S., 2001, Optimization of Chemical Processes, McGraw-Hill, New York.
Higle, J.L. and Sen, S., 1996, Stochastic Decomposition, Kluwer Academic Publishers.
Kalgnanam, J.R. and Diwekar, U.M., 1997, An efficient sampling technique for off-line quality control, Technometrics, 39(3), 308.
Rico-Ramirez, V., 2002, Two-Stage Stochastic Linear Programming: A Tutorial, SIAG/OPT Views and News, 13(1), 8-14.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


The Effect of Algebraic Equations on the Stability of Process Systems Modelled by Differential Algebraic Equations*
B. Pongrácz, G. Szederkényi, K.M. Hangos
Systems and Control Laboratory, Computer and Automation Research Institute HAS, H-1518 Budapest, P.O. Box 63, Hungary
Department of Computer Science, University of Veszprém, Veszprém, Hungary
e-mail: [email protected]

Abstract
The effect of the algebraic constitutive equations on the local stability of lumped process models is investigated in this paper using local linearization and eigenvalue checking. Case studies are used to systematically show the influence of the algebraic equations on the open-loop local stability of process systems, with illustrative examples of a continuous fermentation process model and a countercurrent heat exchanger.

1. Introduction
Lumped dynamic process systems are known to be modelled by differential and algebraic equations (DAEs). The differential equations originate from conservation balances for the extensive conserved quantities, while the algebraic constitutive equations describing physico-chemical properties, equations of state, reaction rates and intensive-extensive relationships complete the model (Hangos and Cameron 2001). The general form of a DAE process model consists of an input-affine differential part, with the algebraic equations given in implicit form:

dx/dt = f(x, z) + Σ_{i=1}^{p} g_i(x, z) u_i    (1)

0 = h(x, z)    (2)

where x is the state vector, u = [u_1 ... u_p]^T is the vector of manipulable control inputs u_i, and z is the vector of algebraic variables. Note that the control inputs occur only in the differential part of the model. Dynamic nonlinear analysis techniques (Isidori 1995) are not directly applicable to DAE models; they must first be transformed into nonlinear input-affine state-space form, possibly by substituting the algebraic equations into the differential ones. There are two possible approaches to nonlinear stability analysis: Lyapunov's direct method (using an appropriate Lyapunov-function candidate) or local asymptotic stability analysis using the linearized system model.

* Extended material of the paper is available on http://daedalus.scl.sztaki.hu
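A minimal numerical sketch of the semi-explicit structure (1)-(2): for an index-1 model the algebraic variables can be recovered from the constraint at every step and substituted into the differential part. The solver below uses a plain explicit Euler scheme on a toy system (h(x, z) = z - x^2, no input), chosen only because its exact solution is known; it is an illustration of the structure, not the analysis tool used in the paper.

```python
import numpy as np

def simulate_dae(f, g, h_solve, x0, u, t_end, dt=1e-3):
    """Explicit-Euler simulation of the semi-explicit index-1 DAE

        dx/dt = f(x, z) + g(x, z) * u,    0 = h(x, z),

    where `h_solve(x)` returns the unique z satisfying h(x, z) = 0.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(int(round(t_end / dt))):
        z = h_solve(x)                     # algebraic variables from the constraint
        x = x + dt * (f(x, z) + g(x, z) * u)
    return x

# Toy system: 0 = h(x, z) = z - x**2 and dx/dt = -z with u = 0.
# Substitution gives dx/dt = -x**2, whose exact solution is x(t) = x0/(1 + x0*t).
x_end = simulate_dae(f=lambda x, z: -z,
                     g=lambda x, z: 0.0,
                     h_solve=lambda x: x ** 2,
                     x0=1.0, u=0.0, t_end=1.0)
```

With x0 = 1 the exact value at t = 1 is 0.5, which the Euler trajectory approaches to first order in the step size.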


In this paper, only the latter will be considered, for the purpose of showing the influence of algebraic equations on the open-loop stability of process systems, using the illustrative examples of a continuous fermentation process model and a countercurrent heat exchanger. Special emphasis is put on the effect on local stability of the different mechanisms, such as convection, transfer and reaction, occurring in lumped parameter process systems.

2. Local Stability Analysis of Lumped Process Models
This section contains the basic notions and techniques used for local stability analysis of lumped process models.

2.1. The structure of nonlinear DAE process models
The structure of lumped process models depends both on the mechanisms taking place in the system and on the choice of input variables. Two practically important cases are considered.

1. Inlet intensive potential variables as inputs
If the control inputs are chosen to be the intensive potential variables at the inlets, then the differential equations (1) of the above general DAE process model take the following special form (Hangos et al. 2000):

dx/dt = (A_trans + B_outconv) x + Q~(x, z) + B_inconv u    (3)

where the coefficient matrices A_trans, B_outconv and B_inconv are constant matrices originating from the convective terms, while Q~ is a smooth nonlinear function representing the transfer and source terms.

2. Flowrates as input variables
If the flowrates of the convective flows are chosen to be the input variables, then the differential (conservation) equations take the following special form:

dx/dt = A_trans x + Q~(x, z) + Σ_{i=1}^{p} Q_conv,i(x, z) u_i    (4)

where A_trans is a constant matrix term, while the nonlinear smooth functions Q_conv,i and Q~ originate from the convective terms and the source terms, respectively. Under the assumption that the physico-chemical properties are constant and the specifications result in an index-1 model, the algebraic equations can always be substituted into (1).

2.2. Open loop local stability analysis of DAE models
For the purpose of stability analysis, the DAE model must be linearized around a steady-state operating point [x* z*]^T, which in the case of the general model (1)-(2) takes the form:

dx~/dt = (∂f/∂x + Σ_{i=1}^{p} u_i* ∂g_i/∂x)|(x*,z*) x~ + (∂f/∂z + Σ_{i=1}^{p} u_i* ∂g_i/∂z)|(x*,z*) z~ + [g_1(x*,z*) g_2(x*,z*) ... g_p(x*,z*)] u~    (5)

0 = (∂h/∂x)|(x*,z*) x~ + (∂h/∂z)|(x*,z*) z~    (6)

for given operating-point values of the input variables u_i* (i = 1, ..., p), and with the centered variables x~ = x − x*, z~ = z − z* and u~ = u − u*.


If (∂h/∂z)|(x*,z*) is invertible (which is equivalent to the model having a differential index equal to one), the vector of centered algebraic variables z~ can be explicitly expressed in terms of the state variables x~, yielding a purely differential representation:

dx~/dt = [(∂f/∂x + Σ_{i=1}^{p} u_i* ∂g_i/∂x) − (∂f/∂z + Σ_{i=1}^{p} u_i* ∂g_i/∂z)(∂h/∂z)^{-1}(∂h/∂x)]|(x*,z*) x~ + [g_1(x*,z*) ... g_p(x*,z*)] u~    (7)

The operating point(s) [x* z*]^T can be determined for prescribed input values u_i* by solving (1)-(2) with dx/dt = 0, i.e. by solving an algebraic system of equations. A necessary condition for the solvability of this system is that the number of differential (algebraic) equations equals the number of differential (algebraic) variables (the degree of freedom is zero) and that the original DAE system has differential index 1.

2.3. Mechanism-wide local stability analysis of DAE process models
We investigate the effect of the mechanisms (transfer, convection, reaction) on local stability, using the fact that both (3) and (4) are broken down into additive terms of these mechanisms. Earlier results show that transfer is a stabilizing term, because the eigenvalues of the matrix A_trans lie in the open left half-plane (Hangos and Perkins 1997); in the case of constant mass holdups in each balance volume, the Kirchhoff convection matrices ensure that convection may also be a stabilizing term. Further mechanism-wide stability considerations of the locally linearized models in the above two input-variable cases are as follows.

1. Inlet intensive potential variables as inputs
The linearized model of (3) with the algebraic dependence (2) takes the form:

dx~/dt = [A_trans + B_outconv + (∂Q~/∂x − ∂Q~/∂z (∂h/∂z)^{-1} ∂h/∂x)|(x*,z*)] x~ + B_inconv u~    (8)

Since the coefficient matrices A_trans, B_inconv and B_outconv in Eq. (3) are constant, the algebraic dependence (2) affects only the transfer and source terms in the model and thus has a major effect on the open-loop stability of the system.

2. Flowrates as input variables
The linearized model of (4) with the algebraic dependence (2) is similar to the former case:

dx~/dt = [A_trans + (∂Q~/∂x − ∂Q~/∂z (∂h/∂z)^{-1} ∂h/∂x)|(x*,z*)] x~ + [Q_conv,1(x*,z*) ... Q_conv,p(x*,z*)] u~    (9)

The main difference is that convection is affected by the inputs; therefore the state matrix of the linearized model contains only the transfer and source terms.
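The elimination used in (7)-(9) can be checked numerically: finite-difference Jacobians give the reduced state matrix A = ∂f/∂x − ∂f/∂z (∂h/∂z)^{-1} ∂h/∂x (autonomous part, u* = 0), whose eigenvalues decide local stability. The sketch below is generic; the toy f and h used in testing it are assumptions for illustration.

```python
import numpy as np

def jac(fun, v, eps=1e-6):
    """Forward-difference Jacobian of vector function `fun` at point v."""
    v = np.asarray(v, dtype=float)
    f0 = np.asarray(fun(v), dtype=float)
    J = np.empty((f0.size, v.size))
    for k in range(v.size):
        vp = v.copy()
        vp[k] += eps
        J[:, k] = (np.asarray(fun(vp), dtype=float) - f0) / eps
    return J

def reduced_state_matrix(f, h, x_star, z_star):
    """State matrix of the linearized DAE after eliminating the algebraic
    variables:  A = df/dx - df/dz (dh/dz)^-1 dh/dx,  evaluated at (x*, z*)."""
    fx = jac(lambda x: f(x, z_star), x_star)
    fz = jac(lambda z: f(x_star, z), z_star)
    hx = jac(lambda x: h(x, z_star), x_star)
    hz = jac(lambda z: h(x_star, z), z_star)
    return fx - fz @ np.linalg.solve(hz, hx)

def locally_stable(A):
    """Local asymptotic stability: all eigenvalues in the open left half-plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0.0))
```

For example, with f(x, z) = −x + z and the constraint 0 = z − 0.5 x, the algebraic coupling shifts the eigenvalue from −1 to −0.5: stabilizing mechanisms in f can be partially cancelled (or reinforced) by the constitutive equations, which is the point made above.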


3. Case Study 1: A Continuous Fermentation Process
A simple continuous fermentation process (see, for example, Takamatsu et al. 1975) is used as a case study, with constant liquid volume V. The liquid feed F, the temperature and all physico-chemical properties are assumed constant. The state variables are the concentration of the biomass (X) and that of the substrate (S). The control input of the system is the substrate feed concentration S_F, which is an intensive potential at the inlet as described in (3), and there is no transfer term. The reaction rate expression is given by an algebraic equation for the reaction rate r:

dX/dt = −(F/V) X + r    (10)

dS/dt = (F/V)(S_F − S) − (1/Y) r    (11)

0 = μ(X, S) − r    (12)

3.1. Stability of the simple fermenter
We will show that the stability of the model depends on the reaction kinetics only. The linearized model of the fermenter is a special case of (8) with no transfer effect (A_trans = 0):

d[X~ S~]^T/dt = A [X~ S~]^T + [0 F/V]^T S~_F,
A = [[−F/V + (∂r/∂X)|*, (∂r/∂S)|*], [−(1/Y)(∂r/∂X)|*, −F/V − (1/Y)(∂r/∂S)|*]]    (13)

The state matrix A of the linearized model is the sum of the diagonal output convection term (B_outconv = −(F/V) I) and the reaction (source) term A_source, where only the source term depends on the steady state. Since A is a matrix polynomial of the source term and the linearized reaction term is singular (there is a single reaction), the eigenvalues of A can be computed according to Gantmacher (1959):

λ(A)_1 = −F/V + 0 = −F/V,    λ(A)_2 = −F/V + trace(A_source)|* = −F/V + (∂r/∂X)|* − (1/Y)(∂r/∂S)|*    (14)

This leads to the stability condition

(∂r/∂X)|* − (1/Y)(∂r/∂S)|* < F/V    (15)
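The eigenvalue formulas (14) and the condition (15) can be verified numerically. The parameter values below (F, V, Y, K) are assumed purely for illustration, with the bi-linear kinetics r = KSX taken from case 3 of the next subsection.

```python
import numpy as np

# Illustrative parameter values (assumed, not taken from the paper).
F, V, Y, K = 1.0, 10.0, 0.5, 0.3
r = lambda X, S: K * S * X            # bi-linear kinetics, r = KSX
eps = 1e-7

def fermenter_state_matrix(Xs, Ss):
    """State matrix of (13): diagonal output convection -F/V plus the
    linearized reaction (source) term, with finite-difference partials."""
    drdX = (r(Xs + eps, Ss) - r(Xs, Ss)) / eps
    drdS = (r(Xs, Ss + eps) - r(Xs, Ss)) / eps
    return np.array([[-F / V + drdX,              drdS],
                     [-drdX / Y,      -F / V - drdS / Y]])

# Non-trivial operating point: the biomass balance (10) is stationary when
# r = (F/V) X, i.e. S* = F/(V K) for the bi-linear rate; X* is chosen freely.
Xs, Ss = 2.0, F / (V * K)
A = fermenter_state_matrix(Xs, Ss)
eigs = np.sort(np.linalg.eigvals(A).real)
```

With these numbers, ∂r/∂X = F/V = 0.1 and ∂r/∂S = KX* = 0.6, so (14) predicts the eigenvalues −F/V = −0.1 and −F/V + 0.1 − 0.6/Y = −1.2, and the left-hand side of (15) is −1.1 < F/V, so this operating point is locally stable.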

3.2. Stability of the simple fermenter with different reaction kinetics
With five different reaction kinetic expressions (μ functions), the model exhibits different stability properties. The investigation is performed by eigenvalue checking of the linearized models at the operating point(s) in the following cases.
1. The constant characteristic μ = K results in a linear time-invariant (LTI) model which is globally asymptotically stable. This case is the basis of all the following models, containing only the effect of the differential variables.
2. The linear reaction rate μ = KX also gives an LTI model, with the operating point of biomass wash-out, which is stable if K < F/V.
3. The simplest nonlinear, bi-linear reaction rate μ = KSX gives two operating points: a wash-out point and another one.


Table 1. The effect of reaction kinetics.

Model type                                              Reaction kinetics                      Stable if
linear time invariant                                   r = K                                  unconditionally
linear time invariant                                   r = KX                                 K < F/V
nonlinear input affine, operating points (1),(2)        r = KSX
nonlinear input affine, operating points (1),(2),(3)    r = μ_max S X / (k_1 + S + k_2 S²)


Figure 3 - Cluster 1 is represented in (a) and cluster 2 in (b). The relevance of the 1st and 2nd clusters is shown in (c) and (d), respectively.

6. Conclusions
The mathematical foundations of possibilistic fuzzy clustering of fuzzy rules were presented. The P-FCAFR algorithm was used to organize the rules of the fuzzy model of the liquid level inside the Pilot Plant Reactor into the HPS structure. The partition matrix can be interpreted as containing the values of the relevance of the sets of rules in each cluster. This approach is currently showing its potential for modelling and identification tasks, particularly in the field of fault detection and compensation.

Acknowledgment
Financial support from FCT under research projects is gratefully acknowledged.

7. References
Afonso, P.A. and Castro, J., 1998, Improving Safety of a Pilot Plant Reactor using a Model Based Fault Detection and Identification Scheme, Comp. & Chem. Eng., 22.
Salgado, P., 2001, Fuzzy Rule Clustering, IEEE Conf. on Syst., Man and Cybernetics 2001, Tucson, AZ, USA, pp. 2421-2426.
Salgado, P., 2002, Relevance of the fuzzy sets and fuzzy systems, in: Systematic Organization of Information in Fuzzy Logic, NATO Advanced Studies, IOS Press.
Yager, R., 1998, On the Construction of Hierarchical Fuzzy Systems Models, IEEE Trans. on Syst., Man, and Cyber. - Part C, 28, pp. 55-66.
Wang, Li-Xin, 1994, Adaptive Fuzzy Systems and Control: Design and Stability Analysis, Prentice Hall, Englewood Cliffs, NJ.


Residence Time Distributions from CFD in Monolith Reactors - Combination of Avant-Garde and Classical Modelling
Tapio Salmi, Johan Wärnä, Jyri-Pekka Mikkola, Jeannette Aumo, Mats Rönnholm, Jyrki Kuusisto
Åbo Akademi, Process Chemistry Group, Laboratory of Industrial Chemistry, FIN-20500 Turku/Åbo, Finland

Abstract
Computational fluid dynamics (CFD) was used to investigate the flow pattern and flow distribution in a recirculating monolith reactor system designed for catalytic three-phase processes. The information from the CFD model was transferred to a simplified simulation model, in which the monolith and the mixing system were described by parallel tubular reactors coupled to a mixing space. The model was based on the following principles: the mixing space and the monoliths were in fully dynamic states, but the concept of differential reactors was applied to the monolith channels. Thus the simplified model consisted of ordinary differential equations for the gas and liquid phases. The modelling concept was successfully illustrated by a case study involving complex reaction kinetics: hydrogenation of citral to citronellal, citronellol and 3,7-dimethyloctanol over cordierite-supported nickel on alumina washcoat. A comparison of experimental results with the model predictions revealed that the proposed approach is reasonable for the description of three-phase monolith reactors.

1. Introduction
Residence time distribution (RTD) is a classical tool in the prediction of the behaviour of a chemical reactor: provided that the reaction kinetics and the mass transfer characteristics of the system are known, the reactor performance can be calculated by combining kinetic and mass transfer models with an appropriate residence time distribution model. RTDs can be determined experimentally, as described in classical textbooks of chemical reaction engineering (e.g. Levenspiel 1999). RTD experiments are typically carried out as pulse or step-response experiments. The technique is elegant in principle, but it requires access to the real reactor system. In large-scale production, experimental RTD studies are not always possible or allowed. Furthermore, a predictive tool is needed when the design of a new reactor is considered. The current progress of computational fluid dynamics (CFD) enables computational 'experiments' on the reactor equipment to reveal the RTD. A lot of commercial software, such as CFX and Fluent, has recently been developed to carry out CFD calculations, particularly for homogeneous systems. Typically CFD is used for non-reactive fluid systems, but nowadays even reactive systems can be computed (Baldyga and Bourne 1999). The ultimate goal of chemical reaction engineering is to predict the overall reactor performance in the presence of chemical transformations. The difficulties of CFD, however, grow considerably when multiphase systems with chemical reactions are considered. For this reason, a logical approach is to utilize CFD to catch the essential features of the flow pattern and to use this information in classical reactor models based on RTDs.


The approach is illustrated by a case study: a three-phase monolith reactor coupled to a recycling device, the Screw Impeller Stirred Reactor (SISR) developed at TU Delft (Kapteijn et al. 2001). Cylindrical monoliths are placed in a stator, and a foam of gas and liquid is forced through the monolith channels with the aid of a screw (Fig. 1). Monolith reactors combine the advantages of slurry reactors and fixed beds: minimized internal diffusion, low pressure drop and continuous operation (Nijhuis et al. 2001).

Figure 1. The monolith reactor, schematically and in reality.

2. Flow Distribution from CFD Calculations
In monolith reactors, the distribution of the fluid into the channels is typically at least somewhat uneven; thus it is very important to predict the flow distribution and include it in the quantitative modelling. We utilized CFD calculations to obtain the flow characteristics of the experimental system (Fig. 1). The CFD calculations were performed with the software CFX 4.4. The flow profiles in the gas and liquid phases were solved with the turbulent k-ε method (320,000 calculation elements). To evaluate the distribution of gas bubbles, the Multiple Size Group method was applied. The results of the CFD calculations give the flow velocities of gas and liquid, the bubble sizes and the gas and liquid hold-ups in the channels (Fig. 2). This information can be utilized in the conventional reactor model. The predicted slug flow (Taylor flow) conditions in the monolith channels were also confirmed by visual investigation of the flow, replacing the autoclave with a glass vessel of equal size (Fig. 1). Schematically, the reactor can be regarded as a system of parallel tubes with varying residence times. The screw acts as a mixer, which implies that the outlet flows from the channels are merged together and the inlet flows to the monolith channels have a uniform chemical composition. The principal flowsheet is displayed in Fig. 3. Based on this flowsheet, the mass balance equations are derived as follows.


Figure 2. Flow distribution calculated in the monolith channels by CFD.

Figure 3. Simplified flowsheet of the monolith system described as parallel tube reactors and a stirred mixing volume.

3. Simplified Model for Reactive Flow
The surroundings of the monolith were considered to be a perfectly backmixed system in which no reactions take place. The monolith channels were approximated by the plug flow concept. The gas-liquid as well as the liquid-solid mass transfer resistances were included in the model. Since the catalyst layer was very thin (a few micrometers) and the reactions considered in the present case were slow, the internal mass transfer resistance in the catalyst layer was neglected. The gas-phase pressure in the reactor was maintained constant by controlled addition of hydrogen. The temperature fluctuations during the experiments were negligible; thus the energy balances were not needed. The conversions of the reactants were minimal during one cycle through the monolith, which implies that a constant gas hold-up could be assumed for each channel. The reactions were carried out in inert solvents, and previous considerations have shown that the liquid density did not change during the reaction. Based on this background information, the dynamic mass balance for the liquid phase in each channel can be written as follows:

n'_{Li,j,in} + N_{Li,j} ΔA_L = N_{Li,j,s} ΔA_s + n'_{Li,j,out} + dn_{Li,j}/dt    (1)

Due to the assumption of constant density, the volumetric flow rate does not change, and the model can be expressed in concentrations. Letting the basic volume element shrink, the hyperbolic partial differential equation (PDE) is obtained:

∂c_{Li,j}/∂t = N_{Li,j} a_L − N_{Li,j,s} a_s − (τ_{L,j} ε_{L,j})^{-1} ∂c_{Li,j}/∂z    (2)

This complete model is valid for all of the components but, in fact, the gas-liquid mass transfer term (N_{Li,j}) is non-zero for hydrogen only. The PDE model can be further simplified by taking into account that the conversion is minimal during one cycle through the channel, so the concentration profile in the channel can be assumed to be almost linear. The entire model can then be expressed in terms of the average (c*) and outlet (c_0) concentrations:

dc*_{Li,j}/dt = N*_{Li,j} a_L − N*_{Li,j,s} a_s − 2(τ_{L,j} ε_{L,j})^{-1} (c*_{Li,j} − c_{0Li})    (3)

The exact formulations of the fluxes (N*) depend on the particular mass transfer model being used; in principle the whole scope is feasible, from Fick's law to the complete set of Stefan-Maxwell equations (Fott and Schneider 1984). Since the only component of importance for the gas-liquid mass transfer is hydrogen, which has a limited solubility in the liquid phase, the simple two-film model along with Fick's law was used, giving the flux expression

N*_{Li,j} = k_{Li,j}(c*_{Gi,j}/K_i − c*_{Li,j})    (4)

For the liquid-solid interface, a local quasi-steady state mass balance takes the form N*Li,jas+r*g,pB=0

(5)

In case the liquid-solid mass transfer is rapid, the bulk and surface concentrations coincide, and the rate expression is inserted directly into the balance equation, which becomes

dc*_{Li,j}/dt = r*_{i,j} ρ_B − 2(τ_{L,j} ε_{L,j})^{-1} (c*_{Li,j} − c_{0Li})

(6)

The surroundings of the monolith are described by the concept of complete backmixing, which leads to the following overall mass balance for the components in the surrounding liquid phase:

dc_{0Li}/dt = τ_L^{-1} ( Σ_j (2c*_{Li,j} − c_{0Li}) α_{L,j} − c_{0Li} )

(7)

The treatment of the gas phase is analogous to that of the liquid phase. The flux describing the gas-liquid mass transfer is given by eq. (4). Consequently, the dynamic mass balance for the monolith channels can be written as

dc*_{Gi,j}/dt = −N*_{Li,j} a_L − 2(τ_{G,j} ε_{G,j})^{-1} (c*_{Gi,j} − c_{0Gi})

(8)

For the monolith surroundings, the concept of complete backmixing is again applied, leading to the formula

dc_{0Gi}/dt = τ_G^{-1} ( Σ_j (2c*_{Gi,j} − c_{0Gi}) α_{G,j} − c_{0Gi} )

(9)

The model for the schematic system (Fig. 3) consists of the simple ODEs (3) (or (6)), (7), (8) and (9), which form an initial value problem (IVP). In the case that pure hydrogen is used, its pressure is kept constant and the liquid-phase components are non-volatile, the gas-phase balance equations (8)-(9) can be discarded, and the gas-phase concentration in eqs (3) and (6) is obtained, e.g., from the ideal gas law. The initial conditions, i.e. the concentrations at t = 0, are equal everywhere in the system, and the IVP can be solved numerically by any stiff ODE solver.
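The resulting IVP can be illustrated with a small numerical sketch. The following Python script (an illustration only; the authors solved the model with a backward difference method in Modest) integrates balances of the form of eqs (3)/(6) and (7) for hydrogen over a handful of channel classes with SciPy's stiff BDF solver. All parameter values are invented for demonstration.

```python
import numpy as np
from scipy.integrate import solve_ivp

n_ch = 3                          # number of monolith channel classes
w = np.array([0.10, 0.15, 0.20])  # liquid velocity per channel [m/s] (assumed)
L = 0.25                          # channel length [m]
tau_ch = L / w                    # per-channel residence times tau_L,j
alpha = w / w.sum()               # flow fractions alpha_L,j
eps_L = 0.5                       # liquid hold-up
tau_T = 60.0                      # surroundings (tank) residence time [s]
kLa = 0.2                         # gas-liquid transfer k'_L * a_L [1/s]
c_sat = 10.0                      # H2 saturation conc. c*_G/K [mol/m3]
k_r, rho_B = 1e-3, 50.0           # first-order rate constant and bulk density

def rhs(t, y):
    c_avg, c0 = y[:n_ch], y[n_ch]
    # eq (6)-type channel balance: absorption - reaction - exchange with tank
    dc_avg = (kLa * (c_sat - c_avg) - k_r * rho_B * c_avg
              - 2.0 / (tau_ch * eps_L) * (c_avg - c0))
    # eq (7): perfectly backmixed surroundings fed by the channel outlets
    dc0 = (np.sum(alpha * (2.0 * c_avg - c0)) - c0) / tau_T
    return np.append(dc_avg, dc0)

# equal concentrations everywhere at t = 0, as stated in the text
sol = solve_ivp(rhs, (0.0, 600.0), np.zeros(n_ch + 1), method="BDF")
```

Replacing the single hydrogen balance with the full component set and the citral rate equations recovers the structure solved in the paper.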

4. Application: Catalytic Three-Phase Hydrogenation of Citral in the Monolith Reactor
Hydrogenation of citral was selected as an example because it nicely illustrates a case with complex stoichiometry and kinetics, which is typical of fine chemicals. The stoichiometric scheme is displayed in Fig. 4. The reaction system is relevant for the manufacture of fragrances, since some of the intermediates, namely citronellal and citronellol, have a pleasant smell. Thus the optimisation of the product yield is of crucial importance. Isothermal and isobaric experiments were carried out under hydrogen pressure in the monolith reactor system at various temperatures and pressures (293-373 K, 2-40 bar). The catalytic material was nickel on an alumina-washcoated cordierite support. Hexane was used as the solvent. Samples were withdrawn from the reactor and analysed by gas chromatography (Aumo et al. 2002).

The fit of the model to experimental data is displayed in Fig. 5. The product distribution depends dramatically on the reaction conditions: at low temperatures and hydrogen pressures the system worked under kinetic control, and the desired intermediate products were obtained in high yields. As the temperature and hydrogen pressure were increased, the final product was favoured, and the process was evidently shifted towards mass-transfer control. The individual mass-transfer coefficients were estimated using the molecular diffusion coefficient of hydrogen in the liquid phase (Reid et al. 1988) along with the hydrodynamic film thickness (Irandoust and Andersson 1989). Since the film thickness depends on the local velocity, the mass transfer coefficient was different in different channels. The rate equations describing the reaction scheme (Fig. 4) have been presented in a previous paper of our group (Tiainen 1998). The weighted sum of squares between measured and estimated concentrations was minimised by a hybrid simplex-Levenberg-Marquardt algorithm implemented in the simulation software Modest (Haario 1994). The model equations were solved in situ during the parameter estimation by the backward difference method. The estimated parameters were the kinetic and adsorption equilibrium constants of the system. The simulation results revealed that the model was able to describe the behaviour of the system. The parameter values were reasonable and comparable with values obtained in previous studies concerning citral hydrogenation in a slurry reactor (Tiainen 1998).
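The estimation workflow can be imitated outside Modest. The sketch below fits rate constants of a consecutive scheme A → B → C (a crude stand-in for the citral network; the real rate equations are in Tiainen 1998) by nonlinear least squares with SciPy, solving the ODEs in situ at every residual evaluation, in the spirit of the hybrid procedure described above. The "measurements" are synthetic.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def simulate(k, t_eval):
    """Solve the model in situ for trial parameters k = (k1, k2)."""
    k1, k2 = k
    rhs = lambda t, c: [-k1 * c[0], k1 * c[0] - k2 * c[1], k2 * c[1]]
    sol = solve_ivp(rhs, (0.0, t_eval[-1]), [1.0, 0.0, 0.0],
                    t_eval=t_eval, method="BDF", rtol=1e-9, atol=1e-12)
    return sol.y

# synthetic "measurements": known constants plus small noise
t_obs = np.linspace(0.0, 10.0, 21)
k_true = np.array([0.6, 0.25])
rng = np.random.default_rng(0)
c_obs = simulate(k_true, t_obs) + rng.normal(0.0, 0.005, (3, t_obs.size))

# minimise the sum of squares between measured and estimated concentrations
fit = least_squares(lambda k: (simulate(k, t_obs) - c_obs).ravel(),
                    x0=[0.1, 0.1], bounds=(0.0, np.inf))
```

The recovered `fit.x` should lie close to the constants used to generate the data.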


Figure 4. Stoichiometry of citral hydrogenation over Ni-alumina.


Figure 5. Fit of model {-) to experimental data (o) citral hydrogenation in the monolith reactor system.

5. Notation
A    area or cross-section
a    area-to-volume ratio
c    concentration
K    gas-liquid equilibrium ratio
k'   overall mass transfer coefficient
L    monolith channel length
N    flux
n    amount of substance
n'   flow of amount of substance
t    time
V    volume
V'   volumetric flow rate
z    dimensionless length coordinate
α    fraction of volumetric flow rate through one channel
ε    hold-up
ρ_B  catalyst bulk density
τ    residence time

Subscripts and superscripts
G    gas
ch   channel
i    component index
j    monolith channel index
L    liquid
S    solid (catalyst) surface
T    mixing volume (tank)
0    inlet to the mixing volume
*    average value

Merged parameters
α_{G,j} = w_{G,j} / Σ_j w_{G,j}
α_{L,j} = w_{L,j} / Σ_j w_{L,j}
τ_{G,j} = L / w_{G,j}
τ_{L,j} = L / w_{L,j}
τ_G = V_{GT} / (A_ch Σ_j w_{G,j})
τ_L = V_{LT} / (A_ch Σ_j w_{L,j})

6. References
Aumo, J., Lilja, J., Mäki-Arvela, P., Salmi, T., Sundell, M., Vainio, H., Murzin, D., 2002, Catal. Letters (in press).
Baldyga, J., Bourne, J.R., 1999, Turbulent Mixing and Chemical Reactions, Wiley.
Fott, P., Schneider, P., 1984, in: Recent Advances in the Engineering Analysis of Chemically Reacting Systems (Ed. L.K. Doraiswamy), Wiley Eastern.
Haario, H., 1994, MODEST - User's Guide, Profmath Oy, Helsinki.
Irandoust, S., Andersson, B., 1989, Ind. Eng. Chem. Res. 28, 1685-1688.
Kapteijn, F., Nijhuis, T.A., Heiszwolf, J.J., Moulijn, J.A., 2001, Catalysis Today 66, 133-144.
Levenspiel, O., 1999, Chemical Reaction Engineering, 3rd Ed., Wiley.
Nijhuis, T.A., Kreutzer, M.T., Romijn, A.C.J., Kapteijn, F., Moulijn, J.A., 2001, Catal. Today 66, 157-165.
Reid, R.C., Prausnitz, J.M., Poling, B.E., 1988, The Properties of Gases and Liquids, McGraw-Hill.
Tiainen, L.P., 1998, Doctoral Thesis, Åbo Akademi, Turku/Åbo.

7. Acknowledgements
The work is a part of the activities at the Åbo Akademi Process Chemistry Group within the Finnish Centre of Excellence Programme (2000-2005) by the Academy of Finland. Financial support from the National Agency of Technology (TEKES) and the Finnish Graduate School in Chemical Engineering (GSCE) is gratefully acknowledged. AEA Technology is gratefully acknowledged for the special licence agreement for the CFD software.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Modelling the Dynamics of Solids Transport in Flighted Rotary Dryers
P.A. Schneider, M.E. Sheehan and S.T. Brown
James Cook University, School of Engineering, Townsville, Queensland 4811, Australia

Abstract

This paper proposes a simple dynamic solids transport model for flighted rotary dryers, which results from discretising the dryer in the axial direction into a series of equivolume elements. Each resultant element is partitioned into two zones, one active and the other passive. Solids interchange between the active and passive zones is included, leading to a tanks-in-series/parallel approach traditionally used by reaction engineers. Modelling solids transport in this manner allows the residence time distribution (RTD) characteristics of the rotary dryer to be elucidated. In this work gPROMS is used to simulate the proposed rotary dryer model. Data from a 100 tonne per hour raw sugar dryer are reconciled against the dynamic solids transport model by estimating overall solids transport coefficients.

1. Introduction/objectives
The Australian raw sugar industry faces increasing competition in a highly competitive world market. Now, more than ever, export quality standards must be ensured. As the last unit operation in the manufacture of raw sugar, the rotary dryer plays a key role in meeting increasingly stringent product quality specifications. Given the high capital cost of additional drying capacity (approximately AUS$2M), it is prudent to investigate how existing dryer capacity can be better utilised. A key step in any optimisation involves the development of a dynamic model of the process in question. Rotary sugar drying involves the simultaneous and coupled cooling and drying of a wet crystalline feed, using a counter current air stream as a heat and humidity sink. This is shown schematically in Figure 1.


Figure 1: Schematic view of a flighted rotary dryer, showing cross current airflow.

A key issue in modelling rotary dryers, which has largely been neglected to date, involves the incorporation of a dynamic solids transport model. Many previous workers


(Douglas et al., 1993, Duchesne et al., 1996 and Sheehan and Schneider, 2000) developed mass and energy balance relations for rotary dryers, in which they made two key assumptions:
• The hold-up of solids in the dryer is uniform and always at steady state.
• All of the solids in the dryer participate in drying.
The first assumption is not valid due to feed variations, which are common to many rotary dryers employed within the Australian sugar industry. In fact, it could reasonably be argued that the dryer is never at steady state and therefore the hold-up can never be uniform along its length. The second assumption is also invalid, since visual inspection of any operating flighted rotary dryer reveals that, while some of the solids do contact the oncoming air stream, a significant portion of the crystals is held up in the flights or kilning along the dryer floor and thus does not interact with the oncoming air stream. A key objective of this work was to develop a dynamic model of solids transport through an industrial flighted rotary dryer that addresses these two assumptions and forms the base upon which the mass and energy balances can be superimposed.

2. Methods
2.1. Solids transport modelling
Solids transport down a flighted rotary dryer is complex and can be attributed to solids rolling and cascading. These mechanisms, while descriptive, would find very little direct application in improving the control of a rotary dryer. When modelling solids transport in a flighted rotary dryer, it can be observed that the solids behave in one of two ways. Solids either actively curtain, thereby gaining exposure to the counter current air stream, or travel passively (in the flights or along the dryer floor) and therefore do not participate in drying. Thus the solid phase may be subdivided into two categories, active and passive. This is pictured in Figure 2 a), which shows a schematic cross section of a flighted rotary dryer. Figure 2 b) shows an idealised conception of the active and passive solid phases. This concept assumes that passive solids are contained within a well-mixed element, while active solids are held within another, parallel well-mixed element.


Figure 2: a) Cross sectional view of a flighted rotary dryer, featuring active and passive solids phases, b) Idealised element, showing active and passive solids interaction.

The flow of solids out of the i-th element, ṁ_i, is assumed to be proportional to the mass of solids within that element, m_i, giving

ṁ_i = k m_i

(1)

The coefficient k is a constant of proportionality, which describes the propensity of solids to depart the i-th element. Thus the dynamics of the solids mass hold-up in the passive and active elements are:

dm_{i,p}/dt = k_1 (m_{i-1,p} − m_{i,p}) − k_2 m_{i,p} + k_3 m_{i,a}

(2)

dm_{i,a}/dt = k_2 m_{i,p} − k_3 m_{i,a}

(3)

where the subscripts p and a refer to passive and active solids respectively, and the transport coefficients are k_1 (passive-to-passive), k_2 (passive-to-active) and k_3 (active-to-passive). Furthermore, the dynamics of the concentration, w, of a trace component in the passive and active elements are determined as

dw_{i,p}/dt = [k_1 (w_{i-1,p} m_{i-1,p} − w_{i,p} m_{i,p}) − (k_2 w_{i,p} m_{i,p} − k_3 w_{i,a} m_{i,a}) − w_{i,p} dm_{i,p}/dt] / m_{i,p}

(4)

dw_{i,a}/dt = [k_2 w_{i,p} m_{i,p} − k_3 w_{i,a} m_{i,a} − w_{i,a} dm_{i,a}/dt] / m_{i,a}

(5)

The approach taken to model the dynamics of solids transport in a full-scale flighted rotary dryer combines the approaches of Duchesne et al. (1996) and Matchett and Baker (1988). Consider N dryer elements in series, as shown in Figure 3, in which solids flow from one element to the next as passive solids. In the present model, active solids would interact with a counter current air stream. Modelling the entire dryer is simply a matter of repeating the above equations (2-5) N times and supplying suitable inlet flows for solids and air.

Figure 3: Schematic representation of a flighted rotary dryer, featuring active and passive solids phases.

2.2. Solids transport dynamic simulation
The above equations for solids transport were implemented within gPROMS. This was done by creating separate gPROMS Models, which described the transfer of solids into, and out of, the passive and active elements. These gPROMS Models were then linked together by a third gPROMS Model into a variable number, N, of elements in series. The combined gPROMS Model was then formulated into a gPROMS Process, which executed the dynamics under varying conditions. A variety of steps were taken to verify the gPROMS code, such as mass balance closure on total solids in the dryer and a reconciliation of inlet and outlet tracer mass.
2.3. Industrial tracer experiments
A 100 t/h flighted rotary sugar dryer at CSR Sugar Limited's Invicta Sugar Mill, located in North Queensland, was used as a case study to evaluate the proposed model. Approximately 0.5 kg of elemental lithium, as saturated lithium chloride solution, was injected into the sugar inlet end of the rotary dryer over a 40 second time frame, once the dryer had reached (close to) steady state operation. Samples of raw sugar leaving the dryer were taken and later analysed for lithium by atomic absorption spectrometry. It should be noted that simulation results of the proposed solids transport model were used as a guide to determine when the dryer outlet stream should be sampled, in order to "catch" the peak of the residence time distribution (RTD) curve. In this way an information-rich signal was gained, which was invaluable for model validation and parameter estimation purposes, while at the same time reducing experimentation costs.
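A gPROMS licence is not needed to reproduce the structure of the simulation. The sketch below implements eqs (2)-(5) for a tracer pulse in Python: at constant hold-ups the tracer balances are linear in the tracer mass per element, so the RTD follows directly from the outlet tracer flow. The coefficient values and N here are invented for illustration (the fitted values appear in Figure 5).

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 10                      # number of axial elements (the paper used N = 50)
k1, k2, k3 = 6.0, 1.0, 4.0  # transport coefficients [1/min] (assumed)

def rhs(t, M):
    Mp, Ma = M[:N], M[N:]
    Mp_in = np.concatenate(([0.0], Mp[:-1]))  # no tracer enters after the pulse
    dMp = k1 * (Mp_in - Mp) - k2 * Mp + k3 * Ma   # eq (2) applied to tracer mass
    dMa = k2 * Mp - k3 * Ma                        # eq (3) applied to tracer mass
    return np.concatenate([dMp, dMa])

M0 = np.zeros(2 * N)
M0[0] = 1.0                                  # unit tracer pulse into element 1
sol = solve_ivp(rhs, (0.0, 30.0), M0, method="BDF", dense_output=True,
                max_step=0.05)
t = np.linspace(0.0, 30.0, 601)
E = k1 * sol.sol(t)[N - 1]                   # outlet tracer flow = RTD curve
```

Integrating E(t) over a long horizon should return essentially all of the injected tracer, mirroring the mass balance closure checks described above.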

3. Results

The gPROMS simulation was tested under a variety of conditions, in order to evaluate the dynamic behaviour of the solids in the dryer. Figure 4 shows the effect of a series of step changes in the inlet feed flow rate on the total mass hold-up (i.e. active and passive) in the first, middle and last dryer elements. As expected, the first element behaves very much like a first-order system, while the middle and last elements have a sigmoidal shape, characteristic of higher-order systems.


Figure 4: Model dynamic response of total solids mass in selected dryer elements.

The results of the industrial tracer study are shown in Figure 5. Laboratory analyses of elemental lithium in the raw sugar samples taken from the dryer, expressed in parts per million, are shown by the data points. These data are of excellent quality, considering the conditions under which the experiment was carried out. It is interesting to observe the extended "tail" of the RTD curve, indicating that there is some back mixing of solids in the dryer unit, which justifies the choice of the series-parallel structure of the proposed model.

k_1 = 353.4084, k_2 = 11.09308, k_3 = 50.83923


Figure 5: Full-scale rotary dryer RTD data and gPROMS estimation of the transport coefficients.

Parameter estimation was performed in gPROMS, which attempted to minimise the error between the predicted and actual lithium concentrations exiting the dryer. The optimal values for the transport coefficients are shown on Figure 5. Before this estimation was carried out, the plant data were normalised so that the area under the RTD curve was equal to unity, matching the conditions in the gPROMS simulation. The smooth curve in Figure 5 represents the optimised RTD from the gPROMS solids transport model, based on optimised parameter values for solids transport (k_1, k_2, k_3). It is important to note that the transport coefficients were set globally for all elements along the dryer and did not change locally. It is clear that the proposed model structure describes well the steady state RTD of the flighted rotary raw sugar dryer. While the optimised gPROMS simulation agrees well with the plant data, there are a few shortcomings of the model. First, the number of elements, N, had to be chosen manually, since it was not possible to optimise this parameter within gPROMS' estimation routines. However, once a reliable method had been developed for the estimation of the transport coefficients, multiple optimisations were run across a range of N values. Using this manual method, the optimum number of elements was determined to be 50.


Another important shortcoming of the proposed model is that the transport coefficients are not physically meaningful. However, this shortfall is more than made up for by gains in model simplicity. At steady state, the mass ratio of active solids to total solids in the dryer, α, is related to the transport coefficients according to

α = m_{i,a} / (m_{i,a} + m_{i,p}) = k_2 / (k_2 + k_3)

(6)

The optimised parameters k_2 and k_3 yield a mass ratio of active to total solids of 18%, which is comparable to results presented by Matchett and Baker (1988) in their experimental study of rotary dryers. This encouraging result is more a matter of serendipity, since there are no physical constraints in our model to guarantee it for any given set of RTD data. As a result, α was fixed at a value of 20% and adjustment was made only to k_1 and k_3 in the estimation (since k_2 is then fixed by α). A parameter estimation procedure was set up in gPROMS, but failed to deliver meaningful estimates for the transport coefficients. The reasons for this are unclear and are currently being investigated.
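Equation (6) can be checked directly against the fitted coefficients reported in Figure 5:

```python
# Quick check of eq. (6) using the fitted values reported in Figure 5
k2, k3 = 11.09308, 50.83923
alpha = k2 / (k2 + k3)    # steady-state active-to-total solids mass ratio
print(round(alpha, 3))    # -> 0.179, i.e. about 18%
```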

4. Conclusions and Outlook
This study proposes a simple approach to modelling the dynamics of solids transport within a flighted rotary dryer. The approach taken was to model the system as a series-parallel formulation of well-mixed tanks. The concept of active and passive solids is important, since it will lend itself well to the addition of mass and energy balance relations. This model formulation predicts the RTD of the system. Industrial RTD data were obtained from a 100 tonne per hour dryer and compared with the model predictions. gPROMS parameter estimation has delivered overall transport coefficients for this system. The transport coefficients are not independent, nor completely physically meaningful. However, they produce a very simple model formulation, which forms the basis for more detailed rotary dryer models incorporating mass and energy balances. Future work will see the development of a full dryer model based on the proposed solids transport model. Refinements will be made to incorporate the effects of solids moisture and interaction with the counter current air stream.

5. References
Douglas, P.L., Kwade, A., Lee, P.L. and Mallick, S.K., 1993, Drying Tech., 11(1), 129-155.
Duchesne, C., Thibault, J. and Bazin, C., 1996, Ind. Eng. Chem. Res., 35, 2334-2341.
Matchett, A.J. and Baker, C.G.J., 1988, Particle Residence Times in Cascading Rotary Dryers, Part 2 - Application of the Two-stream Model to Experimental and Industrial Data, J. Separ. Proc., 9, 5-13.
Sheehan, M.E. and Schneider, P.A., 2000, Modelling of Rotary Sugar Dryers: Steady State Results, Proceedings of Chemeca 2000, Perth.



On-Line Process Optimisation: Parameter Tuning for the Real Time Evolution (RTE) Approach

Sebastian Eloy Sequeira¹, Miguel Herrera², Moises Graells², Luis Puigjaner¹
Chemical Engineering Department, Universitat Politecnica de Catalunya
¹ETSEIB, Av. Diagonal 647, Pav. G, Barcelona (08028), Spain
²EUETIB, Comte d'Urgell 187, Barcelona (08036), Spain

Abstract
This paper describes a methodology for tuning the parameters of RTE (Real Time Evolution). RTE was introduced in a previous work (Sequeira et al., 2002) as a new approach to on-line model-based optimisation. The strategy differs from classical Real Time Optimisation in that waiting for steady state is not necessary, and in the use of simple optimisation concepts in the solution procedure. Instead, current plant set points are periodically improved within a neighbourhood of the current operation, following a periodically updated model. Thus, Real Time Evolution (RTE) is based on a continuous improvement of the plant operation rather than on the formal optimisation of a hypothetical future steady state operation. In spite of using a simpler scheme, the proposed strategy offers a faster response to disturbances, better adaptation to changing conditions and smoother plant operation, regardless of the complexity of the control layer. However, a successful application of the strategy requires appropriate parameter tuning, that is: how often should set points be adjusted, and what exactly does "neighbourhood" mean. Although the optimal values of these parameters depend strongly on the process dynamics and involve complex calculations, this work uses a simple benchmark to obtain general guidelines and illustrates a methodology for easy parameter tuning as a function of the process information typically available.

1. RTO and RTE Fundamentals
The classical RTO loop (Marlin and Hrymak, 1997; Perkins, 1998) consists of subsystems for measurement validation, steady state detection, process model updating, model-based optimisation and command conditioning. Once the plant operation has reached steady state, plant data are collected and validated to avoid gross errors in the process measurements, while the measurements may be reconciled using material and energy balances to ensure consistency of the data set used for model updating. After validation, the information is used to estimate the model parameters so that the model correctly represents the plant at the current operating point. Then, the optimum controller set points are calculated using the updated model, and they are sent to the control system after a check by the command conditioning subsystem. Real Time Evolution has been introduced as an alternative to current RTO systems. The key idea is to obtain a continuous adjustment of set-point values, according to current operating conditions and disturbance measurements (those which affect the optimum location), using a steady state model. Table 1 summarises and compares the relevant features of both approaches. The steady state information is used by RTE only for data


reconciliation and model updating, while the core of the system is the recursive improvement, which does not need the process to be at steady state.

Table 1: Functional sequences for RTO and RTE.

RTO:
• Data acquisition / Data pre-processing
• IF UNSTEADY: check steadiness (wait for steady state)
• IF STEADY: Data Validation; Model Updating; Optimisation (optimal set-point values); Implementation

RTE:
• Data acquisition / Data pre-processing
• IF STEADY: Data Validation; Model Updating
• Every execution period: Improvement (best small set-point changes); Implementation

The improvement algorithm consists of the following: given the current point and current information about the disturbances, simulate a few changes in the decision variables in a pre-defined small neighbourhood around the current point, using a steady state model. The output of this algorithm is the best point in terms of the steady state objective function, which must also satisfy the required constraints. Thus, the RTE approach can be seen as a variant of the EVOP strategy (Box, 1969), which relies on a model instead of plant experiments and avoids wasting resources on non-profitable trial moves. In addition, it does not require waiting for steady state, so that an adequate tuning of the RTE parameters allows the system to follow pseudo steady states, hence improving the economic performance even under continuous disturbances.
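A minimal sketch of the improvement move is given below: candidate set points are generated at ±χ around the current point, screened for feasibility against a steady-state model, and the best one is kept. The quadratic objective and box constraint are placeholders, not the Williams-Otto model.

```python
import itertools
import numpy as np

def objective(x):                 # stand-in steady-state profit model
    return -(x[0] - 1.0) ** 2 - (x[1] + 0.5) ** 2

def feasible(x):                  # stand-in operating constraint
    return np.all(np.abs(x) <= 2.0)

def rte_step(x, chi):
    """One improvement: probe +/- chi moves and keep the best feasible one."""
    best_x, best_f = x, objective(x)
    for move in itertools.product((-1.0, 0.0, 1.0), repeat=len(x)):
        cand = x + chi * np.array(move)
        if feasible(cand) and objective(cand) > best_f:
            best_x, best_f = cand, objective(cand)
    return best_x

x = np.array([0.0, 0.0])
for _ in range(40):               # recursive application approaches the optimum
    x = rte_step(x, chi=0.1)
```

Repeated application drives the set points towards the (here known) optimum at (1, -0.5) without ever moving more than χ per execution.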

2. The Parameter Tuning Problem
For a given process and a given set of disturbance patterns entering the system, the tuning problem consists of finding the values of the RTE parameters: the time between executions, DT, and the "small" neighbourhood of maximum allowed changes, represented by a vector χ. In the following, the influence of these parameters on system performance is summarised, and some guidelines are extracted for adjusting them properly for the process under consideration (the Williams-Otto reactor, Williams and Otto, 1960, as modelled in Sequeira et al., 2002, is used in this work).
2.1. Neighbourhood size χ
When the improvement procedure is repeated n times, the local optimum is expected to be found with an acceptable degree of accuracy. The greater the values in χ, the more inaccurate the result. The lower the values in χ, the higher the possibility of being trapped in a saddle point or of being affected by rounding errors (note that every point


requires solving a non-linear equation system), and the lower the possibility of reaching the final value within the given number of iterations (n). Then, χ can be considered a parameter of an optimisation algorithm (in this case, the recursive application of the improvement). Given that the optimisation procedure only determines the best steady state, its tuning can be de-coupled from the process dynamics. In this way, the tuning problem can be stated as finding χ such that the distance between the true optimal point (f^ref) and that found using the RTE recursive algorithm (f^RTE) is minimised for all the expected conditions:

min_χ z, where

z = [ (1/m) Σ_{l=1}^{m} ( (f^RTE(χ, ξ_l) − f^ref(ξ_l)) / f^ref(ξ_l) )^p ]^{1/p}

(1)

where ξ_l, l = 1 to m, is a discretisation of the range of possible values of the disturbances with economic influence. Obviously, only in a few cases is it possible to identify the "true" optimum, but it can be approximated by a reference optimisation method able to give the optimum with the desired degree of accuracy. Additionally, p will commonly be set to two (Euclidean distance). The procedure then becomes:
- Identify the range of variability of the disturbances to evaluate the ξ_l values
- Select the reference method for estimating f^ref
- Solve the minimisation problem (eq. 1)
In addition, an appropriate scaling of f and the decision variables will likely allow using the same value for all components of the vector χ, thus reducing the computational effort substantially. Figure 1 shows the value of z for increasing values of χ, using different arbitrary values of n. Note that for a changing n there is an acceptable range rather than a single optimum (in this case χ ≈ 5, 6 and 7). This fact is indeed desirable for the overall procedure, as will be explained in a subsequent section.


Figure 1: Influence of χ on z and its tuning.

2.2. Execution period DT
A given disturbance p triggers the RTE procedure, which will periodically improve the set points until no further improvement is possible. By changing the RTE frequency (1/DT),


different set-point profiles are obtained for the same disturbance pattern. Thus, the question is the determination of the best value of DT. In order to compare economic performance, the Mean Objective Function (MOF) is used:

MOF(t) = [ ∫_{t_0}^{t} IOF(θ) dθ ] / (t − t_0)

(2)

where IOF denotes the hypothetical on-line measurement of the objective function (Instantaneous Objective Function), and t_0 is a reference time (in this work, the time at which the disturbance occurs). The effect of DT on the system performance has been studied by exciting the system with different ramp disturbances (with the same final values) and applying RTE with different DT values. Figure 2 summarises some of the MOF profiles obtained. Charts a and b correspond to disturbances that favour the steady state objective function, the opposite being the case for charts c and d.
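In discrete time, eq. (2) amounts to a running trapezoidal average of the sampled IOF. A small helper, applied here to a synthetic IOF trace:

```python
import numpy as np

def mof(t, iof, t0=0.0):
    """Running mean of IOF over [t0, t] at each sample time, per eq. (2)."""
    mask = t >= t0
    ts, fs = t[mask], iof[mask]
    dt = np.diff(ts)
    # cumulative trapezoidal integral of IOF starting at t0
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (fs[1:] + fs[:-1]) * dt)))
    out = np.full_like(ts, fs[0])          # limit value at t = t0
    out[1:] = cum[1:] / (ts[1:] - ts[0])
    return ts, out

t = np.linspace(0.0, 10.0, 101)
ts, m = mof(t, np.ones_like(t))   # constant IOF -> MOF equals that constant
```

A constant IOF reproduces itself, and a linearly growing IOF yields MOF(t) = t/2, as expected from the integral.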


Figure 2: Response of the RTE system to different ramp disturbances and the influence of DT.

Such observations are summarised in Figure 3, where the dotted area indicates the region of "good" DT values according to the current value of dp/dt. This suggested a short-term on-line tuning of DT (Adaptive RTE, ARTE), which follows, for instance, the straight dashed line in Figure 3 according to the current value of dp/dt. This ARTE policy was then applied over a long-term simulation for a sinusoidal disturbance. It was compared with an RTE strategy using different fixed DT values, and also with no action as a reference. The results in Figure 4 indicate, in opposition to the previous thought, that for a persistent disturbance a fixed DT value works better than the short-term ARTE approach, the latter being better only in the small region corresponding to the initiation of the disturbance. This can be explained by considering that the MOF profiles, although showing relevant information about the performance, hide essential information about the capacity to react to the next value of the disturbance (the current decision variables' values). Therefore, although during an initial time interval bigger values of DT lead to better performance in terms of MOF (Figures 2a and 2b), the corresponding process state is not as well prepared for new disturbances as in the case of lower values of DT. This means that the peaks in Figures 2a and 2b in the no-action curves correspond just to an inertial effect, which disappears in the case of persistent disturbances.

Figure 3: Variation of DTa with dp/dt.

Figure 4: System performance for a sinusoidal disturbance (ARTE vs. fixed DT values and no action).

However, as shown in Figure 4, there is again a DTa value beyond which further improvement by decreasing DT is not perceptible, so this DTa is the desired value for DT. Obviously, such a DT value depends on the disturbance frequency rather than on its instantaneous derivative, and therefore an adaptive RTE tuning procedure is expected to produce better results when based on a mid-term and periodical characterisation of the disturbance in frequency terms (i.e. a Fourier transform). Unfortunately, given the non-linearity of the system, a simple linear identification (e.g. using the step response) is not sufficiently accurate to be trusted in this specific case.

3. The Proposed Tuning Procedure
The proposed methodology for the tuning procedure consists of the following basic steps:


Estimate DTa
Do
    Make DT* = DTa
    Determine n as Tr/DT (Tr is the settling time of the process)
    Find the x value that minimises the objective (Section 2.2)
    Characterise the disturbance in terms of amplitude and frequency
    Find DTa (Section 2.3)
Loop Until DTa = DT*

It should be noted that the resulting rate of change of the manipulated variables must not exceed the capabilities of the control system; in such a case, the corresponding values have to be increased. Besides, the extension to several disturbances, although not studied here, is expected to be governed by the dominant frequency.
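The loop above can be sketched as a simple fixed-point iteration; `tune_DT`, `estimate_DTa` and `find_DTa` are illustrative names, not from the paper, and the disturbance characterisation of Sections 2.2-2.3 is abstracted behind the `find_DTa` callback.

```python
# Illustrative sketch of the tuning loop (hypothetical function names).

def tune_DT(estimate_DTa, find_DTa, settling_time, tol=1e-6, max_iter=100):
    """Iterate until the DTa suggested by the disturbance
    characterisation agrees with the DT currently in use."""
    DTa = estimate_DTa()                 # Estimate DTa
    DT_star = DTa
    for _ in range(max_iter):
        DT_star = DTa                    # Make DT* = DTa
        n = settling_time / DT_star      # Determine n as Tr/DT
        DTa = find_DTa(DT_star, n)       # characterise disturbance, find DTa
        if abs(DTa - DT_star) <= tol:    # Loop Until DTa = DT*
            break
    return DT_star
```

With a contractive `find_DTa`, the iteration converges to the fixed point DTa = DT*.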

4. Conclusions
This work briefly presents some findings about the influence of the RTE parameters on the economic performance of the process. As a result, a methodology for an adequate tuning of such parameters is proposed. It is shown how the parameters related to the control variables can be tuned using only the steady state model. On the other hand, the time parameter needs both a characterisation of the disturbance in terms of amplitude and frequency and further testing over a dynamic simulation of the process. In addition, a periodical characterisation of the disturbances allows an on-line adaptation of the parameters.

5. References
Box, G. and Draper, N., 1969, Evolutionary Operation: A Statistical Method for Process Improvement, Wiley, New York.
Marlin, T.E. and Hrymak, A.N., 1997, In ICCPC, AIChE Symp. Ser. (316), 156.
Perkins, J.D., 1998, In FOCAPO; J.F. Pekny and G.E. Blau, Eds., AIChE Symp. Ser., 94 (320), 15.
Sequeira, S.E., Graells, M. and Puigjaner, L., 2002, Ind. Eng. Chem. Res., 41, 1815.
Williams, T.J. and Otto, R.E., 1960, A.I.E.E. Trans., 79 (Nov), 458.

6. Acknowledgements
One of the authors (S.E.S.) wishes to thank the Spanish "Ministerio de Ciencia y Tecnología" for financial support (grant FPI). Financial support from CICYT (MCyT, Spain) is gratefully acknowledged (project REALISSTICO, QUI-991091).

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Multi-Objective Optimization System MOON² on the Internet

Yoshiaki Shimizu, Yasutsugu Tanaka and Atsuyuki Kawada
Department of Production Systems Engineering, Toyohashi University of Technology, Toyohashi 441-8580, Japan, email: [email protected]

Abstract
Recently, multi-objective optimization (MOP) has been in high demand to deal with the complex and global decision environment of agile and flexible manufacturing. To facilitate its wide application, in this paper we have implemented a novel method named MOON² (Multi-Objective optimization with value function modeled by Neural Network) as a Web-based application. Thereby, everyone can engage in MOP readily, regardless of his/her depth of knowledge about MOP. Moreover, its client-server architecture requires only a web browser, making usage independent of the user's computer configuration and free from software maintenance. After outlining the solution procedure of MOON², the proposed system configuration is explained with an illustration.

1. Introduction
To support agile and flexible manufacturing in a complex and global decision environment, multi-objective optimization (MOP) has attracted increasing interest for solving various problems in chemical engineering (Shimizu, 1999; Bhaskar, Gupta, and Ray, 2000). To avoid the stiffness and shortcomings encountered in conventional methods, we proposed a new prior articulation method named MOON² (Multi-Objective optimization with value function modeled by Neural Network) (Shimizu and Kawada, 2002). To facilitate its wide application, in this paper we have implemented its algorithm as a Web-based application. It is realized as a client-server architecture through the common gateway interface (CGI), so that everyone can use the system regardless of his/her own computing environment. After presenting the algorithm of MOON² briefly, the configuration and usage of the proposed system are shown illustratively.

2. Solution Procedure through MOON²
The problem concerned here is described generally as follows:

(p.1)  Min f(x) = {f1(x), f2(x), ..., fN(x)}  subject to x ∈ X

where x denotes a decision variable vector, X a feasible region, and f an objective function vector some elements of which conflict and are incommensurable with each other. Generally speaking, MOP methods can be classified into prior articulation methods and interactive ones. However, conventional methods of MOP have both advantages and disadvantages relative to each other. For example, since the former derives a value function


separately from the searching process, the decision maker (DM) is not bothered by tedious interactions during the search, as he/she would be in the latter. On the other hand, though the latter can elaborately articulate the attainability among the conflicting objectives, the former pays little attention to that. Consequently, the derived solution may be far from the DM's best compromise. In contrast, MOON² not only resolves these problems but also handles any kind of problem, i.e., linear programs, non-linear programs, integer programs, and mixed-integer programs under multiple objectives, by incorporating appropriate optimization methods.

2.1. Identification of value function using neural networks
First we need to identify a value function that integrates each objective function into an overall one. For this purpose, we adopted a neural network (NN) due to its superior ability for nonlinear modeling. Its training data is gathered through pair comparisons regarding the relative preference of the DM among the trial solutions. That is, the DM is asked to reply which of every pair of trial solutions he/she prefers, and by how much. Just as in AHP (Analytic Hierarchy Process; Saaty, 1980), such responses are given as linguistic statements and then transformed into scores as shown in Table 1. After carrying out such pair comparisons over k trial solutions¹, we obtain a pair comparison matrix whose i-j element a_ij represents the degree of preference of f^i compared with f^j (refer to Fig. 3 below).

Table 1. Conversion table.

Linguistic statement                                Score a_ij
Equally                                             1
Moderately                                          3
Strongly                                            5
Demonstrably                                        7
Extremely                                           9
Intermediate judgment between the two adjacent      2, 4, 6, 8

After all, the pair comparison matrix provides k² training data in total for an NN with a feed-forward structure consisting of three layers. The objective values of every pair, say f^i and f^j, become the 2N inputs, and the i-j element a_ij the single output. Depending on the dimension of the inputs, an appropriate number of hidden nodes is to be used. Using some test problems, we ascertained that a few typical value functions can be modeled correctly by a reasonable number of pair comparisons as long as the number of objective functions is less than or equal to three (Shimizu, 1999). Viewing the thus-trained NN as a function V_NN such that {f^i(x), f^j(x)} → a_ij ∈ R¹, it should be noticed that the following relation holds.

Hence we can rank the preference of any trial solutions easily by the output from the NN, which is calculated by fixing one of the input vectors at an appropriate reference, say f*.

¹ Under mild conditions, the total number of comparisons is limited to k(k−1)/2.
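The assembly of the k² training samples from the pair-comparison matrix can be sketched as follows; the function and array names are illustrative, not part of MOON² itself.

```python
import numpy as np

def pairwise_training_data(F, a):
    """Assemble NN training data from k trial solutions.

    F -- (k, N) array: objective vectors f^1 ... f^k of the trial solutions
    a -- (k, k) pair-comparison matrix, a[i, j] = preference of f^i over f^j
    Returns 2N-dimensional inputs and the scalar targets a_ij.
    """
    k, _ = F.shape
    X, y = [], []
    for i in range(k):
        for j in range(k):
            X.append(np.concatenate([F[i], F[j]]))  # 2N inputs per pair
            y.append(a[i, j])                       # one output: the score
    return np.array(X), np.array(y)                 # k*k samples in total
```

Each row of X concatenates the two objective vectors of a pair, matching the 2N-input, one-output NN structure described above.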

V_NN(f(x); f*) = a ∈ R¹    (2)

Since the responses required from the DM are simple and relative, his/her load in the trade-off analysis is very small.

2.2. Incorporation with optimization methods
Now the problem to be solved can be described as follows:

(p.2)  Max V_NN(f(x); f*)  subject to x ∈ X

Since we can evaluate any solution from V_NN under multiple objectives once x is prescribed, we can apply the optimization method most appropriate for the problem under concern, i.e., a nonlinear program, a direct search method, or even a metaheuristic method like a genetic algorithm, simulated annealing, tabu search, etc. We can also verify that the optimal solution of (p.2) lies on the Pareto optimal solution set as long as Eq. (1) holds (Shimizu and Tanaka, to appear). If we use an algorithm that requires gradients of the objective function, like nonlinear programs, we can calculate them conveniently by the following relation.

dV_NN(f(x); f*)/dx = [∂V_NN(f(x); f*)/∂f(x)] [∂f(x)/∂x]    (3)

We can complete the above calculation by applying numeric differentiation for the first term on the R.H.S. of Eq. (3), while deriving the analytic form for the second term.
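Eq. (3) can be evaluated as sketched below, using forward differences for the first term and a user-supplied analytic Jacobian for the second; the function names are illustrative.

```python
import numpy as np

def value_gradient(V_NN, f, jac_f, x, f_ref, eps=1e-6):
    """dV_NN/dx via Eq. (3): numeric differentiation of V_NN with
    respect to f (first term), analytic Jacobian df/dx (second term)."""
    fx = f(x)
    # forward differences along each objective direction
    dV_df = np.array([(V_NN(fx + eps * e, f_ref) - V_NN(fx, f_ref)) / eps
                      for e in np.eye(len(fx))])
    return dV_df @ jac_f(x)  # chain rule of Eq. (3)
```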

3. Implementation as a Web-Based Application
For various reasons, such as limited knowledge about MOP or an unsuitable computing environment, it is not necessarily easy for everyone to engage in MOP. To deal with such circumstances, we implemented MOON² on the Internet as a client-server architecture that enables us to carry out MOP readily and effectively. The core of the system is divided into a few independent modules, each of which is realized using an appropriate implementation tool. The optimizer module solves a single objective optimization problem by incorporating the identified value function, specifying the problem in a Fortran-like format, and compiling it with a Fortran compiler. Though only sequential quadratic programming (SQP) is implemented presently, various other methods could be made available as mentioned already (a GA was applied elsewhere; Shimizu, 1999). The identifier module provides the modeling process of the value function based on the neural network, where a pair comparison is easily performed just by mouse clicks on the Web page. Moreover, the graphic module generates various graphical illustrations for easy understanding of the results. The user interface of the MOON² system is a set of Web pages created dynamically during the solution process. The pages, described in HTML (hypertext markup language), are viewed in the user's browser, which is a client of the server computer. The server computer is responsible for data management and computation, whereas the client takes care of input and output. That is, users are required to request a certain service and to input some parameters. In turn, they receive the service through visual and/or sensible browser operation. In practice, the user interface is a program creating HTML pages and transferring information between the client and the server. The programs creating HTML pages are


written in the CGI programming language Perl. Because of the role of CGI, every computation is carried out on the server side, and no particular tasks are assigned to the browser side (see Fig. 1). Consequently, users are not only free from maintenance of the system, such as updates, new releases and reinstallation, but also independent of their computing environment, i.e. operating system, configuration, performance, etc.

Figure 1. Task flow through CGI.

Though there are several sites serving (single-objective) optimization libraries (e.g., http://www-neos.mcs.anl.gov/), none is known regarding MOP except for NIMBUS (Miettinen and Makela, 2000; http://nimbus.math.jyu.fi/) so far. However, since NIMBUS belongs to the interactive methods, it has the disadvantages mentioned already. On the other hand, since the articulation process of MOON² is separated from the searching process, DMs can engage in the interaction at their own pace and are not bothered by the hurried/idle responses of interactive methods. It should also be noted that the required responses are not only simple and relative, but also that DMs do not need any particular knowledge about the theory of MOP. Such easy usage, the small load in the trade-off analysis, and the maintenance-free features are expected to facilitate decision making from the comprehensive point of view that is required for agile and flexible problem solving in chemical engineering. The URL of the system is http://www.sc.tutpse.tut.ac.jp/research/multi.html.

4. Illustrative Description of Usage
As a demonstration of the Web-based MOON², we provide a bi-objective design problem regarding the strength of a material (Osyczka, 1984) and a three-objective problem. These examples are most valuable for grasping the whole idea and the solution procedure of MOON². We also provide a third entry on the web page for users' original problems. Taking the first example below, we explain the demonstration of the example problem. Moving to the target Web page, we find a formulation of the problem.

[Problem formulation; only fragments survive extraction:]
g2(x) = 75.2 − x2 ≥ 0,  g3(x) = x2 − 40 ≥ 0,  h1(x) = x1 − 5x2 = 0  (8)

Figure 4. Page representing a final result.

where x1 and x2 denote the tip length of the beam and the interior diameter, respectively, as shown in Fig. 2. Eqs. (4)-(8) represent appropriate design conditions. Moreover, the objectives f1 and f2 represent the volume of the beam and the static compliance of the beam, respectively. Then an input page for the problem description is provided, where the equations of the objective functions and constraints are entered in a format similar to the Fortran language. After repeated input and confirmation, a set of trial solutions for the pair comparisons is generated arbitrarily within the convex hull spanned by the Utopia and nadir solutions². Now, for every pair of trial solutions, the DM is required to make a pair comparison through a mouse click on a radio button indicator. After showing the pair-comparison matrix thus obtained (see Fig. 3) and checking its inconsistency according to AHP theory, the training process of the NN starts. The training results are presented both numerically and graphically. The subsequent stages proceed as follows: select an appropriate optimization method (presently only SQP is available); input the initial guess of SQP for the optimization search; click the start button. The result of the multi-objective optimization is shown graphically, compared with the Utopia and nadir solutions (see Fig. 4). If the DM desires further articulation, an additional search may take place until a satisfying solution is found. In this case, the same procedure is repeated within a narrower search space around the earlier solution to improve it.
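The inconsistency check mentioned above can be sketched with Saaty's consistency ratio; the random-index table below is the standard AHP one, and CR < 0.1 is the conventional acceptance threshold. This is a generic AHP sketch, not the system's actual code.

```python
import numpy as np

def consistency_ratio(a):
    """Saaty's consistency ratio of an n-by-n pair-comparison matrix.
    A perfectly consistent matrix has principal eigenvalue n, hence CR = 0."""
    n = a.shape[0]
    lam_max = np.max(np.real(np.linalg.eigvals(a)))
    ci = (lam_max - n) / (n - 1)                           # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # Saaty's random index
    return ci / ri
```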

5. Conclusion
Introducing a novel and general approach for multi-objective optimization named MOON², in this paper we have implemented its algorithm as a Web-based application. Users need neither any particular knowledge about MOP nor a particular computing environment. They need only a Web browser to submit their problem and to indicate their subjective preference between the pairs of trial solutions generated automatically by the system. Eventually, this can facilitate decision making from the comprehensive point of view that is required to pursue sustainable development in process systems. An illustrative description outlines the proposed system and its usage. Further studies should be devoted to adding various optimization methods besides SQP, as applied elsewhere (Shimizu, 1999; Shimizu and Tanaka, to appear), and to improving user services that enable users to save and manage their private problems. A security routine for usage is also an important aspect left for future studies.

6. References
Bhaskar, V., Gupta, S.K. and Ray, A.K., 2000, Reviews in Chem. Engng., 16, 1.
Miettinen, K. and Makela, M.M., 2000, Comput. & Oper. Res., 27, 709.
Osyczka, A., 1984, Multicriterion Optimization in Engineering with Fortran Programs, John Wiley & Sons, New York.
Saaty, T.L., 1980, The Analytic Hierarchy Process, McGraw-Hill, New York.
Shimizu, Y., 1999, J. Chem. Engng. Japan, 32, 51.
Shimizu, Y. and Kawada, A., 2002, Trans. of Soc. Instrument Control Engnrs., 38, 974.
Shimizu, Y. and Tanaka, Y., "A Practical Method for Multi-Objective Scheduling through Soft Computing Approach," JSME Int. J., to appear.

² For example, a Utopia is composed of f_i(x_i*) whereas a nadir is composed of min_j f_i(x_j*), (i = 1, ..., N), where x_i* is the optimal solution of the problem "max f_i(x) subject to x ∈ X".


Reduced Order Dynamic Models of Reactive Absorption Processes S. Singare, C.S. Bildea and J. Grievink Department of Chemical Technology, Delft University of Technology, Julianalaan 136, 2628 BL, Delft, The Netherlands

Abstract
This work investigates the use of reduced order models of reactive absorption processes. Orthogonal collocation (OC), finite difference (FD) and orthogonal collocation on finite elements (OCFE) discretizations are compared. All three methods are able to describe the steady state behaviour accurately, but they predict different dynamics. In particular, the OC dynamic models show large unrealistic oscillations. Balanced truncation, balanced residualization and optimal Hankel singular value approximation are applied to linearized models. Results show that a combination of OCFE, linearization and balanced residualization is efficient in terms of model size and accuracy.

1. Introduction
Many important chemical and petrochemical industrial processes, such as the manufacture of sulphuric acid, nitric acid and soda ash, and the purification of synthesis gases, are performed by the reactive absorption of gases in liquids in large scale processing units. For example, the whole fertilizer industry relies on absorption processes. Rising energy prices and more stringent requirements on pollution prevention impose a need to continuously update the processing conditions, design and control of industrial absorption processes. Traditionally, the design of Reactive Absorption Processes (RAP) relies on equilibrium models, whose accuracy has been extensively criticised by both academic and industrial practitioners. In contrast, the rate-based approach (Kenig et al., 1999), accounting for both diffusion and reaction kinetics, provides a very accurate description of RAP. Solving such models requires discretization of the spatial co-ordinates in the governing PDEs. This gives rise to a large set of non-linear ODEs that can be conveniently handled for the purpose of steady state simulation. However, the size of the model becomes critical in a series of applications. For example, real-time optimisation requires fast, easy-to-solve models because of the repetitive use of the model by the iterative algorithm; in model based control applications, the simulation time should be 100 to 1000 times shorter than the time scale of the real event. This work investigates the use of reduced order RAP models for the purpose of dynamic simulation, controllability analysis and control system design. Three different discretization methods, namely orthogonal collocation (OC), finite difference (FD) and orthogonal collocation on finite elements (OCFE), are compared. All three methods are able to accurately describe the steady state behaviour. However, the predicted dynamic

behaviour is very different. In particular, the OC dynamic models show large unrealistic oscillations. In view of control applications, different reduction techniques, including balanced truncation, balanced residualization and optimal Hankel singular value approximation, were applied to linearized models. Results show that a combination of OCFE, linearization and balanced residualization is efficient in terms of model size and accuracy.

2. Model Description
The reactive absorption column is modelled using the well-known two-film model (Fig. 1). In this model, the resistance to mass transfer is concentrated in a thin film adjacent to the phase interface, and mass transfer occurs within this film by steady-state molecular diffusion. The axial co-ordinate z represents the length of the column. The gaseous component A diffuses through the film towards the liquid bulk and reacts with the liquid component B. In the present model, the assumptions of plug flow and constant temperature and pressure are made. The dynamic mass balance equations in non-dimensional form for the gas bulk and liquid bulk phases can be written as follows.

2.1. Bulk gas phase mass balance
It is assumed that no reaction occurs in the bulk gas phase and the gas film:

    εr ∂Y_j/∂τ = −∂Y_j/∂z − α_j (Y_j − h_j C_j)    (1)
    B.C. at z = 0: Y_j = Y_j,in

where Da is the Damköhler number, α_j and h_j (j = A, B) are dimensionless mass transfer coefficients and Henry's constants, respectively, and r is the ratio between the gas and liquid residence times.

GF

LF

LB

Fig 1. Schematic of reactive absorption column model.


2.2. Bulk liquid phase
The second order reaction occurs in the bulk of the liquid, as well as in the liquid film:

    (1−ε) ∂C_j,L/∂τ = ∂C_j,L/∂z + β_j (∂C_j/∂x)|_(x=1) − Da C_A,L C_B,L (1−ε)    (2)
    B.C. at z = 1: C_j,L = C_j,in;  I.C. at τ = 0: C_j,L = C°_j,L(z)

where β_j are dimensionless diffusion coefficients.

2.3. Liquid film mass balance
Neglecting the fast dynamics, application of Fick's law of diffusion gives rise to the following set of second order differential equations:

    ∂²C_j/∂x² = Ha_j² C_A C_B    (3)
    B.C. at x = 0: m_j (Y_j − h_j C_j) = ∂C_j/∂x;  at x = 1: C_j = C_j,L

Here, Ha_j is the Hatta number, which represents the ratio of the kinetic reaction rate to the mass transfer rate. As a test case, the data reported by Danckwerts and Sharma (1966) is chosen. The dimensionless parameters are Da = 1.87 10^, α_A = 37.92, β_A = 6.94, β_B = 3.82, Ha_A = 15.48, Ha_B = 20.88, m_A = 5.46, h_A = 1.62, r = 0.325.

3. Solution Method
3.1. Steady state
The complete model of the reactive absorption column is solved using three different discretization methods: (1) orthogonal collocation (OC), (2) finite difference (FD) and (3) orthogonal collocation on finite elements (OCFE). In the case of OC and OCFE, the roots of Jacobi orthogonal polynomials are used as collocation points. The FD method requires two discretization schemes in the axial direction: the backward finite difference method (BFDM, 2nd order accuracy) in the up-axial direction for the bulk gas phase, and the forward finite difference method (FFDM, 2nd order accuracy) in the down-axial direction for the bulk liquid phase. In the FD scheme, the liquid film equations are discretized using the central finite difference method (CFDM) of 4th order accuracy. The whole set of equations is written in gPROMS, which solves the set of algebraic non-linear equations using the Newton-Raphson method. The different numerical methods are summarized in Table 1. Results of the steady state calculations are shown in Figure 2, where the gas and liquid concentration profiles along the column are depicted.

Table 1. Discretization method and number of variables and equations involved.

Discretization method   # discretization points (axial / film)   # of variables and equations
OC                      17 / 7                                   289
FD                      51 / 21                                  2295
OCFE                    21 / 13                                  609

The FD scheme with 51 and 21 discretization points in the axial and film co-ordinates, respectively, is taken as the basis for comparison with the OC and OCFE methods. In the OC scheme, 15 and 5 internal collocation points (axial and film co-ordinates, respectively) are needed; in the OCFE scheme, 5 and 3 finite elements with 3 internal collocation points in each finite element are needed. In steady state simulation, the OC scheme results in the lowest number of variables and provides accurate results, but it is found that it cannot be used beyond 22 discretization points due to ill-conditioning of the matrix calculations. In such situations, OCFE provides improved stability with a slightly increased number of variables. As seen, FD requires the largest number of variables to obtain accurate results. It should be noted that the definition and use of the dimensionless variables allows a robust solution of the model equations, easy convergence being obtained for all three discretization methods even for very crude solution estimates (for example, all concentrations set to 0.5).
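For illustration, interior collocation points obtained from the roots of Jacobi polynomials (as used in the OC and OCFE schemes) can be generated with SciPy; the mapping to a unit axial co-ordinate and the appended endpoints are assumptions of this sketch.

```python
import numpy as np
from scipy.special import roots_jacobi

def collocation_points(n_interior, alpha=0.0, beta=0.0):
    """Interior collocation points on (0, 1) from the roots of the
    Jacobi polynomial P_n^(alpha, beta), plus the two endpoints."""
    x, _ = roots_jacobi(n_interior, alpha, beta)  # roots lie in (-1, 1)
    interior = 0.5 * (np.sort(x) + 1.0)           # map to (0, 1)
    return np.concatenate(([0.0], interior, [1.0]))
```

With alpha = beta = 0 this reduces to Gauss-Legendre points, which cluster towards the boundaries where the concentration profiles are steepest.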

3.2. Dynamic
The dynamic simulation of the RAC model was carried out for the above three cases. The dynamic response of the gas and liquid outlet concentrations, Y_AG,out and C_BL,out, to changes in the inlet flow rates and inlet concentrations Y_AG,in and C_BL,in was investigated. Figure 3 presents results for a 0.05 step change in Y_AG,in (similar results were obtained for the other inputs). The expected, realistic response is a gradual increase of the outlet concentration occurring after a certain dead time. The computed response showed oscillations, which are attributed to the numerical approximation of the convection term. This effect is discussed by Lefevre et al. (2000) in the context of tubular reactors.

Fig 2. Steady state profile for different discretization methods: OC (15,5), FD (50,20), OCFE (5,3).


Fig. 3. Dynamic response of gas-outlet concentration to a step change of gas-inlet concentration.

The OC scheme produces a large oscillatory response right from the start, without any dead time. Thus, it is not suitable for dynamic simulation purposes. In the case of OCFE and FD, the oscillatory behaviour starts after some dead time, and the amplitude is much smaller compared to OC. As expected, the oscillations are reduced by increasing the number of discretization points. Taking into account the size of the model and the shape of the dynamic response, OCFE seems the preferred scheme. Further, we used the "Linearize" routine of gPROMS to obtain a linear model. Starting with the OCFE discretization, the linear model has 48 states. This might be too much for the purposes of controllability analysis and control system design. Therefore, we applied different model-reduction techniques (Skogestad and Postlethwaite, 1996). Fig. 4 compares the Bode diagrams of the full-order model and the models reduced to n = 10 states by different techniques. For the frequency range of interest in industrial applications (10 rad/time unit, corresponding roughly to 5 rad/min), balanced residualization offers the best approximation.
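To illustrate the kind of reduction compared in Fig. 4, a square-root balanced truncation of a stable linear model (A, B, C) can be sketched as below; balanced residualization differs in that the weak states are residualized (which matches the DC gain) rather than discarded. This is a generic sketch, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, n_r):
    """Reduce a stable state-space model (A, B, C) to n_r states by
    square-root balanced truncation; also returns the Hankel singular values."""
    # Gramians: A P + P A' = -B B',  A' Q + Q A = -C' C
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    R = cholesky(P, lower=True)            # P = R R'
    L = cholesky(Q, lower=True)            # Q = L L'
    U, s, Vt = svd(L.T @ R)                # s: Hankel singular values
    S = np.diag(s[:n_r] ** -0.5)
    T = R @ Vt[:n_r].T @ S                 # truncated balancing transformations
    Ti = S @ U[:, :n_r].T @ L.T
    return Ti @ A @ T, Ti @ B, C @ T, s
```

The truncated model inherits stability, and the H-infinity error is bounded by twice the sum of the discarded Hankel singular values, which is why small trailing values (here, states beyond n = 15) can be dropped safely.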


Fig 4. Comparison of reduced-order models obtained by different techniques.


Fig. 5. Comparison of linear models of different orders: a) Bode diagram, b) step response.

Figure 5 compares the effect of the number of states retained in the reduced order model, using both the Bode diagram and the step response. The linear model predicts the dead time and the speed of response well. For n = 15, the full and reduced-order models coincide. Reasonable results are obtained for n = 10. If the model is further reduced (n = 5), it fails to predict the dynamic behaviour. From the time response presented in Figure 5, it seems that a second-order model including dead time should suffice. It is possible to identify such a model using, for example, real plant data. However, we are interested in obtaining the model starting from a first-principles model. This is the subject of current research.

4. Conclusions
- A dynamic model of a reactive absorption column was developed in non-dimensional form. Three discretization methods (OC, FD, OCFE) were used to solve the model equations. For steady state process synthesis and optimisation, orthogonal collocation based methods are found to be accurate and robust.
- For dynamic simulation, pure OC is unsuitable. OCFE is found to give a realistic representation of the column's behaviour together with a small-size model, and presents a good alternative to the FD scheme. Linear model reduction techniques are further applied to reduce the model for control design purposes. Balanced residualization with 15 states approximates the column dynamics satisfactorily.
- In future work, more complex reaction schemes, Maxwell-Stefan equations for diffusion, the heat balance, an axial dispersion term, hydrodynamics, thermodynamics and tray columns will be included in the model.

5. References
Danckwerts and Sharma, 1966, Chem. Engrs. (London), CE 244.
Kenig, E.Y., Schneider, R., Gorak, A., 1999, Chem. Eng. Sci., 54, 5195.
Lefevre, L., Dochain, D., Magnus, A., 2000, Comp. & Chem. Engng., 24, 2571.
Skogestad, S. and Postlethwaite, I., 1996, Multivariable Feedback Control - Analysis and Design, John Wiley & Sons, Chichester.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Separation of Azeotropic Mixtures in Closed Batch Distillation Arrangements S. Skouras and S. Skogestad Norwegian University of Science & Technology, Department of Chemical Engineering Sem Saelandsvei 4, 7491, Trondheim, Norway e-mail: [email protected], [email protected]

Abstract
Batch time (energy) requirements are provided for the separation of ternary zeotropic and heteroazeotropic mixtures in three closed batch column configurations. Two multivessel column modifications (with and without vapor bypass) and a conventional batch column operated under the cyclic policy were studied. The multivessel column always performs better than the conventional column, with time savings varying from 24% up to 54%. Moreover, by eliminating the vapor bypass in the multivessel, additional time savings of 26% can be achieved for a zeotropic mixture. However, the multivessel with the vapor bypass should be used for the heteroazeotropic mixtures.

1. Introduction
The multivessel batch column is a combination of a batch rectifier and a batch stripper. The column has both a rectifying and a stripping section, and therefore it is possible to obtain a light and a heavy fraction simultaneously from the top and the bottom of the column, while an intermediate fraction may also be recovered in the middle vessel. Two modifications of the multivessel are studied here: first, the vapor bypass modification, where the vapor from the stripping section bypasses the middle vessel and enters the rectifying section, and second, a modification where both liquid and vapor streams enter the middle vessel. We refer to the first modification as the conventional multivessel and to the second one as the multivessel without vapor bypass. The third column configuration studied in this work is a conventional batch column (rectifier) operated with the cyclic policy; we refer to this column as the cyclic column. The cyclic policy has been noted before in the literature by Sorensen and Skogestad (1994) and is easier to operate and control. All column configurations are shown in Fig. 1. Batch time comparisons are provided for the separation of one zeotropic and two heteroazeotropic systems. We consider batch time a direct indication of energy consumption, since the boilup is constant for all columns. The columns are operated as closed systems. In the multivessel a ternary mixture is separated simultaneously in one such closed operation and the final products are accumulated in the vessels (Wittgens et al., 1996). In the cyclic column the products are separated one at a time, and for a ternary mixture a sequence of two such closed operations is needed. An indirect level control strategy based on temperature feedback control is implemented, as proposed by Skogestad et al. (1997).
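The indirect level control idea can be illustrated with a minimal sketch: the liquid flow drawn from a vessel is manipulated by a proportional controller acting on a temperature measured in the column section below it, so the vessel holdup is controlled only indirectly. All names, gains and numbers below are illustrative assumptions, not values from Skogestad et al. (1997).

```python
def reflux_from_vessel(T_tray, T_setpoint, L_nominal, Kc):
    """P-only temperature controller (illustrative): if the tray below a
    vessel runs too hot (heavy component creeping upwards), increase the
    liquid reflux drawn from that vessel; flows cannot go negative."""
    return max(L_nominal + Kc * (T_tray - T_setpoint), 0.0)

# Example: tray 2 K above setpoint -> reflux doubled from its nominal value
L = reflux_from_vessel(T_tray=80.0, T_setpoint=78.0, L_nominal=1.0, Kc=0.5)
```

A temperature below the setpoint reduces the reflux, letting the vessel accumulate holdup; no explicit level measurement is needed.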



Figure 1. a, b) Multivessel batch column with and without vapor bypass, c) Cyclic batch column.

2. Simulations
2.1. Zeotropic systems
The zeotropic system methanol-ethanol-1-propanol was studied.
Multivessel column: The zeotropic mixture is separated simultaneously in one closed operation. All three original components are accumulated in the three vessels at the end of the process, as shown in Figure 2a.
Multivessel column without vapor bypass: The separation is performed as mentioned above. With this modification the light component is depleted faster from the middle vessel. This leads to improved composition dynamics in the middle vessel and can be advantageous for some separations, as we will show later.
Cyclic column: The separation is performed in two cycles, resembling the direct split in continuous columns. During Cycle 1 the light component (methanol) is accumulated in the top vessel (Fig. 2b). Cycle 2 is almost a binary separation of the two components left in the still. The intermediate component (ethanol) is discharged from the top vessel, while the heaviest one (1-propanol) remains in the still (Fig. 2c).


Figure 2. a) Simultaneous separation of a zeotropic mixture in the multivessel column, b, c) Separation of a zeotropic mixture in two cycles in the cyclic column.

2.2. Azeotropic systems
Two classes of heteroazeotropic systems were studied, namely classes 1.0-2 and 1.0-1a. Skouras and Skogestad (2003) provided simulated results for the separation of different classes of heteroazeotropic systems in a closed multivessel-decanter hybrid.

2.2.1. Topological class 1.0-2
Water and 1-butanol form a heterogeneous azeotrope, and an immiscibility gap exists over a limited region of ternary compositions. The stability of the stationary points of the system and the distillation line map modeled by UNIQUAC are shown in Figure 3a. One distillation boundary, running from methanol (unstable node) to the binary heteroazeotrope (saddle), divides the composition space into two regions. The system belongs to Serafimov's topological class 1.0-2 (Hilmen, 2002).
Multivessel column: For separating a heteroazeotropic mixture of this topological class, a decanter has to take the place of the middle vessel. The mixture is separated simultaneously in one closed operation with an initial build-up period. During this period the composition profile is built up and the heteroazeotrope accumulates in the middle vessel (Fig. 4a). In the second (decanting) period the heteroazeotrope is decanted and the organic phase is refluxed back into the column. The aqueous phase accumulates in the middle vessel, while methanol and 1-butanol are accumulated in the top and bottom vessel, respectively, as shown in Fig. 4b.
Cyclic column: The separation is performed in two cycles with a build-up period in between. During Cycle 1, methanol is accumulated in the top vessel (Fig. 5a). Then a build-up period is needed, during which the heteroazeotrope accumulates in the top. Cycle 2 is a heteroazeotropic distillation with a decanter taking the place of the top vessel. The aqueous phase is gradually accumulated in the top vessel (see Fig. 5b) and the organic phase is refluxed back into the column. The still becomes enriched in 1-butanol (Fig. 5b).


Figure 3. Azeotropic systems of a) topological class 1.0-2 and b) topological class 1.0-1a.


Figure 4. Separation in the multivessel column: a) Build-up period, b) Decanting period.
2.2.2. Topological class 1.0-1a
Ethyl acetate and water form a heterogeneous azeotrope, and an immiscibility gap exists over a limited region of ternary compositions. The corresponding distillation line map is shown in Figure 3b. The system belongs to Serafimov's topological class 1.0-1a.
Multivessel column: For this class of heteroazeotropic systems the decanter has to be placed at the top of the column. The mixture is separated simultaneously in one closed operation after an initial build-up period, during which the heteroazeotrope accumulates in the top vessel. In the second (decanting) period the heteroazeotrope is decanted and the organic phase is refluxed back into the column. The aqueous phase accumulates in the top vessel, ethyl acetate in the middle vessel and acetic acid in the bottom. At the end of the process three pure products are accumulated in the vessels.
Cyclic column: The separation is again performed in two cycles, but with a build-up period before the cycles, during which the heteroazeotrope accumulates in the top vessel. During Cycle 1 this heteroazeotrope is decanted and the organic phase is refluxed back into the column; the aqueous phase is accumulated in the top vessel. Cycle 2 is almost a binary separation between ethyl acetate and acetic acid: the former is recovered in the top vessel while the latter remains in the still.


Figure 5. Separation in the cyclic column: a) Cycle 1, b) Cycle 2.


3. Results
All simulations were terminated when the specifications in all vessels were fulfilled. Results are provided for two specification sets.
i) Zeotropic system: x_spec^1 = [0.99, 0.97, 0.99], x_spec^2 = [0.99, 0.98, 0.99]. In the second set a higher purity is required for the intermediate component.
ii) Azeotropic mixture of class 1.0-2: x_spec^1 = [0.99, 0.97, 0.99], x_spec^2 = [0.99, 0.98, 0.99]. The heteroazeotrope is the intermediate 'component' (saddle). In the multivessel it is accumulated in the middle vessel/decanter; after decantation the aqueous phase is accumulated in the middle vessel. In the cyclic column the aqueous phase is the top product of Cycle 2. The specification for the aqueous phase (x_aq = 0.98) in the second set is close to the equilibrium value (x_aq,eq = 0.981) determined by the binodal curve at 25°C.
iii) Azeotropic mixture of class 1.0-1a: x_spec^1 = [0.97, 0.97, 0.99], x_spec^2 = [0.98, 0.97, 0.99]. The heteroazeotrope is the light 'component' (unstable node). After decantation the aqueous phase is accumulated in the top vessel/decanter in the multivessel column. In the cyclic column the aqueous phase is the top product of Cycle 1. The specification for the aqueous phase (x_aq = 0.98) is close to the experimental equilibrium value (x_aq,exp = 0.985) determined by the binodal curve at 30°C.
Charging of the column, preheating, product discharging and shutdown are not included in the time calculations. All these periods would be the same for both the multivessel and the cyclic column. The only exception is the product discharging period, which is longer for the cyclic column, since the products are separated one at a time and have to be discharged twice. All columns have a sufficient number of trays for the given separation, and the same number of stages was used in both the multivessel and the cyclic column in order to make the time comparisons fair. A modified multivessel without a vapor bypass (Fig. 1b) was also studied.
The conventional multivessel (Fig. 1a) with the vapor bypass has an inherent inability to 'boil away' the light component from the middle vessel. The idea behind the modified multivessel is that the vapor stream entering the middle vessel helps the light component to be boiled off faster, thus improving the composition dynamics in the middle vessel. The results in Table 1 confirm this. For the zeotropic mixture the modified multivessel is 26% faster. The improvement is more pronounced for the separation of the first heteroazeotropic mixture, of class 1.0-2, where the time savings go up to 37%. This is because the accumulation of the aqueous phase takes place in the middle vessel (for this class of mixtures) and is very time consuming. Therefore, the improved middle-vessel dynamics have a greater effect on the separation of a heteroazeotropic mixture of class 1.0-2 than on a zeotropic mixture. A rather surprising result is the one observed for the separation of the heteroazeotropic system of class 1.0-1a. The modified multivessel does not exhibit any significant advantage over the conventional one (7% time savings for specification set 1) and can even be slower (6% more time consuming for specification set 2). The explanation is simple. For systems of class 1.0-1a the heteroazeotrope is the unstable node and is accumulated in the top vessel. Therefore the liquid-liquid split and the accumulation of the aqueous phase take place in a decanter at the top. The dynamics of the top


vessel dominates the separation, and the improved dynamics of the middle vessel in the modified multivessel no longer play an important role.

Table 1. Batch time calculations and time savings (basis: conventional multivessel; negative entries are time savings relative to the basis, positive entries are additional time).

System (specification set)         Conventional      Modified          Cyclic
                                   multivessel       multivessel       column
                                   (with bypass)     (w/o bypass)
Zeotropic    [0.99,0.97,0.99]      3.6 hr            -26%              +28%
Zeotropic    [0.99,0.98,0.99]      3.9 hr            -26%              +24%
Class 1.0-2  [0.99,0.97,0.99]      3.1 hr            -29%              +54%
Class 1.0-2  [0.99,0.98,0.99]      4.6 hr            -37%              +44%
Class 1.0-1a [0.97,0.97,0.99]      2.6 hr            -7%               +53%
Class 1.0-1a [0.99,0.98,0.99]      3.7 hr            +6%               +46%
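Reading the table: the percentage entries are relative batch times with the conventional multivessel as basis, so absolute times follow directly. A small sketch, using the first zeotropic row:

```python
def batch_time(base_hr, relative_pct):
    """Absolute batch time from a Table 1 entry: a -26% entry means 26%
    less time than the conventional-multivessel basis for that row."""
    return base_hr * (1.0 + relative_pct / 100.0)

# Zeotropic mixture, first specification set (basis 3.6 hr)
modified = batch_time(3.6, -26)  # multivessel without vapor bypass
cyclic = batch_time(3.6, +28)    # cyclic column
```

This makes the abstract's headline numbers concrete: the cyclic column needs roughly 4.6 hr where the modified multivessel needs under 2.7 hr.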

However, a modified multivessel for the separation of heteroazeotropic mixtures is problematic from a practical point of view. It is not practical to have a decanter with a vapor phase bubbling through it. Moreover, the decanter is operated at a temperature lower than that of the column, so feeding a hot vapor stream into the decanter is unwise. A look at all the results presented in this work reveals that the multivessel column is in all cases preferable to the cyclic column in terms of batch time (energy) savings. For the separation of the zeotropic mixture, the modified multivessel without the vapor bypass seems to be the best choice, with time savings up to 52% compared to the cyclic column. For the separation of heteroazeotropes, time savings and practical considerations lead to the choice of the conventional multivessel with the vapor bypass; time savings vary from 25% up to 50% depending on the mixture separated. Besides the time savings achievable by multivessel distillation, one should also mention its much simpler operation compared to the cyclic column: the final products are accumulated in the vessels at the end of the process and there is no need for product change-overs.

4. Conclusions
The multivessel column is superior to the cyclic column, in terms of batch time (energy) consumption, for all separations studied here. A modified multivessel column without vapor bypass is proposed for the separation of zeotropic systems. However, the conventional multivessel configuration with vapor bypass is proposed for the separation of heterogeneous azeotropic systems.

5. References
Hilmen, E.K., Kiva, V.N., Skogestad, S., 2002, AIChE J., 48 (4), 752-759.
Skogestad, S., Wittgens, B., Litto, R., Sorensen, E., 1997, AIChE J., 43 (4), 971-978.
Skouras, S., Skogestad, S., 2003, Chem. Eng. and Proc., to be published.
Sorensen, E., Skogestad, S., 1994, PSE '94, 5th Int. Symp. on Proc. Syst. Eng., 449-456.
Wittgens, B., Litto, R., Sorensen, E., Skogestad, S., 1996, Comp. Chem. Engng., S20, 1041-1046.


Numerical Bubble Dynamics
Anton Smolianski^a, Heikki Haario^b and Pasi Luukka^b
^a Institute of Mathematics, Zurich University, CH-8057 Zurich, Switzerland, email: [email protected]
^b Laboratory of Applied Mathematics, Lappeenranta University of Technology, P.O. Box 20, FIN-53851 LPR, Finland, email: heikki.haario@lut.fi, [email protected]

Abstract
A computational study of the dynamics of a gas bubble rising in a viscous liquid is presented. The proposed numerical method makes it possible to simulate a wide range of flow regimes, accurately capturing the shape of the deforming interface of the bubble and the surface tension effect, while maintaining good mass conservation. With the present numerical method, the high-Reynolds-number wobbling bubble regime, exhibiting an unsymmetric vortex path in the wake, has been successfully simulated. The computed time evolution of the bubble's interface area, position and rise velocity shows good agreement with the available experimental data. Some results on bubble coalescence phenomena are demonstrated. Our studies reveal that plausible results can be obtained with two-dimensional numerical simulations when a single buoyant bubble or the coalescence of a pair of bubbles is considered.

1. Introduction
The rise of a gas bubble in a viscous liquid is a very complicated, non-linear and non-stationary hydrodynamical process. It is usually accompanied by a significant deformation of the bubble, indicating a complex interplay between fluid convection, viscosity and surface tension. The diverse shapes of the bubble resulting from this deformation cause a large variety of flow patterns around the bubble, and vice versa. A number of experimental studies have addressed this problem. Early studies include the rise of a bubble in an inviscid and a viscous liquid; see (Hartunian & Sears 1957), (Walters & Davidson 1962), (Walters & Davidson 1963), (Wegener & Parlange 1973) and (Bhaga & Weber 1981). Approximate theoretical solutions have been obtained for either low (Taylor & Acrivos 1964) or high (Moore 1959) Reynolds numbers under the assumption that the bubble remains nearly spherical. We employ the level-set method (see (Sethian 1999)), which makes it possible to compute topological changes of the interface (such as mergers or breakups). We use the finite element method, which relies on a global variational formulation and thus naturally incorporates the coefficient jumps and the singular interface-concentrated force. The combination of finite elements and the level-set technique allows us to localize the interface precisely, without introducing any artificial parameters such as an interface thickness. As a whole, our computational method takes advantage of combining the finite element spatial discretization, the operator-splitting temporal discretization and the level-set interface representation. In (Tornberg 2000) a combination of the finite element and level-set methods was recently used to simulate a merger of two bubbles in a viscous liquid; however, that method is restricted to low Reynolds number flows only.


Using the presented computational method we provide a systematic study of the diverse shape regimes of a single buoyant bubble, recovering all main regimes in full agreement with available experimental data (for a detailed analysis see (Smolianski et al.)). Next, we present results on bubble coalescence phenomena.

2. Numerical Method
As a simulation tool we employ the general computational strategy proposed in (Smolianski 2001) (see also (Smolianski et al.)), which is capable of modelling any kind of two-fluid interfacial flow. The dynamics of a gas bubble in a liquid can thus be considered as a particular application of this computational approach. We consider an unsteady laminar flow of two immiscible fluids. Both fluids are assumed to be viscous and Newtonian. Moreover, we suppose that the flow is isothermal, thus neglecting viscosity and density variations due to changes in the temperature field. We assume also that the fluids are incompressible. Presuming, in addition, the fluids to be homogeneous, we may infer that the densities and viscosities are constant within each fluid. We utilize the sharp-interface (zero interfacial thickness) approach; the density and viscosity therefore have a jump discontinuity at the interface (see, e.g., (Batchelor 1967)). We assume that the interface has a surface tension. We also suppose that there is no mass transfer through the interface (i.e. the interface is impermeable) and that there are no surfactants present in the fluids (hence, there is no species transport along the interface). The surface tension coefficient is thus assumed constant.

Our computational approach to the numerical modelling of interfacial flows can be summarized as follows:
Step 0. Initialization of the level-set function and velocity.
For each n-th time-step, n = 1, 2, ...:
1. Computation of interface normal and curvature.
2. Navier-Stokes convection step.
3. Viscous step.
4. Projection step.
5. Level-set convection step.
6. Reinitialization step.
7. Level-set correction step.
The steps 1.-7. are performed successively, and each of the steps 2.-5. may use its own local time-increment size. On each step the last computed velocity is exploited; the viscous and projection steps use the interface position found on the previous global time-step. It is also noteworthy that the steps 5.-7. can be computed fully in parallel with step 2. The whole algorithm is very flexible; it permits, for instance, computing an unsteady interfacial Stokes flow simply by omitting the Navier-Stokes convection step.
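The seven steps above can be sketched as a single driver loop. Every solver callable below is a placeholder standing in for the corresponding sub-solver described in the text (not an actual API), which is also what makes the Stokes-flow variant trivial: simply pass an identity function for the convection step.

```python
def advance_one_step(state, solvers, dt):
    """One global time-step of the two-fluid algorithm (steps 1-7 in the
    text). `solvers` is a dict of callables standing in for the actual
    sub-solvers; each of steps 2-5 may internally sub-cycle with its own
    local time increment."""
    phi, u, p = state                                     # level-set, velocity, pressure
    n, kappa = solvers["normal_curvature"](phi)           # step 1
    u = solvers["ns_convection"](u, dt)                   # step 2
    u = solvers["viscous"](u, phi, dt)                    # step 3 (old interface)
    u, p = solvers["projection"](u, phi, kappa, dt)       # step 4 (surface tension enters here)
    phi = solvers["ls_convection"](phi, u, dt)            # step 5
    phi = solvers["reinitialize"](phi)                    # step 6: restore distance property
    phi = solvers["ls_correction"](phi)                   # step 7: enforce mass conservation
    return phi, u, p

# Demo with trivial stand-in solvers, just to exercise the loop structure
solvers = {
    "normal_curvature": lambda phi: (None, 0.0),
    "ns_convection": lambda u, dt: u + dt,
    "viscous": lambda u, phi, dt: u,
    "projection": lambda u, phi, kappa, dt: (u, 0.0),
    "ls_convection": lambda phi, u, dt: phi,
    "reinitialize": lambda phi: phi,
    "ls_correction": lambda phi: phi,
}
phi, u, p = advance_one_step((1.0, 0.0, 0.0), solvers, dt=0.1)
```

The remark that steps 5-7 can run in parallel with step 2 corresponds to the level-set updates depending only on the previously computed velocity, not on the new momentum solve.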

3. Bubbles in Different Shape Regimes
Figure 1 shows the typical bubble shapes and velocity streamlines in the frame of reference of the bubble. Although all experimental results correspond to three-dimensional bubbles and our computations are only two-dimensional, a qualitative comparison is possible. The comparison enables us to conclude that our numerical bubble shapes are in good agreement with the experimental predictions of (Bhaga & Weber 1981) and (Clift et al. 1978).


Figure 1. Different computed shapes of bubbles: (a) spherical with Re=1, Eo=0.6, (b) ellipsoidal with Re=20, Eo=1.2, (c) dimpled ellipsoidal cap with Re=35, Eo=125, (d) skirted with Re=55, Eo=875, (e) spherical cap with Re=94, Eo=115 and (f) wobbling with Re=1100, Eo=3.0; ρ1/ρ2 = 10^3, μ1/μ2 = 10^2.
As seen from the figure, all basic shapes are successfully recovered with the parameter values lying exactly within the limits given in (Clift et al. 1978). Interesting phenomena are observed in the case of a wobbling bubble. The wobbling typically appears at sufficiently high Reynolds numbers when the Eotvos number is, roughly, in the range between 1 and 100 (see (Clift et al. 1978)). Since the typical range of Reynolds numbers corresponding to the wobbling motion is approximately the same as for the spherical cap regime, the wobbling bubble (see Figure 2) retains a nearly spherical cap shape. However, at a later stage of the motion, a remarkable flattening of the bubble top can be observed (Figure 2). The bubble bottom undergoes permanent deformations resulting from the unstable and unsymmetric evolution of the bubble wake. In particular, unsymmetric pairs of secondary vortices are clearly observed in the wake as a consequence of the asynchronous separation of the boundary layer from different sides of the bubble surface. This flow pattern bears some resemblance to the von Karman vortex path typically formed behind a rigid body in a highly convective flow. We are unaware of any other successful numerical simulations in the wobbling bubble regime.
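The shape map above is organized by two dimensionless groups. With the standard definitions (liquid density ρ and viscosity μ, bubble rise velocity U, equivalent diameter d, density difference Δρ, surface tension σ, gravity g), they can be computed as below; the numerical values in the example are illustrative, not taken from the paper.

```python
def reynolds(rho_liquid, U_rise, d_bubble, mu_liquid):
    """Re = rho * U * d / mu: inertia vs. viscous forces."""
    return rho_liquid * U_rise * d_bubble / mu_liquid

def eotvos(delta_rho, d_bubble, sigma, g=9.81):
    """Eo = g * drho * d^2 / sigma: buoyancy vs. surface tension."""
    return g * delta_rho * d_bubble**2 / sigma

# Illustrative numbers: a 1 cm bubble rising at 0.2 m/s in a water-like liquid
Re = reynolds(1000.0, 0.2, 0.01, 1e-3)
Eo = eotvos(1000.0, 0.01, 0.072)
```

High Re with moderate Eo places a bubble in the wobbling region of the map, while high Eo at low Re gives the skirted and dimpled-cap shapes.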

4. Results on Coalescence of Bubbles
We consider a rectangular domain of unit width with two initially circular bubbles inside; the radius of the upper bubble is 0.25 and the radius of the lower one is 0.2. The bubbles have a common axis of symmetry. We prescribe a zero velocity field at the initial moment. The dynamics of the bubbles depends, to a large extent, on the initial distance between them and on the magnitude of the surface tension. If the surface tension is high enough, no merger happens: the bubbles develop nearly ellipsoidal shapes and rise separately (see, e.g., (Unverdi & Tryggvason 1992)). Hence, in order to simulate a merger process, we take a comparatively small surface tension coefficient. Figures 3-4 illustrate the process of bubble merger in different shape regimes. During the rise, two oppositely signed vortices are created in the wake of the larger bubble. This produces a lower-pressure region behind the large bubble and generates flow streaming into the symmetry line of the flow. As a result, the front portion of the small bubble becomes narrower and sharper. The head of the lower bubble almost catches up with the bottom of the upper one. In the next moment, the two bubbles merge


into a single bubble. At this time, the interface conjunction forms a cusp singularity that is rapidly smoothed out by viscosity and surface tension.

Figure 2. The rise of a wobbling bubble; Re=1100, Eo=3.0; ρ1/ρ2 = 10^3, μ1/μ2 = 10^2, h = 1/80.

Figure 3. Merger of two spherical bubbles; Re = 2, Eo = 1.2, ρ1/ρ2 = 10^3, μ1/μ2 = 10, h = 1/40.

Bubble coalescence in the spherical shape regime is shown in Figure 3. Due to the considerable rigidity of the bottom of the upper bubble, the liquid is rather quickly squeezed out of the space between the bubbles, and the bubbles merge. In the ellipsoidal shape regime, the bottom of the upper bubble deforms under the influence of the lower bubble, thus making it possible to preserve a thin liquid film between the bubbles. The upper bubble develops a dimpled-ellipsoidal rather than an ellipsoidal shape. When the bottom of the upper bubble cannot deform any more, the liquid film between the bubbles starts getting thinner and, finally, the lower bubble merges with the upper one. The results agree with the computations by (Chang et al. 1996), (Tornberg 2000) and (Unverdi & Tryggvason 1992), and compare favorably with the numerical predictions of (Delnoij et al. 1998), who found a qualitative agreement with available experimental data.
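The initial coalescence configuration (two circular bubbles of radii 0.25 and 0.2 on a common vertical axis in a unit-width domain) maps naturally onto a level-set initialization: take the pointwise minimum of the two signed-distance functions, so the zero level describes both interfaces at once. The grid resolution, domain height and bubble centres below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def circle_sdf(X, Y, xc, yc, r):
    """Signed distance to a circle of radius r: negative inside the bubble."""
    return np.hypot(X - xc, Y - yc) - r

# Unit-width rectangular domain (the height of 2.0 is an assumption)
X, Y = np.meshgrid(np.linspace(0.0, 1.0, 81),
                   np.linspace(0.0, 2.0, 161), indexing="ij")

phi_upper = circle_sdf(X, Y, 0.5, 1.00, 0.25)  # upper bubble, r = 0.25
phi_lower = circle_sdf(X, Y, 0.5, 0.45, 0.20)  # lower bubble, r = 0.20
phi0 = np.minimum(phi_upper, phi_lower)        # union: one level-set for both bubbles
```

This union representation is what lets the level-set method handle the merger automatically: as the bubbles approach, the two zero-level components simply reconnect into one, with no explicit interface surgery.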


Figure 4. Merger of two ellipsoidal bubbles; Re = 20, Eo = 1.2, ρ1/ρ2 = 10^3, μ1/μ2 = 10, h = 1/40.

5. Discussion
We have presented the results of a computational study of two-dimensional bubble dynamics. Despite the seeming insufficiency of a two-dimensional model for the quantitative analysis of three-dimensional bubble evolution phenomena, we have been able to obtain good qualitative agreement with the available experimental data. Since our numerical method captures the bubble interface, the surface tension effect and the mass conservation with 2nd-order accuracy, we managed to recover all basic shape regimes within the experimentally predicted ranges of the problem parameters. In particular, we successfully simulated the wobbling bubble regime, notable for its unsymmetric vortex path pattern and highly convective nature. Since the wobbling and spherical-cap regimes are characterized by very high Reynolds numbers, it was essential to have a numerical method capable of dealing with convection-dominated flows. On the other hand, the method should be flexible enough to allow also the computation of a nearly Stokes flow (typical, e.g., of a spherical bubble). We have demonstrated that such flexibility can be maintained within the finite-element/level-set/operator-splitting framework. In many cases, good quantitative agreement has been observed (see (Smolianski et al.) for a thorough comparison of our computational results with available experimental data). This probably means that two-dimensional modelling of bubble dynamics is not so far from being realistic. The preliminary study of bubble coalescence phenomena also reveals that plausible results can be obtained already with two-dimensional simulations.

6. Acknowledgments This work was supported by the grant 70139/98 of Tekes, the National Technology Agency of Finland.

7. References
Baker, G.R. and D.W. Moore, 1989, The rise and distortion of a two-dimensional gas bubble in an inviscid liquid. Phys. Fluids A 1, 1451-1459.
Batchelor, G.K., 1967, An Introduction to Fluid Dynamics. Cambridge University Press.


Bhaga, D. and M.E. Weber, 1981, Bubbles in viscous liquids: shapes, wakes and velocities. J. Fluid Mech. 105, 61-85.
Chang, Y.C., T.Y. Hou, B. Merriman and S. Osher, 1996, A level set formulation of Eulerian interface capturing methods for incompressible fluid flows. J. Comput. Phys. 124, 449-464.
Clift, R.C., J.R. Grace and M.E. Weber, 1978, Bubbles, Drops and Particles. Academic Press.
Delnoij, E., J.A.M. Kuipers and W.P.M. van Swaaij, 1998, Computational fluid dynamics (CFD) applied to dispersed gas-liquid two-phase flows. In: Fourth European Computational Fluid Dynamics Conference ECCOMAS CFD'98, John Wiley & Sons, Chichester, pp. 314-318.
Hartunian, R.A. and W.R. Sears, 1957, On the instability of small gas bubbles moving uniformly in various liquids. J. Fluid Mech. 3, 27-47.
Hnat, J.G. and J.D. Buckmaster, 1976, Spherical cap bubbles and skirt formation. Phys. Fluids 19, 182-194.
Moore, D.W., 1959, The rise of a gas bubble in a viscous liquid. J. Fluid Mech. 6, 113-130.
Sethian, J.A., 1999, Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision, and Materials Science. Cambridge University Press.
Smolianski, A., 2001, Numerical Modeling of Two-Fluid Interfacial Flows. PhD thesis, University of Jyvaskyla. ISBN 951-39-0929-8.
Smolianski, A., H. Haario and P. Luukka, Computational Study of Bubble Dynamics. To appear in the International Journal of Multiphase Flow.
Sussman, M., P. Smereka and S. Osher, 1994, A level set approach for computing solutions to incompressible two-phase flow. J. Comput. Phys. 114, 146-159.
Taylor, T.D. and A. Acrivos, 1964, On the deformation and drag of a falling viscous drop at low Reynolds number. J. Fluid Mech. 18, 466-476.
Tornberg, A.K., 2000, Interface Tracking Methods with Application to Multiphase Flows. PhD thesis, Royal Institute of Technology, Stockholm.
Unverdi, S.O. and G. Tryggvason, 1992, A front-tracking method for viscous, incompressible, multi-fluid flows. J. Comput. Phys. 100, 25-37.
Walters, J.K. and J.F. Davidson, 1962, The initial motion of a gas bubble formed in an inviscid liquid. Part 1. The two-dimensional bubble. J. Fluid Mech. 12, 408-417.
Walters, J.K. and J.F. Davidson, 1963, The initial motion of a gas bubble formed in an inviscid liquid. Part 2. The three-dimensional bubble and the toroidal bubble. J. Fluid Mech. 17, 321-336.
Wegener, P.P. and J.Y. Parlange, 1973, Spherical-cap bubbles. Ann. Rev. Fluid Mech. 5, 79-100.


EMSO: A New Environment for Modelling, Simulation and Optimisation
R. de P. Soares and A.R. Secchi*
Departamento de Engenharia Quimica - Universidade Federal do Rio Grande do Sul
Rua Sarmento Leite 288/24 - CEP: 90050-170 - Porto Alegre, RS - Brasil
* Author to whom correspondence should be addressed, {rafael, arge}@enq.ufrgs.br

Abstract
A new tool for the modelling, simulation and optimisation of general dynamic process systems, named EMSO (Environment for Modelling, Simulation and Optimisation), is presented. In this tool the consistency of measurement units, system solvability and the consistency of initial conditions are automatically checked. The solvability test is carried out by an index reduction method, which reduces the index of the resulting system of differential-algebraic equations (DAE) to zero by adding new variables and equations when necessary. The index reduction requires time derivatives of the original equations, which are provided by a built-in symbolic differentiation system. The partial derivatives required during initialisation and integration are generated by a built-in automatic differentiation system. For the description of processes a new object-oriented modelling language was developed. The extensive use of the object-oriented paradigm makes the proposed tool naturally CAPE-OPEN compliant, which, combined with the automatic and symbolic differentiation and the index reduction, yields a software tool with several enhancements over the popular ones.
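The role of symbolic time differentiation in index reduction can be illustrated with SymPy (EMSO's own implementation is not shown here; the pendulum DAE below is a standard textbook example, not taken from the paper): differentiating an algebraic constraint exposes the "hidden" constraints that an index reduction method appends as new equations.

```python
import sympy as sp

t, L = sp.symbols("t L", positive=True)
x = sp.Function("x")(t)
y = sp.Function("y")(t)

# Algebraic constraint of the classic index-3 pendulum DAE: x^2 + y^2 = L^2
g = x**2 + y**2 - L**2

# Index reduction needs time derivatives of the original equations;
# each differentiation produces a hidden constraint on the solution.
g1 = sp.diff(g, t)      # 2*x*x' + 2*y*y' = 0  (velocity constraint)
g2 = sp.diff(g, t, 2)   # 2*x*x'' + 2*(x')^2 + 2*y*y'' + 2*(y')^2 = 0
```

Consistent initial conditions must satisfy g, g1 and g2 simultaneously, which is exactly the kind of check the abstract says is automated.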

1. Introduction
Simulators are valuable tools for applications ranging from project validation, plant control and operability to production increase and cost reduction. This fact, among others, has made industrial interest in software tools for modelling, simulation and optimisation grow, but these tools are still considered inadequate by their users (CheComp, 2002). The user dissatisfaction is mainly related to limited software flexibility, difficulty of use and learning, and cost, besides the lack of software compatibility and the slow inclusion of new methods and algorithms. Furthermore, users have pointed out some desired features for further development, like extensive standard features and intelligent interfaces, among others (Hlupic, 1999). In this work a new tool for modelling, simulation and optimisation of general dynamic systems, named EMSO (Environment for Modelling, Simulation and Optimisation), is presented. This tool aims to give users more flexibility in using their available resources. The successful features found in the most used tools were gathered, and some new methods were developed to supply missing features. In addition, some well-established approaches from other areas were used, like the object-oriented paradigm. The big picture of the EMSO structure is shown in Figure 1, which demonstrates the modular architecture of the software.


Figure 1 shows the modular structure of EMSO: a models library feeds a typical flowsheet; dynamic simulation chains an initialisation system (NLA / NLASolver), integration of the dynamic DAE system (DAESolver) with discontinuity handling, and a reinitialisation system (NLA / NLASolver); dynamic optimisation couples a model, an objective function and an optimiser. The figure also reproduces code excerpts in the EMSO language, approximately as follows.

A model written in the mathematically based language:

include "thermo";
Model Flash
    VARIABLES
    in  Feed as MaterialStream;
    out L    as MaterialStream;
    out V    as MaterialStream;
    in  q    as Real(Unit="kJ/h");
    Ml       as Positive(Unit="kmol");

    EQUATIONS
    diff(Ml*L.z) = Feed.F*Feed.z - V.F*V.z - L.F*L.z;
    diff(Ml*L.h) = q + Feed.F*Feed.h - V.F*V.h - L.F*L.h;
    sum(L.z) = sum(V.z) = 1;
    V.T = L.T;
    V.P = L.P;

A flowsheet written in the component-based language:

DEVICES
    sep_101   as Flash;
    str_101   as MassStream;
    PID_101, PID_102 as PID;
    valve_101 as ControlValve;
    valve_102 as ControlValve;

CONNECTIONS
    str_101.Stream to sep_101.Feed;
    sep_101.V      to valve_101.Stream;
    sep_101.L.P    to PID_101.y;
    sep_101.level  to PID_102.y;
    PID_101.u      to valve_101.u;

A further excerpt (largely garbled in the scan) shows a model with PARAMETERS and SUBMODELS, including an Antoine equilibrium submodel and liquid/vapour enthalpy submodels.

Figure 1. General vision of the EMSO structure and its components.


2. Process Model Description

In the proposed modelling language there are three major entities: models, devices, and flowsheets. Models are the mathematical description of some device; a device is an instance of a model; and a flowsheet represents the process to be analysed, which is composed of a set of devices. At the bottom of Figure 1 some pieces of code are given which exemplify the usage of the language. EMSO makes intensive use of automatic code generators and of the object-oriented paradigm whenever possible, aiming to enhance analyst productivity.
2.1. Model
In the EMSO language, a model is the mathematical abstraction of some real equipment, process piece or even software. Examples of models are the mathematical descriptions of a tank, a pipe or a PID controller. Each model can have parameters, variables, equations, initial conditions, boundary conditions and submodels, which can have submodels themselves. Models can be based on pre-existing ones, and extra functionality (new parameters, variables, equations, etc.) can be added. So, composition (hierarchical modelling) and inheritance are supported. Every parameter and variable in a model is based on a predefined type and has a set of properties like a brief description, lower and upper bounds, and unit of measurement, among others. As with models, types can have subtypes, and the object-oriented paradigm is implemented. Some examples of type declarations can be seen in Figure 2.

Fraction as Real(Lower=0, Upper=1);
Positive as Real(Lower=0, Upper=inf);
EnergyHoldup as Positive(Unit="J");
ChemicalComponent as structure (
    Mw as Real(Unit="g/mol");
    Tc as Temperature(Brief="Critical Temperature");
    Pc as Pressure;
);

Figure 2. Examples of type declarations.

2.2. The flowsheet and its devices
In the proposed language a device is an instance of a model and represents some real device of the process in analysis. So, a unique model can be used to represent several different devices which have the same structure but may be under different conditions (different parameter values and specifications). Devices can be connected to each other to form a flowsheet (see Figure 1), which is an abstraction of the real process in analysis. Although the language for the description of flowsheets is textual (bottom right in Figure 1), it is simple enough to be entirely manipulated by a graphical interface, in which flowsheets could be easily built by dragging model objects into it to create new devices, which could then be connected to other devices with the aid of a pointing device (mouse).
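The subtype mechanism of Figure 2 can be mimicked in ordinary code. The sketch below (Python; all names are illustrative, EMSO itself uses its own declarative language) shows type derivation with inherited Lower/Upper/Unit properties, where a subtype is just the parent record with some properties overridden:

```python
import math
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Type:
    """An EMSO-like variable/parameter type with bound and unit properties."""
    lower: float = -math.inf
    upper: float = math.inf
    unit: str = ""
    brief: str = ""

    def subtype(self, **overrides):
        """Derive a new type, inheriting every property not overridden."""
        return replace(self, **overrides)

# Mirroring the declarations of Figure 2:
Real = Type()
Fraction = Real.subtype(lower=0.0, upper=1.0)
Positive = Real.subtype(lower=0.0, upper=math.inf)
EnergyHoldup = Positive.subtype(unit="J", brief="Energy holdup")
```

Here inheritance reduces to record update: EnergyHoldup keeps the lower bound of Positive while adding a unit, just as the EMSO declarations do.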

3. Consistency Analysis
In solving the system of differential-algebraic equations (DAE) resulting from a flowsheet, prior analysis can reveal the major causes of failure. There are several kinds of consistency analysis which can be applied to the DAE system coming from the mathematical description of a dynamic process. Some of them are: measurement units, structural solvability and initial conditions consistency.


3.1. Measurement units consistency
In modelling physical processes the conversion of measurement units of parameters is a tiresome and error-prone task. Moreover, an ill-composed equation usually leads to a measurement-unit inconsistency. For these reasons, in EMSO the measurement-units consistency check and unit conversions are automatically made for all equations, parameter settings and connections between devices. Since all expressions are internally stored in a symbolic fashion and all variables and parameters hold their measurement units, the units consistency can be easily tested with the aid of the measurement-units handling package RUnits (Soares, 2002).
3.2. DAE solvability
Soares and Secchi (2002) have proposed a structural method for index reduction and solvability testing of DAE systems. With this method, structural singularity can be tested and the structural differential index can be reduced to zero by adding new variables and equations. Such variables and equations are the derivatives of the original ones with respect to the independent variable. EMSO makes use of this method, allowing the solution of high-index DAE problems without user interaction. The required derivatives of the variables and equations are provided by a built-in symbolic differentiation system.
3.3. Initial conditions consistency
Once a DAE system is reduced to index zero, the dynamic degrees of freedom are determined. The initial-condition consistency can then be easily tested by an association problem, as described by Soares and Secchi (2002). This approach is more robust when compared with the index-one reduction technique presented by Costa et al. (2001).
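The units check of Section 3.1 amounts to verifying that every term of an equation carries the same dimension. A minimal sketch of the idea (Python; this is not EMSO's actual RUnits package, and all names are illustrative) represents a unit as a vector of exponents over SI base dimensions, so that multiplication adds exponents and consistency is exponent-vector equality:

```python
# Represent a unit as exponents of SI base dimensions (m, kg, s, K, mol).
def unit(m=0, kg=0, s=0, K=0, mol=0):
    return (m, kg, s, K, mol)

def mul(u, v):
    """Unit of a product of two quantities: exponents add."""
    return tuple(a + b for a, b in zip(u, v))

def consistent(*terms):
    """All terms added or equated in an equation must share one dimension."""
    return all(t == terms[0] for t in terms)

# Example: checking an energy-balance term, q [J/s] against F*h [mol/s * J/mol].
joule = unit(m=2, kg=1, s=-2)
q = mul(joule, unit(s=-1))      # J/s
F = unit(mol=1, s=-1)           # mol/s
h = mul(joule, unit(mol=-1))    # J/mol
assert consistent(q, mul(F, h)) # well-posed: both terms are J/s
assert not consistent(q, F)     # ill-posed equation would be flagged
```

An ill-composed equation, as noted above, shows up immediately as a mismatch of exponent vectors.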

4. External Interfaces
Usually each simulation software vendor has its proprietary interfacing system, which leads to heterogeneous systems. Recently, the CAPE-OPEN project (CO-LAN, 2002) published open standard interfaces for computer-aided process engineering (CAPE), aiming to solve this problem. EMSO complies with this open standard. The interfaces are implemented natively rather than by wrapping some other proprietary interface mechanism, and CORBA (OMG, 1999) was used as the middleware. The extensive usage of the interfaces makes their efficiency a priority. For this reason some modifications to the numerical CAPE-OPEN package (CO-LAN, 2002) were proposed. These modifications consist in changing some function calling conventions; more details can be found in Soares and Secchi (2002b).

5. Graphical User Interface
The graphical user interface (GUI) of EMSO combines model development, flowsheet building, process simulation and results visualisation and handling, all in one. EMSO is entirely written in C++ and is designed to be very modular and portable. When running tasks there is no prior generation of intermediary files or compilation step; everything is done in memory. The software is multithreaded, allowing real-time simulations and even running more than one flowsheet concurrently without blocking the GUI. Furthermore, calculations can be paused or stopped at any time. Figure 3 shows the EMSO GUI, which implements a Multiple Document Interface (MDI).


[Figure 3: screenshot of the EMSO GUI, a Multiple Document Interface combining model and flowsheet editing (devices such as flash separators, PI controllers and control valves), a tree view of variables, parameters and initial conditions, and plots of simulation results such as liquid holdup and liquid flow.]

A → B is assumed and the first-order rate expression r_A = k·C_A is introduced. All parameters are assumed constant. A model homogeneous with respect to mass transport and heterogeneous with respect to heat transport is considered. Under these assumptions the set of governing equations (Thullie et al., 2001) is:

dη/dz = Da · exp[γ(1 − 1/ϑ_k)] · (1 − η)                                        (1)

dϑ/dz = St · (ϑ_k − ϑ)                                                          (2)

dϑ_k/dτ = ω · St · (ϑ − ϑ_k) + ω · β · Da · exp[γ(1 − 1/ϑ_k)] · (1 − η)        (3)

The initial conditions are ϑ_k(z, 0) = ϑ_k0 and ϑ(z, 0) = ϑ_0, and the boundary conditions are ϑ(0, τ) = ϑ_0 and η(0, τ) = 0. The condition for a change in flow direction after the cycling time τ_c is:

ϑ_k^(i+1)(z, τ_c^+) = ϑ_k^(i)(1 − z, τ_c^−),

where τ_c^+ and τ_c^− denote the right-hand and left-hand side limits at the time moment τ_c. A new temperature after the mixing point is calculated according to the equation:

ϑ_m = (G1·ϑ_1 + G2·ϑ_0) / G                                                     (4)

Ideal mixing at the injection point is assumed. A standard finite-difference method was used to solve the set of equations (1)-(3).
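A finite-difference scheme of the kind used here can be sketched as follows (Python). This is an illustration only: it assumes the reconstructed form of Eqs. (1)-(3), and the value of γ, the grid sizes, the time step and the cycling time are assumed numbers, not taken from the paper. The gas phase is treated as quasi-steady and marched in z; the catalyst temperature is advanced explicitly in τ, with the profile mirrored at each flow reversal.

```python
import math

# Dimensionless parameters: St, Da, omega, beta, theta_k0, theta_0 follow the
# figure captions; gamma, the grids, dtau and tau_c are assumed for illustration.
St, Da, omega, beta, gamma = 20.0, 0.0496, 0.1, 1.0, 10.0
theta0, theta_k0 = 1.0, 2.0
Nz, dz = 100, 1.0 / 100
dtau, tau_c = 0.002, 1.0

theta_k = [theta_k0] * Nz          # catalyst temperature profile over z

def rate(theta_k_i, eta):
    """Reaction term Da * exp[gamma(1 - 1/theta_k)] * (1 - eta) of Eq. (1)."""
    return Da * math.exp(gamma * (1.0 - 1.0 / theta_k_i)) * (1.0 - eta)

def step():
    """One explicit Euler step: march Eqs. (1)-(2) in z, then update Eq. (3)."""
    eta, theta = 0.0, theta0       # boundary conditions at z = 0
    for i in range(Nz):
        r = rate(theta_k[i], eta)
        # Eq. (3): catalyst heat balance (explicit in tau)
        theta_k[i] += dtau * omega * (St * (theta - theta_k[i]) + beta * r)
        # Eqs. (1)-(2): quasi-steady gas phase (explicit in z)
        eta += dz * r
        theta += dz * St * (theta_k[i] - theta)
    return eta                     # outlet conversion

for cycle in range(4):
    for _ in range(round(tau_c / dtau)):
        outlet_eta = step()
    theta_k.reverse()              # flow reversal after each cycling time
```

The hot initial bed (ϑ_k0 = 2) ignites the reaction near the inlet, and the reversal keeps the stored heat inside the bed, which is the qualitative behaviour discussed in the Results section.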

3. Results
The use of cold gas injection gives a significant decrease in the maximum catalyst temperature and a moderate decrease in conversion. At the very beginning of the start-up procedure the temperature and conversion profiles are similar for both processes regardless of the injection, because the initial catalyst-bed temperature dominates (Figures 2-3). When the injection point has a central position (the catalyst bed divided into two equal parts with the injection between them), a significant decrease in the maximum catalyst temperature was observed in comparison to standard operation. The simultaneous decrease in conversion was not as significant, especially for short cycling times (Figure 2). A decrease of ϑ_kmax of about 30% caused a 23% decrease in conversion (Figures 2-3). The results of the calculations suggest that when operation is limited by the maximum catalyst temperature, cold gas injection is a useful solution. The injection point should be placed in the middle of the catalytic bed, except when the limit is so high that there is no possibility of reaching a cycling steady state (Figure 4). In that case the injection should be placed near the entrance of the reactor (Figure 5). This results in a higher η_min in comparison to standard conditions. The points along the curves in Figure 5 are given for different positions of the injection point, with the injection point equal to 1/2 L at the edge.


Figure 2. Gas temperature profiles along the reactor with inter-stage injection and in standard operation (St = 20, Da = 0.0496, ω = 0.1, β = 1, ϑ_k0 = 2, ϑ_0 = 1, G2 = 10% G1). Curves: injection at 1/2 L and no injection, cycles 2 and 82.

Figure 3. Conversion profiles along the reactor with inter-stage injection and in standard operation (St = 20, Da = 0.0496, ω = 0.1, β = 1, ϑ_k0 = 2, ϑ_0 = 1, G2 = 10% G1). Curves: injection at 1/2 L and no injection, cycles 2 and 82.

Figure 4. Comparison of η_min = f(ϑ_kmax) for different locations of the injection point and two injection flow rates (St = 20, Da = 0.0496, ω = 0.1, β = 1, ϑ_k0 = 2, ϑ_0 = 1, G2 = 5% G1). Curves: injection at 1/4 L, 1/2 L, 3/4 L, and no injection.

Figure 5. Comparison of η_min = f(ϑ_kmax) for different cycling times (St = 20, Da = 0.0496, ω = 0.1, β = 1, ϑ_k0 = 2, ϑ_0 = 1, G2 = 5% G1).

When the cycling time increases, regardless of the location of the injection point, the number of cycles which gives pseudo-steady-state operation decreases (Figure 6). This means that one should perform many short cycles or fewer long cycles. The most profound decrease in maximum temperature with a comparatively small decrease of minimum conversion is observed for small cycling times.

For a small mass flow rate of the injection gas no influence on the start-up time is observed.


Figure 6. Comparison of the maximum catalyst temperature for different cycling times, for the case with cold gas injection and standard operation (no injection) (St = 20, Da = 0.0744, ω = 0.1, β = 1, ϑ_k0 = 2, ϑ_0 = 1, G2 = 10% G1).

4. Conclusions
1. The results of the calculations suggest that the use of cold gas injection gives a significant decrease in the maximum catalyst temperature and a moderate decrease in conversion.
2. The best position of the injection point is usually the middle of the catalyst bed. However, when the temperature limit is so high that the PSS is not achieved, the flow rate of the injection should be lowered. If that is not possible, the place of injection should be moved towards the reactor inlet.
3. When cold gas injection is used, the most profound decrease in maximum temperature with a comparatively small decrease of minimum conversion is observed for small cycling times.

5. Symbols
Da = (k∞·L/u0)·exp(−γ) — Damköhler number
St = α_w·a_v·L/(ρ·c_p·u0) — Stanton number
k — rate constant, 1/s
k∞ — frequency coefficient, 1/s
z — dimensionless space variable
L — length of reactor, m
β = (−ΔH)·C_A0/(ρ·c_p·T0) — dimensionless adiabatic temperature rise
r_A — reaction rate, kmol/m³·s
γ = E/(R·T0) — dimensionless activation energy
T — temperature, K
ϑ = T/T0 — dimensionless gas temperature
T0 — inlet temperature, K
ϑ_k — dimensionless catalyst temperature
τ — dimensionless time
T_k — catalyst temperature, K
u0 — linear velocity, m/s
ω = ε·ρ·c_p / [(1 − ε)·ρ_k·c_pk] — ratio of heat capacities of gas to catalyst
x — space variable, m
a_v — specific surface area, m²/m³
α_w — heat transfer coefficient, J/m²·K·s
C_A0 — concentration of the reference component, kmol/m³
ε — void fraction, m³/m³
c_p — specific heat, J/kg·K
ρ — density, kg/m³
E — activation energy, kJ/kmol
η — conversion of component A
G — flow rate of gas, kmol/s
R — gas constant, kJ/kmol·K
ΔH — heat of reaction, kJ/kmol
t — time, s
6. References
Eigenberger, G. and Nieken, U., 1994, Internat. Chem. Eng. 34, 4-16.
Matros, Yu., 1989, Studies in Surface Science and Catalysis, vol. 43, Utrecht, The Netherlands.
Matros, Yu., 1985, Elsevier, Amsterdam.
Matros, Yu. and Bunimovich, G., 1996, Catal. Rev. Sci. Eng. 38, 1.
Thullie, J. and Burghardt, A., 1995, Chem. Eng. Sci. 50, 2299-2309.
Thullie, J. and Kurpas, M., 2002, Inz. Chem. Proc. 23, 309-324 (in Polish).
Thullie, J., Kurpas, M. and Bodzek, M., 2001, Inz. Chem. Proc. 22, 3E, 1405 (in Polish).


Numerical Modeling of an OK Rotor-Stator Mixing Device
J. Tiitinen
Helsinki University of Technology, Mechanical Process Technology and Recycling, P.O. Box 6200, FIN-02015 HUT, Finland. E-mail: [email protected]

Abstract
The flow field induced by an OK rotor-stator mixing device in a process-size Outokumpu TankCell flotation cell was simulated by CFD using two main grid types. The effects of different grid types were investigated with structured and unstructured grids. The geometry used was axisymmetric, and a 60-degree sector of the tank with periodic boundaries was modelled. Finally, validation measurements and calculations with a laboratory-size flotation cell were done. A hybrid grid for a laboratory-size OK flotation cell, with unstructured cells in the rotor domain and structured cells in the stator and tank domains, was generated. Preprocessing and computational mesh generation for complicated geometries like the OK rotor-stator mixing device can take a considerably long time with regular structured grids. For this type of geometry, meshes with irregular structure can be used much more easily and with less processing time. Preprocessing and grid generation were done with the commercial Fluent Gambit 1.3. The CFD code Fluent 5.5 was used in the simulations. The standard k-ε turbulence model and standard wall functions were used. The multiple reference frame (MRF) method was used in all simulations instead of the computationally slower sliding-mesh method. Simulations were done in one phase (liquid). Calculated velocity fields on horizontal and vertical planes, pressure distributions on rotor and stator surfaces and turbulence magnitudes were compared between the structured and unstructured grid types. No grid dependency was found. Comparisons between velocity and turbulence results measured using the LDV (Laser Doppler Velocimetry) technique and CFD modelling were made. The predicted velocity components agreed well with the values obtained from LDV. The standard k-ε model underpredicts the k level in the flotation cell compared to measured values. It was shown that a CFD model with periodicity, a hybrid grid and the MRF approach can be used for detailed studies of the design and operation of the Outokumpu flotation cell.

1. Introduction
Froth flotation is a complex three-phase physico-chemical process which is used in the mineral processing industry to separate selectively fine valuable minerals from gangue. The mineral froth flotation process is important to the economy of the whole industrial world. As costs have increased in the mining industry, and ore grades and metal prices have decreased, the role of mineral flotation has become even more important. The flotation process depends, among many other factors, on the control of pulp aeration, agitation intensity, residence time of bubbles in the pulp, pulp density, bubble and particle size and interaction, and pulp chemistry. Development of flotation machines has been carried out for as long as there has been mineral flotation, to achieve better flotation performance and techniques. This


development has mainly been built on advice from practice or rules of thumb. The development and research work has mainly focused on the control of the chemistry, not on the hydrodynamics of the flotation cell. This paper reviews the detailed hydrodynamics of Outokumpu flotation cells using Computational Fluid Dynamics (CFD) modelling. This includes determining the dependency on computational grid type in the CFD model and examining the flow pattern induced in the cell, as well as validating the model.

2. General
Outokumpu flotation cell development began in the late 1960s to satisfy the company's own needs for treating complex sulfide ores at Outokumpu's mines in Finland. The main required properties of flotation machines can be defined as:
- keep mineral grains in suspension in the pulp, with near-ideal mixing
- disperse a sufficient amount of fine air bubbles into the pulp
- make the flotation environment advantageous for bubble-mineral interaction: sufficiently turbulent conditions in the contact zone and high selectivity
- develop a quiescent upper zone in the flotation tank to avoid bubble-particle separation
- energy efficiency, low power and air consumption
- secure efficient discharge of concentrate and tailings
The flotation cell mechanism by Outokumpu in general consists of:
- flotation tank
- rotor and stator
- air feed mechanism
- pulp feed and discharge mechanisms
- concentrate ducts
- pulp level regulators

The Outokumpu rotor profile was originally designed to balance the hydrodynamic and static pressures, allowing a uniform air dispersion over the surface of the blades. The blade design also provides separate zones for air distribution and slurry pumping. Figure 1 shows the Outokumpu cell design in general. Industrial cell sizes range from 5 m³ to 200 m³. Figure 2 is a close-up demonstration of air distribution and slurry pumping. Mineral flotation is a three-phase process where the solid phase consists of mineral grains, the liquid phase is mainly water, and air is the gaseous phase. In Outokumpu flotation cells air is conducted through the hollow shaft to the direct influence of the rotating rotor. Intense eddy currents throw bubbles and pulp against the stator blades and through the gaps between the vertical blades, and these are distributed evenly into the flotation cell. Hydrophobic minerals attached to bubbles are lifted up to the froth area and into the concentrate duct.

Figure 1. Flotation cell.

Figure 2. OK rotor-stator close up.


3. CFD Modelling

Mesh generation plays two main roles in CFD modelling. First, mesh generation consumes most of the total analysis time. Second, the quality of the computed solution is substantially dependent on the structure and quality of the computational mesh. The attributes associated with mesh quality are node-point distribution, smoothness and skewness. Building a valid computational mesh is a discipline of its own, which can be divided into structured and unstructured grid generation. The choice of the appropriate mesh type depends mainly on the geometry of the flow problem. Figure 3 shows the general 3D grid cell types accepted by most CFD solvers. Figure 4 shows an example of a 3D multiblock structured grid and an unstructured tetrahedral grid.

Figure 3. 3D grid cell types.

Figure 4. Structured and unstructured grid.

When choosing the appropriate mesh type for a flow problem there are some issues to consider:
- Many flow problems involve complex geometries. The creation of a structured mesh for such geometries can be substantially time-consuming, and for some geometries perhaps impossible. Preprocessing time is the main motivation for using an unstructured mesh for these geometries.
- Computational expense can be a determining factor when geometries are complex or the range of length scales of the flow is large. Hexahedral elements generally fill the computational volume more efficiently than tetrahedral elements.
- A dominant source of error in calculations is numerical diffusion. The amount of numerical diffusion is inversely related to the resolution of the mesh. Numerical diffusion is also minimized when the flow is aligned with the mesh; in unstructured meshes with tetrahedral elements the flow can never be aligned with the grid.
- Using and combining different types of elements as a hybrid mesh can be a good option and brings considerable flexibility to mesh generation.

4. Computational Model 1
The flow and mixing simulations for the grid-type dependency calculations were carried out for the flotation tank geometry shown in Figure 5. The tank is cylindrical and has six symmetrically placed vertical baffles and one horizontal baffle on the cylindrical walls.


Table 1. Tank specifications.
Tank diameter (mm): 3600
Tank height (mm): 3600
Shaft diameter (mm): 160
Rotor diameter (mm): 825
Rotor bottom clearance (mm): 83
Rotor speed of rotation (rpm): 100/160

Figure 5. Flotation tank geometry for CFD.

The flotation tank is equipped with the Outokumpu rotor and stator mixing device. The geometrical details of the tank are given in Table 1. For symmetry reasons it was sufficient to model only a part of the domain. The smallest symmetric sector of the geometry is 60°, which contains one impeller blade and one vertical baffle. Two different types of grids were studied. Both grids were refined adaptively; after adaption both grids had about 450,000 cells. A steady-state multiple reference frame approach was used in the Fluent 5.5 simulations for modelling the rotor rotation in the stationary tank. The standard k-ε turbulence model and standard wall functions were used. Simulations were done in the liquid (water) phase.


Computational model 1 results
The CFD simulation results, consisting of velocity vectors and distributions, pressures in the rotor and stator area and turbulence quantities, are similar for both mesh types. No significant grid dependency between the structured and unstructured grid types was found. Resultant velocities at two levels are shown in Figure 6 and the turbulent kinetic energy at the rotor-stator area in Figure 7.

Figure 6. Resultant velocities.

Figure 7. Turbulent kinetic energy at the rotor-stator area.


5. Computational Model 2

As a result of computational model 1, a hybrid grid for a laboratory-size OK flotation cell, with unstructured cells in the rotor domain and structured cells in the stator and tank domains, was generated. The tank was a cylindrical, unbaffled tank with Outokumpu's rotor-stator mixing device. The computational geometry is shown in Figure 8. The geometrical details of the tank are given in Table 2.

Table 2. Tank specifications.
Tank diameter (mm): 1070
Tank height (mm): 900
Shaft diameter (mm): 57
Rotor diameter (mm): 270
Rotor bottom clearance (mm): 27
Rotor speed of rotation (rpm): 328

Figure 8. Computational geometry.

A 60° sector of the tank was modelled, with periodic boundaries on the sides of the sector and a symmetry boundary on the top of the tank to describe the free surface. Standard wall functions were employed on all wall boundaries of the computational domain. Fluent's steady-state multiple reference frame approach was used with the k-ε turbulence model. The calculation was done in one phase (water).

Computational model 2 results
Results from computational model 2 were compared to Laser Doppler Velocimetry (LDV) measurements made on a similar flotation cell at the CSIRO Thermal and Fluids Engineering laboratory. The velocity vectors predicted by CFD and the velocity vectors obtained from LDV are compared in Figure 9, with LDV shown on the left and the CFD velocity field on the right.


Figure 9. Velocity vectors from LDV and CFD.

Reasonable agreement is obtained between the measured and calculated flow fields. The rotor creates a jet stream in the radial direction towards the cylindrical wall. Two main flows circulate back to the impeller, one over the top side and the second from the lower side of the stator. The comparison of the measured and computed mean radial velocity components as a function of cell height is shown in Figure 10.

[Figure 10. Measured (LDV) and computed (CFD) mean radial velocity components as a function of cell height, at r/R = 0.24, 0.47 and 0.75.]

Given two intervals X = [x1, x2] and Y = [y1, y2] and a real arithmetic operation {x op y}, the corresponding interval arithmetic operation (X op Y) is defined, whose result is an interval containing every possible number produced by {x op y}, x ∈ X, y ∈ Y. We will use the notations of Alt and Lamotte (2001), denoting [a ∨ b] = [min(a,b), max(a,b)], x_c = min(|x1|, |x2|) and x_d = max(|x1|, |x2|). Interval multiplication by a scalar is defined as y·X = [y·x1 ∨ y·x2]. The standard interval arithmetic operations are defined as:

X + Y = [(x1 + y1) ∨ (x2 + y2)]                                    (1)

X − Y = [(x1 − y2) ∨ (x2 − y1)]                                    (2)

X × Y = [(x_c·y_c) ∨ (x_d·y_d)],   0 ∉ X, 0 ∉ Y
        y_d × X,                    0 ∈ X, 0 ∉ Y
        [min{x1·y2, x2·y1}, max{x1·y1, x2·y2}],   0 ∈ X, 0 ∈ Y     (3)

X / Y = [(x_c/y_d) ∨ (x_d/y_c)],   0 ∉ X, 0 ∉ Y
        (1/y_c) × X,                0 ∈ X, 0 ∉ Y                    (4)

The inner interval operations are defined as:

X + Y = [(x1 + y2) ∨ (x2 + y1)]                                    (5)

X − Y = [(x1 − y1) ∨ (x2 − y2)]                                    (6)

X × Y = [(x_c·y_d) ∨ (x_d·y_c)],   0 ∉ X, 0 ∉ Y                    (7)

X × Y = [max{x1·y2, x2·y1}, min{x1·y1, x2·y2}],   0 ∈ X, 0 ∈ Y     (8)

The guaranteed lower and upper bounds for the function values can be estimated by applying standard interval operations on the intervals, instead of the real operations, in the algorithm that calculates the function values. These bounds may be used to solve the global optimization problem. A disadvantage of interval methods is the dependency problem (Hansen, 1992); because of it, the estimated bounds of an objective function are not tight, especially when a problem is given by code developed without foreseeing the application of interval arithmetic. If the interval is sufficiently small that the operators in all the operations are monotonic, the exact range of a function for given interval data can be obtained by correctly using the standard or inner operations, depending on whether the operands have the same monotonicity or not (Alt and Lamotte, 2001). The operations are summarized in Table 1.

Table 1. The interval operations on two monotonic operands.

Operation   Same monotonicity        Different monotonicity
+           Standard operation (1)   Inner operation (5)
−           Inner operation (6)      Standard operation (2)
×           Standard operation (3)   Inner operation (7)
/           Inner operation (8)      Standard operation (4)
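The contrast between standard and inner operations, and the dependency problem mentioned above, can be illustrated in a few lines (Python; only addition and subtraction, Eqs. (1), (2), (5) and (6), are shown):

```python
# Intervals as pairs (x1, x2) with x1 <= x2.
def std_add(X, Y):          # Eq. (1): guaranteed enclosure of x + y
    return (X[0] + Y[0], X[1] + Y[1])

def std_sub(X, Y):          # Eq. (2): guaranteed enclosure of x - y
    return (X[0] - Y[1], X[1] - Y[0])

def inner_add(X, Y):        # Eq. (5): inner (narrower) estimate
    a, b = X[0] + Y[1], X[1] + Y[0]
    return (min(a, b), max(a, b))

def inner_sub(X, Y):        # Eq. (6): inner (narrower) estimate
    a, b = X[0] - Y[0], X[1] - Y[1]
    return (min(a, b), max(a, b))

X, Y = (1.0, 2.0), (3.0, 5.0)
print(std_add(X, Y))        # (4.0, 7.0)  guaranteed enclosure
print(inner_add(X, Y))      # (5.0, 6.0)  narrower inner estimate

# Dependency problem: x - x is exactly 0, but standard subtraction
# treats the two occurrences of X as independent intervals:
print(std_sub(X, X))        # (-1.0, 1.0) instead of (0.0, 0.0)
print(inner_sub(X, X))      # (0.0, 0.0)  the inner operation recovers it here
```

The last two lines show why standard bounds are pessimistic on code with repeated variables, and why mixing in inner operations (per Table 1) can tighten the estimate.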

The difficulty is to know the monotonicity of the operands. This requires the computation of the derivatives of each subfunction involved in the expression of the function being studied, which needs a large amount of work. Alt and Lamotte (2001) have proposed the idea of random interval arithmetic, which is obtained by choosing standard or inner interval operations randomly, with the same probability, at each step of the computation. It is assumed that the distribution of the centres and radii of the evaluated intervals is normal. The mean values and the standard deviations of the centres and radii of the evaluated intervals computed using random interval arithmetic are used to evaluate an approximate range of the function:

[ μ_centres − (μ_radii + α·σ_radii),  μ_centres + (μ_radii + α·σ_radii) ]    (9)

where μ_centres is the mean value of the centres, μ_radii is the mean value of the radii, σ_radii is the standard deviation of the radii, and α is between 1 and 3 depending on the number of samples and the desired probability that the exact range is included in the estimated range. Alt and Lamotte suggest that a compromise between efficiency and robustness is obtained using α = 1.5 and 30 samples. The standard deviation of the centres was not used in the calculations because, in the experiments done here, it was always very small. Random interval arithmetic has been applied to compute the ranges of some functions over small intervals. Alt and Lamotte showed that, for single variable problems, random interval arithmetic provides ranges of functions which are much closer to the exact range than standard interval arithmetic. Random interval arithmetic assumes that the operators in all operations are monotonic. This may be the case when the intervals are small and there is only one interval variable. When intervals are wide, as they can be in process engineering problems, the operators cannot be assumed to be monotonic, nor can the independent variables. Such random interval arithmetic therefore uses inner interval operations too often and produces results which are too narrow when intervals are wide, so it cannot be applied to global optimization directly. Standard interval arithmetic provides guaranteed bounds, but they are often too pessimistic; it is used in global optimization to provide guaranteed solutions, but there are problems for which the optimization time is too long. Random interval arithmetic provides bounds closer to the exact range when intervals are small, but bounds that are too narrow when intervals are wide. We would like interval methods that are less pessimistic than standard interval arithmetic and less optimistic than random interval arithmetic.
We expect that random interval arithmetic will provide wider or narrower bounds depending on the probability of standard and inner operations at each step of the computation. Balanced random interval arithmetic is obtained by choosing standard and inner interval operations at each step of the computation randomly with a predefined probability. Standard interval arithmetic corresponds to a probability of 1, and inner interval arithmetic to a probability of 0. The influence of this probability on the resulting ranges of functions should be investigated experimentally, and this is reported here.
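As a concrete sketch of balanced random interval arithmetic (the test function f(x) = (x + x) − x and the probabilities below are chosen here purely for illustration), the fragment evaluates f over [1, 2] with standard and inner operations chosen with probability p at each step, and estimates the range from the sampled centres and radii using μ_radii + α·σ_radii with α = 1.5 and 30 samples, as suggested by Alt and Lamotte:

```python
import random
import statistics

# Intervals are plain (lo, hi) tuples; standard operations give guaranteed
# enclosures, inner operations pair the endpoints the other way round.
def std_add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def std_sub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def inn_add(x, y):
    lo, hi = x[0] + y[1], x[1] + y[0]
    return (min(lo, hi), max(lo, hi))   # reorder an improper inner result

def inn_sub(x, y):
    lo, hi = x[0] - y[0], x[1] - y[1]
    return (min(lo, hi), max(lo, hi))

def f(x, p):
    # f(x) = (x + x) - x; the exact range over [1, 2] is [1, 2].
    add = std_add if random.random() < p else inn_add
    sub = std_sub if random.random() < p else inn_sub
    return sub(add(x, x), x)

def estimate_range(x, p=0.5, samples=30, alpha=1.5):
    results = [f(x, p) for _ in range(samples)]
    centres = [(lo + hi) / 2 for lo, hi in results]
    radii = [(hi - lo) / 2 for lo, hi in results]
    r = statistics.mean(radii) + alpha * statistics.pstdev(radii)
    c = statistics.mean(centres)
    return (c - r, c + r)

random.seed(0)
# p = 1 reproduces standard interval arithmetic, giving [0, 3] here, and
# p = 0 (pure inner arithmetic) collapses to the exact range [1, 2] for
# this function; a balanced p yields an estimate in between that still
# covers [1, 2] and is typically narrower than the standard bound.
print(estimate_range((1.0, 2.0), p=1))   # -> (0.0, 3.0)
print(estimate_range((1.0, 2.0), p=0))   # -> (1.0, 2.0)
print(estimate_range((1.0, 2.0), p=0.5))
```

This reproduces the two limiting cases named in the text (p = 1 standard, p = 0 inner) and lets the probability be swept experimentally, as in the next section.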

3. Experimental Study of the Balanced Random Interval Arithmetic

Balanced random interval arithmetic with different probabilities of standard and inner interval operations was used to evaluate the ranges of several objective functions of difficult global optimization problems over random intervals. One case of typical results is illustrated using the objective function of a multidimensional scaling problem with data from soft drinks testing (Mathar, 1996; Green et al., 1989), and these results are presented here. Ten different soft drinks were tested. Each pair was judged on its dissimilarity, and the accumulated dissimilarities δ_ij are the data for the problem. The goal of this multidimensional scaling problem is to find the configuration of 10 objects, one for each drink, in the two-dimensional space that best helps to interpret the data. The objective function of the problem is

f(X) = Σ_{i<j} ( δ_ij − ‖x_i − x_j‖ )²    (10)

where x_i is the position of object i in the plane.
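A minimal sketch of this stress-type objective (with three points and made-up dissimilarities for illustration, not the soft-drink data) is:

```python
import math

def mds_stress(X, delta):
    # X: object -> planar coordinates; delta: (i, j) -> dissimilarity delta_ij
    return sum((d - math.dist(X[i], X[j])) ** 2 for (i, j), d in delta.items())

# Three objects whose pairwise distances reproduce the dissimilarities
# exactly, so the stress is zero; any other configuration scores worse.
X = {0: (0.0, 0.0), 1: (3.0, 0.0), 2: (0.0, 4.0)}
delta = {(0, 1): 3.0, (0, 2): 4.0, (1, 2): 5.0}
print(mds_stress(X, delta))   # -> 0.0
```

Minimizing this function over the coordinates of all 10 objects is the non-convex global optimization problem whose interval bounds are studied above.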



Figure 3: Real observation (ordinate) vs. PLS prediction (abscissa) for different time scales (1-h, 8-h and 24-h averages).

5. Conclusions
Overall, it was found that low production points tend to dominate other variables, and so must be treated separately, and that medians give slightly better results than averages. Specific recommendations are to:
• Remove low production days using a percentile or threshold.
• Use medians instead of averages, if possible.
• Choose the time scale according to the intended application.
Generally, the same X variables are prominent regardless of the time scale used, but the goodness of fit of the model is heavily influenced by the sampling frequency of key process parameters.


European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.

1031

A Decomposition Strategy for Solving Multi-Product, Multi-Purpose Scheduling Problems in the Paper Converting Industry
Jernström P., Westerlund T. and Isaksson J.*
Abo Akademi University, Process Design Laboratory, Biskopsgatan 8, FIN-20500 Abo
* UPM-Kymmene, WalkiWisa, FIN-37630 Valkeakoski

Abstract
In the present paper, multi-product, multi-purpose, short-term daily production planning in the paper converting industry is considered. In the actual production process, finite intermediate storage is used and, in the discrete operations, successive operations may start before preceding stages have been completely finished. Such features enable flexible manufacturing as well as efficient production plans. The scheduling problem is thus challenging to solve, especially since the number of orders to process, even in the short-term production plan, typically exceeds 100. With the presented decomposition method, even problems of this size can be solved.

1 Introduction
Multi-product, multi-purpose facilities are common in the industry of today, as they offer flexibility in the manufacturing process. The downside of these kinds of facilities is that they are more difficult to operate than dedicated ones, especially when it comes to production planning. In flexible manufacturing systems incorporating multi-purpose machines, there is seldom a single known bottleneck. Instead, the bottleneck moves according to what is produced and when the production of each product has started. Many heuristics have been developed to address the problem; Abadi (1995) and Franca et al. (1996) present some iterative methods. Different approaches may be used, but it turns out that the models become very large as the number of orders grows. The specific problem considered here may be modeled using a continuous time formulation and then effectively solved using decomposition methods. Care must be taken when choosing the decomposition strategy. Two different decomposition approaches are used, one product based and one time based. Real world problems can be solved using both decomposition schemes. The solution time also depends on the choice of objective function.


2 Problem Definition
The paper converting process considered in the example is the process of printing and laminating. The machine part consists of 18 machines, divided into printing machines, laminators and cutters. There are six of each type of machine, but all machines have different characteristics. Four of the cutters are physically attached to a corresponding laminator, and these machine pairs can be modelled as just one machine. All set-up times are sequence dependent. Each product needs one to seven production steps. The average is three steps, one of each operation. For each product there is a predetermined production plan regarding which machines to use, limiting the problem to a strict scheduling problem without task assignment. A schematic overview of the factory is shown in Figure 1. Material flow is possible between any units, with the exception of the cutters and laminators 1 to 4, which are physically connected to each other.


Figure 1. Schematic layout of the factory considered.

3 Basic Formulations

3.1 Allocation constraints
The problem is formulated as a before-after problem with a continuous time representation. A job on a machine is done either before or after another one. Mathematically this can be expressed as:

t_i,m ≥ t_j,m + p_j,m  ∨  t_j,m ≥ t_i,m + p_i,m    (1)

where i and j are jobs, t is the time a job starts, p denotes the processing time for the job including set-up times, and m is the machine. The disjunctive formulation must be rewritten in order to be solvable with existing optimization tools; this process can also be automated (Bjorkqvist and Westerlund, 1999). The formulation can be rewritten using a Big-M formulation.

t_i,m − t_j,m + M·y_i,j ≥ p_j,m    (2)

t_j,m − t_i,m + M·(1 − y_i,j) ≥ p_i,m    (3)

When the total makespan C is used as the objective, it must bound the completion time of every job:

C ≥ t_i + p_i  ∀ i ∈ I    (4)

where I is the set of all products in the production plan. In the case of tardiness, every product has its own due date, which makes the products more distinguishable. The search space is more limited, which shows in the solution time. The objective is then to minimize the total weighted tardiness:

min Σ_{i ∈ I} w_i T_i    (5)

where T_i is the tardiness for job i, and w_i is the corresponding weight. If the total tardiness is zero, the solution is known to be optimal. Schedules in which only a few products are late in the globally optimal solution are easier to solve than tight schedules with more late products.
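The Big-M rewriting of the disjunction can be checked with a few lines of plain Python (the value of M and the job data below are arbitrary illustrations, not the paper's instances):

```python
def big_m_feasible(ti, tj, pi, pj, y, M=10_000):
    # y = 0: t_i - t_j + M*y >= p_j forces job j to finish before job i starts;
    # y = 1: t_j - t_i + M*(1 - y) >= p_i forces job i to finish before job j.
    return ti - tj + M * y >= pj and tj - ti + M * (1 - y) >= pi

# Job j (p=3) runs first and job i (p=2) starts at t=5: only y = 0 is feasible.
assert big_m_feasible(ti=5, tj=0, pi=2, pj=3, y=0)
assert not big_m_feasible(ti=5, tj=0, pi=2, pj=3, y=1)
# Overlapping jobs are infeasible for either value of y.
assert not any(big_m_feasible(ti=0, tj=1, pi=5, pj=5, y=y) for y in (0, 1))
```

For each binary choice exactly one side of the original disjunction is enforced while the other is relaxed by the large constant M, which is how an MILP solver can handle the either-or structure.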

4 Decomposition Strategies

Two decomposition methods have been compared, one product based and one time based. Roslof et al. (2001) developed an algorithm for decomposing large problems for a single machine. The product based decomposition presented here is mainly based on their work. A feasible schedule for the whole set of products is built up starting with a few products, and products are then added until the whole set is scheduled. Once a feasible solution containing the whole set of products exists, better solutions can be obtained by releasing and rescheduling a few products at a time. The complexity of

each updating procedure is greatly reduced compared to the complexity of scheduling the whole problem at once. The solution cannot be guaranteed to reach the global optimum, but a good sub-optimal solution is usually enough for practical industrial use. With increased computational power, the number of jobs released in each iteration can be increased. This is a great advantage over heuristic methods, which usually do not benefit from increased computational power.

The other decomposition strategy is based on the time scope. In this case we observe that the order stock changes constantly as new orders arrive and old ones are completed.

4.1 Product based decomposition
The product based decomposition works in two stages, a build-up and a rescheduling stage. In the build-up stage there is an initial schedule containing only a few products. A few products at a time are added to the initial schedule, and the set is rescheduled with the earlier set preserving its internal order. The procedure is repeated until all products are scheduled.


Figure 2. Illustration of build-up process.

The idea behind the build-up procedure is illustrated in Figure 2. As shown in Roslof et al. (2001), the order in which the products are inserted has an impact on the quality of the schedule obtained. It is therefore useful to apply the same reordering, or post-processing, strategy as proposed in that paper. When all products are inserted, some are released and the system is rescheduled. This improves the quality of the solution, as the elements already scheduled cannot change place in the sequence during the build-up. The results from tests with actual production data are shown in Figure 3 for the total makespan, and in Figure 4 for the total tardiness. In both cases, two strategies using the product-based decomposition were compared to solving a single problem, marked as "MILP" in the figures. In one decomposition run products were added two at a time, and in the other four at a time. The results show that adding fewer products at each stage improves computational effectiveness. In the case of the makespans there were no deviations in the objective function between the strategies. With total tardiness there was a trend of decreasing solution quality with increasing computational effectiveness. All tests were made using CPLEX 7.0 on a 650 MHz Athlon processor with 256 MB RAM on a Linux platform, with default settings (CPLEX, 2000). The solutions obtained using the direct MILP formulation look attractive, but the CPU time to reach them is beyond practical daily use.
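The build-up stage can be sketched for a single machine as follows (the toy jobs and the brute-force insertion search below are illustrative stand-ins, not the paper's MILP formulation):

```python
def total_tardiness(seq, jobs):
    t = tard = 0
    for j in seq:
        p, due = jobs[j]
        t += p
        tard += max(0, t - due)
    return tard

def build_up(jobs):
    # Insert one job at a time at the position minimising total tardiness,
    # preserving the internal order of the jobs already scheduled.
    seq = []
    for j in jobs:
        seq = min((seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)),
                  key=lambda s: total_tardiness(s, jobs))
    return seq

jobs = {"a": (4, 4), "b": (1, 2), "c": (2, 7)}   # job -> (processing time, due date)
schedule = build_up(jobs)
# Scheduling in arrival order gives total tardiness 3; the build-up finds 1.
```

The rescheduling stage reuses the same search: a few jobs are released from the finished sequence and reinserted, so solution quality improves as more computing power allows more jobs to be released per iteration.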


Figure 3. CPU-times for total makespan.

Figure 4. CPU-times for total tardiness.

Table 1. Objective value when using total tardiness as objective function.

           2-added          4-added           direct MILP
Products   CPU-s    Obj     CPU-s    Obj      CPU-s            Obj
6          0.58     0       0.58     0        0.5              0
10         0.88     3759    0.8      3759     41.14            3664
14         1.17     5250    2.21     5250     350              4997
18         2.34     7667    16.51    7576     (40% int gap)    7344
22         4.79     24045   35.09    23954    (61% int gap)    >9500
26         8.61     39248   55.16    39158    na               na
30         18.63    60880   436.85   60772    na               -

4.2 Time based decomposition
For production it is enough to have a good schedule that extends a few days into the future. For the sales department it is good practice to have some sort of schedule of previous orders when taking new ones. The proposed time based decomposition strategy takes both production and sales aspects into account. It preferably uses the product based decomposition as a base in a two-step iterative process with two different objective functions. When solving scheduling problems formulated as MILP problems, feasible solutions are usually found at an early stage. These solutions are in many cases good ones, but their quality is hard to determine due to large integer gaps. The decomposition strategy is to build up a feasible schedule containing all known orders. This large set is solved only until a few feasible solutions are found, due to time limitations. The second stage is to reschedule a smaller set of products taken from the beginning of the big schedule. The smaller set is solved to optimality, and the answer is then returned to the large problem in the form of additional constraints. The larger problem may then be resolved. The smaller set also contains constraints from earlier runs in the form of the detailed schedule obtained earlier. It is worth noting that the objective function for the smaller set must contain a minimization of the makespan. If not, the system may perform worse than when using only the whole set of orders alone. In its simplest form, the sequence may be to minimize tardiness for the whole set of products, and then minimize makespan for the smaller set. Solving times are determined by the methods used for the different phases.

5 Conclusions

Both decomposition schemes are usable in practical applications. Both have been used successfully to schedule real-life situations with more than one hundred orders. The time based model provides a good short-term daily schedule, as well as a good overview of the order situation, with acceptable solving times. Both decomposition methods suffer from reduced performance with a growing number of orders, but there is a difference depending on the objective function used. When using the total makespan as objective function, the problem quickly becomes unsolvable without decomposition methods. When using tardiness as objective function, the problem is no longer as size dependent as it is with total completion time. Instead, there is a clear indication that the solution time is more related to the tardiness: creating a tight schedule takes longer than creating a loose one. In some instances involving large numbers of orders it may be necessary to use both methods together. If the smaller set of products for the operational plan in the time-based decomposition method cannot be solved in one run, the product based decomposition can be applied with success. The solutions provided may be suboptimal, but unlike other heuristic methods it is possible to improve the quality with increased computational power.

6 References
Abadi, I. (1995). Flowshop scheduling problems with no-wait and blocking environments: A mathematical programming approach. Ph.D. thesis, University of Toronto.
Bjorkqvist, J. and Westerlund, T. (1999). Automated reformulation of disjunctive constraints in MINLP optimization. Computers and Chemical Engineering Supplement, pp. SUS14.
Franca, P., Gendreau, M., Laporte, G. and Muller, F. (1996). A tabu search heuristic for the multiprocessor scheduling problem with sequence dependent setup times. International Journal of Production Economics, 43, pp. 79-89.
Roslof, J., Harjunkoski, I., Bjorkqvist, J., Karlsson, S. and Westerlund, T. (2001). An MILP-based reordering algorithm for complex industrial scheduling and rescheduling. Computers and Chemical Engineering, 25, pp. 821-828.
CPLEX 7.0 Reference Manual, ILOG Inc. (2000).

7 Acknowledgements Financial support from the European Union-project Vip-Net (G1RD-CT2000-00318) is gratefully acknowledged.



Utilization of Dynamic Simulation at Tembec Specialty Cellulose Mill Mohamad Masudy Tembec Inc. CP 3000, Temiscaming Quebec Canada JOZ 3R0, email: [email protected]

Abstract A high-fidelity dynamic simulator has been developed for Tembec Specialty Cellulose mill in Temiscaming in order to facilitate evaluation of mill upgrade alternatives. Areas of interest were primarily the screen room and the oxygen delignification stage, where it was desired to acquire an understanding of how the proposed upgrades were going to affect the overall process. Utilizing simulation early in the process design stage has delivered substantial benefits including tighter design and smoother startup while minimizing the design life cycle. Since the startup of the new mill the simulator has evolved into a full mill dynamic simulator, which has been interfaced to the mill data historian PI. This approach has made it possible to evaluate several different process and operational modifications based on real process conditions. The model has been utilized for several applications including the mill energy balance, effluent flows and mill water balance. Incorporating the grade specifications in the simulator has been instrumental in verifying the process dynamics during a grade transition. The existing control strategy along with the interlocks has also been implemented in order to facilitate study of control and operational strategies and to identify the process and equipment limitations.

1. Introduction
Tembec Specialty Cellulose mill in Temiscaming is part of Tembec's sulfite pulp group operations in Canada. In 1998, to secure its long-term productivity and quality goals, the mill decided to evaluate several alternatives to modernize the screen room and the bleachery. The project goals were to:
• Reduce manufacturing costs by employing cost effective and reliable operation
• Increase mill throughput to protect and increase Specialty market share
• Meet the environmental and quality requirements of the Specialty pulp market
Early in the process design, the mill decided to utilize simulation to outline and evaluate several mill upgrade alternatives. This approach made it possible to test hypotheses and assess different alternatives at a small cost, and helped with de-bottlenecking the process before the actual systems were built. Steady state simulation was utilized to provide the required mass and energy balances, while dynamic simulation has later been used to verify design parameters such as equipment sizing.


Gradually the steady state model has been enhanced with dynamic capabilities and connected to the mill data historian PI. The model has then been validated against actual mill data, which has made it possible to resolve discrepancies in the model. The actual control strategy has also been modeled in order to facilitate the analysis of grade and production rate changes. Some of the applications are minimizing overflows and water usage, and predicting pulp quality parameters.

2. Process Description
The cooked chips from 11 batch digesters are blown into a holding vessel called the blow tank. The pulp is then processed in the coarse screening stage to remove the knots from the pulp stream. The accepts from this stage are stored in another vessel used to feed the fine screening stage, which removes the remaining oversize material. The rejects, if not removed, will cause problems in the downstream process and the final product, requiring larger amounts of bleaching chemicals. In the next washing stage, the dissolved solids, i.e. lignin, resin, red liquor, acids, etc., are removed from the pulp via a counter-current washing strategy by addition of hot shower water and by utilizing vacuum-reinforced gravitational drainage. The washing equipment utilized is a Chemiwasher, which borrows its design and drainage principles from the Fourdrinier-type wet end of a paper machine. The concentrated filtrate is used for the dilution demand upstream and the excess is sent to the evaporators. Unlike the previous stages, the oxygen delignification stage after the Chemiwasher aims at removing the remaining solids by chemical additions rather than mechanical separation. In the oxygen tower, the remaining lignin is oxidized. Further downstream, in the extraction tower, the lignin extraction takes place. A press stage is also installed between these two stages. After the extraction tower, the pulp is washed in three-stage counter-current drum washers. The excess filtrate from this stage, after cooling, is sent to the effluent treatment plant. Further downstream, there are several bleaching stages, with washing stages after each bleach tower. After the bleaching plant, the pulp is finally dried on two drying machines, which operate very similarly to other paper machines. Figures 1 and 2 illustrate the schematic overview of the screen room and the oxygen delignification bleaching areas.

3. The Steady-State Model
The goal of developing the steady state simulation was to create a process model with a sufficient degree of detail so that different process designs could be evaluated. The model also incorporated the equipment and process constraints. The data used for the existing equipment were based on actual mill data, while models for several new pieces of equipment were based on data from the suppliers. The design alternatives were then simulated and evaluated one by one. The model was further enhanced with some dynamic capabilities in order to see the impact of equipment sizing on the overall performance. This concerns primarily the storage capacities in the counter-current flow of pulp and liquor in each stage. The model has also been extended to account for dissolved solids carryover and evaporator efficiency, which has served as a tool for specifying equipment efficiencies in negotiations with equipment suppliers. This

approach made the results quantifiable, enabling management to approve the mill upgrade project in 2000. Since then the proposed design has been implemented without significant startup or operational problems. Other areas where this model has been used are process modifications to reduce water demand and to improve the heat recovery system. This model was created using the WinGEMS simulation software by Pacific Simulation.

Figure 1: Screen room area.

Figure 2: Oxygen delignification bleaching area.


4. The Dynamic Model
Steady state process simulation has traditionally been used as a tool for preliminary process design assessment in the pulp & paper industry. But due to the complexity and interaction between different design parameters and equipment constraints, the traditional methodology often provides incomplete and sometimes misleading information. To accurately assess the performance and suitability of alternative process designs, dynamic simulation is indispensable when dealing with complex engineering systems. Furthermore, by taking the equipment and process constraints into consideration, process bottlenecks can be identified early in the design stage. The results can then be used to estimate the capital costs and potential paybacks of different design and operational alternatives. To facilitate this task, the steady-state model was converted to a full dynamic model, which was further extended to a mill-wide model covering the digesters to the drying machines and the steam plant. Much of the work was devoted to ensuring that all necessary and relevant process streams and equipment were accounted for and correct. The dynamic simulation software utilized here was CADSIM Plus by Aurel Systems.

5. Model Validation
The original steady state model was based on design data. After the mill upgrade was completed, the dynamic model was validated against actual mill data acquired from the mill data historian. In order to be able to run the model beyond normal operating conditions, the model had to be made robust enough to handle extreme conditions such as startups and shutdowns. Therefore the control strategy and DCS logic had to be implemented. Furthermore, a DDE link between the mill data historian PI and the dynamic model was established. This approach has made it possible to verify the simulation control strategy and tuning parameters based on actual process conditions.

6. Decision Support
Utilizing dynamic process simulation as a tool for decision support in the process industry has both short and long term advantages. Once the new upgrades were commissioned, the mill decided to utilize the model to solve specific problems and optimization issues. To achieve this, the mill simulation and operational costs were modeled to enable some degree of manual optimization. A credible decision support system requires real process data, which in this case was achieved by connecting the simulation model to the PI data historian through a DDE link. This approach is intended primarily for process engineers to study operational or design modifications in the process based on offline process data. It is best suited to evaluating what-if scenarios in order to understand how changes in the operational parameters influence the process outputs. The long-term benefit is enhanced process understanding. Furthermore, it provides a tool with which actual process scenarios can be played back to evaluate past performance, quantify the cost of operation and predict future benefits. Figure 3 illustrates a snapshot of this model from the actual simulation.


Figure 3: Virtual bleaching area control room.

There is work underway to use this as an online tool, where the simulation runs alongside the actual process to predict future process behavior based on the operating conditions.

7. Stochastic and Deterministic Models
Tembec also employs multivariate statistical analysis and modeling tools in areas where first-principles models are not readily available. By combining these statistical models of the quality parameters, built from both laboratory and online measurements, with the simulation model, we have been able to predict pulp quality in the presence of variable retention time in the process. This has been used to predict finished product quality such as DCM resin and S18. The first-principles dynamic model is primarily used to account for variable retention time compensation, while the statistical models are used to model scarce lab measurements with fixed lag times in each bleaching stage. SIMCA by Umetrics has been used to extract the statistical models. Figure 4 illustrates pulp quality prediction results for finished pulp viscosity.


Figure 4: Observed versus predicted pulp viscosity.

8. Future Enhancements
In some other similar projects we have been using steady-state optimizers along with the simulation to minimize e.g. dissolved solids carryover and energy demand in the evaporation plant. Although this approach is not a real-time optimizer, in many cases it can be used under normal operating conditions. The analysis of more complex systems can be accomplished utilizing statistical methods. Another potential benefit of this approach is to utilize Design of Experiments methodology to run trials on the simulated process. This will facilitate the study of cause and effect and the investigation of dependencies in the process.

9. Conclusions
Process simulation can provide significant value-in-use in process design. A detailed dynamic simulation can be used throughout the project lifecycle and later serve as a tool for process optimization. This can result in substantial benefits including tighter design, a reduced project lifecycle, smoother startups and optimized production. The close match of the simulation to the real process gives confidence that the effect of modifications in process design and operation can be accurately predicted. Utilizing the actual mill data has been a valuable way to identify the feasible operational space and actual disturbances. This has made it possible to study design and operational modifications based on actual process constraints and conditions. Combining statistical modeling techniques can also complement traditional modeling, particularly where first-principles models are not readily available. This has also reduced the degree of complexity of the simulation.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Synthesis of Heat Recovery Systems in Paper Machines with Varying Design Parameters

Frank Pettersson and Jarmo Söderman
Heat Engineering Laboratory, Åbo Akademi University, Biskopsgatan 8, FIN-20500 Åbo, Finland

Abstract
The heat recovery system (HRS) is a vital part of a paper machine when it comes to the overall energy economy of papermaking. For a typical newsprint machine more than 60% of the exhaust energy from the dryer section can be recovered, corresponding to a recovery of about 30 MW. The synthesis of a HRS is a decision process in which the target is, on the one hand, to achieve maximal energy recovery and, on the other hand, to obtain this recovery with minimal investment costs. These goals are contradictory, and thus the problem is to find a solution minimizing the overall costs, considering both energy and investment costs simultaneously. This synthesis task can be performed with, e.g., pinch analysis or optimization methods. One of the first tasks for the designer is to decide which design parameters, including process flow streams, temperatures and heat transfer coefficients, are to be applied. This task is in general not trivial, and the result has a great impact on the overall economy of the final HRS. One challenge is how to take into account uncertainties and known variations in some parameters. The desired design must be capable of handling all evolving situations, but it should also be the most economical one when considering the duration of the different operational situations. In this work the importance of taking the variations and uncertainties into account at the design stage is shown with a case study.

1. Introduction
Designing a HRS requires decisions about which matches between the streams should be implemented, as well as about the sizes of the heat exchangers. Formulating the design task as an optimization problem results in a mixed-integer non-linear programming (MINLP) problem. The structure, i.e. the decision on how to connect the heat exchangers, is defined with binary variables, while the areas and temperatures are defined with real-valued variables. Most numerical methods for the solution of optimal heat exchanger networks (HEN) decompose the problem in order to simplify the solution procedure. The problem can, e.g., be decomposed into three subproblems: a linear programming (LP) problem, a mixed-integer linear programming (MILP) problem and a non-linear programming (NLP) problem, as presented in, e.g., Floudas et al. (1986). This decomposition technique is based on the same principles as pinch analysis (Umeda et al., 1978; Linnhoff et al., 1983), where the energy costs are considered dominant and thus minimum utility consumption has first priority. It is, however, obvious that this is not always a correct assumption. A method solving the whole HEN

design problem simultaneously has been presented by, e.g., Yee and Grossmann (1990). One major problem with the simultaneous methods is that they are often restricted in the number of streams and the complexity of the models due to the computational work. The HRS design problem has been solved as an MINLP problem by Söderman et al. (1999) by discretization into temperature intervals. For all possible matches between these temperature intervals, areas and transferred heat can be evaluated in advance. The problem can thus be stated with linear constraints and a nonlinear objective function, due to the economy of scale. Evolutionary programming methods are general stochastic techniques that can deal with large problems without being restricted by non-convexities and discontinuities in the models. For stochastic techniques the solution cannot be guaranteed to be the global optimum. On the other hand, the techniques work with a large set of possible solution candidates and are thus assumed to have good properties in screening the most promising regions of the feasible search space for multimodal problems. The design of HENs has been addressed with different hybrid methods, e.g. in Athier et al. (1997), where simulated annealing together with NLP was used, and in Lewin (1998), who solved the problem with a genetic algorithm and an NLP subproblem.
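The structural search performed by such evolutionary methods can be sketched in a few lines. Everything below is illustrative: candidate HEN structures are encoded as bit strings (1 = match exists), a hypothetical TARGET structure plays the role of the unknown optimum, and a trivial cost function stands in for the sizing NLP that a real hybrid method would solve for each structure.

```python
import random

random.seed(1)
N_MATCHES = 9                           # e.g. 3 hot x 3 cold streams
TARGET = [1, 0, 0, 0, 1, 0, 0, 0, 1]    # pretend-optimal structure (assumed)

def cost(structure):
    # Stand-in for "solve the sizing NLP for this structure and return
    # its annual cost": here simply the Hamming distance to TARGET.
    return sum(a != b for a, b in zip(structure, TARGET))

def mutate(structure, rate=0.15):
    # Flip each bit with probability `rate`.
    return [b ^ (random.random() < rate) for b in structure]

pop = [[random.randint(0, 1) for _ in range(N_MATCHES)] for _ in range(20)]
for gen in range(60):
    pop.sort(key=cost)
    survivors = pop[:10]                              # truncation selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = min(pop, key=cost)
print(best, cost(best))
```

Because the population explores many structures in parallel, such a scheme is not derailed by non-convexities in the cost landscape, at the price of losing any global-optimality guarantee.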

Heat exchanger networks are usually designed assuming fixed design parameters representing nominal operating conditions. However, the operating conditions may change considerably over time. Changes may be imposed by varying production loads, raw materials or products. The changes can also be a result of normal degradation of the equipment, e.g. fouling of the heat transfer surfaces. Some of the changes are known, while others can only be estimated. Considering these changes at the design stage may be of utmost importance so that designs with poor overall economy can be avoided. Modelling design uncertainties in a mathematical framework was initiated by Halemane and Grossmann (1983). An approach to deal with uncertainties, described as probability distributions, is to use multi-period models. With a large number of uncertain parameters the problem size will, however, become extensive. This is especially true when high accuracy is desired and many periods have to be used. A hybrid method based on evolutionary programming and non-linear programming for HRS design problems under uncertainty has been presented by Pettersson and Söderman (2002).

2. Paper Machine
A modern paper machine may have a yearly production of 350,000 tonnes of paper and a machine speed of up to 2000 m/min. The papermaking process is highly energy consuming. Manufacturing one tonne of newsprint requires about 600 kWh of electricity and 1500 kWh of heat energy. Almost all the heat energy is used in the drying section. Drying of the paper web from about 35-50% dry content to about 90% dry content is performed in the dryer section of the paper machine, where the cylinders are heated with low-pressure steam. For removal of moisture, supply air at 95°C is blown into the paper

hood, resulting in exhaust air from the hood with a moisture content of about 150 g H2O/kg dry air at a temperature of 80°C. Almost all the heat energy used in the process can be found in the exhaust air, which is thus very suitable for heat recovery. The main target of heat recovery is naturally considerable cost savings.

The recovered heat can be used to heat a variety of streams: e.g. the circulation water heating the machine room ventilation air, the process water, and the supply air to the dryer section. This can be seen in Fig. 1, where the main streams are depicted. In the figure no decisions about the matches in the heat recovery section are yet indicated. The moist exhaust air from the hood is removed at three different locations.


Fig. 1. The HRS design problem with no matches indicated in the HRS section.

To decide the configuration of the HRS we thus have to consider three hot streams and three cold streams, which is quite a small number of streams compared to general HEN synthesis problems. The condensation taking place in the heat exchangers makes the task more complicated, as both the heat capacities and the overall heat transfer coefficients (Fig. 2) vary strongly.

Fig. 2. Heat transfer coefficients (kW/m²K) for moist air (135 g H2O/kg dry air) to process water (left) and supply air (right) for different combinations of temperatures.


3. Case Study
In this study two HRS networks are obtained with the hybrid method presented in Pettersson and Söderman (2002). The first one is obtained when nominal operational parameters are considered, and the second when estimated variations in some parameters are also observed. The design task is the one depicted in Fig. 1, with the following objective function for evaluating the quality of a possible solution:

min I = K(k1·n_aa + k2·A_aa^k3 + k4·n_aw + k5·A_aw^k6) + k7·Q_HU    (1)

K is the annualizing factor and A_aa is the total heat transfer area of the air-air heat exchangers, given in 1000 m². n_aa indicates the number of different air-air matches; thus k1 is a fixed cost for each new match, k2 an area cost factor and k3 a constant with values typically between 0.6 and 1. In a similar manner the investment costs for the air-water matches are obtained using the constants k4, k5 and k6. Q_HU is the amount of hot utility needed to heat all cold streams to the desired temperature levels after the heat recovery section, and k7 is an annual hot utility cost factor. All heat exchangers are considered to be of countercurrent type. Costs for cooling are not included, because the exhaust air of a paper machine is simply released outdoors. Investment costs for heat exchangers for external heating are also excluded from the problem, because they are assumed to be needed in startup situations and are therefore included in the final design in any case. The investment costs for these are also proportionally low, due to the greater temperature differences. The factors used in the objective function are k1=0.02, k2=0.24, k3=0.6, k4=0.04, k5=0.22 and k6=0.8, with investment costs in M€. The factor 0.16 has been used to annualize the investment costs, corresponding to a 10% interest rate and a 10-year depreciation time. The hot utility cost is 16 €/MWh.
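With the factor values above, the annual-cost objective of Eq. (1) can be sketched as a small function. Note one loudly flagged assumption: the paper gives the utility price (16 €/MWh) but not the annual operating time, so k7 is derived here from an assumed 8000 h/a; the example arguments in the call are illustrative, not the paper's solution.

```python
K = 0.16                       # annualizing factor (10% interest, 10 a)
k1, k2, k3 = 0.02, 0.24, 0.6   # air-air matches: fixed, area, exponent
k4, k5, k6 = 0.04, 0.22, 0.8   # air-water matches
k7 = 16e-6 * 8000              # M EUR per MW per year (assumed 8000 h/a)

def annual_cost(n_aa, A_aa, n_aw, A_aw, Q_hu):
    """Annualized total cost, M EUR/a.

    n_aa, n_aw -- number of air-air and air-water matches
    A_aa, A_aw -- total areas in 1000 m^2 (economy-of-scale exponents k3, k6)
    Q_hu       -- hot utility demand, MW
    """
    invest = k1 * n_aa + k2 * A_aa**k3 + k4 * n_aw + k5 * A_aw**k6
    return K * invest + k7 * Q_hu

# Illustrative structure: one air-air match (1100 m^2), three air-water
# matches totalling 6900 m^2, and 2 MW of residual hot utility demand.
print(round(annual_cost(n_aa=1, A_aa=1.1, n_aw=3, A_aw=6.9, Q_hu=2.0), 3))
```

The concave area terms (exponents below 1) are what make the objective nonlinear even when the constraints are linear, as noted in the introduction.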

When searching for the point-optimal design, the moisture content of the exhaust air is fixed at the most probable operational condition, x=135 g H2O/kg dry air. The flows of process water, supply air and circulation water are fixed at 70, 70 and 200 kg/s, respectively, and for each of the three locations for removal of moist air the flow is 30 kg/s. With these parameters, the obtained point-optimal structure and the heat exchanger areas are indicated in Fig. 3.

Fig. 3. Point-optimal design.

Fig. 4. Flexible design obtained considering variations.


Several parameters are, however, known to vary during operation, e.g. the moisture content of the exhaust air from the dryer section, the mass flows of process water and circulation water, most initial temperatures, and the heat transfer coefficients. The variations do not occur due to unsuccessful process control, but due to changes in the manufactured paper quality or in the outdoor or fresh water temperatures. In this work, the variations in the moisture content of the exhaust air and in the flow of process water have been considered important and expected to have an impact on the solution. The variations in moisture content and process water flow are estimated using normal distribution functions. The objective function is thus reformulated so that a weighted mean over the different possible operational points can be evaluated by

I = ∫∫ (a + b·x + c·x² + d·F + e·F² + f·x·F) · 1/(s_x·√(2π)) · exp(−(x−μ_x)²/(2s_x²)) · 1/(s_F·√(2π)) · exp(−(F−μ_F)²/(2s_F²)) dx dF    (2)

incorporating the expected mean values, μ_i, and variances, s_i. The parameters a,...,f in the approximating function are obtained by using, e.g., a linear least-squares formulation on a number of evaluations at different operational points. The assumed variations in this case are thus described by a mean moisture content of 135 g H2O/kg dry air with a variance of 20 g H2O/kg dry air. Similarly, for the flow of process water a mean flow of 70 kg/s and a variance of 5 kg/s have been estimated to describe the evolving operational points best. Considering these variations, the resulting design can be seen in Fig. 4. Comparing the two obtained designs at the basic operating point (x=135 and F=70), the annual costs are 0.634 M€ for the point-optimal design and 0.697 M€ for the flexible one. Thus, the point-optimal design is 9% cheaper. On the other hand, when taking into account the expected variations, the annual overall costs are 0.806 M€ for the point-optimal design and 0.761 M€ for the flexible design. Now it can be observed that the point-optimal solution is 6% more expensive than the flexible one. The differences between the costs of the point-optimal design and the flexible design are illustrated in Fig. 5. Negative values in Fig. 5 indicate that the point-optimal solution is cheaper at those operational conditions.
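Because the surrogate in Eq. (2) is quadratic and the distributions are normal, the double integral has a closed form and need not be evaluated numerically. The sketch below fits the quadratic by least squares on sampled operating points and then takes its expectation analytically; the cost evaluations are a synthetic stand-in, and the quoted "variances" of 20 and 5 are treated here as standard deviations (an interpretation, not stated explicitly in the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
xs = rng.uniform(95, 175, 50)    # moisture content, g H2O/kg dry air
Fs = rng.uniform(55, 85, 50)     # process water flow, kg/s
# Stand-in annual-cost evaluations at the sampled operating points:
cost = (0.5 + 0.002 * xs + 1e-5 * xs**2
        + 0.001 * Fs + 2e-5 * Fs**2 + 1e-5 * xs * Fs)

# Least-squares fit of I(x,F) = a + b x + c x^2 + d F + e F^2 + f x F
A = np.column_stack([np.ones_like(xs), xs, xs**2, Fs, Fs**2, xs * Fs])
a, b, c, d, e, f = np.linalg.lstsq(A, cost, rcond=None)[0]

mu_x, s_x = 135.0, 20.0
mu_F, s_F = 70.0, 5.0
# For independent normals: E[x] = mu, E[x^2] = mu^2 + s^2, E[xF] = mu_x mu_F
expected_cost = (a + b * mu_x + c * (mu_x**2 + s_x**2)
                 + d * mu_F + e * (mu_F**2 + s_F**2) + f * mu_x * mu_F)
print(round(expected_cost, 4))
```

Comparing designs then reduces to comparing their expected costs, which is exactly how the point-optimal and flexible designs are ranked in the text.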

Fig. 5. Differences between costs (M€/a) for the point-optimal and the flexible design.


One may argue that selecting the most commonly occurring operational condition is not completely fair, and that a more demanding operational point should be selected. With the parameter values fixed at x=120 and F=75, a new point-optimal design, shown in Fig. 6, can be obtained, resulting in an annual cost of 0.792 M€ when the whole range of operational points is considered. It can be noted that this design is still 4% more expensive than the flexible one. The differences between the costs of the point-optimal design and the flexible design are illustrated in Fig. 7.


Figure 4: Experimental versus multiple-step-ahead predictions of water bed content for all experimental data (600 points), MSE=0.0887.


Figure 5: Multiple-step-ahead predicted values of water production rate in dry basis and experimental values for culture 3.


Figure 6: Multiple-step-ahead predicted values of bed temperature and experimental values for culture 1.


5. Conclusions
The results show very good estimation capabilities on validation data using the first proposed scheme, while only the water bed content estimates remain good under the second scheme. The results confirm the capacity of this kind of neural model to track complex dynamic systems when a priori knowledge is conveniently introduced. Hence, the developed model can be used on-line, for example in a nonlinear model predictive control scheme.

6. References
Aguiar, H.C. and Filho, R.M., 2001, Neural network and hybrid model: a discussion about different modeling techniques to predict pulping degree with industrial data, Chem. Eng. Sci., 56:565-570.
Gontarski, C.A., Rodrigues, P.R., Mori, M. and Prenem, L.F., 2000, Simulation of an industrial wastewater treatment plant using artificial neural networks, Comp. Chem. Eng., 24:1719-1723.
Peña y Lillo, M., Pérez-Correa, R., Agosin, E. and Latrille, E., 2001, Indirect measurement of water content in an aseptic solid substrate cultivation pilot-scale bioreactor, Biotech. Bioeng., 76(1):44-51.
Vlassides, S., Ferrier, J.G. and Block, D.E., 2001, Using historical data for bioprocess optimization: modeling wine characteristics using artificial neural networks and archived process information, Biotech. Bioeng., 73(1):55-68.
Zorzetto, L.F.M., Filho, R.M. and Wolf-Maciel, M.R., 2000, Process modelling development through artificial neural networks and hybrid models, Comp. Chem. Eng., 24:1355-1360.

7. Acknowledgements
Fondecyt Grants 1010179 and 1020041 (Chilean Government) and Ecos-Conicyt Grant C99-B01 (French Cooperation).



On-Line Monitoring and Control of a Biological Denitrification Process for Drinking-Water Treatment

M.F.J. Eusebio, A.M. Barreiros, R. Fortunato, M.A.M. Reis, J.G. Crespo, J.P.B. Mota*
Departamento de Química, Centro de Química Fina e Biotecnologia, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, 2829-516 Caparica, Portugal

Abstract
Online monitoring and control of the biological denitrification process in a cell-recycle membrane reactor has been developed and implemented at laboratory scale. The system has been tested with real groundwater contaminated with nitrate. It is shown that a simple feedforward control strategy, which adjusts the feed rate of the carbon source to maintain the optimum inlet carbon/nitrate ratio, is effective at reducing both nitrate and nitrite concentrations in the treated water below the maximum admissible values.

1. Introduction
In many areas of the world groundwater is the primary source of drinking water. Unfortunately, groundwater supplies are increasingly contaminated with nitrate, often exceeding the maximum admissible value of 50 mg NO₃⁻/L set by the World Health Organisation and the European Community (ENV/91/24, March 18, 1992). Water contamination by nitrate is caused by the intensive use of chemical fertilisers and by untreated industrial and domestic wastewaters (Bouchard et al., 1992). The biological denitrification process eliminates nitrate by completely reducing it to gaseous nitrogen. This is in contrast to physico-chemical remediation processes, such as ion exchange, reverse osmosis and electrodialysis, in which the pollutant is merely transferred and/or concentrated. The major disadvantages of the conventional biological denitrification process are the microbial and secondary contamination of the treated water (Bouwer and Crowe, 1988; Liessens et al., 1993a,b). Microbial contamination is mainly caused by the presence of the microorganisms used in the biological process and can be eliminated by using ultra/microfiltration membrane bioreactors. The membrane effectively retains the microbial culture inside the reactor so that it may be operated at low hydraulic residence time. It has previously been demonstrated that the membrane bioreactor ensures a high nitrate removal rate (up to 7.7 kg NO₃⁻/m³ reactor·day) and residual concentrations of nitrate and nitrite in the treated water below the maximum admissible values (Barreiros et al., 1998). The secondary contamination of drinking water is due to the presence of soluble organic materials, which are produced during the biological treatment process (metabolic by-products) and/or are added in excess as electron donors for the biological nitrate reduction.
In order to avoid contamination of the treated water by residual carbon, the amount of electron donor added must be set according to the nitrate concentration in the polluted water. Ideally, this amount should equal the quantity required for the dissimilative nitrate reduction plus the amount required for cell growth (assimilation) and maintenance (Blaszczyk et al., 1981; Her and Huang, 1995; Constantin and Fick, 1997). If nitrate is not fully reduced to gaseous nitrogen, accumulation of intermediates, mostly nitrite, is

likely to occur. In fact, the toxicity of nitrite is higher than that of nitrate; the maximum admissible value for nitrite has been set at 0.1 mg NO₂⁻/L (ENV/91/24, March 18, 1992). The concentration of nitrate in groundwater has seasonal fluctuations due to climatic and environmental factors. In order to have an efficient denitrification process, the amount of carbon source must be regulated according to the fluctuations of the nitrate concentration. This can be ensured by using an adequate control strategy. The aims of the present study are:
• To develop and implement an on-line monitoring strategy for the biological denitrification process in a cell-recycle membrane bioreactor;

• To develop a simple, yet effective, control scheme to maintain the nitrate and nitrite concentrations below the maximum admissible values for drinking water by adjusting the feed rate of the carbon source.
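The feedforward law implied by these aims can be sketched in a few lines: the carbon-source pump rate is set so that the inlet C/N ratio stays at a chosen setpoint as the measured nitrate concentration varies. All names, the stock-solution concentration and the mass basis of the C/N ratio below are illustrative assumptions, not values from the paper.

```python
def carbon_feed_rate(q_water_L_h, nitrate_mg_L, cn_setpoint=1.39,
                     carbon_conc_mg_L=50000.0):
    """Carbon-source pump rate (L/h) for a given influent nitrate load.

    q_water_L_h      -- influent water flow rate, L/h
    nitrate_mg_L     -- measured influent nitrate concentration, mg/L
    cn_setpoint      -- desired inlet C/N mass ratio (1.3-1.4 per the study)
    carbon_conc_mg_L -- assumed carbon concentration of the stock solution
    """
    carbon_load = cn_setpoint * nitrate_mg_L * q_water_L_h  # mg carbon per hour
    return carbon_load / carbon_conc_mg_L                   # L/h of stock

# Example: 10 L/h of groundwater at 160 mg/L nitrate (mid-range for Estarreja)
print(round(carbon_feed_rate(10.0, 160.0), 5))
```

Being purely feedforward, the scheme needs only the online nitrate measurement and no feedback from the outlet, which is why the measurement delays discussed later do not compromise it.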

2. Experimental Setup

The denitrifying mixed culture was obtained from sludge taken from a wastewater treatment plant, enriched in a synthetic medium (Barreiros et al., 1998), and grown under anoxic conditions at 28°C and pH 7.0. The groundwater employed (Estarreja, Portugal), which was contaminated with nitrate concentrations in the range 140-190 mg NO₃⁻/L, was supplemented with phosphate before each run. The experimental setup is shown in Figure 1. It comprises a cell-recycle membrane bioreactor, measuring equipment and sensors, and an online monitoring and control system. The membrane reactor consists of a stirred vessel with an effective volume of 0.45 L, coupled to a membrane module. The contaminated water is pumped tangentially along the membrane surface with a cross-flow velocity of 1 m/s, generating two streams: a permeate stream free of cells (treated water), and a retentate stream (with cells) which is

Figure 1: Schematic diagram of the membrane bioreactor and online monitoring system. Thick lines represent streams, whereas thin lines represent transmission signals.

recirculated to the reactor. The system is operated continuously by feeding it with contaminated water to be treated and removing part of the permeate, which is free of nitrate and nitrite. The permeate is partially recycled to the system in order to guarantee the desired hydraulic residence time for each experiment. A hollow-fiber polysulfone membrane with an effective area of 0.42 m² was used throughout this study. The internal diameter of the fibers is 0.5 mm. The membrane molecular weight cut-off is 500 kDa, to completely retain suspended solids, supracolloidal material, and micro-organisms. The hydraulic permeability of the membrane at 28°C is 875 L/(m²·h·bar). The online monitoring and control system measures the nitrate, nitrite and dissolved organic carbon (DOC) concentrations, using an adjustable sampling rate, and controls the flow rate of the carbon source added to the bioreactor. To check the accuracy of the manipulated variable, this flow rate is also measured by recording the weight change of the carbon source. The permeation conditions of the membrane are inferred by measuring the transmembrane pressure at the inlet, outlet, and permeate of the ultrafiltration system. A snapshot of the console window of the monitoring and control interface is reproduced in Figure 2. The software interface was implemented in LabVIEW. The cell concentration was determined by optical density (OD) measurement at 600 nm and compared with an OD versus dry weight calibration curve. Nitrate and nitrite concentrations were measured using a segmented flow analyzer (Skalar). Nitrite detection was based on the colorimetric reaction with N-(1-naphthyl)-ethylenediamine; nitrate was detected as nitrite by the same method after reduction with hydrazine. DOC was also measured using the segmented flow analyzer. Carbon compounds were detected as

Figure 2: Snapshot of the console window of the monitoring and control interface developed in LabVIEW. The peaks represent online calibrations of the nitrate and nitrite measurements against a standard sample (100 ppm or 10 ppm).


CO2 in a refractive-index detector after digestion with persulfate under UV radiation. Acetate was measured by high-pressure liquid chromatography (HPLC) using a reverse-phase column (Hamilton PRP-X300). Because of the inherent characteristics of the instrumentation used to monitor the nitrate, nitrite and DOC concentrations, the online measured values of these variables are delayed by 5, 10, and 20 minutes, respectively. Note, however, that these delays do not reduce the effectiveness of the monitoring and process control system, since they are in general much smaller than the characteristic time of the disturbances in a real influent stream.

3. Results and Discussion
As stated in the introduction, the ratio of carbon consumed to nitrate reduced (C/N) is the key variable for effectively controlling the denitrification process. Using the results presented here, we shall show that when the carbon source is added according to an optimum inlet C/N ratio, both nitrate and nitrite concentrations in the treated water are kept below the maximum admissible values. Figure 3 shows the measured concentrations of nitrate, nitrite, and DOC during the denitrification process subjected to different inlet C/N ratios. The system was first operated to steady state using an inlet C/N ratio of 1.55. During this initial transient period (approximately 30 hours) both nitrate and nitrite accumulate, after which the concentrations of both pollutants in the treated water drop to values below the maximum admissible ones. The purpose of this preliminary experiment was to simulate the startup of the water treatment plant.

Figure 3: Measured nitrate, nitrite and DOC concentration histories in the outlet stream during the denitrification process subjected to different inlet C/N ratios (whole run).



Figure 4: Measured nitrate, nitrite and DOC concentration histories in the outlet stream during the denitrification process subjected to continuous cycling of the inlet C/N ratio between 1.29 and 1.39.

Then several tunings of the inlet C/N ratio were performed to determine the optimum operating value and assess the responsiveness of the system. The C/N ratio was first decreased to 1.29; the system responded quickly to the imposed step change in the inlet C/N ratio (Figure 4), and both nitrate and nitrite concentrations increased. Under these new operating conditions the treated water did not meet the quality requirements of drinking water: the nitrite concentration was above 0.1 mg NO₂⁻/L, although the nitrate concentration was below the maximum admissible value. Nitrite accumulation was caused by the limitation of carbon due to the low C/N ratio used. The inlet C/N ratio was then increased from 1.29 to 1.39. Again, the system responded quickly and reduced the nitrite concentration below 0.1 mg NO₂⁻/L. Finally, the system was subjected to continuous cycling of the inlet C/N ratio between 1.29 and 1.39 to test its responsiveness. The results confirm that the optimum inlet C/N ratio that avoids nitrate and nitrite accumulation is in the range 1.3 < C/N < 1.4. This value is very consistent with the values obtained in continuous tests using a pure denitrifying culture and synthetic medium (Barreiros et al., 1998). It is roughly 30% larger than the value calculated from the stoichiometry of the dissimilative reduction of nitrate with acetate as carbon source, and is very close to the value of 1.4 predicted by the empirical equation proposed by Mateju et al. (1992), which also takes into account the amount of carbon used for cell synthesis.


4. Conclusions
Online monitoring and control of the biological denitrification process in a cell-recycle membrane reactor has been developed and implemented at laboratory scale. The system has been tested with real groundwater contaminated with nitrate. The results presented in this study show that the C/N ratio is the key parameter for guaranteeing an efficient denitrification process. A simple feedforward control strategy that adjusts the feed rate of the carbon source to maintain an inlet C/N ratio of 1.39 is effective at reducing both nitrate and nitrite concentrations in the treated water below the maximum admissible values. Moreover, this control strategy based on the C/N ratio is easy to implement in a water treatment plant and does not increase the complexity of its operation at industrial scale.

Acknowledgement. Financial support for this work has been provided by Fundação para a Ciência e a Tecnologia under contract Praxis XXI 3/3.1/CEG/2600/95.

6. References
Barreiros, A.M., C.M. Rodrigues, J.G. Crespo, M.A.M. Reis, 1998, Bioprocess Eng. 18, 297.
Blaszczyk, M., M. Przytocka-Jusiak, U. Kruszewska, R. Mycielski, 1981, Acta Microbiol. Polon. 30, 49.
Bouchard, D.C., M.K. Williams, R.Y. Surampalli, 1992, J. AWWA 84, 85.
Bouwer, E.J., P.B. Crowe, 1988, J. AWWA 80, 82.
Constantin, H., M. Fick, 1997, Water Res. 31, 583.
Her, J.J., J.S. Huang, 1995, Biores. Tech. 54, 45.
Liessens, J., R. Germonpre, S. Beernaert, W. Verstraete, 1993a, J. AWWA 85, 144.
Liessens, J., R. Germonpre, I. Kersters, S. Beernaert, W. Verstraete, 1993b, J. AWWA 85, 155.
Mateju, V., S. Cizinska, J. Krejci, T. Janoch, 1992, Enzyme Microb. Tech. 14, 170.



The Role of CAPE in the Development of Pharmaceutical Products

Daniel J. Horner, PhD, BEng, AIChemE and Parminder S. Bansal, BEng, CEng, MIChemE
AstraZeneca R&D Charnwood, Bakewell Road, Loughborough, Leicestershire, LE11 5RH, E-mail: [email protected], [email protected]

Abstract
One of the key challenges facing pharmaceutical companies is to reduce the time to market and the cost of goods of their products whilst continuing to comply with, and exceed, stringent regulatory requirements. With the ever increasing need for shorter drug development periods, more efficient tools and methods of working are required. The role of the Process Engineer in the development of a candidate drug is to actively seek a robust and scaleable process through the application of experimental and theoretical process/chemical engineering science. They are expected to bring a long-term view to the process development strategy that ensures SHE issues are raised and resolved, bulk drug capacity requirements are achieved and the most appropriate innovative technologies are exploited. In this paper, a variety of CAPE techniques employed at AstraZeneca to generate a better understanding of the chemistry and scale-up challenges for our products will be discussed. The use of these tools across the various functions represented on development projects allows for close collaboration and consistent methods of working.

1. Introduction
Batch process modelling techniques are being utilised during the drug development process, allowing route selection, equipment requirements, manufacturability, siting and SHE issues to be identified and resolved. The models are highly flexible and can be used to simulate scale-up from the laboratory through the pilot plant and into full-scale manufacture. The process can be optimised during development through the use of these tools, providing minimal risk to the product along with significant time and cost benefits. During the life cycle of a project a number of different campaigns are undertaken. Scale-up and scale-down issues are arguably the most important areas of work for process engineers. An understanding of the scientific fundamentals that affect scale-up (mass and heat transfer phenomena, heterogeneous reactions, crystallisation, isolation and drying, mixing/agitation, reaction kinetics and safety) is critical to successful production. Traditionally, these have been the remit of the Process Engineer, resulting in an exclusively engineering-focused solution. The use of CAPE tools in conjunction with experimental work promotes a collaborative approach, improving the interface between science and engineering to find the best technical solution to a problem.


The development of powerful dynamic simulation packages has greatly increased the understanding of processes and allows for improved manufacturability and equipment specification. With candidate drugs becoming increasingly complex and in limited supply during development, a general lack of information exists, so the use of property prediction tools is important to ensure the dynamic models are as accurate as possible. The use of process control software to control scale-down reactors within our process engineering laboratory provides an important link between laboratory preparations, the pilot plant and ultimately full-scale manufacture. As well as ensuring consistent production methods and highlighting potential manufacturing issues, it allows large amounts of data to be collated quickly, providing invaluable scale-up information for later accommodations. The combination of laboratory experimentation and CAPE technology is providing AstraZeneca with the means to reduce costs and development time, whilst producing optimised and robust processes for our products. A number of real-life case studies are detailed below which demonstrate the effectiveness of CAPE tools in the development of pharmaceutical products.

2. Case Study 1 - Chlorination / Oxidation of Compound X

[Reaction scheme: a sulphide substrate (R-S-R) is converted to the sulphonyl chloride (X-SulphonylCl) using (1) acetic acid / water and (2) chlorine gas.]

This is a step in a process that was recently developed at AstraZeneca. The original process description specified 10 mole equivalents of chlorine. A postulated reaction mechanism suggested that only 3 mole equivalents were required, so the remainder was effectively wasted, and attention was focused on improving the mass transfer of the chlorine. Laboratory experiments showed that the reaction kinetics are extremely fast and the reaction is highly exothermic. The scale-up mixing utilities provided with DynoChem were used to set the agitation rates needed in the laboratory to ensure the gas was fully dispersed. A model was developed in DynoChem to enable accurate predictions of scale-up to be made. To ensure its validity, experimental data was also fed into the model, from which the necessary scale-up parameters could be derived. RC1 calorimetry data was used to measure the exotherm during the reaction, and experiments were performed to assess the saturation concentration of chlorine in the solvent mixture. Laboratory and plant-scale temperature trials were carried out and the data used to derive actual heat transfer coefficients. The use of DynoChem allowed the data to be processed efficiently. The model predicted that an excess of chlorine was not required if the addition rate was controlled. This finding was confirmed in the laboratory, significantly reducing the raw material and plant scrubber requirements. The batch temperature is limited to less than 15°C and upon scale-up the model showed the

reaction to be heat transfer limited, as opposed to mass transfer limited, so further work was carried out to investigate the effect of different jacket temperatures. From knowledge of the plant vessel heat transfer characteristics and extrapolation of laboratory data, a jacket temperature was defined that maximised heat transfer without risk of freezing the batch contents at the vessel wall. Further scenarios with improved mass and heat transfer can be modelled, enabling the process engineers to confidently define the equipment requirements for future campaigns.
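The interplay between the two transport limits can be illustrated with a short, self-contained sketch. This is not the DynoChem model itself: every number in it (kLa, UA, heat of reaction, chlorine solubility, batch size, jacket temperature) is a hypothetical placeholder, and only the 15°C batch limit and the 3-equivalent requirement come from the text.

```python
# Illustrative sketch only: a lumped semi-batch model of a fast, exothermic
# gas-liquid reaction. All parameter values are invented, not AstraZeneca data.

def simulate(kLa, UA, T_jacket=-5.0, dt=1.0, t_end=24 * 3600):
    """Euler integration; chlorine feed throttled by the slower of the
    gas-liquid mass transfer limit and the jacket heat removal limit."""
    V = 1.0                # liquid volume, m3
    c_sat = 80.0           # Cl2 saturation concentration, mol/m3 (assumed)
    n_req = 3000.0         # mol Cl2 required (3 equivalents, per the mechanism)
    dHr = -150e3           # reaction enthalpy, J/mol (assumed)
    rho_cp = 1.8e6         # volumetric heat capacity, J/(m3 K) (assumed)
    T, T_max = 5.0, 15.0   # batch temperature and its process limit, degC
    absorbed, t = 0.0, 0.0
    while absorbed < n_req and t < t_end:
        r_mass = kLa * c_sat * V                   # mol/s if transfer-limited
        r_heat = UA * (T_max - T_jacket) / (-dHr)  # mol/s if heat-limited
        r = min(r_mass, r_heat)
        Q = r * (-dHr) - UA * (T - T_jacket)       # net heat input, W
        T = min(T + Q / (rho_cp * V) * dt, T_max)  # addition holds T <= 15 degC
        absorbed += r * dt
        t += dt
    return t / 60.0                                # completion time, min

t_base = simulate(kLa=0.01, UA=2000.0)   # base case: heat transfer limited
t_fast = simulate(kLa=0.05, UA=6000.0)   # improved heat and mass transfer
print(f"base: {t_base:.0f} min, improved: {t_fast:.0f} min")
```

Running the sketch reproduces the pattern reported above: with the base-case coefficients the addition is heat-transfer limited and slow, while improving both coefficients shortens the addition substantially.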

[Figure: DynoChem model output for the laboratory-scale (LSL) batch, plotting 1-HCl (mol), 2-batch temperature (°C), 3-RSR (mol), 4-Int_1 (mol), 5-Int_2 (mol) and 6-product (mol) against time, 0-200 min.]

Figure 1. Typical model output from DynoChem plotted in Microsoft Excel. Although this graph is fairly complicated, it shows the amount of information that can be gleaned from a single model, including crude predictions for the depletion of reactants and the formation of products. At this scale the model shows a peak temperature of 14.7°C and a chlorine addition time of three hours, in very close agreement with plant experience. Figure 2 shows the effect of improving the heat and mass transfer in this system. The figures used are arbitrary, but show that improving mass and heat transfer can significantly reduce the reaction time, allowing the process engineer to quickly focus attention on the key parameters for improving the process.


[Figure: model output with improved heat and mass transfer, plotting the same species (mol) and batch temperature (°C) against time, 0-200 min.]

Figure 2. Chlorination model with improved heat/mass transfer.

3. Case Study 2 - Modelling Distillation in the Work-up of Compound X
Distillation is widely used in pharmaceutical manufacture to remove components from a system to an acceptable level. The process engineer is able to assist in the selection of the optimum solvent system, providing vapour-liquid equilibrium information and predictions of the efficiency of the separation. A recent example highlighted the effectiveness of CAPE tools in the design and prediction of distillation performance. The solvent used for the chlorination of compound X is acetic acid. However, cooling crystallisation from acetic acid gave poor physical form, making isolation and drying problematic. Alternative solvents were investigated and crystallisations from toluene were found to give excellent physical form. Because the product is prone to thermal degradation, a reduced-pressure distillation was required to perform the solvent swap. The toluene-acetic acid system is reasonably well understood and a wealth of data has been published. The Detherm database (www.dechema.de) is a valuable process engineering tool and a source of credible physical property data; where published data is not available, the principles described below can be applied to almost any system. Vapour-liquid equilibrium (VLE) data was modelled using SMSWin and Aspen Properties and, in this case, validated against published data. Property prediction software (ProPred) was used to model the properties of compound X using group contribution methods, allowing the effect of the compound upon the vapour-liquid equilibrium to be investigated.
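To make the kind of VLE calculation referred to above concrete, the sketch below computes an isothermal bubble point for a toluene / acetic acid mixture using the van Laar activity model and the Antoine equation. The Antoine constants are approximate literature values, but the van Laar parameters are invented placeholders rather than the fitted Detherm data used in the actual study, and vapour-phase dimerization of acetic acid is ignored.

```python
import math

A12, A21 = 0.6, 0.9              # hypothetical van Laar parameters (assumed)

def gamma(x1):
    """Van Laar activity coefficients for a binary mixture."""
    x2 = 1.0 - x1
    g1 = math.exp(A12 * (A21 * x2 / (A12 * x1 + A21 * x2)) ** 2)
    g2 = math.exp(A21 * (A12 * x1 / (A12 * x1 + A21 * x2)) ** 2)
    return g1, g2

def psat(A, B, C, T):
    """Antoine equation; pressure in mmHg, T in degC."""
    return 10 ** (A - B / (C + T))

# Approximate literature Antoine constants (mmHg / degC basis)
TOLUENE = (6.954, 1344.8, 219.48)
ACETIC = (7.388, 1533.3, 222.31)

def bubble_pressure(x_tol, T):
    """Total pressure and vapour mole fraction of toluene at the bubble point."""
    g_tol, g_ac = gamma(x_tol)
    p_tol = g_tol * x_tol * psat(*TOLUENE, T)
    p_ac = g_ac * (1.0 - x_tol) * psat(*ACETIC, T)
    P = p_tol + p_ac
    return P, p_tol / P

P, y_tol = bubble_pressure(0.5, 60.0)    # a reduced-pressure operating point
print(f"P = {P:.0f} mmHg, y_toluene = {y_tol:.2f}")
```

With real fitted parameters, the same calculation gives the x-y data that the batch distillation models below consume.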

The VLE data was then fed into the batch distillation modellers available at AstraZeneca (SMSWin and Aspen Batch Frac) to predict the composition of the distillate over time.

[Figure residue: Aspen Plus flowsheet showing the still with distillation blocks 1STDIST, 2NDDIST and 3RDDIST, intermediate charges 2NDCHAR and 3RDCHAR, and the PRODUCT stream.]
Figure 3. Schematic representation of the batch distillation model (Aspen Plus). This particular model allows up to three charges to be made to the still: in this case a charge to define the initial pot composition and two intermediate toluene charges. The model is set up to distil to a pre-defined pot volume between charges. For the compound X system it was found that the concentration of acetic acid in the pot had fallen to an acceptable level following the third distillation. Although this is a relatively simple model, it allows the process engineer to screen suitable solvents for all solvent swaps, as well as provide the process chemists with optimum operating conditions. Laboratory development time can thus be focused on other issues.
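The put-and-take behaviour of the model in Figure 3 can be mimicked with a simple Rayleigh-type calculation: three constant-volume distillations with intermediate toluene charges, assuming a constant relative volatility of acetic acid over toluene. The value of alpha and all charge sizes are invented for illustration, not compound X data.

```python
# Toy solvent-swap sketch: successive strip-and-recharge distillations from a
# perfectly mixed still with equilibrium vapour (constant alpha assumed).

def rayleigh_strip(n_acid, n_tol, n_distil, alpha=1.8, steps=10000):
    """Remove n_distil total moles as equilibrium vapour from the pot."""
    dn = n_distil / steps
    for _ in range(steps):
        x = n_acid / (n_acid + n_tol)
        y = alpha * x / (1.0 + (alpha - 1.0) * x)  # vapour fraction of acid
        n_acid -= y * dn
        n_tol -= (1.0 - y) * dn
    return n_acid, n_tol

acid, tol = 40.0, 60.0               # initial pot contents, mol (invented)
for charge in range(3):
    acid, tol = rayleigh_strip(acid, tol, n_distil=50.0)
    tol += 50.0                      # intermediate toluene charge back to volume
    print(f"after distillation {charge + 1}: acid fraction = {acid/(acid+tol):.4f}")
```

Even with this crude model, the acid fraction in the pot falls sharply with each charge-and-distil cycle, matching the qualitative finding that three distillations sufficed.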

4. Case Study 3 - Batch Simulation
Aspen Batch Plus is used predominantly by Process Engineers to store process data from the various projects currently under development. It enables the Process Engineer to compile all the data pertinent to a process in a central location, thereby generating a model / simulation of the process. At AstraZeneca, simulations are developed early in the project life-cycle with the expectation that they will grow as more information becomes available. This aids technical transfer, that is, the transfer of all process information from one facility / site to another. The tool allows simple "scale-up" or capacity calculations to be performed and generates a complete mass balance of the chemistry under consideration. Aspen Batch Plus is also used as a scheduling tool to identify potential bottlenecks. An Aspen Batch Plus model was developed for a product that was recently transferred to full-scale manufacture. The process was to be run in a new facility whose design was copied from an existing plant. The cycle time predictions from the model showed that this design was not capable of producing the required amount of product and that further isolation and drying equipment was necessary. The model was also used to predict VOC emission data for the initial abatement design. Outputs from the model were further used to describe the process flow, assist generation of batch records and estimate effluent stream compositions.
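The cycle-time / bottleneck reasoning behind such a model can be reduced to a toy calculation: with dedicated equipment and overlapping batches, the minimum cycle time of a campaign is set by the slowest stage. The stage names and durations below are invented, not data from the transferred process.

```python
# Toy bottleneck check: equipment item -> occupation time per batch, h (invented)
stages = {
    "reactor": 16.0,
    "crystalliser": 12.0,
    "filter-dryer": 30.0,   # candidate bottleneck: isolation plus drying
}

cycle_time = max(stages.values())          # overlapping batches: slowest stage
bottleneck = max(stages, key=stages.get)
batches_per_week = 168.0 / cycle_time

print(f"bottleneck: {bottleneck}, cycle time {cycle_time:.0f} h, "
      f"{batches_per_week:.1f} batches/week")

# Debottlenecking option of the kind flagged by the model: duplicate the
# limiting isolation/drying equipment.
stages["filter-dryer"] /= 2                # two filter-dryers in parallel
print(f"with a second filter-dryer: {max(stages.values()):.0f} h cycle")
```

A full simulator layers a complete mass balance and shared-equipment scheduling on top of this, but the cycle-time logic is the same.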


Other Aspen Batch Plus models developed earlier in the project life-cycle have been used to identify potential throughput issues in the technology transfer plant, to estimate the amount of iodo-contaminated effluent from a process, and to compare manufacturing routes so that potential scale-up problems can be considered in decision making. The models are also used to evaluate potential manufacturing facilities, not only for full-scale manufacture but also for development campaigns.

5. Conclusions
This paper sets out to show, within the vagaries of the pharmaceutical industry, the CAPE tools that can be used in process development. There are a number of issues to be resolved when developing a process: not just process-specific questions, but business (economic) drivers, safety issues, environmental concerns, moral and ethical issues and regulatory requirements. In light of these varied challenges, we have found that no one CAPE package covers every aspect of development and that a combination of specific tools best suits the requirements of the pharmaceutical industry. The main theme of this paper is that CAPE tools need to be used in conjunction with more "primitive", but no less valuable, tools such as laboratory work. It is impossible to fully understand and appreciate the issues and challenges of a process by computational modelling alone. The model provides a valuable insight into the process and identifies the parameters that require more detailed study. This increased understanding means improved process development and more robust processes, helping to deliver products to the market as quickly and cost-effectively as possible. The development of a pharmaceutical process can be key to its success, especially during the early stages of a project. When processes are not fully developed and understood, problems are encountered in technology transfer, often resulting in "fire-fighting" as issues arise. Another important benefit is the potential cost saving during process development, allowing more products to be developed for less resource. The identification of issues before campaign manufacture also reduces lost time on plant. CAPE tools have enabled the process engineers at AstraZeneca to improve their understanding of the processes being developed.
They have also aided cross-functional collaboration between process engineers and process chemists, in particular when considering transfer of a process from the laboratory to the Pilot Plant. Ultimately this enables us to develop cost effective robust processes with minimal SHE impact.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


Developing Phenomena Models from Experimental Data
Niels Rode Kristensen¹, Henrik Madsen² and Sten Bay Jørgensen¹
¹Department of Chemical Engineering, ²Informatics and Mathematical Modelling, Technical University of Denmark, DK-2800 Lyngby, Denmark

Abstract A systematic approach for developing phenomena models from experimental data is presented. The approach is based on integrated application of stochastic differential equation (SDE) modelling and multivariate nonparametric regression, and it is shown how these techniques can be used to uncover unknown functionality behind various phenomena in first engineering principles models using experimental data. The proposed modelling approach has significant application potential, e.g. for determining unknown reaction kinetics in both chemical and biological processes. To illustrate the performance of the approach, a case study is presented, which shows how an appropriate phenomena model for the growth rate of biomass in a fed-batch bioreactor can be inferred from data.

1. Introduction
For most chemical and biological processes, first principles engineering methods can be applied to formulate balance equations that essentially provide the skeleton of an ordinary differential equation (ODE) model of the process. What often remains to be determined, however, are the functional relations in the constitutive equations for phenomena such as reaction rates and heat and mass transfer rates. These phenomena models are often difficult to determine, because finding a parametric expression with an appropriate structure to match the available experimental data is essentially a trial-and-error procedure with limited guidance, and therefore potentially time-consuming. In the present paper a more systematic procedure is proposed. The key idea is to exploit the close connection between ODE models and SDE models to develop a methodology for determining the proper structure of the functional relations directly from the experimental data. More specifically, the new procedure allows important trends and dependencies to be determined visually, without making any prior assumptions, and in turn allows appropriate parametric expressions to be inferred. The proposed procedure is a tailored application of the grey-box modelling approach to process model development proposed by Kristensen et al. (2002b), within which specific model deficiencies can be pinpointed and their structural origin uncovered to improve the model. The remainder of the paper is organized as follows: in Section 2 the details of the proposed procedure are outlined; in Section 3 a case study illustrating the performance of the procedure is presented; and in Section 4 the conclusions of the paper are given.


[Figure residue: block diagram linking the tasks "ODE model formulation", "SDE model formulation", "State estimation" and "Nonparametric modelling" with the inputs and outputs "First engineering principles", "Experimental data" and "Estimate of functional relation".]

Figure 1. The proposed procedure for developing phenomena models. The boxes in grey illustrate tasks and the boxes in white illustrate inputs and outputs.

2. Methodology
The proposed procedure is shown in Figure 1 and consists of five basic steps. First, a standard ODE model is derived from first engineering principles and the constitutive equations containing unknown functional relations are identified. The ODE model is then translated into a stochastic state space model consisting of a set of SDE's describing the dynamics of the system in continuous time and a set of discrete time measurement equations signifying how the available experimental data was obtained. A major difference between ODE's and SDE's is the inclusion of a stochastic term in the latter, which allows uncertainty to be accommodated and which, if the constitutive equations of interest are reformulated as additional state equations, allows estimates of the corresponding state variables to be computed from the experimental data. The specific approach used for this purpose involves parameter estimation and subsequent state estimation by means of methods based on the extended Kalman filter (EKF). By subsequently applying methods for multivariate nonparametric regression to appropriate subsets of the state estimates, visual determination of important trends and dependencies is facilitated, in turn allowing appropriate parametric expressions for the unknown functional relations in the constitutive equations to be inferred. More details on the individual steps of the proposed procedure are given in the following.

2.1. ODE model formulation
In the first step of the procedure, a standard ODE model is derived and the constitutive equations containing unknown functional relations are identified. Deriving an ODE model from first engineering principles is a standard discipline for most chemical and process systems engineers and in the general case gives rise to a model of the following type:

dx_t/dt = f(x_t, u_t, r_t, t, θ)   (1)

where t ∈ R is time, x_t ∈ R^n is a vector of balanced quantities or state variables, u_t ∈ R^m is a vector of input variables and θ ∈ R^p is a vector of possibly unknown parameters, and where f(·) ∈ R^n is a nonlinear function. In addition to (1), a number of constitutive equations for various phenomena are often needed, i.e. equations of the following type:

r_t = φ(x_t, u_t, θ)   (2)

where r_t is a given phenomenon and φ(·) ∈ R is the nonlinear function of the state and input variables needed to describe it. This function is, however, often unknown and must therefore somehow be determined from experimental data. In the context of the systematic procedure proposed in the present paper, the first step towards determining the proper structure of φ(·) is to assume that this function, and hence r_t, is constant.

2.2. SDE model formulation
In the second step of the procedure the ODE model is translated into a stochastic state space model with r_t as an additional state variable. This is straightforward, as it can simply be done by replacing the ODE's with SDE's and adding a set of discrete time measurement equations, which yields a model of the following type:

dx_t* = f*(x_t*, u_t, t, θ) dt + σ*(u_t, t, θ) dω_t*   (3)
y_k = h(x_k*, u_k, t_k, θ) + e_k   (4)

where t ∈ R is time, x_t* = [x_t^T r_t]^T ∈ R^(n+1) is a vector of state variables, u_t ∈ R^m is a vector of input variables, y_k ∈ R^l is a vector of output variables, θ ∈ R^p is a vector of possibly unknown parameters, f*(·) ∈ R^(n+1), σ*(·) ∈ R^((n+1)×(n+1)) and h(·) ∈ R^l are nonlinear functions, {ω_t*} is an (n+1)-dimensional standard Wiener process and {e_k} is an l-dimensional white noise process with e_k ∈ N(0, S(u_k, t_k, θ)). The first term on the right-hand side of the SDE's in (3) is called the drift term and is a deterministic term, which can be derived from the term on the right-hand side of (1) as follows:

f*(x_t*, u_t, t, θ) = [ f(x_t, u_t, r_t, t, θ) ; 0 ]   (5)

where the zero is due to the assumption of constant r_t. The second term on the right-hand side of the SDE's in (3) is called the diffusion term. This is a stochastic term included to accommodate uncertainty due to e.g. approximation errors or unmodelled phenomena, and is therefore the key to subsequently determining the proper structure of φ(·). A more detailed account of the theory and application of SDE's is given by Øksendal (1998).

2.3. Parameter estimation
In the third step of the proposed procedure the unknown parameters of the model in (3)-(4) are estimated from available experimental data, i.e. data in the form of a sequence of measurements y_0, y_1, ..., y_k, ..., y_N. The solution to (3) is a Markov process, and an estimation scheme based on probabilistic methods, e.g. maximum likelihood (ML) or maximum a posteriori (MAP), can therefore be applied. A detailed account of one such scheme, which is based on the EKF, is given by Kristensen et al. (2002a).

2.4. State estimation
In the fourth step of the procedure, state estimates are computed to facilitate determination of the proper structure of φ(·) by means of subsequent multivariate nonparametric regression. Using the model in (3)-(4) and the parameter estimates obtained in the previous step, state estimates x̂_(k|k), k = 0, ..., N, can be obtained by applying the EKF once again using the same experimental data. In particular, since r_t is included as an additional


state variable in this model, estimates r̂_(k|k), k = 0, ..., N, can be obtained, which in turn facilitates application of multivariate nonparametric regression to provide estimates of possible functional relations between r_t and the original state and input variables.

2.5. Nonparametric modelling
In the fifth step of the procedure the state estimates computed in the previous step are used to determine the proper structure of φ(·) by means of multivariate nonparametric regression. Several such techniques are available, but in the context of the proposed procedure additive models (Hastie and Tibshirani, 1990) are preferred, because fitting such models circumvents the curse of dimensionality, which tends to render nonparametric regression infeasible in higher dimensions, and because results obtained with such models are particularly easy to visualize, which is important. Additive models are nonparametric extensions of linear regression models and are fitted by using a training data set of observations of several predictor variables X_1, ..., X_n and a single response variable Y to compute a smoothed estimate of the response variable for a given set of values of the predictor variables. This is done by assuming that the contributions from each of the predictor variables are additive, which allows them to be fitted nonparametrically using the backfitting algorithm (Hastie and Tibshirani, 1990). The assumption of additive contributions does not necessarily limit the ability of additive models to reveal non-additive functional relations involving more than one predictor variable, since, by proper processing of the training data set, functions of more than one predictor variable, e.g. X_1 X_2, can be included as predictor variables as well (Hastie and Tibshirani, 1990). Using additive models, the variation in r̂_(k|k), k = 0, ..., N, can be decomposed into the variation that can be attributed to each of the original state and input variables, and the result can be visualized by means of partial dependence plots with associated bootstrap confidence intervals (Hastie et al., 2001). In this manner, it may be possible to reveal the true structure of φ(·) and subsequently determine an appropriate parametric expression for the revealed functional relation.

Remark. Once an appropriate parametric expression for the unknown functional relation has been determined, the parameters of this expression should be estimated from the experimental data and the quality of the resulting model should subsequently be evaluated by means of cross-validation. A discussion of methods for evaluating the quality of a model with respect to its intended application is given by Kristensen et al. (2002b).

Remark. A key advantage of the proposed procedure is that functional relations involving unmeasured variables can easily be determined as well if certain observability conditions are fulfilled, e.g. functional relations between reaction rates, which can seldom be measured directly, and concentrations of various species, which may also be unmeasurable.
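The backfitting algorithm referred to above can be sketched in a few lines for a toy additive model y = f1(X1) + f2(X2) + noise, using a crude k-nearest-neighbour moving average as the smoother. This only illustrates the idea in Hastie and Tibshirani (1990); it is not the smoothing machinery one would use in practice.

```python
import random

def smooth(x, r, k=15):
    """Crude kNN smoother: average of the partial residuals r at the k
    observations whose predictor value is nearest to each x[i]."""
    out = []
    for xi in x:
        idx = sorted(range(len(x)), key=lambda j: abs(x[j] - xi))[:k]
        out.append(sum(r[j] for j in idx) / k)
    return out

random.seed(0)
n = 200
x1 = [random.uniform(-2, 2) for _ in range(n)]
x2 = [random.uniform(-2, 2) for _ in range(n)]
y = [a * a + 0.5 * b + random.gauss(0, 0.1) for a, b in zip(x1, x2)]

# Backfitting: cycle through the predictors, smoothing the partial residuals.
mean_y = sum(y) / n
f1, f2 = [0.0] * n, [0.0] * n
for _ in range(10):
    f1 = smooth(x1, [yi - mean_y - v for yi, v in zip(y, f2)])
    f2 = smooth(x2, [yi - mean_y - v for yi, v in zip(y, f1)])
# f1 now traces the quadratic contribution of x1 (up to a constant) and f2
# the linear contribution of x2, ready for visual inspection.
```

Plotting f1 against x1 and f2 against x2 is exactly the kind of partial dependence view used in the procedure to suggest a parametric form.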

3. Case Study: Determining the Growth Rate in a Fed-batch Bioreactor To illustrate the performance of the proposed procedure, a simple simulation example is considered in the following. The process considered is a fed-batch bioreactor, where the true model used to simulate the process is given as follows:

dX/dt = μ(S) X − F X / V
dS/dt = −μ(S) X / Y + F (S_F − S) / V   (6)
dV/dt = F

where X is the biomass concentration, S is the substrate concentration, V is the volume of the reactor, F is the feed flow rate, Y is the yield coefficient of biomass, S_F is the feed concentration of substrate, and μ(S) is the biomass growth rate, which is characterized by Monod kinetics and substrate inhibition, i.e.:

μ(S) = μ_max S / (K2 S^2 + S + K1)   (7)

where μ_max, K1 and K2 are kinetic parameters. Simulated data sets from two batch runs are generated by perturbing the feed flow rate along a pre-determined trajectory and subsequently adding Gaussian measurement noise to the appropriate variables (see below). Using these data sets, and starting from preliminary balance equations where the biomass growth rate is assumed to be unknown, it is illustrated how the proposed procedure can be used to visually determine the proper structure of μ(S). In the context of the first step of the proposed procedure, an ODE model corresponding to (6) has thus been formulated and the constitutive equation for μ(S) has been identified as containing an unknown functional relation. The first step towards determining this relation is to assume that μ(S) is constant, i.e. μ(S) = μ, and translate the ODE model into a stochastic state space model with μ as an additional state variable, i.e.:

d[X_t; S_t; V_t; μ_t] = [ μ_t X_t − F X_t / V_t ; −μ_t X_t / Y + F (S_F − S_t) / V_t ; F ; 0 ] dt + diag(σ11, σ22, 0, σ44) dω_t*   (8)
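The idea behind the case study can be sketched end-to-end in a few lines: simulate the fed-batch balances with the true μ(S) of (7), then pretend μ is unknown and recover it pointwise from the biomass balance. A crude finite-difference rearrangement of the balance stands in for the paper's EKF state estimates, and all parameter values below are invented; a scatter of the recovered μ against S would reveal the Monod-with-inhibition shape.

```python
mu_max, K1, K2 = 1.0, 0.03, 0.5    # "true" kinetic parameters (invented)
Y, S_F, F = 0.5, 10.0, 0.05        # yield, feed concentration, feed rate

def mu(S):
    """Monod kinetics with substrate inhibition, as in eq. (7)."""
    return mu_max * S / (K2 * S ** 2 + S + K1)

# Euler simulation of the balances (6), constant feed for simplicity.
X, S, V, dt = 1.0, 1.0, 1.0, 0.005
traj = []
for _ in range(1000):
    traj.append((X, S, V))
    dX = mu(S) * X - F * X / V
    dS = -mu(S) * X / Y + F * (S_F - S) / V
    X, S, V = X + dX * dt, S + dS * dt, V + F * dt

# Pretend mu is unknown: rearrange the biomass balance as
# mu = (dX/dt)/X + F/V, a crude stand-in for the EKF estimates of the paper.
est = [(S0, (X1 - X0) / (dt * X0) + F / V0)
       for (X0, S0, V0), (X1, _, _) in zip(traj, traj[1:])]
# Plotting the recovered values against S would trace the curve of eq. (7).
```

In the paper the same information is extracted from noisy measurements via the EKF, which is what makes the approach practical; the noise-free sketch only shows why the recovered state estimates carry the shape of μ(S).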

[Figure residue: capacity-investment bar charts for the 3PROD problem (time t0-t4, years) and the 6PROD problem (time t0-t5, years).]

Figure 2: Investment Decisions Calendar (Site B: black, Site D: grey).


The proposed hierarchical algorithm was tested against the single-level detailed MILP model solved in the full variable space; the results for the two illustrative examples are given in Table 1. Note that the reported CPU time for the hierarchical algorithm is the combined CPU time of the aggregate MILP model plus the summation of the CPU times over all scenario MILPs. The hierarchical algorithm clearly outperforms the single-level MILP in terms of computational effort, reducing the CPU time by orders of magnitude, while the quality of the derived solution compares favourably with that of the single-level detailed MILP.

Table 1: Computational results.

                3PROD Problem               6PROD Problem
Sol. Approach   Single-level  Hierarchical  Single-level                Hierarchical
Obj. Function   223           223           130 (integer solution       283
                                            found after 10800 s)
CPU             291 s         4 s + 2 s     10800 s                     62 s + 88 s

6. Concluding Remarks
In this paper, a two-stage, multi-scenario, mixed integer linear programming (MILP) model was developed to support a holistic approach to product portfolio and multi-site capacity planning under uncertainty, while considering the trading structure of the pharmaceutical company. A hierarchical algorithm was then proposed for the solution of the resulting large-scale MILP problem, based on decoupling the strategic and operational decision-making levels. Without compromising solution quality, significant savings in computational effort were achieved by employing the proposed algorithm on two illustrative examples. Current work focuses on testing the mathematical framework on larger examples and investigating alternative solution strategies.
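The decoupling idea can be illustrated with a deliberately tiny two-stage example: fix the strategic (investment) binaries from an aggregate expected-value problem over all scenarios, after which each demand scenario decouples into an independent operational subproblem. The sites, costs and demand scenarios below are invented, and brute-force enumeration stands in for the MILP solver.

```python
from itertools import product

CAP_COST = {"A": 5.0, "B": 8.0}    # investment cost per site (invented)
CAP_SIZE = {"A": 40.0, "B": 70.0}  # capacity installed if built (invented)
MARGIN = 1.0                       # profit per unit sold
SCENARIOS = [(0.3, 50.0), (0.4, 80.0), (0.3, 110.0)]  # (probability, demand)

def second_stage(build, demand):
    """Operational profit for one scenario, given frozen investment decisions."""
    capacity = sum(CAP_SIZE[s] for s in CAP_SIZE if build[s])
    return MARGIN * min(capacity, demand)

def expected_profit(build):
    invest = sum(CAP_COST[s] for s in CAP_COST if build[s])
    return sum(p * second_stage(build, d) for p, d in SCENARIOS) - invest

# Step 1 (aggregate level): choose the binary investment decisions.
best = max(({"A": a, "B": b} for a, b in product([0, 1], repeat=2)),
           key=expected_profit)
# Step 2 (scenario level): with `best` frozen, each scenario is an
# independent (here trivial) operational subproblem.
print(best, round(expected_profit(best), 2))   # -> {'A': 1, 'B': 1} 67.0
```

In the real algorithm both levels are MILPs, but the structure is the same: the first-stage solve fixes the complicating binaries, and the scenario subproblems can then be solved separately (even in parallel), which is where the order-of-magnitude CPU savings come from.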

7. References
Brooke, A., Kendrick, D., Meeraus, A. and Raman, R., 1998, GAMS: A User's Guide, GAMS Development Corporation, Washington.
Gatica, G., Shah, N. and Papageorgiou, L.G., 2001, In Proc. ESCAPE-11, Kolding, 865.
Maravelias, C.T. and Grossmann, I.E., 2001, Ind. Eng. Chem. Res. 40, 6147.
Papageorgiou, L.G., Rotstein, G.E. and Shah, N., 2001, Ind. Eng. Chem. Res. 40, 275.
Rotstein, G.E., Papageorgiou, L.G., Shah, N., Murphy, D.C. and Mustafa, R., 1999, Comput. Chem. Eng. S23, S883.
Subramanian, D., Pekny, J. and Reklaitis, G.V., 2000, Comput. Chem. Eng. 24, 1005.

8. Acknowledgments The authors gratefully acknowledge financial support from the European Union Programme GROWTH under Contract No. GlRD-CT-2000-00318, "VIP-NET: Virtual Plant-Wide Management and Optimisation of Responsive Manufacturing Networks".




A Multiagent-based System Model of Supply Chain Management for the Traditional Chinese Medicine Industry
Qian Li and Ben Hua
The Key Lab of Enhanced Heat Transfer and Energy Conservation, Ministry of Education, South China University of Technology, Guangzhou 510640, China, email: lq001@371.net

Abstract This paper describes an ongoing effort to apply agent-oriented modelling techniques to the modelling and optimisation of the supply chain of the Traditional Chinese Medicine (TCM) industry. The paper first analyses the specific nature of the TCM supply chain and, on this basis, configures a number of agents that cooperate with each other to support TCM supply chain decision makers in improving performance. Scenarios such as demand pattern and market changes, which are supported by the agents in the prototype system under development, are considered. Some implementation issues are also discussed.

1. Introduction
During the last few decades, Traditional Chinese Medicine (TCM), with a history of more than 5,000 years, has gained increasing attention because it is natural and safe, with fewer side effects. The TCM industry is undergoing rapid growth, both domestically and overseas. To compete in the changing market, TCM enterprises are attempting to improve both their production processes and their supply chain management. A significant amount of effort has been devoted to developing techniques and software systems to help improve the performance of supply chains. For example, Fox et al. (1999) proposed an agent-oriented software architecture for managing supply chains, and Garcia-Flores et al. (2000, 2002) built an agent-based system to simulate the supply chain of a coatings plant. This paper describes an ongoing effort to apply agent-oriented modelling techniques to the modelling and optimization of the TCM supply chain. The special characteristics of the TCM industry are considered in designing the system, in which several

business activities / processes, including planning, procurement and inventory management, are performed by a number of software agents, each performing unique functions. These agents cooperate with each other through information sharing and collaborative

decision making, so as to improve the performance of the TCM supply chain.

2. Supply Chain of a TCM Enterprise
TCM supply chains have some distinctive characteristics in comparison with those of other industries. For example, some TCM plants, like the one taken as the background of this work, produce a massive number of product types, up to several hundred, which makes planning and managing production and inventory difficult. The raw materials of a TCM plant are mostly natural, which causes seasonal and locational variation in quality and cost, as well as difficulties in storage. Moreover, the production process is often long and time-consuming, and not designed to be flexible enough to meet the dynamic demand of the supply chain. All of this must be considered when managing TCM supply chains. Among these difficulties, TCM enterprises have found the most challenging to be properly planning production and managing materials in their supply chain under demand and supply variation. This is also key to lowering inventory and shortening delivery times, which reduces the supply chain cost. At present, departments in the TCM plant such as inventory, procurement and production lack cooperation and information sharing; each makes its own decisions based on the local information available. Poor decisions result in poor performance. To improve this, an integrated supply chain system for the TCM industry is needed which allows business processes to share information and knowledge. The system should also facilitate coordination and collaboration between components to achieve optimal planning and material management for the TCM supply chain.

3. System Design
The development of agent technology in recent years has proven an effective approach to solving supply chain management problems. In this work we propose a multiagent-based system in which business processes are performed by autonomous software agents. The following subsections identify these business processes and configure a number of software agents to support them.


Table 1. Agents in the proposed system.

Planning agent: Performs demand forecasting; uses linear programming to perform aggregate planning and decide production and material requirements, etc.
Production management agent: Converts raw material into products according to the production plan.
Inventory management agent: Receives raw material from the supplier agent; delivers products to the customer agent according to accepted orders; supervises the raw material inventory and informs the procurement agent when replenishment is needed.
Procurement agent: Decides procurement time and quantity; publishes raw material requirements to supplier agents; receives quotations from supplier agents; selects the appropriate supplier and places the order.
Order management agent: Collects orders from the customer agent; queries and decides whether an order will be accepted.
Supplier agent: Receives inquiries from the procurement agent and sends quotations back; fulfills contracts.
Customer agent: Places orders with the order management agent; receives deliveries.
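The cooperation pattern in Table 1 can be sketched in code. This is a hypothetical illustration, not the authors' implementation: the message bus, agent names and all quantities are invented, and only the inventory-procurement replenishment loop from the table is shown.

```python
# Hypothetical sketch of two cooperating agents from Table 1, exchanging
# messages over a shared registry ("bus"). All names and numbers invented.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    kind: str          # e.g. "replenish", "delivery"
    payload: dict

class Agent:
    def __init__(self, name, bus):
        self.name, self.bus = name, bus
        bus[name] = self                      # register on the shared bus
    def send(self, to, kind, payload):
        self.bus[to].receive(Message(self.name, kind, payload))
    def receive(self, msg):
        raise NotImplementedError

class InventoryAgent(Agent):
    def __init__(self, name, bus, stock, safety_level):
        super().__init__(name, bus)
        self.stock, self.safety_level = stock, safety_level
    def consume(self, qty):
        self.stock -= qty
        if self.stock < self.safety_level:    # continuous review policy
            self.send("procurement", "replenish",
                      {"qty": self.safety_level * 2 - self.stock})
    def receive(self, msg):
        if msg.kind == "delivery":
            self.stock += msg.payload["qty"]

class ProcurementAgent(Agent):
    def receive(self, msg):
        # A real agent would inquire suppliers and compare quotations first.
        if msg.kind == "replenish":
            self.send("inventory", "delivery", {"qty": msg.payload["qty"]})

bus = {}
inv = InventoryAgent("inventory", bus, stock=100, safety_level=40)
ProcurementAgent("procurement", bus)
inv.consume(70)   # stock falls to 30, triggering an automatic replenishment
```

After the consumption event the inventory agent ends up restocked above its safety level without any central coordinator, which is the cooperation style the table describes.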

3.1. Identification of agents
Based on business process modelling, a number of software agents have been identified, as listed in Table 1. Each agent provides the functions needed to conduct its business processes.
3.2. Multiagent system of the TCM supply chain
Figure 1 shows how these software agents work together to manage a TCM supply chain. In the figure, software agents are represented by rounded rectangles. The links between rectangles represent information flows (thinner lines) and material flows (thicker lines) between agents, and the arrows indicate the direction of each flow. Three main processes are identified in our system.
Order fulfillment process. The order fulfillment process begins when a customer agent places an order with the order management agent (OMA); the order typically contains

Fig. 1. A general model of a multiagent-based supply chain system. Rounded rectangles denote the agents (planning, procurement, order management, production, inventory, and the supplier and customer agents); thin lines denote information flows and thick lines material flows.

information such as product type, quality, quantity and desired delivery time. The OMA then queries the planning agent about available capacity to decide whether the order can be accepted. If the order is accepted, the production agent arranges production and the inventory agent delivers the products to the customer before the due date.
Planning. The planning agent performs aggregate planning with the information it collects from other agents. It determines capacity, production and inventory decisions for each period over a planning horizon (one-week periods over a two-month horizon in this work). Based on the plan, the production agent calculates the forecast raw material consumption for the next period and sends this information to the procurement agent, which decides whether replenishment is needed and takes action. The planning agent also dynamically updates the safety inventory level according to the forecast results. The plan is renewed if the situation deviates significantly from the current plan.
Replenishment process. The inventory agent manages raw materials according to a predefined inventory policy (a continuous review policy in this case). When the inventory of a raw material drops below the safety inventory level, it informs the procurement agent to procure new material. The procurement agent then sends inquiries to suppliers and selects the appropriate one from their quotations according to a selection rule. After the order is placed, replenishment is performed by the supplier agents.
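The aggregate-planning step performed by the planning agent can be illustrated with a small linear programme. This sketch is not the authors' model: the three-period horizon, costs, capacity and demands are invented, and `scipy.optimize.linprog` merely stands in for whatever LP solver the system uses.

```python
# Toy aggregate planning LP: choose production p_t and end-of-period
# inventory i_t to meet demand at minimum production + holding cost.
# All numbers are illustrative.
from scipy.optimize import linprog

demand   = [80, 120, 100]       # forecast demand per period
capacity = 110                  # production capacity per period
c_prod, c_hold = 1.0, 0.2       # unit production and holding costs

# Decision vector: [p1, p2, p3, i1, i2, i3]
c = [c_prod] * 3 + [c_hold] * 3
# Inventory balance i_t = i_{t-1} + p_t - d_t, written as
# -p_t - i_{t-1} + i_t = -d_t (with i_0 = 0):
A_eq = [
    [-1,  0,  0,  1, 0, 0],
    [ 0, -1,  0, -1, 1, 0],
    [ 0,  0, -1,  0, -1, 1],
]
b_eq = [-d for d in demand]
bounds = [(0, capacity)] * 3 + [(0, None)] * 3

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
plan = res.x[:3]   # period-2 demand exceeds capacity, so period 1 pre-builds
```

Because the 120-unit demand in period 2 exceeds the 110-unit capacity, the optimal plan produces 90 units in period 1, carrying 10 units of inventory into the peak period.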


4. Prototype System Development
A TCM plant has been chosen as a case study. E-fang Traditional Chinese Medicine Company is a TCM plant located in Guangdong province, China. Its main products are refined TCM powders made using an advanced extraction technique. The production processes use raw materials provided by suppliers distributed across different parts of China, and the products are sold both domestically and overseas. There are over 400 products, which makes it complicated and difficult to plan and manage the production and inventory of both raw materials and finished products. It is also difficult to forecast product demand, as the products have different demand patterns: the demand for some products is highly stochastic while that for others is steady, some products have seasonal demand patterns, and the demand for some products is related to the demand patterns of other products. To address this, we have to design a different forecast model for each product. Since the demands for over 400 products must be met, the plant must carefully plan and manage its production and raw material supply. The characteristics of the raw materials used in TCM also make it difficult to plan raw material procurement. Most raw materials for TCM production are natural, so prices often vary between seasons. This also offers the procurement department opportunities to gain extra profit by properly planning its procurement and inventory. A computer-aided system has been designed to solve these problems. The prototype system contains an order management agent, a planning agent, a production management agent, a procurement agent and an inventory management agent. It also contains 5 customer agents and 4 supplier agents to simulate the real suppliers and distributors. In comparison with other agent-based supply chain systems, special attention has been paid to the planning agent and the procurement agent.
In the planning agent, a special module has been built for demand forecasting; it manages the historical demand data of each product and provides the demand forecast methods. Aggregate planning is performed based on the forecast results to determine capacity, raw material supply and other issues. The procurement agent also maintains the seasonal price pattern of each raw material to determine the best procurement schedule. Forty typical products with different demand patterns have been selected for use in the system. We have collected the sales and procurement data of these products from 2000 to 2002. The demand data from 2000 to 2001 are used as historical data: in the pretreatment process, the demand data of each product are regressed, and the best forecast method and its parameters are automatically determined by comparing forecast accuracy. The price variation of the relevant raw materials is also modelled by data regression. The data collected in 2002 are used to test the feasibility of the proposed system. Early test runs indicate that the system can generate alternatives for optimal decision making. More practical results will be published in subsequent phases of the project.
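The automatic selection of a forecast method by accuracy comparison, as done in the pretreatment step, might look like the following sketch. The candidate models (naive, moving average, exponential smoothing) and the data are illustrative stand-ins, not the plant's actual models.

```python
# Sketch of pretreatment: fit candidate one-step-ahead forecast methods on
# historical data and keep whichever has the lowest error on held-out data.
def naive_last(history):             # forecast = last observation
    return history[-1]

def moving_average(history, w=3):    # forecast = mean of last w observations
    return sum(history[-w:]) / w

def ses(history, alpha=0.4):         # simple exponential smoothing
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def select_method(train, holdout, methods):
    """Return (name, MAE) of the method with the smallest mean absolute
    error when forecasting one step ahead through the holdout period."""
    best, best_mae = None, float("inf")
    for m in methods:
        hist, err = list(train), 0.0
        for actual in holdout:
            err += abs(m(hist) - actual)
            hist.append(actual)      # roll the window forward
        mae = err / len(holdout)
        if mae < best_mae:
            best, best_mae = m, mae
    return best.__name__, best_mae

train   = [100, 102, 98, 101, 99, 103]   # e.g. historical demand, 2000-2001
holdout = [100, 102, 101]                # e.g. test data from 2002
method, mae = select_method(train, holdout, [naive_last, moving_average, ses])
```

In a real deployment the candidate set would include seasonal and regression models per product, but the selection loop stays the same.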

5. Summary
In this paper, we have reported our first step toward developing a multiagent system to simulate the supply chain of the TCM industry. The specific characteristics of the TCM industry have been analyzed; based on this analysis, the main business processes of a TCM enterprise have been identified and a software agent system has been configured, with agents cooperating with each other to improve decision making in TCM supply chain management. Efforts have been made to keep the prototype system model generic and agile so that it can be extended to other cases. Some details of the prototype system development have been discussed.

6. References
Fox, M.S., Barbuceanu, M., Teigen, R., 2000, International Journal of Flexible Manufacturing Systems, 12, 165.
Covington, M.A., 1998, Decision Support Systems, 22, 203.
Garcia-Flores, R., Wang, X.Z., Goltz, G.E., 2000, Computers and Chemical Engineering, 24, 1135.
Garcia-Flores, R., Wang, X.Z., 2002, OR Spectrum, 24, 343.

7. Acknowledgements
The authors would like to acknowledge financial support from the National Science Foundation of China (Highlight Project, No. 79931000), the Major State Basic Research Development Program (973 Project, No. G2000026308), and the Significant Project of the Guangdong Science & Technology Bureau, "Modernization of Chinese Medicine". The authors also thank Dr. M.L. Lu and David Hui for their help in preparing this paper.

European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.


A Tool for Modelling the Impact of Regulatory Compliance Activities on the Biomanufacturing Industry
Ai Chye Lim, Suzy Farid, John Washbrook*, Nigel John Titchener-Hooker
The Advanced Centre for Biochemical Engineering, Department of Biochemical Engineering, University College London, Torrington Place, London WC1E 7JE, UK. *Department of Computer Science, University College London, Gower Street, London WC1E 6BT, UK.

Abstract
Quality control (QC)/quality assurance (QA) and batch documentation form key parts of the in-process validation procedure for any biopharmaceutical. The activities of QC/QA and batch documentation, rather than the manufacturing steps themselves, are usually rate limiting in a plant (Ramsay, 2001). Thus, managing QC/QA and batch documentation becomes a challenge. This paper presents the configuration of a prototype tool for modelling the impact of in-process testing and batch documentation in the biopharmaceutical manufacturing environment. A hierarchical task-oriented approach was employed to provide maximum user flexibility. The impact of employing a range of manufacturing options on financial and technical performance was used in a case study to evaluate the functionalities of the tool. The study aims to investigate the effect of regulatory compliance activities on operational costs and demands on resources. It demonstrates the use of such a software tool for facilitating the early planning of process development and the appropriate allocation of resources to each stage of manufacturing, including in-process testing and documentation. The modelling tool provides realistic indications of the feasibility of different manufacturing options and is ultimately an aid in the decision-making process.

1. Introduction
The biopharmaceutical manufacturing industry is highly regulated, having to comply with stringent rules and procedures. Process controls are crucial to ensure that a specific process will consistently produce a product which meets its predetermined specifications and quality attributes. Adequate in-process testing, thorough process validation and analytical methodologies are required to demonstrate lot-to-lot consistency. The production of new drugs using progressively more complex production technologies, the wide range of biopharmaceutical potencies and stabilities, together with sociological factors, has triggered the development of stringent regulations in the biopharmaceutical industry. With an increasing number of international and national regulations, companies face the additional burden of ensuring and improving quality, both expensive and time-consuming activities.


A major challenge faced by biotechnology companies is the need for efficient in-process testing and documentation systems in order to deliver new drugs to the market quickly. An examination of existing process modelling tools for the simulation of biomanufacturing shows that the pertinent issues of in-process testing and batch documentation are usually not included in any analysis. Farid (2001) reported the implications of these support activities for the manufacturing environment, but no implementation was attempted. This deficiency could vitiate the ability of any model to provide an accurate reflection of the entire manufacturing process and distort the accuracy of resource management and manufacturing cost estimates. Furthermore, including these support activities, which run in parallel with production, is of critical importance in the biopharmaceutical industry in order to identify lead times and bottlenecks in the manufacturing process. As the need for speed rises, the application of simulation modelling tools to address these problems is becoming increasingly important. These factors have driven the need for the addition of these regulatory compliance activities in a bioprocess simulator to reflect the impact of current good manufacturing practices (cGMP) in biopharmaceutical plants.

2. Design Methodology
2.1. Modelling approach
A highly structured bioprocess simulation support tool is proposed in order to achieve rapid process modelling for the manufacture of biological products. The conceptual framework seeks to integrate various aspects, including resource management, mass balance analysis, in-process testing and costing, that each relate to strategic bioprocess decision-making. The tool structure was arranged in a hierarchical manner to represent the key tasks and resources in a manufacturing operation (Farid et al., 2000), but was extended further to incorporate QC/QA activities and batch documentation (Figure 1). As depicted in Figure 1, the ancillary steps (e.g. equipment-preparation, in-process testing and batch documentation) were modelled separately from the product-manufacture steps.

Manufacturing tasks
- Product-manufacture tasks: seed fermentation, production fermentation, chromatography, etc.
- Equipment-preparation tasks: cleaning-in-place, sterilising-in-place, equilibration, etc.
- Regulatory compliance tasks: quality control, quality assurance, batch documentation, etc.

Figure 1. Hierarchical tree representation of manufacturing tasks.

2.2. Implementation
The design, implementation and application of the tool were based upon the production of monoclonal antibodies (Mabs) expressed in mammalian cell culture. The software tool was developed using a task-oriented approach on the platform of the visual simulation package Extend Industry Suite v5 (Imagine That Inc., San Jose). The software tool comprises the operational tasks (e.g. fermentation, cleaning-in-place), the resources required to carry out each task (e.g. equipment, raw materials, utilities, labour) and the resultant process streams from each task. Specific blocks describing the bioprocess steps were coded in Extend and linked to represent the whole process within the manufacturing environment. Each unit operation was simulated as an activity requiring resources, and the same approach was applied to the modelling of equipment-preparation, in-process testing and batch documentation operations. Each process stream was represented as a batched item comprising several components (e.g. media, cells, IgG, buffer). The stream carried the attributes (e.g. mass, volume, density, concentration) of each component, and this information was carried over to the subsequent operating task for mass-balance calculations.
2.3. Key parameters
Simulation of the model requires specification of the maximum availability of all the resources within the plant. These conditions place constraints on the simulation flow based on the availability of resources. User-specified purchase costs or costs per use of the resources were used, or a built-in cost model based on cost-estimating factors for biopharmaceutical process equipment (Remer & Idrovo, 1991). The series of process steps and ancillary operations to manufacture the product, prepare the equipment, test the samples and document the batch were then defined in their respective hierarchical workspaces.
Finally, the factors for mass balance calculations were input to determine the characteristics (i.e. mass, volume) of the process streams from each process step.
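The batched-item representation of a process stream can be sketched as follows. All names, yields and volumes are invented for illustration; the actual tool implements this in Extend, not Python.

```python
# Minimal sketch of a process stream as a batched item whose per-component
# attributes are updated by a unit operation and passed on for mass balance.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    mass_kg: float

@dataclass
class Stream:
    volume_l: float
    components: list

    def concentration(self, name):   # g/L of one component
        mass = next(c.mass_kg for c in self.components if c.name == name)
        return 1000.0 * mass / self.volume_l

def chromatography(feed: Stream, igg_yield=0.9, volume_factor=0.2) -> Stream:
    """Toy unit operation: recovers a fraction of the IgG into a smaller
    eluate volume; all other components are assumed removed."""
    igg = next(c for c in feed.components if c.name == "IgG")
    return Stream(volume_l=feed.volume_l * volume_factor,
                  components=[Component("IgG", igg.mass_kg * igg_yield)])

harvest = Stream(1000.0, [Component("IgG", 0.2), Component("media", 50.0)])
eluate = chromatography(harvest)   # 200 L carrying 0.18 kg IgG
```

Each task reads the incoming stream's attributes and emits an updated stream, which is exactly the carry-over described in the text.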

The feasibility of a given manufacturing option could be determined from an economic point of view, i.e. via the cost of goods (COG) performance metric. The COG model was simulated with cost equations developed for conventional chemical engineering facilities (Sinnott, 1993). In addition, it is important to consider the supplementary costs associated with compliance with cGMP in the biopharmaceutical manufacturing industry. Based on bioprocessing plant data (Francis, Protherics Plc, London, UK), it was possible to predict the cost contributions arising from the in-process testing and documentation steps (Table 1). The miscellaneous materials (e.g. equipment and raw materials) associated with the QC/QA and documentation tasks were related to the utilisation of labour. The costs included taking samples, transferring them to the laboratory, testing, reviewing data and reporting results.

Table 1. COG model for QC/QA and documentation activities.

Quality control/quality assurance labour: f(utilisation)
Batch documentation labour: f(utilisation)
Miscellaneous materials: 10-20% of operating labour time
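A minimal numeric reading of Table 1, with invented labour rates and hours: the only structure taken from the table is that QC/QA and documentation labour costs are functions of utilisation, while miscellaneous materials are a 10-20% fraction of operating labour time.

```python
# Illustrative supplementary-cost calculation for regulatory compliance.
# All rates and hours are invented for the example.
def compliance_cost(qcqa_hours, doc_hours, operating_hours,
                    labour_rate=50.0, misc_fraction=0.15):
    qcqa = qcqa_hours * labour_rate                        # f(utilisation)
    doc = doc_hours * labour_rate                          # f(utilisation)
    misc = misc_fraction * operating_hours * labour_rate   # 10-20% band
    return qcqa + doc + misc

total = compliance_cost(qcqa_hours=120, doc_hours=60, operating_hours=400)
```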


3. Case Study

3.1. Set-up
To evaluate the functionalities of the software tool, the production of monoclonal antibodies (Mabs) using perfusion culture was considered. The example was based on a biopharmaceutical company wishing to investigate how often to pool broth after fermentation, i.e. the pooling interval, given a fixed downstream process scale. The company considered whether to pool at 1, 2, 10, 15 or 30 day intervals. Such decisions require an appropriate balance to be struck between running costs and annual throughput. The software tool was used to model the different scenarios and to compare the results. An overview of the production process for a therapeutic Mab using mammalian cell culture is illustrated in Figure 2. The entire manufacturing process consisted of basic unit operations, including fermentation, affinity chromatography, etc., and the equipment-preparation steps of cleaning-in-place (CIP) and sterilising-in-place (SIP). In addition to these product-manufacture and equipment-preparation steps, the model also considered quality control and assurance as well as batch documentation running in parallel with the main production run. The performance metrics used to compare the production strategies were the cost of goods per gram (COG/g) and the demand on resources. Several assumptions were made for the case study (Table 2); these assumptions were validated through discussion with industrial experts. Harvesting of the broth was included as part of the perfusion process, typically using microfiltration to remove the cells and recycle them back into the bioreactor. After harvesting, the broth was loaded onto the affinity column, purified and stored. The eluate was then pooled and passed through further purification steps. The QC/QA and documentation activities were carried out for each unit operation. At the end of the batch, there was a lot review to determine product acceptance or rejection.
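A deliberately crude numeric sketch (all costs invented) of the trade-off being evaluated: fewer pooling events mean fewer equipment-preparation and compliance cycles per 30-day run. This toy omits the throughput and quality effects that penalise very long intervals in the actual study, so here the per-gram cost simply falls as the interval grows.

```python
# Toy cost-of-goods-per-gram model versus pooling interval.
# Every number below is invented; only the structure (per-pool preparation
# and compliance costs vs. per-day manufacture costs) mirrors the case study.
def cog_per_gram(interval_days, run_days=30, product_g_per_day=200,
                 prep_cost_per_pool=4000.0, manufacture_cost_per_day=900.0,
                 compliance_cost_per_pool=1500.0):
    pools = run_days // interval_days                 # pooling events per run
    total_cost = (pools * (prep_cost_per_pool + compliance_cost_per_pool)
                  + run_days * manufacture_cost_per_day)
    return total_cost / (run_days * product_g_per_day)

costs = {d: cog_per_gram(d) for d in (1, 2, 10, 15, 30)}
```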

Figure 2. Process diagram of the case study: production of a therapeutic Mab from mammalian cell culture using perfusion culture. The unit operations shown include inoculum grow-up, seed fermentation, concentration/diafiltration, ion exchange chromatography, gel filtration chromatography, viral clearance and final filtration.

Table 2. Key process case study assumptions.

Plant operating hours: 48 weeks a year, 7 days a week
Bioreactor size: 1,000 L
Length of 1 perfusion run: 30 days
Pooling intervals: 1-30 days
Productivity: 200 mg/L/day
Perfusion rate: 1 reactor volume/day
Capacity of QC/QA & documentation unit: 40% of total plant operating force
Manual time to test a batch: 0.5-1 day
Manual time to document a batch: 15-20% of operating time

3.2. Simulation results and discussion
The annual cost outputs on a task category basis for pooling intervals of 1, 2, 10, 15 and 30 days are plotted in Figure 3. The costs are relative to the case of a pooling interval of 1 day. Examining the base case indicates that the COG/g is dominated by the equipment-preparation and product-manufacture tasks, each contributing about the same proportion to the COG/g. As the pooling interval increases, the COG/g drops, as fewer equipment-preparation steps are required, and becomes dominated by the product-manufacture tasks, whose costs are almost invariant with pooling interval. The results provide the capability of viewing where the bulk of manufacturing costs is concentrated for different production strategies. Further examination shows that QC/QA and documentation activities contribute about 14.3% of the COG/g in the base case; this share falls sharply, to about 4% at an interval of 15 days. It is interesting to note that the COG/g increases again at an interval of 30 days; hence, there is a limit to the duration of the interval under which the company can operate. The tool also generates the current utilisation of operator resources, highlighting the daily peak levels in demand and when they occur. Examples of the demand on QC/QA staff for 3 successive batches are illustrated in Figures 4a and 4b for pooling intervals of 1 and 15 days respectively. Figure 4a indicates that typically a maximum of 4 operators are employed at any time in the process. At the longer pooling interval, Figure 4b shows that a maximum of 2 operators are used during the majority of the production time. The average utilisation of staff in both cases was also probed; this value corresponded to 2.6 and 1.2 operators for the 1 and 15 day intervals respectively. Thus, different manufacturing options can affect the demand on resources. The current utilisation performance metrics would prompt a company to allocate the appropriate number of staff to carry out the task efficiently, depending on the manufacturing option. In this particular case study, the lowest COG/g occurred at a pooling interval of 15 days. Hence, one may conclude that, based on this performance metric, the company should pool the broth every 15 days given the particular DSP equipment scale and operating conditions. The number of QC/QA staff could also be reduced, since the utilisation curve shows that the staff resource pool is not fully utilised most of the time.

Figure 3. Annual cost of goods per gram (COG/g) on a task category basis (regulatory compliance, product-manufacture and equipment-preparation tasks) for pooling intervals of 1, 2, 10, 15 and 30 days. Values are relative to an interval of 1 day.

Figure 4. Utilisation of QC/QA staff for (a) a daily pooling strategy and (b) a 15 day pooling strategy.

4. Conclusions
The case study demonstrated the functionality of the tool: outputting the cost of goods on a task basis, viewing the demands on resources and comparing the effect of QC/QA and batch documentation activities under different production strategies. Such a tool could be employed in early planning, hence contributing to transparent planning and project management decisions.

5. References
Farid, S., 2001, A Decision-Support Tool for Simulating the Process and Business Perspectives of Biopharmaceutical Manufacture, PhD thesis.
Farid, S., Novais, J.L., Karri, S., Washbrook, J. and Titchener-Hooker, N.J., 2000, A Tool for Modelling Strategic Decisions in Cell Culture Manufacturing, Biotechnology Progress, Vol. 16, pp. 829-836.
Ramsay, B., 2001, Lean Compliance in Manufacturing, Pharmaceutical Tech. Europe.
Remer, D.S. and Idrovo, J.H., 1991, Cost-Estimating Factors for Biopharmaceutical Process Equipment, Pharmaceutical Tech. International.
Sinnott, R.K., 1993, Coulson & Richardson's Chemical Engineering, Chapter 6.



Modeling the Polymer Coating in Microencapsulated Active Principles
D. Manca(a), M. Rovaglio(a), I. Colombo(b)
(a) CMIC Department, Politecnico di Milano, Italy; (b) Eurand International S.p.A., Italy
e-mail: [email protected]

Abstract
Starting from the detailed theory of Kamide (1990) on the nucleation and deposition of a polymer on a particle surface, this paper describes and models the progressive formation of the polymer coating in terms of juxtaposed unit spheres. The deposition of these polymer spheres on the particle is modeled by means of a Monte Carlo technique that numerically quantifies the spatial vacancies responsible for pore formation. The dimension and distribution of the pores are the two main quantities needed to evaluate the membrane diffusivity. The numerical model, developed to describe the polymer coating attributes in a detailed and phenomenological way, brings together two main steps of the microencapsulation process: polymer nucleation and deposition onto the active principle. The research target is to quantitatively describe the surface morphology of the polymer coating in terms of both micro and macro pores, in order to use the diameter and distribution values of the holes as input data for the evaluation of the effective surface diffusivity. By understanding the direct and indirect effects of the operating parameters on pore formation and distribution, it is possible to fine-tune these variables so as to reach an optimal coating quality and consequently an optimal release curve.

1. Introduction
Microencapsulation of active principles is a well-known process in the pharmaceutical industry: a polymer coating covers a particle of drug. The resulting product has some peculiar and interesting properties as far as the release of the drug is concerned; we speak of controlled release of the active principle within the human body. Compared with a conventional tablet, a microencapsulated one has the capability to avoid any sudden release while distributing and smoothing the concentration profile of the drug over a longer time period. Often 8-10 hours of sustained release can be achieved, bypassing the stomach membrane and releasing the active principle directly into the intestine. Dynamic models of controlled release have been proposed by several authors and are available in the literature (Deasy et al., 1980; Nixon and Wong, 1990; Singh et al., 1994). Most of them are quite simplified and based on macroscopic properties. The most advanced model proposes a combined effect of penetration theory and drug diffusivity through the coating, thus explaining the typical dependence of release on the square root of time. In such a model, an adaptive semiempirical parameter makes the numerical model match the experimental data. This parameter is the effective diffusivity, which tries to summarize the real coating structure of the polymer layer. Actually, the coating diffusivity depends on pores that can be clearly observed, as reported in Figures 1 and 2. The polymer layer has a quite complicated morphology that comprises micro and macro pores, craters, chinks and holes. These features are responsible for the delayed release of drug into the external solution. On the contrary, if a perfectly continuous polymer layer is produced by film casting, no release at all is observed even after 96 hours. This paper aims to improve the understanding of the main features that affect and regulate the drug release, in order to identify the process parameters that can be tuned to reach the requested properties.

Figure 1. Pores and chinks in the polymer coating. Figure 2. Passing holes and craters in the polymer coating.

2. Model of the Polymer Coating
The theory of Kamide (1990) allows determining the properties of the polymeric membrane covering a particle in terms of the number of pores, their conformation and the apparent diffusivity. Basically, it is necessary to know the volume ratio R between solvent and polymer; these two phases are identified as the polymer-poor and polymer-rich phases respectively. In our formulation and experimental activity we worked with ethylcellulose polymer and cyclohexane solvent; consequently, the specific values defined in the following refer to this pairing. The membrane is formed by the separation of the polymer from the poor phase and the consequent coagulation of the polymer particles. Three main steps can be outlined for the membrane formation:
• Nucleation: the formation of polymer nuclei and the growth of primary particles;
• Coagulation of primary particles and formation of a layer of secondary particles;
• Pore formation matching the voids existing among the secondary particles.
Given R as the ratio between the poor and rich polymer phases,

R = V_1 / V_2    (1)

the radius S_1 of the primary particles can be determined once the following are known: the coagulation free energy variation per unit volume,

Δf = ΔG(x_0) − ΔG'(x_0)    (2)

where x_0 is the initial concentration of polymer and ΔG(x_0) is the mean Gibbs free energy for the coexistence of the poor and rich phases:

ΔG(x_0) = (ΔG'(x_2) − ΔG'(x_1)) (x_0 − x_1) / (x_2 − x_1) + ΔG'(x_1)    (3)

and x_1, x_2 are the volume fractions of polymer in the poor and rich phases. The Gibbs free energy of mixing per unit volume is then defined as:

ΔG'(x_0) = ((1 − x_0) R T / V_s) [ ln(1 − x_0) + x_0 + χ_0 (1 + p_1 x_0 + p_2 x_0^2) x_0^2 ]
         + (x_0 R T / (V_s X)) [ ln(x_0) − (X − 1)(1 − x_0) + X χ_0 (1 − x_0)^2 (1 + (p_1/2)(1 + 2 x_0) + (p_2/3)(1 + 2 x_0 + 3 x_0^2)) ]    (4)

with V_s the molar volume of the solvent,

where X = 1607 is the mean number of monomer units present in the polymer,

χ_0 = (0.5 + ψ_0) + (Θ ψ_0 / T)(1 + k'/p_1)    (5)

ψ_0 = 2.2 is the entropy parameter, Θ = 364.21 K is the Flory temperature, which is an intrinsic property of the polymer, and T is the system temperature, whilst:

k' = k (1 − Θ/T)    (6)

k = −328.39 is a parameter that depends neither on the temperature nor on the degree of polymerization, and p_1 = −6.34 and p_2 = 47.49 are two coefficients from the Taylor series expansion of the interaction parameter.
As described before, the coagulation of primary particles of radius S_1 produces a distribution of secondary particle radii that is simulated by a Monte Carlo method. Primary particles move across the space with constant velocity modulus and random direction, and collisions between two or more particles are accounted for statistically. The resulting Brownian motion is characterized by the following velocity modulus:

v = k_B T / (6 π μ S_1^2)    (7)

where k_B is the Boltzmann constant and μ is the solution viscosity. Equation (7) is obtained by equating the following two equations:

v = 2 S_1 / Δt_unit    (8)

6 π μ S_1^2 v = k_B T    (9)

where Δt_unit is the sampling time adopted to simulate the dynamic evolution of the solution; as a matter of fact, Δt_unit is the time taken by a primary particle to walk a length equivalent to its diameter. Once the dynamic formation of secondary particles has been simulated, it is possible to focus attention on the pore pattern. The pores originate from the vacancies existing among the secondary particles. A straightforward classification of pores allows defining:

• Isolated pores: made up of a reduced number of vacancies that are internal to the membrane;
• Semi-open pores: connecting the active principle surface to the coating, or the coating to the external solution;
• Passing pores: crossing the whole polymer coating, connecting the active principle to the external solution.

The latex structure, constituted by the juxtaposed secondary particles, is assumed to be hexagonal-compact. Consequently, every particle is surrounded by twelve neighboring particles (or voids). This assumption conditions the pore formation. The membrane porosity can then be defined in terms of probability factors for the three categories outlined before:

Pr = Pr_ip + Pr_so + Pr_pp    (10)

where the terms in equation (10) represent, respectively, the overall, isolated, semi-open and passing probabilities for the membrane pores. By assuming that the membrane porosity does not change during the exsiccation process, Kamide's theory proposes the following formulation for the overall porosity:

Pr = [1 − (d_p/d)(S_2'/S_2)^3](1 − Pr') + [1 − x_2](1 − Pr') + Pr'    (11)

where d_p is the polymer density, d is the density of the polymer particles, and S_2' is the radius of the exsiccated secondary particles, which can be determined as follows:

S_2' = S_2 (d_p/d)^(1/3)    (12)

whilst Pr' is defined as follows:

Pr' = 1 − l_1/l_0    (13)

where l_0 is the layer thickness and l_1 is the exsiccated coating thickness. The pore formation is simulated by combinatorial calculus and through the concept of probability. The system is assumed to consist of spherical particles of radius S_2. The hexagonal and compact latex contains either polymer-rich aggregates or voids filled with solvent (polymer-poor vacancies). The polymer-rich units represent the structural frame of the coating. A pore is formed whenever one or more vacancies are juxtaposed. Focusing the attention on a given sphere i, a probability Pr is given that such an element is a void. The overall count for the coating is obtained by multiplying Pr and N_0, which is the total number of spheres in the unit volume of coating. To better understand the line of reasoning, let us consider the simplest case where the pores are made of a single vacancy. Evidently, this situation may occur only when all the surrounding spheres are polymer-rich particles. Due to the hexagonal and compact hypothesis, there must be twelve polymer spheres around the single void. Consequently, the associated probability of this event becomes (1 − Pr)^12. Finally, the number of pores N_1 made of a single vacancy is obtained by combining the two independent events E_1, "the selected sphere is a vacancy", and E_2, "the surrounding spheres are all polymer-rich particles":

N_1 = N_0 Pr (1 − Pr)^12    (14)
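Equation (14) is simple enough to check numerically. The function below is a direct transcription of it; the parameter values used in the usage note are arbitrary examples, not data from the paper.

```python
def single_vacancy_pores(n_spheres, pr):
    """Eq. (14): N1 = N0 * Pr * (1 - Pr)**12.

    The selected sphere is a vacancy with probability Pr, while all
    twelve neighbours of the hexagonal-compact packing must be
    polymer-rich, each with probability (1 - Pr).
    """
    return n_spheres * pr * (1.0 - pr) ** 12
```

For instance, with N_0 = 10^6 spheres and Pr = 0.1 this gives roughly 2.8 × 10^4 single-vacancy pores, since (0.9)^12 ≈ 0.28.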

The following step consists of extending the same procedure to the case where two adjacent vacancies are involved. The independent events that should be considered are:
• E_1: both of the adjacent spheres are vacancies, p(E_1) = Pr^2;
• E_2: the remaining spheres in the layer are polymer-rich particles, p(E_2) = (1 − Pr)^11;
• E_3: the spheres of the following layer that are adjacent to the vacancy of the previous layer are polymer-rich particles, p(E_3) = (1 − Pr)^3, since three spheres of the following layer are adjacent to each vacancy of the previous one.

The number of spheres m_i,l belonging to layer i that are adjacent to a given number l_v of spheres of layer i − 1 is given by the following equation:

m_i,l = 3 l_v M_(i−1) / M_i    (15)

where M_i is the number of spheres belonging to layer i, equal to:

M_i = Int{16 [(i − 1)^2 + 2 (i − 1)] + 1}    (16)

For example, the number of particles in the second layer that are adjacent to a single vacancy in the first layer is:

m_2,1 = 3    (17)

On the contrary, when considering the number of possible vacancies in the first layer, we discover, by combinatorial calculus, that there are twelve different selections. It is then straightforward to write the relation existing between the number of pores made up of two vacancies and their probability:

N_2 = 12 N_0 Pr^2 (1 − Pr)^11 (1 − Pr)^3    (18)

When the number of vacancies increases, the formulation gets more complex, since the number of layers to be considered increases as well. Considering the case of three vacancies, there are two distinct alternatives: given the intermediate vacancy, it is possible either to find the remaining two vacancies both in the first layer, or one in the first layer and the other in the second. Such alternatives are concurrent and mutually exclusive; consequently, the number of pores for one configuration is summed to that of the other:

N_3 = N_3' + N_3'' = (12 choose 2) N_0 Pr^3 (1 − Pr)^10 (1 − Pr)^6 + 12 · 3 N_0 Pr^3 (1 − Pr)^11 (1 − Pr)^5    (19)

The same line of reasoning can be extended to the four-vacancies case. By extrapolating the method it is possible to determine some rules for the deduction of the polymer structure. By reporting the filled layers as a function of the possible distribution of vacancies within the layers, it is possible to obtain Table 1.

Table 1: Vacancies as a function of layers.

                     | Layer 1 | Layer 2 | Layer 3
Vacancies in a layer |    1    |    1    |    1

Table 2 reports the number of times a layer is involved in the redistribution of vacancies as a function of the number of voids.

Table 2: Distribution of vacancies as a function of the number of voids.

Number of particles | Layer 1 | Layer 2 | Layer 3
         2          |    1    |    -    |    -
         3          |    1    |    1    |    -
         4          |    1    |    2    |    1

This emphasizes that, for an assigned number of vacancies, the total number of configurations can be determined by summing row-wise the elements of Table 2, which are distributed according to Poisson's law. The determination of the passing pores, i.e. those going from the inner active principle towards the external solution, follows a very similar combinatorial and probabilistic approach. Instead of layers, the concept of sheets is introduced. Due to the tetrahedral packing, once a certain number of vacancies has been evaluated, it is possible to determine how many spheres of the following sheet are vacancies. In a structure comprising two layers, a void in the former sheet may be followed by one, two or three vacancies in the latter. The sum of the probabilities associated with each alternative gives the overall probability of the passing pores, Pr_pp. Finally, once Pr_pp and Pr_ip have been determined, the probability of semi-open pores Pr_so can be evaluated by simple difference through equation (10). Although the theory of Kamide does not consider any influence exerted by exsiccation, it is realistic that too high a temperature or too fast a procedure can dramatically influence the number and structure of the pores. Some macroscopic irregularities on the coating surface can originate from the evaporation of solvent leaving the polymer during exsiccation. At the moment, an extended experimental activity is being carried out to understand and validate the interaction of polymer deposition and exsiccation on coating diffusivity.
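As a concrete illustration of the simulation procedure described in this section, the sketch below implements a Brownian random-walk coagulation of the kind outlined for the primary particles. It is not the authors' code: the velocity law v = kT/(6πμS^2), the merge-on-contact rule, and every numerical value in it are assumptions made for the sketch, and periodic-boundary effects on inter-particle distances are ignored.

```python
import math
import random

KB = 1.380649e-23  # Boltzmann constant, J/K


def velocity(radius, temperature, viscosity):
    """Brownian velocity modulus, assumed of the form v = kT/(6*pi*mu*S^2)."""
    return KB * temperature / (6.0 * math.pi * viscosity * radius ** 2)


def step(particles, dt, temperature, viscosity, box):
    """Move every particle one sampling interval in a random direction,
    wrapping positions into a cubic box of side `box`."""
    for p in particles:
        v = velocity(p["r"], temperature, viscosity)
        theta = random.uniform(0.0, math.pi)
        phi = random.uniform(0.0, 2.0 * math.pi)
        p["x"] = (p["x"] + v * dt * math.sin(theta) * math.cos(phi)) % box
        p["y"] = (p["y"] + v * dt * math.sin(theta) * math.sin(phi)) % box
        p["z"] = (p["z"] + v * dt * math.cos(theta)) % box


def coagulate(particles):
    """Merge overlapping pairs (one merge per particle per sweep),
    conserving total particle volume: radii combine as r^3."""
    out = []
    while particles:
        p = particles.pop()
        hit = None
        for i, q in enumerate(particles):
            d = math.dist((p["x"], p["y"], p["z"]), (q["x"], q["y"], q["z"]))
            if d < p["r"] + q["r"]:
                hit = i
                break
        if hit is not None:
            q = particles.pop(hit)
            p["r"] = (p["r"] ** 3 + q["r"] ** 3) ** (1.0 / 3.0)
        out.append(p)
    return out
```

Each call to `step` advances all particles by one sampling interval; `coagulate` then merges any touching pair, so repeated application produces a distribution of secondary-particle radii while the total polymer volume stays constant.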

3. References
Deasy, P.B., Brophy, M.R., Ecanow, B. and Joy, M.M., 1980, J. Pharm. Pharmacol., 32, 588.
Kamide, K., 1990, Thermodynamics of Polymer Solutions, Polymer Science Library, Elsevier, Amsterdam.
Nixon, J.R. and Wong, K.T., 1990, Int. J. Pharm., 58, 1421.
Singh, M., Lumpik, J.A. and Rosenblatt, J., 1994, J. Control. Release, 32, 931.

European Symposium on Computer Aided Process Engineering - 13
A. Kraslawski and I. Turunen (Editors)
© 2003 Elsevier Science B.V. All rights reserved.


Extractant Design for Enhanced Biofuel Production through Fermentation of Cellulosic Wastes
Eftychia C. Marcoulaki* and Fragiskos A. Batzias
Laboratory of Simulation of Industrial Processes, Dept. of Industrial Management, University of Piraeus, Karaoli & Dimitriou 80, Piraeus 185 34, Greece

Abstract
This work suggests novel materials to enhance the production of bioethanol from pulp and paper industry wastes. The extractant design problem is formulated mathematically to include process data and solvent properties tailored to the use of cellulosic waste substrates. The simulator is interfaced with available molecular design synthesis algorithms to search for and select chemicals of desired and/or optimal performance. The materials designed herein can initiate more detailed analyses and laboratory experiments to validate and support the screening results.

1. Introduction
Industrial and agricultural wastes are rich in cellulosic substrates, so they can be fermented to give renewable fuels within a "zero emission" framework (Ulgiati, 2001). This waste treatment option produces clean alternatives to fossil fuels, while using wastes that would otherwise be landfilled. The environmental benefits of this scheme are significant, but reduction of the production cost is considered essential to make the biofuel products competitive in the open market (Wyman, 2001). In typical fermentation processes, ethanol is produced from the fermentation of saccharides (the substrate) by microorganisms in an aqueous environment. Ethanol concentrations higher than 12% inhibit the reaction and yield an effluent mixture that is poor in ethanol (Minier and Goma, 1982). The inhibition effect in the conventional scheme increases the energy and capital costs, due to increased separation effort (Boukouvalas et al., 1995) and low substrate conversion (Tangnu, 1982). In extractive fermentation the ethanol is simultaneously recovered in a solvent phase, from where it can easily be obtained via simple distillation.

This work considers the mathematical modeling of an in situ extractive fermentation process tailored to cellulosic waste treatment, and discusses the solvent requirements. The model and the requirements are used here to propose new extraction solvents using available molecular design synthesis technology. The work employs recently developed tools (Marcoulaki and Kokossis, 2000a), which have been tested on solvent design problems and have reported novel molecular structures and significant improvements over other techniques (Marcoulaki and Kokossis, 2000b; Marcoulaki et al., 2000, 2001).

Corresponding author. E-mail: [email protected] (E. Marcoulaki)


2. Advantages of Cellulosic Wastes
The use of waste instead of primary raw materials should bring a significant decrease in the operational cost for the production of bioethanol fuels (Tangnu, 1982). Additionally, there are several differences in processing cellulosic materials wasted or obtained as byproducts from the pulp and paper industry, compared to straw or agricultural waste. The main difference starts from the necessity to carry out the end-of-pipe treatment, since the pulp waste contains hazardous materials that need to be removed before it can be safely landfilled. Post-processing to derive a marketable product provides an opportunity for profit, while reducing the landfilling fees and the transportation cost. The advantages of processing the pulp waste spring from the scale economies of massive production in specific locations, and from the nature of the waste itself. Economical advantages include the following:
• The pulp waste appears in large quantity and at a predetermined rate.
• The waste can be processed within the pulp mill site, at reduced cost due to industrial intensification.
• Integrated plants provide investment incentives associated with cash grants, loan interest rate subsidies, leasing subsidies, tax allowances, etc. These incentives may contribute decisively to increasing the competitiveness of the pulping itself.
• The financial benefits of in situ processing. These, however, can be counterbalanced by scale economies, when the waste from various locations is collected and co-processed in a distant facility.

Processing advantages include that the pulp waste:
• is enriched in cellulose during the pulping process;
• obeys quality standards (not very strict in the case of repulping);
• is more suitable for biological treatment and more susceptible to the required biotransformations, due to its reduced content of lignin and hemicellulose;
• is hydrolyzed in an aqueous slurry, avoiding expensive dewatering.

Post-processing of cellulosic wastes to biofuels aims to produce useful, environmentally benign products out of a potentially hazardous material. To ensure environmental consistency, however, life cycle assessment is essential to evaluate the environmental impact of post-processing and of its end products. A significant amount of research has been conducted in the fields of experimental extractant selection and mathematical modeling for the typical glucose fermentation. This was based on the typical scheme involving sugar beet, where the xylose amount is very low, and used Saccharomyces cerevisiae, which treats mainly the glucose substrate. These works could not address the problem of cellulose fermentation, where significant amounts of xylose are also produced. The present paper uses kinetic data based on a yeast which is specially modified to treat both substrates (Krishnan et al., 1999). The kinetic data are used to develop an extended simulation model, enabling the screening of appropriate extractants, formulated as a solvent optimization problem.


3. Extractive Fermentation Simulation Model
A process simulation model tailored to the case of cellulosic wastes is hereby developed to assist the search for solvents with improved performance. The new model stems from the work of Kollerup and Daugulis (1985), and modifies it to include both glucose and xylose according to the kinetic models of Krishnan et al. (1999). The final equations are

Σ_k Y_P/k (D_0 S_k^0 − D S_k) = P [D + M_P D_E / (1 − M_P P/ρ_P)]    (1)

where Y_P/k is the ethanol yield coefficient based on substrate k; D is the effluent aqueous dilution rate in h^−1; S_k is the effluent aqueous concentration of substrate k in g/L; P is the effluent ethanol concentration in the aqueous phase in g/L; k is G for glucose or X for xylose; the subscript/superscript 0 denotes the influent variables; ρ_P is the ethanol density in g/L; and M_P is the ethanol mass distribution coefficient between the extract and aqueous phases.

D_0 ρ_A^0 − D ρ_A = P M_P D_E / (1 − M_P P/ρ_P) + Σ_k Y_C/k (D_0 S_k^0 − D S_k)    (2)

where Y_C/k is the CO2 yield coefficient based on substrate k, k ∈ {G, X} (G: glucose, X: xylose); ρ_A is the effluent aqueous phase density in g/L; and D_E is the influent extract dilution rate in h^−1. The dilution rate D is equal to the overall reaction rate. The mixing rule and the specific growth rates are expressed by Eqs. 3 and 4, respectively:

D = (S_G μ_G + S_X μ_X) / (S_G + S_X)    (3)

μ_k = μ_m,k S_k / (K_S,k + S_k + S_k^2/K_I,k) · [1 − (P/P_m,k)^β_k]    (4)

The aqueous phase density (Eq. 2) is given by the linear expression

ρ_A = ρ_W + α_G S_G + α_X S_X + α_P P    (5)

The equation system (Eqs. 1-5) is solved for given values of M_P, D_E, S_k^0 and S_k, to get the values of P, D and D_0. The productivity in the extract phase (PDE), the extraction efficiency (EE) and the conversion of substrate k (C_k) are

PDE = P M_P D_E [1 + M_P P / (ρ_P − M_P P)]    (6)

EE = P D / (P D + PDE)    (7)

and

C_k = 1 − D S_k / (D_0 S_k^0)

The coefficients and parameters for the above equations are found in Table 1. Krishnan et al. (1999) do not give values for the CO2 yield coefficient Y_C/S,k based on substrate k. Since Y_P/S,G has similar values in the papers of Kollerup and Daugulis (1985) and Krishnan et al. (1999), Y_C/S,G is taken from the former, and for Y_C/S,X we maintain the analogy between the ethanol values. Also, for P_m and β (Eq. 4) the present work uses the set of values of Krishnan et al. (1999), where the differences in the reaction rate values between the two models are small. The coefficients used in the density expression (Eq. 5) are taken from linear regression on experimental values for aqueous glucose, xylose (Wooley and Putsche, 1996) and ethanol (Perry and Green, 1997) binary solutions at 20°C (in the range of the expected aqueous ethanol concentrations).

Table 1: Coefficients and parameters for the model equations.

Parameter | Glucose value | Xylose value | Units
Y_P/S     | 0.400         | 0.470         | -
Y_C/S     | 0.44          | 0.3745        | -
P_m       | 59.04         | 129.9         | g/L
K_S       | 0.565         | 3.400         | g/L
μ_m       | 0.662         | 0.190         | h^−1
K_I       | 283.7         | 18.10         | g/L
β         | 1.036         | 0.25          | -

Parameter | Value  | Units
ρ_P       | 789.34 | g/L
ρ_W       | 998.23 | g/L
α_G       | 0.3697 | -
α_X       | 0.4093 | -
α_P       | 0.1641 | -
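The model of Eqs. (1)-(5) and the performance measures of Eqs. (6)-(7) can be prototyped as below. This is a hedged sketch, not the authors' implementation: the equations are used in the form reconstructed in this section, the mapping of Table 1 parameters is a best-effort transcription, and the operating values in the usage note (P = 20 g/L, M_P = 1.3, D_E = 2 h^−1) are illustrative assumptions.

```python
# Kinetic and yield parameters per substrate, transcribed from Table 1.
PARAMS = {
    "G": dict(Yp=0.400, mu_m=0.662, Ks=0.565, Ki=283.7, Pm=59.04, beta=1.036),
    "X": dict(Yp=0.470, mu_m=0.190, Ks=3.400, Ki=18.10, Pm=129.9, beta=0.25),
}
RHO_P = 789.34  # ethanol density, g/L (Table 1)


def mu(k, S, P):
    """Eq. (4): substrate- and product-inhibited specific growth rate."""
    p = PARAMS[k]
    monod = p["mu_m"] * S / (p["Ks"] + S + S ** 2 / p["Ki"])
    return monod * (1.0 - (P / p["Pm"]) ** p["beta"])


def dilution(SG, SX, P):
    """Eq. (3): mixing rule giving the overall dilution rate D."""
    return (SG * mu("G", SG, P) + SX * mu("X", SX, P)) / (SG + SX)


def influent_dilution(P, D, S0, S, Mp, DE):
    """Rearrange the ethanol balance, Eq. (1), to get D0 in closed form."""
    rhs = P * (D + Mp * DE / (1.0 - Mp * P / RHO_P))
    num = rhs + D * sum(PARAMS[k]["Yp"] * S[k] for k in S)
    den = sum(PARAMS[k]["Yp"] * S0[k] for k in S0)
    return num / den


def performance(P, D, D0, S0, S, Mp, DE):
    """Eqs. (6)-(7) and the conversion formula: PDE, EE and C_k."""
    PDE = P * Mp * DE * (1.0 + Mp * P / (RHO_P - Mp * P))
    EE = P * D / (P * D + PDE)
    C = {k: 1.0 - D * S[k] / (D0 * S0[k]) for k in S}
    return PDE, EE, C
```

With the Table 2 operating point (S_G^0 = 300, S_G = 30, S_X^0 = 150, S_X = 30 g/L) and the assumed P, M_P and D_E, the closed-form D_0 satisfies Eq. (1) by construction; the remaining balance, Eq. (2), would then be used to pin down P itself.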

4. Desirable Extractant Properties
A typical extractive fermentation process (EFP) includes a fermentor, the solvent recovery unit, and the product purification unit. The biotransformation reaction is carried out in the aqueous (nutrient) phase, which contains the substrate and the microorganisms. The recovery and purification stages are preferably done via simple distillation, and can be carried out in the same equipment, depending on the extract mixture. An appropriate solvent to extract the toxic product should preferably have the following properties, with particular emphasis on low toxicity, low solubility in the aqueous phase and high selectivity for the product (Fournier, 1986; Karakatsoulis et al., 1995):
• Low extractant microbial toxicity (t), to guarantee biocompatibility with the microorganisms that perform the fermentation.
• Low extractant losses to the nutrient phase (Sl), to reduce the solvent makeup and the cost of downstream treatment of the aqueous stream.
• High distribution coefficient of the product to be removed (M_P), to remove the toxic bioproduct from the nutrient phase and reduce the inhibition.
• Low distribution coefficient for the essential nutrient(s) (M_k), to reduce the loss of substrate(s) in the extract stream. Similar constraints may apply to reaction byproducts, e.g. acetic acid.
• High extractant selectivity (S_E) (a high M_P is usually accompanied by an increased S_E).
• Easy separation of the solute from the extractant, preferably by simple distillation. A substantial difference in the boiling point temperatures of the two components can safely guarantee the absence of azeotrope formation and the ease of separation.
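The quantitative screening bounds collected in Table 2 below can be expressed as a simple filter over candidate solvents. This is only a sketch: the dictionary field names are invented here, and the melting- and boiling-point bounds are omitted because they are garbled or truncated in the source.

```python
def passes_screen(solvent):
    """Apply the unambiguous screening bounds of Table 2 to a candidate
    solvent, given as a dict of (hypothetical) property names."""
    return (
        solvent["selectivity"] > 2.0            # selectivity to solute (wt)
        and solvent["distribution_coeff"] > 1.3  # solute distribution coefficient (wt)
        and solvent["raffinate_loss"] < 0.01     # extractant losses to raffinate (wt)
    )
```

A molecular design algorithm would then shortlist candidates with, e.g., `[s for s in candidates if passes_screen(s)]` before any detailed simulation.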

Table 2: Operational parameters and optimization constraints for extractant design.

Process conditions: Temperature = 298 K; D_E = 2 h^−1; S_G^0 = 300 g/L; S_G = 30 g/L; S_X^0 = 150 g/L; S_X = 30 g/L.
Solvent properties: selectivity to solute > 2.0 (wt); solute distribution coefficient > 1.3 (wt); losses to raffinate < 0.01 (wt); melting point temperature (Tm) > 288 K; boiling point temperature (Tb): 300