Optimization in Polymer Processing
ISBN: 1611228182, 9781611228182

CHEMICAL ENGINEERING METHODS AND TECHNOLOGY

OPTIMIZATION IN POLYMER PROCESSING

No part of this digital document may be reproduced, stored in a retrieval system or transmitted in any form or by any means. The publisher has taken reasonable care in the preparation of this digital document, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained herein. This digital document is sold with the clear understanding that the publisher is not engaged in rendering legal, medical or any other professional services.

CHEMICAL ENGINEERING METHODS AND TECHNOLOGY Additional books in this series can be found on Nova’s website under the Series tab.

Additional E-books in this series can be found on Nova’s website under the E-books tab.

MATERIALS SCIENCE AND TECHNOLOGIES Additional books in this series can be found on Nova’s website under the Series tab.

Additional E-books in this series can be found on Nova’s website under the E-books tab.

CHEMICAL ENGINEERING METHODS AND TECHNOLOGY

OPTIMIZATION IN POLYMER PROCESSING

ANTÓNIO GASPAR-CUNHA AND

JOSÉ ANTÓNIO COVAS EDITORS

Nova Science Publishers, Inc. New York

Copyright © 2011 by Nova Science Publishers, Inc. All rights reserved. No part of this book may be reproduced, stored in a retrieval system or transmitted in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical photocopying, recording or otherwise without the written permission of the Publisher. For permission to use material from this book please contact us: Telephone 631-231-7269; Fax 631-231-8175 Web Site: http://www.novapublishers.com

NOTICE TO THE READER The Publisher has taken reasonable care in the preparation of this book, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained in this book. The Publisher shall not be liable for any special, consequential, or exemplary damages resulting, in whole or in part, from the readers’ use of, or reliance upon, this material. Any parts of this book based on government reports are so indicated and copyright is claimed for those parts to the extent applicable to compilations of such works. Independent verification should be sought for any data, advice or recommendations contained in this book. In addition, no responsibility is assumed by the publisher for any injury and/or damage to persons or property arising from any methods, products, instructions, ideas or otherwise contained in this publication. This publication is designed to provide accurate and authoritative information with regard to the subject matter covered herein. It is sold with the clear understanding that the Publisher is not engaged in rendering legal or any other professional services. If legal or any other expert assistance is required, the services of a competent person should be sought. FROM A DECLARATION OF PARTICIPANTS JOINTLY ADOPTED BY A COMMITTEE OF THE AMERICAN BAR ASSOCIATION AND A COMMITTEE OF PUBLISHERS. Additional color graphics may be available in the e-book version of this book.

LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA Optimization in polymer processing / editors, António Gaspar-Cunha and José António Covas. p. cm. Includes index. ISBN: (eBook)

1. Plastics. 2. Production engineering. I. Gaspar-Cunha, António. II. Covas, J. A. TP1120.O68 2010 668.4--dc22 2010043913

Published by Nova Science Publishers, Inc., New York

CONTENTS

Preface

Chapter 1. Introduction
José António Covas and António Gaspar-Cunha

Optimization in Engineering

Chapter 2. An Introduction to Optimization
Lino Costa and Pedro Oliveira

Chapter 3. An Introduction to Multiobjective Optimization Techniques
Antonio López Jaimes, Saúl Zapotecas Martínez and Carlos A. Coello Coello

Chapter 4. Extending Optimization Algorithms to Complex Engineering Problems
António Gaspar-Cunha, José Ferreira, José António Covas and Carlos Fonseca

Application to Polymer Processing

Chapter 5. Polymer Extrusion - Setting the Operating Conditions and Defining the Screw Geometry
José António Covas and António Gaspar-Cunha

Chapter 6. Reactive Extrusion - Optimization of Representative Processes
António Gaspar-Cunha, José António Covas, Bruno Vergnes and Françoise Berzin

Chapter 7. The Automatic Design of Extrusion Dies and Calibration/Cooling Systems
João Miguel Nóbrega and Olga Sousa Carneiro

Chapter 8. On the Use of Reduced Bases in Optimization of Injection Molding
Francisco Chinesta and Fabrice Schmidt

Chapter 9. Estimation and Control of Sheet Temperature in Thermoforming
Benoit Boulet, Md. Muminul Islam Chy and Guy Gauthier

Index

PREFACE

Plastics processing is a major industrial activity, which yields components and systems for a wide range of industries, such as packaging, automotive, aeronautics, electrical and electronic, sports and leisure, toys, civil construction, and agriculture. Most plastic components are manufactured either by extrusion or injection molding, but other techniques such as blow molding and thermoforming are also important. The productivity of these technologies is dictated by the equipment design, the choice of the operating conditions and the physical properties of the polymer system. This book discusses recent scientific developments on the optimization of manufacturing engineering problems and applies them to polymer processing technologies.

Chapter 1 - The introduction motivates the use of optimization in polymer processing: after reviewing the main processing technologies and the role of process modeling, it contrasts the direct and inverse problems and outlines the optimization approach followed throughout the book.

Chapter 2 - In simple terms, optimization is the process of choosing the best solution from a set of alternative solutions. Thus, for instance, the optimization of a production process may be seen as the choice of the least costly production design amongst a set of different designs. Alternatively, the optimum may involve the maximization of the profit of a given process. Optimization can, thus, be defined as the search for an optimal solution, which may involve, in its simplest formulation, the minimization or the maximization of scalar functions such as the cost or the profit. In this chapter, the mathematical foundations of optimization are presented, in particular, the necessary and sufficient conditions that must be observed in order to guarantee that a solution is optimal. These conditions constitute the basis of many optimization algorithms, in particular, in the case of deterministic algorithms, i.e., algorithms whose search rules are deterministic. Therefore, the optimality conditions refer to a particular class of optimization problems, the so-called convex optimization problems, where the variables are continuous and the search space is convex. Problems where these conditions are not observed can be approached by other algorithms presented in subsequent chapters.

Chapter 3 - A wide variety of problems in engineering, industry, and many other fields involve the simultaneous optimization of several objectives. In many cases, the objectives are defined in incomparable units, and they present some degree of conflict among them (i.e., one objective cannot be improved without deteriorating at least another objective). These problems are called Multiobjective Optimization Problems (MOPs). Consider, for example, a shipping company which is interested in minimizing the total duration of its routes to improve customer service. On the other hand, the company also wants to minimize the number of trucks used in order to reduce operating costs. Clearly, these objectives are in conflict, since adding more trucks reduces the duration of the routes but increases operating costs. In addition, the objectives of this problem are expressed in different measurement units. In single-objective optimization, it is possible to determine, for any given pair of solutions, if one is better than the other. As a result, a single optimal solution is usually obtained. However, in multiobjective optimization there is no straightforward method to determine if a solution is better than another. The method most commonly adopted in multiobjective optimization to compare solutions is the Pareto dominance relation which, instead of a single optimal solution, leads to a set of alternatives with different trade-offs among the objectives. These solutions are called Pareto optimal solutions or nondominated solutions. Although there are multiple Pareto optimal solutions, in practice only one solution has to be selected for implementation. For instance, in the example of the shipping company presented above, only one route from the several alternatives generated will be selected to deliver the packages on a given day. Therefore, in the multiobjective optimization process two tasks can be distinguished, namely: i) finding a set of Pareto optimal solutions, and ii) choosing the most preferred solution out of this set. Since Pareto optimal solutions are mathematically equivalent, the latter task requires a Decision Maker (DM) who can provide subjective preference information to choose the best solution in a particular instance of the multiobjective optimization problem. The authors distinguish two main approaches to solve multiobjective optimization problems. The first is the MultiCriteria Decision Making (MCDM) approach, which can be characterized by the use of mathematical programming techniques and a decision making method in an intertwined manner. In most MCDM methods the decision maker plays a major role in providing information to build a preference model, which is exploited by the mathematical programming method to find solutions that better fit the DM's preferences. Evolutionary Multiobjective Optimization (EMO) is another approach useful to solve multiobjective optimization problems. Since evolutionary algorithms use a population-based approach, they usually find an approximation of the whole Pareto front in one run. Although in the EMO community the decision making task has not received much attention in the past, in recent years a considerable number of works have addressed the incorporation of preferences in MultiObjective Evolutionary Algorithms (MOEAs). Finally, the authors present some general concepts and notation used in the remainder of the chapter.

Chapter 4 - Real engineering design and optimization problems are complex, multidisciplinary and difficult to manage within reasonable time frames; in some cases they can, at least to some extent, be mathematically described by sophisticated computational tools, which generally require significant resources. The momentous advances in some scientific and technological subjects (e.g., computational fluid dynamics, structural mechanics), coupled with the development of high-performance computing techniques (e.g., parallel and/or grid computing) and computer facilities, make it possible to progressively tackle more features of complicated problems. Multidisciplinary Design Optimization (MDO) can be described as a technology, an environment, or a methodology for the design of complex integrated engineering structures, which combines different disciplines and takes into account in a synergistic manner the interaction between the various subsystems [2-4]. Examples of its practical application include the design and optimization of aircraft, cars, building structures and manufacturing systems.

Chapter 5 - Plasticating extrusion (i.e., the conversion of solid pellets or powder of a polymer system into a homogeneous melt that is continuously pushed through a shaping die) is a major plastics processing step:

- Extrusion lines, comprising one or more extruders, a shaping die and downstream equipment, are used to manufacture a wide range of mass consumption plastics products (e.g., profiles, pipes and tubing, film and sheet, wires and cables, filaments, fibers, non-wovens).
- Compounding lines are not only used for additivation (i.e., incorporation of additives such as lubricants, processing aids, plasticizers, anti-oxidants, UV stabilizers, impact modifiers) and pelletization (usually preceded by melt mixing and devolatilization), but also to prepare advanced polymer-based systems, such as highly filled compounds, nanocomposites, polymer blends, hybrid materials, thermovulcanizates, or modified/functionalized polymers. In several of these examples, chemical reactions take place simultaneously with processing, i.e., the extruder is also utilized as a continuous reactor. Some commercial polymers can also be synthesized and pelletized continuously in an extruder.
- Plasticating extruders are the core unit of other important polymer processing technologies, such as injection molding and blow molding, and of a few rapid prototyping methods. For instance, in injection molding the screw can also move axially, which initially lets melt accumulate at its tip and subsequently injects it into the mold cavity.
- Finally, it is worth noting that plastics extrusion technologies have been successfully applied in other industries, particularly for the processing of food, pharmaceuticals and ceramics.

Chapter 6 - Reactive extrusion consists in using an extruder as a continuous chemical reactor. In parallel with the conventional functions of a screw extruder (solids conveying, melting, mixing, melt pumping), a chemical reaction develops and must be controlled. In comparison with a classical chemical process in solution, reactive extrusion exhibits interesting advantages.

Chapter 7 - Thermoplastic profiles have large-scale application in the construction, medical, electric and electronic industries, among others. The term profile is commonly used to designate products of constant cross section that are obtained by the extrusion process. A typical extrusion line for the production of thermoplastic profiles generally comprises an extruder, a die, a calibration/cooling system, a haul-off unit and a saw. The forming tools are the die and the calibration/cooling system, whose main functions are, respectively, to shape the polymer melt into a cross section similar to that specified for the profile and to establish its final dimensions while cooling it down to a temperature that guarantees its mechanical integrity. The major objective of any extrusion line is to produce the required profile at the highest rate and quality. These goals are usually conflicting, i.e., an increase in speed generally affects the product quality negatively, and vice-versa. Consequently, the improvement of the extrusion line performance demands a systematic approach and a careful study of the phenomena involved in the process, particularly those concerning the design of its critical components (forming tools): the extrusion die and the calibrator(s). The extrusion die plays a central role in the establishment of the product dimensions, morphology and properties. The difficulties to be faced in the design of an extrusion die are closely related to the complexity of the profile to be produced. In fact, while the design of these forming tools for the production of a rod or pipe is almost straightforward, in the case of an intricate window profile it can be an extremely complex process. From the geometrical point of view, the extrusion die flow channel must convert a circular cross section, corresponding to the melt leaving the extruder, into a shape similar to that of the profile. This geometrical transformation should be performed as smoothly as possible, in order to avoid problems caused by stagnation points or abrupt melt accelerations.

Chapter 8 - About 30% of the annual polymer production is transformed by injection molding. It is a cyclic process of forming a plastic into a desired shape by forcing the molten polymer under pressure into a cavity. For thermoplastic polymers, solidification is achieved by cooling. Typical cycle times range from 1 to 100 seconds and depend mainly on the cooling time. The complexity of molded parts is virtually unlimited, sizes may range from very small to very large, and an excellent control of tolerances is possible.

Chapter 9 - The thermoforming process is divided into three phases. The first phase is the heating of the plastic sheet in the oven. The heating phase is necessary to obtain a ductile sheet, so that it can be molded. The temperature at which the sheet becomes ductile depends on the kind of polymer it is made of. Once the sheet has been heated to the adequate temperature, it goes into the second phase of the process, the molding and cooling of the sheet. There are various ways to make sure that the warm plastic sheet takes the shape of the mold; this can be done, for example, by creating a vacuum between the sheet and the mold. The plastic sheet then cools down and loses its pliability, which makes it keep the shape of the mold. The molded plastic part is then extracted from the mold and transferred to the last phase of the process, trimming, in which the excess plastic material is removed to give the part its final shape.

In: Optimization in Polymer Processing Editors: A. Gaspar-Cunha and J. A. Covas, pp. 1-7

ISBN: 978-1-61122-818-2 ©2011 Nova Science Publishers, Inc.

Chapter 1

INTRODUCTION José António Covas and António Gaspar-Cunha IPC/I3N, Institute of Polymers and Composites, University of Minho, Guimarães, Portugal

The World's consumption of plastics has increased steadily for several decades, mirroring the technological and societal advances of the same period. The forecast for 2010 reaches 259 million tons, up from 190 in 2004 and 86 in 1990 (source: Plastics Europe Deutschland, WG Statistics and Market Research). In fact, the plastics consumption per capita is nowadays a well-known economic indicator, given its good correlation with the Gross National Income of a specific economy. This widespread use of plastics in packaging, electrical and electronics, building, medical, automotive, aeronautics, sports and leisure, fishing, agriculture, textile and toys applications results from a number of inherent favorable characteristics, such as low density, thermal, electrical and acoustical insulation, low permeability to liquids and gases, good mechanical performance (tensile, impact), good aesthetical characteristics (namely in terms of color, gloss and touch) and easy conversion into useful products, even with complex shapes. For a given polymer, these properties can be easily tuned via additivation (for example, plasticizers, impact modifiers, reinforcements), but the polymer itself can also be modified in terms of its molecular weight and/or molecular weight distribution, or by grafting specific chemical species, in order to make it more adequate to a particular application or processing technology (known as grade in the industrial jargon). However, the full range of properties - e.g., from elastomeric to quite rigid, from transparent to opaque, from low to high service temperature - can be explored via the selection of a material from hundreds of possibilities (including polyolefins, polyamides, vinyls, polyesters, polyurethanes, or their blends).

A number of processing technologies have been made progressively available to convert the above polymeric systems into successful industrial products, with adequate dimensional tolerances, the required aesthetics and sufficient performance under service conditions. In the case of thermoplastics, the most important are injection molding, extrusion, blow molding and thermoforming. Most of these techniques rely on the same working principle: the raw material, generally in the form of solid pellets, is heated until full melting; this melt is then


forced to take the shape of the product (e.g., flow through a die in the case of extrusion, filling of the mold cavity in injection molding); the melt is cooled down until solidification (in practice, until it becomes sufficiently rigid to be handled). The spectacular developments in sensing, informatics and electronics have been applied to these technologies, yielding significant increases in precision, accuracy, reproducibility, control, automation and energy savings. Thus, processing equipment shows increasing output capacity, flexibility and control capability. Nevertheless, the chief contribution to equipment enhancement is associated with the progress of the knowledge of the physical phenomena involved in a production cycle. Polymers have low thermal conductivity (usually a very useful property in terms of final application, but a drawback when heating or cooling for processing purposes), may degrade upon exposure to the processing temperatures, may crystallize upon cooling (thus affecting the optical characteristics), and their melts are generally highly viscous and exhibit some elasticity. In practice, the development of flow, temperature, material morphology and homogeneity inside/along a processing unit is quite complex.

Scientific and technical studies of polymer processing began by identifying and then analyzing the major individual steps of each process, their related physical, thermal, rheological and chemical phenomena and their influencing parameters. Correlations between these phenomena and the operating conditions and equipment geometry, as well as between the latter and the final product properties, were soon established. For example, thermoforming denotes a group of techniques that typically consist in heating a plastic sheet, pushing it against the contours of a mold and cooling the part before extraction from the mold. Applications range from food packaging to large parts for electric appliances. From a process analysis point of view, the production cycle can be divided into three stages: heating, mechanically deforming the sheet and cooling the part. Thin sheets are generally heated by radiation, hence the first stage can be taken as equivalent to heating a plastic sheet by radiation during a certain time, possibly with convective surface losses. The corresponding process parameters include the heater and initial sheet temperatures, the sheet thickness and the heating time (operating conditions), the distance between heater and sheet, the type of heater and the oven layout (equipment characteristics), and the sheet emissivity and thermal conductivity (material properties). During heating, a complex temperature profile develops and the sheet may sag - thus affecting its thickness and distance to the heater - and its surface quality may be affected. In parallel, a vast amount of empirical knowledge has also been accumulated, namely on the processability and operating window of different materials and on the efficiency and working life of different types of heaters.

Only when the role of each parameter in the process response is known does it become possible to improve the process performance in an efficient way. Practical information can be exclusively used for this purpose, although it is usually associated with expensive (and often time- and material-consuming) trial-and-error experiments. Also, it is difficult to extrapolate existing experience to new concepts or solutions. Despite these limitations, many new technological solutions continue to pop up based on ideas from experienced practitioners, which are then validated and empirically developed. Alternatively, process modeling seems an elegant, powerful and efficient tool to improve the knowledge of a given processing sequence and, thus, enable its progress. There is an abundant, ever-increasing literature on the modeling of polymer processing. From early attempts assuming 1D isothermal flow and heat transfer, purely Newtonian viscous melts or elastic solids and simplified geometries of


product or equipment, the field has developed significantly in the last two decades, producing sophisticated numerical codes with very good prediction capability (as ascertained from direct comparison of computational predictions with experimental measurements), not only of the flow kinematics, but also of morphology development and even of the final product dimensions and (some) properties. Depending on the processing technique, and on whether the computational model focuses on a number of process stages or on the entire processing sequence, this may mean a 3D non-isothermal flow and heat transfer analysis, the consideration of the solid or melt viscoelastic behavior (through complex constitutive equations) and of the effect of normal stresses, the full description of the product and/or equipment geometry, and the insertion of morphology development analyses and/or of multiscale approaches capable of predicting macroscopic properties. Some commercial software packages are available on the market. Generally, the greater the sophistication of a software package, the greater the expertise required from the user and the greater the costs involved. Thus, the utilization of software for design and engineering purposes must be very efficient. Unfortunately, this is often not the case.

Taking as an example single screw extrusion, the process engineer might want to use process modeling software to set up the operating conditions to be adopted for the manufacture of a new product. After entering into the program the geometrical characteristics of the extruder and die, the relevant polymer properties and a specific operating condition (screw speed and barrel temperature profile), the program runs and she or he will be confronted with a more or less complete description of flow and heat transfer along the screw. Usually, the data is presented as the axial evolution of pressure, temperature, shear rate, viscosity and degree of mixing along the screw, mass output, power consumption, velocity profiles at specific channel cross-sections, et cetera. It is up to the user to "digest" this information and judge whether the predicted thermomechanical environment and global process response are adequate or not (for instance, is the output large enough? Is the final degree of mixing sufficient? Is the maximum melt temperature acceptable? Are local residence times excessive?). The user may decide to investigate the effect of screw speed on the machine performance, thus repeating the modeling and analysis process a number of times. At the end of the study, adequate operating conditions may have been defined, but there is no evidence of whether it would have been possible to find a better solution.

From a mathematical point of view, process modeling consists in solving the relevant governing equations (in the case of melt flow, they comprise the mass, momentum and energy conservation equations), coupled to material constitutive equations (e.g., viscosity, density) and considering the relevant geometrical/operational boundary/initial conditions, in order to obtain the process characteristics (velocity, stress, pressure and temperature). This is known as the direct problem. In the above example of single screw extrusion, it is the solution to the direct problem that provides the data for analysis (see Figure 1). However, the process engineer would probably be much more interested in prescribing the appropriate process responses and obtaining from the program the corresponding set of operating conditions. Mathematically, this would correspond to solving the same set of governing equations, with the same set of constitutive equations, but now solving for some of the previous boundary conditions; most of the previous process responses would now become boundary conditions. This corresponds to solving the inverse problem (Figure 2). Unfortunately, the latter is mathematically ill-posed, namely because there is no unique relationship between cause and effect (in other words, various operating conditions might fulfill the new requirements).


Figure 1. Numerical modeling of single screw extrusion.

Figure 2. Direct and inverse problem in single screw extrusion. In the direct problem, the geometry, polymer properties and operating conditions are fed into the governing equations, which yield the process responses (output, power consumption, melt temperature, degree of mixing, etc.). In the inverse problem, the polymer properties and the desired responses (output, power consumption, melt temperature, degree of mixing, etc.) are prescribed, and the governing equations are solved for the geometry, the operating conditions, etc.

An interesting alternative consists in considering the above tasks as optimization problems. For example, setting the operating conditions of a single screw extruder corresponds to defining the screw speed and barrel temperature that will maximize (optimize) the process performance. As illustrated in Figure 3, this approach could typically use three interrelated modules: 1) a modeling package, which yields the process response for a given set of inputs, 2) an objective function, which quantifies the process performance (this function can combine several process parameters), and 3) an optimization algorithm. Thus, each possible solution is characterized by a value of the objective function that is determined via the modeling package. The optimization algorithm provides progressively better solutions. An adequate user interface could automatically provide the results to the user.


Figure 3. Process optimization approach.
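A minimal sketch of the three-module structure of Figure 3 is given below. The toy extruder model, the penalty weights and the random search are hypothetical stand-ins: the real modeling routines are those discussed in Chapters 5 and 6, and the random search merely takes the place of the optimization algorithms presented in Chapters 2 to 4.

```python
# Minimal sketch of the three-module optimization approach of Figure 3.
# `extruder_model` is a hypothetical stand-in for a process modeling package.
import random

def extruder_model(screw_speed, barrel_temp):
    """Hypothetical direct-problem solver: operating conditions -> responses."""
    output = 0.05 * screw_speed * (1 + 0.002 * (barrel_temp - 180))
    melt_temp = barrel_temp + 0.15 * screw_speed          # viscous dissipation
    return {"output": output, "melt_temp": melt_temp}

def objective(responses):
    """Quantify process performance: high output, penalized overheating."""
    penalty = max(0.0, responses["melt_temp"] - 230.0)    # degradation limit
    return responses["output"] - 10.0 * penalty

def random_search(n_trials=200, seed=0):
    """Placeholder optimizer; Chapters 2-4 present principled algorithms."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        speed = rng.uniform(20, 120)                      # rpm
        temp = rng.uniform(160, 220)                      # deg C
        score = objective(extruder_model(speed, temp))
        if best is None or score > best[0]:
            best = (score, speed, temp)
    return best

print(random_search())
```

In practice the modeling package dominates the computation time, which is why Chapter 4 discusses strategies to reduce the number of solution evaluations.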

The potential and advantages of this methodology are multiple:

- it is able to automatically provide the practical answers sought after by process engineers;
- it can use available process modeling packages (conventional direct problem solvers), thus benefiting from the sophistication reached in this field;
- different optimization algorithms are available, depending on the specific characteristics of the processing problem to solve;
- it can incorporate important practical knowledge (in the selection of the process parameters, in the definition of their range of variation, in the final decision of a solution to the problem);
- the increase in computational power provided by computer manufacturers keeps the computation times required by the method within reasonable values.

The authors have successively applied this methodology to polymer extrusion, for screw design, process optimization and scale-up purposes. A number of commercial software packages have added optimization routines to their menus. The amount of publications in this field is also growing. All these are indicators of the prospective more widespread practical adoption of the optimization approach.

This is why this book is about optimization in polymer processing. The work is a joint effort of a number of experts in polymer processing and/or optimization and is divided into two parts. The first presents and discusses optimization concepts (Chapters 2 to 4), the second reports applications in polymer processing (Chapters 5 to 9).

Chapter 2 is committed to the introduction of optimization concepts. It starts with the mathematical formulation of an optimization problem and with the distinction between continuous and discrete optimization. Then, the definitions of convexity, global and local optimization and optimality conditions are given. Some simple examples are presented to illustrate the different types of problems (e.g., constrained and unconstrained) and how the definitions introduced can be used to solve them. This is followed by the justification of the need for other types of optimization algorithms (namely metaheuristics). The chapter ends with a short introduction to Evolutionary Algorithms, a very efficient class of metaheuristics that are adopted in some chapters of this book.

Another important characteristic of real optimization problems is their multi-objective nature. This is the subject of Chapter 3. Multi-objective problems and optimality conditions for these types of problems are defined, and traditional methods to deal with their multi-objective nature are presented. The chapter concludes with the study of various evolutionary algorithms capable of dealing with multi-objective problems and the discussion of their advantages.

In Chapter 4, multi-objective evolutionary algorithms are extended to solve some specific and complex issues arising when solving real optimization problems, namely decision making, robustness of the solutions and required computation times. Since the result of a multi-objective optimization problem is not a single point, but a set of Pareto solutions, it is necessary to use a decision making strategy able to take into account the relative importance of the individual objectives. Also, the solutions obtained must be robust against variations of the decision variables, i.e., the performance should not deteriorate when the value of the decision variables changes slightly. Finally, due to the high number of solution evaluations required by MOEAs and the corresponding high computation times, it is interesting to develop strategies to minimize these difficulties, which usually involve hybridization with local search methods. The methods proposed to deal with these three issues are illustrated towards the end of the chapter for a few benchmark test problems.

The methodologies presented in Chapters 3 and 4 are used in Chapter 5 to set the operating conditions and to define the screw geometry/configuration for plasticating single screw and co-rotating twin-screw extrusion, two major polymer processing and compounding technologies. The chapter starts by presenting in some detail the process modeling routines that are used to evaluate the solutions proposed by the optimization algorithms during the search stage. Then, the most important characteristics capable of influencing the optimization procedure are presented for both extrusion processes. Finally, a few results obtained with the proposed optimization strategy are presented and discussed.

Chapter 6 is devoted to the optimization of two representative reactive extrusion processes making use of co-rotating twin-screw extruders, i.e., ε-caprolactone polymerization and starch cationization, using the methodology described in the previous chapters. The chemical reactions are presented and modeled. Results concerning the optimization of the operating conditions and screw configuration of the two chemical processes are then presented.

The design of extrusion dies and downstream calibration/cooling systems is the subject of Chapter 7. The state-of-the-art on the design of these tools is presented. The optimization methodology adopted is a metaheuristic where the improvement of the performance of the solutions generated in each iteration is obtained by changing iteratively the design controllable parameters - this is known as the Simplex method. An application example is discussed.

Sequential quadratic programming is used to optimize the injection molding process in Chapter 8. As in the other process application chapters, the authors start by describing the numerical modeling routine developed, which in this case is based on the dual reciprocity boundary element method. Then, the modeling and optimization methods are used to solve an example dealing with the optimization of mold cooling.

The last chapter is divided into two main parts. The first deals with the development of methods for the estimation and control of sheet temperature in thermoforming, while the second is dedicated to the resolution of the inverse heating problem using the conjugate gradient method. In both cases, the methodologies presented are illustrated with some typical examples.

OPTIMIZATION IN ENGINEERING

In: Optimization in Polymer Processing Editors: A. Gaspar-Cunha and J. A. Covas, pp. 11-28

ISBN: 978-1-61122-818-2 © 2011 Nova Science Publishers, Inc.

Chapter 2

AN INTRODUCTION TO OPTIMIZATION Lino Costa and Pedro Oliveira Department of Production and Systems Engineering, University of Minho, Portugal

Keywords: Optimization, Optimality Conditions, Metaheuristics.

1 Introduction

In simple terms, optimization is the process of choosing the best solution from a set of alternative solutions. Thus, for instance, the optimization of a production process may be seen as the choice of the least costly production design amongst a set of different designs. Alternatively, the optimum may involve the maximization of the profit of a given process. Optimization can, thus, be defined as the search for an optimal solution, which may involve, in its simplest formulation, the minimization or the maximization of scalar functions such as the cost or the profit. In this chapter, the mathematical foundations of optimization are presented, in particular, the necessary and sufficient conditions that must be observed in order to guarantee that a solution is optimal. These conditions constitute the basis of many optimization algorithms, in particular, in the case of deterministic algorithms, i.e., algorithms whose search rules are deterministic. Therefore, the optimality conditions refer to a particular class of optimization problems, the so-called convex optimization problems, where the variables are continuous and the search space is convex. Problems where these conditions are not observed can be approached by other algorithms presented in subsequent chapters.

2 Mathematical Formulation

Mathematically, an optimization problem with a single objective consists in the minimization or maximization of a function subject to constraints [1, 2, 3]. The following notation will be used:

• x is the vector of decision variables;
• f(x) is the objective function to be minimized or maximized;
• g(x) is the vector of inequality constraints;
• h(x) is the vector of equality constraints.

A minimization problem can, thus, be formulated, without loss of generality, as follows:

min f(x)
subject to g_j(x) ≥ 0, j = 1, ..., m
           h_i(x) = 0, i = m+1, ..., m+p
where x ∈ Ω.    (1)

It should be noted that any problem formulated as the maximization of an objective function f(x) can be reformulated, since max f(x) = −min(−f(x)). This formulation considers a vector x of n real variables (Ω ⊆ Rⁿ), m inequality constraints and p equality constraints, the total number of constraints thus being m + p. The variable space corresponds to the set of all possible values for the decision variables. The search for a solution of the optimization problem, the optimal point x∗, is carried out in the variable space. In general, optimization problems are approached assuming that the optimal point exists, is unique, and can be found using an optimization algorithm. Though many times this is the case, there are situations where such conditions do not hold: f may have no lower bound, x∗ may not exist, or, for certain objective functions, x∗ may not be unique.

The inequality constraints are expressed as greater than or equal (any constraint formulated as less than or equal can be transformed into a greater than or equal constraint by simply multiplying it by −1). Moreover, in many situations there might exist inequality constraints expressing lower and upper bounds for the n variables x. A point x satisfies a constraint if, for that constraint, the left side of the expression evaluated at that point is in accordance with the right hand side, in terms of the relational operator. A point is designated as feasible if all the inequality and equality constraints are satisfied at that point. The set of all feasible points constitutes the feasible region and can be defined as follows:

F = {x ∈ Ω : g(x) ≥ 0 ∧ h(x) = 0}.    (2)

All points that do not belong to this set are unfeasible points. The optimum, the problem solution, necessarily belongs to the feasible region. When a point satisfies an inequality constraint j, two situations might occur:

• the point is on the boundary of the feasible region, g_j(x) = 0, and the constraint is said to be active;
• the point is in the interior of the feasible region of constraint j, g_j(x) > 0, and the constraint is said to be inactive.

For any feasible point x, all the equality constraints are active. The active set, for any feasible solution x, is defined as the set of indexes of all active constraints:

R(x) = {j ∈ {1, ..., m} : g_j(x) = 0} ∪ {i ∈ {m+1, ..., m+p} : h_i(x) = 0}.    (3)
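These definitions translate directly into code. The sketch below, with purely illustrative constraints, checks feasibility (equation (2)) and computes the active set (equation (3)) for a toy problem; the tolerance handling is an implementation detail, not part of the definitions.

```python
# Feasibility test and active set (equations (2) and (3)) for a toy problem
# with one inequality and one equality constraint; a tolerance is needed
# because equality and activity are checked in floating point.
TOL = 1e-8

g = [lambda x: x[0] + x[1] - 1.0]          # g_1(x) >= 0
h = [lambda x: x[0] - x[1]]                # h_1(x) = 0

def is_feasible(x):
    return all(gj(x) >= -TOL for gj in g) and all(abs(hi(x)) <= TOL for hi in h)

def active_set(x):
    """Indices of active constraints; every equality is active at a feasible x."""
    act = [j for j, gj in enumerate(g) if abs(gj(x)) <= TOL]
    act += [len(g) + i for i, hi in enumerate(h) if abs(hi(x)) <= TOL]
    return act

x = (0.5, 0.5)
print(is_feasible(x), active_set(x))   # True, [0, 1]: g_1 and h_1 both active
```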

Problems in the general formulation can be classified, for example, according to the nature of the objective function and constraints (linear, non linear, convex), the number of variables (small or large), the differentiability of the objective function and constraints, the existence of constraints, or the nature of the variables (continuous, integer or mixed integer). Problems where the objective function and all the constraints are linear functions with respect to x constitute the so-called Linear Programming Problems. On the other hand, problems where at least one of the constraints or the objective function is non linear with respect to x are designated as Non Linear Programming Problems.

3 Continuous and Discrete Optimization

The term continuous optimization refers to problems where the search for the optimum is made on an infinite set of points, i.e., the search space is infinite. In these problems, typically, the decision variables are real, as is the case of the problems corresponding to the general formulation. On the other hand, the term discrete optimization refers to problems where the search for the optimum is made on a discrete set of points. By their nature, in a large number of problems the variables only have a physical meaning if they take integer values. However, solving such a problem while ignoring the integer nature of the variables, i.e., taking them as real variables and rounding the solution to the closest integer, does not guarantee finding a solution close to the optimum of the original problem. These problems with integer variables are designated as Integer Programming Problems and can be formulated in the following way:

min f(y)
subject to g_j(y) ≥ 0, j = 1, ..., m
           h_i(y) = 0, i = m+1, ..., m+p
where y ∈ Ψ.    (4)

In this formulation, y is the vector of the q integer variables defined in Ψ ⊆ Z^q, where Z corresponds to the set of all integers. In these problems, inequality constraints specifying lower and upper limits for the q variables y_l, l = 1, ..., q, are often considered.

It must be noted that there are problems which, by their nature, must be modeled using both real and integer variables. These are, in general, designated as Mixed Integer Programming Problems, and can be formulated as follows:

min f(x, y)
subject to g_j(x, y) ≥ 0, j = 1, ..., m
           h_i(x, y) = 0, i = m+1, ..., m+p
where x ∈ Ω, y ∈ Ψ.    (5)

In this formulation there is a total of n + q variables, x and y being, respectively, the vectors of the n real variables and of the q integer variables. As previously, if at least one of the constraints or the objective function is non linear with respect to x or y, the problem is a Mixed Integer Non Linear Programming Problem.
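The warning about rounding can be made concrete. The sketch below solves the continuous relaxation of a small, textbook-style integer program (the instance is illustrative, not from this book) with scipy.optimize.linprog and compares the rounded relaxation point with the true integer optimum obtained by enumeration.

```python
# Rounding the continuous relaxation of an integer program can fail.
# Illustrative instance: max y1 + 0.64*y2
#   s.t. 50*y1 + 31*y2 <= 250,  3*y1 - 2*y2 >= -4,  y >= 0, y integer.
from itertools import product
from scipy.optimize import linprog

c = [-1.0, -0.64]                       # linprog minimizes, so negate
A_ub = [[50, 31], [-3, 2]]              # second row: -(3*y1 - 2*y2) <= 4
b_ub = [250, 4]

relax = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print("LP relaxation:", relax.x)        # ~ (1.95, 4.92)
print("rounded:", [round(v) for v in relax.x])   # (2, 5) is infeasible: 255 > 250

best = max(
    (y for y in product(range(6), range(9))
     if 50 * y[0] + 31 * y[1] <= 250 and 3 * y[0] - 2 * y[1] >= -4),
    key=lambda y: y[0] + 0.64 * y[1],
)
print("integer optimum:", best)         # (5, 0), far from the rounded point
```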

4 Global and Local Optimization and Convexity

In general, it is intended to find the global optimum, mathematically defined as:

Definition 1. Global minimum - A point x∗ ∈ F is a global minimum if f(x∗) ≤ f(x) for all x ∈ F.

A local minimum is a point that, in a neighborhood, has the minimum objective function value. Formally:

Definition 2. Local minimum - A point x∗ ∈ F is a local minimum if there exists a neighborhood N(x∗) around x∗ such that f(x∗) ≤ f(x) for all x ∈ F ∩ N(x∗).

A convex function is defined as:

Definition 3. Convex function - A function f is convex on its domain if, for any two points x1 and x2 in that domain, f(λx1 + (1 − λ)x2) ≤ λf(x1) + (1 − λ)f(x2) for each λ ∈ [0, 1], i.e., the value of f never lies above the line joining the points (x1, f(x1)) and (x2, f(x2)).

A convex programming problem is, therefore, a problem where f is a convex function and F a convex set.
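Definition 3 suggests a simple numerical probe: sample pairs of points and check the segment inequality. The sketch below applies it to the can-area function used later in Example 1, which is convex for r > 0; a passed probe does not prove convexity, it merely fails to find a counterexample.

```python
# Numerical probe of Definition 3: f(l*x1 + (1-l)*x2) <= l*f(x1) + (1-l)*f(x2).
import math, random

def f(r):                    # can-area function of Example 1, convex on r > 0
    return 2 * math.pi * r**2 + 660 / r

rng = random.Random(1)
ok = all(
    f(lam * r1 + (1 - lam) * r2) <= lam * f(r1) + (1 - lam) * f(r2) + 1e-9
    for r1, r2, lam in ((rng.uniform(0.1, 10), rng.uniform(0.1, 10), rng.random())
                        for _ in range(10000))
)
print(ok)   # True: no violation found on (0.1, 10)
```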

5 Optimality Conditions

In this section, the optimality conditions for optimization problems (as formulated in (1)) are presented. For the sake of clarity, the optimality conditions for unconstrained problems are formulated first; the optimality conditions for optimization problems with inequality and equality constraints are presented afterwards. The proofs of the theorems presented in this section can be found in [1].

5.1 Unconstrained problems

For unconstrained problems, when the objective function f(x) is continuously differentiable, it is possible to identify the local minima by analyzing certain optimality conditions. In particular, if the objective function f(x) is twice continuously differentiable, one can determine whether a point x∗ is a local minimum by the analysis of its first and second derivatives at x∗. For a given objective function f(x), the gradient ∇f(x) and the Hessian ∇²f(x) are, respectively, the vector of first partial derivatives and the matrix of second partial derivatives, i.e.,

∇f(x) = (∂f/∂x_1, ∂f/∂x_2, ..., ∂f/∂x_n)ᵀ    (6)

and

∇²f(x) = [∂²f/(∂x_i ∂x_j)]_{i,j=1,...,n},    (7)

the n × n matrix whose (i, j) entry is ∂²f/(∂x_i ∂x_j), with the pure second derivatives ∂²f/∂x_i² on the diagonal.
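When derivatives are not available in closed form, the gradient (6) and the Hessian (7) can be approximated by central differences and the conditions of the theorems below checked numerically. The function in the sketch is an illustrative example, not one from the text.

```python
# Classify a stationary point via finite-difference gradient (6) and Hessian (7).
import numpy as np

def f(x):                                  # illustrative function: min at (1, 2)
    return (x[0] - 1) ** 2 + 0.5 * (x[1] - 2) ** 2

def grad(f, x, eps=1e-6):
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)   # central difference
    return g

def hessian(f, x, eps=1e-4):
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps**2)
    return H

x_star = np.array([1.0, 2.0])
print(grad(f, x_star))                          # ~ 0: first order condition holds
print(np.linalg.eigvalsh(hessian(f, x_star)))   # all > 0: strong local minimum
```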

Assuming that x∗ is a local minimum, the necessary conditions of optimality may be deduced.

Theorem 1. First order necessary conditions - If x∗ is a local minimum of f(x) and f(x) is continuously differentiable in an open neighborhood of x∗, then ∇f(x∗) = 0.

Theorem 2. Second order necessary conditions - If x∗ is a local minimum of f(x) and ∇²f(x) is continuously differentiable in an open neighborhood of x∗, then ∇f(x∗) = 0 and ∇²f(x∗) is positive semi-definite.¹

Any point that satisfies ∇f(x) = 0 is a stationary point of f(x). Note that a stationary point may be a local minimum of f(x), a local maximum of f(x), or a point that is neither a minimum nor a maximum, designated a saddle point. On the other hand, if the sufficient condition is satisfied at a point x∗, then it is guaranteed that x∗ is a local minimum of f(x).

Theorem 3. Second order sufficient condition - If ∇²f(x) is continuously differentiable in an open neighborhood of x∗ and if ∇f(x∗) = 0 and ∇²f(x∗) is positive definite, then x∗ is a strong local minimum of f(x).

In general, the necessary conditions are used to demonstrate that a given point is not an optimum, since a local minimum must satisfy them. These conditions are almost necessary and sufficient, but they fail in the case of null curvature.

Example 1. Suppose that an aluminum can is designed to minimize the total material used, so that the can holds 330 ml. The can is cylindrical in shape, as depicted in Figure 1. The material used is the material on the cylinder wall plus the material on the bottom and top of the can. The area of each of the bottom and top caps is the area of a circle of radius r, πr², totaling 2πr². The area of the cylinder wall with height h and radius r is given by 2πrh. For a volume of 330 ml, the height can be expressed as a function of the radius, πr²h = 330, thus h = 330/(πr²). Therefore, the material in the wall is given by

2πrh = 2πr · 330/(πr²) = 660/r

and the total material is given by the following equation:

f_1(r) = 2πr² + 660/r.

¹ A positive definite matrix is a matrix for which all eigenvalues are positive. A negative definite matrix is a matrix for which all eigenvalues are negative. In a semi-definite matrix, some of the eigenvalues can be zero and all the others have the same sign.

Figure 1. Can (a cylinder of radius r and height h).

Figure 2. Area A (cm²) as a function of the can radius r (cm).

The first order condition for a single variable optimization problem requires the computation of the first derivative:

df_1(r)/dr = 4πr − 660/r² = 0.

Thus, 4πr³ = 660, i.e.,

r = (660/(4π))^(1/3) = 3.7449 cm,

and the height is h = 7.4900 cm. Figure 2 shows the area as a function of the can radius.
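A quick numerical confirmation of this result (a sketch; SciPy's bounded scalar minimizer stands in for the analytic treatment):

```python
# Verify Example 1: minimize f1(r) = 2*pi*r**2 + 660/r for the 330 ml can.
import math
from scipy.optimize import minimize_scalar

f1 = lambda r: 2 * math.pi * r**2 + 660 / r

r_analytic = (660 / (4 * math.pi)) ** (1 / 3)
res = minimize_scalar(f1, bounds=(0.1, 10), method="bounded")

print(r_analytic)                    # 3.7449...
print(res.x)                         # matches the analytic radius
print(330 / (math.pi * res.x**2))    # h = 7.4900..., i.e. h = 2r
```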

Table 1 presents the surface area of the can for various values of the radius r and the corresponding height h.

Table 1. Area.

r (cm)    h (cm)    f_1 (cm²)
0.5       420.2     1321.6
1.0       105.0     666.3
1.5       46.7      454.1
2.0       26.3      355.1
2.5       16.8      303.3
3.0       11.7      276.5
3.5       8.6       265.5
4.0       6.6       265.5
4.5       5.2       273.9
5.0       4.2       289.1
10.0      1.1       694.3

The second derivative is given by

d²f_1(r)/dr² = 4π + 1320/r³.

For any positive value of r the second derivative is always positive, so the value that satisfies the first order condition corresponds to a minimum, as can be seen in Figure 2.

Example 2. The previous problem was formulated as a single variable optimization problem. Suppose now that the objective is not only to minimize the total area but also to minimize the thickness of the material. For that matter, the problem is transformed into the minimization of the total cost (the cost of the material, as a function of the area, and as a function of the thickness). The cost function is now given by

f_2(r, t) = k_1(2πr² + 660/r) + k_2t² − k_3t + k_4,

which involves two components, one respecting the area and another referring to the thickness. A cost proportional to the area, with coefficient k_1, is assumed, together with a quadratic function of the thickness with coefficients k_2, k_3 and k_4. Let us consider the following values for these coefficients: k_1 = 10, k_2 = 4, k_3 = 2 and k_4 = 10. The first order conditions are now given by the gradient of the cost function:

∇f_2(r, t) = (∂f_2/∂r, ∂f_2/∂t) = (40πr − 6600/r², 8t − 2) = 0.

Solving this equation system, we obtain (r, t) = (3.7449, 0.25).

The second order conditions are given by the second derivatives, defining the Hessian matrix as follows:

∇²f_2(r, t) = [[∂²f_2/∂r², ∂²f_2/∂r∂t], [∂²f_2/∂t∂r, ∂²f_2/∂t²]] = [[40π + 13200/r³, 0], [0, 8]].

It can be observed that the off-diagonal elements are zero, since there is no cross term between the two variables. Since the solution vector satisfying the first order conditions produces a positive definite Hessian, it can be concluded that the solution vector corresponds to the minimum. Figure 3 shows the cost as a function of the can radius and thickness. In Table 2, the cost of the can is given for various values of the radius r and the thickness t.
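The same conclusions can be verified numerically: the sketch below finds the root of the gradient with scipy.optimize.fsolve and confirms that the Hessian eigenvalues are positive.

```python
# Verify Example 2: solve grad f2 = 0 and check the Hessian eigenvalues.
import numpy as np
from scipy.optimize import fsolve

def grad_f2(v):
    r, t = v
    return [40 * np.pi * r - 6600 / r**2,   # df2/dr
            8 * t - 2]                      # df2/dt

r, t = fsolve(grad_f2, [3.0, 0.3])
print(r, t)                                 # 3.7449, 0.25

H = np.array([[40 * np.pi + 13200 / r**3, 0.0],
              [0.0, 8.0]])
print(np.linalg.eigvalsh(H))                # both positive: a minimum
```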

Figure 3. Cost ($) as a function of the can radius r (cm) and thickness t (cm).

Table 2. Cost.

r (cm)    t (cm)    f_2 ($)
3.50      0.10      2665.245
3.50      0.25      2665.154
3.50      0.50      2665.405
3.75      0.10      2653.413
3.75      0.25      2653.332
3.75      0.50      2653.573
4.00      0.10      2665.150
4.00      0.25      2665.060
4.00      0.50      2665.310

Example 3. Let us now consider a cost function where the second cost component depends on both the radius and the thickness. Suppose, as previously, that the can producer is interested in the minimization of the total cost, which depends on the area and the thickness, given by the following objective function:

f_3(r, t) = k_1(2πr² + 660/r) + k_4t²r² − k_5tr + k_6.

Let us consider k_1 = 10, k_4 = 0.1, k_5 = 0.2 and k_6 = 10. The first order conditions, given by the gradient of the cost function, are

∇f_3(r, t) = (∂f_3/∂r, ∂f_3/∂t) = (40πr − 6600/r² + 0.2t²r − 0.2t, 0.2r²t − 0.2r) = 0.

Solving this equation system, we obtain (r, t) = (3.7449, 0.2670). The second order conditions, based on the second derivatives, define the following Hessian matrix:

∇²f_3(r, t) = [[∂²f_3/∂r², ∂²f_3/∂r∂t], [∂²f_3/∂t∂r, ∂²f_3/∂t²]] = [[40π + 13200/r³ + 0.2t², 0.4rt − 0.2], [0.4rt − 0.2, 0.2r²]].

It can be observed that the Hessian now presents cross terms. The solution vector derived from the first order conditions produces a positive definite Hessian, which guarantees that the solution is a minimum.
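As a cross-check of Example 3, f_3 can also be minimized directly, bypassing the algebra; the sketch below uses the derivative-free Nelder-Mead method and recovers (3.7449, 0.2670). Note that the second gradient component, 0.2r²t − 0.2r = 0, implies t = 1/r at the stationary point.

```python
# Verify Example 3 by direct minimization of f3 (k1=10, k4=0.1, k5=0.2, k6=10).
import numpy as np
from scipy.optimize import minimize

def f3(v):
    r, t = v
    return 10 * (2 * np.pi * r**2 + 660 / r) + 0.1 * t**2 * r**2 - 0.2 * t * r + 10

res = minimize(f3, x0=[3.0, 0.3], method="Nelder-Mead")
r, t = res.x
print(r, t, 1 / r)    # 3.7449, 0.2670, 0.2670: consistent with t = 1/r
```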

5.2 Constrained problems

For constrained problems, it is also possible to define optimality conditions. These conditions will be expressed in terms of the Lagrangian function. For the formulation of the problem with constraints (equation (1)), the Lagrangian function is given by

L(x, λ) = f(x) − Σ_{j=1}^{m} λ_j g_j(x) − Σ_{i=m+1}^{m+p} λ_i h_i(x),    (8)

where λ is the vector of Lagrangian multipliers. For problems with constraints, it is mandatory to analyze the properties of the gradients of the constraints, ∇g_j(x) and ∇h_i(x), which are, respectively, the vectors of first derivatives of the m inequality constraints and of the p equality constraints. The gradient vector of a constraint, ∇g_j(x) (or ∇h_i(x)), is, in general, orthogonal to the contour of the constraint g_j(x) (or h_i(x)). In the case of an inequality constraint, the corresponding gradient vector points towards the feasible side of the constraint. Nevertheless, it is possible that ∇g_j(x) (or ∇h_i(x)) may be zero due to the algebraic representation of g_j(x) (or h_i(x)). For this reason, a regularity condition is introduced in order to guarantee that none of the gradients is null. The regularity of a point is defined as follows:


Definition 4. Regular point - Given a point x∗ and the set of active constraints R(x∗), the point x∗ is said to be regular if the set of the gradients of the active constraints, {∇g_j(x∗), j ∈ R(x∗)} ∪ {∇h_i(x∗), i ∈ R(x∗)}, is linearly independent.²

The optimality conditions shown below always assume that a solution x∗ of the optimization problem is a regular point. The first order necessary conditions can be stated as follows:

Theorem 4. First order necessary conditions - Let x∗ be a local minimum. If x∗ is a regular point of the constraints, then there exists a vector of Lagrangian multipliers λ∗ such that the following conditions are satisfied at (x∗, λ∗):

∇_x L(x∗, λ∗) = ∇f(x∗) − Σ_{j=1}^{m} λ∗_j ∇g_j(x∗) − Σ_{i=m+1}^{m+p} λ∗_i ∇h_i(x∗) = 0    (9)

g_j(x∗) ≥ 0 for all j = 1, ..., m    (10)

h_i(x∗) = 0 for all i = m+1, ..., m+p    (11)

λ∗_j ≥ 0 for all j = 1, ..., m    (12)

λ∗_j g_j(x∗) = 0 for all j = 1, ..., m    (13)

Equations (9) to (13) define the first order necessary conditions, which are known as the Karush-Kuhn-Tucker conditions (KKT conditions). A vector (x∗, λ∗) satisfying all these conditions is a Karush-Kuhn-Tucker point. The second order necessary conditions involve the matrix of second derivatives of the Lagrangian function with respect to x, ∇²_xx L(x, λ).

Theorem 5. Second order necessary conditions - Let x∗ be a local minimum that is a regular point and λ∗ a vector of Lagrangian multipliers that satisfies the Karush-Kuhn-Tucker conditions. Then

tᵀ ∇²_xx L(x∗, λ∗) t ≥ 0    (14)

for all t such that

∇g_j(x∗)ᵀ t = 0 for all j = 1, ..., m with j ∈ R(x∗),
∇h_i(x∗)ᵀ t = 0 for all i = m+1, ..., m+p.    (15)

The second order sufficient conditions do not require that x∗ is a regular point.

Theorem 6. Second order sufficient conditions - Let x∗ be a feasible point for which there exists a vector of Lagrangian multipliers λ∗ that satisfies the Karush-Kuhn-Tucker conditions. If

tᵀ ∇²_xx L(x∗, λ∗) t > 0    (16)

for all t ≠ 0 such that

∇g_j(x∗)ᵀ t = 0 for all j = 1, ..., m with j ∈ R(x∗) and λ∗_j > 0,
∇g_j(x∗)ᵀ t ≥ 0 for all j = 1, ..., m with j ∈ R(x∗) and λ∗_j = 0,
∇h_i(x∗)ᵀ t = 0 for all i = m+1, ..., m+p,    (17)

then x∗ is a strong local minimum.

² Two vectors are linearly independent if one of them cannot be written as a linear combination of the other, i.e., if there is no scalar, different from zero, that transforms one into the other.


Example 4. Suppose, as previously, that the can producer is interested in the minimization of the total cost, which depends on the area and on the height of the can. Moreover, it is required that the can has a volume V, i.e., πr²h = V. This problem can be formulated as a constrained problem as follows:

min f4(r, h) = 2πr² + 2πrh
subject to πr²h − V = 0.

The Lagrangian function for this problem is given by

L(r, h; λ) = 2πr² + 2πrh − λ(πr²h − V).

The gradient of the Lagrangian function with respect to (r, h) is given by

∇_x L(r, h; λ) = ( ∂L/∂r , ∂L/∂h )^T = ( 4πr + 2πh − 2λπrh , 2πr − λπr² )^T.

The KKT conditions (Theorem 4) require that this gradient is set to zero and that the constraint πr²h − V = 0 is satisfied, from which the following KKT point is obtained:

(r, h, λ) = ( (V/2π)^{1/3}, 2(V/2π)^{1/3}, 2(V/2π)^{−1/3} ).

The second order conditions (Theorems 5 and 6), based on the second derivatives, define the following Lagrangian Hessian matrix with respect to r and h:

∇²_xx L(r, h; λ) = [ 4π − 2λπh   2π − 2λπr ]
                   [ 2π − 2λπr   0         ]

Computing this matrix at the KKT point (r, h, λ) = ((V/2π)^{1/3}, 2(V/2π)^{1/3}, 2(V/2π)^{−1/3}), we find that

∇²_xx L(r, h; λ) = [ −4π   −2π ]
                   [ −2π   0   ]

Since its determinant is −4π² < 0, this matrix is indefinite, so the KKT point is a saddle point of the Lagrangian. On the tangent space of the constraint, however, it is positive definite: the gradient of the constraint at the KKT point is proportional to (4, 1), so the directions t ≠ 0 satisfying (17) are t = α(1, −4), for which t^T ∇²_xx L t = 12πα² > 0. By Theorem 6, the KKT point is therefore a strong local minimum. It should be noted that, for V = 330, the solution is, as expected, the same as in Example 1.

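The KKT system of Example 4 can also be solved symbolically. The following SymPy sketch (an illustrative addition, not the authors' code) reproduces the KKT point derived above for a symbolic volume V:

```python
import sympy as sp

r, h, lam = sp.symbols('r h lambda', positive=True)
V = sp.symbols('V', positive=True)

# Lagrangian of Example 4
L = 2*sp.pi*r**2 + 2*sp.pi*r*h - lam*(sp.pi*r**2*h - V)

# stationarity in (r, h) plus primal feasibility
eqs = [sp.diff(L, r), sp.diff(L, h), sp.pi*r**2*h - V]
sol = sp.solve(eqs, [r, h, lam], dict=True)[0]
print(sol)  # expected: r = (V/(2*pi))**(1/3), h = 2*r, lambda = 2/r
```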
Example 5. Suppose, as previously, that the can producer is interested in the minimization of the total cost, which depends on the area and on the height of the can. Moreover, it is required that the can has a volume V , i.e., πr2 h = V and that the diameter of the can is


smaller than D, i.e., 2r ≤ D. This problem can be formulated as a constrained problem as follows:

min f5(r, h) = 2πr² + 2πrh
subject to πr²h − V = 0,
           −2r + D ≥ 0.

The Lagrangian function for this problem is given by

L(r, h; λ1, λ2) = 2πr² + 2πrh − λ1(πr²h − V) − λ2(−2r + D).

The gradient of the Lagrangian function with respect to (r, h) is given by

∇_x L(r, h; λ1, λ2) = ( 4πr + 2πh − 2λ1πrh + 2λ2 , 2πr − λ1πr² )^T.

The KKT conditions require setting this gradient to zero, together with the conditions πr²h − V = 0 and −2r + D ≥ 0. If the inequality constraint is active, then r = D/2 and we obtain

(r, h, λ1, λ2) = ( D/2 , 4V/(πD²) , 4/D , 4V/D² − πD ).

The second order conditions, based on the second derivatives, define the following Lagrangian Hessian matrix with respect to r and h:

∇²_xx L = [ 4π − 2λ1πh   2π − 2λ1πr ]
          [ 2π − 2λ1πr   0          ]

The Lagrangian Hessian matrix at the KKT point is

∇²_xx L = [ 4π − 32V/D³   −2π ]
          [ −2π           0   ]

Since its determinant is −4π² < 0, this matrix is indefinite. However, when the inequality constraint is active with λ2 > 0, the gradients of the two active constraints are linearly independent and span R², so the only direction satisfying conditions (17) is t = 0; the second order sufficient conditions of Theorem 6 are then trivially satisfied and the KKT point is a strong local minimum. For instance, if V = 250 and D = 5, then the inequality constraint is active and λ2 = 24.29 > 0; in this case, the Lagrangian Hessian matrix is

∇²_xx L = [ 4π − 64   −2π ]
          [ −2π       0   ]

On the other hand, if V = 330 and D = 10, the inequality constraint is not active, which implies λ2 = 0, and the optimal solution has diameter 2r ≈ 7.49, as in Example 4.
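The active-constraint case is easy to verify numerically. The sketch below (an illustrative addition) plugs V = 250 and D = 5 into the KKT point derived above and checks the multiplier λ2 and the stationarity of the Lagrangian:

```python
import math

V, D = 250.0, 5.0
r, h = D/2, 4*V/(math.pi*D**2)           # candidate point with 2r = D active
lam1, lam2 = 4/D, 4*V/D**2 - math.pi*D   # multipliers from the KKT system

# stationarity residuals of the Lagrangian with respect to r and h
dL_dr = 4*math.pi*r + 2*math.pi*h - 2*lam1*math.pi*r*h + 2*lam2
dL_dh = 2*math.pi*r - lam1*math.pi*r**2
print(lam2)          # 24.29 > 0: the inequality constraint is active
print(dL_dr, dL_dh)  # both ~ 0
```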

6 Heuristics and Metaheuristics

The previous sections presented very strict conditions for the solution of optimization problems. In reality, these conditions are only applicable to convex programming problems. However, modeling real-world problems imposes (for both natural and technological reasons) the consideration of integer variables, such as different alternatives chosen among a set of integer values, as can be the case in the presence of different materials or different sizes. This leads to mixed-integer nonlinear problems, lacking the properties of differentiability and convexity. Moreover, complexity theory shows that certain optimization problems require very large computational times in order to find the optimal solution. To overcome these limitations, heuristics have been developed that, while not guaranteeing that the optimal solution is found, provide a "good" solution in reasonable computing time. These heuristics incorporate some kind of knowledge that helps the algorithm make an educated guess in the search for the best solutions. In the last decades, many heuristics inspired by natural processes, such as genetics, colonies of ants or particle swarms, to name just a few, have been developed, constituting the so-called metaheuristics.

For continuous, convex and differentiable problems, the optimality conditions are used to guide the search. There are several algorithms based on these conditions, namely the Steepest Descent, Newton and Quasi-Newton [4] algorithms for unconstrained problems and Sequential Quadratic Programming for constrained problems [3]. These algorithms, in general, require an initial guess that, if not near the global optimum, may cause the algorithm to converge to a local optimum. Furthermore, they are gradient-based methods, since they use information about the first and/or second derivatives. However, there are methods, the so-called direct search methods, that use only information about the objective function. Such algorithms, namely the simplex search method proposed by Nelder and Mead [5], are particularly useful when, for instance, the problem is not differentiable or the derivatives are not available or easily computable.

Example 6. In order to illustrate the use of gradient-based and direct search algorithms, the problem presented in Example 3 was solved using the Quasi-Newton and simplex search algorithms implemented in the MatLab Optimization Toolbox [6]. Starting the search from the initial point (r0, t0) = (4, 1.5), the Quasi-Newton algorithm converges to the minimizer (r, t) = (3.7449, 0.2670), with f3(r, t) = 2653.5, after 33 objective function evaluations. Using the simplex search algorithm to solve the same problem from the same initial point, we obtain the minimizer (r, t) = (3.7449, 0.2670), with f3(r, t) = 2653.5, after 58 objective function evaluations. This problem is differentiable and convex; gradient-based algorithms are therefore more appropriate and efficient for it, requiring a smaller number of objective function evaluations. Despite the results obtained by these two deterministic approaches, for illustrative purposes, an evolutionary algorithm was also applied to this problem. The solution obtained by the application of a (1+1)-ES (evolution strategy), starting the search from the same initial point, was (r, t) = (3.7450, 0.2687), with f3(r, t) = 2653.5 (100 objective function evaluations). It should be noted that this algorithm is stochastic, so different approximations to the optimum are obtained when different runs are performed. For further details on this particular algorithm and other evolutionary approaches, please refer to [7, 8].
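A Python analogue of this experiment (an illustrative sketch; the original used the MatLab Optimization Toolbox) can be built with SciPy, where BFGS plays the role of the Quasi-Newton algorithm and Nelder-Mead that of the simplex search. The cost function below is the form of f3 implied by the first order conditions of Example 3; exact evaluation counts will differ between implementations:

```python
import numpy as np
from scipy.optimize import minimize

k1, k4, k5, k6 = 10.0, 0.1, 0.2, 10.0

def f3(x):
    r, t = x
    return k1*(2*np.pi*r**2 + 660.0/r) + k4*r**2*t**2 - k5*r*t + k6

x0 = np.array([4.0, 1.5])
qn = minimize(f3, x0, method='BFGS')         # gradient-based (quasi-Newton)
nm = minimize(f3, x0, method='Nelder-Mead')  # direct search (simplex)
print(qn.x, qn.fun, qn.nfev)  # ~ (3.7449, 0.2670), f ~ 2653.5
print(nm.x, nm.fun, nm.nfev)  # same minimizer, more evaluations
```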

In the previous examples, differentiability and convexity conditions hold. When these conditions are not present, gradient-based and direct search algorithms may exhibit difficulties. To illustrate this, a problem similar to the one studied above was created, with a cost function exhibiting multiple local optima.

Example 7. Consider the same problem as in Example 3, but where a different cost function for the thickness is now considered, so that two local optima exist, as can be observed in Figures 4 and 5. The objective function is now given by

f6(r, t) = k1 (2πr² + 660/r) + 10t⁴ − 120t³ + 460t² − 580t + 400.

Starting the search from the initial point (r0, t0) = (4, 3.5), the Quasi-Newton algorithm converges to the local minimizer (r, t) = (3.7449, 4.9343), with f6(r, t) = 2892.9, after 30 objective function evaluations. Using the simplex search algorithm to solve the same problem from the same initial point, we obtain again the minimizer (r, t) = (3.7449, 4.9343), with f6(r, t) = 2892.9, after 70 objective function evaluations. Both algorithms have converged to a local minimum: unless they start from a sufficiently good approximation to the global optimum, these approaches may converge to local optima. Thus, in the presence of non-convex problems with multiple local optima, other approaches, such as evolutionary algorithms, may be used to search for the global optimum. The solution obtained by the application of the (1+1)-ES was (r, t) = (3.7450, 0.9402), with f6(r, t) = 2813.0 (100 objective function evaluations). This evolutionary approach thus obtains an approximation to the global optimum starting from the same initial point.

In order to illustrate the working of an evolutionary algorithm, a (µ/ρ + λ)-ES [7] was applied to the previous example. The number of parents and offspring is denoted, respectively, by µ and λ; in each generation, ρ individuals are recombined. Figures 6 to 9 show how a (4/4+10)-ES proceeds towards the optimal region and illustrate its main features: a population-based search strategy. As can be seen, the algorithm starts from an initial randomly generated population of 4 points (marked as 'x' in the figures). From these initial points, 10 new points (marked as 'o' in the figures) are generated according to two probabilistic transition rules, mutation and recombination. The mutation introduces a perturbation (a normal random value of mean zero and an adaptive standard deviation), as shown in the following equation:

x_N ← x(t) + N(0, σ²),

where σ is a standard deviation used to adapt the step sizes during the search. The recombination operator allows the exploration of new regions of the search space, since new points are generated from distinct solutions, possibly far apart from each other. This operator can be formulated according to the following equation:

x_D = (x_{u1,1}, . . ., x_{un,n}) with u1 ∈ U(0, ρ), . . ., un ∈ U(0, ρ),

Figure 4. Cost function for example 7.

Figure 5. Cost isolines for example 7.

Figure 6. (4/4+10)-ES - initial population.

Figure 7. (4/4+10)-ES - population at 1st generation.

Figure 8. (4/4+10)-ES - population at 5th generation.

Figure 9. (4/4+10)-ES - population at 10th generation.

where ρ is the number of individuals taking part in recombination. Figures 6 to 9 show how the search proceeds through the application of the mentioned operators. It must be noted that, in each generation, only the 4 best of the 10 generated points are selected for the following generation. The algorithm proceeds by the successive application of these steps until a stopping criterion is met. The final solution obtained by this evolutionary algorithm approximates the global optimum.
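The following Python sketch (an illustrative addition; for brevity it uses a fixed mutation step size instead of the self-adaptive standard deviation described above) implements a (4/4+10)-ES of this kind for the cost function f6:

```python
import numpy as np

rng = np.random.default_rng(0)
k1 = 10.0

def f6(x):
    r, t = x
    return k1*(2*np.pi*r**2 + 660.0/r) + 10*t**4 - 120*t**3 + 460*t**2 - 580*t + 400

mu, rho, lam, sigma = 4, 4, 10, 0.5
pop = rng.uniform([2.0, 0.0], [6.0, 7.0], size=(mu, 2))  # 4 random parents (r, t)

for gen in range(50):
    offspring = []
    for _ in range(lam):
        # discrete recombination: each coordinate taken from a random parent
        child = pop[rng.integers(0, rho, size=2), [0, 1]]
        # mutation: zero-mean normal perturbation with (fixed) step size sigma
        child = child + rng.normal(0.0, sigma, size=2)
        child[0] = max(child[0], 0.1)  # keep the radius positive
        offspring.append(child)
    # (mu + lambda) selection: the best mu of parents and offspring survive
    union = np.vstack([pop] + offspring)
    pop = union[np.argsort([f6(x) for x in union])[:mu]]

print(pop[0], f6(pop[0]))  # should approach (3.745, 0.940), cost ~ 2813
```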

7 Conclusion

The above examples show that, when differentiability and convexity are present, gradient-based and direct search algorithms exhibit a performance that cannot be beaten by evolutionary algorithms. However, when these conditions do not hold, as is the case in most real problems, evolutionary algorithms start to show some advantages. Evolutionary algorithms [7, 8], in contrast to deterministic algorithms, do not require any differentiability or convexity conditions, and the search for the optimum is based on probabilistic transition rules. Moreover, these algorithms start from a pool of points and, therefore, are less prone to being trapped in a local optimum, which makes them very suitable for the search of a global optimum or, at least, for providing a "good" approximation to the solution of the problem. In single objective problems with constraints, it might also be advantageous to treat the constraints as additional objectives to be satisfied. As will be shown, evolutionary algorithms are well suited to deal with problems with multiple objectives.

References

[1] Nocedal, J. and Wright, S.J., Numerical Optimization, Springer-Verlag, New York, 1999.
[2] Rao, S.S., Optimization Theory and Applications (second edition), Wiley Eastern Limited, 1984.
[3] Reklaitis, G.V., Ravindran, A. and Ragsdell, K.M., Engineering Optimization: Models and Applications, Wiley, 1983.
[4] Fletcher, R., Practical Methods of Optimization, Vol. 1, Unconstrained Optimization, John Wiley and Sons, 1980.
[5] Nelder, J.A. and Mead, R., A simplex method for function minimization, Computer Journal, 7, 308–313, 1965.
[6] The MathWorks, Inc., MATLAB Getting Started Guide, 1984–2010.
[7] Schwefel, H.-P., Evolution and Optimum Seeking, Wiley, New York, 1995.
[8] Goldberg, D.E., Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, Massachusetts, 1989.

In: Optimization in Polymer Processing
Editors: A. Gaspar-Cunha, J. A. Covas, pp. 29-57
ISBN 978-1-61122-818-2, © 2011 Nova Science Publishers, Inc.

Chapter 3

AN INTRODUCTION TO MULTIOBJECTIVE OPTIMIZATION TECHNIQUES

Antonio López Jaimes, Saúl Zapotecas Martínez, Carlos A. Coello Coello
CINVESTAV-IPN, Departamento de Computación
Evolutionary Computation Group (EVOCINV)
Av. IPN No. 2508, Col. San Pedro Zacatenco, México, D.F. 07360, MEXICO

Keywords: multiobjective optimization, evolutionary algorithms

1 Introduction

A wide variety of problems in engineering, industry, and many other fields involve the simultaneous optimization of several objectives. In many cases, the objectives are defined in incomparable units and present some degree of conflict among them (i.e., one objective cannot be improved without deteriorating at least another one). These problems are called Multiobjective Optimization Problems (MOPs). Let us consider, for example, a shipping company which is interested in minimizing the total duration of its routes to improve customer service. On the other hand, the company also wants to minimize the number of trucks used in order to reduce operating costs. Clearly, these objectives are in conflict, since adding more trucks reduces the duration of the routes but increases operation costs. In addition, the objectives of this problem are expressed in different measurement units. In single-objective optimization, it is possible to determine, between any given pair of solutions, if one is better than the other; as a result, we usually obtain a single optimal solution. In multiobjective optimization, however, there is no straightforward method to determine if a solution is better than another. The method most commonly adopted in multiobjective optimization to compare solutions is the Pareto dominance relation [1] which, instead of a single optimal solution, leads to a set of alternatives


with different trade-offs among the objectives. These solutions are called Pareto optimal solutions or nondominated solutions. Although there are multiple Pareto optimal solutions, in practice only one solution has to be selected for implementation. For instance, in the example of the shipping company presented above, only one route from the several alternatives generated will be selected to deliver the packages on a given day. Therefore, in the multiobjective optimization process we can distinguish two tasks, namely: i) find a set of Pareto optimal solutions, and ii) choose the most preferred solution out of this set. Since Pareto optimal solutions are mathematically equivalent, the latter task requires a Decision Maker (DM) who can provide subjective preference information to choose the best solution in a particular instance of the multiobjective optimization problem.

We can distinguish two main approaches to solve multiobjective optimization problems. The first is the Multi-Criteria Decision Making (MCDM) approach, characterized by the use of mathematical programming techniques and a decision making method in an intertwined manner. In most MCDM methods the decision maker plays a major role in providing information to build a preference model, which is exploited by the mathematical programming method to find solutions that better fit the DM's preferences [2]. Evolutionary Multiobjective Optimization (EMO) is another approach to solve multiobjective optimization problems. Since evolutionary algorithms use a population-based approach, they usually find an approximation of the whole Pareto front in one run. Although in the EMO community the decision making task has not received much attention in the past, in recent years a considerable number of works have addressed the incorporation of preferences in Multi-Objective Evolutionary Algorithms (MOEAs). In the following, we present some general concepts and notation used in the remainder of this chapter.

Definition 1. Multiobjective Optimization Problem - Formally, a Multiobjective Optimization Problem (MOP) is defined as:

"Minimize" f(x) = [f_1(x), f_2(x), . . ., f_k(x)]^T
subject to x ∈ X.    (1)

The vector x ∈ R^n is formed by n decision variables representing the quantities for which values are to be chosen in the optimization problem. The feasible set X ⊆ R^n is implicitly determined by a set of equality and inequality constraints. The vector function f : R^n → R^k is composed of k scalar objective functions f_i : R^n → R (i = 1, . . ., k; k ≥ 2). In multiobjective optimization, the sets R^n and R^k are known as the decision variable space and the objective function space, respectively. The image of X under the function f is a subset of the objective function space denoted by Z = f(X) and referred to as the feasible set in the objective function space.

2 Notions of Optimality in MOPs

In order to define precisely the multiobjective optimization problem stated in Definition 1, we have to establish the meaning of minimization in R^k. That is to say, it is required to define how the vectors f(x) ∈ R^k are to be compared for different solutions x ∈ R^n.

Figure 1. Search spaces in multiobjective optimization problems.

In single-objective optimization, the relation "less than or equal" (≤) is used to compare the values of the scalar objective functions. Using this relation there may be different optimal solutions x ∈ X, but only one optimal value f_min = min{f_i(x) | x ∈ X} for each function f_i, since the relation ≤ induces a total order in R (i.e., every pair of solutions is comparable, and thus we can sort solutions from the best to the worst one). In contrast, in multiobjective optimization problems there is no canonical order on R^k, and thus we need weaker definitions of order to compare vectors in R^k. In multiobjective optimization, the Pareto dominance relation is usually adopted; it was originally proposed by Francis Ysidro Edgeworth in 1881 [3] and generalized by the French-Italian economist Vilfredo Pareto in 1896 [1].

Definition 2. Pareto Dominance relation - We say that a vector z^1 Pareto-dominates vector z^2, denoted by z^1 ≺_pareto z^2, if and only if¹:

∀i ∈ {1, . . ., k} : z^1_i ≤ z^2_i    (2)

and

∃i ∈ {1, . . ., k} : z^1_i < z^2_i .    (3)

Figure 2 illustrates the Pareto dominance relation with an example with four 2-dimensional vectors. Vector z^3 is strictly less than z^2 in both objectives, therefore z^3 ≺_pareto z^2. Vector z^3 also Pareto-dominates z^1 since, with respect to f_1, those vectors are equal, but in f_2, z^3 is strictly less than z^1. Since ≺_pareto is not a total order, some elements can be incomparable, as is the case with z^1 and z^4, i.e., z^1 ⊀_pareto z^4 and z^4 ⊀_pareto z^1. Similarly, z^3 ≺_pareto z^4, z^1 ≺_pareto z^2, and z^4 ≺_pareto z^2. Thus, to solve a MOP we have to find those solutions x ∈ X whose images, z = f(x), are not Pareto-dominated by any other vector in the feasible space. In the example shown in Figure 2, no vector dominates z^3 and, therefore, we say that z^3 is nondominated.

Definition 3. Pareto Optimality - A solution x^* ∈ X is Pareto optimal if there does not exist another solution x ∈ X such that f(x) ≺_pareto f(x^*).

¹ Without loss of generality, we will assume only minimization problems.


Figure 2. Illustration of the concept of Pareto dominance relation.

Definition 4. Weak Pareto Optimality - A solution x^* ∈ X is weakly Pareto optimal if there does not exist another solution x ∈ X such that f_i(x) < f_i(x^*) for all i = 1, . . ., k.

The set of Pareto optimal solutions and its image in objective space are defined in the following.

Definition 5. Pareto optimal set - The Pareto optimal set, P^*, is defined as:

P^* = {x ∈ X | ∄ y ∈ X : f(y) ≺_pareto f(x)}.    (4)

Definition 6. Pareto front - For a Pareto optimal set P^*, the Pareto front, PF^*, is defined as:

PF^* = {f(x) = (f_1(x), . . ., f_k(x)) | x ∈ P^*}.    (5)

Figure 3 illustrates the concept of the Pareto optimal set and its image in the objective space, the Pareto front. Darker points denote Pareto optimal vectors. In variable space, these vectors are referred to as Pareto optimal decision vectors, while in objective space they are called Pareto optimal objective vectors. As can be seen in the figure, the Pareto front is only composed of nondominated vectors. In some optimization techniques it is useful to know the lower and upper bounds of the Pareto front. The ideal point represents the lower bounds and is defined by z^*_i = min_{z∈Z} z_i for all i = 1, . . ., k. In turn, the upper bounds are defined by the nadir point, which is given by z^nad_i = max_{z∈PF^*} z_i for all i = 1, . . ., k.

As we mentioned before, Pareto dominance is the most common preference relation used in multiobjective optimization. However, it is only one of the possible preference relations available. The interested reader is referred to [4] (Chap. 6) and [5] (Chap. 5), where other preference relations are presented. As indicated by some authors (see e.g., [5] and [6]), in general, a MOP can be defined completely by (X, R^k, f, R), where X is the feasible set, R^k is the objective function space, f is the objective function vector, and R is the preference relation, which induces an ordered set on R^k.

Figure 3. Illustration of the Pareto optimal set and its image, the Pareto front.
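These definitions translate directly into code. The Python helpers below (an illustrative addition, not part of the chapter) implement the Pareto dominance relation of Definition 2 and a brute-force filter that extracts the nondominated vectors of a finite set; the coordinates assigned to z1, . . ., z4 are invented but consistent with the relations described for Figure 2:

```python
def dominates(z1, z2):
    """True if z1 Pareto-dominates z2 (minimization, Definition 2)."""
    return (all(a <= b for a, b in zip(z1, z2))
            and any(a < b for a, b in zip(z1, z2)))

def nondominated(Z):
    """Vectors of Z not Pareto-dominated by any other vector of Z."""
    return [z for z in Z if not any(dominates(y, z) for y in Z)]

# invented coordinates reproducing the relations of Figure 2
z1, z2, z3, z4 = (1.0, 3.0), (3.0, 4.0), (1.0, 1.0), (2.0, 2.0)
print(dominates(z3, z2), dominates(z1, z4))  # True False
print(nondominated([z1, z2, z3, z4]))        # [(1.0, 1.0)]: only z3
```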

3 Mathematical Programming Techniques

Mathematical programming techniques are classified according to how and when the preferences of the DM are incorporated into the search process. A very important issue is the moment at which the DM is required to provide preference information. There are three ways of doing this [4, 7]:

1. Prior to the search (a priori approaches).
2. During the search (interactive approaches).
3. After the search (a posteriori approaches).

In this section we present some of the most popular MCDM techniques according to the above classification.

3.1 A Priori Preference Articulation

3.1.1 Goal Programming

Charnes and Cooper [8] are credited with the development of the goal programming method for a linear model, and played a key role in applying it to industrial problems. In this method, the DM has to assign the targets or goals that he or she wishes to achieve for each objective. These values are incorporated into the problem as additional constraints. The objective function then tries to minimize the absolute deviations from the targets to the objectives. The simplest form of this method may be formulated as follows:

minimize ∑_{i=1}^{k} | f_i(x) − T_i |
subject to x ∈ X,    (6)

where T_i denotes the target or goal set by the decision maker for the ith objective function f_i(x), and X represents the feasible region. The criterion then is to minimize the sum of


the absolute values of the differences between target values and actually achieved values. A more general formulation of the goal programming objective function is a weighted sum of the pth power of the deviation |f_i(x) − T_i|. Such a formulation has been called generalized goal programming [9]. In equation (6), the objective function is nonlinear, and the simplex method can be applied only after transforming this equation into a linear form, thus reducing goal programming to a special type of linear programming. In this transformation, new variables δ_i^+ and δ_i^- are defined such that:

δ_i^+ = (1/2) { | f_i(x) − T_i | + [ f_i(x) − T_i ] }    (7)
δ_i^- = (1/2) { | f_i(x) − T_i | − [ f_i(x) − T_i ] }    (8)

This means that the absolute value signs can be dropped from problem (6) by introducing the underachievement and overachievement variables. Adding and subtracting equations (7) and (8), the resulting equivalent linear formulation may be found:

minimize ∑_{i=1}^{k} (δ_i^+ + δ_i^-)
subject to f_i(x) − δ_i^+ + δ_i^- = T_i ,  i = 1, . . ., k,
           δ_i^+, δ_i^- ≥ 0 ,  i = 1, . . ., k,
           x ∈ X.    (9)

Since it is not possible to have both under- and overachievement of a goal simultaneously, at least one of the deviational variables must be zero. In other words:

δ_i^+ · δ_i^- = 0    (10)

An Introduction to Multiobjective Optimization Techniques

35

weights, as it is explained, for example, in [7]. The resulting optimization model becomes: k

minimize

∑ pi(w+i δ+i + w−i δ−i )

i=1

subject to

− f (x) − δ+ i + δi = Ti , − δ+ i , δi

≥ 0,

i = 1. . . ., k

(11)

i = 1. . . ., k,

x ∈ X, Note that this technique yields a nondominated solutions if the goal point is chosen in the feasible domain. The following theorem is presented and proved in [7]: Theorem 1. The solution of a weighted or a lexicographic goal programming problem (9) is Pareto optimal if either the aspiration levels form a Pareto optimal reference point or − all the deviational variables δ+ i for functions to be minimized and δi for functions to be maximized have positive values at the optimum. 3.1.2 Goal-Attainment Method Similar to the goal programming method, in this approach the decision maker must provide a goal vector zref . In addition, the decision maker must provide a vector of weights w = [w1 , w2 , . . ., wk ] relating the relative under- or over-attainment of the desired goals. In order to find the best-compromise solution x?, the following problem is solved [10, 11]: Minimize α subject to

zref i + α · wi ≥ f i (x);

i = 1, . . ., k,

(12)

x ∈ X, where α is a scalar variable unrestricted in sign and the weights w1 , w2 , . . ., wk are normalized so that k

∑ |wi| = 1

(13)

i=1

If some wi = 0 (i = 1, . . ., k), it means that the maximum limit of objectives fi (x) is zref i . It can be easily shown [12] that every Pareto optimal solution can be generated by varying the weights, with wi ≥ 0 (i = 1, . . ., k) even for nonconvex problems. The mechanism by which this method operates is illustrated in Figure 4. The vector zref is represented by the decision goal of the DM, who also decides the direction of w. Given vectors w and zref , the direction of the vector zref + α · w can be determined, and the problem stated by equation (12) is equivalent to finding a feasible point on this vector in objective space which is closest to the origin. It is obvious that the optimal solution of equation (12) is the first point at which zref + α · w intersects the feasible region in the objective space (denoted by Z in Figure 4). If this point of intersection exists, then it would clearly be a Pareto optimal solution. It should be pointed out that the optimum value of α informs the DM of whether the goals are attainable or not. A negative value of α implies that the goal of the decision maker is attainable and an improved solution is then to be obtained. Otherwise, if α > 0, then the DM’s goal is unattainable.

36

A. L´opez Jaimes, S. Zapotecas Mart´ınez, C.A. Coello Coello

Figure 4. Illustration of the goal attainment method with two objective functions.

3.1.3 Lexicographic Method In this method, the objectives are ranked in order of importance by the decision maker (from best to worst). The optimal value fi? (i = 1, . . ., k) is then obtained by minimizing the objective functions sequentially, starting with the most important one and proceeding according to the order of importance of the objectives. Additionally, the optimal value found of each objective is added as a constraint for subsequent optimizations. This way, the optimal value of the most important objectives is preserved. Only in the case of several optimal solutions in the single optimization of the current objective, the rest of the objectives are considered. Therefore, in the worst case, we have to carry out k single objective optimizations. Let the subscripts of the objectives indicate not only the objective function number, but also the priority of the objective. Thus, f 1 (x) and fk (x) denote the most and least important objective functions, respectively. Then, the first problem is formulated as Minimize

f 1 (x)

subject to

x ∈ X.

(14)

We have to note that, although only one optimal value f1? = min{ f1 (x)|x ∈ X } is generated for this single-objective problem, it might be possible to obtain many different optimal solutions x? ∈ X . Nonetheless, regarding the original multiobjective problem, only one of these solutions is Pareto optimal. For this reason, we should consider two situations after the optimization of each objective fi (i = 1, . . ., k). If we obtain a unique optimal solution, then this solution is the optimal solution of the original multiobjective problem, and, therefore, we stop the optimization process. Otherwise, we have to optimize the next objective. In general, we have to solve the single objective optimization problem

An Introduction to Multiobjective Optimization Techniques

37

(i = 2, . . ., k) given by Minimize

fi (x)

subject to

x ∈ X, f l (x) =

(15) fl? ,

l = 1, . . ., i − 1.

If several optimal solutions were obtained in each optimization problem (15) until objective f k−1 , then the unique optimal solution obtained for fk , i.e., x?k , is taken as the desired solution of the original problem. In [7, 13] it is proved that the optimal solution obtained by the lexicographic problem is Pareto optimal. For this reason, the lexicographic method is usually adopted as an additional optimization approach in methods that can only guarantee weak optimality by themselves. For example, in the ε-constraint method (see Section 3.2.3), or in methods based on the Tchebycheff achievement function (see Sections 3.3.2 and 3.3.3).

3.2 A Posteriori Preference Articulation 3.2.1 Linear Combination of Weights In this method, the general idea is to associate each objective function with a weighting coefficient and minimize the weighted sum of the objectives. In this way, the multiobjective problem is transformed into a single objective problem. Thus, the new optimization problem is defined as: k

minimize

∑ wi fi(x)

i=1

subject to

(16)

x ∈ X.

where wi ≥ 0 and is strictly positive for at least one objective, such that ∑ki=1 wi = 1. The set of nondominated solutions can be generated by parametrically varying the weights wi in the objective function. This was initially demonstrated by Gass and Saaty for a twoobjective problem. The following implications are consequences of this formulation and their corresponding proofs can be found in [7]: Theorem 2. The solution of weighting problem (16) is weakly Pareto optimal. Theorem 3. The solution of weighting problem (16) is Pareto optimal if the weighting coefficients are positive, that is w i > 0 for all i = 1, . . ., k. Theorem 4. The unique solution of the weighting problem (16) is Pareto optimal. Theorem 5. Let the multiobjecive optimization problem be convex. If x∗ ∈ X is Pareto optimal, then there exists a weighting vector w (wi ≥ 0, i = 1, . . ., k, ∑ki=1 wi = 1) which is a solution to the weighting problem (16).

38

A. L´opez Jaimes, S. Zapotecas Mart´ınez, C.A. Coello Coello

3.2.2 Normal Boundary Intersection Das and Dennis [14] proposed this novel method for generating Pareto optimal solutions evenly distributed. The main idea in the Normal Boundary Intersection (NBI) method, is to intersect the feasible objective region with a normal to the convex combinations of the columns of the pay-off matrix. For undestanding this method let’s see the next definition. Definition 7. Let x∗i be the respective global minimizers of f i (x), i = 1, . . ., k over x ∈ X . Let Fi∗ = F(x∗i ), i = 1, . . ., k. Let Φ be the k × k matrix whose ith column is Fi∗ − F ∗ sometimes known as the pay-off matrix. Then the set of points in R k that are convex combinations of Fi∗ − F ∗, i.e. {Φβ : β ∈ Rn , ∑ki=1 βi = 1, βi ≥ 0}, is referred to as the Convex Hull of Individual Minima (CHIM). The set of the attainable objective vectors, {F(x) : x ∈ X } is denoted by F , thus X is mapped onto F by F. The space Rk which contains F is referred to as the objective space. The boundary of F is denoted by ∂F . Figure 5 illustrates the CHIM for a two-objective problem. In the example we show the shadow minimum or utopian point F ∗ defined by F ∗ = { f1∗ , . . ., fk∗ }, the area in gray describes the objective space F and the black line describes the complete boundary ∂F of F. NBI is a method designed to find the portion of ∂F which contains the Pareto optimal points. The main idea behind this approach is that the intersection point between the boundary ∂F and the normal pointing towards the origin emanating from any point in the CHIM is a point on the portion of ∂F containing the efficient points. This point is guaranteed to be a Pareto optimal point if the trade-off surface in the objective space is convex. Therefore, the original multiobjective problem is traslated into the following new problem. Given a convex weighting β, Φβ represents a point in the CHIM. Let nˆ denote the unit normal to the CHIM simplex towards the origin; then Φβ + t nˆ represents the set of points on that normal. The point of intersection of the normal and the boundary of F closest to the origin is the global solution of the following problem: Maximizex,t subject to

t Φβ + t nˆ = F(x),

(17)

x∈X The vector constraint Φβ + t nˆ = F(x) ensures that the point x is actually mapped by F to a point on the normal, while the remaining constraints ensure feasibility of x in X . This approach considers that the shadow minimum F ∗ is in the origin. Otherwise, the first set of constraints should be Φβ + t nˆ = F(x) − F ∗. As many scalarization methods, for various β, a number of points on the boundary of F are obtained thus, effectively, constructing the Pareto surface. A quasi-normal direction is used instead of a normal direction, such that it represents an equally weighted linar combination of columns of Φ, multiplied by −1 to ensure that it points towards the origin. That is, nˆ = −Φv

An Introduction to Multiobjective Optimization Techniques

39

where v is a fixed vector with strictly positive components. Commonly, nˆ is chosen to be nˆ = −Φe, where e is the column vector of all ones.

Figure 5. Illustration of the CHIM for a two-objective problem.

3.2.3 ε-Constraint Method The ε-constraint method is one of best known scalarization techniques to solve multiobjective problems. In this approach one of the objectives is minimized while the others are used as constraints bound by some allowable levels εi . The multiobjective optimization problem is transformed into the following ε-constraint problem Minimize

f l (x)

subject to

fi (x) ≤ εi

∀i = 1, . . ., k i 6= l,

(18)

x ∈ X. Figure 6 illustrates the application of the ε-constraint method in a bicriterion problem. In the example we show three different constraint values for f1 and their respective optimum values for f 2 . It is worth noting that for some values of εi , the constraint imposed might be active or inactive. For example, the constraint for ε1 and ε3 is active, whereas that for ε2 is inactive. In order to find several Pareto optimal solutions, we need to solve problem (18) using multiple different values for εi . In this iterative optimization process the user needs to provide the range of the reference objective, fl . In addition, the increment for the constraints imposed by ε must be provided. This increment determines the number of Pareto optimal solutions generated. In Algorithm 1 it is shown the pseudo code of the iterative ε-constraint optimization for the case of two objectives. [!htbp]

40

A. L´opez Jaimes, S. Zapotecas Mart´ınez, C.A. Coello Coello

Figure 6. Illustration of the ε-constraint method. Input: f1min , f1max ∈ R: Lower and upper bounds for objective f1 . δ ∈ R: Increment for constraint ε. PFapprox ← 0/ ε ← f1max while ε ≥ f1min do x ← ε-MINIMIZE (f, ε)  Minimize using problem (18) PFapprox ← PFapprox ∪ {x} ε ← ε−δ end while Return the approximation of the Pareto front PFapprox Pseudocode of an iterative optimization process using the ε-constraint method. In [13] and [7] are presented and proved the following important theorems related to the optimality of the solutions generated by the ε-constraint problem. Theorem 6. The optimal solution of the ε-constraint problem (18) is weakly Pareto optimal. Theorem 7. The solution x? ∈ X is Pareto optimal if and only if εi = fi (x?) for all i = 1, . . ., k, i 6= l, and x? is an optimal optimal solution of problem (18) for all l = 1, . . ., k. Theorem 8. If x? is the unique optimal solution of problem (18) for some l = 1, . . ., k, then x? is Pareto optimal. As pointed out by Ehrgott [13], Theorem 7 only provides a method to check Pareto optimality instead of a method to find Pareto optimal solutions since the values for ε must be equal to the nondominated vector f(x? ). Therefore, in order to generate Pareto optimal solutions only, we need to solve k single objective problems, or less than k if we obtain a unique optimal solution in one

An Introduction to Multiobjective Optimization Techniques

41

of the problems. One possibility to avoid weakly Pareto optimal solutions is the use of lexicographic optimization (see Section 3.1.3) in problem (18). That is, if f1 has multiple optimal solutions, then select the best solution with respect to objective f 2 an so on. 3.2.4 Method of Weighted Metrics The idea behind this method is to find the closest feasible solution to a reference point, which usually is the ideal point. Some authors, such as Duckstein [15] and Zeleny [16], call this method compromise programming. The most common metrics to measure the distance between the reference point and the feasible region are those derived from the L p -metric, which is defined by !1/p p

∑ |yk | p

||y|| p =

,

(19)

i=1

for 1 ≤ p ≤ ∞. The value of p indicates the type of metric. For p = 1 we obtain the Manhattan metric, while for p = ∞ we obtain the so-called Tchebycheff metric. From the L p -metrics the following compromise problem is derived p

Minimize

∑ | fi(x) − z?i | p

!1/p (20)

i=1

subject to

x ∈ X.

In order to obtain different (weakly) Pareto optimal solutions we must allow weights in problem (20). The resulting weighted compromise programming problem is p

Minimize

∑ wi| fi(x) − z?i | p

!1/p

i=1

subject to

(21)

x ∈ X.

For p = 1, all deviations from z?i are taken into account. Ehrgott [5] shows that the method of linear combination of weights (see Section 3.2.1) is a special case of the weighted compromise problem with p = 1. For p = ∞, i.e, using the Tchebycheff metric, the largest deviation is the only one taken into consideration. The resulting weighted Tchebycheff problem is defined by Minimize subject to

max {wi | fi (x) − z?i |}

i=1,...,k

(22)

x ∈ X.

This problem presents the most interesting theoretical result, and is one of the most commonly employed. Depending on the properties of the metric employed, we obtain different results regarding the optimality of the solutions generated. In [7] and [13] is shown that the solution of the weighted compromise programming problem (21) with 1 ≤ p < ∞ is Pareto optimal if one of the following conditions holds: 1. The optimal solution of (21) is unique.

42

A. L´opez Jaimes, S. Zapotecas Mart´ınez, C.A. Coello Coello 2. wi > 0 for all i = 1, . . ., k.

It is important to note, however, that for 1 ≤ p < ∞, although problem (21) can generate Pareto optimal solutions, it does not necessarily find all of them. In constrast, the weighted Tchebycheff problem is able to generate every Pareto optimal solution [7, 5]. Unfortunately, if the solution of the Tchebycheff problem is not unique, some of the solutions generated are weakly Pareto optimal. In order to identify the Pareto optimal solutions, Miettinen [7] suggests two possible approaches: use lexicographic ordering to solve the Tchebycheff problem, or modify the original problem. In the latter approach, Steuer and Choo [17] suggest aggregating an augmentation term to the original problem. Thus, it is obtained the augmented weighted Tchebycheff problem k

Minimize subject to

max {wi | fi (x) − z?i |} + ρ ∑ | fi(x) − z?i |

i=1,...,k

i=1

(23)

x ∈ X,

where ρ is a sufficiently small positive scalar. However, it is worth noting that using this approach it may be possible that some Pareto optimal solutions cannot be found. Nevertheless, every properly Pareto optimal solution can be obtained by this approach. The set of properly Pareto optimal solutions is a subset of the Pareto optimal solutions in which unbounded tradeoffs are not allowed.

3.3 Interactive Preference Articulation 3.3.1 Method of Geoffrion-Dyer-Feinberg (GDF) This interactive method developed by Geoffrion et al. [18] is based on the maximization of a value function (utility function) using a gradient-based method. The value function is only implicitly known, but is assumed to be differentiable and concave. The gradient-based method employed is the Frank-Wolfe method [19], however, as indicated by the authors, other methods could be used in an interactive fashion. The Frank-Wolfe method assumes that the feasible set, X ⊆ Rn , is compact and convex. The direction-finding problem of the Frank-Wolfe method is the following: Maximize ∇xU(f(xh)) · y

(24)

y ∈ X,

subject to

where U : Rk → R is the value function, xh is the current point, and y is the new variable of the problem. Using the chain rule it is obtained  k  ∂U h (25) ∇x fi (xh ). ∇xU(f(x )) = ∑ ∂ f i i=1 Dividing this equation by problem:

∂U ∂ f1

we obtain the following reformulation of the Frank-Wolfe k

Maximize



i=1

subject to

!

−mhi ∇x fi(xh )

y ∈ X,

·y

(26)

An Introduction to Multiobjective Optimization Techniques

43

where m_i^h = (∂U/∂f_i)/(∂U/∂f_1), for all i = 1, . . ., k, i ≠ 1, are the marginal rates of substitution (or indifference tradeoffs) at x^h between objectives f_1 and f_i. The marginal rate of substitution is the amount of loss on objective f_i that the decision maker is willing to tolerate in exchange for one unit of gain in objective f_1, while the values of the other objectives remain unchanged. The procedure of the GDF method is the following:

Step 0: Provide an initial point x^1 ∈ X. Set h = 1.
Step 1: The decision maker must provide the marginal rates of substitution between f_1 (the reference objective) and the other objectives at the current point x^h.
Step 2: Find the optimal solution y^h of problem (26). Set the new search direction d^h = y^h − x^h. If d^h = 0, go to Step 5.
Step 3: The decision maker must determine the best step size, t^h, to compute the new solution. Then, set x^{h+1} = x^h + t^h d^h.
Step 4: If x^{h+1} = x^h, go to Step 5; else set h = h + 1 and go to Step 1.
Step 5: Return x^h as the final solution.

The most important steps of this procedure are Steps 1 and 3. One possibility to estimate the marginal rates is to compare the solutions [f_1(x^h), f_2(x^h), . . ., f_j(x^h), . . ., f_k(x^h)] and [f_1(x^h) − ∆_1, f_2(x^h), . . ., f_j(x^h) + ∆_j, . . ., f_k(x^h)], where ∆_j is a small amount added to f_j in compensation for a decrement of f_1 by a small amount ∆_1, while the other values remain unaltered. The idea is to modify the quantities ∆_j and ∆_1 until the two solutions are indifferent to the decision maker; then m_j^h ≈ ∆_1/∆_j. Regarding the selection of the optimal step size, Geoffrion et al. proposed a graphical procedure that presents to the decision maker several alternative vectors obtained by varying t in the interval [0, 1], that is, the vectors z_i = f_i(x^h + t d^h) for i = 1, . . ., k using different values of t ∈ [0, 1].

3.3.2 Tchebycheff Method

The Tchebycheff method proposed in [17] is an iterative method designed to be user-friendly, so that complicated information is not required from the DM. It is based on the minimization of a value function, assuming that the global ideal objective vector (utopian vector) is known. The metric used for measuring the distances to the utopian objective vector is the weighted Tchebycheff metric. Thus, the multiobjective optimization problem is transformed into a single-objective optimization problem, defined by

Minimize max_{i=1,...,k} [ w_i ( f_i(x) − z_i^* ) ]
subject to x ∈ X,    (27)


where w ∈ W = {w ∈ R^k | 0 < w_i < 1, ∑_{i=1}^{k} w_i = 1} and z^* is the utopian objective vector.

Theorem 9. Let x^* ∈ X be Pareto optimal. Then there exists a weighting vector 0 < w ∈ R^k such that x^* is a solution of the weighted Tchebycheff problem (27), where the reference point is the utopian objective vector z^*.

Thus, from the above theorem, every Pareto optimal solution of any multiobjective optimization problem can be found by solving problem (27). However, with this approach, some of the solutions may be weakly Pareto optimal. To overcome this drawback, the Tchebycheff method can be stated by formulating the distance minimization problem as a lexicographic weighted Tchebycheff approach, as follows:

lex Minimize max_{i=1,...,k} [ w_i ( f_i(x) − z_i^* ) ] , ∑_{i=1}^{k} ( f_i(x) − z_i^* )
subject to x ∈ X.    (28)

The following theorems are consequences of the connection between the lexicographic weighted Tchebycheff problem and the Pareto optimal solutions.

Theorem 10. The solution of the lexicographic weighted Tchebycheff problem (28) is Pareto optimal.

Theorem 11. Let x ∈ X be Pareto optimal. Then there exists a weighting vector 0 < w ∈ R^k such that x is a unique solution of the lexicographic weighted Tchebycheff problem (28).

At each iteration, the Tchebycheff method provides different subsets of nondominated solutions. These solutions consist of P (≈ n) representative points, generated by using an augmented weighted Tchebycheff problem (for example, the lexicographic weighted Tchebycheff problem (28)), from which the DM is required to select the one he most prefers. Below, we describe the complete Tchebycheff method.

Step 0: Calculate the ideal point z^* and let z^** = z^* + ε, where ε is a vector of arbitrarily small positive values. Let W^1 = {w ∈ R^k : w_i ∈ [0, 1], ∑_{i=1}^{k} w_i = 1} be the initial set of weighting vectors. Set h = 1.
Step 1: Generate a large number (50n) of weighting vectors from W^h.
Step 2: Find the optimal solutions of problem (28). Filter the 2P resulting nondominated points to obtain P solutions.
Step 3: Show the P compromise solutions to the DM and ask him to select the one he most prefers. Let z^h be the selected point.
Step 4: If h = t, then stop with z^h as the preferred solution (where t is a prespecified number of iterations); else


i. Let w^h be the weighting vector which generated z^h in Step 2. Its components are given by

w_i^h = [ 1/(z_i^** − z_i^h) ] [ ∑_{j=1}^{k} 1/(z_j^** − z_j^h) ]^{−1} ,  i = 1, . . ., k.

ii. Determine the reduced set of weighting vectors:

W^{h+1} = {w ∈ R^k : w_i ∈ [l_i, u_i], ∑_{i=1}^{k} w_i = 1},

where

[l_i, u_i] = [0, r^h] if w_i^h ≤ r^h/2;  [1 − r^h, 1] if w_i^h ≥ 1 − r^h/2;  [w_i^h − r^h/2, w_i^h + r^h/2] otherwise,

and r^h is a pre-specified "convergence factor" r raised to the hth power.

iii. Set h = h + 1 and go to Step 1.

3.3.3 Reference Point Methods

This section presents a summary of the reference point method proposed by Wierzbicki [20, 21]. The reference point approach is an interactive multiobjective optimization technique based on the definition of an achievement scalarizing function. The basic idea of this technique is the following. First, the DM is asked to give a reference point; this point represents the aspiration levels for each objective. Then, the solutions that better satisfy the aspiration levels are computed using an achievement scalarizing function, which is a type of utility function based on a reference point. If the DM is satisfied with the current solution, the interactive process ends; otherwise, the DM must provide another reference point.

Definition 8 (Achievement scalarizing function). An achievement scalarizing function is a parameterized function s_{z^ref}(z) : R^k → R, where z^ref ∈ R^k is a reference point representing the decision maker's aspiration levels. Thus, the multiobjective problem is transformed into the following scalar problem:

Minimize s_{z^ref}(z)
subject to z ∈ Z.    (29)

Most achievement scalarizing functions are based on the Tchebycheff metric (L_∞ metric). Based on the Tchebycheff distance we can define an appropriate achievement scalarizing function.

Definition 9 (Augmented Tchebycheff scalarizing function). The augmented weighted Tchebycheff scalarizing function is defined by

s_∞(z, z^ref) = max_{i=1,...,k} { λ_i (z_i − z_i^ref) } + ρ ∑_{i=1}^{k} λ_i (z_i − z_i^ref),    (30)


where z^ref is a reference point, ρ > 0 is a sufficiently small augmentation coefficient, and λ = [λ_1, . . ., λ_k] is a vector of weights such that λ_i ≥ 0 for all i and λ_i > 0 for at least one i. The (weighted) Tchebycheff scalarizing function possesses some convenient properties over other scalarizing functions. As proved in [7] and [13], by using the augmented version of this function we can find any Pareto optimal solution. In most reference point methods, the exploration of the objective space is carried out by moving the reference point at each iteration. In contrast, the weights are kept unaltered during the interactive optimization process. That is, weights do not define preferences; they are mainly used to normalize each objective function. Usually, the weights are set for all i = 1, . . ., k as

λ_i = 1/(z_i^nad − z_i^*).

It is important to mention that the DM can provide both feasible and infeasible reference points. On the one hand, if the reference point is infeasible, then the minimum of (30) is the closest feasible point to the aspiration levels. On the other hand, if z^ref is feasible, the solution generated by (30) improves the aspiration levels.

3.3.4 Light Beam Search

The Light Beam Search (LBS) method proposed by Jaszkiewicz and Slowinski [22] is an iterative method which combines the reference point idea with tools of Multi-Attribute Decision Analysis (MADA). At each iteration, a finite sample of nondominated points is generated. The sample is composed of a current point called the middle point, obtained in the previous iteration, and J nondominated points from its neighborhood. A local preference model in the form of an outranking relation S is used to define the neighborhood of the middle point. It is said that a outranks b (aSb) if a is considered to be at least as good as b. The outranking relation is defined by the DM, who specifies three preference thresholds for each objective: the indifference threshold, the preference threshold and the veto threshold. The DM has the possibility to scan the inner area of the neighborhood along the objective function trajectories between any two characteristic neighbors or between a characteristic neighbor and the middle point. Below, the general scheme of the LBS procedure is shown.

Step 0: Ask the DM to specify the starting aspiration and reservation points.
Step 1: Compute the starting middle point.
Step 2: Ask the DM to specify the local preferential information to be used to build an outranking relation.
Step 3: Present the middle point to the DM.
Step 4: Calculate the characteristic neighbors of the middle point and present them to the DM.
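A compact sketch of Definition 9 (an illustrative addition with invented data): given a finite set of attainable objective vectors, the achievement scalarizing function with the normalization weights above selects the point that best matches the DM's reference point.

```python
import numpy as np

# invented set of mutually nondominated attainable vectors
Z = np.array([[1.0, 4.0], [2.0, 2.5], [3.0, 1.5], [4.0, 1.0]])
z_star, z_nad = Z.min(axis=0), Z.max(axis=0)  # discrete ideal/nadir estimates
lam = 1.0 / (z_nad - z_star)                  # normalizing weights
rho = 1e-4

def s_inf(z, z_ref):
    # augmented weighted Tchebycheff achievement function (30)
    d = lam * (z - z_ref)
    return d.max() + rho * d.sum()

z_ref = np.array([2.0, 2.0])                  # DM's aspiration levels
best = Z[np.argmin([s_inf(z, z_ref) for z in Z])]
print(best)  # the attainable point best matching z_ref, here (2.0, 2.5)
```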


Step 5: If the DM is satisfied, then stop; else
  i. ask the DM to choose one of the neighboring points to be the new middle point, or
  ii. update the preferential information, or
  iii. define a new aspiration point and/or a reservation point;
  iv. go to Step 4.

4 Evolutionary Algorithms

Currently, there is a large variety of traditional mathematical programming methods (see, for example, [7, 5]) to solve MOPs. However, some researchers [23, 24, 25, 26] have identified several limitations of traditional mathematical programming approaches for solving MOPs, among them the following:

1. These algorithms must be run many times to find several elements of the Pareto optimal set.
2. Many of them require domain knowledge about the problem to be solved.
3. Some of them are sensitive to the shape or continuity of the Pareto front.

These complexities call for alternative approaches to deal with certain types of MOPs. Among these alternative approaches we find Evolutionary Algorithms (EAs), which are stochastic search and optimization methods that simulate the natural evolution process. At the end of the 1960s, Rosenberg [27] proposed the use of genetic algorithms to solve MOPs. However, it was not until 1984 that David Schaffer [28] introduced the first actual implementation of what is now called a Multi-Objective Evolutionary Algorithm (MOEA). From that moment on, many researchers [29, 30, 31, 32, 33, 34] have proposed a wide variety of MOEAs. As other stochastic search strategies (e.g., simulated annealing, ant colony optimization, or particle swarm optimization), MOEAs do not guarantee to find the true Pareto optimal set but, instead, aim to generate a good approximation of such a set in a reasonable computational time. On the other hand, MOEAs are particularly well suited to solve MOPs because they operate over a set of potential solutions (i.e., the population). This feature allows them to generate several elements of the Pareto optimal set (or a good approximation thereof) in a single run. Furthermore, MOEAs are less susceptible to the shape or continuity of the Pareto front than traditional mathematical programming techniques, require little domain information, and are relatively easy to implement and use.

Single objective EAs and MOEAs share a similar structure. The major difference is the fitness assignment mechanism, since a MOEA deals with fitness vectors of dimension k (k ≥ 2). As pointed out by different authors [31, 4], finding an approximation to the Pareto front is by itself a bi-objective problem whose objectives are:

• minimize the distance of the generated vectors to the true Pareto front, and
• maximize the diversity of the achieved Pareto front approximation.


Therefore, the fitness assignment scheme must consider these two objectives. Algorithm 4 describes the basic structure of a MOEA.

Algorithm 4: Pseudocode of a MOEA
1: t ← 0
2: Generate an initial population P(t)
3: while the stopping criterion is not fulfilled do
4:   Evaluate the objective vector f for each individual in P(t)
5:   Assign a fitness to each individual in P(t)
6:   Select from P(t) a group of parents P′(t), preferring the fitter ones
7:   Recombine individuals of P′(t) to obtain a child population P″(t)
8:   Mutate individuals in P″(t)
9:   Combine P(t) and P″(t) and select the best individuals to get P(t + 1)
10: t ← t + 1
11: end while

Usually, the initial population is generated in a random manner. However, if we have some knowledge about the characteristics of a good solution, it is wise to use this information to create the initial population. The fitness assignment scheme requires ranking the individuals according to a preference relation and then assigning a scalar fitness value to each individual using such a rank. The selection for reproduction (line 6) is carried out as in the single objective case, for instance, using tournament selection. In contrast, the selection for survival (line 9), intended to maintain the best solutions found so far (i.e., elitism), uses a preference relation to remove some solutions and keep the population size constant. To ensure diversity of the approximation set, the selection mechanism is also based on a density estimator in objective function space.
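For concreteness, the following Python sketch renders Algorithm 4 for minimization problems. It is only illustrative: the operator choices (binary tournament selection, uniform crossover, Gaussian mutation) and all numerical settings are assumptions of ours, and the density-based diversity mechanism mentioned above is deliberately omitted for brevity.

import random

def dominates(u, v):
    """Pareto dominance for minimization: u dominates v."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def moea(f, n_var, pop_size=40, generations=100, lo=0.0, hi=1.0):
    pop = [[random.uniform(lo, hi) for _ in range(n_var)] for _ in range(pop_size)]
    objs = [f(x) for x in pop]
    for _ in range(generations):
        # fitness: number of dominating individuals (0 = nondominated)
        fit = [sum(dominates(o2, o1) for o2 in objs) for o1 in objs]
        def parent():  # binary tournament on fitness
            i, j = random.randrange(pop_size), random.randrange(pop_size)
            return pop[i] if fit[i] <= fit[j] else pop[j]
        children = []
        for _ in range(pop_size):  # uniform crossover + Gaussian mutation
            p1, p2 = parent(), parent()
            c = [(g1 if random.random() < 0.5 else g2) for g1, g2 in zip(p1, p2)]
            children.append([min(hi, max(lo, g + random.gauss(0.0, 0.05))) for g in c])
        # elitist survival: best of parents + children by dominance count
        union, uobjs = pop + children, objs + [f(x) for x in children]
        ufit = [sum(dominates(o2, o1) for o2 in uobjs) for o1 in uobjs]
        order = sorted(range(len(union)), key=lambda i: ufit[i])[:pop_size]
        pop, objs = [union[i] for i in order], [uobjs[i] for i in order]
    # return the nondominated subset of the final population
    return [x for x, o in zip(pop, objs)
            if not any(dominates(o2, o) for o2 in objs)]

# Example use on Schaffer's two-objective test problem:
# front = moea(lambda x: (x[0]**2, (x[0] - 2.0)**2), n_var=1)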

4.1 MOGA

Carlos M. Fonseca and Peter J. Fleming [33] proposed the Multi-Objective Genetic Algorithm (MOGA), which was one of the first to use Pareto dominance to rank individuals. In MOGA, the rank of a certain individual corresponds to the number of individuals in the current population by which it is dominated. That is, the rank of individual xi at generation t is given by rank(xi, t) = 1 + pi, where pi is the number of individuals that dominate xi in the current generation. Note that all nondominated individuals in the population receive rank 1, while dominated ones are penalized according to the population density of the corresponding region of the trade-off surface. Here, N refers to the population size, t to the current generation and xk to the k-th individual of the population P. Fitness assignment is performed in the following way:

1. Sort the population according to rank.
2. Assign fitness to individuals by interpolating from the best (rank 1) to the worst (rank n ≤ N) in the way proposed by David E. Goldberg [35], according to some function, usually linear, but not necessarily.
3. Average the fitnesses of individuals with the same rank, so that all of them will be


sampled at the same rate. This procedure keeps the global population fitness constant while maintaining appropriate selective pressure, as defined by the function used. As Goldberg and Deb [36] indicate, this type of blocked fitness assignment is likely to produce a large selection pressure that might lead to premature convergence. In order to avoid this, MOGA adopts a fitness sharing scheme [37] that “penalizes” solutions lying too close to others in some space (e.g., objective function space).
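A compact sketch of this ranking and rank-averaged fitness assignment is given below (it reuses the dominates helper from the earlier sketch; the linear interpolation is one possible choice, as noted above).

from collections import defaultdict

def moga_fitness(objs):
    """MOGA: rank = 1 + number of dominating individuals; fitness is
    linearly interpolated from best to worst, then averaged per rank."""
    n = len(objs)
    ranks = [1 + sum(dominates(o2, o1) for o2 in objs) for o1 in objs]
    order = sorted(range(n), key=lambda i: ranks[i])
    raw = [0.0] * n
    for position, i in enumerate(order):
        raw[i] = float(n - position)   # linear: best gets n, worst gets 1
    groups = defaultdict(list)
    for i, r in enumerate(ranks):
        groups[r].append(i)
    fitness = [0.0] * n
    for members in groups.values():
        mean = sum(raw[i] for i in members) / len(members)
        for i in members:
            fitness[i] = mean          # equal sampling within a rank
    return ranks, fitness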

4.2 NSGA and NSGA-II

The Nondominated Sorting Genetic Algorithm (NSGA) was proposed by Srinivas and Deb [30] and is another variation of Goldberg's approach [35]. The NSGA is based on several layers of classification of the individuals. Before selection is performed, the population is ranked on the basis of nondomination: all nondominated individuals are classified into one category (with a dummy fitness value, which is proportional to the population size, to provide an equal reproductive potential for these individuals). To maintain the diversity of the population, these classified individuals are shared with their dummy fitness values. Then, this group of classified individuals is ignored and another layer of nondominated individuals is considered. The process continues until all individuals in the population are classified. Stochastic remainder proportionate selection is adopted for this technique. Since individuals in the first front have the maximum fitness value, they always get more copies than the rest of the population. This allows for a better search of the different nondominated regions and results in convergence of the population toward such regions. Sharing, for its part, helps to distribute the population over this region (i.e., the Pareto front of the problem). As a result, one might think that this MOEA converges rather quickly; however, a computational bottleneck occurs with the fitness sharing mechanism. An improved version of the NSGA algorithm, called NSGA-II, was proposed by Deb et al. [38, 39]. As shown in Figure 7, the NSGA-II builds a population of competing individuals, ranks and sorts each individual according to its nondomination level, applies Evolutionary Operators (EVOPs) to create a new offspring pool, and then combines the parents and offspring before partitioning the new combined pool into fronts. The NSGA-II then computes a crowding distance for each member and uses this value in the selection process in order to spread the solutions along the Pareto front. This is the most popular MOEA in use today, and it is frequently adopted to compare the performance of newly introduced MOEAs. A sketch of the crowding distance computation is given below.
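The sketch follows the usual NSGA-II formulation (boundary solutions receive infinite distance; inner distances are normalized by the front's extremes); variable names are ours.

def crowding_distance(front):
    """Crowding distance of each objective vector in a nondominated front."""
    n = len(front)
    if n <= 2:
        return [float('inf')] * n
    m = len(front[0])
    dist = [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: front[i][j])
        lo, hi = front[order[0]][j], front[order[-1]][j]
        dist[order[0]] = dist[order[-1]] = float('inf')   # keep the extremes
        if hi == lo:
            continue
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1]][j]
                               - front[order[k - 1]][j]) / (hi - lo)
    return dist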

4.3 SPEA and SPEA2

The Strength Pareto Evolutionary Algorithm (SPEA) was introduced by Eckart Zitzler and Lothar Thiele [31]. This approach integrates some successful mechanisms from other MOEAs, namely, a secondary population (external archive) and the use of Pareto dominance ranking. SPEA uses an external archive containing the nondominated solutions previously found. At each generation, nondominated individuals are copied to the external nondominated set. In SPEA, the fitness of each individual in the primary population is computed using the individuals of the external archive. First, for each individual in this external set, a strength value is computed.


Figure 7. Flow diagram that shows the way in which the NSGA-II works. Pt is the parent population and Qt the offspring population at generation t. F1 contains the best solutions from the combined populations (parents and offspring), F2 the second best, and so on.

The strength, $s_i$, of individual $i$ is determined by $s_i = n/(N+1)$, where $n$ is the number of solutions dominated by $i$, and $N$ is the size of the primary population. This strength is similar to the ranking value of MOGA, since it is proportional to the number of solutions that a certain individual dominates. Finally, the fitness of each individual in the primary population is equal to the sum of the strengths of all the external members that dominate it. This fitness assignment considers both closeness to the true Pareto front and an even distribution of solutions at the same time. Thus, instead of using niches based on distance, Pareto dominance is used to ensure that the solutions are properly distributed along the Pareto front. Since the size of the archive may grow too large, the authors employed a technique that prunes the contents of the external nondominated set so that its size remains below a certain threshold. There is also a revised version of SPEA (called SPEA2) [40]. SPEA2 has three main differences with respect to its predecessor: (1) it incorporates a fine-grained fitness assignment strategy which takes into account, for each individual, the number of individuals that dominate it and the number of individuals that it dominates; (2) it uses a nearest neighbor density estimation technique which guides the search more efficiently; and (3) it has an enhanced archive truncation method that guarantees the preservation of boundary solutions.
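A minimal sketch of this fitness assignment, written directly from the description above (minimization; lower fitness is better; reusing the dominates helper defined earlier):

def spea_fitness(archive, population):
    """SPEA strengths of archive members and resulting population fitness.
    archive and population are lists of objective vectors."""
    N = len(population)
    strengths = [sum(dominates(a, x) for x in population) / (N + 1.0)
                 for a in archive]
    # fitness of a population member: sum of strengths of the archive
    # members that dominate it
    return [sum(s for a, s in zip(archive, strengths) if dominates(a, x))
            for x in population]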

4.4 PAES

The Pareto Archived Evolution Strategy (PAES) was designed and implemented by Joshua D. Knowles and David W. Corne [41]. PAES consists of a (1 + 1) evolution strategy (i.e.,


a single parent that generates a single offspring) in combination with a historical archive that records some of the nondominated solutions previously found. This archive is used as a reference set against which each mutated individual is compared. PAES also uses a novel approach to maintain diversity, which consists of a crowding procedure that divides objective space in a recursive manner. Each solution is placed in a certain grid location based on the values of its objectives (which are used as its “coordinates” or “geographical location”). A map of such a grid is maintained, indicating the number of solutions that reside in each grid location. Since the procedure is adaptive, no extra parameters are required (except for the number of divisions of the objective space). Furthermore, the procedure has a lower computational complexity than traditional niching methods [41]. The adaptive grid of PAES and some other issues related to external archives (also called “elite” archives) have been studied both from an empirical and from a theoretical perspective (see for example [42]). Other implementations of PAES were also proposed, namely (1 + λ)-ES and (µ + λ)-ES; however, these were found not to improve overall performance.
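The following Python sketch illustrates the recursive grid idea; the cell-encoding scheme and function names are our own simplification of the adaptive grid of [41], and the objective bounds are assumed known here (in PAES they adapt as the archive evolves).

from collections import Counter

def grid_location(obj, mins, maxs, divisions=4):
    """Grid cell of an objective vector: each objective range is bisected
    `divisions` times, giving 2**divisions slots per dimension."""
    cell = []
    for f, lo, hi in zip(obj, mins, maxs):
        index = 0
        for _ in range(divisions):
            mid = 0.5 * (lo + hi)
            if f > mid:
                index = 2 * index + 1
                lo = mid
            else:
                index = 2 * index
                hi = mid
        cell.append(index)
    return tuple(cell)

def grid_crowding(archive_objs, mins, maxs, divisions=4):
    """Number of archive members sharing each solution's grid cell
    (solutions in sparse cells are preferred)."""
    cells = [grid_location(o, mins, maxs, divisions) for o in archive_objs]
    count = Counter(cells)
    return [count[c] for c in cells]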

4.5 PESA

The Pareto Envelope-based Selection Algorithm (PESA) was suggested by Corne et al. [43]. PESA consists of a small internal population and a larger external population. A hyper-grid division of phenotype space is used to maintain selection diversity (using a crowding measure) as the MOEA runs. Furthermore, this crowding measure is used to decide which solutions are retained in an external archive similar to the one adopted by PAES [41]. A revised version of this MOEA is called PESA-II [44]. The difference between PESA and PESA-II is that, in the latter, selection is region-based and the subject of selection is a hyperbox, not just an individual (i.e., it first selects a hyperbox, and then it selects an individual within that hyperbox). The motivation behind this approach is to reduce the computational cost associated with Pareto ranking [44].

4.6 New Trends in MOEAs

Today, one of the trends regarding the design of MOEAs is the adoption of performance measures to select individuals (see for example [45]). There is also growing interest in dealing with problems having a large number of objectives (see for example [46]) and with expensive objective functions (see for example [47]).

4.7 Incorporation of Preferences in MOEAs

Among the earliest attempts to incorporate preferences in a MOEA, we can find Fonseca and Fleming's proposal [33], which consisted of extending the ranking mechanism of MOGA to accommodate goal information as an additional criterion. They used the goal attainment method, so that the DM could supply new goals at each generation of the MOEA, thereby reducing the size of the solution set under inspection and supporting learning. Deb [48] proposed a technique to transform goal programming problems into multiobjective optimization problems, which are then solved using a MOEA. In goal programming the DM has to assign targets or goals that he/she wishes to achieve for each objective,


and these values are incorporated into the problem as additional constraints. The objective function then attempts to minimize the absolute deviations from the targets to the objectives. Yun et al. [49] proposed the use of Generalized Data Envelopment Analysis (GDEA) [50] with aspiration levels for choosing desirable solutions from the Pareto optimal set. This is an interactive approach in which a nonlinear aggregating function is optimized by a genetic algorithm in order to generate the Pareto optimal solutions of the multiobjective optimization problem. The decision maker must define aspiration levels for each objective, as well as the ideal values for each of them. Then, the aspiration levels are adopted as constraints during the optimization, so that the Pareto optimal solutions are filtered and those closest to the aspiration levels are assigned the higher fitness values. Branke et al. [51] proposed an approach called Guided MOEA, also exploiting the concept of a utility function. The idea is to express the DM's preferences in terms of maximal and minimal linear weighting functions, corresponding directly to slopes of a linear utility function. The authors determine the optimal solution from a population using both of the previously mentioned weighting functions. Those individuals are given rank one and are considered the borderline solutions (since they represent extreme cases of the DM's preferences). Then all the nondominated vectors are evaluated in terms of these two linear weighting functions. After that, all solutions that have a better fitness than either of the two borderline individuals are assigned the same rank (these are the individuals preferred by the DM). These solutions are removed from the population and a similar ranking scheme is applied to the remaining individuals. The authors used a biased version of fitness sharing, in which the maximum and minimum niche counts are incorporated into a formula assigning each individual a fitness at least as good as that of any other individual with inferior rank. More recently, Deb and Sundar [52] incorporated a reference point approach into the NSGA-II [38]. They introduced a modification in the crowding distance operator in order to select from the last front the solutions that would take part in the new population. They used the Euclidean distance to sort and rank the population accordingly (the solution closest to the reference point receives the best rank). The proposed method was designed to take into account a set of reference points. The drawback of this scheme is that it does not guarantee weak Pareto optimality, particularly in MOPs with disconnected Pareto fronts. A similar approach was also proposed by Deb and Kumar [53], in which the light beam search procedure was incorporated into the NSGA-II. Similarly to the previous approach, they modified the crowding operator to incorporate the DM's preferences. They used a weighted Tchebycheff achievement function to assign the crowding distance to each solution in each front; thus, the solution with the least distance will have the best crowding rank. As in the previous approach, this algorithm finds a subset of solutions around the optimum of the achievement function using the usual outranking relation. However, of the three parameters that specify the outranking relation, they only used the veto threshold.
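For reference, a weighted Tchebycheff achievement scalarizing function of the kind mentioned above can be sketched in a few lines of Python; the small augmentation term (rho) is a standard device to avoid weakly Pareto optimal solutions, and its value here is an assumption.

def achievement(obj, reference, weights, rho=1e-4):
    """Augmented weighted Tchebycheff achievement function (minimization):
    smaller values indicate solutions closer to the reference point."""
    terms = [w * (f - r) for f, r, w in zip(obj, reference, weights)]
    return max(terms) + rho * sum(terms)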

4.8 New Trends in the Incorporation of Preferences in MOEAs

One interesting trend in this area is the integration of mechanisms that allow the user to define preferences within the selection process in a more natural way, for example by allowing the use of set preference relations of any kind [54].

5 Conclusion

This chapter has presented several techniques to solve multiobjective optimization problems using both mathematical programming and evolutionary computation approaches. The choice of the most appropriate approach depends on the nature of the problem to be solved and on the available resources. Since mathematical programming techniques normally emphasize the use of interactive techniques, they are suitable for problems in which the decision maker has considerable knowledge of the problem and can express his/her preferences accurately. In turn, evolutionary algorithms are useful not only to approximate the Pareto front, but also to gain knowledge about the problem, i.e., to understand the structure of the possible set of solutions, the degree of conflict and the trade-offs among the objectives. In other words, MOEAs are a good choice when little information is available about a certain MOP. Another important topic that complements the solution of a MOP is the incorporation of the user's preferences and, as such, this topic is briefly discussed in this chapter in the context of its combined use with MOEAs. The main aim of this chapter has been to provide a general overview of the multiobjective optimization field and to serve as a starting point for those interested in working in this research area.

References

[1] Pareto, V. Cours D'Economie Politique. F. Rouge, 1896.
[2] Miettinen, K. Introduction to Multiobjective Optimization: Noninteractive Approaches. In Branke, J.; Deb, K.; Miettinen, K.; Slowinski, R., eds., Multiobjective Optimization: Interactive and Evolutionary Approaches, 1–26. Springer-Verlag, Berlin, Heidelberg, 2008.
[3] Edgeworth, F. Y. Mathematical Psychics. C. Kegan Paul, 1881.
[4] Coello Coello, C. A.; Lamont, G. B.; Van Veldhuizen, D. A. Evolutionary Algorithms for Solving Multi-Objective Problems. Second edn. Springer, New York, 2007. ISBN 978-0-387-33254-3.
[5] Figueira, J.; Greco, S.; Ehrgott, M. Multiple Criteria Decision Analysis: State of the Art Surveys. Springer Verlag, Boston, Dordrecht, London, 2005.
[6] Zitzler, E.; Knowles, J.; Thiele, L. Quality Assessment of Pareto Set Approximations. In Branke, J.; Deb, K.; Miettinen, K.; Slowinski, R., eds., Multiobjective Optimization. Interactive and Evolutionary Approaches, 373–404. Springer. Lecture Notes in Computer Science Vol. 5252, Berlin, Germany, 2008.
[7] Miettinen, K. M. Nonlinear Multiobjective Optimization. Kluwer Academic Publishers, Boston, Massachusetts, USA, 1998.

[8] Charnes, A.; Cooper, W. W. Management Models and Industrial Applications of Linear Programming, vol. 1. John Wiley, New York, 1961.


[9] Ignizio, J. Goal Programming and Extensions. DC Heath, Lexington, Massachusetts, USA, 1976.
[10] Gembicki, F. Vector Optimization for Control with Performance and Parameter Sensitivity Indices. Ph.D. thesis, Case Western Reserve University, 1974.
[11] Gembicki, F.; Haimes, Y. Approach to performance and sensitivity multiobjective optimization: The goal attainment method. IEEE Transactions on Automatic Control 1975. 20, 769–771.
[12] Chen, Y.; Liu, C. Multiobjective VAR planning using the goal-attainment method. IEE Proceedings - Generation, Transmission and Distribution 1994. 141, 227–232.
[13] Ehrgott, M. Multicriteria Optimization. Second edn. Springer, Berlin, 2005.
[14] Das, I.; Dennis, J. E. Normal-boundary intersection: a new method for generating Pareto optimal points in multicriteria optimization problems. SIAM Journal on Optimization 1998. 8, 631–657.
[15] Duckstein, L. Multiobjective optimization in structural design - the model choice problem. In New Directions in Optimum Structural Design, 459–481. Wiley-Interscience, Chichester, England and New York, 1984.
[16] Zeleny, M. Compromise programming. In Cochrane, J.; Zeleny, M., eds., Multiple Criteria Decision Making, 262–301. University of South Carolina Press, Columbia, 1973.
[17] Steuer, R. E.; Choo, E.-U. An interactive weighted Tchebycheff procedure for multiple objective programming. Mathematical Programming 1983. 26, 326–344.
[18] Geoffrion, A.; Dyer, J.; Feinberg, A. An interactive approach for multicriteria optimization with an application to the operation of an academic department. Management Science 1973. 19, 357–369.
[19] Frank, M.; Wolfe, P. An algorithm for quadratic programming. Naval Research Logistics Quarterly 1956. 3, 95–110.
[20] Wierzbicki, A. The use of reference objectives in multiobjective optimisation. In Fandel, G.; Gal, T., eds., MCDM Theory and Application, Proceedings. No. 177 in Lecture Notes in Economics and Mathematical Systems, Springer Verlag, Hagen, 1980, 468–486.
[21] Wierzbicki, A. A methodological guide to multiobjective optimization. In Iracki, K.; Malanowski, K.; Walukiewicz, S., eds., Optimization Techniques, Part 1, vol. 22 of Lecture Notes in Control and Information Sciences. Springer, Berlin, 1980, 99–123.
[22] Jaszkiewicz, A.; Slowinski, R. The light beam search approach - an overview of methodology and applications. European Journal of Operational Research 1999. 113, 300–314.


[23] Coello Coello, C. A. A Comprehensive Survey of Evolutionary-Based Multiobjective Optimization Techniques. Knowledge and Information Systems. An International Journal 1999. 1, 269–308.
[24] Deb, K. Evolutionary Algorithms for Multi-Criterion Optimization in Engineering Design. In Miettinen, K.; Mäkelä, M. M.; Neittaanmäki, P.; Periaux, J., eds., Evolutionary Algorithms in Engineering and Computer Science, chap. 8, 135–161. John Wiley & Sons, Ltd, Chichester, United Kingdom, 1999.
[25] Fogel, L. J. Artificial Intelligence through Simulated Evolution. Forty Years of Evolutionary Programming. John Wiley & Sons, Inc., New York, 1999.
[26] Michalewicz, Z.; Fogel, D. B. How to Solve It: Modern Heuristics. Springer, Berlin, 2000.
[27] Rosenberg, R. S. Simulation of genetic populations with biochemical properties. Ph.D. thesis, University of Michigan, Ann Arbor, Michigan, USA, 1967.
[28] Schaffer, J. D. Multiple Objective Optimization with Vector Evaluated Genetic Algorithms. Ph.D. thesis, Vanderbilt University, 1984.
[29] Coello Coello, C. A.; Toscano Pulido, G. Multiobjective Optimization using a Micro-Genetic Algorithm. In Spector, L.; Goodman, E. D.; Wu, A.; Langdon, W.; Voigt, H.-M.; Gen, M.; Sen, S.; Dorigo, M.; Pezeshk, S.; Garzon, M. H.; Burke, E., eds., Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'2001). Morgan Kaufmann Publishers, San Francisco, California, 2001, 274–282.
[30] Srinivas, N.; Deb, K. Multiobjective Optimization Using Nondominated Sorting in Genetic Algorithms. Evolutionary Computation 1994. 2, 221–248.
[31] Zitzler, E.; Thiele, L. Multiobjective Evolutionary Algorithms: A Comparative Case Study and the Strength Pareto Approach. IEEE Transactions on Evolutionary Computation 1999. 3, 257–271.
[32] Horn, J.; Nafpliotis, N. Multiobjective Optimization using the Niched Pareto Genetic Algorithm. Tech. Rep. IlliGAL Report 93005, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA, 1993.
[33] Fonseca, C. M.; Fleming, P. J. Genetic Algorithms for Multiobjective Optimization: Formulation, Discussion and Generalization. In Forrest, S., ed., Proceedings of the Fifth International Conference on Genetic Algorithms. University of Illinois at Urbana-Champaign, Morgan Kaufmann Publishers, San Mateo, California, 1993, 416–423.
[34] Knowles, J. D.; Corne, D. W. The Pareto Archived Evolution Strategy: A New Baseline Algorithm for Multiobjective Optimisation. In 1999 Congress on Evolutionary Computation. IEEE Service Center, Washington, D.C., 1999, 98–105.


[35] Goldberg, D. E. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Publishing Company, Reading, Massachusetts, 1989.
[36] Goldberg, D.; Deb, K. A comparative analysis of selection schemes used in genetic algorithms. Foundations of Genetic Algorithms 1991. 1, 69–93.
[37] Deb, K.; Goldberg, D. E. An Investigation of Niche and Species Formation in Genetic Function Optimization. In Schaffer, J. D., ed., Proceedings of the Third International Conference on Genetic Algorithms. George Mason University, Morgan Kaufmann Publishers, San Mateo, California, 1989, 42–50.
[38] Deb, K.; Agrawal, S.; Pratab, A.; Meyarivan, T. A Fast Elitist Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-II. KanGAL report 200001, Indian Institute of Technology, Kanpur, India, 2000.
[39] Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 2002. 6, 182–197.
[40] Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm. Tech. Rep. 103, Computer Engineering and Networks Laboratory (TIK), Swiss Federal Institute of Technology (ETH) Zurich, Gloriastrasse 35, CH-8092 Zurich, Switzerland, 2001.
[41] Knowles, J. D.; Corne, D. W. Approximating the Nondominated Front Using the Pareto Archived Evolution Strategy. Evolutionary Computation 2000. 8, 149–172.
[42] Knowles, J.; Corne, D. Properties of an Adaptive Archiving Algorithm for Storing Nondominated Vectors. IEEE Transactions on Evolutionary Computation 2003. 7, 100–116.
[43] Corne, D. W.; Knowles, J. D.; Oates, M. J. The Pareto Envelope-based Selection Algorithm for Multiobjective Optimization. In Schoenauer, M.; Deb, K.; Rudolph, G.; Yao, X.; Lutton, E.; Merelo, J. J.; Schwefel, H.-P., eds., Proceedings of the Parallel Problem Solving from Nature VI Conference. Springer. Lecture Notes in Computer Science No. 1917, Paris, France, 2000, 839–848.
[44] Corne, D. W.; Jerram, N. R.; Knowles, J. D.; Oates, M. J. PESA-II: Region-based Selection in Evolutionary Multiobjective Optimization. In Spector, L.; Goodman, E. D.; Wu, A.; Langdon, W.; Voigt, H.-M.; Gen, M.; Sen, S.; Dorigo, M.; Pezeshk, S.; Garzon, M. H.; Burke, E., eds., Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'2001). Morgan Kaufmann Publishers, San Francisco, California, 2001, 283–290.
[45] Beume, N.; Naujoks, B.; Emmerich, M. SMS-EMOA: Multiobjective selection based on dominated hypervolume. European Journal of Operational Research 2007. 181, 1653–1669.


[46] Farina, M.; Amato, P. A fuzzy definition of “optimality” for many-criteria optimization problems. IEEE Transactions on Systems, Man, and Cybernetics Part A - Systems and Humans 2004. 34, 315–326.
[47] Voutchkov, I.; Keane, A. Multiobjective Optimization using Surrogates. In Parmee, I., ed., Adaptive Computing in Design and Manufacture 2006. Proceedings of the Seventh International Conference. The Institute for People-centred Computation, Bristol, UK, 2006, 167–175.
[48] Deb, K. Solving Goal Programming Problems Using Multi-Objective Genetic Algorithms. In 1999 Congress on Evolutionary Computation. IEEE Service Center, Washington, D.C., 1999, 77–84.
[49] Yun, Y.; Nakayama, H.; Arakawa, M. Multiple criteria decision making with generalized DEA and an aspiration level method. European Journal of Operational Research 2004. 158, 697–706.
[50] Yun, Y.; Nakayama, H.; Tanino, T.; Arakawa, M. A Multi-Objective Optimization Method Combining Generalized Data Envelopment Analysis and Genetic Algorithms. In 1999 IEEE International Conference on Systems, Man, and Cybernetics, vol. 1. IEEE, 1999, 671–676.
[51] Branke, J.; Kaußler, T.; Schmeck, H. Guiding Multi Objective Evolutionary Algorithms Towards Interesting Regions. Tech. Rep. 398, Institut für Angewandte Informatik und Formale Beschreibungsverfahren, Universität Karlsruhe, Karlsruhe, Germany, 2000.
[52] Deb, K.; Sundar, J. Reference Point Based Multi-objective Optimization Using Evolutionary Algorithms. In GECCO '06: Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation. ACM, New York, NY, USA, 2006, 635–642.
[53] Deb, K.; Kumar, A. Light beam search based multi-objective optimization using evolutionary algorithms. In IEEE Congress on Evolutionary Computation, 2007, 2125–2132.
[54] Zitzler, E.; Thiele, L.; Bader, J. SPAM: Set Preference Algorithm for Multiobjective Optimization. In Rudolph, G.; Jansen, T.; Lucas, S.; Poloni, C.; Beume, N., eds., Parallel Problem Solving from Nature - PPSN X, 847–858. Springer. Lecture Notes in Computer Science Vol. 5199, Dortmund, Germany, 2008.

In: Optimization in Polymer Processing Editors: A. Gaspar-Cunha and J. A. Covas, pp. 59-83

ISBN: 978-1-61122-818-2 ©2011 Nova Science Publishers, Inc.

Chapter 4

EXTENDING OPTIMIZATION ALGORITHMS TO COMPLEX ENGINEERING PROBLEMS

António Gaspar-Cunha1, José Ferreira1, José António Covas1 and Carlos Fonseca2

1 Institute for Polymers and Composites/I3N, University of Minho, Campus de Azurém, 4800-058 Guimarães, Portugal
2 Department of Informatics Engineering, University of Coimbra, Pólo II, Pinhal de Marrocos, 3030-290 Coimbra, Portugal and CEG-IST, Instituto Superior Técnico, Technical University of Lisbon, 1049-101 Lisboa, Portugal

Keywords: Multidisciplinary design optimization, Engineering design, Multi-Objective Optimization, Evolutionary Algorithms

1. INTRODUCTION

Real engineering design and optimization problems are complex, multidisciplinary and difficult to manage within reasonable time frames; in some cases, they can, at least to some extent, be mathematically described by sophisticated computational tools, which generally require significant resources. The momentous advances in some scientific and technological subjects (e.g., computational fluid dynamics, structural mechanics), coupled with the development of highly performing computing techniques (e.g., parallel and/or grid computing) and computer facilities, make it possible to progressively tackle more features of complicated problems [1]. Multidisciplinary Design Optimization, MDO, can be described as a technology, an environment, or a methodology for the design of complex integrated engineering structures, which combines different disciplines and takes into account in a synergistic manner the interaction between the various subsystems [2-4]. Examples of its practical application include the design and optimization of aircraft, cars, building structures and manufacturing systems [1, 5-9].


In principle, a strategy like MDO could also be utilized for the optimization of polymer processing, as this encompasses several disciplines, such as fluid mechanics, heat transfer, polymer science, rheology, numerical methods, mechanical engineering and optimization techniques. Several MDO strategies have been proposed in the literature [1-9]. A common characteristic of the earlier methods is the utilization of approximation and decomposition techniques, in order to divide the problem into smaller, more amenable blocks [1-3], which are subsequently solved separately, using simpler models. However, the unified view is lost and, most likely, the total cost of the design will actually be higher than that of solving the whole problem at once. More recently, integrated MDO approaches have been proposed [4-9], but a few seem to overlook the fact that real world problems contain multiple, often conflicting, objectives that must be tackled simultaneously [10]. As seen in previous chapters, Evolutionary Algorithms, EAs, are particularly adequate to deal with the multi-objective nature of real problems, as they work with a population (of vectors, or solutions) rather than with a single solution point. This feature enables the generation of Pareto frontiers representing the trade-off between the objectives, while simultaneously providing a link to the decision variables [10-13]. Thus, the result of a Multi-Objective Evolutionary Algorithm, MOEA, is a set of solutions as near as possible to the Pareto optimal front [10]. Other multi-objective algorithms may be preferable in specific situations, such as Ant Colony Optimization, ACO [13, 14], or Stochastic Local Search, SLS [15], among others. In all cases, it will be necessary to provide information regarding the relative importance of each problem objective. This is usually accomplished by introducing into the optimization system the preferences of a Decision Maker, DM [16]. Depending on the decision making strategy adopted, such information can be introduced before, during, or after the optimization (see Chapter 3 for more details) [11-13, 16]. Additionally, since in real applications small variations of the design variables, or of environmental parameters, may frequently occur, the performance of the prospective optimal solution(s) should be only slightly affected by them, i.e., the solutions must be robust [17-21]. Even though robustness is clearly an important aspect to consider during optimization, it is rarely included in traditional algorithms [19, 21]. One of the major difficulties in applying MOEAs to real engineering problems is the large number of objective function evaluations needed to obtain an acceptable solution - typically, of the order of several thousands. Furthermore, these evaluations are often time-consuming, as they use large numerical codes generally based on computationally costly methods, such as finite differences or finite elements. Consequently, reducing the number of evaluations necessary to reach an acceptable solution is of major practical importance [22]. However, finding good approximate methods may be more difficult than anticipated, due to the existence of several objectives and to the possible interactions between them; different approaches have been pursued, often involving the hybridization of MOEAs with local search procedures, known as memetic algorithms [22-27]. Finally, it is vital to define/control the dimension of the multi-objective problem to solve.
As the number of individual objectives rises, so does the number of non-dominated solutions. In turn, the problem becomes much more difficult to solve (at least from the EAs' point of view), since a large proportion of solutions is carried from one generation to the next, reducing the selection pressure. Simultaneously, it becomes increasingly difficult to visualize the correlations between the different solutions. Whenever possible, one


should consider ways of reducing the number of objectives, for example using approaches based on statistical techniques [28, 29]. In conclusion, the application of a multidisciplinary design optimization methodology to solve multi-objective engineering problems should entail optimization together with engineering and design tools. The former must encompass the selection of the relevant algorithm(s), a decision support system, an analysis of the robustness of the solution(s), the reduction of the number of evaluations required by the optimization routine (and thus of the computing times) and the reduction of the number of objectives. The present chapter presents and discusses a possible approach to solve problems of the polymer processing type by employing tools that are able to deal with multiple objectives, decision making and robustness of the solutions, among others.

2. THE METHODOLOGY

2.1. Methodology Structure

Although the methodology proposed here does not possess the breadth of a complete MDO, it attempts to incorporate the different facets associated with a MOEA (including, as discussed above, decision making, robustness of the solutions and reduction of the number of objectives and computation times), together with the software/calculations required to evaluate specific aspects of the design and optimization problem (such as, for example, flow and heat transfer, aerodynamics, or structural mechanics). Also, even though the concept was developed having mainly polymer processing in mind, it is sufficiently flexible to be applicable to other engineering problems. As a matter of fact, the linkage to diverse engineering disciplines is mainly performed via the tools used to evaluate the solutions (which can vary from modeling software yielding quantitative data to qualitative aesthetical or comfort judgments). As seen in Figure 1, the methodology necessarily involves analysis, modeling and optimization steps. One should begin with the identification of the problem characteristics, namely what the major objectives and constraints are, what the main process parameters are and whether automated calculation tools are available to provide solutions for subsequent evaluation, or whether qualitative and empirical knowledge should also apply. An optimization step using a MOEA will follow, with the aim of obtaining a good approximation to the Pareto front. The DM defines the relative importance of the various objectives via weights and, depending on his degree of confidence (which can be defined in the algorithm), the MOEA will be able to move towards Pareto frontiers of different sizes [16]. The robustness analysis may suggest fewer solutions. If the problem has many objectives, a reduction of their number will be attempted [29]. Depending on the time spent on the evaluation of each objective function, the hybridization of the MOEA with function approximation algorithms can also be adopted [25, 26]. The final result of this stage will consist of Pareto frontiers making explicit the trade-off between the different objectives and the decision variables (i.e., the parameters to be optimized). The solutions will subsequently be presented to the DM in graphical form. This is a key step when non-quantifiable objectives exist, or when empirical knowledge must also be taken


into consideration. The DM identifies search space regions (in the objectives domain) that may satisfy his requirements in terms of the design (in the decision variables domain) - these are denoted as DM1 and DM2 in Figure 1. In the following step, the methodology is used to tackle the inverse process, i.e., to determine the set of weights (one per objective) corresponding to the search space regions selected by the DM. This information is incorporated into the MOEA to generate new, improved solutions. The process is repeated until the DM is satisfied with the results.

Figure 1. Integrated MO-MDO system.

2.2. Multi-Objective Evolutionary Algorithms

Following the ideas presented in Chapter 3, a MOEA named Reduced Pareto Set Genetic Algorithm, RPSGA, has been developed by the authors [12]. Since it will be used later in this chapter, it makes sense to present it here in some detail. In the RPSGA, the homogeneous distribution of the population along the Pareto frontier and the improvement of the solutions along successive generations are achieved through the use of a clustering technique, which is applied to reduce the number of solutions on the efficient frontier [12]. The structure of the scheme is illustrated in Algorithm 1 below. Initially, an empty external population, pe, and an empty archive are formed (line 1) and an internal population, pi, of N candidate solutions is randomly generated (line 2). At each generation, that is, while a termination condition is not satisfied, the following operations are performed:


Algorithm 1: RPSGA
1 Initialize pe (external population) and Archive to empty set
2 pi is a randomly generated, initial population (internal)
3 while (termination condition not satisfied)
4   Evaluate pi
5   Evaluate individuals' fitness considering clustering
6   Copy best individuals to pe
7   if (external population full)
8     pe ← Clustering pe
9     Copy best individuals of pe to pi
10  endif
11  Select individuals for reproduction
12  Apply Inver-over operator to selected pairs of individuals
13  Add non-dominated solutions to Archive
14 end while
15 Filter Archive
16 Return Archive
end

i) the candidate solutions of pi are evaluated by the simulation routine (line 4);
ii) a clustering technique (Algorithm 2) is applied to reduce the number of solutions on the efficient frontier and to compute the fitness of each individual of pi (line 5);
iii) a fixed number of the best individuals are copied to pe (line 6);
iv) if pe is full, the clustering technique (Algorithm 2) is applied again to sort the individuals of pe (line 8) and a pre-defined number of the best individuals from pe are incorporated into pi to replace the lowest fitness individuals (line 9);
v) if pe is not full, individuals of pi are selected (line 11) for the application of the inver-over operator (line 12);
vi) all non-dominated solutions found during the computations are copied to the archive (line 13);
vii) all non-dominated solutions of the archive are returned after filtering it.

Algorithm 2 starts with the definition of the number of ranks, NRanks (line 1), and the rank of each individual i, Rank[i], is set to 0 (line 2). For each rank, r, the population is reduced to NR individuals (i.e., NR is the number of individuals of each rank), using the clustering technique (lines 5 and 6). Then, rank r is attributed to these NR individuals (line 7) until the number of pre-defined ranks is reached (line 8). Finally, the fitness of individual i, Fi, is calculated using a linear ranking function (line 9).

Algorithm 2: Clustering
1 Definition of NRanks
2 Rank[i] = 0
3 r = 1
4 do
5   NR = r (N/NRanks)
6   Reduce the population down to NR individuals
7   r = r + 1
8 while (r < NRanks)
9 Calculate fitness
10 end
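The clustering reduction invoked in line 6 is detailed in [12]; the Python sketch below uses a simple nearest-pair elimination rule purely as a hypothetical stand-in for that reduction, in order to make the rank-assignment mechanics of Algorithm 2 concrete. The function names and the merging rule are our own, not those of the original algorithm.

import math

def reduce_population(objs, target):
    """Stand-in for the clustering reduction of [12]: repeatedly drop one
    member of the closest pair (in objective space) until `target` remain."""
    alive = list(range(len(objs)))
    while len(alive) > target:
        i, j = min(((a, b) for a in alive for b in alive if a < b),
                   key=lambda p: math.dist(objs[p[0]], objs[p[1]]))
        alive.remove(j)   # keep one representative of the closest pair
    return alive

def rpsga_ranks(objs, n_ranks):
    """Assign each solution the first reduction pass (rank) in which it is
    retained; assumes len(objs) >= n_ranks."""
    N = len(objs)
    rank = [n_ranks] * N
    for r in range(1, n_ranks + 1):
        for i in reduce_population(objs, int(r * N / n_ranks)):
            rank[i] = min(rank[i], r)
    return rank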

In the computations below, the parameter settings that have been recommended in [12] are used. Hence, pi and pe have 100 and 200 individuals, respectively; a roulette wheel strategy is adopted for selection; the probability for applying the inver-over operator is 0.8. For more details and other parameter settings we refer to the original publication [12].

3. DECISION MAKING

3.1. Current Methods

For effective application, the result of a multi-objective optimization process should consist of a single solution, not the set of solutions belonging to the Pareto front. In order to come up with this unique answer, at some point during the optimization the DM must decide what the relative importance of the various objectives is [10]. Consequently, the final outcome of a multi-objective optimization problem results not only from optimization (i.e., from the search process), but also from a decision course of action. In this respect, existing multi-objective optimization methods [30, 31] are usually classified as no-preference or preference-based, depending on the consideration given to the relative importance of the objectives [32]. In the first case, the problem is solved regardless of that relative importance and the solution(s) obtained is/are made available to the DM, who accepts or rejects it/them. In the second case, the preferences of the DM are introduced in the search procedure and, in principle, the solution that best satisfies his preferences is selected [30]. There are at least three opportunities to introduce the DM preferences into the optimization scheme (see also Chapter 3): i) the various individual objectives are aggregated into a single objective, it being necessary to decide a priori how that combination is realized; ii) decision making and optimization are intertwined, i.e., after an optimization step, the DM provides preference information on the set of available solutions, so that the optimization algorithm can proceed with the search; iii) the set of objectives is optimized simultaneously to obtain non-dominated vectors, the best solution being selected by integrating the DM preferences [10]. The open literature presents various methods of combining the search process with the DM preferences [10, 30], some being described in more detail in Chapter 3. For example, the


decision making strategy proposed by Fonseca and Fleming [33] is based on pre-defined goals and priorities. If, in a problem with $n$ objectives, a preference vector $g = [(g_{1,1}, \ldots, g_{1,n_1}), \ldots, (g_{p,1}, \ldots, g_{p,n_p})]$, where $\sum_{i=1}^{p} n_i = n$, is defined by the DM, then the sub-vectors $g_i$ of the

preference vector associate priorities $i$ and goals $g_{i,j_i}$ with the corresponding objective functions $f_{i,j_i}$. In this strategy, the comparison operator is structured taking into account, at each priority level, the components of the objective vector that do not meet their goals. Thus, the objective vectors u and v are compared in terms of their components with the highest priority p, disregarding those in which u_p meets the corresponding goals. When both vectors meet all goals with this priority, or if they violate at least some of them but in exactly the same way, the next priority level (p-1) is considered. The process continues until priority 1 is reached and satisfied, in which case the result is decided by comparing the priority 1 components of the two vectors. The implementation of this method requires the progressive articulation of preferences, with the consequent changes in the environment (i.e., the location of the solutions in the objective space), and a permanent interaction with the decision maker. Also, it requires the definition of two sets of parameters, namely the goals and the priorities for each objective, which is not an easy task. Other decision methods can be mentioned, including weighted metrics [30], marginal rate of substitution [30], pseudo-weight vector [10], utility functions [34], biased sharing [10], guided domination [35], weighted domination [36] and the reference point based EMO [37]. Several difficulties may arise when applying these methods to real problems (again, see also Chapter 3). For example, some of the Pareto optimal solutions proposed by weighted metrics may not exist, depending on the problem's degree of non-convexity [30]. The marginal rate of substitution method requires a high computational effort [10]. The pseudo-weight vector does not perform satisfactorily for non-convex fronts, especially when a high importance is attributed to one of the objectives [10]; it is also very complex, as a weight vector must be calculated for each solution. Generally, in most of these methods the DM needs to define various algorithm parameters, which requires a priori a good knowledge of the Pareto front characteristics [30]. For illustrative purposes, the Weighted Sum Method (one of the weighted metrics approaches cited above) is applied here, as it is one of the simpler available schemes, transforming a problem with N objectives into a single objective optimization problem as follows:

$$\text{Maximize} \quad F(\mathbf{x}) = \sum_{i=1}^{N} w_i f_i(\mathbf{x}) \quad \text{subject to} \quad \mathbf{x} \in S \qquad (1)$$

where $\mathbf{w} = (w_1, \ldots, w_N)$ is a weight vector representing the relative importance of each objective. The method can be used either a priori or a posteriori, depending on whether the DM expresses his preferences before or after the Pareto set approximation has been generated, respectively. In the first case, the optimization of a single objective is carried out, while in the second case the DM selects the best solution from the Pareto front obtained by a multi-objective optimization method. For a problem with two objectives to be minimized, as


shown in Figure 2, the technique consists of shifting vertically upwards a straight line, having a slope (-w1/w2) given by the ratio between the weights attributed to each objective (w1 and w2, respectively), until it becomes tangent to the Pareto front contour. Each different pair w1, w2 generates a new solution. It is evident that, even if the ratio of w1 to w2 changes extensively, the concave part of the Pareto front will never be reached.

Figure 2. Application of the Weighted Sum Method to a non-convex Pareto-optimal front.
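In the a posteriori setting, Eq. (1) reduces to scanning the approximated front for the best weighted sum. A minimal Python sketch (the function name is ours; the maximization convention follows Eq. (1)):

def weighted_sum_choice(front, weights):
    """Select the front member maximizing Eq. (1). As discussed above,
    points in concave regions of the front can never be selected."""
    return max(front, key=lambda f: sum(w * fi for w, fi in zip(weights, f)))

# Example: weighted_sum_choice([(1.0, 3.0), (2.0, 2.5), (3.0, 1.0)], (0.5, 0.5))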

3.2. Weighted Stress Function Method

In an attempt to circumvent the limitations of the methods discussed in the previous section, a different scheme is presented and analysed here. The Weighted Stress Function Method, WSFM [16], integrates the DM preferences once the search has been concluded (thus, the method is used a posteriori), which means that search and decision are sequential. WSFM is based on the assumptions that the best solution satisfying the DM preferences must belong to the Pareto frontier, i.e., to the set of non-dominated solutions, and that the selection must take into consideration an ideal objective vector (denoted as Z*) that maximizes each of the objective functions. The individual optimization of each objective corresponds to the maximum value of the global objective. The relative importance attributed to each individual objective will induce a “stress” to search for solutions that maximize each of the different objectives, the best solution being the one with zero stress. The concept is based on the typical stress-strain behavior of thermoplastic vulcanizate polymer materials, TPV. A typical structure of a TPV consists of a high volume fraction (0.40 … If $M_w > M_c^*$, an entangled regime is assumed. The viscosity is then defined by a

Carreau-Yasuda equation:

$$\eta = \eta_0 \left[ 1 + (\lambda \dot{\gamma})^a \right]^{(n-1)/a} \qquad (8)$$

where $\eta_0$ is the zero shear viscosity, $\lambda$ a characteristic time, $n$ the power law index and $a$ the Yasuda parameter. The zero shear viscosity, $\eta_0$, and the characteristic time, $\lambda$, are functions of temperature, molecular weight and concentration of the polymer in its monomer, according to:

$$\eta_0 = A \exp\left[ \frac{E_v}{R} \left( \frac{1}{T} - \frac{1}{T_0} \right) \right] M_w^{3.4}\, C_{ac}^{\alpha_v} \qquad (9)$$

$$\lambda = B \exp\left[ \frac{E_v}{R} \left( \frac{1}{T} - \frac{1}{T_0} \right) \right] M_w^{1.75}\, C_{ac}^{\alpha_t} \qquad (10)$$

where $E_v$ is an activation energy and $\alpha_v$ and $\alpha_t$ are two constants. Coupled equations (1) to (10) characterize the kinetics and the evolving rheological behavior of polycaprolactone during polymerization. As explained before, for modeling purposes a two-step procedure is adopted:

• a first backwards calculation is performed from the die exit towards the hopper, using the rheological properties of the polymerized polycaprolactone, to provide a first estimation of the temperature profile and of the filled and partially filled sections of the extruder;

• then, a second forward calculation is carried out where, in each screw profile sub-element, the following coupling is performed:

- from the local values of residence time and temperature, conversion along the sub-element is computed (Eq. 4);
- this conversion defines a new molecular weight (Eq. 5) and then a new viscosity (Eq. 7 or 8);
- from the new viscosity, the new pressure gradient (solving the flow equations) and the new temperature (solving the heat transfer equation) are calculated in the next sub-element. The thermal balance includes the reaction exothermicity, according to:

$$\Delta T = \frac{1}{\rho\, C_p\, Q} \left[ h_b S_b (T_b - T) + h_s S_s (T_s - T) + \dot{W} + \rho\, Q\, \Delta H\, \Delta C \right] \qquad (11)$$


where ΔT is the temperature change along a sub-element, ρ the density, Cp the heat capacity, Q the volumetric feed rate, hb and hs the heat transfer coefficients towards barrel and screw, Sb and Ss the corresponding exchange surfaces, Tb and Ts the barrel and screw temperatures, $\dot{W}$ the dissipated power, ΔH the reaction enthalpy and ΔC the change in conversion rate along the sub-element. Figures 2 and 3 present an example of the results produced by the Ludovic© software, for polymerization under fixed processing conditions (screw speed N = 100 rpm, flow rate Q = 3 kg/h). Figure 2 shows how temperature and cumulative residence time evolve axially. Temperature increases rapidly, essentially by heat transfer from the barrel, until reaching the barrel temperature. A further increase takes place at the die, where the viscosity attains its maximum (see Figure 3) due to viscous dissipation and reaction exothermicity. The residence time increases mainly in the kneading blocks and in the die, which has a large free volume.
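To fix ideas, the following Python sketch evaluates Eqs. (8)-(11) using the constants listed later in Section 3.2 for the ε-caprolactone system. It is only a sketch: the molecular-weight exponents (3.4 and 1.75) are our reading of the source and should be checked against [15], and all function and variable names are ours.

import math

R = 8.314                      # gas constant, J/(mol K)
A, B = 1.35e-17, 1.7e-20       # pre-factors of Eqs. (9) and (10)
Ev = 40e3                      # activation energy, J/mol
alpha_v, alpha_t = 4.4, 4.1    # concentration exponents
n, a = 0.52, 1.05              # power-law index and Yasuda parameter
T0 = 413.0                     # reference temperature, K

def viscosity(gamma_dot, T, Mw, Cac):
    """Carreau-Yasuda viscosity of the reacting medium, Eqs. (8)-(10);
    the Mw exponents (3.4, 1.75) are assumptions, see the text."""
    arrhenius = math.exp(Ev / R * (1.0 / T - 1.0 / T0))
    eta0 = A * arrhenius * Mw**3.4 * Cac**alpha_v        # Eq. (9)
    lam = B * arrhenius * Mw**1.75 * Cac**alpha_t        # Eq. (10)
    return eta0 * (1.0 + (lam * gamma_dot)**a) ** ((n - 1.0) / a)  # Eq. (8)

def delta_T(T, Tb, Ts, hb, Sb, hs, Ss, W_dot, rho, Cp, Q, dH, dC):
    """Thermal balance over one screw sub-element, Eq. (11)."""
    return (hb * Sb * (Tb - T) + hs * Ss * (Ts - T)
            + W_dot + rho * Q * dH * dC) / (rho * Cp * Q)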

Figure 2. Changes in temperature and cumulative residence time along the screws. Reprinted from [15], with permission.


Figure 3. Changes in viscosity and conversion rate along the screws. Reprinted from [15], with permission.

124

António Gaspar-Cunha, José Antonio Covas, Bruno Vergnes et al.

Figure 3 depicts the corresponding changes in conversion rate and viscosity along the screw axis. The calculations begin just ahead of the first block of kneading discs, where the reagents are injected. As the temperature remains constant after the injection point, the conversion rate follows approximately the evolution of the residence time and is complete at the end of the screws. It is worth remarking the huge variation in viscosity along the extruder, from 10⁻⁴ to 10³ Pa·s.

2.3. Starch Cationization

Starch is a polysaccharide largely applied in food and non-food packaging applications. Modified starches are widely used for paper, adhesives, textiles, cosmetics and pharmaceutical products, among others [27]. Starch cationization is one of the most important starch modifications. In the paper-making industry, cationic starches can increase the strength, filler and fines retention, as well as the pulp drainage rate. Sizing agents based on cationic starches present advantages due to their ionic attraction to cellulose fibers. Starch cationization consists of the substitution of the hydroxyl groups of the glycosyl units by amino, ammonium, sulfonium, or phosphonium groups, able to carry a positive charge [28]. The degree of substitution, DS, indicates the average number of sites per anhydroglucose unit carrying substituent groups. As there are three hydroxyl groups in each anhydroglucose unit, the maximum degree of substitution is 3. Usually, cationic starches used in the paper industry have DS between 0.02 and 0.05. In a previous experimental study [29], some of the authors used wheat starch plasticized with 40% water (on a dry basis) and 2,3-epoxypropyltrimethylammonium chloride (commercialized as Quab© 151) as reagent. The reaction is shown in Figure 4.


Figure 4. Reaction scheme of starch cationization.

The theoretical degree of substitution, DSth, is the value corresponding to a reaction efficiency of 100%. DSth is the molar ratio between reagent and anhydroglucose monomer and, in the experiments, it defines the target to reach; it also allows adjusting the values of the starch and reagent flow rates. The effective degree of substitution, DS, is estimated on


extruded samples, by measuring the nitrogen content with a Kjeldahl method. The reaction efficiency, RE, is defined by DS/DSth. The kinetic scheme of starch cationization can be written as follows, considering a second order reaction [30]:

$$A + B \xrightarrow{\;k\;} C$$

where A is the starch (S-OH), B the reagent (Quab© 151) and C the cationic starch (S-O-R⁺Cl⁻), so that:

$$-\frac{d[A]_t}{dt} = -\frac{d[B]_t}{dt} = \frac{d[C]_t}{dt} = k\,[A]_t\,[B]_t \qquad (12)$$

Expressing the reagent concentration $[B]_t$ as a function of that of the starch hydroxyl groups, $[A]_t$, and of the initial concentrations, $[A]_0$ and $[B]_0$, and integrating, the following equation is obtained [30]:

$$[A]_t = \frac{\frac{[A]_0}{[B]_0}\,([A]_0 - [B]_0)\,\exp\left[ k\,([A]_0 - [B]_0)\,t \right]}{\frac{[A]_0}{[B]_0}\,\exp\left[ k\,([A]_0 - [B]_0)\,t \right] - 1} \qquad (13)$$

The theoretical degree of substitution is:

$$DS_{th} = 3\,\frac{[B]_0}{[A]_0} \qquad (14)$$

and the relationship between degree of substitution and starch concentration is:

$$DS = 3\,\frac{[C]_t}{[A]_0} = 3\,\frac{[A]_0 - [A]_t}{[A]_0} = \frac{DS_{th}}{[B]_0}\left( [A]_0 - [A]_t \right) \qquad (15)$$
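A short Python sketch chaining Eqs. (13)-(15) may be useful for checking how DS approaches DSth with residence time; the function and variable names are ours, and the formula assumes $[A]_0 \neq [B]_0$.

import math

def cationization_state(k, A0, B0, t):
    """Second-order starch cationization kinetics, Eqs. (13)-(15).
    Returns (DS, RE) at time t; assumes A0 != B0."""
    E = math.exp(k * (A0 - B0) * t)
    At = (A0 / B0) * (A0 - B0) * E / ((A0 / B0) * E - 1.0)   # Eq. (13)
    DS_th = 3.0 * B0 / A0                                     # Eq. (14)
    DS = DS_th / B0 * (A0 - At)                               # Eq. (15)
    return DS, DS / DS_th                                     # RE = DS/DS_th

As a consistency check, at t = 0 the function returns DS = 0, and for long times (with the reagent as the limiting species) DS tends to DSth, i.e., RE tends to 1.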

As the reaction does not significantly change the native starch structure, one may assume that the cationization reaction does not modify the starch viscosity. In fact, this has been experimentally confirmed [31]. Consequently, in this case the viscous behavior can be simply described by a temperature-dependent power law. In the experiments performed, it has been shown that DS or RE decrease with feed rate and increase with screw speed, barrel temperature and the restrictive character of the screw profile (expressed in terms of the number of kneading blocks) [29]. All these experiments were simulated using the reactive version of Ludovic©, with the kinetic scheme presented above [19, 20]. Figure 5 summarizes the data: whatever the screw profile, the processing

126

António Gaspar-Cunha, José Antonio Covas, Bruno Vergnes et al.

conditions and the reagent used (a second reagent, Quab© 188, was also tested), the agreement between the calculated and measured reaction efficiencies is excellent. 100

Figure 5. Comparison between calculated and measured reaction efficiency (Quab© 151 and Quab© 188). Reprinted from [19], with permission.

3. EXAMPLES OF THE OPTIMIZATION OF REACTIVE EXTRUSION

3.1. Optimization Algorithm

The Multi-Objective Evolutionary Algorithm, MOEA, described in Chapter 4, can be applied to tackle the case studies discussed below. It is worth recalling that it consists of a modified version of the Reduced Pareto Set Genetic Algorithm, RPSGA, adapted to deal with the discrete characteristics of the twin-screw configuration problem (see the explanation in Chapter 5). The RPSGA parameters used are identical for the two chemical reactions, except for the number of generations studied (50 for ε-caprolactone polymerization, 30 for starch cationization). The main and elitist populations had 100 and 200 individuals, respectively. A roulette wheel selection strategy, a crossover probability of 0.8, a mutation probability of 0.05, 30 ranks and limits of indifference of the clustering technique all equal to 0.01 were chosen (see Chapter 5 for more details). The global aim of the optimization examples is to define the best operating conditions and/or screw configurations yielding the best process performance (quantified in terms of the minimization of the average melt temperature, specific mechanical energy and viscous dissipation, and the maximization of output and average strain), while assuring the complete chemical conversion of the reactions involved. In the case of ε-caprolactone polymerization, process modeling was performed with the Ludovic software, while the modeling of starch cationization was made with the TwinXtrud software.
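To make the role of these parameters concrete, the sketch below shows how they would enter a generic generational loop. It is only a minimal Python illustration: the actual RPSGA (Chapters 4 and 5) additionally maintains the elitist population and applies Pareto ranking and clustering, which are not reproduced here, and evaluate, crossover and mutate are user-supplied placeholders.

```python
import random

# Illustrative parameter set matching the values quoted in the text
PARAMS = dict(pop_size=100, elite_size=200, generations=50,
              p_crossover=0.8, p_mutation=0.05, n_ranks=30, indifference=0.01)

def roulette_wheel(population, fitness):
    """Select one individual with probability proportional to its (non-negative) fitness."""
    total = sum(fitness)
    pick, acc = random.uniform(0.0, total), 0.0
    for ind, f in zip(population, fitness):
        acc += f
        if acc >= pick:
            return ind
    return population[-1]

def evolve(population, evaluate, crossover, mutate, params=PARAMS):
    """Generic generational loop using the quoted selection/variation settings."""
    for _ in range(params["generations"]):
        fitness = [evaluate(ind) for ind in population]
        offspring = []
        while len(offspring) < params["pop_size"]:
            a = roulette_wheel(population, fitness)
            b = roulette_wheel(population, fitness)
            child = crossover(a, b) if random.random() < params["p_crossover"] else a
            if random.random() < params["p_mutation"]:
                child = mutate(child)
            offspring.append(child)
        population = offspring
    return population
```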


3.2. ε-Caprolactone Polymerization

Table 1 summarizes the optimization runs performed for this reactive extrusion process. They can be classified in three groups. Group 1 (runs 1 to 4) concerns the optimization of the screw configuration. Group 2 (runs 5 to 8) aims at optimizing simultaneously the output and the screw configuration. Finally, in the third group (runs 9 to 12) the full set of operating conditions (i.e., output, screw speed and barrel temperature) and the screw configuration are optimized concurrently. Every run considers two objectives at once, the associated aim of the optimization and range of variation of each being defined in Table 2. The solutions for groups 2 and 3 (runs 5 to 12) are taken as valid only if the conversion rate (CR) is higher than 99.9%. As for the screw configuration, the aim is to define the best axial location of the 10 screw elements identified in Table 3 with numbers 1 to 10. Four screw elements (two at the beginning and two at the end of the screw profile, respectively) are maintained at their original locations, due to the need to ensure enough conveying capacity in the initial process stages, as well as pressure generation upstream of the die. The diameter D and the overall L/D are compatible with those of an existing Leistritz LSM 30-34 laboratory modular extruder. The initial monomer has a melt density of 1030 kg·m⁻³, a melting point of 70ºC, a specific heat of 1700 J·kg⁻¹·K⁻¹, and a thermal conductivity of 0.2 W·m⁻¹·K⁻¹. Depending on the critical molecular weight, the viscosity is either defined by a Newtonian law (equation 7) or by a Carreau-Yasuda law (equation 8), in this case exhibiting a complex dependence on temperature, shear rate, molecular weight and concentration, as described in section 2.1 (equations 9 and 10). The values used for the constants in these equations are: k₀ = 2.24×10⁻⁵, B = 1.7×10⁻²⁰, a = 1.05, n = 0.52, A = 1.35×10⁻¹⁷, Ev = 40 kJ·mol⁻¹, αv = 4.4, αt = 4.1 and T₀ = 413 K.

Table 1. Optimization runs for ε-caprolactone polymerization.

Group | Run | Q (kg/hr) | N (rpm)  | Tb (ºC)   | Screw Configuration | Objectives
1     | 1   | 10        | 100      | 190       | 10 elements         | Texit, CR
1     | 2   | 10        | 100      | 190       | 10 elements         | SME, CR
1     | 3   | 10        | 100      | 190       | 10 elements         | Tmax/Tb, CR
1     | 4   | 10        | 100      | 190       | 10 elements         | AvgStrain, CR
2     | 5   | [3-30]    | 100      | 190       | 10 elements         | Q, Texit
2     | 6   | [3-30]    | 100      | 190       | 10 elements         | Q, SME
2     | 7   | [3-30]    | 100      | 190       | 10 elements         | Q, Tmax/Tb
2     | 8   | [3-30]    | 100      | 190       | 10 elements         | Q, AvgStrain
3     | 9   | [3-30]    | [50-200] | [140-220] | 10 elements         | Q, Texit
3     | 10  | [3-30]    | [50-200] | [140-220] | 10 elements         | Q, SME
3     | 11  | [3-30]    | [50-200] | [140-220] | 10 elements         | Q, Tmax/Tb
3     | 12  | [3-30]    | [50-200] | [140-220] | 10 elements         | Q, AvgStrain

Table 2. Optimization objectives, aim of optimization and range of variation.

Objective                                | Aim      | Range of variation
Output (Q), kg/hr                        | Maximize | [3-30]
Temperature at die exit (Texit), ºC      | Minimize | [140-240]
Specific Mechanical Energy (SME), MJ/kg  | Minimize | [0.1-2]
Viscous dissipation (Tmax/Tb)            | Minimize | [0.5-1.5]
Average strain (AvgStrain)               | Maximize | [1000-15000]
Conversion rate (CR), %                  | Maximize | [0-100]

Figure 6 shows the initial and the final (50th generation) non-dominated solutions for the optimization of the screw configuration - runs 1 to 4. It is clear that the randomly generated initial solutions evolve towards better solutions with higher performance, even though the operating conditions were kept constant. The highest conversion rates are attained at the cost of deteriorating the remaining optimization objectives, i.e., of increasing Texit, SME and viscous dissipation and of decreasing the average strain. This is not surprising, since the reaction is accelerated when the temperature is increased. Points 1 to 8 in Figure 6 identify the solutions (that is, the screw configurations) that improve each of the objectives dealt with in each run. For example, point 1 indicates the solution that maximizes the conversion rate, CR, while point 2 shows the one that minimizes Texit.
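Non-dominated (Pareto) filtering of this kind can be expressed compactly. The sketch below, written for objective vectors in which every component is to be minimized (maximized objectives such as CR, Q or AvgStrain can simply be negated), is a generic illustration and not the clustering-based reduction used by the RPSGA:

```python
def dominates(u, v):
    """True if objective vector u Pareto-dominates v (all objectives minimized)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def non_dominated(points):
    """Extract the non-dominated set (Pareto frontier) from a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example: (Texit, -CR) pairs, both minimized after negating CR
frontier = non_dominated([(196.4, -99.9), (196.2, -95.0), (196.8, -99.9)])
```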

Figure 6. Pareto frontiers for group 1 (runs 1 to 4).

Table 3. Individual screw elements used in the optimization of ε-caprolactone polymerization (L is length, P is pitch; KB 30 indicates a block of kneading discs with a staggering angle of 30º; the thickness of the kneading discs is 7.5 mm).

Element | Beginning of screw | 1   | 2   | 3   | 4   | 5     | 6     | 7     | 8     | 9     | 10    | End of screw
L (mm)  | 307; 120           | 120 | 120 | 120 | 120 | 22.5  | 22.5  | 22.5  | 22.5  | 15    | 22.5  | 120; 67.5
P (mm)  | 20; 45             | 20  | 30  | 45  | 30  | KB 30 | KB 30 | KB 30 | KB 30 | KB 30 | KB 30 | 20; 30

Table 4. Optimal screw configurations for the eight solutions identified in Figure 6, runs 1 to 4 (each entry gives L (mm)/P (mm); KB 30 denotes a kneading block with a 30º staggering angle).

Point | Beginning of screw | 1          | 2          | 3          | 4          | 5          | 6          | 7          | 8          | 9          | 10         | End of screw
1     | 307/20, 120/45     | 120/30     | 15/KB 30   | 120/20     | 22.5/KB 30 | 120/30     | 22.5/KB 30 | 22.5/KB 30 | 120/45     | 22.5/KB 30 | 22.5/KB 30 | 120/20, 67.5/30
2     | 307/20, 120/45     | 22.5/KB 30 | 120/30     | 120/20     | 120/45     | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 120/30     | 22.5/KB 30 | 15/KB 30   | 120/20, 67.5/30
3     | 307/20, 120/45     | 22.5/KB 30 | 120/30     | 120/20     | 22.5/KB 30 | 22.5/KB 30 | 120/30     | 15/KB 30   | 22.5/KB 30 | 22.5/KB 30 | 120/45     | 120/20, 67.5/30
4     | 307/20, 120/45     | 120/20     | 22.5/KB 30 | 120/45     | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 120/30     | 22.5/KB 30 | 15/KB 30   | 120/30     | 120/20, 67.5/30
5     | 307/20, 120/45     | 120/30     | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 120/20     | 120/30     | 15/KB 30   | 22.5/KB 30 | 22.5/KB 30 | 120/45     | 120/20, 67.5/30
6     | 307/20, 120/45     | 120/30     | 22.5/KB 30 | 120/30     | 22.5/KB 30 | 120/20     | 120/45     | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 15/KB 30   | 120/20, 67.5/30
7     | 307/20, 120/45     | 120/20     | 22.5/KB 30 | 22.5/KB 30 | 120/30     | 120/45     | 120/30     | 15/KB 30   | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 120/20, 67.5/30
8     | 307/20, 120/45     | 120/20     | 22.5/KB 30 | 120/45     | 120/30     | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 120/30     | 15/KB 30   | 120/20, 67.5/30


These screw configurations are detailed in Table 4. The optimization algorithm locates the first restrictive element as early as possible along the screw shaft (positions 1 or 2); since these types of elements increase the local residence time and temperature, this probably allows the polymerization reaction to start as early as possible. However, the justification of the choices made by the algorithm to locate the remaining restrictive elements is not always evident. In fact, it would require an analysis of the changes of all the calculated data along the various screw profiles (pressure, temperature, residence time, conversion rate, ...), which obviously lies beyond the scope of the present text. Following the same reasoning for the computational results of group 2, Figure 7 shows the non-dominated solutions of the initial and final populations for runs 5 to 8, i.e., when the output is optimized together with the screw configuration, whilst ensuring that all the solutions generate a conversion rate of at least 99.9%. Evidently, the Pareto frontiers of Figure 7 are different from those of Figure 6, because the output could now be varied. For example, in the case of run 5 lower temperatures could be attained, because lower outputs could be set. Exit and maximum temperatures increase with output, probably due to higher viscous dissipation, while the average strain decreases due to the shorter residence time. Run 6 aims at maximizing the output and minimizing SME. As the latter varies linearly with the ratio N/Q (screw speed/output) in fully filled zones (which are those that contribute most significantly to SME), these objectives are not conflicting and there is no further gain in varying the output. Consequently, the Pareto frontier is reduced to only a few points.

Figure 7. Pareto frontiers for group 2 (runs 5 to 8).

Table 5. Optimal screw configurations and output for the eight solutions identified in Figure 7, runs 5 to 8 (each entry gives L (mm)/P (mm); KB 30 denotes a kneading block).

Point | Beginning of screw | 1          | 2          | 3          | 4          | 5          | 6          | 7          | 8          | 9          | 10         | End of screw    | Q (kg/hr)
1     | 307/20, 120/45     | 120/30     | 15/KB 30   | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 120/30     | 120/20     | 120/45     | 22.5/KB 30 | 22.5/KB 30 | 120/20, 67.5/30 | 23.3
2     | 307/20, 120/45     | 120/30     | 22.5/KB 30 | 22.5/KB 30 | 120/20     | 120/30     | 22.5/KB 30 | 22.5/KB 30 | 120/45     | 22.5/KB 30 | 15/KB 30   | 120/20, 67.5/30 | 3.1
3     | 307/20, 120/45     | 120/45     | 15/KB 30   | 120/30     | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 120/30     | 22.5/KB 30 | 22.5/KB 30 | 120/20     | 120/20, 67.5/30 | 23.4
4     | 307/20, 120/45     | 120/30     | 22.5/KB 30 | 120/45     | 120/20     | 22.5/KB 30 | 22.5/KB 30 | 120/30     | 15/KB 30   | 22.5/KB 30 | 22.5/KB 30 | 120/20, 67.5/30 | 23.4
5     | 307/20, 120/45     | 22.5/KB 30 | 120/45     | 22.5/KB 30 | 120/30     | 15/KB 30   | 22.5/KB 30 | 120/30     | 120/20     | 22.5/KB 30 | 22.5/KB 30 | 120/20, 67.5/30 | 23.4
6     | 307/20, 120/45     | 120/30     | 22.5/KB 30 | 120/45     | 120/20     | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 120/30     | 15/KB 30   | 22.5/KB 30 | 120/20, 67.5/30 | 5.9
7     | 307/20, 120/45     | 22.5/KB 30 | 22.5/KB 30 | 120/20     | 120/30     | 22.5/KB 30 | 120/30     | 22.5/KB 30 | 15/KB 30   | 120/45     | 22.5/KB 30 | 120/20, 67.5/30 | 21.3
8     | 307/20, 120/45     | 22.5/KB 30 | 120/20     | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 15/KB 30   | 120/30     | 120/30     | 22.5/KB 30 | 120/45     | 120/20, 67.5/30 | 3.0

Table 6. Optimal screw configurations and operating conditions for runs 9 to 12 (each entry gives L (mm)/P (mm); KB 30 denotes a kneading block).

Point | Beginning of screw | 1          | 2          | 3          | 4          | 5          | 6          | 7          | 8          | 9          | 10         | End of screw    | Q (kg/hr) | N (rpm) | Tb (ºC)
1     | 307/20, 120/45     | 22.5/KB 30 | 120/30     | 22.5/KB 30 | 120/30     | 120/45     | 120/20     | 22.5/KB 30 | 22.5/KB 30 | 15/KB 30   | 22.5/KB 30 | 120/20, 67.5/30 | 23.3 | 199 | 188
2     | 307/20, 120/45     | 22.5/KB 30 | 22.5/KB 30 | 120/30     | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 120/45     | 120/30     | 15/KB 30   | 120/20     | 120/20, 67.5/30 | 3.2  | 59  | 167
3     | 307/20, 120/45     | 120/45     | 22.5/KB 30 | 22.5/KB 30 | 120/20     | 120/30     | 120/30     | 22.5/KB 30 | 22.5/KB 30 | 15/KB 30   | 22.5/KB 30 | 120/20, 67.5/30 | 23.4 | 200 | 191
4     | 307/20, 120/45     | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 120/30     | 120/20     | 120/45     | 120/30     | 15/KB 30   | 22.5/KB 30 | 22.5/KB 30 | 120/20, 67.5/30 | 5.9  | 57  | 220
5     | 307/20, 120/45     | 22.5/KB 30 | 120/20     | 15/KB 30   | 120/30     | 120/45     | 120/30     | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 120/20, 67.5/30 | 23.3 | 200 | 205
6     | 307/20, 120/45     | 22.5/KB 30 | 22.5/KB 30 | 120/30     | 120/20     | 22.5/KB 30 | 120/45     | 22.5/KB 30 | 15/KB 30   | 22.5/KB 30 | 120/30     | 120/20, 67.5/30 | 14.8 | 186 | 216
7     | 307/20, 120/45     | 120/45     | 22.5/KB 30 | 120/30     | 22.5/KB 30 | 15/KB 30   | 22.5/KB 30 | 120/20     | 22.5/KB 30 | 120/30     | 22.5/KB 30 | 120/20, 67.5/30 | 22.9 | 200 | 197
8     | 307/20, 120/45     | 120/45     | 22.5/KB 30 | 120/30     | 120/20     | 120/30     | 15/KB 30   | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 22.5/KB 30 | 120/20, 67.5/30 | 3.1  | 200 | 198


If the solutions that improve each of the objectives dealt with in each run are identified (points 1 to 8 in Figure 7), the corresponding screw configurations and output can be defined (see Table 5). Some solutions attain significant outputs, of up to more than 23 kg/hr. Experimental practice has demonstrated that this is indeed the practical limit of the Leistritz twin-screw extruder, since under these conditions the screws work fully filled and/or the maximum motor amperage is attained. Nevertheless, the explanation of the optimized profiles for the different runs is far from evident. This also means that an optimization design based solely on empirical knowledge, or on direct modeling, would have little probability of finding screw designs with this level of performance. In group 3 (runs 9 to 12 in Table 1), the screw profile and the operating parameters (Q, N, Tb) are optimized at the same time. Table 6 shows the optimal solutions for this third group, where:

- for run 9, points 1 and 2 indicate the solutions that maximize output, Q, and minimize Texit, respectively;
- for run 10, points 3 and 4 indicate the solutions that maximize Q and minimize SME, respectively;
- for run 11, points 5 and 6 indicate the solutions that maximize Q and minimize Tmax/Tb, respectively;
- for run 12, points 7 and 8 indicate the solutions that maximize output Q and average strain, AvgStrain, respectively.

The maximum screw speed is systematically chosen when the objective is to maximize output. When the exit temperature or SME are to be minimized, a low screw speed is chosen (59 and 57 rpm, respectively), which implies a very low output (to avoid the screws becoming fully filled). Barrel temperature is low when the exit temperature is to be minimized, but it is fixed at its maximum value to minimize SME, as the viscosity is at its lowest level under these conditions. To maximize strain, a high screw speed and a low output are imposed, which is perfectly reasonable, as the strain varies according to the N/Q ratio. Table 7 shows the optimized values of the different objectives for the three groups of runs, i.e., as more degrees of freedom are progressively allowed in the optimization. For example, a considerable gain is obtained for output and average strain when geometrical and operating parameters are optimized at once. In group 1, output was fixed at 10 kg/hr, which is far from the maximum attainable value (circa 23 kg/hr). When Q also became an optimization variable (groups 2 and 3), output improved significantly. Also, from group 2 to group 3, the minimization of Texit, SME and viscous dissipation improved further, the gains relative to group 1 reaching 12.6, 64.1 and 3.7%, respectively. Finally, Figure 8 shows the influence of the [M0]/[I0] ratio on the optimization results. In order to generate the data, runs 9 and 10 of Table 1 were repeated, but using different values of the [M0]/[I0] ratio (400 and 800). The figure shows the optimal Pareto frontiers in the objectives domain, taking into account that output is used both as a decision variable and as an objective to be maximized. As seen above, an increase in [M0]/[I0] induces a slower reaction. Consequently, to reach the desired conversion ratio, the barrel temperature has to be raised, mainly when operating at high outputs, as the total residence time is shorter. The screw speed should also be increased in order to promote viscous heating; however, this


yields a higher exit temperature and a higher SME. The screw speed/output correlation is independent of the [M0]/[I0] ratio. The minimization of the SME (run 10) is obtained at the cost of increasing the barrel temperature (to reduce the viscosity), almost independently of the value of the [M0]/[I0] ratio.

Table 7. Best results of the objectives for groups 1 to 3.

Objective                    | Group 1: value (run) | Group 2: value (run) | variation (%) | Group 3: value (run) | variation (%)
CR (%) - Maximization        | 99.9 (1)             | 99.9 (5-8)           | 0.0           | 99.9 (12)            | 90.0
Q (kg/hr) - Maximization     | 10 (1-4)             | 23.3 (5)             | 133.0         | 23.3 (9)             | 133.0
Texit (ºC) - Minimization    | 196.3 (1)            | 192.6 (5)            | 1.8           | 171.5 (9)            | 12.6
SME (MJ/kg) - Minimization   | 1.447 (2)            | 1.221 (6)            | 15.6          | 0.52 (10)            | 64.1
Tmax/Tb - Minimization       | 1.09 (3)             | 1.13 (7)             | -3.7          | 1.05 (11)            | 3.7
AvgStrain - Maximization     | 4517 (4)             | 10067 (8)            | 122.9         | 9921 (12)            | 119.6


Figure 8. Influence of the [M0]/[I0] ratio (400, 800 and 1000) on the optimization results: Texit, SME, screw speed and barrel temperature as functions of output. Run 9: a) to c); Run 10: d) to f).

Table 8. Individual screw elements used for the optimization of starch cationization (L is length, P is pitch; LH indicates a left-handed screw element, KB denotes a block of kneading discs staggered at -45º).

Position                    | L (mm) / P (mm)
Beginning of screw (fixed)  | 250/33.3 and 50/25
Elements 1 to 16 (movable)  | conveying: 2 x 50/33.3, 2 x 50/16.6, 4 x 25/25, 2 x 25/16.6; left-handed: 2 x 25/LH 12.5, 1 x 25/LH 16.6; kneading blocks: 3 x 50/KB -45º
End of screw (fixed)        | 25/16.6


3.3. Starch Cationization

Starch cationization will take place in a Clextral BC21 (Clextral, France) laboratory scale co-rotating twin-screw extruder (screw length of 900 mm). Table 8 presents the individual elements that are available to assemble the screws. As for the Leistritz extruder, the elements close to the hopper and to the screw tip were retained in their original locations. The reagent is injected at the end of the second element, i.e., at an axial distance of 300 mm. Seven optimization runs were carried out, as illustrated in Table 9, which identifies the decision variables (i.e., the parameters to be optimized) and the optimization objectives. The aim and range of variation of the latter are shown in Table 10. A restriction on the maximum admissible temperature was always applied (Tmax < 165ºC), while for run 7 the solutions were only taken as valid when SME was lower than 0.72 MJ/kg. In every case, the screw speed N, the barrel temperature Tb, the reagent injection point and its amount were kept constant and equal to N = 400 rpm, Tb = 130ºC, 300 mm and 0.107 Qstarch, respectively, where Qstarch is the starch feed rate. As explained above, the concentration of reagent can be expressed as a theoretical degree of substitution (DSth). It was also explained that, since a starch glycosyl unit has three hydroxyl groups, the maximum degree of substitution DS is 3. Although the cationic starches used in the paper industry usually have DS values in the range 0.02-0.05, the value of 0.1 for DSth was selected because it is more difficult to reach and, hence, the optimization exercise is more interesting.

Table 9. Optimization runs for starch cationization.

Run | Q (kg/hr) | Screw Configuration | Objectives
1   | 2.5       | 16 elements         | RE, SME
2   | 5         | 16 elements         | RE, SME
3   | 10        | 16 elements         | RE, SME
4   | 20        | 16 elements         | RE, SME
5   | 40        | 16 elements         | RE, SME
6   | [2.5-40]  | 16 elements         | RE, SME
7   | [2.5-40]  | 16 elements         | RE, Q

Runs 1 to 5 involve the optimization of the screw geometry for different values of output, taking into consideration the reaction efficiency (RE) and the specific mechanical energy (SME), which are to be maximized and minimized, respectively (Table 10). In contrast, runs 6 and 7 tackle at the same time the screw configuration and the output optimization, in order to maximize RE and minimize SME (run 6), or to maximize RE and the output Q (run 7).

Table 10. Optimization objectives, aim of optimization and range of variation.

Objective                                | Aim      | Range of variation
Output (Q), kg/hr                        | Maximize | [2.5-40]
Specific Mechanical Energy (SME), MJ/kg  | Minimize | [0-1.5]
Reaction Efficiency (RE), %              | Maximize | [0-100]
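The validity conditions quoted above (Tmax < 165ºC for all runs, SME < 0.72 MJ/kg for run 7, and CR > 99.9% in the polymerization case) act as feasibility constraints on top of the objectives. A minimal way to handle them, sketched below in Python, is to discard infeasible candidates before the Pareto ranking; the dictionary-based interface is an assumption for illustration, not the data structure used by the actual code:

```python
def is_feasible(solution, t_max=165.0, sme_max=None, cr_min=None):
    """Return True if a simulated solution respects the process constraints.

    solution : dict with the computed responses, e.g.
               {"Tmax": 158.2, "SME": 0.61, "CR": 99.95}
    t_max    : maximum admissible melt temperature (ºC)
    sme_max  : optional SME ceiling (MJ/kg), e.g. 0.72 for run 7
    cr_min   : optional minimum conversion rate (%), e.g. 99.9
    """
    if solution["Tmax"] >= t_max:
        return False
    if sme_max is not None and solution["SME"] >= sme_max:
        return False
    if cr_min is not None and solution["CR"] <= cr_min:
        return False
    return True

# Example: keep only the candidates that qualify for run 7
population = [{"Tmax": 158.2, "SME": 0.61, "CR": 99.95},
              {"Tmax": 171.0, "SME": 0.55, "CR": 99.99}]
valid = [s for s in population if is_feasible(s, sme_max=0.72)]
```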


Figure 9 shows the initial population and the non-dominated solutions of the 30th generation for runs 1 to 5. As before, the algorithm is able to evolve towards better solutions along the successive generations. As expected, an increase in output necessarily implies a decrease in the reaction level (i.e., RE decreases), regardless of the screw configuration (which was also optimized). The corresponding screw configurations (for maximizing RE) in these 5 runs are described in Table 11. In all cases, the optimization algorithm places a left-handed (LH) element as far upstream as possible (positions 1 or 2), in order to melt the starch rapidly. The remaining restrictive elements are located mostly at positions 13 to 16. In contrast, the minimization of SME is achieved mainly as a result of the higher melt temperature (and thus lower viscosity) resulting from higher outputs.

Figure 9. Optimizing starch cationization. Initial population and non-dominated solutions at the 30th generation, runs 1 to 5.

Table 11. Optimal screw configurations maximizing RE of starch cationization, runs 1 to 5 (each entry gives L (mm)/P (mm); LH denotes a left-handed element, KB a kneading block staggered at -45º).

Run | Beginning of screw | 1          | 2          | 3          | 4          | 5          | 6          | 7          | 8          | 9          | 10         | 11         | 12         | 13         | 14         | 15         | 16         | End of screw
1   | 250/33.3, 50/25    | 25/25      | 25/LH 12.5 | 50/33.3    | 25/16.6    | 50/16.6    | 25/16.6    | 25/25      | 50/16.6    | 50/33.3    | 50/KB -45º | 25/25      | 25/25      | 25/LH 12.5 | 50/KB -45º | 50/KB -45º | 25/LH 16.6 | 25/16.6
2   | 250/33.3, 50/25    | 25/16.6    | 25/LH 12.5 | 25/25      | 25/25      | 50/16.6    | 50/33.3    | 25/25      | 25/LH 16.6 | 50/33.3    | 50/16.6    | 25/16.6    | 50/KB -45º | 25/25      | 50/KB -45º | 50/KB -45º | 25/LH 12.5 | 25/16.6
3   | 250/33.3, 50/25    | 25/LH 12.5 | 50/16.6    | 25/16.6    | 25/25      | 25/25      | 25/25      | 50/33.3    | 50/16.6    | 25/16.6    | 50/33.3    | 25/LH 16.6 | 25/25      | 25/LH 12.5 | 50/KB -45º | 50/KB -45º | 50/KB -45º | 25/16.6
4   | 250/33.3, 50/25    | 25/25      | 25/LH 16.6 | 50/33.3    | 50/16.6    | 25/25      | 25/16.6    | 50/16.6    | 25/25      | 25/25      | 50/KB -45º | 25/16.6    | 50/33.3    | 50/KB -45º | 25/LH 12.5 | 25/LH 12.5 | 50/KB -45º | 25/16.6
5   | 250/33.3, 50/25    | 25/LH 16.6 | 50/33.3    | 25/25      | 50/33.3    | 25/16.6    | 50/16.6    | 50/KB -45º | 25/25      | 25/25      | 50/KB -45º | 50/16.6    | 25/16.6    | 25/25      | 25/LH 12.5 | 50/KB -45º | 25/LH 12.5 | 25/16.6

Table 12. Optimal screw configurations for starch cationization, runs 6 and 7 (each entry gives L (mm)/P (mm); LH denotes a left-handed element, KB a kneading block staggered at -45º).

Point | Beginning of screw | 1          | 2          | 3          | 4          | 5          | 6          | 7          | 8          | 9          | 10         | 11         | 12         | 13         | 14         | 15         | 16         | End of screw
1     | 250/33.3, 50/25    | 25/16.6    | 25/LH 12.5 | 25/16.6    | 25/25      | 50/33.3    | 25/25      | 50/16.6    | 50/33.3    | 25/25      | 50/KB -45º | 25/25      | 25/LH 16.6 | 50/KB -45º | 50/16.6    | 25/LH 12.5 | 50/KB -45º | 25/16.6
2     | 250/33.3, 50/25    | 25/16.6    | 25/LH 12.5 | 25/25      | 50/33.3    | 50/33.3    | 25/16.6    | 25/25      | 25/25      | 50/16.6    | 25/LH 16.6 | 25/25      | 50/KB -45º | 25/LH 12.5 | 50/KB -45º | 50/16.6    | 50/KB -45º | 25/16.6
3     | 250/33.3, 50/25    | 25/25      | 25/LH 12.5 | 25/25      | 25/LH 16.6 | 50/33.3    | 50/33.3    | 25/25      | 25/16.6    | 25/25      | 50/KB -45º | 50/16.6    | 25/16.6    | 50/16.6    | 50/KB -45º | 50/KB -45º | 25/LH 12.5 | 25/16.6
4     | 250/33.3, 50/25    | 25/25      | 25/LH 12.5 | 25/25      | 50/33.3    | 50/33.3    | 25/16.6    | 50/16.6    | 50/16.6    | 25/25      | 25/16.6    | 25/25      | 50/KB -45º | 25/LH 16.6 | 50/KB -45º | 50/KB -45º | 25/LH 12.5 | 25/16.6


The various optimal Pareto frontiers obtained for different outputs seem to define an asymptotic limiting curve in Figure 9. This should be confirmed by run 6, which comprises the optimization of the output together with that of the screw configuration. Figure 10 presents the Pareto frontiers in the objectives domain for runs 6 and 7. In fact, the shape of the Pareto frontier for run 6 is identical to that of runs 1 to 5. Table 12 presents the screw configurations for solutions 1 to 4 in Figure 10. Again, the optimal screws have a restrictive (left-handed) element at the beginning of the screw (position 2), in order to melt the starch as soon as possible. Screws 1 and 2, where the aim is to minimize SME, are apparently similar in geometrical terms, but attain quite different output levels (3 and 39 kg/hr, respectively). These results are in accordance with those for runs 1 to 5, since the lowest SME is obtained at high outputs, when the temperatures are also high and, consequently, the viscosity is lower. Run 7 involves a distinct optimization exercise. In the case of the screw configurations that maximize the feed rate (point 4 in Figure 10 and Table 12), it is advisable to locate the restrictive elements as far downstream as possible. Such a screw geometry should maximize the melt temperature increase due to viscous heating, which might balance the reduction in residence time that results from increasing outputs. Conversely, the screw profiles maximizing RE yield low outputs (around 3 kg/hr) and have a very different construction: the restrictive elements are generally separated by conveying elements and are located near the melting section.


Figure 10. Optimal Pareto frontiers for starch cationization, runs 6 and 7.

CONCLUSION

The optimization studies presented in this chapter demonstrated the potential of multi-objective evolutionary algorithms for optimizing the screw configuration and the processing conditions for specific applications in reactive extrusion, an important technology used to generate advanced polymer systems. The use of the Reduced Pareto Set Genetic Algorithm enabled the identification of feasible solutions, with satisfactory physical sense, even when conflicting objectives were selected.


REFERENCES

[1] Xanthos, M. Reactive Extrusion: Principles and Practice; Hanser: Munich, 1992.
[2] Baker, W., Scott, C., Hu, G.H. Reactive Polymer Blending; Hanser: Munich, 2001.
[3] Cassagnau, P., Bounor-Legaré, V., Fenouillot, F. Intern. Polym. Proc. 2007, 22, 218-258.
[4] Vergnes, B., Berzin, F. Plast., Rubber, Comp.: Macromol. Eng. 2004, 33, 409-415.
[5] Valette, R., Coupez, T., Vergnes, B. Proceedings of the 24th Annual Meeting of the Polymer Processing Society, 2008, CD-Rom.
[6] Berzin, F., Hu, G.H. In Techniques de l'Ingénieur, AM 3654, Paris, 2004, 1-16.
[7] Jongbloed, H.A., Kiewiet, J.A., Van Dijk, J.H., Janssen, L.P.B.M. Polym. Eng. Sci. 1995, 35, 1569-1579.
[8] Vergnes, B., Della Valle, G., Delamare, L. Polym. Eng. Sci. 1998, 38, 1781-1792.
[9] Vergnes, B., Souveton, G., Delacour, M.L., Ainser, A. Intern. Polym. Proc. 2001, 16, 351-362.
[10] Carneiro, O.S., Covas, J.A., Vergnes, B. J. Appl. Polym. Sci. 2000, 78, 1419-1430.
[11] Berzin, F., Vergnes, B., Lafleur, P.G., Grmela, M. Polym. Eng. Sci. 2002, 42, 473-481.
[12] Lozano, T., Lafleur, P.G., Grmela, M., Vergnes, B. Intern. Polym. Proc. 2003, 18, 12-19.
[13] Delamare, L., Vergnes, B. Polym. Eng. Sci. 1996, 36, 1685-1693.
[14] Teixeira, C., Faria, R., Covas, J.A., Gaspar-Cunha, A. Modelling Flow and Heat Transfer in Co-Rotating Twin-Screw Extruders. In 10th Esaform Conference on Material Forming, Cueto, E., Chinesta, F., Eds.; Zaragoza, Spain, April 2007; 907, 957.
[15] Poulesquen, A., Vergnes, B., Cassagnau, Ph., Gimenez, J., Michel, A. Intern. Polym. Proc. 2001, 16, 31-38.
[16] Berzin, F., Vergnes, B., Dufossé, P., Delamare, L. Polym. Eng. Sci. 2000, 40, 344-356.
[17] Berzin, F., Vergnes, B., Canevarolo, S.V., Machado, A.V., Covas, J.A. J. Appl. Polym. Sci. 2006, 99, 2082-2090.
[18] Berzin, F., Vergnes, B. Intern. Polym. Proc. 1998, 13, 13-22.
[19] Berzin, F., Tara, A., Tighzert, L., Vergnes, B. Polym. Eng. Sci. 2007, 47, 112-119.
[20] Berzin, F., Tara, A., Vergnes, B. Polym. Eng. Sci. 2007, 47, 814-823.
[21] Chalamet, Y., Taha, M., Berzin, F., Vergnes, B. Polym. Eng. Sci. 2002, 42, 2317-2327.
[22] Dubois, Ph., Ropson, N., Jérôme, R., Teyssié, Ph. Macromol. 1996, 29, 1965-1975.
[23] Gimenez, J., Boudris, M., Cassagnau, Ph., Michel, A. Polym. React. Eng. 2000, 8, 135-157.
[24] Gimenez, J., Boudris, M., Cassagnau, Ph., Michel, A. Intern. Polym. Proc. 2000, 15, 20-27.
[25] Gimenez, J., Cassagnau, Ph., Fulchiron, R., Michel, A. Macromol. Chem. Phys. 2000, 201, 479-490.
[26] Gimenez, J., Cassagnau, Ph., Michel, A. J. Rheol. 2000, 44, 527-548.
[27] Solarek, D.B. In Modified Starches: Properties and Uses, Würzburg, O.B., Ed.; CRC Press: Boca Raton, FL, 1986; Chap. 8.
[28] Rutenberg, M.W., Solarek, D.B. Starch Derivatives: Production and Uses. In Starch: Chemistry and Technology, 2nd ed., Whistler, R.L., BeMiller, J.N., Paschall, E.F., Eds.; Academic Press: Orlando, FL, 1984; Chap. 10.
[29] Tara, A., Berzin, F., Tighzert, L., Vergnes, B. J. Appl. Polym. Sci. 2004, 93, 201-208.
[30] Ayoub, A., Berzin, F., Tighzert, L., Bliard, C. Starch 2004, 56, 513-519.
[31] Berzin, F., Tara, A., Tighzert, L. Appl. Rheol. 2007, 17, 1-7.

In: Optimization in Polymer Processing, ISBN: 978-1-61122-818-2; Editors: A. Gaspar-Cunha and J. A. Covas, pp. 143-165. ©2011 Nova Science Publishers, Inc.

Chapter 7

THE AUTOMATIC DESIGN OF EXTRUSION DIES AND CALIBRATION/COOLING SYSTEMS

João Miguel Nóbrega and Olga Sousa Carneiro

Institute for Polymers and Composites/I3N, University of Minho, Campus de Azurém, 4800-058 Guimarães, Portugal

1. INTRODUCTION

Thermoplastic profiles have large-scale application in the construction [1-3], medical [4-6], and electric and electronic industries [7], among others. The term profile is commonly used to designate products of constant cross section that are obtained by the extrusion process. A typical extrusion line for the production of thermoplastic profiles generally comprises an extruder, a die, a calibration/cooling system, a haul-off unit and a saw. The forming tools are the die and the calibration/cooling system, whose main functions are, respectively, to shape the polymer melt into a cross section similar to that specified for the profile and to establish its final dimensions while cooling it down to a temperature that guarantees its mechanical integrity. The major objective of any extrusion line is to produce the required profile at the highest rate and quality [8]. These goals are usually conflicting, i.e., an increase in speed generally affects the product quality negatively, and vice-versa. Consequently, the improvement of the extrusion line performance demands a systematic approach and a careful study of the phenomena involved in the process [9], particularly those concerning the design of its critical components (forming tools): the extrusion die and the calibrator(s).

The extrusion die plays a central role in the establishment of the product dimensions, morphology and properties [10]. The difficulties to be faced in the design of an extrusion die are closely related to the complexity of the profile to be produced. In fact, while the design of these forming tools for the production of a rod or pipe is almost straightforward, in the case of an intricate window profile it can be an extremely complex process.

From the geometrical point of view, the extrusion die flow channel must convert a circular cross section, corresponding to the melt leaving the extruder, into a shape similar to that of the profile. This geometrical transformation should be performed as smoothly as possible, in order to avoid problems caused by stagnation points or abrupt melt accelerations. A typical extrusion die flow channel is generally composed of three geometrically distinct zones [2,10]:

- Adapter: circular zone with the same diameter as the extruder barrel, where the breaker plate and filter, when used, are accommodated;
- Transition zone: where the transition between the initial (adapter) and final (parallel) zones takes place. For a rod die its geometry is extremely simple; however, for intricate profiles it can be very complex, and it has a determinant influence on the downstream melt flow distribution;
- Parallel zone: final zone of the extrusion die, consisting of a flow channel with constant cross section. It has an important role in the establishment of the final product properties and processing conditions, since it is responsible for a significant part of the total pressure drop [11]. Its main purpose is to allow the polymer melt to relax the deformations imposed upstream [12,13], making the whole process less sensitive to eventual oscillations of the processing conditions and minimizing the extrudate swell occurring at the die exit [14].

The achievement of a high performance is only possible through the use of a properly designed die, which should [10]: i) enable the production of the desired profile; ii) induce a minimum level of internal stresses; iii) avoid the occurrence of rheological defects; iv) avoid the thermal degradation of the melt. To fulfill those objectives, in addition to general issues like the control over both the total pressure drop and the viscous heat dissipation, the design of these extrusion tools must address the following:

- Rheological defects: during the melt flow through the die channel, flow instabilities may occur, affecting the extrudate quality and, eventually, giving rise to an unacceptable product. The two most critical ones in profile extrusion are sharkskin and melt fracture [15,16]. Several criteria have been proposed for the triggering of these flow defects, it being consensual that they occur when a critical value (of normal stress/recoverable extensional strain or shear stress/recoverable shear strain) is reached. As a consequence, the stresses developed both at the die parallel zone and at the convergent regions should not exceed the critical values of the polymer melt [17,18]. Consequently, during the design stage the onset of sharkskin will establish the maximum flow rate, whereas the onset of melt fracture will determine the maximum convergence angle of the transition zone (see the sketch after this list).

- Post-extrusion phenomena: in addition to the shape changes originated at the die channel and at the calibration/cooling system, other changes taking place along the extrusion line must also be considered [19,20], namely:
  a. Extrudate-swell: due to the velocity profile rearrangement and to the elasticity of the polymer melt, the extrudate cross section dimensions increase, leading to distortion when non-axisymmetric cross section geometries are considered [10,21]. This effect decreases with increasing parallel zone length, down to a minimum limit value [22-24];
  b. Draw-down: since the profile must be pulled by the haul-off unit with a velocity higher than its average velocity at the die exit, its cross section is stretched [20];
  c. Shrinkage: this effect is promoted by the decrease of the material specific volume that occurs during cooling.
  The geometry of the extrusion die flow channel should anticipate these post-extrusion effects through adequate corrections, which should be especially focused on its parallel zone.

- Flow balance: in a properly designed extrusion die, the flow should be distributed in a way that allows the production of the required profile geometry [10,25]. This problem is especially critical for profiles comprising walls of different thicknesses, which promote different flow restrictions. In this case, the melt flow directs naturally to the thicker sections, thus originating a flow lower than required in the thinner sections. This problem can be solved by a proper design of the extrusion die flow channel, which must compensate for the differences in flow restrictions [13,26].
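As an illustration of the sharkskin criterion mentioned in the first item, the following Python sketch estimates the wall shear stress of a power-law melt in a slit-shaped die land and checks it against a critical stress. The slit formula with the Rabinowitsch-type correction is standard, but the numerical values and the critical stress below are illustrative assumptions, not data from this chapter.

```python
def slit_wall_shear_stress(Q, W, h, m, n):
    """Wall shear stress of a power-law melt (eta = m * gamma_dot**(n - 1))
    flowing at volumetric rate Q through a slit of width W and gap h."""
    gamma_w = (6.0 * Q / (W * h**2)) * (2.0 * n + 1.0) / (3.0 * n)  # corrected wall shear rate
    return m * gamma_w**n

# Illustrative check: stay below an assumed critical stress for sharkskin onset
tau_crit = 1.4e5   # Pa, hypothetical critical value for a given melt
tau_w = slit_wall_shear_stress(Q=2e-6, W=0.05, h=0.002, m=8e3, n=0.4)
flow_rate_ok = tau_w < tau_crit
```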

There are two main alternative approaches used to balance the flow in profile extrusion dies: one involving changes in the cross-section of the flow channel parallel zone, or die land [26-29]; the other keeping the parallel zone cross-section, but involving modifications of the die land length [24,30-35]. Based on experimental knowledge, some authors state that the optimization techniques based on adjustments of the parallel zone cross-section generate more robust dies, i.e., dies less sensitive to variations in process conditions and/or polymer rheology [28,36]. The second design approach, based on parallel zone length control, may be insufficient to entirely balance geometries with severe differences in flow restriction [37]. The use of flow separators may bypass this problem [32,34,38], but it gives rise to the formation of weld lines, which may jeopardize the mechanical performance of the extruded profile [27,39,40].

The problems pointed out so far justify the difficulties in attaining the required shape and dimensions of a new profile and the need for an additional forming tool: the calibrator. In fact, the calibration/cooling step has a double objective: it prescribes the final dimensions of the profile, while cooling it down fast, to solidify its outer layers and ensure sufficient rigidity during the remaining cooling steps [41]. Calibration may be carried out by applying either internal pressure or external vacuum. It can also be wet and/or dry, i.e., involving either direct or indirect contact between the cooling medium (generally, water) and the hot profile, respectively [9,42]. Usually, several calibrators are used in series, separated by relatively short air zones [43,44], where the temperature tends to equalize, thus contributing to the minimization of the internal thermally induced stresses and to an increase of the heat transfer efficiency in the next calibrator. For high-speed profile extrusion, vacuum assisted dry calibration has proved to be particularly suitable, due to its reliability [9]. The design of these tools is also complex, and the parameters that influence their thermal performance can be grouped as follows:

- system geometry: number of calibrating units, unit length, separating distance and layout of the cooling channels (the latter involves such quantities as number, diameter, type of arrangement, distance between consecutive channels and distance to the profile) [42,45];
- cooling conditions: temperature of the inlet water, flow rate, flow direction and wet versus dry contact with the profile [41];
- vacuum conditions: number and location of the vacuum holes and vacuum pressure;
- extrusion conditions: mass throughput and cross-temperature field of the profile at the die exit;
- polymer thermo-physical properties: thermal diffusivity and thermal expansion coefficient;
- properties of the calibrator material: thermal conductivity and surface roughness;
- profile cross-section: thicknesses, number and location of hollow sections, etc.

In this chapter, the state-of-the-art on the design of extrusion forming tools will be reviewed; also, the current approach developed by the authors to automatically carry out these tasks will be described and illustrated with a case study.

2. STATE-OF-THE-ART

Due to the large number of phenomena and restrictions involved, the geometrical complexity of some profiles and the complex rheological behavior of polymer melts, extrusion die design was, and still is, more an art than a science [25,46]. The design process is usually based on trial-and-error procedures, which are strongly dependent on the designer's knowledge and experience [46,47], often requiring several trials. As a consequence, the design process is usually very time, material and equipment consuming, affecting product price and performance [48], since it does not guarantee the achievement of an optimum solution. The design of profile extrusion dies has been studied for quite a long time [26], and there are various works that gather the accumulated experience and contain practical rules for improved design of these tools [13,26,38,49-53]. In some publications these recommendations appear combined with analytical calculations, leading to semi-empirical approaches that give the designer a better insight into the phenomena involved [10,38,54-56]. Following the proposed methodologies, a complex flow channel cross section may be subdivided into simpler sections that are treated separately, using analytical relations between throughput and pressure drop [25,57-61]. However, it was shown that these models have a limited range of application, due to the complexity of the geometries usually involved and to the limitations inherent to the necessarily simple constitutive equations that are employed [62,63]. The progressive development of computational fluid dynamics and the availability of accurate numerical codes have allowed designers to use the computer as a powerful aid [48,64-67], enabling the use of more accurate rheological models [68] in order to achieve significant improvements in the resulting tools [32,69,70]. Due to recent developments in this area, nowadays it is possible to model the flow of molten polymers both inside and outside the flow channel, i.e., with a free surface boundary condition [3,71-76], using


three-dimensional modeling codes [77,78]. However, even using these powerful modeling tools to assist the design process, the generation of the successive trial solutions and the decisions necessarily involved in the process are still committed to the designer. Most of the available numerical tools can be used with non-isothermal rheological constitutive equations, needed to predict the melt temperature increase promoted by viscous dissipation, which demands the inclusion of the temperature effect on both the flow and heat transfer characteristics [79]. In terms of the constitutive equation, Generalized Newtonian models are usually employed for the design of real tools [80-82], instead of the viscoelastic models that are supposed to be more representative of the polymer melt behavior. This is a consequence of the heavy computational resources demanded to solve real problems with this type of models [83,84], of the lack of reliable viscoelastic rheological models [63,83,85,86] and of the difficulties to be faced in the required experimental characterization of the melt [87]. For historical reasons, most of the computational tools used in polymer processing are based on the finite element method [75,88-90]. However, the calculation times required for 3D problems, even with Generalized Newtonian constitutive models, are usually prohibitive and demand heavy computational resources [24,63,91]. These computational requirements are particularly critical when optimization algorithms are used, since the searching process requires the resolution of a large number of trials to reach the final solution. As a consequence, 3D numerical calculations have so far been used only to solve some specific problems [80,82,92,93]; additionally, general design techniques, such as the network based flow analysis [24,94-99] and the cross section methods [29,32,80,91,100,101], are always a compromise between accuracy and swiftness of calculations [102,103]. Anyway, 3D computations are generally considered mandatory to capture adequately all the flow details [80,104-107], especially for complex geometries, which is the case of most extrusion dies for the production of profiles. Recently, some works have reported the application of the finite-volume method [83,108], widely used in traditional Computational Fluid Dynamics, claiming that it is not as demanding, in terms of computational resources, as methods based on finite elements, and that it presents a number of advantages for fully 3D calculations [109,110]. Therefore, there is no reason why the finite volume technique should not be as successful as finite element based techniques in the field of polymer processing. It was the need for a design process less dependent on personal knowledge that motivated the development of the automatic die design concept [81,111]. Optimization algorithms have until now been applied only to the design of dies for the production of simple geometries, like sheet or tubular profiles, since in these cases the search process is easy to systematize [11,102,110]. Currently, some research groups are making efforts to develop more general procedures [103,113].

As far as calibration/cooling systems are concerned, they have attracted relatively little attention in the scientific literature, despite their obvious practical relevance. Most available reports concern the calculation of the time evolution of the extrudate temperature [44,45,114], the exception being the work of Fradette et al. [115], in which the model previously developed [45] was integrated in an optimization routine used to determine the optimal location and size of the cooling channels. Furthermore, the existing results are either qualitative or concentrate on a few variables [114,116], ignoring, for instance, the effect of the boundary conditions. The first attempts to model the cooling of plastic profiles or pipes were made during the seventies and eighties with 1D models (see, for example, [43,117,118]), which were only applicable to idealized conditions, such as uniform cooling and uniform thickness extrudates.


Menges et al. [119] developed a 2D FEM approach that could deal with any extrudate cross-section, but ignored axial heat fluxes. The inclusion of axial diffusion was addressed by Sheehy et al. [45], who proposed the Corrected Slice Model (CSM), a hybrid 2D model that can cope with the three-dimensionality introduced by the axial heat fluxes. Other 2D modeling studies of extrudate cooling addressed further specific aspects, such as the inclusion of more realistic boundary conditions for the heat exchange within the internal cavities of hollow profiles [116,120], or the prediction of sag flow in thick wall pipes [121]. A major difficulty facing the modeling of the cooling process is the selection of an adequate heat transfer coefficient, h, between the profile surface and the cooling medium (i.e., calibrator internal walls, water or air), which must include the effect of the contact resistance. It was experimentally shown that h can vary between 10 and 10000 W/m²·K [122], depending on the location along the calibration system. Other authors estimate h empirically, considering the local effectiveness of the contact between the profile and the calibrator from observations of the wear pattern of the calibrator [114]. Finally, values of h can also be estimated using an inverse problem solving strategy, i.e., determining the values of the coefficient that match numerical simulations with the corresponding experimentally measured temperature fields [44,123]. The current authors performed a thorough study on the influence of geometrical, polymer, process and operational parameters on the performance of calibration/cooling systems [124], where it was concluded that the most influential parameters are the heat transfer coefficient and the splitting of the calibration/cooling system into several independent units. This study motivated others, where the influence of the length of each calibration unit and of the distance between them (annealing zone length) [125], as well as of the cooling water temperature in each calibration unit [126], was assessed. The interesting set of results gathered in the referred studies induced the natural next step, i.e., the use of an optimization approach to design calibration/cooling systems [127,128], which will also be presented and illustrated in a case study.

3. OPTIMIZATION METHODOLOGY

In general terms, the extrusion die and calibration/cooling system design codes consist of an iterative process comprising five main steps, schematically represented in Figure 1.

3.1. Problem Setup

This is the only step that requires user intervention and consists of the definition of the geometrical inputs of the problem, processing conditions, material properties (polymer, or polymer and calibrator construction metal) and boundary conditions.

Extrusion dies

In this case, the profile cross section geometry must be defined, followed by its division into elemental sections (ES) and into regions corresponding to ES intersections (IS) [113], as illustrated in Figure 2.


Figure 1. Optimization methodology flowchart: problem setup → pre-processor (geometry and mesh generation) → modelling code (3D field calculation) → performance evaluation → modification of the controllable parameters, repeated until the optimum is reached.

Figure 2. Division of the profile cross section into elemental (ES) and intersection (IS) sections, showing the PZ thickness parameter for all ES.


Then, the extrusion die flow channel must be parameterized, to systematize the geometrical modifications automatically demanded by the optimization code. Following the proposed methodology, and in order to extract parameters from the geometry, the extrusion die flow channel must be divided into four main functional zones: adapter (A), transition zone (TZ), pre-parallel zone (PPZ) and parallel zone (PZ), or die land. The two last functional zones are illustrated in Figure 3. As mentioned before, the adapter, transition zone and parallel zone are present in almost all conventional dies. The newly inserted functional zone, the PPZ, should have a shape similar to that of the PZ, but with a higher thickness. Consequently, it is possible to parameterize the geometry of each ES with the following parameters: distance to the die exit, or length of constant thickness (Li), angle of convergence (θi) and PZ thickness (ti) (see Figures 2 and 3). Modifications of these parameters will change the flow channel geometry and, therefore, the melt flow distribution at the die exit. Hence, the aim of the optimization algorithm is to search for the set of parameters that promotes the best flow distribution.
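A direct way to picture this parameterization is as a vector of per-ES records that the optimizer is allowed to modify. The Python sketch below is only illustrative: the class name, field names and values are assumptions chosen to mirror the symbols Li, θi and ti used in the text.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ElementalSection:
    """Controllable geometry of one elemental section (ES) of the flow channel."""
    L: float      # length of constant thickness, Li (mm)
    theta: float  # convergence angle of the PPZ/PZ transition, theta_i (degrees)
    t: float      # parallel zone thickness, ti (mm)

# One trial die = the set of ES parameters; the optimizer perturbs these values
trial_channel: List[ElementalSection] = [
    ElementalSection(L=30.0, theta=30.0, t=2.0),
    ElementalSection(L=15.0, theta=45.0, t=1.0),
    ElementalSection(L=25.0, theta=35.0, t=3.0),
]
```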


Figure 3. Flow channel of a profile extrusion die: main functional zones and ES geometrically controllable parameters (length and angle of convergence): (a) complete flow channel and (b) section view.

To finalize this step, the user must select the strategy to be adopted to control the flow in each ES (based on the control of either the lengths or the thicknesses).

Calibration/cooling systems

For the calibration/cooling systems problem, the definition of the geometry of the system encompasses the plastic profile and the corresponding calibrator(s) cross-sections, including the layout and dimensions of the cooling channels (common to all existing calibration units), as well as the distance from the extrusion die to the first calibration unit (D01), shown in Figure 4. As constraints, the maximum number of calibration units and the total length available for the calibration/cooling system (L) can be defined. The final solution encompasses the number of calibration units to be used, their lengths (LCi) and the distances between two consecutive units (Dij). Modifications of these last parameters will change the layout of the calibration/cooling system and, thus, the evolution of the profile temperature along it. Therefore, in this case, the objective of the optimization algorithm is to search for the set of parameters that promotes the required level of cooling, with the maximum uniformity.
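In the same spirit as the die parameterization above, the decision variables D01, LCi and Dij can be gathered in a small record together with the total-length constraint. This is again a hypothetical Python sketch; the names follow the symbols used in the text:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CalibrationLayout:
    """Decision variables of the calibration/cooling system design problem."""
    D01: float        # distance from the die to the first calibration unit (mm)
    LC: List[float]   # length of each calibration unit, LCi (mm)
    D: List[float]    # distance between consecutive units, Dij (mm)

    def total_length(self) -> float:
        return self.D01 + sum(self.LC) + sum(self.D)

    def feasible(self, L_available: float) -> bool:
        # The layout must fit within the total length available for the system
        return self.total_length() <= L_available

layout = CalibrationLayout(D01=20.0, LC=[300.0, 300.0, 300.0], D=[50.0, 50.0])
assert layout.feasible(L_available=1200.0)
```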

Figure 4. Layout of the calibration/cooling system and main controllable parameters, example for a system that comprises 3 individual calibration units.

3.2. Pre-Processor

During the optimization process, a pre-processor is used to automatically generate the computational grid corresponding to the initial trial solution and those corresponding to each of the geometries proposed by the optimization algorithm. In order to improve the efficiency and/or accuracy of the numerical simulations, the pre-processor generates smooth grids, with cell-size continuity, and refines the mesh where severe gradients are expected to occur. The accuracy of the results improves with the mesh refinement, which, however, promotes a dramatic increase in the calculation time. Therefore, and in order to minimize the time required by the optimization processes, the numerical code employs coarse meshes at the initial iterations of the optimization process (allowing a swift approach to the problem solution), which are then progressively refined in order to improve the quality of the proposed solution. For the extrusion dies, since the flow distribution is dominated by the most restrictive zones of the flow channel, i.e., the PPZ and PZ [81,113,131,132], during the optimization process it suffices to model the flow in these zones.

3.3. Flow and Thermal Field Calculation

The code used to model the flow fields developed in extrusion dies, or the heat exchanges during the calibration and cooling stage of profile extrusion, is a 3D one, based on the finite-volume method (FVM). FVM based codes are faster and require less computational resources than their FEM counterparts [133], which is essential for the recurring use demanded by the optimization algorithms. This code is described in several publications [113,124,129] and was already experimentally validated for the case of extrusion die design [130].


3.4. Performance Evaluation

The evaluation of the performance of the trial solutions is carried out through an objective function specifically developed for each type of problem.

Extrusion dies

In this case, the optimization is multi-objective, since the objective function (Fobj) combines two criteria - flow balance and length-to-thickness ratio (L/t). These are affected by different weights, each term being also weighted by the cross section area of the respective section, as in Equation 1:

$$F_{obj} = \sum_{i=1}^{n_z} \left\{ \left[ \psi \left( 1 - \frac{\bar{V}_i}{\bar{V}_{obj,i}} \right)^2 + k\,(1-\psi) \left( 1 - \frac{(L/t)_i}{(L/t)_{min}} \right)^2 \right] \frac{A_{obj,i}}{A_{obj}} \right\} \qquad (1)$$

where:
- nz: total number of ES and IS considered;
- k = 0 for all IS and for ES with (L/t)i >= (L/t)min; k = 1 for ES with (L/t)i < (L/t)min;
- Vi: actual average velocity of the melt flow in each section;
- (L/t)i: ratio between the length and thickness of each ES;
- (L/t)min: minimum value recommended for the ratio L/t (here considered to be 7);
- ψ: relative weight (here taking the value of 0.75, in order to emphasize the importance of the flow balance);
- Aobj, Aobj,i: objective cross section areas of the global flow channel and of each section, respectively;
- Ai: actual cross section area of each section;
- V: global flow average velocity;
- Vobj,i: objective average velocity of the melt flow in each section, given by the continuity equation:

$$\bar{V}_{obj,i} = \bar{V}\,\frac{A_{obj,i}}{A_i} \qquad (2)$$

When using the length control strategy, for example, the optimization algorithm will naturally try to reduce the length (parameter L in Figure 3) of the thinnest sections of the flow channel, in order to decrease the respective local flow restriction. Since the main function of the PZ is to stabilize the polymer melt flow, making the extrusion die less sensitive to variations of the process conditions, short PZ lengths should be avoided. This justifies the inclusion of the L/t component in Fobj, which penalizes geometries with a too short PZ. The objective function is defined in such a way that its value decreases with increasing performance of the die, being zero for a perfectly balanced condition with all the ES lengths in the advisable range.
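To make the evaluation concrete, the following minimal sketch (in Python/NumPy, with assumed array names and function signatures; it is an illustration, not the authors' implementation) computes Fobj from Equations 1 and 2:

```python
import numpy as np

def f_obj(V, V_obj, Lt, A_obj_i, is_ES, Lt_min=7.0, psi=0.75):
    """Objective function of Eq. (1); Lt entries of IS are ignored (k = 0)."""
    A_obj = A_obj_i.sum()                     # global objective cross-section area
    total = 0.0
    for i in range(len(V)):
        balance = psi * (1.0 - V[i] / V_obj[i]) ** 2
        # k = 1 only for elemental sections whose L/t falls below the minimum
        k = 1.0 if (is_ES[i] and Lt[i] < Lt_min) else 0.0
        length = k * (1.0 - psi) * (1.0 - Lt[i] / Lt_min) ** 2
        total += (balance + length) * A_obj_i[i] / A_obj
    return total

def v_obj(V_bar, A_obj_i, A_i):
    """Eq. (2): objective velocities from continuity, given the global average."""
    return V_bar * A_obj_i / A_i
```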

Calibration/cooling system

In this case the final goal is to define the optimal lengths of the calibrators and annealing zones. The criteria used to assess the performance of a specific calibration/cooling system are a measure of the cooling rate of the extrudate, represented by its average temperature, $\bar{T}$, computed as

$$\bar{T} = \frac{\sum_{i=1}^{n_f} T_i A_i}{A_T} \tag{3}$$

and the cooling uniformity, represented by the corresponding standard deviation of the temperature distribution, $\sigma_T$, computed as

$$\sigma_T = \sqrt{\frac{\sum_{i=1}^{n_f} \left(T_i - \bar{T}\right)^2 A_i}{A_T}} \tag{4}$$

where $n_f$ is the number of computational cell faces on the extruded profile outlet boundary, $T_i$ is the face temperature, $A_i$ is the face area and $A_T$ is the extruded profile cross-section area. These two parameters are computed at the end of the domain considered (the outlet of the last calibrator). In practical terms, at the end of the calibration/cooling zone the average temperature of the profile, $\bar{T}$, should be lower than its solidifying temperature, in order to guarantee that it will not re-melt after temperature homogenisation. Therefore, minimising this parameter is not the goal; instead, it is taken into account as a constraint, assuring that only solutions leading to a value lower than the solidifying temperature of the polymer are considered (assessed) by the optimization algorithm. On the contrary, $\sigma_T$ should be minimised in order to reduce the final level of residual stresses of thermal origin. These stresses develop during cooling, when shrinkage is constrained, as described in the few works devoted to extrusion [134,135]. Since thermal stresses originate from differential cooling, it was decided, in a necessarily simple approach, to account for the tendency to develop this type of stresses through the global standard deviation of the temperature distribution, $\sigma_T$, taken at the final cross section of the plastic profile.
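As a minimal illustration of Equations 3 and 4, assuming the face temperatures and areas at the outlet boundary are available as arrays (a sketch, not the authors' code):

```python
import numpy as np

def cooling_criteria(T_faces, A_faces):
    """Average outlet temperature (Eq. 3) and the area-weighted standard
    deviation of the temperature distribution (Eq. 4)."""
    A_T = A_faces.sum()                       # extruded profile cross-section area
    T_avg = np.sum(T_faces * A_faces) / A_T
    sigma_T = np.sqrt(np.sum((T_faces - T_avg) ** 2 * A_faces) / A_T)
    return T_avg, sigma_T
```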

3.5. Optimization Technique

The final step of the whole design process consists of the iterative correction of the system characteristics (variables). For this purpose, and for the two types of forming tools dealt with in this chapter, algorithms based on the SIMPLEX method [136] were implemented to adjust the controllable geometrical parameters of the die until an optimum design of the flow channel is reached (flow balance together with an advisable die land length), or the parameters of the calibration/cooling system until a minimum temperature gradient is attained in the final extrudate. The corresponding optimization variables are:

- extrusion dies: length or thickness of each elemental section (ES), depending on the design strategy adopted to solve the problem;
- calibration/cooling systems: length of each calibrator and the distance between consecutive calibrators.

The set of operations involved in the SIMPLEX optimization algorithm is carried out according to the sequence defined in [113]. The process starts with n+1 randomly generated trial geometries, n being the number of optimization variables. After the corresponding simulations of the physical process (flow in the die or heat transfer in the calibration/cooling system), the worst solution (in terms of objective function value) is rejected and replaced by a new trial system proposed by the SIMPLEX method. When the standard deviation of the objective function over the n+1 trial solutions becomes smaller than a pre-specified value, the mesh is refined. Under these conditions, the calculation finishes when the highest mesh refinement stage, pre-defined by the user, is reached. A minimal sketch of this procedure is given below.
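In this sketch, SciPy's Nelder-Mead routine (a simplex method) stands in for the SIMPLEX algorithm of [136]; the hypothetical `simulate(x, cells)` wrapper represents mesh generation plus the FVM solution returning Fobj, and the refinement trigger described above is approximated by the `fatol` tolerance:

```python
import numpy as np
from scipy.optimize import minimize

def optimize_die(x0, simulate, refinement_levels=(2, 4, 6, 8, 10)):
    """Simplex search restarted on progressively finer meshes; the best
    point of each level warm-starts the next (finer) one."""
    x = np.asarray(x0, dtype=float)
    for cells in refinement_levels:           # cells along the profile thickness
        res = minimize(lambda v: simulate(v, cells), x,
                       method="Nelder-Mead",
                       options={"fatol": 1e-3, "xatol": 1e-2})
        x = res.x
    return x                                  # optimum at the finest mesh
```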

4. CASE STUDY

In this section, the application of the design codes developed to optimize the flow distribution in profile extrusion dies and the corresponding calibration system layout, described previously, is illustrated with a practical example. The profile cross-section geometry employed in these examples is illustrated in Figure 2, the thickness of each ES being indicated in Table 1. As shown, the profile cross-section comprises different thicknesses, ranging from 2 to 4 mm, as happens in general thermoplastic profiles. As explained before, these differences complicate the design of both forming tools, since they promote an unbalanced flow distribution and cooling rate. Consequently, the design code must find the best geometry of the extrusion die flow channel and the calibration layout capable of minimizing the above mentioned problems.

Table 1. Profile cross-section thicknesses.

| ES      | 1   | 2   | 3   | 4   | 5   | 6   |
|---------|-----|-----|-----|-----|-----|-----|
| ti [mm] | 2.0 | 2.5 | 2.5 | 3.0 | 2.0 | 4.0 |

The material used in this study was a polypropylene homopolymer extrusion grade, Novolen PPH 2150, from Targor. Its rheological behaviour was experimentally characterized in capillary and rotational rheometers at 210 ºC, 230 ºC and 250 ºC. The shear viscosity data was fitted by the least-squares method to the Bird-Carreau constitutive equation combined with the Arrhenius law [113], considering Tref = 230 ºC and η∞ = 0 Pa·s, yielding the following parameters: η0 = 5.58×10⁴ Pa·s, λ = 3.21 s, n = 0.3014 and E/R (ºC) = 2.9×10³. The main thermo-physical properties considered were the following: kpolymer = 0.18 W/mK, kmetal = 14.0 W/mK, ρpolymer = 1400 kg/m³, cp,polymer = 1000 J/kgK. For both forming tools an average linear velocity of 1.0 m/min was considered, and an initial temperature of 230 ºC was employed at the inlet.

4.1. Extrusion Die Flow Channel Optimisation

For optimization purposes, the flow channel cross section was divided into the 6 Elemental Sections (ES) and 5 Intersection Sections (IS) shown in Figure 3. The optimization was carried out using the design strategy based on length optimization, employing 5 variables denoted as L1 for ES1, L23 for ES2 and ES3, L4 for ES4, L5 for ES5 and L6 for ES6. In this way, the lengths of ES2 and ES3 were forced to be equal, to facilitate the subsequent machining/construction step. In order to evaluate the benefits obtained with the optimisation code, a reference trial solution was adopted using the dimensions shown in Table 2. This configuration was selected following a rule of thumb usually employed in practice, where a constant flow restriction, given by the L/t ratio, is applied.

Table 2. Initial flow channel dimensions.

| ES      | 1    | 2,3  | 4    | 5    | 6    |
|---------|------|------|------|------|------|
| Li [mm] | 30.0 | 37.5 | 45.0 | 30.0 | 60.0 |
| Li/ti   | 15.0 | 15.0 | 15.0 | 15.0 | 15.0 |

A typical mesh employed in the final stage of the calculations is shown in Figure 5. These meshes have 10 cells along the profile thickness, the whole domain comprising circa 570,000 computational cells, corresponding to 2,850,000 degrees of freedom.

Figure 5. Example of a mesh used at the last stages of the calculations, for the optimisation of the extrusion die flow channel.


The evolution of the objective function (Fobj) and of the ratio between the actual average velocity in each ES and the (global) cross-section average velocity, computed along the optimisation process, is illustrated in Figure 6. According to these results, the flow of the initial trial was greatly unbalanced; after optimisation there was a huge improvement in Fobj (circa 90%), as a consequence of the favourable evolution of the velocity ratios (all close to 1 after optimisation). The improvements obtained by the optimisation algorithm can also be evaluated by comparing the velocity contours obtained for the reference trial and for the final solution given by the optimisation algorithm, depicted in Figure 7.

Figure 6. Evolution of the optimisation process: (a) objective function normalized with its initial value (Fobj/Fobj,ini) versus calculation time [hh:mm], showing the mesh refinement stages (2, 4, 6, 8 and 10 cells along the thickness); (b) ratio between the average velocity of each ES (ES1 to ES6) and the (global) cross-section average velocity (Vi/V).

Figure 7. Contours of the ratio between local and cross-section average velocities (V/V) for the reference trial (left) and the final solution proposed by the optimisation algorithm (right).

4.2. Calibration/cooling System Layout Optimisation

The layout adopted for the calibration/cooling system is that shown in Figure 4, while the location of the cooling channels is illustrated in Figure 8. For this particular problem, the distance between the extrusion die and the first cooling unit (D01) was assumed to be 10 mm. Consequently, for optimisation purposes, 5 variables were defined: three LCx variables, defining the length of calibrator x, and two Dij variables, corresponding to the length of the annealing zone between calibrators i and j. To carry out the optimization of the calibration/cooling system layout, the following assumptions were considered (a sketch of the minimum-length rules is given after the list):

- the number of calibration/cooling units should not exceed three;
- the total length of calibration (LC1+LC2+LC3) should not exceed 600 mm;
- the total length of annealing (D12+D23) should not exceed 240 mm;
- the minimum calibrator length is set to 50 mm; when the algorithm proposes a calibrator shorter than 25 mm, that unit is removed from the system; alternatively, when the proposed length is in the range [25,50[ mm, it is set to 50 mm;
- the minimum length of an annealing zone is set to 10 mm; when the algorithm proposes an annealing zone shorter than 5 mm, that region is removed from the system; alternatively, if the proposed length is in the range [5,10[ mm, it is set to 10 mm;
- the end of the cooling zone is taken as the outlet of the last calibrator;
- the cooling fluid temperature is 10 ºC;
- the interface heat transfer coefficient, hi, is constant and equal to 350 W/m²K, a value in the range typically employed in the literature [122];
- the solidification temperature of the polymer is 80 ºC.
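As referenced above, the minimum-length rules amount to a simple "snapping" of the lengths proposed by the optimizer; a hedged sketch (function and variable names are assumptions) follows:

```python
def snap_layout(lengths, removal, minimum):
    """Units shorter than `removal` are deleted; lengths in
    [removal, minimum[ are raised to `minimum`."""
    kept = []
    for L in lengths:
        if L < removal:
            continue                 # unit removed from the system
        kept.append(max(L, minimum))
    return kept

# calibrators: removed below 25 mm, minimum length 50 mm
LC = snap_layout([350.0, 200.0, 30.0], removal=25.0, minimum=50.0)  # -> [350, 200, 50]
# annealing zones: removed below 5 mm, minimum length 10 mm
D = snap_layout([30.0, 7.0], removal=5.0, minimum=10.0)             # -> [30, 10]
```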


Figure 8. Front view of the calibration system employed, showing the location of the cooling channels.

Figure 9. Example of a mesh used at the last stage of the calculations, for the optimisation of the calibrator/cooling system.


A typical mesh employed in the final stage of the calculations is shown in Figure 9. These meshes have 8 cells along the profile thickness, the whole domain comprising circa 840,000 computational cells. Similarly to what was done for the optimisation of the extrusion die, a reference problem was considered in order to evaluate the benefits of the optimisation algorithm. For the calibration/cooling system, the reference layout was composed of just one cooling unit, 600 mm long. With this layout, at the outlet of the cooling system the profile reaches an average temperature of $\bar{T}$ = 83.6 ºC and a standard deviation of $\sigma_T$ = 39.8 ºC. The results obtained along the optimization process, in terms of $\bar{T}$ and $\sigma_T$, are depicted in Figure 10. As mentioned before, the conflicting behaviour of the two main variables used ($\bar{T}$ and $\sigma_T$) is quite evident in the graph.

Figure 10. Results obtained in terms of average temperature ($\bar{T}$) and temperature standard deviation ($\sigma_T$) for the layout trials tested by the optimization algorithm. The optimum solution is marked with the arrow.

The best cooling efficiency was obtained with the layout shown in Figure 11, having the following values for the geometry parameters: LC1 = 350 mm, LC2 = 200 mm, LC3 = 50 mm, D12 = 30 mm and D23 = 210 mm. This layout is in accordance with the results obtained in [127], where it was concluded that the best performance is obtained with decreasing cooling unit lengths and increasing annealing zone lengths. For this optimum layout, at the outlet of the system the profile reaches an average temperature of $\bar{T}$ = 78.6 ºC and a standard deviation of $\sigma_T$ = 32.5 ºC, corresponding to reductions of circa 6% and 18% for $\bar{T}$ and $\sigma_T$, respectively, when compared to the reference solution.


Figure 11. Cooling system layout corresponding to the optimum solution.

CONCLUSION

In this chapter the optimisation codes developed for the automatic design of extrusion dies and calibration/cooling systems were described, and their application was illustrated with a case study. As shown, even for a profile of medium geometrical complexity, the codes were able to improve the performance of the forming tools in a fully automatic way, i.e., without any user intervention. This allows concluding that the automatic design concept is realistic and deserves further investment.

ACKNOWLEDGMENTS

The authors gratefully acknowledge funding by FEDER via FCT, Fundação para a Ciência e Tecnologia, under the POCI 2010 (project POCI/EME/58657/2004) and Plurianual programs.

REFERENCES

[1] Sirlereaux, S.; Loewen, K. W. Kunststoffe-German Plastics 1983, 73(3), 114-117.
[2] Sirlereaux, S.; Loewen, K. W. Kunststoffe-German Plastics 1983, 73(1), 9-12.
[3] Krohmer Extrusion of Honeycomb Profiles for Insulations: Stretching, Cooling and Specific Structural Irregularities in 3rd ESAFORM Conference on Material Forming 2000, Stuttgart, Germany.
[4] Beddus, D. Extruder Theory and Die Design for Medical Tubing in Medical Manufacturing Conference Proceedings 1990, Newark, NJ.
[5] Machado, A. Multi Lumen Die Design and Techniques in Medical Manufacturing Conference Proceedings 1990, Newark, NJ.
[6] Colbert, J. Concepts of Precision Tube Extrusion for Medical and Healthcare Applications in ANTEC 95 1995, Boston, MA.
[7] Plazaola, C. R.; Advani, S. G.; Reubin, A. Flow Analysis in a Miniature Extrusion Die for IC Assembly in ANTEC 90, Plastics in the Environment: Yesterday, Today & Tomorrow 1990, Dallas, TX.
[8] Endrass, B. Kunststoffe-Plast Europe 1999, 89(8), 48-50.

[9] Endrass, B. Kunststoffe-German Plastics 1993, 83(8), 584-586.
[10] Michaeli, W. Extrusion Dies for Plastics and Rubber: Design and Engineering Computations, 2nd ed.; SPE Books, Hanser Publishers: Munich, Vienna, New York, 1992, pp 390.
[11] Carneiro, O. S. Design of Extrusion Dies for Tubular Profiles (in Portuguese); Ph.D. Thesis, University of Minho: Guimarães, 1994.
[12] Tadmor, Z.; Gogos, C. Principles of Polymer Processing; SPE Monographs, John Wiley & Sons Inc.: New York, 1979, pp 736.
[13] Murray, T. A. Plastics Technology 1978, February, 99-105.
[14] Glukhov, E. E.; Shapenkov, M. P.; Makarova, Y. L. Soviet Plastics 1967, 54-58.
[15] Clegg, P. L. The Plas. Inst. Trans. 1959, 26, 151.
[16] Rauwendaal, C. Polymer Extrusion, 2nd ed.; Hanser Publishers: Munich, 1990.
[17] Liang, J. Z.; Huang, Y. Q.; Tang, G. J.; Ness, J. N. Plastics, Rubber and Composites Processing and Applications 1992, 18(5), 311-315.
[18] Kazatchkov, I. B.; Hatzikiriakos, S. G.; Stewart, C. W. J. Plastic Film & Sheeting 1995, 11(1), 38-57.
[19] Stevenson, J. F.; Lee, L. J.; Griffith, R. M. Polymer Engineering and Science 1986, 26(3), 233-238.
[20] Griffith, R. M.; Tsai, J. T. Polymer Engineering and Science 1980, 20(18), 1181-1187.
[21] Rothemeyer, F. Kunststoffe 1969, 59, 333-338.
[22] Rothemeyer, F. Kunststoffe 1970, 9, 235-240.
[23] Rabinovitch, E. B.; Summers, J. W.; Booth, P. C. Journal of Vinyl Technology 1992, 14(1), 20-23.
[24] Huneault, M. Extrusion of PVC Profiles: Rheology and Die Design (in French); Ph.D. Thesis, University of Montreal: Montreal, 1992.
[25] Brown, R. J.; Kim, H. T.; Summers, J. W. Practical Principles of Die Design in SPE 37th Annual Technical Conference 1979, New Orleans, LA.
[26] Corbett SPE Journal 1954, June, 15.
[27] Busby, W. J. PVC Profile Die Design by Simulation in PVC '96 1996, Brighton.
[28] Svabik, J.; Mikulenka, T.; Manas, M.; Busby, J. W. Evaluation of Profile Die Design Strategies in The Polymer Processing Society, Europe/Africa Regional Meeting 1997, Gothenburg, Sweden.
[29] Vlachopoulos, J.; Behncke, P.; Vlcek, J. Advances in Polymer Technology 1989, 9(2), 147-156.
[30] Schenkel, G.; Kuhnle, H. Kunststoffe-German Plastics 1983, 73(1), 17-22.
[31] Masberg, U.; Michaeli, W. Kunststoffe-German Plastics 1984, 74(1), 51-53.
[32] Kuhnle, H. Kunststoffe-German Plastics 1986, 76(3), 276-281.
[33] Lee, C. C.; Stevenson, J. F. International Polymer Processing 1992, 7(2), 186-189.
[34] Hurez, P.; Tanguy, P. A.; Blouin, D. Polymer Engineering and Science 1996, 36(5), 626-635.
[35] Kramer, A. Kunststoffe 1969, 59 (July), 409-416.
[36] Busby, W. J. Simulation of PVC Processing in PVC '99 1999, Brighton.
[37] Sun, D. W.; Peng, Y. C. Plastics Rubber and Composites Processing and Applications 1991, 16(2), 109-114.
[38] Levy, S. Advances in Plastics Technology 1981, January, 8-52.

[39] Wei, K. H.; Malone, M. F.; Winter, H. H. Polymer Engineering and Science 1986, 26(14), 1012-1019.
[40] Gent, A. N.; Gregory, B. L.; Jeong, J.; Charrier, J. M.; Hamel, F. Polymer Engineering and Science 1987, 27(22), 1987.
[41] Kleindienst, V. Kunststoffe 1973, 63(1), 7-11.
[42] Schiedrum, H. O. Kunststoffe-German Plastics 1983, 73(1), 2-8.
[43] Kurz, H. D. Kunststoffe-German Plastics 1988, 78(11), 1052-1058.
[44] Pittman, J. F. T.; Whitham, G. P.; Beech, S.; Gwynn, D. International Polymer Processing 1994, 9(2), 130-140.
[45] Sheehy, P.; Tanguy, P. A.; Blouin, D. Polymer Engineering and Science 1994, 34(8), 650-656.
[46] Schut, J. H. Plastics Technology 2003, August.
[47] Agassant, J. F.; Vergnes, B. Plastiques Modernes et Elastomeres 1986, 38(8), 124-128.
[48] Vlachopoulos, J. Resolved and Unresolved Issues in Extrusion Die Design in 16th Annual Meeting of the Polymer Processing Society 2000, Shangai, China.
[49] Levy, S. Advances in Plastics Technology 1981, October, 24-31.
[50] Rauwendaal, C. Plastics World 1991, 49(12), 73-75.
[51] Kaplun, Y. B.; Levin, A. N. Soviet Plastics 1965, 1(January), 42-49.
[52] Brinkschroder, F. J.; Johannaber, F. Kunststoffe-German Plastics 1981, 71(3), 138-143.
[53] Crane-Plastics (1998). Extruded Plastic Profiles. http://www.crane-plastics.com/.
[54] Weeks, D. J. British Plastics 1958, April, 156-160.
[55] Miller, C. Industrial & Engineering Chemistry Fundamentals 1972, 11(4), 524-528.
[56] Losson, J.-M. Japan Plastics 1974, January, 22-28.
[57] Weeks, D. J. British Plastics 1958, May, 201-205.
[58] Carley, J. F. SPE Journal 1963, September, 977-983.
[59] Lathi, G. P. SPE Journal 1963, July, 619-620.
[60] Kozicki, W.; Chou, C. H.; Tiu, C. Chemical Engineering Science 1966, 21(8), 665-679.
[61] Tiu, C. Polymer Engineering and Science 1982, 22(16), 1049-1051.
[62] Lenk, R. S. Kunststoffe-German Plastics 1985, 75(4), 239-243.
[63] Agassant, J. F. New Challenges in the Numerical Modelling of Polymer Processes in 3rd ESAFORM Conference on Material Forming 2000, Stuttgart, Germany.
[64] Sebastian, D. H.; Rakos, R. Advances in Polymer Technology 1985, 5(4), 333-339.
[65] Menges, G. Journal of Polymer Engineering 1986, 6(1-4), 1-22.
[66] Mehta, B. V.; Gunasekera, J. S.; Zuewi, Y.; Kapadia, A. CAE of Extrusion Dies for Metals and Polymers in 8th International Conference on Robotics and Autonomous Factories for the Future 1993, New Delhi, India.
[67] Monzon, M. D.; Castany, J.; Garcia-Martinez, J. M.; Collar, E. P. Revista de Plasticos Modernos 1998, 76(505), 79-87.
[68] Papathanasiou, T. D.; Kamal, M. R. Simulation of Viscoelastic Flow in Complex Channels: A Finite Difference Approach in ANTEC '89 Conference Proceedings 1989, New York.
[69] Miller, B. Plastics World 1995, 53(10), 12-13.
[70] Forest, J. P. Revue Generale des Caoutchoucs et Plastiques 1996, 748, 58.

[71] d'Halewyn, S.; Boube, M. F.; Vergnes, B.; Agassant, J. F. Extrusion of Elastomers in Profile Dies: 3-D Computations and Experiments in Theoretical and Applied Rheology, Proc. XIth Int. Congr. on Rheology 1992, Brussels, Belgium.
[72] Legat, V.; Marchal, J. M. International Journal for Numerical Methods in Fluids 1993, 16(1), 29-42.
[73] Otsuki, Y.; Kajiwara, T.; Funatsu, K. Polymer Engineering and Science 1997, 37(7), 1171-1181.
[74] Gifford, W. A. Compensating for Die Swell in the Design of Profile Dies in ANTEC 2003 2003, Nashville, TN.
[75] Fluent Inc. Polyflow. http://www.fluent.com.
[76] van Rens, B. J. E.; Brekelmans, W. A. M.; Baaijens, F. P. T. Shape Prediction for Complex Polymer Extrudates through 3D Numerical Simulation in 15th Annual Meeting Polymer Processing Society 1999, 's-Hertogenbosch, Netherlands.
[77] Gifford, W. A. Use of Three-Dimensional Computational Fluid Dynamics in the Design of Profile Dies in ANTEC 2002 2002, San Francisco, CA.
[78] Gobeau, J.-F. Experimental Study and Three-Dimensional Numerical Modeling by Finite Elements of the Flow in Extrusion Die for PVC Profiles (in French); Ph.D. Thesis, L'Ecole Nationale Superieure de Mines de Paris: Paris, 1994.
[79] Agassant, J. F.; Avenas, P.; Sergent, J.-P.; Carreau, P. J. Polymer Processing: Principles and Modeling; Hanser Publishers: Munich, 1991, pp 475.
[80] Svabik, J.; Placek, L.; Saha, P. International Polymer Processing 1999, 14(3), 247-253.
[81] Szarvasy, I.; Sienz, J.; Pittman, J. F. T.; Hinton, E. International Polymer Processing 2000, 15(1), 28-39.
[82] Cavka, E.; Marchal, T. Pipe Seal - Extrusion Die Design with Polyflow in 4th International ESAFORM Conference on Material Forming 2001, Liège, Belgium.
[83] Vlachopoulos, J. Recent Progress and Future Challenges in Computer-Aided Polymer Processing Analysis and Design in ATV-Semapp Meeting 1998, Funen, Odense, Denmark.
[84] Matsunaga, K.; Sakaki, K.; Kajiwara, T.; Funatsu, K. International Polymer Processing 1995, 10(1), 46-54.
[85] Leblanc, J. L. (1999). Menusim - Measurements and Numerical Simulations. http://www.Mema.Ucl.Ac.Be/Menusim/.
[86] Mitsoulis, E. Viscoelasticity in Polymer Processing: Where we Stand at the Year 2000 in The Polymer Processing Society Europe/Africa Regional Meeting 2000, Zlin, Czech Republic.
[87] Macosko, C. W. Rheology: Principles, Measurements, and Applications; John Wiley & Sons, 1994, pp 550.
[88] Dieflow Consulting. DieFlow. http://www.dieflow.com.
[89] Compuplast International Inc. Flow2000. http://www.compuplast.com.
[90] Polydynamics Inc. ProfileCad. http://www.polydynamics.com.
[91] Koziey, B. L.; Vlachopoulos, J.; Vlcek, J.; Svabik, J. Profile Die Design by Pressure Balancing and Cross-Flow Minimisation in ANTEC '96 1996, Indianapolis, U.S.A.
[92] Rubin, Y. D. Revue Generale des Caoutchoucs et Plastiques 1998, (733), 39-44.

[93] Rudniak, L.; Marchal, T.; Laszezak, W. Practical Help Brought by Computer Simulation to Design a Die Faster and Cheaper in Advances in Plastics Technology 1999, Katowice, Poland.
[94] Booy, M. L. Polymer Engineering and Science 1982, 22(7), 432-437.
[95] Lee, C. C. Polymer Engineering and Science 1990, 30(24), 1607-1614.
[96] Rakos, R.; Sebastian, D. Advances in Polymer Technology 1990, 10(4), 297-307.
[97] Beaumier, D.; Lafleur, P. G.; Thibodeau, C. A. Streamline Die Design for Complex Geometries in ANTEC 2001 2001, Dallas, TX.
[98] Michaeli, W.; Kaul, S.; Wolff, T. Journal of Polymer Engineering 2001, 21(2-3), 225-237.
[99] Tadmor, Z.; Broyer, E.; Gutfinge, C. Polymer Engineering and Science 1974, 14(9), 660-665.
[100] Hurez, P.; Tanguy, P. A.; Blouin, D. Polymer Engineering and Science 1993, 33(15), 971-979.
[101] Wang, H. P. Plastics Technology 1996, 42(2), 46-49.
[102] Reddy, M. P.; Schaub, E. G.; Reifschneider, L. G.; Thomas, H. L. Design and Optimization of Three Dimensional Extrusion Dies Using Adaptive Finite Element Method in ANTEC '99 1999, New York, U.S.A.
[103] Ettinger, H. J.; Sienz, J.; Pittman, J. F. T. Automated Optimization of Extrusion Die Design for PVC Profiles in 6th ESAFORM Conference on Material Forming 2003, Salerno, Italy.
[104] Rakos, R.; Kiani, A.; Sebastian, D. H. Computer Design Aids for Non-Axis-Symmetric Profile Dies in ANTEC '89 Conference Proceedings 1989, New York.
[105] Liu, L. D.; Wen, S. H.; Liu, T. J. Advances in Polymer Technology 1994, 13(4), 283-295.
[106] Vergnes, B.; Vincent, M.; Demay, Y.; Coupez, T.; Billon, N.; Agassant, J. F. Canadian Journal of Chemical Engineering 2002, 80(6), 1143-1152.
[107] Gobeau, J. F.; Coupez, T.; Agassant, J. F.; Vergnes, B. A Study of Profile Die Extrusion in XII Annual Meeting of the Polymer Processing Society 1996, Sorrento, Italy.
[108] Mehta, B. V.; Al-Zkeri, I.; Gunasekera, J. S.; Buijk, A. Journal of Materials Processing Technology 2001, 113(1-3), 93-97.
[109] Xue, S. C.; Phanthien, N.; Tanner, R. I. Journal of Non-Newtonian Fluid Mechanics 1995, 59(2-3), 191-213.
[110] Chang, R. Y.; Hsu, H. C.; Ke, C. S.; Hsu, C. C. Distributed Parallel Computing in Numerical Simulation of Extrusion Flow in Polymer Processing Society Asia/Australia Regional Meeting 2002, Taipei, Taiwan.
[111] Langley, D. S.; Pittman, J. F. T.; Sienz, J. Computerised Optimisation of Slit Die Design Taking Account of Die Body Deflection in 4th International ESAFORM Conference on Material Forming 2001, Liège, Belgium.
[112] Huang, C. C. Polymer Engineering and Science 1998, 38(4), 573-582.
[113] Nóbrega, J. M.; Carneiro, O. S.; Oliveira, P. J.; Pinho, F. T. International Polymer Processing 2003, 18(3), 298-306.
[114] Placek, L.; Svabik, J.; Vlcek, J. Cooling of Extruded Plastic Profiles in ANTEC 2000 2000, Orlando, Florida.

[115] Fradette, L.; Tanguy, P. A.; Thibault, F.; Sheehy, P.; Blouin, D.; Hurez, P. Journal of Polymer Engineering 1995, 14(4), 295-322.
[116] Szarvasy, I.; Sander, R. Kunststoffe-Plast Europe 1999, 89(6), 7-9.
[117] Dietz, W. Polymer Engineering and Science 1978, 18(13), 1030-1036.
[118] Menges, G.; Haberstroh, E.; Janke, W. Kunststoffe-German Plastics 1982, 72(6), 332-336.
[119] Menges, G.; Kalwa, M.; Schmidt, J. Kunststoffe-German Plastics 1987, 77(8), 797-802.
[120] Szarvasy, I. Simulation of Complex PVC Window Profile Cooling During Calibration with Particular Focus on Internal Heat Exchange in 3rd ESAFORM Conference on Material Forming 2000, Stuttgart, Germany.
[121] Pittman, J. F. T.; Farah, I. A. Plastics, Rubber and Composites Processing and Applications 1996, 25(6), 305-312.
[122] Fradette, L.; Tanguy, P. A.; Hurez, P.; Blouin, D. International Journal of Numerical Methods for Heat & Fluid Flow 1996, 6(1), 3-12.
[123] Pittman, J. F. T.; Farah, I. A.; Isaac, D. H.; Eccott, A. Transfer Coefficients in Spray Cooling of Plastic Pipes in Plastics Pipes IX 1995, Edinburgh, U. K.
[124] Nóbrega, J. M.; Carneiro, O. S.; Covas, J. A.; Pinho, F. T.; Oliveira, P. J. Polymer Engineering & Science 2004, 44, 2216-2228.
[125] Nóbrega, J. M.; Carneiro, O. S. Materials Science Forum 2006, 514-516, 1429-1433.
[126] Nóbrega, J. M.; Carneiro, O. S. On the Performance of Multi-Step Cooling Systems in Profile Extrusion in XXII Annual Meeting of the Polymer Processing Society 2006, Yamagata, Japan.
[127] Nóbrega, J. M.; Carneiro, O. S. Plast., Rubber Compos.: Macromol. Eng. 2006, 35, 387-392.
[128] Nóbrega, J. M.; Carneiro, O. S.; Gaspar-Cunha, A.; Gonçalves, N. D. International Polymer Processing 2008, 23, 331-338.
[129] Carneiro, O. S.; Nóbrega, J. M.; Oliveira, P. J.; Pinho, F. T. International Polymer Processing 2003, 18, 307-312.
[130] Nóbrega, J. M.; Carneiro, O. S.; Pinho, F. T.; Oliveira, P. J. International Polymer Processing 2004, 19, 225-235.
[131] Sienz, J.; Bulman, S. D.; Pittman, J. F. T. Optimisation Strategies for Extrusion Die Design in 4th International ESAFORM Conference on Material Forming 2001, Liège, Belgium.
[132] Carneiro, O. S.; Nóbrega, J. M.; Pinho, F. T.; Oliveira, P. J. J. Materials Processing Technology 2001, 114, 75-86.
[133] Vlachopoulos, J. Recent Progress and Future Challenges in Computer-Aided Polymer Processing Analysis and Design in ATV-Semapp Meeting 1998, Funen, Odense, Denmark.
[134] Doshi, S. R. Prediction of the residual stresses distribution in plastic pipe and profile extrusion in ANTEC '89 1989, 546-549.
[135] Harrel, E. R. Approximate model for analysing frozen-in strains and shrinkage of extruded PVC lineal profiles in ANTEC '98 1998, 3260-3265.
[136] Rao, S. S. Optimization Theory and Applications, 2nd ed.; Wiley Eastern Limited; 1984.

In: Optimization in Polymer Processing ISBN: 978-1-61122-818-2 Editors: A. Gaspar-Cunha and J. A. Covas, pp. 167-192 ©2011 Nova Science Publishers, Inc.

Chapter 8

ON THE USE OF REDUCED BASES IN OPTIMIZATION OF INJECTION MOLDING

Francisco Chinesta¹ and Fabrice Schmidt²

¹ EADS Corporate Foundation International Chair, GEM UMR CNRS – Centrale Nantes, France
² Institut Clément Ader, ENSTIMAC, Ecole de Mines d'Albi, France

1. INTRODUCTION

1.1. Process Description – Injection Molding

About 30% of the annual polymer production is transformed by injection molding. It is a cyclic process of forming a plastic into a desired shape by forcing the molten polymer under pressure into a cavity [1, 2]. For thermoplastic polymers, the solidification is achieved by cooling. Typical cycle times range from 1 to 100 seconds and depend mainly on the cooling time. The complexity of molded parts is virtually unlimited, and sizes may range from very small to very large (above 1 m), with an excellent control of tolerances.

1.2. The Injection Molding Equipment

The reciprocating screw injection molding machine is the most common injection unit in use (figure 1). These machines consist of two basic parts, an injection unit and a clamping unit. The injection unit melts the polymer resin and injects the polymer melt into the mold. Its screw rotates and axially reciprocates to melt, mix and pump the polymer. A hydraulic system controls the axial reciprocation of the screw, allowing it to act like a plunger, moving the melt forward for injection. The clamping unit holds the mold together, opens and closes it automatically, and ejects the finished part.


Figure 1. Injection molding machine with reciprocating screw [3].

1.3. Description of the Injection Molding Cycle

The process is started by plasticizing the material. A resin supplied in the form of pellets or powder is fed from the hopper into the injection unit, which consists of a reciprocating screw in a barrel heated by several heating bands. The rotating screw transfers the solid material towards the heated zones of the barrel. The granules melt under the combined action of the heater bands and the energy dissipated by shearing of the viscous material as the screw rotates. The screw stops rotating when the required amount of material has been dosed. During the soak time until injection, the polymer keeps melting by heat conduction from the barrel. Before injection we ideally obtain a completely molten, low-viscosity material with a homogeneous temperature distribution. The injection takes only a small portion of the cycle time. The screw acts like a plunger and pushes the melt into the mold. The polymer flows from the nozzle to the mold, which is coupled to the nozzle by a sprue bushing. The melt flows to a cavity by runners and is fed to the cavity through a gate. The gate is simply a restriction in the flow path just ahead of the mold cavity and serves to direct the flow of the melt into the cavity and to limit back flow. The cooling of the melt starts when it leaves the heated section and gets in contact with the cooled mold walls. During filling, the hydraulic pressure on the screw is adjusted to follow a programmed transitional speed profile, which allows controlling the flow front speed in the cavity. When the cavity is completely filled the polymer pressure increases instantly. The machine must then instantly stop pushing the screw forward to avoid over-packing of the cavity. The specific volume of thermoplastic polymers reduces when passing from the molten to the solid state, so more melt must be added to the cavity during solidification to compensate for the polymer shrinkage. Therefore, a constant hydraulic pressure is applied so that the screw holds the melt under pressure and pushes more melt into the mold. Eventually, the plastic in the gate freezes, isolating the mould from the injection unit. While the part continues to cool, the melt for the next shot is dosed. At the end of the cooling time the part must be solid enough to retain the shape given by the cavity. The clamping unit opens the mould, ejects the part, and closes and clamps the mould again. The next injection cycle then starts.


1.4. Importance of the Cooling Step for Manufacturing Injected Parts

Part cooling during injection molding is the critical step, as it is the most time consuming. An inefficient mold cooling may have dramatic consequences on cycle time and part quality and may require expensive mold rectification. Depending on the wall thickness of the molded parts, evacuating the heat usually takes the major portion of the cycle time. Polymers are bad conductors, as their thermal conductivity ranges from 0.1 W/mK to 1.8 W/mK [2]. The cooling cycle can represent more than 70% of the injection cycle [4, 5]. The cooling rate is an important factor for productivity, and important benefits can be achieved by decreasing the cooling time of parts with badly cooled hot zones. A bad design of the cooling channels may generate zones with higher temperatures in the mold, increasing the cooling time. In addition, different types of injection defects due to a bad thermal regulation of the mold can appear: dimensional defects, structural defects and aesthetical defects [6, 7]. In order to reduce mold and production costs, an automatic optimization of the geometry of the cooling device and of the processing parameters (temperature, flow rate, ...) may be developed. The optimization procedure requires computing numerous transient heat balance problems (possibly non-linear). For solving the thermal problem, we need an efficient meshing technique. The Boundary Element Method (BEM) is well adapted for such a problem, because it only requires a surface mesh. The displacement of the cooling channels after each optimization iteration is then facilitated (no remeshing). In addition, reduced modeling is useful in order to reduce the CPU time of the direct computations, particularly in 3D. Injection molding is a cyclic process, which implies the computation of numerous cycles. In figure 2, an example of the temperature history over 40 cycles is plotted versus time. Curves (a) and (b) give respectively the maximum of the temperature in the cavity before and after optimization. Curve (c) represents the average temperature at the cavity surface after optimization.

Figure 2. Temperature history of the first 40 cycles.


2. MOLD COOLING OPTIMIZATION

2.1. Introduction

Several CAD and simulation tools are available to help designing the cooling system of an injection mold. Simulation of heat transfer during injection can be used to check a mold design or to study the effect of a parameter (geometry, materials, ...) on the cooling performance of the mold. Several numerical methods such as the Finite Element Method (FEM) [8] or the Boundary Element Method (BEM) [9] can be used. Bikas et al [10] used C-Mold® simulations and design of experiments to find expressions of the mean temperature and temperature variation as functions of the geometry parameters of the mold. Numerical simulation can also be used to perform an automatic optimization of mold cooling: the simulation solves the thermal equations and evaluates a cost function related to productivity or part quality, and an optimization method modifies the parameters to improve the thermal performance of the mold. Tang et al [11] used 2D transient FEM simulations coupled with Powell's optimization method [12] to optimize the cooling channel geometry so as to get a uniform temperature in the polymer part. Huang et al [13] used 2D transient FEM simulations to optimize the use of mold materials according to part temperature uniformity or cycle time. Park et al [14] developed 2D and 3D stationary BEM simulations in the injection molds coupled with a 1D transient analytical computation in the polymer part (throughout the thickness). The heat transfer integral equation is differentiated to get the sensitivities of a cost function to the parameters. The calculated sensitivities are then used to optimize the position of linear cooling channels for simple shapes (sheet, box). In the next section, we present the use of the Boundary Element Method (BEM) and the Dual Reciprocity Method (DRM) applied to transient heat transfer in injection moulds. The BEM software [15, 16] was combined with an adaptive reduced modeling (described extensively later). This procedure will be fully described in section 3; it allows reducing considerably the computing time spent in the linear system solution of the transient problem. Then, we present a practical methodology to optimize both the position and the shape of the cooling channels in injection molding processes (section 4). We couple the direct computation with an optimization algorithm such as SQP (Sequential Quadratic Programming).

2.2. BEM for Transient Heat Balance Equation

Using BEM, only the boundary of the domain has to be meshed, and internal points are explicitly excluded from the solution procedure. An interesting side effect is the considerable reduction in size of the linear system to be solved [16]. The transient heat conduction in a homogeneous isotropic body Ω is described by the diffusion equation, where α is the material thermal diffusivity, assumed to be constant:

$$\Delta T(x,t) = \frac{1}{\alpha}\,\frac{\partial T(x,t)}{\partial t} \tag{1}$$


We define the initial conditions and the boundary conditions (figure 3) as:

$$\begin{cases} T(x, t=0) = T^o(x) \\ -\lambda \nabla T \cdot n = q_P & \forall x \in \Gamma_P \\ -\lambda \nabla T \cdot n = h_c\,(T - T_c) & \forall x \in \Gamma_c \\ -\lambda \nabla T \cdot n = h_a\,(T - T_a) & \forall x \in \Gamma_M \end{cases} \tag{2}$$

where λ is the thermal conductivity (the medium is assumed homogeneous and isotropic), $\Gamma_P$ is the boundary of the cavity surface (plastic part), $\Gamma_c$ the boundary of the cooling channels and $\Gamma_M$ the mold exterior surface. $T_c$ is the temperature of the coolant and $h_c$ is the heat transfer coefficient between the mold and the coolant; $h_a$ is the heat transfer coefficient between the mold and the ambient air at temperature $T_a$. In order to avoid a multi-domain calculation and save computation time, the plastic part is taken into account via a heat flux $q_P$ imposed on the mold cavity surface. The flux density $q_P$ is calculated from the cycle time and the polymer properties [15, 17].

Figure 3. Boundary conditions applied to the mold.

Different strategies are possible to solve such problems using BEM. Pasquetti et al [18] propose to use space and time Green's functions. Another solution is to apply Laplace [19] or Fourier [20] transforms to the time variable before spatial integration. In this section, we will use only the space Green's function, inspired by the stationary heat transfer problem (i.e. Laplace's equation). The basic steps are in fact quite similar to those used for the finite element method. We must first form an integral equation from the previous equation by using a weighted integral statement, and then apply the Green-Gauss theorem:

$$\int_\Gamma T^* q \, d\Gamma - \int_\Omega \nabla T^* \cdot \nabla T \, d\Omega = \int_\Omega \frac{1}{\alpha}\frac{\partial T}{\partial t}\, T^* \, d\Omega \tag{3}$$

where $T^*$ is the weighting function and $q$ the normal temperature gradient. This is the starting point for the finite element method. To derive the starting equation for the boundary element method, we apply the Green-Gauss theorem again to the second left-hand integral. This gives:

$$\int_\Gamma T^* q \, d\Gamma - \int_\Gamma T q^* \, d\Gamma + \int_\Omega \Delta T^*\, T \, d\Omega = \int_\Omega \frac{1}{\alpha}\frac{\partial T}{\partial t}\, T^* \, d\Omega \tag{4}$$

For the boundary element method we choose $T^*$ to be the fundamental solution of Laplace's equation (also called the Green's function):

$$\Delta T^* + \delta_x = 0 \tag{5}$$

where $\delta_x$ is the Dirac delta distribution at the point $x$, located inside the domain Ω or on its surface Γ. If we now use the integral property of the delta function, we obtain [21]:

$$\int_\Omega \delta_x\, T \, d\Omega = c(x)\, T(x) \tag{6}$$

with:

$$c(x) = \begin{cases} 1 & \text{if } x \in \Omega \\[4pt] \dfrac{1}{2} & \text{if } x \in \Gamma \text{ and } \Gamma \text{ is smooth at } x \\[4pt] \dfrac{\text{internal angle}}{2\pi} & \text{in 2D, if } x \in \Gamma \text{ and } \Gamma \text{ is not smooth at } x \\[4pt] \dfrac{\text{inner solid angle}}{4\pi} & \text{in 3D, if } x \in \Gamma \text{ and } \Gamma \text{ is not smooth at } x \end{cases} \tag{7}$$

Thus, we get the integral equation:

$$c(x)\, T(x) + \int_\Gamma T q^* \, d\Gamma + \int_\Omega \frac{1}{\alpha}\frac{\partial T}{\partial t}\, T^* \, d\Omega = \int_\Gamma T^* q \, d\Gamma \tag{8}$$

The fundamental solutions of Laplace's equation are well known [22]. $T^*$ and $q^*$ are then defined by equations (9) and (10), depending on the dimension of the problem:

For 2D problems:

$$T^* = -\frac{1}{2\pi}\ln(r), \qquad q^* = \frac{\partial T^*}{\partial n} = \frac{d}{2\pi r^2} \tag{9}$$

For 3D problems:

$$T^* = \frac{1}{4\pi r}, \qquad q^* = \frac{\partial T^*}{\partial n} = \frac{d}{4\pi r^3} \tag{10}$$

where $\vec{r} = \overrightarrow{PM}$, $r = \|\vec{r}\|$ and $d = -\vec{r} \cdot \vec{n}$ (figure 4).

Figure 4. Definition of distances used to compute fundamental solutions.

Again, a step similar to FEM consists in meshing: the boundary Γ of the domain is divided into Ne elements. To express the domain integral in terms of equivalent boundary integrals, we introduce the DRM approximation [20]. The DRM consists in seeking the solution as a series of particular solutions $\hat{T}$ and $\hat{q}$ interpolated on N points inside and on the boundary of the domain:

$$\Delta T = \sum_{k=1}^{N} \alpha_k\, \Delta \hat{T}_k = \frac{1}{\alpha}\frac{\partial T}{\partial t}, \qquad \text{with } \boldsymbol{\alpha} = \mathbf{F}^{-1}\mathbf{D} \tag{11}$$


where the vector α contains the coefficients $\alpha_k$, the vector D the derivatives $\partial T_k / \partial t$, and the matrix F consists of the values of the interpolation function at each point. A commonly used interpolation function is the polynomial radial function, leading to known particular solutions [22]. Applying BEM to the modified equation (11) leads to the new linear system of equations (12):

$$\mathbf{H}\mathbf{T} - \mathbf{G}\mathbf{Q} = \frac{1}{\alpha}\left(\mathbf{H}\hat{\mathbf{T}} - \mathbf{G}\hat{\mathbf{Q}}\right)\mathbf{F}^{-1}\mathbf{D} \tag{12}$$

where $H_{ij} = c\,\delta_{ij} + \int_{\Gamma_i} q^* \, d\Gamma$ and $G_{ij} = \int_{\Gamma_i} T^* \, d\Gamma$. A Newmark time scheme is applied to the temperature and flux, leading to equation (13):

$$\left(\theta\,\mathbf{H} + \frac{1}{\Delta t}\,\mathbf{C}\right)\mathbf{T}^{p+1} - \theta\,\mathbf{G}\mathbf{Q}^{p+1} = -\left((1-\theta)\,\mathbf{H} - \frac{1}{\Delta t}\,\mathbf{C}\right)\mathbf{T}^{p} + (1-\theta)\,\mathbf{G}\mathbf{Q}^{p} \tag{13}$$

where $\mathbf{C} = -\frac{1}{\alpha}\left(\mathbf{H}\hat{\mathbf{T}} - \mathbf{G}\hat{\mathbf{Q}}\right)$, $\Delta t$ is the time step, and the chosen value of $\theta \in [0,1]$ determines the resulting time integration scheme.
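A minimal sketch of one step of this scheme is given below, in the simplified situation where the flux vectors Q are known data; in the actual mixed boundary-value problem the columns of H and G are first rearranged so that only unknowns remain on the left-hand side. The dense solve and all names are illustrative assumptions:

```python
import numpy as np

def step_theta(H, G, C, T, Q_old, Q_new, dt, theta=0.5):
    """One step of Eq. (13): returns T at time level p+1."""
    lhs = theta * H + C / dt
    rhs = (-((1.0 - theta) * H - C / dt) @ T
           + (1.0 - theta) * G @ Q_old
           + theta * G @ Q_new)               # theta*G*Q^{p+1} moved to the RHS
    return np.linalg.solve(lhs, rhs)
```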

2.3. Coupling DRBEM with an Optimization Method

Figure 5. Heat transfer simulation / optimization coupling.

The heat transfer computation is coupled with an optimization method that automatically modifies the parameters at each optimization iteration, as shown in figure 5. Given the initial parameters, the Dual Reciprocity Boundary Element Method (DRBEM) simulation is performed and the cost function is calculated. The optimization method updates the parameters, subject to constraints, until a minimum of the cost function is found. SQP (Sequential Quadratic Programming) [15] is used for the optimization of continuous non-linear functions with continuous non-linear constraints. In order to reduce the computing time spent in the linear system solution (especially in 3D), we propose the use of model reduction within the BEM solver. Before presenting the optimization results, the reduced modeling approach is summarized.
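As a hedged sketch of this coupling (figure 5), SciPy's SLSQP routine, an SQP implementation, can drive a hypothetical `run_drbem(x)` wrapper that performs the DRBEM simulation for the design variables x (channel positions, flow rate, ...) and returns the cost function value; names and options are assumptions, not the authors' code:

```python
from scipy.optimize import minimize

def optimize_cooling(x0, run_drbem, bounds, constraints):
    """Iterate DRBEM simulation + SQP update until a (local) minimum
    of the cost function is found, subject to the design constraints."""
    return minimize(run_drbem, x0, method="SLSQP",
                    bounds=bounds, constraints=constraints,
                    options={"ftol": 1e-6, "maxiter": 50})
```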

3. REDUCED MODELING

3.1. Introduction

Most engineering systems can be modeled by a continuous model, usually expressed as a system of linear or non-linear coupled partial differential equations describing the different conservation balances (momentum, energy, mass and chemically reacting substances). From a practical point of view, the determination of the exact solution, that is, the exact knowledge of the different fields characterizing the physical system at any point and time instant (velocity, pressure, temperature, chemical concentrations, ...), is not possible in real systems due to the complexity of the models, geometries and/or boundary conditions. For this reason, the solution is sought only at some points and at some times, from which it can be interpolated to any other point and time. Numerical strategies allowing this kind of representation are known as discretization techniques. There exist numerous discretization techniques, e.g. finite elements, finite volumes, boundary elements, finite differences and meshless techniques, among many others. The optimal technique to be applied depends on the model and on the domain geometry. Progress in numerical analysis and in computational performance currently makes possible the solution of complex systems involving millions of unknowns related to the discrete model. However, the complexity of the models is also increasing exponentially, and today engineers are not only interested in solving a model once, but in solving such models many times (e.g. when they address optimization or inverse identification). For this purpose, strategies able to speed up the numerical solution while preserving the solution accuracy are in focus. In the context of control, optimization or inverse analysis, numerous problems must be solved, and for this reason the computation time becomes crucial. The question is very simple: is it possible to perform very fast and accurate simulations? Different answers have been given to this question, depending on the scientific community to which it is addressed. For specialists in computational science, the answer concerns the improvement of computational resources, high performance computing and the use of parallel computing platforms. For some specialists in numerical analysis, the challenge lies in the fast resolution of linear systems via the use of preconditioners or multigrid techniques, among many others. For others, the idea is to adapt the cloud of nodes (points where the solution is computed) in order to avoid an excessive number of unknowns. Many other answers have been given; however, at present all these approaches only slightly alleviate the computational effort, and fast, accurate computation remains a real challenge.


This section describes a different approach based on model reduction, allowing fast and accurate computations. The idea is very simple. Consider a domain where a certain model is defined, as well as the associated cloud of nodes able to represent the solution everywhere by interpolation. In general, the number of unknowns scales with the number of nodes and, for this reason, even if the solution evolves smoothly in time, all the nodes are used to describe it at each time step. In the reduced modeling described later, the numerical algorithm is able to extract the optimal information describing the evolution of the solution over the whole time interval. Thus, the evolution of the solution can be expressed as a linear combination of a reduced number of functions (defining the reduced approximation basis); the size of the resulting linear problems is then very small, and consequently the CPU time savings can attain several orders of magnitude (sometimes of the order of millions). The extraction of this relevant information is a well known topic based on the application of the proper orthogonal decomposition, also known as the Karhunen-Loève decomposition [23, 24], which is summarized in the next section. This kind of approach has been widely used for weather forecasting [25], turbulence [26, 27] and solid mechanics [28], but also in the context of chemical engineering for control purposes [29]. Usually, reduced modeling performs the simulation of a similar problem, or of the desired one, over a short time interval. From these solutions the Karhunen-Loève decomposition is applied, allowing the extraction of the most relevant functions describing the solution evolution. It is then assumed that the solution of a "similar" problem can be expressed using this reduced approximation basis, allowing a significant reduction of the discrete problem size and hence significant CPU time savings. However, in general, the question related to the accuracy of the computed solutions is ignored. An original approach combining model reduction and the control of the solution accuracy was proposed by Ryckelynck [30], and applied later in a large catalogue of applications [31-36]. This model reduction strategy can be coupled with usual finite element or boundary element discretizations [37]. We summarize in this section the main ideas of this reduction strategy for the non-specialist in numerical analysis, in order to show its potential in many domains of engineering and, in particular, in the context of optimization.

3.2. Revisiting the Karhunen-Loève Decomposition

We assume that the evolution of a certain field $u(x,t)$, which depends on the physical space $x$ and on time $t$, is known. In practical applications, this field is expressed in a discrete form, that is, it is known at the nodes of a spatial mesh and at some times, i.e. $u(x_i, t^p) \equiv u_i^p$. Introducing a spatial interpolation, we can also write $u^p(x) \equiv u(x, t = p\Delta t)$, $\forall p \in [1, \ldots, P]$. The main idea of the Karhunen-Loève (KL) decomposition is to obtain the most typical or characteristic structure $\varphi(x)$ among these $u^p(x)$, $\forall p$. This is equivalent to obtaining the functions $\varphi(x)$ maximizing α:

$$\alpha = \frac{\displaystyle\sum_{p=1}^{P}\left[\sum_{i=1}^{N}\varphi(x_i)\,u^p(x_i)\right]^2}{\displaystyle\sum_{i=1}^{N}\left(\varphi(x_i)\right)^2} \tag{14}$$

which leads to:

$$\sum_{p=1}^{P}\left[\left(\sum_{i=1}^{N}\tilde{\varphi}(x_i)\,u^p(x_i)\right)\left(\sum_{j=1}^{N}\varphi(x_j)\,u^p(x_j)\right)\right] = \alpha\sum_{i=1}^{N}\tilde{\varphi}(x_i)\,\varphi(x_i), \quad \forall\tilde{\varphi} \tag{15}$$

which can be rewritten in the form

$$\sum_{i=1}^{N}\left[\sum_{j=1}^{N}\left\{\sum_{p=1}^{P}u^p(x_i)\,u^p(x_j)\,\varphi(x_j)\right\}\tilde{\varphi}(x_i)\right] = \alpha\sum_{i=1}^{N}\tilde{\varphi}(x_i)\,\varphi(x_i) \tag{16}$$

Defining the vector φ such that its i-th component is $\varphi(x_i)$, Eq. (16) takes the following matrix form:

$$\tilde{\boldsymbol{\phi}}^T\,\mathbf{k}\,\boldsymbol{\phi} = \alpha\,\tilde{\boldsymbol{\phi}}^T\boldsymbol{\phi}\,,\ \forall\tilde{\boldsymbol{\phi}} \quad\Rightarrow\quad \mathbf{k}\,\boldsymbol{\phi} = \alpha\,\boldsymbol{\phi} \tag{17}$$

where the two-point correlation matrix is given by

$$k_{ij} = \sum_{p=1}^{P} u^p(x_i)\,u^p(x_j) \quad\Leftrightarrow\quad \mathbf{k} = \sum_{p=1}^{P} \mathbf{u}^p\left(\mathbf{u}^p\right)^T \tag{18}$$

which is symmetric and positive definite. If we define the matrix Q containing the discrete field history:

$$\mathbf{Q} = \begin{pmatrix} u_1^1 & u_1^2 & \cdots & u_1^P \\ u_2^1 & u_2^2 & \cdots & u_2^P \\ \vdots & \vdots & \ddots & \vdots \\ u_N^1 & u_N^2 & \cdots & u_N^P \end{pmatrix} \tag{19}$$

it is easy to verify that the matrix k in Eq. (18) results in

$$\mathbf{k} = \mathbf{Q}\,\mathbf{Q}^T \tag{20}$$
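In code, the extraction of the KL basis from the snapshot matrix of Eq. (19) reduces to a symmetric eigenvalue problem; a minimal NumPy sketch (the 10⁻⁸ truncation used in section 3.3 is anticipated here as a parameter) reads:

```python
import numpy as np

def kl_basis(Q, tol=1e-8):
    """Eigenpairs of k = Q Q^T (Eqs. 17 and 20); keeps the eigenvectors
    whose eigenvalue exceeds tol times the largest one."""
    k = Q @ Q.T                               # two-point correlation matrix
    alpha, phi = np.linalg.eigh(k)            # eigh returns ascending eigenvalues
    alpha, phi = alpha[::-1], phi[:, ::-1]    # reorder: alpha_1 > alpha_2 > ...
    keep = alpha > tol * alpha[0]
    return phi[:, keep], alpha[keep]          # columns form the reduced basis B
```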


3.3. Reduced Modeling

If the evolution of a certain field is known,

$$u(x_i, t^p) \equiv u_i^p, \quad \forall i \in [1, \ldots, N],\ \forall p \in [1, \ldots, P] \tag{21}$$

from some direct simulations or from experimental measurements, then the matrices Q and k can be computed and the eigenvalue problem given by Eq. (17) can be solved. The solution of Eq. (17) results in N eigenvalue-eigenvector couples. However, in a large number of models involving regular time evolutions of the solution, the magnitude of the eigenvalues decreases very fast, evidencing that the solution evolution can be represented as a linear combination of a reduced number of functions (the eigenvectors related to the highest eigenvalues). In our numerical applications we consider the eigenvalues ordered as $\alpha_1 > \alpha_2 > \cdots > \alpha_N$. The $n$ eigenvalues belonging to the interval $\alpha_1 > \cdots > \alpha_n$, with $\alpha_n > \alpha_1 \cdot 10^{-8}$ and $\alpha_{n+1} < \alpha_1 \cdot 10^{-8}$, are selected, because their associated eigenvectors are expected to be sufficient to represent accurately the entire solution evolution. In a large variety of models $n \ll N$; moreover, $n$ only depends on the regularity of the solution evolution, but neither on the dimension of the physical space (1D, 2D or 3D) nor on the size of the model (N). The reduced approximation basis consists of the $n$ eigenvectors $\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_n$, allowing to define the basis transformation matrix B:

$$\mathbf{B} = \left(\boldsymbol{\phi}_1, \boldsymbol{\phi}_2, \ldots, \boldsymbol{\phi}_n\right) \tag{22}$$

whose size is $N \times n$. Thus, the vector containing the field nodal values, u, can be expressed as:

$$\mathbf{u} = \sum_{i=1}^{n}\boldsymbol{\phi}_i\,\xi_i(t) = \mathbf{B}\,\boldsymbol{\xi}(t) \tag{23}$$

Now, if we consider the linear system of equations resulting from the discretization of a partial differential equation (PDE) in the form

$$\mathbf{A}\,\mathbf{u}^p = \mathbf{f}^{p-1} \tag{24}$$

where $\mathbf{f}^{p-1}$ accounts for the solution at the previous time step, then, taking into account Eq. (23), it reduces to:

$$\mathbf{A}\,\mathbf{u}^p = \mathbf{f}^{p-1} \;\Rightarrow\; \mathbf{A}\,\mathbf{B}\,\boldsymbol{\xi}^p = \mathbf{f}^{p-1} \tag{25}$$


and multiplying both terms by $\mathbf{B}^T$ results in

$$\mathbf{B}^T\mathbf{A}\,\mathbf{B}\,\boldsymbol{\xi}^p = \mathbf{B}^T\mathbf{f}^{p-1} \tag{26}$$

which proves that the final system of equations is of low order, i.e. the dimensions of $\mathbf{B}^T\mathbf{A}\,\mathbf{B}$ are $n \times n$, with $n \ll N$, and the dimensions of both $\boldsymbol{\xi}$ and $\mathbf{B}^T\mathbf{f}^{p-1}$ are $n \times 1$.

Remark 1. Equation (26) can also be derived by introducing the approximation (23) into the Galerkin form of the partial differential equation.
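The whole point of Eq. (26) is that the n×n reduced operator is assembled once, and each time step then solves only a tiny system; a minimal sketch follows (the right-hand-side callback `f_of` is an assumption standing for the model-specific update of f^{p-1}):

```python
import numpy as np

def reduced_evolution(A, B, f_of, xi0, n_steps):
    """Time stepping with the reduced model of Eq. (26)."""
    Ar = B.T @ A @ B                          # n x n reduced operator, n << N
    xi = xi0.copy()
    for _ in range(n_steps):
        u = B @ xi                            # back to the global basis, Eq. (23)
        xi = np.linalg.solve(Ar, B.T @ f_of(u))
    return B @ xi                             # global solution at the final step
```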

3.4. Reduced Basis Adaptivity

The strategy just described allows for very fast computation of large-size models. For example, one could solve the full model using some standard discretization technique (finite differences, finite elements, boundary elements, ...) for a small time interval, and then define the matrices Q and k allowing to compute the reduced approximation basis transformation B that leads to the reduced solution procedure illustrated by Eq. (26). Another possibility consists in solving a model in the whole time interval and then extracting the most representative functions, which could be used for solving some "similar" models. We come back to this discussion later. However, in any case, it is not guaranteed that the reduced basis, built in the first scenario from the solution known within a short time interval, and in the second one from a different model, remains accurate for describing the solution in the entire simulation interval or for any other "similar" model, respectively. In the first case, it is obvious that during the simulation the material properties, boundary conditions, etc. could change, compromising the validity of the reduced basis. In the second case, the model being different from the one that served to extract the reduced basis, nothing guarantees the validity of that reduced approximation basis. Thus, if one computes reduced model solutions and wants to keep confidence in them, the solution accuracy must be checked, and an enrichment strategy must be defined to adapt the reduced approximation basis, so as to capture the new events present in the solution evolution which cannot be described accurately by the original reduced approximation basis. For this purpose, Ryckelynck proposed [32] to start with a low-order approximation basis, using some simple functions (e.g. the initial condition in transient problems), the eigenvectors of a "similar" problem previously solved, or the ones coming from a full simulation in a short time interval. Then, we compute S iterations of the evolution problem using the reduced model (26) without changing the approximation basis. After these S iterations, the complete discrete system (25) is constructed and the residual R evaluated:

$$\mathbf{R} = \mathbf{A}\,\mathbf{u}^S - \mathbf{f}^{S-1} = \mathbf{A}\,\mathbf{B}\,\boldsymbol{\xi}^S - \mathbf{f}^{S-1} \tag{27}$$

If the norm of the residual is small enough, $\|\mathbf{R}\| < \varepsilon$, with ε a sufficiently small threshold value, we can continue for another S iterations using the same approximation basis. On the contrary, if the residual norm is too large, $\|\mathbf{R}\| \geq \varepsilon$, we need to enrich the approximation basis and compute the last S iterations again. This enrichment is built using some Krylov subspaces, in our case the first three: $\mathbf{B} \leftarrow (\mathbf{B}, \mathbf{R}, \mathbf{A}\mathbf{R}, \mathbf{A}^2\mathbf{R})$. One could expect the enrichment process to increase continuously the size of the reduced approximation basis but, in fact, after reaching convergence, a Karhunen-Loève decomposition is performed on the whole past time interval in order to extract the significant information as well as to define an orthogonal reduced approximation basis. The interested reader can refer to Ryckelynck et al. [31] and the references therein for a more detailed description of the computational algorithm. A sketch of this check-and-enrich step is given below.
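In code, the check-and-enrich step might read as follows; the QR re-orthonormalization is our addition for numerical robustness (the strategy described above instead re-applies a Karhunen-Loève decomposition after convergence), and all names are assumptions:

```python
import numpy as np

def enrich_if_needed(A, B, xi_S, f_Sm1, eps):
    """Residual check of Eq. (27) and Krylov enrichment B <- (B, R, AR, A^2R)."""
    R = A @ (B @ xi_S) - f_Sm1
    if np.linalg.norm(R) < eps:
        return B, False                       # basis still adequate: continue
    B_new = np.column_stack([B, R, A @ R, A @ (A @ R)])
    B_new, _ = np.linalg.qr(B_new)            # re-orthonormalize the enriched basis
    return B_new, True                        # caller recomputes the last S steps
```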

3.5. Illustrating the Applicability of Reduced Bases

For the sake of clarity, we consider in this section a simple 1D model (the extension to multidimensional models is straightforward) related to the heat transfer equation (we omit the units, all of them being expressed in the metric system):

$$\frac{\partial T}{\partial t} = \alpha\,\frac{\partial^2 T}{\partial x^2} \tag{28}$$

with $\alpha = 0.01$, $t \in \left]0,30\right]$ and $x \in \left]0,1\right[$. The initial condition reads $T(x, t=0) = 1$ and the boundary conditions are given by $-\lambda \left.\frac{\partial T}{\partial x}\right|_{x=0,t} = q(t)$ and $-\lambda \left.\frac{\partial T}{\partial x}\right|_{x=1,t} = 0$ ($\lambda = 0.01$). The boundary source $q(t)$ is prescribed to different values during the simulations that follow. Equation (28) is discretized by using the implicit finite element method on a mesh consisting of 100 nodes, with a linear approximation defined in each of the resulting 99 elements. The time step was set to $\Delta t = 0.1$. The resulting discrete system can be written as:

$$\mathbf{K}\,\mathbf{T}^p = \mathbf{M}\,\mathbf{T}^{p-1} + \mathbf{q}^p \tag{29}$$

where vector $q^p$ accounts for the boundary heat flux source at each time step p.

Remark 2. We use the FEM because of the one-dimensionality of the model, but obviously all the results can be extended to any other discretization technique, and in particular to the BEM previously introduced.

First, we consider the solution of the thermal model described above with the following boundary heat source:

$$q(t) = \begin{cases} 1 & 0 < t < 10 \\ 0 & t \geq 10 \end{cases}$$ (30)

The computed solution is depicted in figure 6, where the temperature profiles at times $t_p = p$ ($p = 1, 2, \ldots, 30$) are represented. The evolution during the first 10 seconds (heating stage) is depicted in red. In the remaining time interval no heating source exists, and heat flows by conduction from the hotter towards the colder zones; the profiles within this time interval are represented in blue.

Figure 6. Temperature profiles related to the thermal model with the source term modeled by Eq. (30).

Now, from these 30 profiles we can define the matrices Q and k, which lead to the eigenvalue problem from which the significant eigenvectors are extracted according to Eq. (4). The resulting eigenvalues are $\alpha_1 = 1790$, $\alpha_2 = 1.1$, $\alpha_3 = 0.1$ and $\alpha_j < \alpha_1 \times 10^{-8}$ ($4 \leq j \leq 100$). This result implies that the whole solution evolution can be accurately represented as a linear combination of the 3 eigenvectors related to the 3 highest eigenvalues. In order to impose the initial condition easily, it is possible to add the initial condition to these eigenvectors (even if the resulting approximation basis is then no longer orthogonal). Figure 7 depicts the resulting approximation functions, which consist of the 3 eigenfunctions related to the 3 highest eigenvalues and the initial condition, all of them normalized and referred to as $\Phi_j$. These functions allow defining matrix B and then the reduced model derived from Eq. (29):

$$B^T K B \xi^p = B^T \left( M B \xi^{p-1} + q^p \right)$$ (31)

which involves only 4 degrees of freedom. Even for nonlinear models with an implicit discretization, only the inversion of a matrix of size 4 is required at each time step.

If we assume that the initial condition has been placed in the first column of B, then the initial condition in the reduced basis writes $(\xi^0)^T = (1\ 0\ 0\ 0)$. From this condition, Eq. (31) can be applied to compute the whole time evolution. Obviously, the global solution can be obtained from the reduced one through the basis transformation relationship $T^p = B \xi^p$. Figure 8 compares a few temperature profiles obtained using the global model (Eq. (29)), already depicted in figure 6, with those obtained using the reduced model (Eq. (31)). An excellent accuracy can be noticed. This accuracy is not surprising because, as indicated before, the four approximation functions used are those related to the highest eigenvalues and consequently constitute the optimal reduced approximation basis.

Figure 7. Functions defining the reduced approximation basis.
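The workflow of this example (snapshot collection, extraction of the dominant eigenvectors, reduced time stepping) is easy to reproduce; the following sketch uses implicit finite differences instead of the finite elements of the text and a schematic scaling of the boundary flux, so the numbers serve only as an illustration:

```python
import numpy as np

# Illustrative 1D setup standing in for the discrete system of Eq. (29):
# K T^p = M T^{p-1} + q^p. Implicit finite differences replace the FEM of
# the text, and the scaling of the boundary flux q is schematic.
N, dt, a, dx = 100, 0.1, 0.01, 1.0 / 99
lap = (np.diag(np.full(N - 1, 1.0), -1) - 2.0 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1))
lap[0, 1] = lap[-1, -2] = 2.0                        # zero-flux (mirror) ends
K = np.eye(N) - dt * a / dx**2 * lap
M = np.eye(N)

def q(t):                                            # boundary source, Eq. (30)
    v = np.zeros(N)
    v[0] = dt / dx * (1.0 if t < 10.0 else 0.0)      # schematic flux loading
    return v

# 1) Full solve: collect 30 snapshots to build the correlation matrix
T, snaps = np.ones(N), []
for p in range(1, 301):
    T = np.linalg.solve(K, M @ T + q(p * dt))
    if p % 10 == 0:
        snaps.append(T.copy())
Q = np.column_stack(snaps)

# 2) Karhunen-Loeve: keep the eigenvectors of the 3 highest eigenvalues and
#    prepend the (normalized) initial condition, as done in the text
evals, vecs = np.linalg.eigh(Q @ Q.T)                # ascending eigenvalues
B = np.column_stack([np.ones(N) / np.sqrt(N), vecs[:, -3:]])

# 3) Reduced time stepping, Eq. (31): only a 4x4 system per step
Kr = B.T @ K @ B
xi = np.linalg.lstsq(B, np.ones(N), rcond=None)[0]
for p in range(1, 301):
    xi = np.linalg.solve(Kr, B.T @ (M @ (B @ xi) + q(p * dt)))
T_reduced = B @ xi                                   # compare with the full T
```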

In order to conclude on the applicability of this reduced approximation basis to simulate problems different from the one that served to compute it, we consider the thermal model defined in the same domain, with the same initial condition, but with a slightly different boundary heat source term:

$$q(t) = \begin{cases} \dfrac{t}{20} & 0 < t < 20 \\[4pt] \dfrac{t-30}{5} & t \geq 20 \end{cases}$$ (32)

Figure 9 compares the reference solution (continuous line), computed using the model represented by Eq. (29), with that obtained using the reduced model (31) (stars), where the reduced approximation basis consists of the four functions represented in Fig. 7, i.e. those associated with the thermal model related to the boundary condition given by Eq. (30). The accuracy is excellent, and somewhat unexpected given the non-obvious compatibility between the solutions of the problems defined by Eqs. (30) and (32): the approximation functions extracted from the solution of the thermal model defined by Eq. (30) turn out to describe accurately the solution of the thermal model related to Eq. (32).


Figure 8. Complete (continuous line) versus reduced (stars) model solutions.

Figure 9. Complete (continuous line) versus reduced (stars) model solutions related to the thermal model associated with the thermal source given by Eq. (32).

From this result we can begin to appreciate the potential of model reduction, but two questions remain open: (i) how to quantify the quality of a reduced solution without computing the global solution, and (ii) when a lack of accuracy is detected, how to enrich the reduced approximation basis in order to improve the solution accuracy. To address these questions, we consider the technique originally proposed by Ryckelynck [32], which consists in computing the solution residual defined at a certain time step by

$$R = K T^p - M T^{p-1} - q^p$$ (33)


This residual can be used to quantify the accuracy of the reduced solution, which answers the first question. Concerning the second one, assume that the residual resulting from the application of Eq. (33) is greater than a threshold value, i.e. $\|R\| \geq \varepsilon$. A natural choice consists in enriching the reduced approximation basis by adding this residual (which is orthogonal, in a Galerkin sense, to the approximation functions) and some of the Krylov subspaces related to it, according to the procedure previously described. For the thermal model related to the boundary source given by Eq. (32) and the reduced approximation basis depicted in Fig. 7, the residual norm at t = 30 was $\|R\| = 2.6 \times 10^{-5}$, which explains the excellent agreement between the global and reduced solutions noticed in Fig. 9. Figure 10 depicts the normalized residual ($R \leftarrow R / \|R\|$). Even though the residual is very small, it can be noticed that the largest deviations concentrate near the left boundary, where the boundary thermal source applies.

Figure 10. Normalized residual at t = 30 related to the solution associated with Eq. (32) and computed from the reduced approximation basis depicted in Fig. 7.

Now, we proceed to enrich the reduced approximation basis by introducing this residual (the first Krylov subspace) into matrix B according to $B \leftarrow (B, R)$. If the thermal model related to Eq. (32) is solved again, using the just-enriched reduced approximation basis, the norm of the residual decreases, which justifies introducing the Krylov subspaces related to the residual to improve the reduced solution accuracy.

3.6. Discussion

Similar results can be obtained with more complex thermal models. We have analyzed several different scenarios. One of them consisted of a thermal model that was solved for a particular evolution of the volumetric source term (the boundary conditions being fixed in this case). A first solution allowed us to define the reduced approximation basis by solving the eigenvalue problem and selecting the eigenvectors related to the highest eigenvalues. Then, different evolutions in time and space of the volumetric source term were considered and the associated solutions computed using the same reduced basis. The computed solutions were compared with the reference ones obtained using the complete approximation basis. In all the analyzed cases the agreement was very good, and it was always improved by adding to the reduced basis the computed residuals associated with the reduced basis solutions, as just described.

The previous numerical examples illustrate the capabilities of model reduction. The main originality lies in the ability to check the solution accuracy and, when needed, to adapt the reduced basis by adding the residual and some Krylov subspaces generated by it to the reduced approximation basis. These appealing capabilities were exploited to solve many models (see the works referenced in the introduction to this section). We have also found that models involving weak, fixed discontinuities (thermal models in non-homogeneous media) admit reduced approximation bases from which the evolving solutions can be accurately represented. When these discontinuities evolve within the domain, the situation becomes more complicated. In general, strong discontinuities moving within the domain do not admit a reduced description: if we know the solution evolution and apply the Karhunen-Loève decomposition, most of the eigenvalues must be retained in the model description. Other difficulties arise when the models involve moving meshes (updated Lagrangian formulations). In that case, a reduced basis can be extracted [31], but it cannot be used to solve models in which the evolution of the nodal positions differs from the one that served to define the reduced basis. The fact that level-set based descriptions of moving interfaces admit reduced approximations opens new perspectives within the framework of partition-of-unity based discretizations.

In what follows we focus on a direct consequence of the examples discussed in the previous section, concerning optimization. As is well known, performing optimization requires a minimization strategy (first or higher order) and a direct solver that is called for each tentative state of the design parameters. Because numerous direct problems must be solved, the computing time of the direct solver becomes crucial. The simplest alternative lies in solving the complete model for one (or a few) points within the design space to extract the reduced basis, and then, for any other point given by the minimization strategy, computing the solution from that reduced approximation basis. In any case, the solution can be improved by enriching the reduced basis with the strategy described above. In the next sections we illustrate the capabilities of such a procedure.

4. APPLICATION OF THE DRBEM REDUCED MODEL TO MOLD COOLING OPTIMIZATION

4.1. Reduced Model Coupled with DRBEM

We solve the eigenvalue problem defined in section 3, selecting the eigenfunctions $\phi_n$ associated with the largest eigenvalues $\alpha_n$ ($n \in [1, N]$, with N the number of nodes) such that the sum of the retained eigenvalues is greater than or equal to 99.9% of the sum of all N eigenvalues. In practice, n is much lower than N. The B matrix is then assembled and used to approximate the temperature. The main steps of the direct simulation coupled with the reduced model are summarized in figure 11.
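As an illustration, this 99.9% selection rule can be implemented by a small helper of the following kind (names are ours; eigenvalues are assumed sorted in descending order, one eigenvector per column):

```python
import numpy as np

def select_modes(eigvals, eigvecs, energy=0.999):
    # Keep the smallest leading set of eigenpairs whose eigenvalue sum reaches
    # the prescribed fraction (99.9% here) of the total; the retained columns
    # form the reduced basis B.
    frac = np.cumsum(eigvals) / np.sum(eigvals)
    n = int(np.searchsorted(frac, energy)) + 1
    return eigvecs[:, :n]
```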

Figure 11. Direct DRBEM simulation with reduced model.

4.2. Overall Optimization Methodology

In this section we formulate the problem in mathematical programming form. In the sequel, x denotes the vector of optimization variables (position and shape parameters of the cooling channels). Since the output of the heat-transfer problem is a function of x, we make explicit the dependence of the temperature measurements upon the position and shape parameters. Most practical optimization problems involve several (often contradictory) objective functions. The simplest way to proceed in such a multicriterion context is to take as objective function a weighted sum of the various criteria, which requires choosing appropriate weighting parameter values. An obvious alternative is to use one criterion as objective function while imposing, through constraints, maximal threshold levels on the remaining criteria. We choose the latter approach here because we know a threshold value for the maximal temperature variation under which any variation is equally acceptable. More precisely, we formulate our problem in the form:

$$\min_{x} T(x) \quad \text{subject to} \quad f(T(x)) \leq 0 \ \text{and} \ g(x) \leq 0$$ (34)

where f is a real-valued function used to stipulate the temperature-uniformity constraint, and g(x) is a general vector-valued nonlinear function. The complete methodology for coupling the thermal solver and the optimization algorithm is presented in [16].


The general constraints $g(x) \leq 0$ represent any geometry-related or other industrial constraints, such as:

• upper/lower-bound constraints on the $x_i$;
• keeping the cooling channels within the mold;
• technically-forbidden zones where the cooling channels cannot be positioned (for instance, due to the presence of ejectors);
• constraints stipulating a minimal distance between every pair of cooling channels, to avoid inter-channel collisions.

A sketch of how this formulation can be set up numerically is given after this list.
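The following sketch shows how such a constrained formulation can be set up with an SQP solver; the surrogate thermal solver, bounds and thresholds are placeholders (assumptions), since the real objective evaluation calls the DRBEM reduced model:

```python
import numpy as np
from scipy.optimize import minimize

def simulate_mold(x):
    # Placeholder (assumption) for the reduced DRBEM thermal solver: returns
    # mold-surface temperatures for the channel layout x. A toy surrogate
    # stands in here so that the sketch runs end-to-end.
    P = x.reshape(-1, 2)                              # (X_i, Y_i) per channel
    s = np.linspace(0.0, 0.1, 50)                     # surface sample points
    return 150.0 - sum(20.0 * np.exp(-(s - px) ** 2 / 1e-3) for px, _ in P)

L_MIN, SIGMA = 0.02, 50.0                             # illustrative thresholds

def objective(x):                                     # min max_i T_i, cf. Eq. (35)
    return float(np.max(simulate_mold(x)))

def uniformity(x):                                    # sum|T_i - T_av| <= sigma
    T = simulate_mold(x)
    return SIGMA - float(np.sum(np.abs(T - T.mean())))

def spacing(x):                                       # pairwise distance >= L_MIN
    P = x.reshape(-1, 2)
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    return float(d[np.triu_indices(len(P), k=1)].min()) - L_MIN

x0 = np.linspace(-0.08, 0.08, 16)                     # heuristic starting layout
res = minimize(objective, x0, method="SLSQP",         # SQP with FD gradients
               bounds=[(-0.1, 0.1)] * 16,
               constraints=[{"type": "ineq", "fun": uniformity},
                            {"type": "ineq", "fun": spacing}])
```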

4.3. Application

In this section, we report numerical simulations on a 3D plastic part whose features are displayed in Figure 12 (units in mm). It is a semi-industrial injection mold designed for the European project Eurotooling 21.

Figure 12. Plastic part dimensions.

The mold is meshed using 5592 linear triangles and each cooling channel using 340 quadrangles (figure 13).

Figure 13. Upper-part of the mold mesh.

The thermo-physical properties of the polymer as well as of the mold material are given in table 1. The boundary conditions are the same as defined in section 2.2.

Table 1. Thermo-physical properties

                       Polymer (PP)    Mold (Steel)
λ [W·m⁻¹·K⁻¹]          0.63            34
ρ [kg·m⁻³]             891             7800
Cp [J·kg⁻¹·K⁻¹]        2740            460

The history matrix, corresponding to the first injection cycle, is computed using the transient DRBEM code. The mold temperature for the next injection cycle is computed using the reduced model. The optimization objective consists in minimizing the maximal temperature while bounding the temperature variations:

$$\min \; \max_{i \in N} T_i \quad \text{subject to} \quad \sum_{i \in N} \left| T_i - T_{av} \right| \leq \sigma$$ (35)

where N is the number of elements and $T_{av}$ the average surface temperature. For illustration purposes, we consider here 8 cooling channels; the constraints and optimization variables are sketched in figure 14. The geometrical optimization parameters are the coordinates of the end points P1 and P2 of each cooling channel (figure 14). Since P2 can be expressed in terms of the other coordinates and since the channel length L is constant, the parameters locating the i-th cooling channel are completely determined by $P_1^i = (X_i, Y_i, Z_i)$, $i = 1, \ldots, 8$. For this application, $Z_i$ is fixed, and the problem therefore reduces to 16 optimization variables.

Figure 14. Constraints and optimization variables.


Figure 15. Convergence history.

We use as starting point a heuristic solution provided by an experienced engineer. On average, one objective function evaluation (one direct computation) requires 14 min of CPU time. Since gradients are computed by finite difference approximation (associated with the SQP method), one optimization iteration takes about 4 hours of CPU time on a Macintosh with a 1.83 GHz Intel Core 2 Duo. 24 optimization iterations were necessary to achieve convergence for one injection cycle (figure 15). The surface temperature distribution of the mold is presented in figure 16, while the temperature profiles at the mold surface before and after optimization are shown in figure 17. We observe that both the temperature variance and the average temperature decrease significantly.

Figure 16. Surface temperature distribution of the mold.


Figure 17. Temperature profile at the surface of mold cavity before and after optimization (z=0.02 m).

Without the reduced model, a complete optimization requires 96 hours. Using the reduced model together with the DRBEM, the CPU time drops to 7 h 40 min (one direct computation of 14 minutes plus 24 optimization iterations of approximately 18 minutes each): the CPU time is divided by approximately 13.

CONCLUSION

We introduced a methodology based on the use of the DRBEM to solve the 3D heat transfer equation during the cooling step of the injection molding process. Preliminary computational tests on a semi-industrial plastic part showed that the approach is viable for optimizing the design of cooling channels for injection molding. The numerical modeling and optimization methodology can easily take into account a wide range of industrial constraints, and various optimization criteria can be provided by the user (either directly as a cost function or within the constraints). Another interesting aspect consists in using this technique to compute multi-cycle injection mold cooling: the reduced model allows extracting from the first cycle the relevant eigenfunctions associated with the dominant eigenvalues and, consequently, computing all the other cycles very rapidly.

REFERENCES

[1] Agassant, J.-F.; Avenas, P.; Sergent, J.-Ph.; Carreau, P.J. Polymer Processing – Principles and Modelling; Hanser Publishers: New York, 1991.
[2] Osswald, T.A. Polymer Processing – Fundamentals; Hanser Publishers, 1998.
[3] Wesselmann, M.H. Impact of moulding conditions on the properties of short fibre reinforced high performance thermoplastic parts. PhD Thesis, Ecole des Mines Albi, 1998.
[4] Opolski, S.W.; Kwon, T.W. In Annual Technical Conference and Exhibition, Society of Plastics Engineers, 1987, pp. 264–268.
[5] Lu, Y.; Li, D.; Xiao, J. Progress in Natural Science. 1996, 6, 227–234.
[6] Yang, S.Y.; Lien, L. Advances in Polymer Technology. 1996, 15, 289–295.
[7] Moller, J.; Carlson, M.; Alterovitz, R.; Swartz, J. In Proceedings of the 56th Annual Technical Conference ANTEC 98, 1998, pp. 525–527.
[8] Boillat, E.; Glardon, R.; Paraschivescu, D. Journal de Physique IV. 2002, 102, 27–38.
[9] Bialecki, A.; Jurgas, P.; Kuhn, G. Engineering Analysis with Boundary Elements. 2002, 26, 227–236.
[10] Bikas, A.; Kanarachos, A. In Proceedings of the 57th Annual Technical Conference ANTEC 99, 1999, pp. 578–583.
[11] Tang, L.; Pochiraju, K.; Chassapis, C.; Manoochehri, S. Journal of Mechanical Design. 1998, 120, 165–174.
[12] Fletcher, R. Practical Methods of Optimization; Wiley, 1987.
[13] Huang, J.; Fadel, G. Journal of Mechanical Design. 2001, 123, 226–239.
[14] Park, S.; Kwon, T. Polymer Engineering and Science. 1998, 38, 1450–1462.
[15] Mathey, E.; Penazzi, L.; Schmidt, F.M.; Ronde-Oustau, F. In Proceedings of the NUMIFORM 04 Conference, 2004, pp. 222–227.
[16] Pirc, N.; Schmidt, F.M.; Mongeau, M.; Bugarin, F.; Chinesta, F. In Proceedings of the 12th ESAFORM Conference on Material Forming, 2009.
[17] Pirc, N.; Bugarin, F.; Schmidt, F.M.; Mongeau, M. International Journal for Simulation and Multidisciplinary Design Optimization. 2008, 2, 245–252.
[18] Pasquetti, R.; Petit, D. Engineering Analysis with Boundary Elements. 1995, 15, 197–205.
[19] Sutradhar, A.; Paulino, G.H.; Gray, L.J. Engineering Analysis with Boundary Elements. 2002, 26, 119–132.
[20] Godinho, L.; Tadeu, A.; Simoes, N. Engineering Analysis with Boundary Elements. 2004, 28, 593–606.
[21] Brebbia, C.A.; Chen, C.S.; Power, H. Communications in Numerical Methods in Engineering. 1999, 15, 137–150.
[22] Brebbia, C.A.; Dominguez, J. Boundary Elements, An Introductory Course; WIT Press/Computational Mechanics Publications, 1992.
[23] Karhunen, K. Ann. Acad. Sci. Fennicae, Ser. A1, Math. Phys. 1946, 37.
[24] Loève, M.M. Probability Theory; The University Series in Higher Mathematics, 3rd Ed.; Van Nostrand: Princeton, NJ, 1963.
[25] Lorenz, E.N. Empirical Orthogonal Functions and Statistical Weather Prediction; MIT, Department of Meteorology, Scientific Report N1, Statistical Forecasting Project, 1956.
[26] Sirovich, L. Quarterly of Applied Mathematics. 1987, XLV, 561–571.
[27] Holmes, P.J.; Lumley, J.L.; Berkooz, G.; Mattingly, J.C.; Wittenberg, R.W. Physics Reports. 1997, 287.
[28] Krysl, P.; Lall, S.; Marsden, J.E. Int. J. Numer. Meth. Engng. 2001, 51, 479–504.
[29] Park, H.M.; Cho, D.H. Chem. Engineer. Science. 1996, 51, 81–98.
[30] Ryckelynck, D.; Hermanns, L.; Chinesta, F.; Alarcón, E. Engineering Analysis with Boundary Elements. 2005, 29, 796–801.


[31] Ryckelynck, D.; Chinesta, F.; Cueto, E.; Ammar, A. Archives of Computational Methods in Engineering, State of the Art Reviews. 2006, 13, 91–128.
[32] Ryckelynck, D. Journal of Computational Physics. 2005, 202, 346–366.
[33] Ammar, A.; Ryckelynck, D.; Chinesta, F.; Keunings, R. Journal of Non-Newtonian Fluid Mechanics. 2006, 134, 136–147.
[34] Niromandi, S.; Alfaro, I.; Cueto, E.; Chinesta, F. Computer Methods and Programs in Biomedicine. 2008, 91, 223–231.
[35] Chinesta, F.; Ammar, A.; Lemarchand, F.; Beauchchene, P.; Boust, F. Computer Methods in Applied Mechanics and Engineering. 2008, 197, 400–413.
[36] Verdon, N.; Allery, C.; Béghein, C.; Hamdouni, A.; Ryckelynck, D. Communications in Numerical Methods in Engineering. In press, 2009.
[37] Ammar, A.; Pruliere, E.; Ferec, J.; Chinesta, F.; Cueto, E. European Journal of Computational Mechanics. In press, 2009.

In: Optimization in Polymer Processing ISBN: 978-1-61122-818-2 Editors: A. Gaspar-Cunha and J. A. Covas, pp. 193-220 ©2011 Nova Science Publishers, Inc.

Chapter 9

ESTIMATION AND CONTROL OF SHEET TEMPERATURE IN THERMOFORMING

Benoit Boulet¹, Md. Muminul Islam Chy² and Guy Gauthier³

¹ Department of Electrical & Computer Engineering, McGill University, 3480 University Street, Montreal, Canada.
² Centre for Intelligent Machines, McGill University, 3480 University Street, Montreal, Canada.
³ Department of Automated Manufacturing Engineering, École de Technologie Supérieure, 1100 Notre-Dame Ouest, Montréal, Canada.

1. INTRODUCTION

The thermoforming process is divided into three phases. The first phase is the heating of the plastic sheet in the oven (Figure 1). The heating phase is necessary to obtain a ductile sheet so that it can be molded; the temperature at which the sheet becomes ductile depends on the kind of polymer it is made of. Once the sheet has been heated to the adequate temperature, it goes into the second phase of the process, the molding and cooling of the sheet. There are various ways to make sure that the warm plastic sheet takes the shape of the mold; this can be done, for example, by creating a vacuum between the sheet and the mold. The plastic sheet then cools down and loses its pliability, which makes it keep the shape of the mold. The molded plastic part is then extracted from the mold and transferred towards the last phase of the process, trimming, in which the excess plastic material is removed to give the part its final shape.

In this chapter, we are interested in the heating phase of the plastic sheet. Heating is performed in a thermoforming oven. The oven contains radiant heaters located above and below the sheet. The temperatures of the heaters are controlled by local proportional, integral and derivative (PID) feedback loops. The heater temperature setpoints are adjusted by the operator of the thermoforming machine and the heater temperatures are measured by


thermocouples embedded in them. The operator adjusts the heater temperature setpoints iteratively such that the resulting thermoformed plastic parts achieve an acceptable quality.

Figure 1. Oven of a thermoforming machine.

To automate the oven, a temperature controller would have to adjust the heater temperature setpoints (like the human operator would) such that the temperature of the plastic sheet at the end of the heating cycle gets close to a desired temperature profile. The required temperature profile over the sheet at the end of the heating cycle depends on the desired final part thickness at different locations according to the intended application of the part. Therefore, it becomes very important to control the temperature profile over the sheet to the required temperature profile. Some drawbacks in the use of temperature controllers in the thermoforming process include complexity and the cost of accurate sensing of the sheet temperature. The sheet temperature profile at the end of the cycle and the rate of the heating process over the sheet depend on the efficiency of the controller, the number and locations of heater units, and the number and locations of sensors. To control the temperature profile at every point of the sheet with perfect accuracy, an infinite number of heater units in the oven and an infinite number of sensors to feedback the temperature would be required, which is not feasible. The challenge is then to estimate sheet temperature, and hence control the temperature profile of the sheet, using a small number of sensors and a fixed number of heaters in the oven. As heating is the first phase of the thermoforming process, the remaining phases of the process strongly depend on the outcome of this phase. The specification of the temperature profile over the whole sheet is a crucial step which should be achieved at the end of the heating cycle, because the mechanical properties of the polymer largely depend on the temperature of the sheet [1-3]. A change in temperature of the sheet results in a change in mechanical properties such as fluid behavior index and fluid viscosity of the plastic sheet [4,5]. In the forming phase, the plastic sheet is formed in the desired shape over the mould depending on these properties of the plastic. Moreover, different heat conduction boundary conditions exist within the part due to different sections of the polymer making contact with the multi-cavity mould boundary at different times. Due to these reasons, non-uniform


temperature profiles are often required to influence the mechanical formability of the plastic sheet based on the shape of the part. The required sheet temperature profile at the end of the heating phase depends on the desired final part thickness at different locations and on the intended application of the part. It therefore becomes a challenge to control the sheet temperature, based on its estimation, using a minimum number of sensors and a fixed number of heaters in the oven [6]. Estimation of the complete temperature profile over the whole sheet is necessary for accurate and efficient control of the temperature, although the number of sensors should be low to minimize the cost of a real-time implementation. In previous work, the signal from each sensor was used directly as the feedback signal to compare with the desired temperature at that point of the sheet [7,8]. Such a controller, acting on the error signals, works to minimize the difference between the actual and desired temperatures in the sensed zones, but the differences between the actual and desired temperatures at points between any two sensors cannot be optimized. If the sensor output can be used to estimate or interpolate the whole temperature profile over the sheet, and this information is incorporated in the design of the controller so that it controls the whole sheet temperature distribution instead of the temperatures at some particular points, then the sheet temperature can reach the profile required for the forming phase. Virtual sensors have been added to the set of infrared sensors so that more measurements could be incorporated into the design of the controller to control the temperatures at more points on the sheet [6]; the temperatures given by the virtual sensors are estimated from the real sensor temperatures using a weighted average based on distance. Although more points can then be incorporated in the design of the controller to optimize the difference between desired and actual temperatures at these virtual sensor points, the difference between the desired and actual whole-sheet temperature profiles cannot be optimized by this method. Here, a new method is presented for estimating the temperature over the sheet, based on a Fourier decomposition of the temperature profile into spatial harmonics. Controlling the spatial harmonics, instead of the temperatures at specific points, improves control performance over the whole sheet.

Another problem in implementing a temperature controller on a thermoforming machine is the inverse heating problem (IHP). Complex thermal couplings between a large number of heaters and the sheet, combined with low sensitivity, increase the complexity of the calculation of the heater temperature set points required to reach the prescribed sheet temperature profile at the end of the heating cycle. The standard direct heat transfer problem is well-posed because the solution exists, is unique and is stable under small changes in the input data, but the IHP can be ill-posed or underdetermined [9-12]. Analytical methods such as the exact method [13], the polynomial method [14] and the integral method [15] can be used to solve the IHP, but they are limited to one-dimensional linear problems with particular initial and boundary conditions that are not suitable for the solution of the IHP of the heating phase in thermoforming.

Some heuristic methods of solution were based on pure intuition rather than on mathematical formalism. Some of them are still used to solve the IHP in different processes because of their capability to treat complex problems, even if they are not accurate. Because of the computational cost and the potential ill-posedness of the thermoforming reheat process, the pseudo-inverse of the view factor matrix (which is basically the ratio of energy leaving each heater surface that reaches the surface of a zone on the sheet) has been proposed. Although the computational cost of this method is low, the solution may be erroneous. In other works, a neural network is used to solve the IHP [16]. Kudo et al. [17] and Franca et al. [18] solved the IHP for radiating systems. There are also methods such as the adaptive filtering algorithm, used to solve the IHP based on a probabilistic approach [19]. Although discrete methods [20] have the advantage of being applicable to any problem, they may exhibit oscillations due to the unstable nature of the inverse problem, and they are costly in terms of computation and require a large memory for real-time implementation. As in the case of combined heat transfer problems, the discretization may lead to a system of ill-conditioned equations with nonlinearities, and an iterative method may be a good choice for solving such systems. In the iterative regularization principle, a sequential improvement of the solution takes place, and the solution of the IHP is found at the end of the iterative procedure. Many algorithms exist to produce robust and stable estimates of the solution of the IHP, and they are frequently used in metallurgy, the chemical industry, aerospace and electronics. As the conjugate gradient method is a good tool for solving linear and nonlinear equations, such a method is proposed here for solving the IHP of the heating phase of the thermoforming process, such that it can provide setpoint temperatures for the heaters that produce a specific temperature distribution in the plastic sheet after a predefined cycle time.

2. ESTIMATION AND CONTROL OF SHEET TEMPERATURE USING THE 2D FOURIER TRANSFORM

2.1. Estimation of Temperature

The effectiveness of a feedback controller depends on the accuracy of the measurement of the output. In the context of the heating phase in thermoforming, the temperature of the sheet is fed back to compare with the setpoint temperature, and a good estimation is necessary prior to the design of the controller to control the temperature accurately. Thus, the main goal of the sheet temperature estimation technique is to obtain sufficient accuracy with a low number of sensors, which is directly related to the cost of the control system. In the proposed technique, the temperature of the whole sheet can be estimated instead of only certain points. The temperature profile over the sheet is assumed band-limited in terms of spatial frequencies; this means that no large temperature gradient exists in the sheet temperature, making it a smooth profile. In this case, the temperature profile can be expressed as a combination of a limited number of spatial sinusoidal harmonics.

The Fourier Transform is an important signal processing tool which can be used to decompose the temperature profile into its sine and cosine components at harmonic spatial frequencies. The output of the transformation represents the temperature distribution of the sheet in the spatial frequency domain, or Fourier domain, while the input is the temperature distribution in the plastic sheet. In the Fourier domain, each point represents a particular spatial frequency contained in the temperature distribution. The Fourier Transform is used in a wide range of applications, such as image analysis, image filtering and image reconstruction, and it can be used here to reconstruct the 2D temperature profile of the sheet. The real sensor outputs can be used as sampled values of the temperature distribution over the sheet, so the number of sensors required to estimate the sheet temperature will depend on the spatial bandwidth of the temperature distribution and on the accuracy required by the controller. For a rectangular temperature map of size N×M, the two-dimensional Discrete Fourier Transform (DFT) is given by:

$$F(m,n) = \sum_{k=0}^{N-1} \sum_{l=0}^{M-1} f(k,l)\, e^{-i 2\pi \left( \frac{km}{N} + \frac{ln}{M} \right)}$$ (1)

where f(k,l) is the temperature profile in the spatial domain, F(m,n) are the Fourier coefficients of the spatial harmonics representation of the temperature profile, and the exponential term is the basis function (harmonic) corresponding to F(m,n) in the Fourier space. The equation can be interpreted as follows: the value of each point F(m,n) is obtained by multiplying the spatial temperature profile over the sheet with the corresponding basis function and summing the result. The basis functions are cosine and sine waves (real and imaginary parts of the complex exponential) with increasing frequencies, i.e., F(0,0) represents the DC component of the temperature, which corresponds to the average temperature of the sheet, and F(N-1, M-1) represents the highest frequency. In a similar way, the Fourier map can be transformed back into the spatial domain. The inverse discrete Fourier transform is given by:

$$f(k,l) = \frac{1}{NM} \sum_{m=0}^{N-1} \sum_{n=0}^{M-1} F(m,n)\, e^{i 2\pi \left( \frac{km}{N} + \frac{ln}{M} \right)}$$ (2)

The DFT is the sampled Fourier Transform and therefore does not contain all frequencies forming the temperature profile, but only a set of samples large enough to fully describe the spatial-domain discretized temperature distribution. The number of frequencies corresponds to the number of sensors in the sheet, i.e., the temperature distributions in the spatial and Fourier domains are of the same size. If the arrangement of the sensors in the sheet is not sufficiently dense to sample the temperature distribution according to the Nyquist criterion, then the exact temperature distribution cannot be estimated as a result of this under-sampling: some of the higher spatial frequencies become aliased to lower spatial frequencies in the sampled temperature profile, which creates undesirable artifacts that decrease the accuracy of the estimation. Fortunately, the temperature profile over the sheet is usually a band-limited function, which means that its spatial frequency domain contains components whose magnitudes get smaller at higher spatial frequencies. The two-dimensional Fourier transform indicates the spatial frequency in the two directions of the temperature distribution on the sheet. The maximum spatial frequency in any direction that can be recovered from the sensor samples is determined by the Nyquist theorem as:

$$f_{max(x)} = \frac{2}{dx}$$ (3)

where $f_{max(x)}$ is the maximum spatial frequency (shortest wavelength) in the x direction that can be recovered, and dx is the distance between two consecutive sensors in the x direction. Thus, if the arrangement of sensors satisfies the Nyquist criterion, the temperature profile can be exactly reconstructed or estimated from the real sensor output.

Original temperature along the wire

20

Temperature

15

10

5

0 0

1

2

3 4 Distance along the length of the wire

5

6

Figure 2 (a). Original temperature distribution along the wire length and temperature at uniformlyspaced sensors.

25

Temparature at sensor position

20

15

10

5

0 0

1

2

3

4

5 6 Temperature sensor number

Figure 2 (b). Temperature at uniformly-spaced sensor positions.

7

8

9

10

11

Estimation and Control of Sheet Temperature in Thermoforming

199

160

140

120

FFt value

100

80

60

40

20

0

-20 1

2

3

4

5

6

7

8

9

10

FFT Sample

Figure 2 (c). Magnitude of the FFT of the sensor point temperatures.

160

140

120

FFT value

100

80

60

40

20 Zeros' are padded here 0

-20 0

20

40

60 FFT sample

80

100

Figure 3 (a). Magnitude of the FFT of the sensor point temperatures after zero padding.

200

Benoit Boulet, Md. Muminul Islam Chy and Guy Gauthier

25 Temperature at sensor point Reconstructed value of temperature

Reconstructed value of temperature

20

15

10

5

0 0

1

2

3 Distance along the wire

4

5

6

Figure 3 (b). Reconstructed temperature distribution along the wire length by IFFT.

2.2. Interpolation by Zero Padding Technique A function can be reconstructed exactly using its samples if the sampling distance satisfies the Nyquist criterion. This technique can be described using Fig.3 showing a onedimensional temperature profile along a wire. The Fourier transform of the samples are used to get the frequency contents of the profile. When the frequency response of the profile is compressed, which means zeros are padded from the high frequency side of the Fourier transform, the frequency response of the original signal is not changed. Because the magnitude and phase at frequencies lower than the maximum detectable frequency remain unchanged and new high-frequency samples with zero magnitude are added, zero padding has the same effect as low pass filtering of the samples by adding zeros at high frequency. So the inverse Fourier transform will recover the signal with the same number of extra samples as the number of zeros added which means that to interpolate the function by a factor N, {(N1)*number of samples} zeros have to be added.

2.3. Simulation Results for the Estimation of Temperature by 2D FFT The proposed method for estimation of a temperature profile is investigated through simulation in this section. A spatial temperature distribution on the sheet is considered as:

T ( x, y ) = 150 − 50e − ( x

2

+ y2 )

.

(4)

Estimation and Control of Sheet Temperature in Thermoforming

201

Estimated sheet temperature (C)

Estimated sheet temparature (C)

The proposed estimation technique is used to predict this temperature profile distributed over the sheet. The resulting estimated temperatures with different combinations of sensors are shown in Fig.4. The corresponding error in the estimation of the temperature profile over the sheet is shown in Fig.5. It is observed from the results of the simulation that the error in estimation is decreasing with the increase of the number of sensors. Using only one sensor, the estimated value of the temperature is equal to an average temperature which is the same as the value of the sensor temperature. The estimated temperature is fairly accurate with 4 sensors (2X2 array.) Then, increasing the number of sensors can capture more spatial harmonics of the temperature profile which provides a more accurate estimate. Another noticeable result is that the magnitude of the error is low between the sensors as compared to the outside of the area surrounded by sensors. Thus, the error is going to be larger in the periphery of the sheet. As expected, the error is zero at the sensor points. The whole temperature distribution over the sheet is predicted accurately using the proposed FFT technique.

135 134 133 132 131 130 2 1 Y-axis of sheet

0

0

120 110 100 90 2

Y-axis of sheet

X-axis of sheet

0

0

0.5

1

2

1.5

X-axis of sheet

(b)

150 140 130 120 110 100 90 2 1 y-axis of sheet

0

0

0.5

1

1.5

x-axis of sheet

2

Estimated sheet temperature (C)

Estimation of sheet temperature (C)

130

1

(a)

(c)

140

2

1.5

1

0.5

150

150 140 130 120 110 100 90 2 1 0

0

0.5

1

1.5

2

X-axis of sheet

Y-axis of sheet

(d)

Figure 4. The predicted temperature profile over the sheet using (a) 1 sensor (b) 4 sensors (2x2) (c) 9 sensors (3x3) (d) 16 sensors (4x4).

2.4. Incorporation of the FFT-based Temperature Estimate into the Design of the Controller

One problem of the baseline controller design for the heating phase is the coupling among the plant inputs, which are the outputs of the controller [22]. The baseline controller generates the control input based on the error signal between the infrared sensor reading and the setpoint temperature at that location. But in the oven, the control inputs are highly coupled with each other. This means that the temperature at one sensor point cannot be controlled independently, due to the coupling among the heaters heating the zone. Because of this limitation, conventional single-loop proportional integral (PI) or proportional integral derivative (PID) controllers that work on the output of a sensor cannot be used efficiently.


x-axis of sheet

(d)

Figure 5. The error between the actual temperature profile and predicted temperature profile over the sheet using (a) 1 sensor (b) 4 sensors (2x2) (c) 9 sensors (3x3) (d) 16 sensors (4x4).

The frequency-domain Fourier representation is a method of expressing the spatial distribution of temperature as the sum of its projections onto a set of orthonormal basis functions. If the controller is designed to control the spatial harmonics, instead of controlling the output of each sensor in the conventional way, two major problems are solved. First, being coefficients on orthogonal basis functions, the spatial harmonics are decoupled from each other and can therefore be controlled independently. Second, by controlling the spatial harmonics, the controller controls the temperature over the whole sheet instead of the temperature at certain points. The complete algorithm of the proposed spatial harmonic controller is explained in Fig. 6. The surface temperature of the sheet is measured by infrared sensors; as mentioned above, the number of sensors used depends on the expected temperature gradient, so that they can recover the spatial harmonics required to control the temperature profile efficiently. The output of the infrared sensors is processed by the 2D FFT to get the spatial frequency spectrum of the measured temperature profile, and the specified temperature profile is processed through the same 2D FFT transformation to get the spatial frequency content of the setpoint. The 2D FFT transformation is basically a matrix multiplication, as given in Equation (1), so its computational cost is low. The setpoint harmonics are compared with the actual temperature profile harmonics obtained from the sensors, and the spectral coefficients of the error are passed to the PI controller. The output of the PI controller is processed through the 2D IFFT block to convert it into the spatial temperature domain on the sheet. Note that the 2D IFFT transformation is also a matrix multiplication, just like the 2D FFT.
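The loop formed by the 2D FFT, the PI action on each spectral coefficient and the 2D IFFT can be sketched as follows (the gains, grid size and class name are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

class HarmonicPI:
    """PI action applied independently to each spatial-harmonic coefficient."""
    def __init__(self, shape, kp=0.8, ki=0.05, dt=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = np.zeros(shape, dtype=complex)

    def step(self, T_setpoint, T_measured):
        E = np.fft.fft2(T_setpoint) - np.fft.fft2(T_measured)  # harmonic errors
        self.integral += E * self.dt                           # integral state
        U = self.kp * E + self.ki * self.integral              # decoupled PI
        return np.real(np.fft.ifft2(U))      # correction map in space domain

ctrl = HarmonicPI((2, 2))
u = ctrl.step(150.0 * np.ones((2, 2)),
              np.array([[140.0, 142.0], [141.0, 139.0]]))
```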

 


Figure 6. Block diagram of the proposed spatial harmonic controller of heater bank.

The next block in Fig.6 is the inverse heating solver which computes the corresponding heater temperature set points of the heater bank. Recall that determining the set points of the heater temperatures that will heat the plastic sheet to the specified temperature at the end of the heating cycle is the inverse heating problem. There are three different forms of heat transfer known as conduction, convection and radiation by which the heater units heat the sheet. Among them, radiation is the fastest and predominant way of heating the sheet. Heating by radiation emitted from a heater to a sheet zone depends on the view factor between them. Thus, a pseudo-inverse of the view factor matrix is one of the computationally efficient choices to solve inverse heating problems. The view factor matrix is the matrix whose entries represent the view factors from a particular heater to a zone on the sheet. A view factor can be defined as a parameter which represents the fraction of the radiant energy exchanged between two surfaces having different area and orientation.
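As an illustration of this pseudo-inverse step, the following sketch inverts a toy view-factor relation for the heater set points; the matrix values, the lumped constant and the simple radiative balance are assumptions for demonstration only:

```python
import numpy as np

# Toy illustration: given the required net radiant power per sheet zone,
# recover heater set points through the view factor matrix F, using the
# linear relation in theta^4 implied by the radiative exchange law.
rng = np.random.default_rng(0)
Z, Mh = 12, 6                                   # zones and heaters (assumed)
F = rng.uniform(0.01, 0.2, size=(Z, Mh))        # view factors zone <- heater
c = 5.67e-8 * 0.9 * 0.05                        # sigma * eps_eff * A_h, lumped
T_zone = np.full(Z, 420.0)                      # current zone temperatures [K]
Q_req = np.full(Z, 150.0)                       # required radiant power [W]

# Q_k = c * ((F @ theta^4)_k - T_k^4 * sum_j F_kj)  =>  F @ theta^4 = rhs
rhs = Q_req / c + F.sum(axis=1) * T_zone**4
theta4 = np.linalg.pinv(F) @ rhs                # least-squares inversion
theta = np.clip(theta4, 0.0, None) ** 0.25      # heater set points [K]
```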

2.5. Simulation Results with the Fourier Controller

A simulation model was developed using Matlab Simulink, based on the model of the thermoforming heating process developed in [21]. The heater banks are composed of 6 heaters at the top and 6 at the bottom, as shown in Fig. 7. Four (2×2) real sensors are used to sense the temperature of the sheet. The performance of the proposed spatial harmonic controller is compared with the baseline controller for the thermoforming reheat process; the baseline controller used for comparison is described in [22], and uses 21 virtual sensors in addition to the four real infrared sensors.

The performance of the proposed controller is presented in Fig. 8, which shows the obtained temperature profile of the sheet and the corresponding error with respect to the setpoint after a 2000 second cycle. The temperature at the middle of the sheet is higher than at the two sides because it is heated by more heaters, but the controller sets the heater temperatures to minimize the error in sheet temperature, as a Fourier approximation is optimal in the sense of minimizing the energy of the error function. In Fig. 9(a), the temperature at the location of the real sensors is shown: the temperatures at those points follow the desired temperatures exactly, whereas the temperatures at points close to the edges cannot follow the setpoint temperatures, as shown in Fig. 9(b). As mentioned before, the temperature at every point on the sheet could be driven to the desired temperature only with an infinite number of heaters and sensors, which is practically impossible; instead, the proposed controller adjusts the heater bank temperatures to optimize the error over the sheet. In Fig. 10 and Fig. 11, the performance of the proposed controller is compared with the baseline controller for the desired temperature profile of equation (4). The error in the sheet temperature after 2000 seconds is clearly much smaller with the proposed controller. In the case of the baseline controller, the temperature at the real sensor points cannot follow the command temperature because of the inaccurate estimation of the temperature at the virtual sensor points. If virtual sensors are not used in the baseline controller, the temperatures at the real sensor points may follow the desired temperature, but the temperatures between these sensors will be off, because the baseline controller drives the real sensor points to the desired temperatures without seeking to optimize the error over the whole sheet.

Figure 7. Oven configuration and sensor positions.

Figure 8. (a) Obtained temperature profile after 2000 second cycle (b) Error between desired and obtained temperature profile using proposed controller.


Figure 9. The actual temperature and desired temperature at (a) the point of the real sensor (b) some extreme point of the sheet for a desired temperature profile of 150 C.


Figure 10. (a) Obtained temperature profile after 2000 second cycle (b) Error between desired and obtained temperature profile using baseline controller for a desired temperature profile of equation (4).

Benoit Boulet, Md. Muminul Islam Chy and Guy Gauthier

5

180

0

160

Error (C)

Temperature profile on the sheet (C)

206

140

-5 -10

120

-15 1

100 1

1.5 0.5

1.5 0.5

1 Y-axis

0.5 0

Y-axis

0

X-axis

1 0.5 0

0

X-axis

(b)

(a)

Figure 11. (a) Obtained temperature profile after 2000 second cycle (b) Error between desired and obtained temperature profile using proposed controller for a desired temperature profile of equation (4).


(b)

Figure 12. The actual temperature and desired temperature at the point of the real sensor for a desired temperature profile of equation (4) (a) baseline controller (b) proposed controller.

In Fig. 12, it is observed that, with the proposed controller, the temperatures of the real sensors converge toward the desired temperatures so as to obtain an optimal error over the whole sheet, whereas with the baseline controller the sheet temperature cannot follow the desired sheet temperature.

3. SOLVING THE INVERSE HEATING PROBLEM BY THE CONJUGATE GRADIENT METHOD

Before going into the main discussion, let us review the model that will be used for solving the direct and inverse heating problems.

3.1. Modeling of Sheet Reheat Phase in Thermoforming The model used in this section is developed in [21]. The interested reader can get details of the model therein, but it is briefly discussed here to illustrate the proposed method in solving the IHP. In this section, background modeling and mathematical notation are used which are incorporated into the proposed IHP solving method. Each IR temperature sensor looks at an area on the plastic sheet to perform the temperature measurement. Each area where the sensors are pointing is designated as a “zone”. To facilitate modeling, we assume that there are two IR sensors for each zone of the plastic sheet, one looking at the sheet from above and the other from below (Fig.13).

Figure 13. Zone and IR temperature sensors.

To analyze the propagation of heat inside the plastic sheet, heat transfer equations must be defined for some points inside the sheet. To do so, each zone is divided into layers throughout the thickness of the sheet (Fig. 14). The layers have the same thickness, and at the middle of each layer there is a point referred to as a node. For each node, a differential equation describes the heat exchange of the corresponding layer. Since the surface of the plastic sheet is an important boundary of energy exchange, a node is located directly at the surface (see Fig. 14). This forces the layer containing this point to have only half of its volume inside the plastic sheet, and hence its thickness is only half of that of the internal layers. The layers having their node at the surface are identified as “surface layers” and the other layers are designated as the “internal layers”.


Figure 14. Layers and nodes.

There are three ways (conduction, convection and radiation) to exchange energy between the heaters, the ambient air and the nodes. Neglecting the conduction of heat between two zones, the conduction heat transfer between a surface node and its adjacent node can be expressed as

$$\frac{dT_{su}}{dt} = \frac{2}{\rho V C_p} \left[ \frac{kA}{\Delta z} \left( T_{in} - T_{su} \right) \right]$$ (6)

where ρ is the density of the plastic sheet, Cp is the specific heat of the sheet, k is the heat conduction constant, Δz is the layer thickness, A is the zone area, V is the volume of the layer, $T_{su}$ is the surface node temperature and $T_{in}$ is the temperature of the node adjacent to the surface. Convection affects only the nodes of the surface layers and expresses the heat exchange between the ambient air and the sheet:

$$\frac{dT_{su}}{dt} = \left( \frac{2}{\rho V C_p} \right) h \left( T_\infty - T_{su} \right)$$ (7)

where h is the convection coefficient, $T_{su}$ is the surface node temperature and $T_\infty$ is the ambient air temperature. The radiant energy exchange transmits energy from the heaters to all the nodes of the plastic sheet and can be expressed as

$$\frac{dT_i}{dt} = \left( \frac{2}{\rho V C_p} \right) \sigma \varepsilon_{eff} A_h \left[ \sum_{j=1}^{M} \left( \theta_j^4 - T_i^4 \right) F_{k,j} \right]$$ (8)


where, σ is the Stefan Boltzmann constant, εeff is the effective emissivity, Ah is the area of the heater bank, Fk,j is the view factor between the j-th heater bank and the k-th zone,θj is the j-th heater bank temperature. Details of the method for calculating effective emissivity and view factors can be found in [21] and [22], respectively. If the infrared radiation is able to penetrate inside the plastic sheet, the surface node will not be able to absorb all the heat from the received incident radiant energy and if it penetrates through the thickness of the sheet, it will heat every node on its way and a part of the incident radiant energy is transmitted through the plastic sheet depending on the transmissivity factor. Combining all three forms of heat transfer into the equation for 2M heaters, Z zones and 2 nodes for each zone, and taking the transmissivity into account in the energy transfer from the radiant heaters to the plastic sheet, the model for the k-th zone in the heating phase becomes,

$$\frac{dT_{k,top}}{dt} = \frac{2}{\rho V C_p}\left\{\frac{kA}{\Delta z}\left(T_{k,2} - T_{k,top}\right) + h\left(T_{\infty top} - T_{k,top}\right) + \beta_1 Q_{RTk} + \beta_1\left(1 - \beta_1\right) Q_{RBk}\right\}$$

$$\frac{dT_{k,bottom}}{dt} = \frac{2}{\rho V C_p}\left\{\frac{kA}{\Delta z}\left(T_{k,N-1} - T_{k,bottom}\right) + h\left(T_{\infty bottom} - T_{k,bottom}\right) + \beta_1 Q_{RBk} + \beta_1\left(1 - \beta_1\right) Q_{RTk}\right\} \qquad (9)$$

where

$$Q_{RTk} = \sigma \varepsilon_{eff} A_h\left[\sum_{j=1}^{M}\left(\theta_j^4 - T_{k,top}^4\right) F_{kj}\right], \qquad Q_{RBk} = \sigma \varepsilon_{eff} A_h\left[\sum_{j=M+1}^{2M}\left(\theta_j^4 - T_{k,bottom}^4\right) F_{kj}\right], \qquad \beta_1 := 1 - e^{-A\Delta z/2}$$
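To make the structure of (9) concrete, the following minimal sketch (Python/NumPy) evaluates the right-hand side of the model for a single zone. The function signature, argument names and per-zone calling convention are our own illustrative assumptions; only the terms themselves follow equations (6)-(9).

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def zone_rhs(T_top, T_bot, T_2, T_Nm1, theta, F_k, rho, V, Cp, k, A, dz,
             h, T_inf_top, T_inf_bot, eps_eff, A_h, beta1):
    """Right-hand side of equation (9) for zone k (a sketch).

    theta : temperatures (K) of the 2M heater banks, top banks first
    F_k   : view factors from zone k to each of the 2M heater banks
    """
    M = len(theta) // 2
    # Radiative inputs from the top and bottom heater banks (the Q_RTk and
    # Q_RBk terms defined below equation (9))
    Q_RT = SIGMA * eps_eff * A_h * np.sum((theta[:M] ** 4 - T_top ** 4) * F_k[:M])
    Q_RB = SIGMA * eps_eff * A_h * np.sum((theta[M:] ** 4 - T_bot ** 4) * F_k[M:])

    c = 2.0 / (rho * V * Cp)
    # Conduction to the adjacent internal node, convection with the ambient
    # air, absorbed direct radiation, and radiation transmitted from the
    # other side of the sheet
    dT_top = c * ((k * A / dz) * (T_2 - T_top) + h * (T_inf_top - T_top)
                  + beta1 * Q_RT + beta1 * (1.0 - beta1) * Q_RB)
    dT_bot = c * ((k * A / dz) * (T_Nm1 - T_bot) + h * (T_inf_bot - T_bot)
                  + beta1 * Q_RB + beta1 * (1.0 - beta1) * Q_RT)
    return dT_top, dT_bot
```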

3.2. Solving the Direct Heating Problem

In this section, the direct heating problem is discussed and an algorithm is proposed to solve it. The main goal, though, is to solve the IHP, i.e., to find the heating element temperatures that, in a noise-free, perfect open-loop case, would produce the desired sheet temperature distribution. In such an inverse problem, one seeks the appropriate input that yields a predetermined result, whereas in the direct problem one calculates the output resulting from the application of a given input, which is typically easier. But in every iterative method for solving the IHP, the direct problem has to be solved at each iteration. The detailed algorithm for solving the direct heating problem is presented in Fig. 15(a). Its solution depends on the initial temperature of the sheet, the cycle time and the boundary conditions. Using the geometric configuration of the sheet and heaters together with the physical and mechanical properties of the plastic, all parameters of the model equations can be calculated. The differential equations are then solved with a finite difference method using a forward difference approximation, taking the final temperature of each step as the initial temperature of the next. Temperature-induced changes in the geometric configuration of the sheet and heating elements may be neglected, unless the processor is fast enough to take such changes into account. For example, sheet sag can be modeled; a consequence of the resulting change in the geometry of the heating problem is that the view factor matrix has to be updated at every sampling time. To incorporate this physical change into the direct problem solution, one can use the model equation from [9].
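A minimal sketch of this forward-difference time stepping is given below, assuming a generic callable that evaluates the node equations (for instance, assembled from equation (9)); the interface and names are ours, not the authors' implementation.

```python
import numpy as np

def solve_direct(rhs, T0, theta, t_final, dt):
    """Explicit (forward difference) integration of the sheet node model
    over one heating cycle: a sketch of the direct-problem solver.

    rhs(T, theta) -> dT/dt for all nodes, e.g. assembled from equation (9)
    T0            -> initial node temperatures
    theta         -> heater bank temperatures, held constant over the cycle
    """
    T = np.asarray(T0, dtype=float).copy()
    for _ in range(int(round(t_final / dt))):
        # The final temperature of each step becomes the initial
        # temperature of the next step
        T = T + dt * rhs(T, theta)
    return T
```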

3.3. Solution to the Inverse Heating Problem

The pseudo-inverse is a computationally efficient technique that can be used to solve the underdetermined inverse heating problem, namely, to determine the heater set points that will bring the plastic sheet to the specified temperature at the end of the heating cycle. As radiation is the fastest and dominant mode of heating the sheet, the pseudo-inverse of the view factor matrix, whose entries are the view factors from the heaters to points on the sheet, has been proposed for solving the IHP [22].
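Purely to illustrate the pseudo-inverse idea (the detailed formulation is in [22]), the sketch below assumes that the equilibrium sensor temperatures scale with the fourth powers of the heater temperatures through the Z x M view factor matrix F, and inverts that relation in the least-squares sense; this scaling and the neglected constants are our own simplifying assumptions.

```python
import numpy as np

def pseudo_inverse_setpoints(F, T_target):
    """Conventional IHP solution sketch: invert T_s^4 ~ F @ theta^4 with the
    Moore-Penrose pseudo-inverse.  Convection, conduction and the finite
    cycle time are ignored here, which is precisely the simplification
    criticized later in the text.
    """
    theta4 = np.linalg.pinv(F) @ (np.asarray(T_target, dtype=float) ** 4)
    return np.clip(theta4, 0.0, None) ** 0.25  # heater set points
```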

As the IHP can be mathematically ill-posed, it cannot be solved as easily as the direct heating problem. Moreover, the sensitivity of a sheet zone temperature to each individual heater is very low: a large change in heater temperature causes only a small change in sheet temperature. Because of this "weakness" of the actuator, the control of the heating phase, and hence the solution of the IHP, is difficult and may be numerically unstable. Since the IHP is harder to solve than the direct heating problem, a promising approach is to solve it by iteratively solving the direct heating problem, obtaining a more accurate result at every iteration. The proposed technique uses the conjugate gradient optimization method, a straightforward and powerful iterative technique for solving linear and nonlinear inverse problems. At each iteration, a suitable step is taken along a direction of descent so as to minimize the objective function. The direction is obtained as a linear combination of the negative gradient at the current iteration and the previous direction of descent; hence, the method minimizes the objective function not only along the negative gradient, but also along all previous descent directions. The proposed algorithm for solving the IHP of the heating phase in thermoforming is described below.

Let the desired temperature distribution at the sensor points on the sheet after $t_1$ seconds be $T_s = [T_{s1}\ T_{s2}\ \cdots\ T_{sZ}]^T$, and let $T_i = [T_{i1}\ T_{i2}\ \cdots\ T_{iZ}]^T$ be the temperatures obtained from the model after $t_1$ seconds at the same sensor points. The initial guess $T_{s(\mathrm{initial})}$ is taken as the current temperature of the sheet at the different points. The heater temperatures are denoted $\theta = [\theta_1\ \theta_2\ \cdots\ \theta_M]^T$, and the current heater temperatures are used in the first iteration. The superscript $k$ used in this algorithm indicates the $k$-th iteration. The objective function for minimizing the difference between the model output and the desired temperatures is selected as

$$R = (T_s - T_i)^T (T_s - T_i) \qquad (10)$$

Step 1: Solve the direct heat transfer problem of the model over $t_1$ seconds to obtain $T_i$, using the initial guess of the sheet temperature and the heater temperatures obtained in the previous iteration. The algorithm for solving the direct problem was discussed in Section 3.2.

Step 2: Calculate the objective function (10) with the solution obtained in Step 1. Its magnitude is compared with a prescribed error margin $\varepsilon$, and the iterative procedure is stopped when

$$R^{k+1} = (T_s - T_i^{k+1})^T (T_s - T_i^{k+1}) < \varepsilon \qquad (11)$$

Step 3: Compute the sensitivity matrix. Its entries can be expressed as

⎡ ∂T ⎤ S=⎢ i ⎥ ⎢⎣ ∂θ j ⎥⎦

T

⎡ ∂Ti1 ⎢ ∂θ ⎢ 1 ⎢ ∂Ti 2 = ⎢⎢ ∂θ1 ⎢ : ⎢ ⎢ ∂TiZ ⎢⎣ ∂θ1

∂Ti1 ∂θ 2 ∂Ti 2 ∂θ 2 : ∂TiZ ∂θ 2

∂Ti1 ⎤ ∂θ M ⎥ ⎥ ∂Ti 2 ⎥ .... ∂θ M ⎥⎥ : ⎥ ⎥ ∂TiZ ⎥ .... ∂θ M ⎥⎦ ....

(12)

The sensitivity matrix plays a significant role in the convergence to a stable solution. As the change of temperature at a particular sensor following a change in a heater's temperature is very small, the entries of the sensitivity matrix are small. The computation of the sensitivity matrix is also complex and costly; a simplified procedure for calculating it is presented in Section 3.4.

Step 4: The gradient of the objective function is obtained by differentiating equation (10) with respect to the heater temperatures:

$$\nabla R^k = -2 (S^k)^T (T_s - T_i^k) \qquad (13)$$

Step 5: The conjugate coefficient $\gamma^k$ can be calculated in several ways; the Polak-Ribière expression is used in the proposed method:

$$\gamma^k = \frac{\displaystyle\sum_{j=1}^{M}\left[\nabla R^k\right]_j \left(\left[\nabla R^k\right]_j - \left[\nabla R^{k-1}\right]_j\right)}{\displaystyle\sum_{j=1}^{M}\left[\nabla R^{k-1}\right]_j^2} \quad \text{for } k = 1, 2, 3, \ldots, \qquad \gamma^0 = 0 \qquad (14)$$


Step 6: The direction of descent is computed as the linear combination of the negative gradient of the objective function and the direction of descent of the previous iteration:

$$d^k = -\nabla R^k + \gamma^k d^{k-1} \qquad (15)$$

Step 7: The step size $\beta^k$ is computed such that the new estimate of the heating element temperatures minimizes the objective function along the direction of descent:

$$\min_{\beta^k} R^{k+1} = \min_{\beta^k}\, (T_s - T_i^{k+1})^T (T_s - T_i^{k+1}) \qquad (16)$$

Expanding $T_i^{k+1}$ in a Taylor series and solving the optimization problem (16) with respect to $\beta^k$ yields

$$\beta^k = -\frac{\left[S^k d^k\right]^T \left[T_i^k - T_s\right]}{\left[S^k d^k\right]^T \left[S^k d^k\right]} \qquad (17)$$

Step 8: Compute the new estimate of the heater temperatures:

$$\theta^{k+1} = \theta^k + \beta^k d^k \qquad (18)$$

Step 9: Increase the iteration index by 1 and return to Step 1.
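The nine steps above can be collected into a compact solver. The sketch below follows Steps 1-9 and equations (10)-(18) directly; the `direct` and `sensitivity` callables stand for the direct-problem solver of Section 3.2 and the sensitivity matrix of Section 3.4, and these wrapper interfaces are our own assumptions.

```python
import numpy as np

def solve_ihp_cgm(direct, sensitivity, theta0, T_s, eps, max_iter=50):
    """Conjugate gradient IHP solver (Steps 1-9, a sketch).

    direct(theta)      -> model sensor temperatures T_i after t1 seconds
    sensitivity(theta) -> Z x M sensitivity matrix S of equation (12)
    """
    theta = np.asarray(theta0, dtype=float).copy()
    grad_prev, d = None, None
    for _ in range(max_iter):
        T_i = direct(theta)                       # Step 1: direct problem
        r = T_s - T_i
        if r @ r < eps:                           # Step 2: stop test, eq. (11)
            break
        S = sensitivity(theta)                    # Step 3: eq. (12)
        grad = -2.0 * S.T @ r                     # Step 4: gradient, eq. (13)
        if grad_prev is None:
            d = -grad                             # Step 5: gamma^0 = 0
        else:
            # Polak-Ribiere conjugate coefficient, eq. (14)
            gamma = grad @ (grad - grad_prev) / (grad_prev @ grad_prev)
            d = -grad + gamma * d                 # Step 6: descent, eq. (15)
        Sd = S @ d
        beta = -(Sd @ (T_i - T_s)) / (Sd @ Sd)    # Step 7: step size, eq. (17)
        theta = theta + beta * d                  # Step 8: update, eq. (18)
        grad_prev = grad                          # Step 9: next iteration
    return theta
```

In practice, the loop would also guard against vanishing denominators in (14) and (17) and could restart the conjugate directions periodically, but those refinements are omitted from this sketch.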

3.4. Sensitivity Matrix Calculation

The sensitivity matrix plays a central role in computing the solution to the inverse heating problem. It can be computed off-line at selected operating points and then used on-line, with interpolation according to the current operating point of the heaters. A finite difference approximation is used here to compute the sensitivity coefficients: the forward difference

$$S_{ij} = \frac{T_i^{cyc} - T_i^{ss}}{\Delta\theta_j} \qquad (19)$$

approximates the derivative in (12),

where $S_{ij}$ is the $(i, j)$ entry of the sensitivity matrix, $T_i^{cyc}$ is the temperature of the $i$-th sensor at the end of the heating cycle, $T_i^{ss}$ is the steady-state temperature of the $i$-th sensor before the step change is applied to the heater, and $\Delta\theta_j$ is the magnitude of the step change in the temperature of the $j$-th heater.
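A sketch of this computation is shown below; `direct(theta)` is assumed to return the sensor temperatures at the end of the heating cycle, the steady-state reference of equation (19) is approximated by the unperturbed response, and the step magnitude is an assumed value.

```python
import numpy as np

def sensitivity_fd(direct, theta, d_theta=5.0):
    """Forward-difference approximation of the sensitivity matrix, eq. (19).

    direct(theta) -> sensor temperatures at the end of the heating cycle
    d_theta       -> step change applied to one heater at a time (assumed)
    """
    theta = np.asarray(theta, dtype=float)
    T_ref = direct(theta)                   # unperturbed response (T_i^ss role)
    S = np.zeros((len(T_ref), len(theta)))
    for j in range(len(theta)):
        theta_step = theta.copy()
        theta_step[j] += d_theta            # step change on the j-th heater only
        S[:, j] = (direct(theta_step) - T_ref) / d_theta
    return S
```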

Figure 15. (a) Algorithm for solving the direct heating problem. (b) Algorithm for solving the inverse heating problem.

3.5. Simulation Results of the Proposed Solution Method for the Inverse Heating Problem


The proposed IHP solver is programmed to give the set points of the heaters that bring the sheet to the desired temperature at the nine sensor locations (four real and five virtual sensors) in 50 seconds. Both the conventional and the proposed methods are used to find the heater temperature set points such that the sheet achieves the desired uniform temperature of 80°C, shown in Fig. 16, at the end of the 50 s, with initial sheet and air temperatures of 50°C. The set points for the 6 heaters obtained with the conventional method are shown in Fig. 17(a). The Simulink model was then used to compute the sheet temperature with these set points; the result after 50 s and the corresponding error are shown in Figs. 17(b) and 17(c). Large errors are observed because the conventional method treats the system as linear and, furthermore, neither the cycle time nor the current operating point of the system is incorporated in solving the IHP.

Figure 16. The desired sheet temperature for the inverse heating problem.

The same simulation for the proposed method is presented in Fig. 18. The errors are significantly reduced: as the proposed method considers the temperatures at the sensor points, the error at these points is very small. Although the proposed method requires more computation than the conventional method, it gives more accurate results. It is observed that the proposed method converges to the solution in fewer than 20 iterations. As the heating phase is a slow process, the sampling period in a real-time application is as long as 1 or 2 seconds, so the increase in computation due to the proposed IHP solver does not overburden the processor. The performance of the conventional and proposed methods is compared at different operating points in Tables 1, 2 and 3. In Table 1, the heater temperature set points are solved for a desired sheet temperature of 120°C and the heaters are driven to those set points; the corresponding sheet temperatures at the sensor points are measured after 50 s. The solution of the proposed method gives accurate results compared with the conventional method. The same results for desired sheet temperatures of 170°C and 220°C are presented in Tables 2 and 3, respectively.


Figure 17. (a) Heater temperature set points calculated by the conventional IHP solver to obtain a desired uniform sheet temperature of 80°C; (b) sheet temperature obtained with these set points; (c) error between the desired and obtained sheet temperatures.

Figure 18. (a) Heater temperature set points calculated by the proposed IHP solver to obtain a desired uniform sheet temperature of 80°C; (b) sheet temperature obtained with these set points; (c) error between the desired and obtained sheet temperatures.


Table 1. Comparison between the proposed and conventional methods: solution of the IHP with initial sheet temperature = 100°C, air temperature = 100°C and command sheet temperature = 120°C, and the corresponding sheet temperatures at the real sensor positions after 50 s.

          Heater zone temperature (°C)         Sensor temperature (°C)
          Proposed    Conventional             Proposed    Conventional
Zone 1      372.7        254.2      Sensor 1     120.8        107.3
Zone 2      339.2        209.8      Sensor 2     120.8        107.3
Zone 3      372.7        254.2      Sensor 3     120.9        107.3
Zone 4      371.9        254.2      Sensor 4     120.9        107.3
Zone 5      335.3        209.8
Zone 6      371.9        254.2

Table 2. Comparison between the proposed and conventional methods: solution of the IHP with initial sheet temperature = 150°C, air temperature = 150°C and command sheet temperature = 170°C, and the corresponding sheet temperatures at the real sensor positions after 50 s.

          Heater zone temperature (°C)         Sensor temperature (°C)
          Proposed    Conventional             Proposed    Conventional
Zone 1      443.0        466.0      Sensor 1     221.8        225.3
Zone 2      372.1        384.6      Sensor 2     221.8        225.3
Zone 3      443.1        466.0      Sensor 3     221.8        225.3
Zone 4      445.4        466.0      Sensor 4     221.8        225.3
Zone 5      365.7        384.6
Zone 6      445.4        466.0

Table 3. Comparison between the proposed and conventional methods: solution of the IHP with initial sheet temperature = 200°C, air temperature = 200°C and command sheet temperature = 220°C, and the corresponding sheet temperatures at the real sensor positions after 50 s.

          Heater zone temperature (°C)         Sensor temperature (°C)
          Proposed    Conventional             Proposed    Conventional
Zone 1      398.7        360.1      Sensor 1     171.3        164.3
Zone 2      361.6        297.2      Sensor 2     171.3        164.3
Zone 3      398.9        360.1      Sensor 3     171.0        164.3
Zone 4      402.9        360.1      Sensor 4     171.0        164.3
Zone 5      360.8        297.2
Zone 6      402.9        360.1


CONCLUSION

In thermoforming, the sheet heating phase is crucial to forming a good-quality part. Sheet temperature control is still a challenging problem because of the distributed nature of heat transfer and the typically large and mismatched numbers of heaters and sheet zones equipped with the temperature sensors required for feedback. In particular, the high cost of infrared sensors limits the number that can be used in a machine. In this chapter, we proposed a method based on the two-dimensional discrete Fourier transform of the sheet temperatures to contain the error in the temperature estimates at sheet zones that do not have an infrared sensor reading their temperature. The temperature readings of the sensors are used as spatially sampled data of the sheet temperature to compute the Fourier transform; once it is computed, the temperature at any point can be estimated. The accuracy of the estimation depends, however, on the number of sampled temperatures. Fortunately, the temperature distribution on a sheet is typically a band-limited function in terms of spatial harmonics, which means that it cannot fluctuate rapidly with position. This band-limited property allows the temperature to be estimated accurately with a reduced set of sensors. Hence, for a desired accuracy in sheet temperature control, an analysis can be performed to find the minimum number of equally-spaced infrared sensors that will provide temperature estimates anywhere on the sheet to within that accuracy.

The inverse heating problem is another important issue in controlling the sheet temperature. Temperature control can be greatly enhanced by solving the inverse heating problem, namely, by computing the heater set points that will heat the plastic sheet to the specified temperature at the end of the heating cycle. The heaters are then pre-heated to their computed set points at the beginning of the heating cycle, just before the sheet temperature controller is turned on. An approach was developed to solve the IHP based on iteratively solving the direct heating problem, obtaining a more accurate result at each iteration. The conjugate gradient method is used in this approach because of its superiority, in terms of computation, number of iterations and gradual convergence to an actual solution, over methods such as the Newton-Raphson, quasi-Newton and Gaussian methods. The sensitivity matrix plays a major role in the convergence of the solution for a nonlinear system such as the thermoforming process: since the system is nonlinear, the sensitivity changes considerably with the operating point. Although a finite difference method was proposed in this chapter to obtain the sensitivity matrix, more work could be done to improve it at different operating points; if the sensitivity matrix is not accurate, the method may take longer to converge.
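As a closing illustration of the band-limited estimation idea, the sketch below interpolates a uniform grid of sensor readings onto a finer grid by zero-padding the centered 2-D DFT spectrum. The grid sizes, the padding scheme and the use of odd dimensions (which avoids Nyquist-bin bookkeeping) are illustrative assumptions rather than the implementation used in the chapter.

```python
import numpy as np

def estimate_sheet_temperature(samples, fine_shape):
    """Band-limited interpolation of spatially sampled sheet temperatures:
    compute the 2-D DFT of the sensor grid, zero-pad the centered spectrum,
    and invert to estimate the temperature between the sensors.
    """
    ny, nx = samples.shape
    spec = np.fft.fftshift(np.fft.fft2(samples))
    pad_y = (fine_shape[0] - ny) // 2
    pad_x = (fine_shape[1] - nx) // 2
    spec_fine = np.pad(spec, ((pad_y, pad_y), (pad_x, pad_x)))
    # Rescale so that amplitudes are preserved after the inverse transform
    scale = (fine_shape[0] * fine_shape[1]) / (ny * nx)
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec_fine))) * scale

# Example: interpolate a 3 x 3 grid of readings onto a 15 x 15 grid
# T_fine = estimate_sheet_temperature(T_sensors, (15, 15))
```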

REFERENCES

[1] Nam, G.J.; Lee, J.W. Journal of Reinforced Plastics and Composites. 2001, 20, 1182-1190.
[2] Pham, X.-T.; Bates, P.; Chesney, A. Journal of Reinforced Plastics and Composites. 2005, 24, 287-298.
[3] Rozant, O.; Bourban, P.E.; Manson, J.A.E. Journal of Thermoplastic Composite Materials. 2000, 13, 510-523.
[4] Marangou, M.G. Thermoforming of polystyrene sheets: deformation and tensile properties. Thesis, Department of Chemical Engineering, McGill University, Montreal, Canada, 1986.
[5] Tang, T. Finite element analysis of nonlinear viscoelastic membranes in relation to thermoforming. Thesis, Department of Civil Engineering and Applied Mechanics, McGill University, Montreal, Canada, 1991.
[6] Hao, Y. Infrared sensor placement optimization and monitoring in thermoforming ovens. Master's thesis, Department of Electrical and Computer Engineering, McGill University, Montreal, Canada, 2008.
[7] Girard, P.; Thomson, V.; Boulet, B. Advanced In-cycle and Cycle-to-cycle On-line Adaptive Control for Thermoforming of Large Thermoplastic Sheets. SAE Advances in Plastic Components, Processes and Technologies, 2005.
[8] Boulet, B.; Thomson, V.; Girard, P.; DiRaddo, R.; Haurani, A. Intelligent Processing and Manufacturing of Materials. 2005, Monterey, CA.
[9] Duarte, F.M.; Covas, J.A. Plastics, Rubber and Composites. 2002, 31, 307-317.
[10] Duarte, F.M.; Covas, J.A. Plastics, Rubber and Composites. 2003, 32, 32-39.
[11] Lee, K.H.; Baek, S.W.; Kim, K.W. International Journal of Heat and Mass Transfer. 2008, 51, 2772-2783.
[12] Duarte, F.M.; Covas, J.A. Plastics, Rubber and Composites Processing and Applications. 1997, 26, 213-221.
[13] Burggraf, O.R. Journal of Heat Transfer. 1964, 86, 373-382.
[14] Arledge, R.G.; Haji-Sheikh, A. Numer. Heat Transfer. 1978, 1, 365-376.
[15] Woo, K.C.; Chow, L.C. Numerical Heat Transfer. 1981, 4, 499-504.
[16] Cortes, O.; Urquiza, G.; Hernandez, J.A.; Cruz, M.A. In Proceedings of the Electronics, Robotics and Automotive Mechanics Conference, 2007, pp. 198-201.
[17] Kudo, K.; Kuroda, A.; Eid, A.; Saito, T.; Oguma, M. In Radiative Transfer I: Proc. First Int. Symp. on Radiative Heat Transfer; Menguç, M.P., Ed.; Begell House: New York, 1996, pp. 568-578.
[18] França, F.; Ezekoye, O.; Howell, J. In Proc. of the ASME 1999 International Mechanical Engineering Congress and Exposition, Nashville, vol. 1, 1999, pp. 45-52.
[19] Janicki, M.; Soraghan, J.; Napieralski, A.; Zubert, M. In Proceedings of MIXDES'98 - Fifth International Conference on Mixed Design of Integrated Circuits and Systems, Łódź, Poland, 18-20 June 1998, pp. 183-188.
[20] Krutz, G.W.; Schoendals, R.J.; Hore, P.S. Numer. Heat Transfer. 1978, 1, 489-498.
[21] Gauthier, G. Terminal iterative learning for cycle-to-cycle control of industrial processes. PhD thesis, Department of Electrical and Computer Engineering, McGill University, Montreal, Canada, 2008.
[22] Ajersch, M. Modeling and Real-Time Control of Sheet Reheat Phase in Thermoforming. M.Eng. thesis, Department of Electrical and Computer Engineering, McGill University, 2004.
[23] Kumar, V. Estimation of absorptivity and heat flux at the reheat phase of thermoforming process. Thesis, Department of Mechanical Engineering, McGill University, Montreal, Canada, 2005.
[24] Beaudoin, N.; Beauchemin, S.S. In Proceedings of the 16th International Conference on Pattern Recognition, vol. 3, 2002, pp. 935-939.
[25] Michael, G.; Porat, M. In Proceedings of the International Conference on Image Processing, vol. 1, 2001, pp. 213-216.
[26] Fienup, J.R. Optics Letters. 1978, 3, 27-29.
[27] Sacchi, M.D.; Ulrych, T.J.; Walker, C.J. IEEE Transactions on Signal Processing. 1998, 46, 31-38.
[28] Dai, Y.H.; Yuan, Y. SIAM J. Optim. 1999, 10, 177-182.
[29] Lin, D.T.W.; Yan, W.-M.; Li, H.-Y. International Journal of Heat and Mass Transfer. 2008, 51, 993-1002.
[30] Shuonan, Y. Cycle-to-cycle control of plastic sheet heating on the AAA thermoforming machine. M.Eng. thesis, Department of Electrical and Computer Engineering, McGill University, 2008.
[31] Gauthier, G.; Ajersch, M.A.; Boulet, B.; Haurani, A.; Girard, P.; DiRaddo, R. In Proceedings of ANTEC 2005 - the 63rd Annual Technical Conference & Exhibition, Boston, MA, May 1-5, 2005; Society of Plastics Engineers, pp. 1209-1213.
