Nonlinear Optimization in Electrical Engineering with Applications in MATLAB® (ISBN 1849195439, 9781849195430)

Nonlinear Optimization in Electrical Engineering with Applications in MATLAB provides an introductory course on nonlinear optimization.


English Pages 308 [326] Year 2013




Renewable Energy Series 17

Nonlinear Optimization in Electrical Engineering with Applications in MATLAB®

Mohamed Bakr

Nonlinear Optimization in Electrical Engineering with Applications in MATLAB®

Nonlinear Optimization in Electrical Engineering with Applications in MATLAB®
Mohamed Bakr

The Institution of Engineering and Technology

Published by The Institution of Engineering and Technology, London, United Kingdom

The Institution of Engineering and Technology is registered as a Charity in England & Wales (no. 211014) and Scotland (no. SC038698).

© The Institution of Engineering and Technology 2013
First published 2013

This publication is copyright under the Berne Convention and the Universal Copyright Convention. All rights reserved. Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may be reproduced, stored or transmitted, in any form or by any means, only with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publisher at the undermentioned address:

The Institution of Engineering and Technology
Michael Faraday House
Six Hills Way, Stevenage
Herts, SG1 2AY, United Kingdom
www.theiet.org

While the author and publisher believe that the information and guidance given in this work are correct, all parties must rely upon their own skill and judgement when making use of them. Neither the author nor publisher assumes any liability to anyone for any loss or damage caused by any error or omission in the work, whether such an error or omission is the result of negligence or any other cause. Any and all such liability is disclaimed.

The moral rights of the author to be identified as author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

British Library Cataloguing in Publication Data A catalogue record for this product is available from the British Library

ISBN 978-1-84919-543-0 (hardback)
ISBN 978-1-84919-544-7 (PDF)

Typeset in India by MPS Limited
Printed in the UK by CPI Group (UK) Ltd, Croydon

To my wife Mahetab, my children Jannah, Omar, and Youssef, and to my parents to whom I am indebted for as long as I live

Contents

Preface  xi
Acknowledgments  xv

1 Mathematical background  1
  1.1 Introduction  1
  1.2 Vectors  1
  1.3 Matrices  3
  1.4 The solution of linear systems of equations  6
  1.5 Derivatives  11
    1.5.1 Derivative approximation  11
    1.5.2 The gradient  12
    1.5.3 The Jacobian  14
    1.5.4 Second-order derivatives  15
    1.5.5 Derivatives of vectors and matrices  16
  1.6 Subspaces  18
  1.7 Convergence rates  20
  1.8 Functions and sets  20
  1.9 Solutions of systems of nonlinear equations  22
  1.10 Optimization problem definition  25
  References  25
  Problems  25

2 An introduction to linear programming  29
  2.1 Introduction  29
  2.2 Examples of linear programs  29
    2.2.1 A farming example  29
    2.2.2 A production example  30
    2.2.3 Power generation example  31
    2.2.4 Wireless communication example  32
    2.2.5 A battery charging example  32
  2.3 Standard form of an LP  33
  2.4 Optimality conditions  37
  2.5 The matrix form  39
  2.6 Canonical augmented form  40
  2.7 Moving from one basic feasible solution to another  42
  2.8 Cost reduction  45
  2.9 The classical Simplex method  46
  2.10 Starting the Simplex method  49
    2.10.1 Endless pivoting  51
    2.10.2 The big M approach  51
    2.10.3 The two-phase Simplex  52
  2.11 Advanced topics  55
  A2.1 Minimax optimization  55
    A2.1.1 Minimax problem definition  55
    A2.1.2 Minimax solution using linear programming  57
    A2.1.3 A microwave filter example  59
    A2.1.4 The design of coupled microcavities optical filter  61
  References  65
  Problems  65

3 Classical optimization  69
  3.1 Introduction  69
  3.2 Single-variable Taylor expansion  69
  3.3 Multidimensional Taylor expansion  71
  3.4 Meaning of the gradient  73
  3.5 Optimality conditions  76
  3.6 Unconstrained optimization  76
  3.7 Optimization with equality constraints  78
    3.7.1 Method of direct substitution  79
    3.7.2 Method of constrained variation  80
  3.8 Lagrange multipliers  84
  3.9 Optimization with inequality constraints  86
  3.10 Optimization with mixed constraints  92
  A3.1 Quadratic programming  92
  A3.2 Sequential quadratic programming  95
  References  99
  Problems  99

4 One-dimensional optimization-Line search  101
  4.1 Introduction  101
  4.2 Bracketing approaches  102
    4.2.1 Fixed line search  103
    4.2.2 Accelerated line search  104
  4.3 Derivative-free line search  105
    4.3.1 Dichotomous line search  105
    4.3.2 The interval-halving method  106
    4.3.3 The Fibonacci search  108
    4.3.4 The Golden Section method  111
  4.4 Interpolation approaches  112
    4.4.1 Quadratic models  113
    4.4.2 Cubic interpolation  116
  4.5 Derivative-based approaches  119
    4.5.1 The classical Newton method  119
    4.5.2 A quasi-Newton method  121
    4.5.3 The Secant method  122
  4.6 Inexact line search  123
  A4.1 Tuning of electric circuits  124
    A4.1.1 Tuning of a current source  125
    A4.1.2 Coupling of nanowires  127
    A4.1.3 Matching of microwave antennas  128
  References  129
  Problems  130

5 Derivative-free unconstrained techniques  131
  5.1 Why unconstrained optimization?  131
  5.2 Classification of unconstrained optimization techniques  131
  5.3 The random jump technique  132
  5.4 The random walk method  133
  5.5 Grid search method  134
  5.6 The univariate method  135
  5.7 The pattern search method  137
  5.8 The Simplex method  140
  5.9 Response surface approximation  143
  A5.1 Electrical application: impedance transformers  146
  A5.2 Electrical application: the design of photonic devices  149
  References  151
  Problems  152

6 First-order unconstrained optimization techniques  153
  6.1 Introduction  153
  6.2 The steepest descent method  153
  6.3 The conjugate directions method  156
    6.3.1 Definition of conjugacy  157
    6.3.2 Powell's method of conjugate directions  158
  6.4 Conjugate gradient methods  162
  A6.1 Solution of large systems of linear equations  164
  A6.2 The design of digital FIR filters  169
  References  173
  Problems  173

7 Second-order unconstrained optimization techniques  175
  7.1 Introduction  175
  7.2 Newton's method  175
  7.3 The Levenberg–Marquardt method  178
  7.4 Quasi-Newton methods  179
    7.4.1 Broyden's rank-1 update  180
    7.4.2 The Davidon–Fletcher–Powell (DFP) formula  182
    7.4.3 The Broyden–Fletcher–Goldfarb–Shanno method  184
    7.4.4 The Gauss–Newton method  185
  A7.1 Wireless channel characterization  188
  A7.2 The parameter extraction problem  189
  A7.3 Artificial neural networks training  193
  References  201
  Problems  201

8 Constrained optimization techniques  203
  8.1 Introduction  203
  8.2 Problem definition  203
  8.3 Possible optimization scenarios  204
  8.4 A random search method  206
  8.5 Finding a feasible starting point  208
  8.6 The Complex method  210
  8.7 Sequential linear programming  212
  8.8 Method of feasible directions  215
  8.9 Rosen's projection method  218
  8.10 Barrier and penalty methods  221
  A8.1 Electrical engineering application: analog filter design  224
  A8.2 Spectroscopy  227
  References  230
  Problems  231

9 Introduction to global optimization techniques  233
  9.1 Introduction  233
  9.2 Statistical optimization  233
  9.3 Nature-inspired global techniques  236
    9.3.1 Simulated annealing  237
    9.3.2 Genetic algorithms  240
    9.3.3 Particle swarm optimization  246
  A9.1 Least pth optimization of filters  247
  A9.2 Pattern recognition  252
  References  257
  Problems  258

10 Adjoint sensitivity analysis  261
  10.1 Introduction  261
  10.2 Tellegen's theorem  262
  10.3 Adjoint network method  264
  10.4 Adjoint sensitivity analysis of a linear system of equations  278
  10.5 Time-domain adjoint sensitivity analysis  282
  A10.1 Sensitivity analysis of high-frequency structures  295
  References  298
  Problems  299

Index  303

Preface

In 2008, I was asked to give a sequence of lectures about optimization to graduate electrical engineering students at the University of Waterloo, Canada. These lectures were attended by over 40 students, postdoctoral fellows, and industry CAD professionals. I tried in this course to teach optimization, which is a highly mathematical subject, with more focus on applications. The course was well received by all attendees. The feedback I received from them regarding other optimization courses, in comparison to my course, was surprising. Most of them had attended other courses in optimization that contributed very little to their understanding of the subject. Most of these courses were too abstract, with very little connection to practical examples. There were so many confusing mathematical proofs that distracted them from the actual meanings and implementations. Engineering students, both graduate and undergraduate, would like to get a better focus on the meaning hidden behind the lengthy proofs. They are interested in real-life engineering applications rather than abstract material that adds very little to their global understanding of the subject.

I was encouraged by this experience to write an optimization book dedicated to electrical engineers; a book that combines simplicity in introducing the subject with mathematical rigor; a book that focuses more on examples and applications than on lengthy mathematical proofs. I use MATLAB as a tool to show simplified implementations of the discussed algorithms. This should help remove the abstractness of the theorems and improve understanding of the different subjects.

This book is intended as a first book in optimization for undergraduate students, graduate students, and industry professionals. Third-year engineering students should have the necessary background to understand the different concepts. Special focus is put on giving as many examples and applications as possible.
Only absolutely necessary mathematical proofs are given. These proofs, usually short in nature, serve the purpose of increasing the depth of understanding of the subject. For lengthy proofs, the interested reader is referred to more advanced books and published research papers on the subject.

For the applications given, I try to show a MATLAB implementation as much as possible. In more advanced applications that require non-MATLAB commercial codes, I just report the problem formulation and the results. All the MATLAB codes given in this book are for educational purposes only. The students should be able to fully understand these codes, change their parameters, and apply them to different problems of interest. These codes can also be developed further for research- and industry-related applications. The full MATLAB codes can be downloaded through the link www.optimizationisfun.com.

Even though most of the addressed examples and applications are related to electrical engineering, students, researchers, and technical people from other engineering areas will also find the material useful. Electrical engineering has been integrated with other areas, resulting in innovative programs in, for example,


mechatronics and biomedical engineering. I give as much explanation as possible of the theory behind every application so that the unfamiliar reader will grasp the basic concepts.

This book is organized as follows. In Chapter 1, we review the basic mathematical background needed in this book. Most third-year electrical engineering students will have good familiarity with the content of this chapter. We discuss in this chapter basic rules of differentiation of vectors and matrices. I added more content related to linear algebra that is useful in understanding many of the discussed optimization subjects.

Chapter 2 gives an overview of the linear programming approach. As is well known in the area of engineering, nonlinear problems can be solved by converting them to a sequence of linear problems. Linear programming addresses the issue of optimizing a linear objective function subject to linear constraints. I explain in this chapter the basic Simplex method and the theory behind it. I give several examples and applications relevant to the area of electrical engineering.

Chapter 3 reviews some classical optimization techniques. These techniques found wide application before the advent of the digital computer era. The mathematical foundations of these techniques form the basis for many of the numerical optimization techniques.

In Chapter 4, we address the one-dimensional line search problem. This problem is very important in optimization theory as it is routinely utilized by other linear and nonlinear optimization techniques. I introduce different approaches for solving this problem including derivative-free techniques, first-order line search techniques, and Newton-based techniques.

Derivative-free nonlinear optimization techniques for unconstrained problems are discussed in Chapter 5. These techniques do not require any derivative information for solving general multidimensional problems.
They are useful in problems where sensitivity information may be unavailable or costly to obtain. Only objective function values are used to guide the optimization iterations.

Chapter 6 discusses gradient-based techniques for unconstrained optimization. These are techniques that assume the availability of first-order sensitivity information. We discuss the basic steepest descent method and a number of conjugate optimization techniques. These techniques have robust convergence proofs under certain assumptions on the objective function.

Quasi-Newton techniques for the unconstrained optimization of general nonlinear problems are discussed in Chapter 7. These techniques aim at approximating second-order sensitivity information and using it to guide the optimization iterations. These techniques are known for their good convergence rate. I illustrate these techniques through examples and applications.

Chapter 8 reviews some constrained optimization techniques. Some of these techniques build on different concepts of unconstrained optimization. Other techniques sequentially approximate the nonlinear problem by a corresponding linear problem.

All the techniques discussed in Chapters 1 through 8 are local optimization techniques. They obtain a minimum of the optimization problem that satisfies certain optimality conditions only locally. Chapter 9 introduces a number of techniques for obtaining the global minimum of an optimization problem. This is the minimum that has the lowest value of the objective function over all other minima. A number of nature-inspired techniques are discussed and illustrated through examples and applications.


Chapter 10 addresses adjoint-based techniques for estimating the gradient of an objective function. I show in this chapter how, by using only one extra simulation, the sensitivities of an objective function with respect to all parameters can be estimated regardless of their number. I illustrate this approach for the adjoint network method, the frequency-domain case, and the time-domain case. Several examples and applications are presented.

Finally, I should say that the area of optimization is a huge field with very many techniques and applications. No one book can cover all the material on its own. I have attempted here to help readers scratch the surface of this area. I encourage them to build on the material in this book and continue to seek more advanced learning in this area.

Dr. Mohamed Bakr
Professor, Department of Electrical and Computer Engineering,
McMaster University, Hamilton, Ontario, Canada
July 2013

Acknowledgments

I would like to acknowledge the mentorship of Drs. Hany L. Abdel Malek and Abdel Karim Hassan from Cairo University, Egypt. They were the first to introduce me, as a Master's student, to the fascinating area of optimization. Being electrical engineers themselves, they have always attempted to focus on applications without abandoning mathematical rigor.

I would also like to thank Drs. John Bandler from McMaster University and Radek Biernacki (currently with Agilent Technologies) for their guidance during my graduate studies in Canada. I have learnt a lot from my Ph.D. supervisor Dr. John Bandler. He is a dedicated person with a high commitment to excellence. His work in computer-aided design (CAD) of microwave structures has helped shape the field and motivated all his students. It was also my pleasure to help teach the EE3KB3 course with Dr. Biernacki, who gave me the opportunity to conduct the tutorials for this applied optimization course. This was an amazing experience in learning about applications of optimization theory to the design of microwave circuits and antennas. I include the material of this course as a reference in some of the chapters because it helped inspire several of the presented examples.

I would like to thank all the good researchers who worked with me over the years and helped shape my experience in the areas of optimization and computational electromagnetics. These excellent researchers include Dr. Payam Abolghasem, Dr. Peter Basl, Dr. Ezzeldin Soliman, Dr. Mahmoud ElSabbagh, Dr. Ahmed Radwan, Dr. Mohamed Swillam, Dr. Osman Ahmed, Peipei Zhao, Kai Wang, Laleh Kalantari, Yu Zhang, Mohamed Negm, Harman Malhi, and Mohamed Elsherif.

I would also like to thank my wife and children for their patience during the development of this book. Last but foremost, I would like to thank God for giving me the strength and energy to continue this work to its end.

Chapter 1

Mathematical background

1.1 Introduction

Mathematics is the language we use to express the different concepts and theorems of optimization theory. As with any language, one has to master its basic building blocks, such as the letters, the words, and the symbols, in order to gain some fluency. The main target of this chapter is to briefly review some of the mathematical concepts needed for subsequent chapters. We will review some properties of vectors and matrices. I will introduce some of the vocabulary used in optimization theory. We will also discuss the properties of the solution of a system of linear equations.

1.2 Vectors

A vector expresses a number of variables or parameters combined together. It offers a more compact way for manipulating these parameters. A general n-dimensional real vector x ∈ ℝⁿ …

if(actualReduction>0) %there is improvement in the objective function
  oldPoint=newPoint; %accept new point
  oldErrors=newErrors; %make new error vector the current error vector
  maxErrors=max(oldErrors); %get maximum of new errors
end
%adjust trust region size according to quality of linearization
Ratio=actualReduction/predictedReduction; %ratio between actual reduction and predicted reduction
if(Ratio>0.8) %linearization is good
  trustRegionSize=trustRegionSize*1.1; %make step bigger
else %linearization is poor
  trustRegionSize=trustRegionSize*0.5; %make step smaller
end

…

  if(all(Lambda>0)) %there is at least one active constraint with a positive multiplier
    solutionFound=true; %set flag to true
  end
  if((size(Lambda,1))==0) %no active constraint: check whether this is the unconstrained minimum
    solutionFound=true; %feasible unconstrained minimum
  end
  Counter=Counter+1; %move to the next solution
end
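The trust-region update in the fragment above follows a standard pattern: compare the actual objective reduction with the reduction predicted by the linearization, then grow or shrink the region accordingly. Here is a minimal Python sketch of that logic; the 0.8 threshold and the 1.1 growth factor come from the fragment, while the 0.5 shrink factor is an assumption, since that branch is damaged in the source:

```python
def update_trust_region(radius, actual_reduction, predicted_reduction):
    """Grow or shrink the trust region based on how well the linear
    model predicted the actual objective reduction."""
    ratio = actual_reduction / predicted_reduction
    if ratio > 0.8:   # linearization is good: allow bigger steps
        radius *= 1.1
    else:             # linearization is poor: shrink (factor is an assumption)
        radius *= 0.5
    return radius

print(update_trust_region(1.0, 0.9, 1.0))  # model matched well: radius grows
print(update_trust_region(1.0, 0.1, 1.0))  # model was poor: radius shrinks
```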

In this problem, we have four constraints (m = 4). It follows that up to 16 solutions have to be checked to find the optimal solution. The above MATLAB code M3.1 receives an arbitrary quadratic function and an arbitrary number of linear inequality constraints and searches for the optimal solution that satisfies the necessary optimality conditions. The output for this problem for the given matrices and vectors is:

Parameters =
    1.0000
    1.0000
    1.0000

This solution x = [1.0 1.0 1.0]^T is the unconstrained minimum of the objective function. The first constraint x1 + x2 + x3 ≤ 3 is active at this point, but it has a zero
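The enumeration that code M3.1 performs can also be sketched in Python/NumPy. The data below are hypothetical stand-ins chosen so that the unconstrained minimum is [1, 1, 1], with the single constraint x1 + x2 + x3 ≤ 2; this illustrates the active-set enumeration idea only and is not the book's actual four-constraint data. For every candidate active set, the sketch solves the KKT system and keeps feasible points with nonnegative multipliers:

```python
import itertools
import numpy as np

# Hypothetical QP data (illustrative only): f(x) = 0.5 x'Ax + b'x
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 0.0],
              [1.0, 0.0, 1.0]])
b = np.array([-4.0, -3.0, -2.0])
C = np.array([[1.0, 1.0, 1.0]])   # constraint: x1 + x2 + x3 <= 2
d = np.array([2.0])

n, m = A.shape[0], C.shape[0]
best = None
# Try every subset of the constraints as the candidate active set
for k in range(m + 1):
    for combo in itertools.combinations(range(m), k):
        act = list(combo)
        if k == 0:
            K, rhs = A, -b                    # unconstrained stationary point
        else:
            # KKT system: A x + C_act' lam = -b,  C_act x = d_act
            K = np.block([[A, C[act].T],
                          [C[act], np.zeros((k, k))]])
            rhs = np.concatenate([-b, d[act]])
        try:
            sol = np.linalg.solve(K, rhs)
        except np.linalg.LinAlgError:
            continue
        x, lam = sol[:n], sol[n:]
        # Keep KKT points that are feasible with nonnegative multipliers
        if np.all(C @ x <= d + 1e-9) and np.all(lam >= -1e-9):
            f = 0.5 * x @ A @ x + b @ x
            if best is None or f < best[0]:
                best = (f, x)

print(np.round(best[1], 4))  # constrained minimizer of the sketch problem
```

For these hypothetical data the empty active set yields the infeasible point [1, 1, 1], and the single sum constraint yields the constrained minimizer [1.5, 0.5, 0] with multiplier 0.5, mirroring the outputs quoted in this section.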


Lagrange multiplier as ∇f = 0 at this point. If this constraint is changed to x1 + x2 + x3 ≤ 2, the output of the program becomes:

Parameters =
    1.5000
    0.5000
         0

This solution is different from the unconstrained minimum. Only the first constraint is active at this point, with a positive Lagrange multiplier λ1 = 0.5. This can be checked at this point by writing:

\nabla f(\mathbf{x})\Big|_{\mathbf{x}^*} =
\begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 0 \\ 1 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1.5 \\ 0.5 \\ 0 \end{bmatrix} +
\begin{bmatrix} -4 \\ -3 \\ -2 \end{bmatrix} =
\begin{bmatrix} -0.5 \\ -0.5 \\ -0.5 \end{bmatrix} = -0.5\,\mathbf{a}_1
\qquad (3.82)

where a1 = [1.0 1.0 1.0]^T is the gradient of the first constraint.

A3.2 Sequential quadratic programming

The discussion covered by (3.76)–(3.79) addresses the case of a quadratic objective function with linear constraints. For a problem with a general objective function and general constraints, quadratic programming can be applied in an iterative way to reach the optimal solution. This is referred to as sequential quadratic programming (SQP). It is one of the most widely used techniques for solving nonlinear optimization problems because of its demonstrated convergence properties. We briefly discuss SQP and illustrate it through an electrical engineering application. The problem we aim at solving is given by:

\min_{\mathbf{x}} \; f(\mathbf{x}) \quad \text{subject to} \quad \mathbf{g}(\mathbf{x}) = \mathbf{0} \qquad (3.83)

where g(x) ∈
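In outline, each SQP iteration minimizes a quadratic model of the Lagrangian of (3.83) subject to the linearized constraints, which reduces to solving one KKT linear system per iteration for the step in x and the multiplier update. The sketch below applies this plain Newton-type SQP (no line search or globalization) to a small hypothetical problem, min x1² + x2² subject to x1² + x2 − 1 = 0; the problem and all names are illustrative, not from the book:

```python
import numpy as np

# Hypothetical test problem: min x1^2 + x2^2  s.t.  x1^2 + x2 - 1 = 0
def sqp(x, lam, iters=25):
    for _ in range(iters):
        g = np.array([x[0]**2 + x[1] - 1.0])   # constraint value
        J = np.array([[2.0 * x[0], 1.0]])      # constraint Jacobian
        grad_f = 2.0 * x                       # gradient of the objective
        # Hessian of the Lagrangian: d2f + lam * d2g
        H = np.array([[2.0 + 2.0 * lam, 0.0],
                      [0.0,             2.0]])
        # Each SQP step solves the KKT system of the local quadratic model
        K = np.block([[H, J.T],
                      [J, np.zeros((1, 1))]])
        rhs = np.concatenate([-(grad_f + lam * J[0]), -g])
        step = np.linalg.solve(K, rhs)
        x = x + step[:2]                       # update the iterate
        lam = lam + step[2]                    # update the multiplier
    return x, lam

x, lam = sqp(np.array([1.0, 1.0]), 0.0)
print(x, lam)
```

With the starting point (1, 1), the iterates converge to x ≈ (0.7071, 0.5) with λ ≈ −1, which satisfies the KKT conditions of this sketch problem: ∇f + λ∇g = 0 and g(x) = 0.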