Numerical Methods for Linear Complementarity Problems in Physics-Based Animation
Synthesis Lectures on Computer Graphics and Animation Editor Brian A. Barsky, University of California, Berkeley
This series will present lectures on research and development in computer graphics and geometric modeling for an audience of professional developers, researchers and advanced students. Topics of interest include Animation, Visualization, Special Effects, Game design, Image techniques, Computational Geometry, Modeling, Rendering and others of interest to the graphics system developer or researcher.
Numerical Methods for Linear Complementarity Problems in Physics-Based Animation Sarah Niebe and Kenny Erleben 2015
Mathematical Basics of Motion and Deformation in Computer Graphics Ken Anjyo and Hiroyuki Ochiai 2014
Mathematical Tools for Shape Analysis and Description Silvia Biasotti, Bianca Falcidieno, Daniela Giorgi, and Michela Spagnuolo 2014
Information Theory Tools for Image Processing Miquel Feixas, Anton Bardera, Jaume Rigau, Qing Xu, and Mateu Sbert 2014
Gazing at Games: An Introduction to Eye Tracking Control Veronica Sundstedt 2012
Rethinking Quaternions Ron Goldman 2010
Information Theory Tools for Computer Graphics Mateu Sbert, Miquel Feixas, Jaume Rigau, Miguel Chover, and Ivan Viola 2009
Introductory Tiling Theory for Computer Graphics Craig S. Kaplan 2009
Practical Global Illumination with Irradiance Caching Jaroslav Krivanek and Pascal Gautron 2009
Wang Tiles in Computer Graphics Ares Lagae 2009
Virtual Crowds: Methods, Simulation, and Control Nuria Pelechano, Jan M. Allbeck, and Norman I. Badler 2008
Interactive Shape Design Marie-Paule Cani, Takeo Igarashi, and Geoff Wyvill 2008
Real-Time Massive Model Rendering Sung-eui Yoon, Enrico Gobbetti, David Kasik, and Dinesh Manocha 2008
High Dynamic Range Video Karol Myszkowski, Rafal Mantiuk, and Grzegorz Krawczyk 2008
GPU-Based Techniques for Global Illumination Effects László Szirmay-Kalos, László Szécsi, and Mateu Sbert 2008
High Dynamic Range Image Reconstruction Asla M. Sá, Paulo Cezar Carvalho, and Luiz Velho 2008
High Fidelity Haptic Rendering Miguel A. Otaduy and Ming C. Lin 2006
A Blossoming Development of Splines Stephen Mann 2006
Copyright © 2015 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations in printed reviews, without the prior permission of the publisher.
Numerical Methods for Linear Complementarity Problems in Physics-Based Animation Sarah Niebe and Kenny Erleben www.morganclaypool.com
ISBN: 9781627053716 paperback
ISBN: 9781627053723 ebook
DOI 10.2200/S00621ED1V01Y201412CGR018
A Publication in the Morgan & Claypool Publishers series SYNTHESIS LECTURES ON COMPUTER GRAPHICS AND ANIMATION Lecture #18 Series Editor: Brian A. Barsky, University of California, Berkeley Series ISSN Print 1933-8996 Electronic 1933-9003
Numerical Methods for Linear Complementarity Problems in Physics-Based Animation
Sarah Niebe and Kenny Erleben University of Copenhagen
SYNTHESIS LECTURES ON COMPUTER GRAPHICS AND ANIMATION #18
Morgan & Claypool Publishers
ABSTRACT
Linear complementarity problems (LCPs) have for many years been used in physics-based animation to model contact forces between rigid bodies in contact. More recently, LCPs have found their way into the realm of fluid dynamics. Here, LCPs are used to model boundary conditions with fluid-wall contacts. LCPs have also started to appear in deformable models and granular simulations. There is an increasing need for numerical methods to solve the resulting LCPs with all these new applications. This book provides a numerical foundation for such methods, especially suited for use in computer graphics. This book is mainly intended for a researcher/Ph.D. student/post-doc/professor who wants to study the algorithms and do more work/research in this area. Programmers might have to invest some time brushing up on math skills; for this we refer to Appendices A and B. The reader should be familiar with linear algebra and differential calculus. We provide pseudo code for all the numerical methods, which should be comprehensible by any computer scientist with rudimentary programming skills. The reader can find an online supplementary code repository, containing Matlab implementations of many of the core methods covered in these notes, as well as a few Python implementations [Erleben, 2011].
KEYWORDS linear complementarity problems, Newton methods, splitting methods, interior point methods, convergence rates, performance study
Contents

1  Introduction . . . 1
   1.1  Understanding The Problem . . . 3
        1.1.1  First-Order Optimality is a Linear Complementarity Problem . . . 4
        1.1.2  Nonsmooth Root Search Reformulations . . . 7
        1.1.3  The Boxed Linear Complementarity Problem . . . 11
        1.1.4  Other Reformulations . . . 18
   1.2  The Problem in n-Dimensions . . . 20
        1.2.1  1D BLCP to 4D LCP . . . 22
        1.2.2  The Boxed Linear Complementarity Problem in Higher Dimensions . . . 23
        1.2.3  BLCP and the QP formulation . . . 24
        1.2.4  Converting BLCP to LCP . . . 25
        1.2.5  Nonsmooth reformulations of BLCP . . . 27
   1.3  Examples from Physics-Based Animation . . . 27
        1.3.1  Fluid-Solid Wall Boundary Conditions . . . 27
        1.3.2  Free-Flowing Granular Matter . . . 31
        1.3.3  Density Correction . . . 33
        1.3.4  Joint Limits in Inverse Kinematics . . . 34
        1.3.5  Contact Force Examples . . . 36

2  Numerical Methods . . . 47
   2.1  Pivoting Methods . . . 47
        2.1.1  Direct Methods for Small-Sized Problems . . . 49
        2.1.2  Incremental Pivoting "Baraff Style" . . . 51
   2.2  Projection or Sweeping Methods . . . 53
        2.2.1  Splitting Methods . . . 54
        2.2.2  Using a Quadratic Programming Problem . . . 62
        2.2.3  The Blocked Gauss-Seidel Method . . . 65
        2.2.4  Staggering . . . 68
        2.2.5  The Projected Gauss-Seidel Subspace Minimization Method . . . 70
        2.2.6  The Nonsmooth Nonlinear Conjugate Gradient Method . . . 73
   2.3  The Interior Point Method . . . 76
   2.4  Newton Methods . . . 80
        2.4.1  The Minimum Map Newton Method . . . 81
        2.4.2  The Fischer-Newton Method . . . 89
        2.4.3  Penalized Fischer-Newton Method . . . 94
        2.4.4  Tips, Tricks and Implementation Hacks . . . 102

3  Guide for Software and Selecting Methods . . . 107
   3.1  Overview of Numerical Properties of Methods Covered . . . 107
   3.2  Existing Practice on Mapping Models to Methods . . . 108
        3.2.1  Existing Software Solutions . . . 109
   3.3  Future of LCPs in Computer Graphics . . . 111

A  Basic Calculus . . . 115
   A.1  Order Notation . . . 115
        A.1.1  What Is a Limit? . . . 115
        A.1.2  The Small-o Notation . . . 117
        A.1.3  The Big-O notation . . . 122
   A.2  Lipschitz Functions . . . 124
   A.3  Derivatives . . . 127

B  First-Order Optimality Conditions . . . 129

C  Convergence, Performance and Robustness Experiments . . . 135

D  Num4LCP . . . 141
   D.1  Using Num4LCP . . . 143

Bibliography . . . 145
Authors' Biographies . . . 151
CHAPTER 1

Introduction

In computer graphics, linear complementarity problems (LCPs) are notorious for being hard to solve and difficult to understand. LCPs found their initial application in rigid body dynamics [Baraff, 1989, 1993, 1994, 1995]. However, LCPs provide important general-purpose models, extending beyond rigid body dynamics. Deformable models, granular materials and fluids may all be formulated using LCPs [Alduán and Otaduy, 2011, Batty et al., 2007, Chentanez and Müller, 2011, Duriez et al., 2006, Gascón et al., 2010, Otaduy et al., 2007, 2009].

Literature is sparse on the topic of numerical methods for solving LCPs in computer graphics. Baraff [Baraff, 1994] introduced a Dantzig pivoting method to the graphics community. Lemke's pivoting method has also received some attention [Eberly, 2010, Hecker, 2004, Kipfer, 2007]. There are examples of projected Gauss-Seidel (PGS) type methods [Courtecuisse and Allard, 2009, Erleben, 2007, Gascón et al., 2010]. There is a tendency in the literature to ignore the fact that PGS methods may not converge to solutions of the problems they intend to solve. Recently, a multilevel PGS method was presented [Chentanez and Müller, 2011] but did not provide any convergence guarantees. In [Otaduy et al., 2007], a multilevel method for solving the elastic deformation of a deformable model was combined with a PGS-style solver. However, the paper did not address multigrid PGS. Once researchers gain access to fast and efficient numerical solutions, through simple (to implement) methods, LCP models may become even more attractive to the computer graphics community. We give a few examples of already used models, and some inspiration for new applications, in Section 1.3.

The prerequisites for understanding the content of this book include good mathematical skills and perhaps even a background in research. We assume the reader knows calculus and, to some extent, numerical optimization.
Readers without this prior knowledge may find some help in studying Appendices A and B thoroughly. The aim of the book is not to give a complete guide to which method solves what problem, but to strive toward giving a theoretical overview of the field. Our hope is to give the reader a strong foundation on which to base their own work and an adventure into using LCPs as models, or solving LCPs. While we do believe the book will have value to programmers, we also recognize that absorbing some of the theoretical content may require some effort. To alleviate this, we have tried to scatter implementation-friendly descriptions of numerical methods throughout the book.
For those who like to gain intuition from playing with code and implementations before tackling theory, we have created an online repository of many of the numerical methods [Erleben, 2011]. More details on the online code repository can be found in Appendix D. We hope that the combination of method descriptions, supplied code and deep theoretical treatment is appreciated by our readers as much as by ourselves. In Chapter 3 we will present more detailed recommendations of which method to use for what. Here we limit ourselves to a briefer treatment.

• Pivoting methods for symmetric positive-definite linear complementarity problems are found in Section 2.1.2. Such methods are great when accuracy outweighs computational cost.

• General-purpose, small-size direct methods for linear complementarity problems are presented in Section 2.1.1. We find such methods to be great building blocks in more complex methods; one such example is the blocked methods from Section 2.2.3.

• General splitting methods for both linear complementarity problems and boxed linear complementarity problems are a class of very robust, although not very accurate, methods that are often applied in interactive simulation contexts where computational resources are sparse. We cover these methods in Section 2.2.

• Quadratic programming reformulations for symmetric linear complementarity problems are an interesting approach that offers accuracy and reuse of quadratic programming solvers, which are commonplace today. This idea is the core of Section 2.2.2. The symmetric nature may seem limiting but, when combined with the idea of staggering in Section 2.2.4, the result is quite interesting.

• Interior point methods and Newton methods for solving linear complementarity problems are the main topics of Section 2.3 and Section 2.4. These classes of methods offer very accurate solutions without using too many computational resources. They are a class of truly general-purpose methods, and some of the best available software solutions (such as PATH from CPNET [Path, 2005]) are based on them. We mostly consider line-search methods.

We feel this collection of methods forms a baseline of need-to-know information. It presents a big theoretical step for the graphics community. We hope that this book will provide the reader with a sturdy foundation for further studies. We will not cover trust region methods, continuation methods or multilevel solvers. We only present the proverbial tip of the iceberg of nonsmooth LCP reformulations. The fact is, there are simply too many to present them all within the scope of this book. For more on reformulations, we refer the reader to [Anitescu and Tasora, 2008, Billups, 1995, Niebe, 2014]. Most of the methods we present do generalize to other reformulations. It is our intent to equip the reader with the skills to make such generalizations.
Figure 1.1: Solution space of a one-dimensional complementarity problem, given by the positive x and y axes.
As we focus on the numerical methods, we will not provide too many concrete application examples. We leave it up to the graphics community to provide more examples, or even come up with new applications. With these final words on what this book is not about, we move our focus to what this book is about. To do so, we start by understanding what a complementarity problem is. We will attack this from both a mathematical and an intuitive direction. As we generalize concepts, the mathematical treatment will get more abstract. Hang in there, the rewards will come!
1.1 UNDERSTANDING THE PROBLEM

In its simplest form, the one-dimensional complementarity problem (CP) is to determine the values of two variables, $x, y \in [0, \infty)$, such that the following constraints are satisfied:

$$y > 0 \Rightarrow x = 0 \quad \lor \quad x > 0 \Rightarrow y = 0 \tag{1.1}$$

These conditions imply that the solution to the complementarity problem is a state where at most one of the two variables is positive. A more compact notation is

$$0 \leq y \perp x \geq 0 \tag{1.2}$$

The solution space forms a corner shape given by the positive x and y axes, illustrated in Figure 1.1. If we let $y = ax + b$, for $a, b \in \mathbb{R}$, we can state the one-dimensional linear complementarity problem (LCP):

$$y = ax + b \tag{1.3a}$$
$$y \geq 0 \tag{1.3b}$$
$$x \geq 0 \tag{1.3c}$$
$$xy = 0 \tag{1.3d}$$
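The two complementary cases in (1.3) suggest a direct case-enumeration solver for the 1D LCP. The following is an illustrative sketch of our own (not from the book's code repository); the function name and tolerance handling are assumptions:

```python
def solve_lcp_1d(a, b, tol=1e-12):
    """Enumerate solutions of the 1D LCP: y = a*x + b, y >= 0, x >= 0, x*y = 0.

    Returns the (x, y) pairs found by checking the two complementary cases.
    For a = b = 0 only the representative x = 0 is returned, although the
    text notes that this case has infinitely many solutions.
    """
    solutions = []
    # Case 1: x = 0, feasible only if y = b is non-negative.
    if b >= -tol:
        solutions.append((0.0, b))
    # Case 2: y = 0, so x = -b/a, feasible only if x is positive (a != 0).
    if abs(a) > tol:
        x = -b / a
        if x > tol:  # x = 0 is already covered by case 1 when b = 0
            solutions.append((x, 0.0))
    return solutions
```

Note that the function returns an empty list when both a and b are negative, matching the "no solution" case discussed below.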
The solution space of the LCP is the intersection between the previous corner shape and the line $ax + b$. This geometric tool provides us with an approach to study the nature of LCPs. Consider the following situations:

• both a and b are negative,
• both a and b are 0.

How will these specific values affect the solution space? As seen from Table 1.1 and Table 1.2, the two cases correspond to having no solution or infinitely many solutions, respectively. Clearly, the parameters a, b determine whether there exists a solution to the LCP and whether or not this solution is unique. Substituting $ax + b$ for $y$, we can rewrite the LCP:

$$ax + b \geq 0 \tag{1.4a}$$
$$x \geq 0 \tag{1.4b}$$
$$x(ax + b) = 0 \tag{1.4c}$$
1.1.1 FIRST-ORDER OPTIMALITY IS A LINEAR COMPLEMENTARITY PROBLEM

This new form suggests a different approach to finding the solutions. We observe that the complementarity condition (1.4c) has the familiar form of a quadratic function. Because of the inequalities (1.4a)-(1.4b) (called "unilateral constraints"), any feasible x value will result in a non-negative value of the quadratic function. Feasible means that the constraints are fulfilled, that is, $x \geq 0$ and $ax + b \geq 0$. For a strictly feasible x we have $x > 0$ and $ax + b > 0$, and the quadratic function will always be strictly positive. Hence, we observe that the quadratic function will be 0 only for a solution of the LCP. In other words, we may think of the LCP as solving the optimization problem

$$x^* = \arg\min_x \; x(ax + b) \tag{1.5}$$

subject to the following conditions

$$x \geq 0 \tag{1.6}$$
$$ax + b \geq 0 \tag{1.7}$$

This means that $x^*$ is a feasible solution, for which

$$x^*(ax^* + b) \leq x(ax + b) \tag{1.8}$$

for all $x \geq 0$ and $ax + b \geq 0$. A feasible solution is a solution that complies with the constraints, i.e., $x \geq 0$ and $ax + b \geq 0$.
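The optimization view (1.5)-(1.7) can be checked numerically by sampling the feasible set. A small sketch with the hypothetical values a = 1, b = -2 (our own example, not the book's):

```python
def lcp_objective(x, a, b):
    """Quadratic objective x * (a*x + b) from the optimization view (1.5)."""
    return x * (a * x + b)

def feasible(x, a, b, tol=1e-12):
    """Unilateral constraints (1.6)-(1.7): x >= 0 and a*x + b >= 0."""
    return x >= -tol and a * x + b >= -tol

# For a = 1, b = -2 the unique LCP solution is x = 2 (with y = 0); the
# objective is non-negative on the feasible set and attains its minimum
# value 0 exactly at the solution.
a, b = 1.0, -2.0
grid = [i / 100 for i in range(1001)]          # sample x in [0, 10]
feas = [x for x in grid if feasible(x, a, b)]  # here this is x >= 2
x_best = min(feas, key=lambda x: lcp_objective(x, a, b))
```

The grid minimizer lands at x = 2 with objective value 0, in agreement with the complementarity conditions.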
Table 1.1: Impact of the sign of variables a and b. The solution space is given by intersection points between the two positive axes and the line $ax + b$ (shown in blue). [Table cases cover $a > 0$, $a = 0$, and $a < 0$.]

Table 1.2: Number of solutions of the 1D LCP for each sign combination of a and b. [Rows cover $a > 0$, $a = 0$, and $a < 0$; columns cover the sign of b.]
For $a > 0$ one has a strictly convex optimization problem subject to linear constraints. Thus, constraint qualifications are fulfilled and one is guaranteed that a solution exists [Nocedal and Wright, 1999]. Constraint qualifications are sufficient conditions for an optimization problem such that the tangent cone and the set of linearized feasible directions are the same set. These are necessary regularity conditions that ensure the first-order conditions are well posed. If $a < 0$, then the objective is unbounded from below and we may get into trouble. Observe that the 1D LCP is a combinatorial problem. As soon as one has chosen whether y or x is positive, the problem is reduced to a linear relation from which the solution is trivially computed. The above sign analysis can be observed geometrically in Table 1.3. Notice how the minimizer of the quadratic function relates to the different sign combinations we studied in Table 1.1 and Table 1.2.
1.1.2 NONSMOOTH ROOT SEARCH REFORMULATIONS

Before moving into higher-dimensional spaces, we introduce some thoughts on reformulating the LCP into other types of problems. The first reformulation we present is known as the minimum map reformulation [Cottle et al., 1992, Murty, 1988, Pang, 1990]:

$$h(x, y) \equiv \min(x, y) \tag{1.17}$$

An alternative convenient notation is $h_{\min}(x, y) \equiv h(x, y)$, which we use later when we generalize methods for all reformulations (see Section 2.4.4). For now, we keep the $h$-notation. The minimum map function, $\min(x, y)$, is defined as

$$\min(x, y) = \begin{cases} x & \text{if } x < y \\ y & \text{otherwise} \end{cases} \tag{1.18}$$

Figure 1.2 shows a surface plot of the minimum map function. Notice the sharp ridge corresponding to non-differentiable points on the surface. The claim is that any pair $(x^*, y^*)$ that is a solution to the LCP will also satisfy $h(x^*, y^*) = 0$; more formally,

$$h(x^*, y^*) = 0 \quad \text{iff} \quad 0 \leq y^* \perp x^* \geq 0 \tag{1.19}$$

This claim can be proven by a case-by-case analysis of $h(x, y)$ over the signs of $x$ and $y$.
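The equivalence claimed in (1.19) is easy to spot-check in code. This sketch is our own, with hypothetical sample points; it compares the minimum map residual against a direct test of the complementarity conditions:

```python
def h(x, y):
    """Minimum map h(x, y) = min(x, y), Equations (1.17)-(1.18)."""
    return x if x < y else y

def is_lcp_solution(x, y, tol=1e-12):
    """Direct check of 0 <= y  perp  x >= 0."""
    return x >= -tol and y >= -tol and abs(x * y) <= tol

# Spot-check h(x, y) = 0  <=>  0 <= y perp x >= 0 on a few sample points.
samples = [(0.0, 0.0), (0.0, 3.0), (2.0, 0.0), (1.0, 1.0), (-1.0, 2.0)]
checks = [(abs(h(x, y)) < 1e-12) == is_lcp_solution(x, y) for x, y in samples]
```

All sample points agree: the minimum map vanishes exactly at the complementarity solutions.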
1.1.3 THE BOXED LINEAR COMPLEMENTARITY PROBLEM

In the boxed LCP (BLCP), the variable $x$ is constrained to the interval $[l, u]$, and the complementarity conditions read:

$$ax + b > 0 \Rightarrow x = l \tag{1.23a}$$
$$ax + b < 0 \Rightarrow x = u \tag{1.23b}$$
$$ax + b = 0 \Rightarrow l \leq x \leq u \tag{1.23c}$$

If we let $y = ax + b$, we can write a slightly more compact version of Equation (1.23):

$$y > 0 \Rightarrow x = l \tag{1.24a}$$
$$y < 0 \Rightarrow x = u \tag{1.24b}$$
$$y = 0 \Rightarrow l \leq x \leq u \tag{1.24c}$$

We split $y$ into its positive and negative parts,

$$y^{+} = \begin{cases} y & \text{if } y > 0 \\ 0 & \text{otherwise} \end{cases} \tag{1.27a}$$

$$y^{-} = \begin{cases} -y & \text{if } y < 0 \\ 0 & \text{otherwise} \end{cases} \tag{1.27b}$$

In one dimension, this mathematical trick may seem pointless; however, the intuition gained here will be much needed when we move to n-dimensional BLCPs in Section 1.2.2. Using Equation (1.27), we have that $y = y^{+} - y^{-}$, which we can substitute into Equation (1.24).
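The split (1.27) and the boxed conditions (1.24) translate directly into code. A minimal sketch with our own function names:

```python
def split(y):
    """Positive/negative split (1.27): y_plus = max(y, 0), y_minus = max(-y, 0),
    so that y = y_plus - y_minus and at most one part is positive."""
    y_plus = y if y > 0 else 0.0
    y_minus = -y if y < 0 else 0.0
    return y_plus, y_minus

def satisfies_boxed_conditions(x, y, l, u, tol=1e-12):
    """Case check of (1.24): y > 0 => x = l,  y < 0 => x = u,
    y = 0 => l <= x <= u."""
    if y > tol:
        return abs(x - l) <= tol
    if y < -tol:
        return abs(x - u) <= tol
    return l - tol <= x <= u + tol
```

For example, x clamped at the lower bound with positive y satisfies the boxed conditions, while an interior x with positive y does not.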
Table 1.5: Impact of the sign of variables a and b. The solution space is given by intersection points between the function and the two positive axes. The blue line is the function $ax + b$. The red line is the reformulation $\phi_{FB}(x, ax + b)$. [Table cases cover $a > 0$, $a = 0$, and $a < 0$.]

First, if $y > 0$ then, by the definition (1.27), this implies that $y^{+} > 0$ and $y^{-} = 0$:

$$y^{+} > 0 \Rightarrow x = l \tag{1.29}$$
If $y^{+} = 0$, we know that either $y^{-} > 0$ or $y = 0$. In either case, $x$ has a lower bound of $l$. We can combine the two cases by stating that:

$$y^{+} > 0 \Rightarrow (x - l) = 0 \tag{1.30a}$$
$$y^{+} = 0 \Rightarrow (x - l) \geq 0 \tag{1.30b}$$

This is exactly what we saw in Equation (1.1), so the first of the three LCPs is thus:

$$0 \leq y^{+} \perp (x - l) \geq 0 \tag{1.31}$$
Analogously, if $y < 0$ then $y^{+} = 0$ and $y^{-} = -y > 0$:

$$y^{-} > 0 \Rightarrow u - x = 0 \tag{1.33}$$

With a reversal of arguments, if $y^{-} = 0$, we know that either $y^{+} > 0$ or $y^{+} = 0$. In either case, $x$ has an upper bound of $u$. This gives us the second LCP:

$$0 \leq y^{-} \perp (u - x) \geq 0 \tag{1.34}$$
The third and final LCP stems from the very definitions of $y^{+}$ and $y^{-}$:

$$(y^{+} - y^{-}) > 0 \Rightarrow y^{+} > 0 \tag{1.35a}$$
$$(y^{+} - y^{-}) > 0 \Rightarrow y^{-} = 0 \tag{1.35b}$$
$$(y^{+} - y^{-}) = 0 \Rightarrow y^{+} = 0 \tag{1.35c}$$
$$(y^{+} - y^{-}) = 0 \Rightarrow y^{-} = 0 \tag{1.35d}$$
$$(y^{+} - y^{-}) < 0 \Rightarrow y^{+} = 0 \tag{1.35e}$$
$$(y^{+} - y^{-}) < 0 \Rightarrow y^{-} > 0 \tag{1.35f}$$

Both $y^{+}$ and $y^{-}$ are non-negative, and at most one of $y^{+}$ and $y^{-}$ can be positive at any given time. This is the very essence of the LCP, giving that the third LCP is:

$$0 \leq y^{+} \perp y^{-} \geq 0 \tag{1.36}$$

We may write the above three LCPs more compactly using the minimum map reformulation from (1.20) of each LCP:

$$\min(y^{+}, y^{-}) = 0 \tag{1.37a}$$
$$\min(y^{+}, x - l) = 0 \tag{1.37b}$$
$$\min(y^{-}, u - x) = 0 \tag{1.37c}$$
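The three coupled minimum maps (1.37) can be evaluated as residuals; all-zero residuals indicate a BLCP solution. A sketch of our own, taking y+ = max(y, 0) and y- = max(-y, 0) per (1.27):

```python
def blcp_minmap_residuals(x, y, l, u):
    """Residuals of the three coupled minimum maps (1.37), using the
    positive/negative split y_plus = max(y, 0), y_minus = max(-y, 0)."""
    y_plus, y_minus = max(y, 0.0), max(-y, 0.0)
    return (min(y_plus, y_minus),   # (1.37a): at most one part positive
            min(y_plus, x - l),     # (1.37b): lower-bound LCP
            min(y_minus, u - x))    # (1.37c): upper-bound LCP
```

At a solution clamped to the lower bound (x = l, y > 0) all three residuals vanish; an interior x with nonzero y leaves a nonzero residual.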
If you are familiar with optimization, you may recognize the form of the BLCP as the Karush-Kuhn-Tucker (KKT) conditions¹ of the convex QP:

$$x^* = \arg\min_x \; \frac{1}{2} a x^2 + b x \tag{1.38}$$

subject to

$$x - l \geq 0 \tag{1.39a}$$
$$u - x \geq 0 \tag{1.39b}$$

In the BLCP, $y^{+}$ and $y^{-}$ act as the Lagrange multipliers of the KKT conditions. Assuming that $a > 0$, the convex QP always has at least one solution, although not necessarily a unique solution. Let us introduce the slack variables $\beta$ and $\gamma$ such that

$$0 \leq \beta - y \perp \beta \geq 0 \tag{1.40a}$$
$$0 \leq \gamma + y \perp \gamma \geq 0 \tag{1.40b}$$

We can then use $\beta$ and $\gamma$ as measures for $y^{+}$ and $y^{-}$. Using these slack variables, we can rewrite the minimum map reformulations of the BLCP as follows:

$$y = ax + b \tag{1.41a}$$
$$\min(\beta - y, \beta) = 0 \tag{1.41b}$$
$$\min(\gamma + y, \gamma) = 0 \tag{1.41c}$$
$$\min(\beta, x - l) = 0 \tag{1.41d}$$
$$\min(\gamma, u - x) = 0 \tag{1.41e}$$
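That β and γ measure y+ and y- can be verified directly: β = max(y, 0) and γ = max(-y, 0) satisfy the slack LCPs (1.40). A small sketch with our own helper names:

```python
def slack_multipliers(y):
    """beta and gamma solving the slack LCPs (1.40):
    0 <= beta - y  perp beta >= 0   and   0 <= gamma + y perp gamma >= 0.
    These coincide with the positive/negative parts y+ and y- of (1.27)."""
    beta = max(y, 0.0)
    gamma = max(-y, 0.0)
    return beta, gamma

def check_slack_lcp(y, tol=1e-12):
    """Verify both complementarity conditions of (1.40) for a given y."""
    beta, gamma = slack_multipliers(y)
    ok_beta = beta >= 0 and beta - y >= -tol and abs(beta * (beta - y)) <= tol
    ok_gamma = gamma >= 0 and gamma + y >= -tol and abs(gamma * (gamma + y)) <= tol
    return ok_beta and ok_gamma
```

The check passes for positive, negative and zero y, so the slack variables reproduce the split of y used above.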
Later, we will return to Equation (1.41) and use it as a starting point to show how to rephrase the BLCP as an LCP. For now, let us return to Equation (1.37). By a line of cascading arguments, we shall see how these three coupled equations can be rewritten as a single equation. First, if $y^{-} > 0$ then $y^{+} = 0$ (from Equation (1.37a)), which in turn means that $x - l \geq 0$ (from Equation (1.37b)). Given this, we can (legally) subtract $y^{-}$ from Equation (1.37b) to get

$$\min(y^{+} - y^{-}, x - l) = -y^{-} \tag{1.42}$$

To see this is indeed legal, let us assume for a moment that $y^{-} = 0$; then Equation (1.42) reduces to $\min(y^{+}, x - l) = 0$, which is trivially satisfied. Next we substitute (1.42) for $y^{-}$ in (1.37c) and obtain

$$\min(u - x, \max(l - x, -y)) = 0 \tag{1.43}$$

¹Otherwise, see [Nocedal and Wright, 1999] or Appendix B.
[Figure 1.7 panels: "Minimum map reformulation of boxed LCP", "Minimum map reformulation of boxed LCP, intersection by plane $y = ax + b$", and "$\min(u - x, \max(-(ax + b), l - x))$".]

Figure 1.7: This figure illustrates the surface of the minimum map reformulation of the boxed complementarity problem. Adding the linear relation $y = ax + b$ corresponds to the intersection of a plane with the surface. Notice how the resulting intersection curve corresponding to the boxed linear complementarity problem has two non-differentiable points.
This is a more compact reformulation than (1.37), and it eliminates the need for the auxiliary variables y⁺ and y⁻. We refer to (1.43) as the minimum map reformulation of the BLCP. Figure 1.7 illustrates the minimum map reformulation of the BLCP. The reader is encouraged to carry out the case-by-case analysis based on the signs of a and b to observe that the BLCP, like the LCP, can have no solution, one unique solution, or multiple solutions.

We can apply much the same trick using the Fischer function (1.21). First, we rewrite (1.27) as

    φ(y⁺, x − l) = 0    (1.44a)
    φ(y⁻, u − x) = 0    (1.44b)

Observe that, for any arbitrary positive scalar k, (1.44a) is equivalent to

    φ(k y⁺, x − l) = 0    (1.45)
Further, there always exists a positive scalar k such that

    k y⁺ = φ(y⁻ − y⁺, u − x)    (1.46)

This is proven by a case-by-case analysis. If y⁺ = 0, then y⁻ − y⁺ = −y = y⁻ and (1.46) reduces to (1.44b), which trivially holds for any value of k. If y⁺ > 0, then we must have y⁻ = 0 and (1.46) reduces to

    k y⁺ = √((y⁺)² + (u − x)²) − (u − x) + y⁺ = c + y⁺    (1.47)

where c ≡ √((y⁺)² + (u − x)²) − (u − x). Using the triangle inequality, we always have c > 0, so k = c/y⁺ + 1 > 1. Thus, for any given y⁺ > 0 we can always find a k > 1 such that (1.46) holds. We can now substitute (1.46) into (1.45), noting that y⁻ − y⁺ = −y, and obtain the Fischer reformulation of our problem,

    φ(x − l, φ(−y, u − x)) = 0    (1.48)

We refer to (1.48) as the Fischer-Burmeister reformulation. Observe that our derivation does not rely on explicitly enforcing u ≥ l at all times; instead, this holds implicitly for any solution of the problem. Figure 1.8 illustrates how the Fischer-Burmeister reformulation of the BLCP can be used as a geometric tool for a case-by-case analysis to gain intuition about the existence and uniqueness of solutions. We leave this as an exercise for the reader.
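The two 1D reformulations are easy to evaluate numerically. The following sketch (plain Python, with illustrative values of a, b, l, and u chosen for this example) implements the minimum map residual (1.43) and the Fischer-Burmeister residual (1.48) and checks that both vanish at a BLCP solution:

```python
import math

def fischer(a, b):
    """Fischer-Burmeister function: phi(a, b) = sqrt(a^2 + b^2) - a - b."""
    return math.sqrt(a * a + b * b) - a - b

def min_map_blcp(x, a, b, l, u):
    """Minimum map reformulation (1.43): min(u - x, max(l - x, -y))."""
    y = a * x + b
    return min(u - x, max(l - x, -y))

def fischer_blcp(x, a, b, l, u):
    """Fischer-Burmeister reformulation (1.48): phi(x - l, phi(-y, u - x))."""
    y = a * x + b
    return fischer(x - l, fischer(-y, u - x))

# Example: y = x - 0.5 on the box [0, 1]; the BLCP solution is x = 0.5 (y = 0).
a, b, l, u = 1.0, -0.5, 0.0, 1.0
print(min_map_blcp(0.5, a, b, l, u))   # 0.0
print(fischer_blcp(0.5, a, b, l, u))   # 0.0
```

Evaluating either residual at a non-solution, e.g., x = 0.2, gives a non-zero value, which is what a numerical method would drive to zero.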
1.1.4 OTHER REFORMULATIONS

As mentioned previously, there exist many other reformulations based on complementarity functions. In spirit, they can all be used similarly to the minimum map and Fischer-Burmeister examples we have given here. However, there exist other reformulations that are different in nature; here we briefly mention two popular ones. The first is based on the variational inequality (VI). In 1D, the VI can be stated as

    f(x)ᵀ(z − x) ≥ 0    ∀z ∈ K    (1.49)

where K ⊆ ℝ and f(x): K ↦ ℝ. For the specific case of the 1D LCP, one would define f(x) = y = ax + b and take K = ℝ⁺; then we find

    y(x)ᵀ(z − x) ≥ 0    ∀z ∈ ℝ⁺    (1.50)

If a solution x exists to the above VI, then it is also a solution to the LCP

    0 ≤ y ⊥ x ≥ 0    (1.51)

A case-by-case analysis quickly shows the equivalence of the VI and LCP formulations. Another interesting reformulation is the proximal map formulation, which for the 1D case we can state as the fixed-point problem

    x = prox_K(x − ay)    (1.52)
[Figure 1.8 appears here: a surface plot of the Fischer-Burmeister reformulation of the boxed LCP, φ(x − l, φ(−y, u − x)), its intersection by the plane y = ax + b, and the resulting intersection curve φ(x − l, φ(−(ax + b), u − x)).]
Figure 1.8: This figure illustrates the surface of the Fischer-Burmeister reformulation of the boxed complementarity problem. Adding the linear relation y = ax + b corresponds to intersecting a plane with the surface. Notice how the resulting intersection curve, corresponding to the boxed linear complementarity problem, is smooth compared to the minimum map reformulation.
where K is some convex set and the proximal map prox_K is defined as

    prox_K(z) ≡ arg min_{x ∈ K} (x − z)²    (1.53)

That is, if z is outside K, we obtain the closest point in K to z; if z belongs to K, we obtain z itself as the solution. Choosing K = ℝ⁺, the proximal map reformulation corresponds to the 1D LCP problem. The reader may explore this relationship by a case-by-case analysis. The proximal map formulation was explored in [Niebe, 2014] for contact modeling. Variational inequalities are applied in [Stewart, 2011].
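For K = ℝ⁺, the proximal map is simply a clamp to the non-negative numbers, and the fixed-point form suggests a simple iteration. The sketch below uses a generic positive step length r in place of the scaling in (1.52); the convergence bound in the comment is a standard contraction argument, not a claim from the text:

```python
def prox_nonneg(z):
    """Proximal/projection map onto K = R+ (cf. (1.53)): closest point in K to z."""
    return max(0.0, z)

def solve_lcp_1d(a, b, r=0.1, iters=200):
    """Fixed-point iteration x <- prox_K(x - r*y) with y = a*x + b (cf. (1.52)).

    r is a positive step length; for a > 0, any 0 < r < 2/a makes the
    un-projected update a contraction, and the projection is non-expansive.
    """
    x = 0.0
    for _ in range(iters):
        x = prox_nonneg(x - r * (a * x + b))
    return x

# y = 2x - 1: the LCP 0 <= y with y complementary to x >= 0 has solution x = 0.5.
print(round(solve_lcp_1d(2.0, -1.0), 6))  # 0.5
```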
1.2 THE PROBLEM IN n-DIMENSIONS
Having gained familiarity with the one-dimensional LCP, we now extend the ideas to higher dimensions. Let b, x denote two n-dimensional vectors, and let A denote an n × n matrix. Then the vector y is defined as y = Ax + b. Using the index i ∈ [1, …, n], we state the n-dimensional LCP, where for all i:

    x_i ≥ 0              (1.54a)
    (Ax + b)_i ≥ 0       (1.54b)
    x_i (Ax + b)_i = 0   (1.54c)

We can write this compactly in matrix-vector notation as

    x ≥ 0                (1.55a)
    Ax + b ≥ 0           (1.55b)
    xᵀ(Ax + b) = 0       (1.55c)

using the convention that inequality operations between vectors are element-wise and must hold for all i. Now we can reintroduce the reformulations we used for the one-dimensional LCP. Let us briefly examine some numerical examples to gain familiarity with the higher-dimensional setting. Consider the example

    A = [1 0 0 0; 0 2 0 0; 0 0 2 0; 0 0 0 4],  b = (−1, 1, −2, 0)ᵀ    (1.56)

Due to A being diagonal, the 4D LCP corresponds to four 1D LCPs that are independent of each other. The solution of the 4D LCP is seen to be x = (1, 0, 1, 0)ᵀ. Let us study a slightly more complex example,

    A = [1 0 0 1; 0 2 0 0; 0 0 2 0; −2 0 0 4],  b = (−1, 1, 1, −1)ᵀ    (1.57)

Here we observe that variables 1 and 4 are coupled through the non-diagonal non-zeros in A. The solution of the 4D LCP is x = (1/2, 0, 0, 1/2)ᵀ, which can be proven by computing y = Ax + b and verifying the complementarity conditions. More low-dimensional examples can be found in Section 2.1.1. It is non-trivial to find solutions by hand, as even this rather simple low-dimensional example illustrates. The complementarity conditions result in a combinatorial mess of trial and error when one seeks a solution using paper and pencil. Hence, dealing with LCPs numerically is much more convenient and efficient.
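The combinatorial nature of the conditions can be made concrete: for a tiny LCP one can simply enumerate all 2ⁿ choices of which complementarity branch (x_i = 0 or y_i = 0) is active. The following sketch (plain Python, only practical for very small n) recovers the solutions of the two examples above:

```python
from itertools import product

def solve_linear(M, rhs):
    """Solve M x = rhs by Gauss-Jordan elimination with partial pivoting."""
    n = len(rhs)
    A = [row[:] + [r] for row, r in zip(M, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        if abs(A[piv][col]) < 1e-12:
            return None  # singular subsystem, skip this branch choice
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

def solve_lcp_enumerate(A, b, tol=1e-9):
    """Try all 2^n branch choices: x_i = 0 (then require y_i >= 0),
    or y_i = (A x + b)_i = 0 (then require x_i >= 0)."""
    n = len(b)
    for free in product([False, True], repeat=n):
        M = [[(A[i][j] if free[i] else (1.0 if i == j else 0.0)) for j in range(n)]
             for i in range(n)]
        rhs = [-b[i] if free[i] else 0.0 for i in range(n)]
        x = solve_linear(M, rhs)
        if x is None:
            continue
        y = [sum(A[i][j] * x[j] for j in range(n)) + b[i] for i in range(n)]
        if all(xi >= -tol for xi in x) and all(yi >= -tol for yi in y):
            return x
    return None

# Example (1.56): a diagonal A decouples into four 1D LCPs.
A1 = [[1, 0, 0, 0], [0, 2, 0, 0], [0, 0, 2, 0], [0, 0, 0, 4]]
b1 = [-1, 1, -2, 0]
print(solve_lcp_enumerate(A1, b1))  # [1.0, 0.0, 1.0, 0.0]

# Example (1.57): variables 1 and 4 couple through the off-diagonal entries.
A2 = [[1, 0, 0, 1], [0, 2, 0, 0], [0, 0, 2, 0], [-2, 0, 0, 4]]
b2 = [-1, 1, 1, -1]
print(solve_lcp_enumerate(A2, b2))  # [0.5, 0.0, 0.0, 0.5]
```

The 2ⁿ enumeration is exactly the "combinatorial mess" alluded to above, which is why the iterative methods of Chapter 2 are needed for problems of practical size.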
Assuming a symmetric A-matrix, the quadratic programming (QP) reformulation becomes

    x* = arg min_{x ≥ 0} ½ xᵀAx + bᵀx    (1.58)

The first-order optimality conditions of this optimization problem are the LCP [Nocedal and Wright, 1999]:

    y = Ax + b    (1.59a)
    y ≥ 0         (1.59b)
    x ≥ 0         (1.59c)
    xᵀy = 0       (1.59d)
Applying the minimum map reformulation in an element-wise manner results in the nonsmooth root-search problem

    H(x) = H(x, y) = (h(x₁, y₁), …, h(xₙ, yₙ))ᵀ = 0    (1.60)

If we recall the definition of the minimum map (1.17), then by substitution we observe that we can write out our new definition as follows:

    H(x) = H(x, y) = (min(x₁, y₁), …, min(xₙ, yₙ))ᵀ
         = (min(x₁, Σ_i A_{1i}x_i + b₁), …, min(xₙ, Σ_i A_{ni}x_i + bₙ))ᵀ = 0    (1.61)

The Fischer-Burmeister function can be applied individually to each complementarity constraint to create the root-search problem

    F(x) = F(x, y) = (φ_FB(x₁, y₁), …, φ_FB(xₙ, yₙ))ᵀ = 0    (1.62)

As before, we can substitute φ_FB, as defined in (1.21), into the above to gain some familiarity with this reformulation:

    F(x) = F(x, y) = (√(x₁² + y₁²) − x₁ − y₁, …, √(xₙ² + yₙ²) − xₙ − yₙ)ᵀ
         = (√(x₁² + (Σ_i A_{1i}x_i + b₁)²) − x₁ − (Σ_i A_{1i}x_i + b₁), …,
            √(xₙ² + (Σ_i A_{ni}x_i + bₙ)²) − xₙ − (Σ_i A_{ni}x_i + bₙ))ᵀ = 0    (1.63)
The next step is to consider which methods to use to solve these problems numerically; we present a selection of such methods in Chapter 2. First, however, we take a look at applications of LCPs in physics-based animation, presenting examples in Section 1.3. Let us pause for a brief moment and reuse our previous number example to gain some familiarity and intuition about the two new higher-dimensional reformulations. Given the data

    A = [1 0 0 1; 0 2 0 0; 0 0 2 0; −2 0 0 4],  b = (−1, 1, 1, −1)ᵀ    (1.64)

we note, by substitution into our definition, that the minimum map reformulation becomes

    H(x) = ( min(x₁, −1 + x₁ + x₄),
             min(x₂, 1 + 2x₂),
             min(x₃, 1 + 2x₃),
             min(x₄, −1 − 2x₁ + 4x₄) )ᵀ = 0    (1.65)

Notice, by substitution, that our solution x = (1/2, 0, 0, 1/2)ᵀ is a root of this equation. Using the number example in the Fischer-Burmeister reformulation results in

    F(x) = ( √(x₁² + (−1 + x₁ + x₄)²) − x₁ + 1 − x₁ − x₄,
             √(x₂² + (1 + 2x₂)²) − x₂ − 1 − 2x₂,
             √(x₃² + (1 + 2x₃)²) − x₃ − 1 − 2x₃,
             √(x₄² + (−1 − 2x₁ + 4x₄)²) − x₄ + 1 + 2x₁ − 4x₄ )ᵀ = 0    (1.66)

As before, substitution of x = (1/2, 0, 0, 1/2)ᵀ verifies that this is a solution of the reformulation.
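Both reformulations are straightforward to evaluate in code. The sketch below implements the element-wise residuals (1.61) and (1.63) and confirms that x = (1/2, 0, 0, 1/2)ᵀ zeroes both for the number example (1.64):

```python
import math

def min_map_residual(x, A, b):
    """Element-wise minimum map residual H(x) from (1.61)."""
    n = len(x)
    y = [sum(A[i][j] * x[j] for j in range(n)) + b[i] for i in range(n)]
    return [min(x[i], y[i]) for i in range(n)]

def fischer_residual(x, A, b):
    """Element-wise Fischer-Burmeister residual F(x) from (1.63)."""
    n = len(x)
    y = [sum(A[i][j] * x[j] for j in range(n)) + b[i] for i in range(n)]
    return [math.sqrt(x[i] ** 2 + y[i] ** 2) - x[i] - y[i] for i in range(n)]

# The number example (1.64) and its solution x = (1/2, 0, 0, 1/2).
A = [[1, 0, 0, 1], [0, 2, 0, 0], [0, 0, 2, 0], [-2, 0, 0, 4]]
b = [-1, 1, 1, -1]
x = [0.5, 0.0, 0.0, 0.5]
print(min_map_residual(x, A, b))  # [0.0, 0.0, 0.0, 0.0]
print(fischer_residual(x, A, b))  # [0.0, 0.0, 0.0, 0.0]
```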
1.2.1 1D BLCP TO 4D LCP

Having introduced the higher-dimensional LCP definition, we now have the necessary tools to show how the 1D BLCP can be rewritten in LCP form. As shown in Equations (1.27)-(1.41), we can rewrite the BLCP as the following set of minimum map formulations:

    y = ax + b           (1.67a)
    min(β − y, β) = 0    (1.67b)
    min(γ + y, γ) = 0    (1.67c)
    min(β, x − l) = 0    (1.67d)
    min(γ, u − x) = 0    (1.67e)
Using a reversal of arguments, we can reformulate the BLCP as a regular LCP. First we write the matrix equation

    q = (β − y, γ + y, β, γ)ᵀ = M z + c    (1.68)

with

    M = [1 0 −a 0; 0 1 0 −a; 1 0 0 0; 0 1 0 0],
    z = (β, γ, x − l, u − x)ᵀ,
    c = (−(b + al), b + au, 0, 0)ᵀ

which we use to state the four-dimensional LCP:

    z ≥ 0,  q ≥ 0,  and  zᵀq = 0    (1.69)
This LCP form includes slack variables, a zero-diagonal block, and a non-symmetric matrix M. Hence, in this form, we cannot rewrite the BLCP as an equivalent QP. This mathematical exercise shows one advantage of the BLCP form: the BLCP is a more compact representation than the equivalent LCP. Also, the LCP reformulation in Equation (1.69) lacks some of the nice properties of the original BLCP, e.g., symmetry.
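The construction of (1.68) is mechanical and easy to check numerically. The sketch below assembles M and c from illustrative values of a, b, l, and u (chosen here for the example) and verifies that the known BLCP solution satisfies the resulting 4D LCP:

```python
def blcp_to_lcp(a, b, l, u):
    """Assemble M and c of (1.68) for the 1D BLCP with y = a*x + b on [l, u].

    Variables: z = (beta, gamma, x - l, u - x); slacks: q = M z + c.
    """
    M = [[1.0, 0.0, -a, 0.0],
         [0.0, 1.0, 0.0, -a],
         [1.0, 0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0, 0.0]]
    c = [-(b + a * l), b + a * u, 0.0, 0.0]
    return M, c

# BLCP example: y = 2x - 3 on [0, 1]. Since y(1) = -1 < 0, the solution is
# x = u = 1 with beta = 0 and gamma = -y = 1, i.e., z = (0, 1, 1, 0).
M, c = blcp_to_lcp(2.0, -3.0, 0.0, 1.0)
z = [0.0, 1.0, 1.0, 0.0]
q = [sum(M[i][j] * z[j] for j in range(4)) + c[i] for i in range(4)]
print(q)                                     # [1.0, 0.0, 0.0, 1.0] (all >= 0)
print(sum(zi * qi for zi, qi in zip(z, q)))  # 0.0 (complementarity holds)
```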
1.2.2 THE BOXED LINEAR COMPLEMENTARITY PROBLEM IN HIGHER DIMENSIONS

Next we consider the n-dimensional case of the BLCP. We introduce the vectors of lower and upper bounds l, u ∈ ℝⁿ. Given the usual x, b ∈ ℝⁿ and A ∈ ℝⁿˣⁿ, the problem is to find x such that y = Ax + b and

    y_i > 0 ⇒ x_i = l_i          (1.70a)
    y_i < 0 ⇒ x_i = u_i          (1.70b)
    y_i = 0 ⇒ l_i ≤ x_i ≤ u_i    (1.70c)

must hold for all i. We assume l_i < u_i holds for all i, which we write compactly as l < u. We observe that the n-dimensional problem is a simple extension of the one-dimensional definition to include all n dimensions. If we let l_i = −∞ and u_i = ∞ for all i:

    y_i > 0 ⇒ x_i = −∞           (1.71a)
    y_i < 0 ⇒ x_i = ∞            (1.71b)
    y_i = 0 ⇒ −∞ ≤ x_i ≤ ∞       (1.71c)

we see that solving the BLCP reduces to solving the linear system Ax + b = 0.
This is equivalent to what we found in the one-dimensional case. On the other hand, if l_i = 0 and u_i = ∞ for all i:

    y_i > 0 ⇒ x_i = 0        (1.72a)
    y_i < 0 ⇒ x_i = ∞        (1.72b)
    y_i = 0 ⇒ 0 ≤ x_i ≤ ∞    (1.72c)

we have reduced the BLCP to a regular LCP. Again, this is consistent with the one-dimensional case. Since we have n BLCPs, we may find that both cases appear, that is, we have both a number of linear equations and LCPs. Let us define

    E ≡ {i | y_i = 0}    (1.73)

and

    N ≡ {i | 0 ≤ y_i ⊥ x_i ≥ 0}    (1.74)

Now, for all i ∈ E we set l_i = −∞ and u_i = ∞, and for all i ∈ N we set l_i = 0 and u_i = ∞. The result is a complementarity problem that is a mix of a linear system and an LCP. Making the imaginary partitioning

    (y_E; y_N) = [A_EE A_EN; A_NE A_NN](x_E; x_N) + (b_E; b_N)    (1.75)

we have that solving the BLCP corresponds to solving the linear system

    y_E = 0    (1.76)

while simultaneously solving the LCP

    0 ≤ y_N ⊥ x_N ≥ 0    (1.77)

Because of this coupling of a linear system and an LCP, this specific problem class is termed a mixed LCP (MLCP). It should now be apparent that MLCPs and LCPs are simply subsets of BLCPs. Furthermore, LCPs are the subset of MLCPs where E = ∅.
1.2.3 BLCP AND THE QP FORMULATION

In Section 1.2.1, we showed how to rewrite the 1D BLCP as a QP; to do so in n dimensions, A must be symmetric positive definite. Let us start from the n-dimensional QP,

    x* = arg min_x ½ xᵀAx + bᵀx    (1.78)

subject to

    x − l ≥ 0    (1.79a)
    u − x ≥ 0    (1.79b)
As we did in the 1D case, we again apply the first-order optimality conditions (see Appendix B). The Lagrangian is

    L = ½ xᵀAx + bᵀx − (x − l)ᵀy⁺ − (u − x)ᵀy⁻    (1.80)

We use y⁺ and y⁻ in place of the usual Lagrange multipliers. Notice that, for a given index, we can never have both the lower and the upper constraint active at the same time. Hence, by design we know

    y⁺ ≥ 0,  y⁻ ≥ 0,  and  (y⁺)ᵀ(y⁻) = 0    (1.81)

First-order optimality requires that ∇L = 0 and that the constraints (1.81) are not violated. We can write this as a coupled system:

    y = y⁺ − y⁻        (1.82a)
    y = Ax + b         (1.82b)
    x − l ≥ 0          (1.82c)
    u − x ≥ 0          (1.82d)
    y⁺ ≥ 0             (1.82e)
    y⁻ ≥ 0             (1.82f)
    (x − l)ᵀy⁺ = 0     (1.82g)
    (u − x)ᵀy⁻ = 0     (1.82h)

and so we have our BLCP.
1.2.4 CONVERTING BLCP TO LCP

We have shown that the BLCP can be reformulated as a QP in both 1D and nD; now let us look at reformulating the nD BLCP as an LCP. Retracing our steps from Section 1.2.1, we arrive at the linear relation

    q = (β − y, γ + y, β, γ)ᵀ = M z + c    (1.83)

with

    M = [I 0 −A 0; 0 I 0 −A; I 0 0 0; 0 I 0 0],
    z = (β, γ, x − l, u − x)ᵀ,
    c = (−(b + Al), b + Au, 0, 0)ᵀ

and the LCP

    0 ≤ q ⊥ z ≥ 0    (1.84)

This LCP has four times as many variables as the corresponding BLCP, which could be worrying from a practical viewpoint. Hence, we will now derive an equivalent LCP of lower dimensionality. Again, given the definition of the n-dimensional BLCP (1.70), we split y into positive components y⁺ and negative components y⁻ such that

    y = y⁺ − y⁻    (1.85)
and y_i⁺ = y_i if y_i > 0 and 0 otherwise. Similarly, y_i⁻ = −y_i if y_i < 0 and 0 otherwise. Then we rewrite the BLCP as follows:

    y_i⁺ > 0 ⇒ x_i − l_i = 0    (1.86a)
    y_i⁺ = 0 ⇒ x_i − l_i ≥ 0    (1.86b)
    y_i⁻ > 0 ⇒ u_i − x_i = 0    (1.86c)
    y_i⁻ = 0 ⇒ u_i − x_i ≥ 0    (1.86d)

must hold for all i. Assume there exists a matrix B ∈ ℝⁿˣⁿ such that BA = I; then we have

    Ax + b = y              (1.87a)
    Ax = y − b              (1.87b)
    x = By − Bb             (1.87c)
    x = By⁺ − By⁻ − Bb      (1.87d)

Substitution yields

    y_i⁺ > 0 ⇒ (By⁺ − By⁻ − Bb − l)_i = 0     (1.88a)
    y_i⁺ = 0 ⇒ (By⁺ − By⁻ − Bb − l)_i ≥ 0     (1.88b)
    y_i⁻ > 0 ⇒ (−By⁺ + By⁻ + u + Bb)_i = 0    (1.88c)
    y_i⁻ = 0 ⇒ (−By⁺ + By⁻ + u + Bb)_i ≥ 0    (1.88d)

Introducing matrix notation, we may write

    (x − l; u − x) = [B −B; −B B](y⁺; y⁻) + (−Bb − l; u + Bb)    (1.89)

where we identify w = (x − l; u − x), M = [B −B; −B B], z = (y⁺; y⁻), and q = (−Bb − l; u + Bb); that is,

    w = M z + q    (1.90)

Then we simply have the LCP

    0 ≤ w ⊥ z ≥ 0    (1.91)

Compared to the previous form, this LCP reformulation has only twice as many variables as the corresponding BLCP. However, M may be very dense, and the construction requires the inverse (or pseudo-inverse) of A, which can be very expensive to compute. A final note on BLCPs is that the upper and lower bounds, u and l, may actually be functions of x. This is a further generalization of the BLCP. When the bounds are variable, the BLCP cannot be reformulated as an LCP. This can be seen from the dependence on x in q, which cannot be moved into w without ruining the complementarity condition. In Section 1.3.5 we present an example of a model that ends up being a BLCP with variable linear bounds.
1.2.5 NONSMOOTH REFORMULATIONS OF BLCP

The reformulations of the 1D BLCP (1.43) can be extended straightforwardly to higher dimensions by applying the 1D definitions component-wise. The resulting higher-dimensional version looks like this:

    H_BLCP(x, y, l, u) = ( min(u₁ − x₁, max(−y₁, l₁ − x₁)),
                           …,
                           min(uₙ − xₙ, max(−yₙ, lₙ − xₙ)) )ᵀ = 0    (1.92)
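A component-wise implementation of (1.92) is a one-liner per component. The following sketch evaluates the residual at a hand-picked point (illustrative values only) where each component satisfies the BLCP conditions:

```python
def blcp_min_map_residual(x, y, l, u):
    """Component-wise minimum map residual for the BLCP, Equation (1.92):
    H_i = min(u_i - x_i, max(-y_i, l_i - x_i))."""
    return [min(ui - xi, max(-yi, li - xi))
            for xi, yi, li, ui in zip(x, y, l, u)]

# A 2D check: x_1 sits at its upper bound with y_1 < 0, and x_2 sits at its
# lower bound with y_2 > 0; both components satisfy the BLCP conditions.
x, y = [1.0, 0.0], [-2.0, 1.5]
l, u = [0.0, 0.0], [1.0, 1.0]
print(blcp_min_map_residual(x, y, l, u))  # [0.0, 0.0]
```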
A similar approach can be taken for the 1D Fischer-Burmeister reformulation (1.48), resulting in the equivalent higher-dimensional version:

    F(x) = F(x, y) = ( φ_FB(x₁ − l₁, φ_FB(−y₁, u₁ − x₁)),
                       …,
                       φ_FB(xₙ − lₙ, φ_FB(−yₙ, uₙ − xₙ)) )ᵀ = 0    (1.93)

1.3 EXAMPLES FROM PHYSICS-BASED ANIMATION
In this section we present examples of modeling physical phenomena with LCPs. The first example is the boundary behavior of a fluid in contact with a solid wall. The second example is free-flowing granular matter. Then we present density correction, inverse kinematics, and lastly contact forces between two colliding objects. This is not an exhaustive survey of applications; plenty of others could be thought of. For instance, we could also model joint limits and actuators (joint motors) as LCPs [Erleben, 2005]. We speculate that optimal control (i.e., animation control) could be an interesting application, where LCPs could be used to turn control forces on and off in regions of interest. Hoping to have motivated the use of LCP models through several examples, we then turn our attention toward developing numerical methods for solving such LCP models in Chapter 2.
1.3.1 FLUID-SOLID WALL BOUNDARY CONDITIONS

Our first example of LCPs in physics-based animation is modeling fluid-solid wall boundary conditions [Batty et al., 2007, Chentanez and Müller, 2011]. This is a recent approach, and examples in the literature are still sparse. In physics-based animation, most works use the incompressible Euler equations [Bridson, 2008, Gerszewski and Bargteil, 2013]

    ρ ∂u/∂t = −ρ(u · ∇)u − ∇p + f    (1.94a)
    ∇ · u = 0                        (1.94b)

where ρ is the mass density, u is the velocity field, p is the pressure field, and f is the external force density. The traditional approach is to apply the boundary conditions p = 0 on free surfaces
between fluid and vacuum, and u · n = 0 between the fluid and a static solid wall with unit outward normal n. For simplicity, we present the ideas for a single-phase flow in vacuum; they generalize trivially to multiphase flow and to dynamic solid wall boundary conditions [Batty et al., 2007, Chentanez and Müller, 2011]. In physics-based animation, coarse grids are used to keep the computational cost down. This causes a problem with the traditional solid wall boundary condition u · n = 0: cell-size-thick layers of fluid get stuck on walls, which appears visually unrealistic. Thus, it has been proposed to change the solid wall boundary condition to

    0 ≤ p ⊥ u · n ≥ 0    (1.95)

This allows the fluid to separate from the wall. The condition u · n > 0 enforces p = 0, making the interface act like a free surface. On the other hand, if u · n = 0, then the fluid is at rest at the wall and there must be a pressure p > 0 acting on the fluid to keep it at rest. Figures 1.9-1.12 illustrate the difference between the traditional free-slip boundary condition and the separating wall boundary condition. Notice how the free-slip conditions make the water seem viscous, as if it were sticking to the walls.

A popular approach is to spatially discretize the equations of motion on a staggered regular grid using finite difference approximations of the spatial derivatives. For the temporal derivative, one deals with the partial differential equation using a fractional step method (also known as operator splitting) [Stam, 1999]. This means that in the last sub-step of the fractional step method the problem being solved is

    u^{n+1} = u′ − (Δt/ρ) ∇p    (1.96a)
    ∇ · u^{n+1} = 0             (1.96b)

where u^{n+1} is the final divergence-free velocity of the fluid and u′ is the fluid velocity obtained from the previous step of the fractional step method. The time step is given by Δt. Substituting the first equation into the second yields

    ∇ · u^{n+1} = ∇ · u′ − (Δt/ρ) ∇²p = 0    (1.97)

Introducing the spatial discretization, we obtain the Poisson equation, which for notational convenience we write as

    Ap + b = 0    (1.98)

where p is the vector of all cell-centered pressure values, A ≡ {−(Δt/ρ)∇²}, and b ≡ {∇ · u′}. The matrix A is a symmetric banded matrix. In 2D it will have five bands when using a five-point stencil; in 3D it will have seven bands for a seven-point stencil. For regular grids, all off-diagonal bands have the same value. Furthermore, A is known to be a PSD matrix, but adding the boundary condition p = 0 ensures that a unique solution can be found. Once the
(a) Free slip solid wall boundary conditions
(b) Separating solid wall boundary conditions
Figure 1.9: Visual comparison of boundary conditions. Observe how the separating solid wall boundary conditions give a more realistic everyday-scale perception of the fluid behavior. In (a), the liquid sticks to the wall and slides off, whereas in (b), the liquid falls down.
(a) Free slip solid wall boundary conditions
(b) Separating solid wall boundary conditions
Figure 1.10: Visual comparison of boundary conditions showing water splashing inside a circle and interacting with solid walls.
(a) Free slip solid wall boundary conditions
(b) Separating solid wall boundary conditions
Figure 1.11: Visual comparison of boundary conditions, showing the flow difference in a water labyrinth scenario.
(a) Free slip solid wall boundary conditions
(b) Separating solid wall boundary conditions
Figure 1.12: Visual comparison of boundary conditions for a square of water splashing inside a square container.
pressure vector has been computed, it can be used to compute the last step of the fractional step method (1.96a).

Let us revisit the complementarity problem arising from the modified boundary condition and examine what happens if u^{n+1} · n > 0 at a solid wall boundary. To start the analysis, we examine what happens with (1.94b) in an arbitrarily small control volume V around a solid wall boundary point:

    ∫_V ∇ · u^{n+1} dV = ∮_S u^{n+1} · n dS > 0    (1.99)

The last inequality follows from the assumption that u^{n+1} · n > 0. This means that, if we pick the row of the discrete Poisson equation that corresponds to the solid wall boundary point, we obtain (for the j-th row)

    A_j p + b_j > 0    (1.100)

If, on the other hand, u^{n+1} · n = 0 at the solid wall, then we rediscover A_j p + b_j = 0. Although we have skipped the details of the discretization, it should be intuitively clear that the pressure solve for the new modified boundary conditions is given by the LCP

    0 ≤ p ⊥ Ap + b ≥ 0    (1.101)
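As a small preview of the methods presented in Chapter 2, the sketch below solves an LCP of the form (1.101) with a projected Gauss-Seidel iteration. The tridiagonal matrix is an illustrative stand-in for a tiny 1D Poisson discretization, not a full fluid solver:

```python
def projected_gauss_seidel(A, b, iters=500):
    """Solve 0 <= p complementary to A p + b >= 0 by projected Gauss-Seidel:
    sweep the rows, solve row i for p_i, and clamp to p_i >= 0."""
    n = len(b)
    p = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Row residual excluding the diagonal term A_ii * p_i.
            r = b[i] + sum(A[i][j] * p[j] for j in range(n)) - A[i][i] * p[i]
            p[i] = max(0.0, -r / A[i][i])
    return p

# Illustrative stand-in for (1.101): a 1D three-point Poisson stencil (SPD).
n = 5
A = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0) for j in range(n)]
     for i in range(n)]
b = [0.5, -1.0, 0.5, 0.5, 0.5]
p = projected_gauss_seidel(A, b)
y = [sum(A[i][j] * p[j] for j in range(n)) + b[i] for i in range(n)]
print(all(pi >= 0 for pi in p))                          # True
print(all(yi >= -1e-8 for yi in y))                      # True
print(abs(sum(pi * yi for pi, yi in zip(p, y))) < 1e-8)  # True
```

For a symmetric PSD matrix such as the discrete Poisson operator, this iteration is a standard workhorse; Chapter 2 treats such methods and their convergence properly.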
1.3.2 FREE-FLOWING GRANULAR MATTER

Simulating free-flowing granular materials using a continuum model makes a complementarity condition the natural choice. Here we briefly sketch how such a model may be derived, outlining the ideas from [Narain et al., 2010]. The same granular continuum model was used in [Alduán and Otaduy, 2011], although with a predictive-corrective incompressible SPH method in place of the complementarity model derived below. The full material derivative is given by

    D/Dt = ∂/∂t + (u · ∇)    (1.102)

where u is the velocity field. Applying conservation of mass results in the mass transport equation

    ∂ρ/∂t + ∇ · (ρu) = 0    (1.103)

where ρ is the mass density of the granular material. The Cauchy stress tensor is written as

    σ = −pI + s    (1.104)

where p ≥ 0 is the isotropic mean stress and s is the traceless deviatoric stress tensor, which represents the frictional stress within the granular matter. The isotropic mean stress can be interpreted as a kind of internal pressure, in analogy to the liquid model from Section 1.3.1. From Cauchy's equation we obtain the equations of motion

    ρ Dv/Dt = f − ∇p + ∇ · s    (1.105)
Here f is the external body force density. A Drucker-Prager yield criterion is used to model the change in internal frictional behavior,

    ‖s‖_F ≤ √3 α p    (1.106)

where ‖·‖_F is the Frobenius norm, defined as ‖s‖_F = √(Σ_ij s_ij²), and the material parameter α is a frictional coefficient related to the angle of repose θ through

    α = √(2/3) sin θ    (1.107)

From a time discretization of the equation of motion we obtain the intermediate velocity

    u^{n+1/2} = uⁿ + (Δt/ρ)(fⁿ − ∇p + ∇ · s)    (1.108)

Assuming the granular matter cannot be compressed beyond a given critical maximum density ρ_max, we have

    ρ ≤ ρ_max    (1.109)

Dividing by ρ_max, we find a relation for the volume fraction

    φ = ρ/ρ_max ≤ 1    (1.110)

Applying a time discretization to the mass transport equation using u^{n+1/2} and dividing by ρ_max, we obtain

    φ^{n+1} = φⁿ − Δt ∇ · (φⁿ u^{n+1/2})    (1.111)

Substituting for u^{n+1/2} yields

    φ^{n+1} = φⁿ − Δt ∇ · ( φⁿ uⁿ + (Δt φⁿ/ρ)(f − ∇p + ∇ · s) )    (1.112)

which we can rewrite as

    φ^{n+1} = φⁿ − Δt ∇ · ( φⁿ uⁿ + (Δt φⁿ/ρ)(f + ∇ · s) ) + (Δt²/ρ_max) ∇²p    (1.113)

where the first two terms constitute φ^{n+1}|_{p=0}. Here we introduce the symbol φ^{n+1}|_{p=0} to denote the updated volume fraction given zero isotropic mean stress. Now p is chosen such that 0 ≤ φ^{n+1} ≤ 1 always holds. When p is active (positive), then φ^{n+1} must equal 1. This implies the linear complementarity condition

    0 ≤ p ⊥ 1 − φ^{n+1} ≥ 0    (1.114)
1.3.3 DENSITY CORRECTION

In particle simulation, it can be meaningful to have some sort of constraint on the compressibility of the particles. In simulations, advection may, due to discretization and round-off errors, violate this constraint. Hence, a correction of the density field is desired such that the constraint is fulfilled. The danger is that violating the constraint could cause incorrect pressure gradients in, say, liquid simulations; leaving this untreated could cause simulations to blow up. Density correction has been used in [Narain et al., 2010] and [Gerszewski and Bargteil, 2013]. Here we briefly describe the overall idea. The full material derivative is given by

    D/Dt = ∂/∂t + (u · ∇)    (1.115)

where u is the velocity field. Applying conservation of mass results in the mass transport equation

    ∂ρ/∂t + ∇ · (ρu) = 0    (1.116)

where ρ is the mass density. Given a density field ρ, and given that the matter of interest cannot be compressed beyond a certain maximum, we may write this as

    ρ ≤ ρ_max    (1.117)

Time discretization of the mass transport equation yields

    ρ^{n+1} = ρⁿ − Δt ∇ · (ρⁿ u^{n+1})    (1.118)

We may move Δt inside the differential operator and think of d^{n+1} = Δt u^{n+1} as a displacement of the ρ-field, resulting in

    ρ^{n+1} = ρⁿ − ∇ · (ρⁿ d^{n+1})    (1.119)

Usually, in a fractional step method, at this point we would be left with the pressure term

    ∂u/∂t = −(1/ρ) ∇p    (1.120)

leading to a time discretization such as

    u^{n+1} = uⁿ − (Δt/ρⁿ) ∇p    (1.121)

Multiplying by Δt and assuming the initial displacement to be zero gives us

    d^{n+1} = −(Δt²/ρⁿ) ∇p^{n+1}    (1.122)

As Δt² is a positive constant, we can hide the multiplier in the p^{n+1}-field. Let us define x^{n+1} = Δt² p^{n+1}, which can be thought of as a displacement potential field:

    d^{n+1} = −(1/ρⁿ) ∇x^{n+1}    (1.123)

Back-substitution into the discrete mass transport equation results in

    ρ^{n+1} = ρⁿ − ∇ · (ρⁿ d^{n+1})    (1.124)
            = ρⁿ + ∇²x^{n+1}           (1.125)
    ∇²x^{n+1} = ρ^{n+1} − ρⁿ           (1.126)

Now, we wish to solve for x^{n+1} ≥ 0 such that ρ^{n+1} ≤ ρ_max, which is the same as 0 ≤ ρ_max − ρ^{n+1}. However, if the constraint is already fulfilled, then x^{n+1} must be 0; if the constraint initially is not met, then x^{n+1} > 0. Notice that x^{n+1} is a Lagrange multiplier for the density constraint. All combined, this implies the linear complementarity problem

    y^{n+1} = −∇²x^{n+1} + ρ_max − ρⁿ ≥ 0    (1.127)
    x^{n+1} ≥ 0                              (1.128)
    y^{n+1} x^{n+1} = 0                      (1.129)

Once a solution x^{n+1} is found, the corresponding displacement can be computed using (1.123). Knowing the displacement field, the density field can be updated using (1.119).
1.3.4 JOINT LIMITS IN INVERSE KINEMATICS

Inverse kinematics is, in short, the problem of determining the joint parameters of an articulated figure when all you know is the goal position of the articulated figure's end effector. Readers who are not familiar with inverse kinematics can find a thorough introduction in [Engell-Nørregård, 2012]; this work uses a mathematical and numerical optimization approach similar to the one we use below. We denote the end effector function of a kinematic chain by F(θ): ℝⁿ ↦ ℝᵐ and the goal vector by g ∈ ℝᵐ. Let f: ℝⁿ ↦ ℝ and C: ℝⁿ ↦ ℝˢ, with

    f(θ) = ½ (g − F(θ))ᵀ W (g − F(θ))    (1.130)

where W is a positive definite weighting matrix. The nonlinear inverse kinematics problem is then the minimization problem

    θ* = arg min f(θ)  subject to  C(θ) ≥ 0    (1.131)

In the general case, C(·) can be any constraint function. In the particular case of box limits, we have s = 2n and

    C(θ) = (θ − l; u − θ) = (I_{n×n}; −I_{n×n}) θ + (−l; u)    (1.132)
where l, u ∈ ℝⁿ are constant lower and upper bounds such that l_i ≤ u_i for all i = 1, …, n. Given an iterate θᵏ ∈ ℝⁿ for some positive iteration number k ∈ ℕ⁺, we wish to find Δθ such that

    θ = θᵏ + Δθ    (1.133)

Using this, we can make a first-order approximation of the objective function,

    f(θ) = f(θᵏ + Δθ) ≈ f(θᵏ) + (∂f(θᵏ)/∂θ) Δθ = fᵏ + (∂fᵏ/∂θ) Δθ    (1.134)

writing fᵏ = f(θᵏ) and ∂fᵏ/∂θ = ∂f(θᵏ)/∂θ for short. In a sense, this first-order linear approximation measures the residual from an optimal solution f(θ*) = 0. Thus, we define the residual as

    r(Δθ) = fᵏ + (∂fᵏ/∂θ) Δθ    (1.135)

Now we can restate our optimization problem as a sequence of problems, each trying to minimize the residual error,

    Δθᵏ = arg min_{Δθ} ½ r(Δθ)²  subject to  C(θᵏ + Δθ) ≥ 0    (1.136)

The first-order optimality conditions yield

    ∂(½ r(Δθ)²)/∂Δθ − xᵀ (∂C(θᵏ)/∂θ) = 0ᵀ    (1.137a)
    x ≥ 0                                     (1.137b)
    C(θᵏ + Δθ) ≥ 0                            (1.137c)
    xᵀ C(θᵏ + Δθ) = 0                         (1.137d)

where x denotes the vector of Lagrange multipliers. From (1.137a) we have

    r(Δθ) (∂r(Δθ)/∂Δθ) − xᵀ (∂C(θᵏ)/∂θ) = 0ᵀ    (1.138a)
    Δθᵀ H + cᵀ − xᵀ (∂C(θᵏ)/∂θ) = 0ᵀ            (1.138b)

where H = (∂fᵏ/∂θ)ᵀ (∂fᵏ/∂θ) and c = fᵏ (∂fᵏ/∂θ)ᵀ. We can now isolate Δθ on the left-hand side:

    Δθ = H⁻¹ ( (∂C(θᵏ)/∂θ)ᵀ x − c ) ≡ h(x)    (1.139)
Observe that h(x) is an affine function. Substitution into the first-order optimality conditions yields

    x ≥ 0                     (1.140a)
    C(θᵏ + h(x)) ≥ 0          (1.140b)
    xᵀ C(θᵏ + h(x)) = 0       (1.140c)

Defining

    g(x) ≡ C(θᵏ + h(x))    (1.141)

we see that, in the case of a general constraint function C(·), the first-order optimality conditions become the nonlinear complementarity problem (NCP)

    x ≥ 0          (1.142a)
    g(x) ≥ 0       (1.142b)
    xᵀ g(x) = 0    (1.142c)

Using the definition of the box-limit constraint function (1.132), the g-function becomes an affine mapping,

    g(x) = (I; −I) H⁻¹ (I −I) x + ( θᵏ − H⁻¹c − l ; u − θᵏ + H⁻¹c )    (1.143)

with A ≡ (I; −I) H⁻¹ (I −I) and b the constant vector, and the first-order optimality conditions turn into the LCP:

    x ≥ 0              (1.144a)
    Ax + b ≥ 0         (1.144b)
    xᵀ(Ax + b) = 0     (1.144c)

The above formulation shows that inverse kinematics problems can be rephrased as a series of LCPs. This may prove to be a merely academic result, as we have applied so many linearizations that we may have over-regularized the local LCPs. The regularizations may imply that solving the LCP problems becomes far more complicated than solving, for instance, local QP models. However, the purpose was merely to show that other problems exist where LCP modeling can be used.
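Assembling the LCP data of (1.143) is simple block linear algebra. The sketch below uses hypothetical two-joint values for H⁻¹, c, θᵏ, and the bounds (all placeholders, not from the text); note that A comes out symmetric by construction:

```python
def ik_box_lcp(Hinv, c, theta, l, u):
    """Assemble A and b of (1.143) for box limits, given H^{-1}, c, the
    current iterate theta, and bounds l, u (all illustrative inputs)."""
    n = len(theta)
    # A = [[Hinv, -Hinv], [-Hinv, Hinv]] as a 2n x 2n block matrix.
    A = [[0.0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            A[i][j] = Hinv[i][j]
            A[i][n + j] = -Hinv[i][j]
            A[n + i][j] = -Hinv[i][j]
            A[n + i][n + j] = Hinv[i][j]
    Hc = [sum(Hinv[i][j] * c[j] for j in range(n)) for i in range(n)]
    b = [theta[i] - Hc[i] - l[i] for i in range(n)] + \
        [u[i] - theta[i] + Hc[i] for i in range(n)]
    return A, b

# Hypothetical 2-joint data: H = 2I, so H^{-1} = I/2.
Hinv = [[0.5, 0.0], [0.0, 0.5]]
c = [1.0, -1.0]
theta, l, u = [0.2, 0.4], [0.0, 0.0], [1.0, 1.0]
A, b = ik_box_lcp(Hinv, c, theta, l, u)
print(all(A[i][j] == A[j][i] for i in range(4) for j in range(4)))  # True
```

Symmetry of A is what one expects here, since the LCP is derived from a QP; this contrasts with the non-symmetric M of the BLCP-to-LCP conversion in Section 1.2.1.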
1.3.5 CONTACT FORCE EXAMPLES

In this section, we use LCPs to model frictional contact force computations [Anitescu and Potra, 1997, Lötstedt, 1984, Stewart and Trinkle, 1996]. It should be noted that contact forces can be modeled in other ways than with LCPs [Anitescu and Tasora, 2008, Bertails-Descoubes et al., 2011, Daviet et al., 2011]. We refer the interested reader to [Bender et al., 2012] for complete references on the alternatives. Figures 1.13-1.15 illustrate typical simulation examples as used in [Niebe, 2014], [Erleben, 2007], and [Silcowitz et al., 2010b]. Here contact forces are modeled as complementarity problems and solved using numerical methods similar to the ones described in Section 2.2.
Figure 1.13: Stills from a rigid body simulation using a proximal map reformulation to compute contact forces [Niebe, 2014].
We start by introducing the original LCP formulation, which correctly describes the physics of frictional contacts. This model is widely known as the Stewart-Trinkle model. Acknowledging that this is not the predominant model used in interactive simulations, we follow up the Stewart-Trinkle model with a derivation of the less accurate BLCP model. Many variations of these two models exist, and they are still the subject of ongoing research. One such example is the PEG model, which avoids some issues of the Stewart-Trinkle model through a tighter coupling of the collision detection and the dynamics solver [Flickinger et al., 2013, 2015, Williams et al., 2012]. We leave it to the interested reader to explore these extended ideas.

The Stewart-Trinkle model
To simplify notation, we present the model as though we were considering a single contact point; this is done without any loss of generality. Consider two bodies in contact; there will be some constraints imposed on such two bodies. The two bodies are not allowed to occupy the same space at the same time, that is, penetration is not allowed. This imposes the first constraint, the so-called non-penetration constraint. To model the non-penetration constraint, we use a complementarity problem

    0 ≤ vₙ ⊥ λₙ ≥ 0    (1.145a)

The scalar vₙ is the normal component of the relative contact velocity and λₙ is the magnitude of the normal contact impulse. If there is a separation, vₙ > 0, then the normal contact impulse must be 0. On the other hand, if there is a normal contact impulse λₙ > 0, the contact is a resting contact with zero relative contact velocity, vₙ = 0.
Figure 1.14: Stills from a rigid body simulation where a blocked BLCP is solved to compute contact forces [Erleben, 2007].
Figure 1.15: Stills from a rigid body simulation where a BLCP is solved to compute contact forces using the NNCG method [Silcowitz et al., 2010b].
1.3. EXAMPLES FROM PHYSICS-BASED ANIMATION
Figure 1.16: Two circular bodies, B_1 and B_2, in exact contact. The horizontal line is the contact plane, with unit normal n.
When two bodies are in contact, any movement along the tangential plane of the contact point will be constrained by the friction force occurring between the objects. The friction force can be modeled as a polyhedral cone [Anitescu and Potra, 1997, Stewart and Trinkle, 1996], given as a positive span of K unit vectors t_i. Let λ_n be the magnitude of the normal impulse and λ_t = [λ_{t_1} … λ_{t_K}]^T the vector of friction impulses. The linearized model is

0 ≤ v_n ⊥ λ_n ≥ 0    (1.146a)
0 ≤ βe + v_t ⊥ λ_t ≥ 0    (1.146b)
0 ≤ μλ_n − Σ_i λ_{t_i} ⊥ β ≥ 0    (1.146c)
where e is a K-dimensional vector of 1s. The first complementarity constraint models the non-penetration constraint, as before. The second equation makes sure that, in case we do have friction, λ_{t_i} > 0 for some i, then β will estimate the maximum sliding velocity along the t_i directions. Observe that this equation is a K-dimensional vector equation. Its main purpose is to choose the t_i direction that best approximates the direction of maximum dissipation. The last equation makes sure the friction force is bounded by the Coulomb friction cone. Notice that if β > 0 the last equation will force the friction force to lie on the boundary of the polyhedral friction cone. If β = 0, the two last equations model static friction. That is, no sliding can occur and any friction force inside the friction cone is feasible. Due to the positive span of the t_i, one usually has several v_{t_i} ≠ 0 for sliding motion. However, the model will pick only one λ_{t_i} to be non-zero. The t_i direction chosen by the model is the one mostly opposing the sliding direction. Only in the rare case where the sliding direction lies symmetrically between t_i directions may the model pick two positive λ_{t_i} values. Observe that Σ_i λ_{t_i} = e^T λ_t and we have v = [v_n  v_t]^T. From the discretization of the Newton-Euler equations we have the contact velocity-impulse relation v = Bλ + b. The term b
Figure 1.17: The accuracy of the polyhedral cone approximation (light blue shading) increases as the number of tangential directions increases; the panels show n = 4, n = 6, and n = 8. The improvement in accuracy comes at the cost of an increasing problem size.
contains initial velocity terms; hence, v is the final velocity obtained by applying the impulse λ. Using all this, we can write the final matrix form of the contact model as

0 ≤ x ⊥ Ax + b ≥ 0    (1.147)

where

A = [ B_nn   B_nt   0 ]        [ λ_n ]        [ b_n ]
    [ B_tn   B_tt   e ] ,  x = [ λ_t ] ,  b = [ b_t ]
    [ μ     −e^T    0 ]        [  β  ]        [  0  ]

and B_nn = J_n M^{-1} J_n^T, B_nt = B_tn^T = J_n M^{-1} J_t^T, B_tt = J_t M^{-1} J_t^T, b_n = J_n M^{-1} F, and b_t = J_t M^{-1} F. Here, M is the generalized mass matrix, J_n and J_t are the normal and tangential parts of the contact Jacobian such that f_n = J_n^T λ_n and f_t = J_t^T λ_t, and F is a vector including external loads and gyroscopic forces. Observe that A is non-symmetric and has a zero-diagonal block. Further, the subblock B is symmetric and a positive semi-definite (PSD) matrix. As will be clear from Chapter 2, this implies that we cannot use PGS for the contact LCP model. Rather, we can use either Lemke's method or a Newton-based method, such as the one in Section 2.4.2.

Understanding how β works
To gain more familiarity with the complementarity modeling of the contact force problem, let us study a small 2D example. Without loss of generality and to keep things simple, we study two bodies that share exactly one point of contact. Let the unit-length contact normal at the point of contact be n, and let the unit-length orthogonal contact plane vector be t:

‖n‖ = 1    (1.148)
‖t‖ = 1    (1.149)
n · t = 0    (1.150)
The vectors n and t make up the coordinate axes of a contact frame. We may now use these coordinate axes to define the relative contact plane velocities

v_1 = t · v    (1.151)
v_2 = −t · v    (1.152)

and the velocity in the normal direction

v_n = n · v.    (1.153)
In this 2D world the friction force f_t may be written as

f_t = λ_1 t − λ_2 t    (1.154)

where

λ_1 > 0 ⇒ λ_2 = 0    (1.155)
λ_2 > 0 ⇒ λ_1 = 0    (1.156)

Here the λ's are the "components" of the friction force projected onto the positive span of the two vectors t and −t. From physics we know that if there is a normal force λ_n then the friction force f_t is bounded by the cone:

‖f_t‖ = λ_1 + λ_2 ≤ μλ_n    (1.157)

where μ > 0 is a positive constant known as the coefficient of friction. If there is sliding (i.e., v_1 or v_2 is non-zero), then the friction force f_t works against the sliding direction and attains the maximum possible value. If there is no sliding, then the friction force can have any value within (and on the surface of) the friction cone. So far, we have only defined appropriate coordinate systems and the concept of the friction cone. We are missing the relationship between sliding behavior and the friction force. To help us complete the modeling we introduce the scalar β ≥ 0. Using β we may write

λ_1 > 0 ⇒ (β + v_1) = 0    (1.158)
λ_2 > 0 ⇒ (β + v_2) = 0    (1.159)
(β + v_1) > 0 ⇒ λ_1 = 0    (1.160)
(β + v_2) > 0 ⇒ λ_2 = 0    (1.161)
That is, for all i,

0 ≤ (β + v_i) ⊥ λ_i ≥ 0    (1.162)

This is a critical part in understanding the modeling. Notice that, if there is sliding (thus implying that λ_1 or λ_2 is positive), then β estimates the magnitude of the sliding speed. Using β as an
estimate of the magnitude of the sliding speed, we rewrite Equation (1.157) (Coulomb's friction cone):

β > 0 ⇒ (μλ_n − λ_1 − λ_2) = 0    (1.163)
(μλ_n − λ_1 − λ_2) > 0 ⇒ β = 0    (1.164)

Again, we recover complementarity conditions

0 ≤ β ⊥ (μλ_n − λ_1 − λ_2) ≥ 0.    (1.165)

We see that the role of β is not only to measure if there is sliding, but also to decide the direction of the friction force.

Example 1.1
Let us look at a three-dimensional example. First, let the contact frame be spanned by the vectors

n   = [ 0   0  1 ]^T    (1.166)
t_1 = [ 1   0  0 ]^T    (1.167)
t_2 = [ 0   1  0 ]^T    (1.168)
t_3 = [ −1  0  0 ]^T    (1.169)
t_4 = [ 0  −1  0 ]^T    (1.170)

We choose a velocity at random,

v = [ 5  1  0 ]^T    (1.171)
The relative contact plane velocities are:

[ v_1 ]   [ v_{t_1} ]   [  5 ]
[ v_2 ] = [ v_{t_2} ] = [  1 ]    (1.172)
[ v_3 ]   [ v_{t_3} ]   [ −5 ]
[ v_4 ]   [ v_{t_4} ]   [ −1 ]

The non-negative constraints are then:

[ β + v_1 ]   [ β + 5    ]
[ β + v_2 ] = [ β + 1    ] ≥ 0    (1.173)
[ β + v_3 ]   [ β + (−5) ]
[ β + v_4 ]   [ β + (−1) ]
Remember the additional constraint β ≥ 0. This means that β ≥ 5 for these constraints to be true. For β = 5 we have

0 < (β + v_1) ⊥ λ_1 = 0    (1.174)
0 < (β + v_2) ⊥ λ_2 = 0    (1.175)
0 = (β + v_3) ⊥ λ_3 ≥ 0    (1.176)
0 < (β + v_4) ⊥ λ_4 = 0    (1.177)
The size of λ_3 is then determined from

0 ≤ β ⊥ (μλ_n − λ_1 − λ_2 − λ_3 − λ_4) ≥ 0    (1.178)
Since β > 0, we have

0 = (μλ_n − λ_1 − λ_2 − λ_3 − λ_4) = μλ_n − λ_3    (1.179)
Hence, the solution is λ_3 = μλ_n. What if we choose β > 5? Then

0 < (β + v_1) ⊥ λ_1 = 0    (1.180)
0 < (β + v_2) ⊥ λ_2 = 0    (1.181)
0 < (β + v_3) ⊥ λ_3 = 0    (1.182)
0 < (β + v_4) ⊥ λ_4 = 0    (1.183)
Since β > 0, we get

0 = (μλ_n − λ_1 − λ_2 − λ_3 − λ_4) = μλ_n    (1.184)
This is only possible if λ_n = 0; that means we have a separating contact. The insight gained:

• If λ_n > 0 then there is a unique solution for β.
• If λ_n = 0 then there are infinitely many solutions for β.

The interactive model
There exists an alternative complementarity problem formulation which drops the β-part of the model by ignoring the principle of maximum dissipation. The resulting model is no longer an LCP, but one could apply a splitting-based PGS method for this alternative model [Erleben, 2007]. This is similar to the method we present in Section 2.2.1. The alternative model is physically flawed in the sense that friction directions are decoupled, and no convergence guarantees can be given for the PGS-type method [Silcowitz et al., 2009, 2010b]. We begin by reformulating Coulomb's 2D friction model as two 1D models, corresponding to the two vectors spanning the tangential plane. For each direction of the two vectors t_1 and t_2, the following must be enforced:

v_{t_i} < 0 ⇒ λ_{t_i} = μλ_n    (1.185a)
v_{t_i} > 0 ⇒ λ_{t_i} = −μλ_n    (1.185b)
v_{t_i} = 0 ⇒ −μλ_n ≤ λ_{t_i} ≤ μλ_n    (1.185c)
Figure 1.18: Outer pyramidal cone (light blue shading) spanned by the two independent vectors t_1 and t_2. This model tends to overshoot the friction force.
This implies a decoupling of the two directions, leading to an outer pyramidal approximation of the true friction cone, Figure 1.18. Let v = [v_n  v_{t_1}  v_{t_2}]^T, λ = [λ_n  λ_{t_1}  λ_{t_2}]^T, and J = [J_n^T  J_t^T]^T, and rewrite the relative velocity

v = Ju = (J M^{-1} J^T) λ + J M^{-1} F_ext = Aλ + b    (1.186)

where A = J M^{-1} J^T and b = J M^{-1} F_ext.
We introduce the splitting of the relative velocities, such that

v = v⁺ − v⁻    (1.187)

where

v⁺ ≥ 0,  v⁻ ≥ 0,  and  (v⁺)^T (v⁻) = 0    (1.188)
and the limit functions l(λ) and u(λ)

l(λ_i) = 0 if i ∈ N,  and  l(λ_i) = −μλ_n if i ∈ T    (1.189)
u(λ_i) = ∞ if i ∈ N,  and  u(λ_i) = μλ_n if i ∈ T    (1.190)
where N and T are the index sets for normal and tangential forces, respectively. We can write the full contact problem as

v⁺ − v⁻ = Aλ + b    (1.191a)
v⁺ ≥ 0    (1.191b)
v⁻ ≥ 0    (1.191c)
λ − l(λ) ≥ 0    (1.191d)
u(λ) − λ ≥ 0    (1.191e)
(v⁺)^T (λ − l(λ)) = 0    (1.191f)
(v⁻)^T (u(λ) − λ) = 0    (1.191g)
(v⁺)^T (v⁻) = 0    (1.191h)
This looks very similar to the BLCP in (1.82), with the exception of the upper and lower bounds. In this model, these are linear functions of the variable λ. Strictly speaking, this makes it an NCP model, which in this case cannot be converted to an LCP; see Section 1.2.4. In an iterative solver, we can ignore the variable bounds by using values from iteration t − 1 to fixate the bounding functions at iteration t.
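The bound-fixation trick described above can be sketched in a few lines. This is our own illustrative sketch, not code from the text: the 2×2 system, the coefficient of friction `mu`, and the variable names are made up, with index 0 playing the role of the normal impulse and index 1 the tangential impulse.

```python
# Sketch of bound fixation: at iteration t the tangential bounds
# -mu*lambda_n <= lambda_t <= mu*lambda_n are frozen using lambda_n
# from iteration t-1, so each sweep solves an ordinary BLCP.

INF = float("inf")
mu = 0.5
A = [[4.0, 1.0], [1.0, 3.0]]   # illustrative system, not from the text
b = [-1.0, -2.0]
lam = [0.0, 0.0]

for sweep in range(100):
    lo = [0.0, -mu * lam[0]]   # bounds fixated with lambda_n from last sweep
    hi = [INF,  mu * lam[0]]
    for i in range(2):
        r = b[i] + sum(A[i][j] * lam[j] for j in range(2))
        lam[i] = min(hi[i], max(lo[i], lam[i] - r / A[i][i]))
```

For this data the iteration settles on a friction impulse clamped to the cone boundary, lam = [2/9, 1/9], illustrating how the frozen bounds lag one sweep behind but still converge.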
CHAPTER 2

Numerical Methods

There are many different methods for solving LCPs. In this chapter we focus on methods that are robust, efficient, and fast. The methods are all tailored to solve the type of LCPs found in physics-based animation.
2.1 PIVOTING METHODS

LCPs are combinatorial by nature. Pivoting methods exploit this by guessing the sets of active variables and testing various combinations. In principle, if we test all possible guesses, the result is a naive direct-enumeration method that finds all solutions. The pivoting methods have some similarity with active set methods for constrained QP problems [Nocedal and Wright, 1999]. In the case of symmetric positive semidefinite (PSD) A-matrices, the problem could be restated as a QP problem, rather than implementing the pivoting method. This is especially beneficial due to the availability of QP solvers such as MOSEK, CPLEX, LANCELOT, SQP, SNOPT, and many more. This relationship was exploited in the fluid paper [Gerszewski and Bargteil, 2013], which implements the modified proportioning with reduced-gradient projections (MPRGP) method. Before we start to derive a pivoting method, let us first define a few index sets. First, the set of all indices

I ≡ {1, …, n}    (2.1)

where n is the dimensionality of the LCP. We divide the index set I into two sets: the active set A, and the free set F. The index i ∈ I belongs to the active set when x_i > 0, that is
A ≡ { i | x_i > 0 }    (2.2)

Let y_i = (Ax + b)_i; the index i belongs to the free set when y_i > 0,

F ≡ { i | y_i > 0 }    (2.3)
By naming the sets active and free we are hinting at the relation between pivoting methods and active set methods for solving quadratic programs (QPs). The word "active" implies that the linear constraint 0 = (Ax + b)_i is activated. The word "free" means that x_i is not constrained. The important part is not the actual names of these sets; they merely provide a labeling to differentiate between the two cases.
Since the complementarity constraint requires that x_i y_i = 0, it follows that

A ∩ F = ∅    (2.4)

For now, we will assume strict complementarity; this means that there is no index i such that both x_i = 0 and y_i = 0. Strict complementarity implies that

A ∪ F = I    (2.5)

Example 2.1
To understand why these sets are useful, let us look at a four-dimensional example:

[ y_1 ]   [ A_1,1  A_1,2  A_1,3  A_1,4 ] [ x_1 ]   [ b_1 ]
[ y_2 ] = [ A_2,1  A_2,2  A_2,3  A_2,4 ] [ x_2 ] + [ b_2 ]    (2.6)
[ y_3 ]   [ A_3,1  A_3,2  A_3,3  A_3,4 ] [ x_3 ]   [ b_3 ]
[ y_4 ]   [ A_4,1  A_4,2  A_4,3  A_4,4 ] [ x_4 ]   [ b_4 ]

Assume that the solution is found to be A ≡ {1, 4} and F ≡ {2, 3}; then

[ 0   ]   [ A_1,1  A_1,2  A_1,3  A_1,4 ] [ x_1 ]   [ b_1 ]
[ y_2 ] = [ A_2,1  A_2,2  A_2,3  A_2,4 ] [ 0   ] + [ b_2 ]    (2.7)
[ y_3 ]   [ A_3,1  A_3,2  A_3,3  A_3,4 ] [ 0   ]   [ b_3 ]
[ 0   ]   [ A_4,1  A_4,2  A_4,3  A_4,4 ] [ x_4 ]   [ b_4 ]

Notice how x effectively nulls out the columns in A whose indices are in the set F, meaning we can write:

[ 0   ]   [ A_1,1  0  0  A_1,4 ] [ x_1 ]   [ b_1 ]
[ y_2 ] = [ A_2,1  0  0  A_2,4 ] [ 0   ] + [ b_2 ]    (2.8)
[ y_3 ]   [ A_3,1  0  0  A_3,4 ] [ 0   ]   [ b_3 ]
[ 0   ]   [ A_4,1  0  0  A_4,4 ] [ x_4 ]   [ b_4 ]

The solution space of Equation (2.8) is equivalent to this reduced problem:

[ 0  0 ]           [ A_1,1  A_1,4 ]           [ b_1 ]
[ 1  0 ] [ y_2 ] = [ A_2,1  A_2,4 ] [ x_1 ] + [ b_2 ]    (2.9)
[ 0  1 ] [ y_3 ]   [ A_3,1  A_3,4 ] [ x_4 ]   [ b_3 ]
[ 0  0 ]           [ A_4,1  A_4,4 ]           [ b_4 ]
  I_F     y_F            A_A          x_A        b

Matrix I_F is an identity matrix with columns nulled out analogously to A_A, using the indices in the set F. Equation (2.9) can be rewritten as a more compact equation:

[ I_F  −A_A ] [ y_F ] = b    (2.10)
              [ x_A ]
The solution to this equation, assuming that y_F > 0 and x_A > 0, is equivalent to that of the LCP. If we let C = [I_F  −A_A] and s = [y_F  x_A]^T, we can think of the LCP as a linear programming (LP) problem of the form

Cs = b  subject to  s ≥ 0    (2.11)

The matrix C is called a complementarity matrix. Since C is composed of columns from I and −A, there will be 2^n different complementarity matrices and therefore potentially 2^n LPs to solve. This means a worst-case time complexity of O(n^3 2^n), which is not computationally efficient. Pivoting methods are able to find the exact solution to an LCP, but in order to do so, we need to assemble the full A matrix. This assembly is computationally expensive; we will see how other types of methods get around this in later sections.
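The combinatorial search over all 2^n complementarity matrices can be sketched as a naive direct-enumeration solver. This is our own sketch meant to illustrate the O(n^3 2^n) search, not an efficient method; the helper names are ours.

```python
# Naive direct enumeration: for each guess of the active set, build the
# complementarity matrix C from columns of I (free) and -A (active),
# solve C s = b, and accept the guess if s >= 0.

def solve_linear(M, rhs):
    """Gaussian elimination with partial pivoting; returns None if singular."""
    n = len(rhs)
    M = [row[:] for row in M]
    rhs = rhs[:]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[p][col]) < 1e-12:
            return None
        M[col], M[p] = M[p], M[col]
        rhs[col], rhs[p] = rhs[p], rhs[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            rhs[r] -= f * rhs[col]
    s = [0.0] * n
    for r in range(n - 1, -1, -1):
        s[r] = (rhs[r] - sum(M[r][c] * s[c] for c in range(r + 1, n))) / M[r][r]
    return s

def lcp_enumerate(A, b):
    n = len(b)
    for mask in range(2 ** n):  # bit i set <=> index i guessed active
        C = [[-A[r][c] if (mask >> c) & 1 else (1.0 if r == c else 0.0)
              for c in range(n)] for r in range(n)]
        s = solve_linear(C, b)
        if s is not None and all(v >= -1e-9 for v in s):
            # s holds y_i for free indices and x_i for active ones.
            return [s[i] if (mask >> i) & 1 else 0.0 for i in range(n)]
    return None

x = lcp_enumerate([[2.0, 1.0], [1.0, 2.0]], [-1.0, -1.0])
```

For this small example the accepted guess is the all-active set, giving x = [1/3, 1/3] with Ax + b = 0.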
2.1.1 DIRECT METHODS FOR SMALL-SIZED PROBLEMS

If we take a geometric approach, we can think of Equation (2.11) as a test of whether b is in the positive cone of C. That is,

b ∈ { Cs | s ≥ 0 }    (2.12)

This presents a different geometric tool for solving the LPs. Let us illustrate this with a simple 2D example. First, we introduce the specific column notation,

I = [ 1  0 ] = [ e_1  e_2 ]    (2.13)
    [ 0  1 ]

A = [ a_1,1  a_1,2 ] = [ a_1  a_2 ]    (2.14)
    [ a_2,1  a_2,2 ]
In 2D, we can test whether b is in the positive cone of a complementarity matrix C simply by testing if b lies within the span of the two column vectors of C. This simple geometric approach is illustrated in Figure 2.1.

Figure 2.1: Visual inspection of which complementarity cone b belongs to. When b intersects an angle, the pair of vectors corresponds to a complementarity cone solution. In the top row there are no solutions, in the middle row there is exactly one solution, and the bottom row shows exactly two solutions.

We can go further with this geometric idea and define a solver specifically for 2D LCPs based on testing angle intersections. Let m^1 = [0  0]^T be an initial bitmask. In the k-th iteration we will have that

m_i^k = 1 if i ∈ A, and m_i^k = 0 if i ∈ F    (2.15)

This bitmask is used incrementally to construct all possible complementarity cones and test for intersection with the vector b. A full outline is shown in Algorithm 1.

Algorithm 1 Direct method for finding a solution to any 2D LCP.

Input: a_1, a_2, b
1:  for k = 1 to 4 do
2:    m ← k − 1
3:    c_1 ← e_1, c_2 ← e_2
4:    if m ∧ 0b01 then
5:      c_1 ← −a_1
6:    end if
7:    if m ∧ 0b10 then
8:      c_2 ← −a_2
9:    end if
10:   if det(c_1, b) · det(c_1, c_2) ≥ 0 and det(b, c_2) · det(c_1, c_2) ≥ 0 then
11:     return x ← [c_1  c_2]^{-1} b
12:   end if
13: end for
14: return no solution exists

This geometric approach may be extended to 3D LCPs too. Given a complementarity cone C = [c_1  c_2  c_3], assume without loss of generality that det(C) > 0, and simply verify that b belongs to the spherical triangle defined by c_1, c_2, and c_3. That is,

(c_1 × c_2) · b ≥ 0    (2.16)
(c_2 × c_3) · b ≥ 0    (2.17)
(c_3 × c_1) · b ≥ 0    (2.18)
must all hold. e assumption that det .C/ > 0 ensures that the vertices of the spherical triangle are in counter clockwise order. If not, swap the order of any two vertices before doing the angle inclusion test. However, remember to swap back before solving for x. e algorithm 1 can easily be extended to cover the 3D case too. However, this approach quickly gets too complicated for higher dimensions. ese small dimensional direct LCP solvers are perfect building blocks in the blocked splitting methods which we discuss later in Section 2.2.3.
2.1.2 INCREMENTAL PIVOTING "BARAFF STYLE"

Baraff introduced an incremental pivoting method [Baraff, 1994]. This method incrementally builds up the index sets while keeping the complementarity constraints as invariants. In each iteration the method computes A_AA^{-1}. Whenever A is symmetric positive definite (PD), A_AA^{-1} exists. Baraff reported that, even when A is PSD (which is often the case in practice due to redundant contact constraints), he was able to compute A_AA^{-1}. Baraff proves that the inner pivoting loop of his method can only be performed a finite number of times, as the index set A is never
repeated. Thus, the cost of the inner loop is at most that of computing A_AA^{-1}, which is O(n^3). The outer loop of this method runs for at most n iterations, yielding a worst-case time complexity of O(n^4). Noticing that the pivot step only swaps one index and therefore only changes the size of A by one, it is clear that an incremental factorization method can be used for computing A_AA^{-1}. There exist incremental factorizations running in O(n^2) time complexity. Thus, a more realistic overall time complexity for the pivoting method is O(n^3). To explain Baraff's method, we first need to define yet another index set, the set of unprocessed indices,

U ≡ I \ {F ∪ A}    (2.19)
Initially, index sets A and F are both empty, meaning that U ≡ I. In each iteration, a new index, j ∈ U, will be selected from the current set of unprocessed indices, and then added to either the active set A or the free set F. Throughout, the complementarity conditions are kept as invariants. We use superscript k to indicate the iteration number. For any unprocessed index j ∈ U, we assume x_j^k = 0. Initially in the k-th iteration we use the partitioning

[ y_A^k ]   [ A_AA  A_AF  A_AU ] [ x_A^k ]   [ b_A ]
[ y_F^k ] = [ A_FA  A_FF  A_FU ] [ x_F^k ] + [ b_F ]    (2.20)
[ y_U^k ]   [ A_UA  A_UF  A_UU ] [ 0     ]   [ b_U ]

We choose the index from U that minimizes y_j^k, as this corresponds to the index in U with the most violated complementarity constraint. If the chosen index j gives us y_j^k ≥ 0, we terminate the method, as this would indicate that all the remaining unprocessed indices trivially satisfy the complementarity conditions. If no unique feasible minimum exists, we choose a minimizing index at random. In the k-th iteration, we use the partitioning and keep the complementarity conditions as invariants, implying y_A^{k+1} = 0 and x_F^{k+1} = 0, so

[ 0         ]   [ A_AA  A_AF  A_Aj ] [ x_A^{k+1} ]   [ b_A ]
[ y_F^{k+1} ] = [ A_FA  A_FF  A_Fj ] [ 0         ] + [ b_F ]    (2.21)
[ y_j^{k+1} ]   [ A_jA  A_jF  A_jj ] [ x_j^{k+1} ]   [ b_j ]

The changes in y_F and x_A with respect to x_j^{k+1} > 0 are given by

x_A^{k+1} = x_A^k + Δx_A x_j^{k+1}    (2.22a)
y_F^{k+1} = y_F^k + Δy_F x_j^{k+1}    (2.22b)
y_j^{k+1} = y_j^k + Δy_j x_j^{k+1}    (2.22c)

where

Δx_A = −A_AA^{-1} A_Aj    (2.23)
Δy_F = A_Fj − A_FA A_AA^{-1} A_Aj    (2.24)
Δy_j = A_jj − A_jA A_AA^{-1} A_Aj    (2.25)
The idea is to increase x_j^{k+1} as much as possible without violating any of the complementarity constraints. Thus, x_j^{k+1} is limited by the blocking constraint value sets

B_A ≡ { −x_q^k / Δx_q | q ∈ A ∧ Δx_q < 0 }    (2.26a)
B_F ≡ { −y_r^k / Δy_r | r ∈ F ∧ Δy_r < 0 }    (2.26b)

If no blocking constraints exist, x_j^{k+1} is unbounded by A and F. Thus, each partition results in the bounds

x_j^A = min B_A if B_A ≠ ∅, and x_j^A = ∞ if B_A = ∅    (2.27a)
x_j^F = min B_F if B_F ≠ ∅, and x_j^F = ∞ if B_F = ∅    (2.27b)
x_j^j = −y_j^k / Δy_j if y_j^k < 0, and x_j^j = 0 if y_j^k ≥ 0    (2.27c)

The solution for the value of x_j^{k+1} will be the minimum bound. If a blocking constraint is found from B_A, a pivot operation is initiated, moving the blocking index from A to F, and vice versa if a blocking constraint is found in B_F. The blocking constraint sets change as the active and free index sets A and F are changed by a pivoting operation. This implies that x_j^{k+1} could be increased further after a pivoting step. Thus, we will continue to look for blocking constraints and perform pivoting on them until no more blocking constraints exist. Depending on the final value of x_j^{k+1}, index j is assigned to either F or A. Algorithm 2 shows the full pivoting method in pseudocode.
2.2 PROJECTION OR SWEEPING METHODS

In this section we focus on a popular "class" of methods commonly referred to as projected Gauss-Seidel methods, often simply PGS methods. We will derive the PGS method from two different angles. First, we use the idea of splitting up our initial problem to get a group of more manageable problems that we can solve iteratively. Second, we return to QPs and show how we can reformulate QPs so they can be solved using a PGS-type method. As we shall see, these first two approaches are not without limits, so we reapply the idea of splitting in a recursive fashion to derive the blocked Gauss-Seidel method and the staggering method, often used for contact problems. Blocking and staggering are general concepts that can be applied in other problem domains, as they serve as a general scheme for "combining" different solvers into one overall solver.
Algorithm 2 Pivoting method.

Input: A, b
1:  A ← ∅, F ← ∅, U = {1, …, n}
2:  while U ≠ ∅ do
3:    j = arg min_{i∈U} y_i^k
4:    U ← U \ {j}
5:    while pivoting do
6:      x_j^{k+1} ← min(x_j^A, x_j^F, x_j^j)
7:      q ← blocking constraint index
8:      if q ∈ B_A then
9:        F ← F ∪ {q}
10:       A ← A \ {q}
11:     else if q ∈ B_F then
12:       A ← A ∪ {q}
13:       F ← F \ {q}
14:     end if
15:   end while
16:   if x_j^{k+1} > 0 then
17:     A ← A ∪ {j}
18:   else
19:     F ← F ∪ {j}
20:   end if
21: end while
The PGS type of methods are interesting building blocks for many other types of numerical methods, such as subspace minimization [Silcowitz et al., 2010a] and the non-smooth non-linear conjugate gradient (NNCG) method [Silcowitz et al., 2010b].
2.2.1 SPLITTING METHODS

Before we start deriving the PGS method, we begin with the more general concept of splitting methods. Splitting methods are well known in numerical optimization, applied in methods like Gauss-Seidel, Jacobi, and successive over-relaxation (SOR). Splitting methods are iterative methods which, unlike the pivoting methods described previously, are practically incapable of
computing the exact solution to the LCP. Iterative methods approximate the solution, but do so in a much more efficient way, both computationally and storage-wise. Recall the LCP definition

Ax + b ≥ 0    (2.28a)
x ≥ 0    (2.28b)
x^T (Ax + b) = 0    (2.28c)

Splitting methods start with a splitting of the A matrix, such that A = M − N; we leave M and N unspecified for now. Using the new definition of A, Equation (2.28) becomes

Mx − Nx + b ≥ 0    (2.29a)
x ≥ 0    (2.29b)
x^T (Mx − Nx + b) = 0    (2.29c)
Let us assume that we have an iterative method, such that x^k → x^{k+1} for k → ∞ (for now we will not care which other assumptions must be fulfilled to ensure this property). Using the knowledge of converging to an accumulation/limit point, we can rewrite (2.29) as a fixed-point formulation

Mx^{k+1} − Nx^k + b ≥ 0    (2.30a)
x^{k+1} ≥ 0    (2.30b)
(x^{k+1})^T (Mx^{k+1} − Nx^k + b) = 0    (2.30c)

In the k-th iteration of the iterative method, we let c^k = b − Nx^k, and the LCP (2.30) becomes

Mx^{k+1} + c^k ≥ 0    (2.31a)
x^{k+1} ≥ 0    (2.31b)
(x^{k+1})^T (Mx^{k+1} + c^k) = 0    (2.31c)
This is a fixed-point formulation, for which we hope that, for a suitable choice of M and N, the complementarity subproblem (2.31) might be easier to solve than the original problem (2.28). Imagine, for instance, letting M be the diagonal of A. This choice decouples all variables and we have a subproblem of n independent 1D LCPs. The general splitting method can be summarized as:

Step 1 Initialization: set k = 0 and choose an arbitrary nonnegative x^0 ≥ 0.
Step 2 Given x^k ≥ 0, solve the LCP (2.31).
Step 3 If x^{k+1} satisfies your termination criteria then stop; otherwise set k ← k + 1 and go to step 2.
The splitting is often chosen such that M is a Q-matrix, meaning that M belongs to the class of matrices for which the corresponding LCP has a solution for all vectors c^k. Clearly, if x^{k+1} is a solution to (2.31) and we have x^{k+1} = x^k, then, by substitution into the subproblem given by (2.31), we see that x^{k+1} is a solution to the original problem (2.28). We use the minimum map reformulation on the complementarity subproblem

min(x^{k+1}, Mx^{k+1} + c^k) = 0    (2.32)

Subtract x^{k+1} and multiply by minus one,

max(0, x^{k+1} − Mx^{k+1} − c^k) = x^{k+1}    (2.33)

Here we rediscover a fixed-point formulation. Let us perform a case-by-case analysis of the i-th component. If we assume

(x^{k+1} − Mx^{k+1} − c^k)_i < 0    (2.34)

then we must have x_i^{k+1} = 0. Otherwise our assumption is false and we must have

(x^{k+1} − Mx^{k+1} − c^k)_i = x_i^{k+1}    (2.35)

That is,

(Mx^{k+1})_i = −c_i^k    (2.36)

For a suitable choice of M, we can rewrite (Mx^{k+1})_i = −c_i^k as

x_i^{k+1} = −(M^{-1} c^k)_i    (2.37)

Not all splittings will make this rewrite possible. A trivial example that allows it is to let M be the diagonal of A. Other common choices that allow this rewrite are listed in Table 2.1. Now, back-substitution of c^k = b − Nx^k gives us

x_i^{k+1} = (M^{-1} (Nx^k − b))_i    (2.38)

Combining it all, we have derived the closed-form solution for the complementarity subproblem,

x^{k+1} = max(0, M^{-1} (Nx^k − b))    (2.39)

Iterative schemes like these are often termed projection methods. The reason for this is that, if we introduce the vector z^k = M^{-1} (Nx^k − b), then

x^{k+1} = max(0, z^k)    (2.40)

That is, the (k+1)-th iterate is obtained by projecting the vector z^k onto the positive octant. This is illustrated in Figure 2.2. The full algorithm, based on Equation (2.40), is shown in Algorithm 3.
Figure 2.2: Projection of z^k onto the positive octant, giving x^{k+1}.
Algorithm 3 Generic projection method given matrix splitting A = M − N.

Input: K, x^1, M, N, b
1: for k = 1 to K do
2:   z^k ← M^{-1} (Nx^k − b)
3:   x^{k+1} ← max(0, z^k)
4: end for
Table 2.1: Splittings for the three methods: Jacobi, projected Gauss-Seidel (PGS), and projected successive over-relaxation (PSOR). The matrix D is the diagonal part of the original A matrix, U is the strict upper part, and L is the strict lower part. The relaxation parameter λ should be chosen such that 0 < λ < 2.

Method  | M       | N
Jacobi  | D       | −(L + U)
PGS     | L + D   | −U
PSOR    | D + λL  | (1 − λ)D − λU
It is a good idea to use a clever splitting such that the inversion of M is computationally cheap. Table 2.1 lists three of the more popular splittings. It should be noted that A must at least have nonzero diagonal values for these splittings to work. As far as we know, there exist no convergence proofs in the general case of A being arbitrary. However, given appropriate assumptions on A, such as the update being a contraction mapping or A being symmetric, global convergence can be proven [Cottle et al., 1992, Murty, 1988]. In Section 2.2.2 we take a different approach to deriving the same iterative schemes. There, it follows by construction that,
if A is symmetric and PSD, then the splitting schemes will always converge. For non-symmetric matrices one may experience divergence. To see how to solve Equation (2.40) in an actual implementation, let us look at the specific case of the PSOR splitting as given in Table 2.1. The principle is a for-loop that sweeps over the vector components and updates the x-vector in place. When we use the PSOR splitting, it is useful to note that

M − N = D + λL − ((1 − λ)D − λU)    (2.41)
      = λ(L + D + U)    (2.42)
      = λA    (2.43)

This is equivalent to a relaxation of the LCP, replacing Ax + b ≥ 0 with λ(Ax + b) ≥ 0. For now, we assume λ is a positive real number. The relaxed LCP is then,

λ(Ax + b) ≥ 0    (2.44a)
x ≥ 0    (2.44b)
λ x^T (Ax + b) = 0    (2.44c)

The factor λ can be interpreted as a scaling of the term Ax + b. We now rewind the derivation of the projection method to Equation (2.36). For the specific case of PSOR, this results in

((λL + D) x^{k+1})_i = (((1 − λ)D − λU) x^k − λb)_i    (2.45)
Next, we replace the matrix multiplications with index notation

λ Σ_{j=1}^{n} L_ij x_j^{k+1} + Σ_{j=1}^{n} D_ij x_j^{k+1} = (1 − λ) Σ_{j=1}^{n} D_ij x_j^k − λ Σ_{j=1}^{n} U_ij x_j^k − λ b_i    (2.46)
We can use the fill pattern of the matrices L, D, and U to optimize the summation operations:

λ Σ_{j=1}^{i−1} L_ij x_j^{k+1} + D_ii x_i^{k+1} = (1 − λ) D_ii x_i^k − λ Σ_{j=i+1}^{n} U_ij x_j^k − λ b_i    (2.47)
Then we isolate x_i^{k+1} on the left-hand side of the equation

x_i^{k+1} = ( −λ (b_i + Σ_{j=1}^{i−1} L_ij x_j^{k+1} + D_ii x_i^k + Σ_{j=i+1}^{n} U_ij x_j^k) + D_ii x_i^k ) / D_ii    (2.48)

Reducing this slightly, we get

x_i^{k+1} = x_i^k − λ (b_i + Σ_{j=1}^{i−1} L_ij x_j^{k+1} + D_ii x_i^k + Σ_{j=i+1}^{n} U_ij x_j^k) / D_ii    (2.49)
We assume a sweep over the indices i in increasing order. Due to this particular sweep order, we will have solved for all x_j^{k+1} with j < i when we start updating the i-th index. Hence, there are no unknowns on the right-hand side of Equation (2.49). We can exploit this knowledge to allow in-place updating of the x-iterate:

x_i ← x_i − λ (b_i + Σ_{j=1}^{i−1} L_ij x_j + D_ii x_i + Σ_{j=i+1}^{n} U_ij x_j) / D_ii    (2.50)
Noting that

Σ_{j=1}^{i−1} L_ij x_j + D_ii x_i + Σ_{j=i+1}^{n} U_ij x_j = Σ_{j=1}^{n} A_ij x_j

and defining r = b + Ax, our derivation is reduced to

x_i ← x_i − λ r_i / D_ii    (2.51)
All that remains is to substitute back into the rewritten minimum map reformulation (2.40). We obtain the final update scheme

x_i ← max(0, x_i − λ r_i / D_ii)  for i = 1 to n    (2.52)

Notice that the sweep order of the i-th index is important. A pseudocode version of this practical approach can be found in Algorithm 5. Note that PGS is the special case where λ = 1. We leave it to the reader to derive the Jacobi equivalent. On a side note, one should observe that the reverse sweep order, i = n to 1, is possible too. The original fixed-point formulation (2.30) will then be

Mx^k − Nx^{k+1} + b ≥ 0    (2.53a)
x^{k+1} ≥ 0    (2.53b)
(x^{k+1})^T (Mx^k − Nx^{k+1} + b) = 0    (2.53c)

All steps in the derivations are similar for this version of the fixed-point problem. However, instead of the forward PSOR update rule (2.52) we now have a backward update rule

x_i ← max(0, x_i − λ r_i / D_ii)  for i = n to 1    (2.54)

In some applications it can be convenient to let a full forward sweep be followed by a full backward sweep. This variant of the projection method is known as a symmetric projection method.
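The forward update rule (2.52) can be sketched as an in-place sweep; PGS is recovered with λ = 1. The function name and test problem are ours.

```python
# PSOR sweep: for each index i, compute the residual r_i = (A x + b)_i
# with the freshest values of x and project the relaxed update onto x_i >= 0.

def psor(A, b, x, lam=1.0, sweeps=100):
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):  # forward sweep; i = n..1 gives the backward rule (2.54)
            r = b[i] + sum(A[i][j] * x[j] for j in range(n))
            x[i] = max(0.0, x[i] - lam * r / A[i][i])
    return x

x = psor([[4.0, 1.0], [1.0, 3.0]], [1.0, -2.0], [0.0, 0.0])
```

For this data the first component is blocked at the constraint, x_1 = 0 with y_1 > 0, while the second settles at x_2 = 2/3 with y_2 = 0.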
The idea of symmetry applies to both PSOR and PGS, but is pointless for Jacobi. We omit pseudocode for these variants of the basic algorithm. We note that in some applications the sweep order may cause side-effects, and the symmetric variant can potentially alleviate this order-dependency issue to some extent. In the splitting algorithms derived so far, we have applied a simple maximum iteration count to guard against infinite looping. For many real-time applications, such a termination criterion will be sufficient. However, for more accurate applications it could result in inaccurate results. It is quite easy to extend the algorithm to use merit functions for both absolute and relative convergence testing, or for detecting divergence. Convergence of projection methods requires that the update rule, such as (2.52), is a contraction mapping. As an example, we can use the infinity norm of the change in the iterate as a merit function. The infinity norm merit function is defined as

θ_∞(Δx) ≡ ‖Δx‖_∞ = max_i |Δx_i|    (2.55)

This particular norm is very attractive, as it is easily calculated during the sweeping of the indices, as shown in Algorithm 4.

Algorithm 4 Projection method testing for contraction and divergence, using the infinity norm of the iterate change as merit function.

Input: K, x, A, b, ε_relative > 0
1:  δ ← ∞
2:  for k = 1 to K do
3:    δ′ ← δ, δ ← 0
4:    for all i do
5:      x_i ← update scheme
6:      δ ← max(δ, |Δx_i|)
7:    end for
8:    if δ > δ′ then
9:      return divergence
10:   end if
11:   if δ < ε_relative then
12:     return relative convergence
13:   end if
14: end for

Another possibility for convergence testing is to use a measurement of
2.2. PROJECTION OR SWEEPING METHODS
the complementarity condition as a merit function:

$$\psi_{\text{compl}}(\mathbf{x}) \equiv \left|\mathbf{x}^T(\mathbf{A}\mathbf{x} + \mathbf{b})\right| \tag{2.56}$$

In the case of $\mathbf{x} = \mathbf{0}$, it is important to also make sure that the constraint $\mathbf{A}\mathbf{x} + \mathbf{b} \geq 0$ is not violated. Any of the complementarity reformulations can in principle be used as a merit function, but be careful not to choose a merit function that dominates the computational cost of the iterations of the projection method itself. The cost of a projection-method iteration is $O(nk)$, where $k \leq n$ is the maximum number of non-zeros in any given row of $\mathbf{A}$.

Extending to the Boxed Linear Complementarity Problem
Our starting point for deriving a splitting method for the BLCP is the minimum map reformulation of the BLCP from Section 1.2.5. From this, we can write the $i$th component as follows

$$\min(u_i - x_i,\; \max(l_i - x_i,\; y_i)) = 0 \tag{2.57}$$

By adding $x_i$ we get a fixed-point formulation

$$\min(u_i,\; \max(l_i,\; x_i - (\mathbf{A}\mathbf{x} + \mathbf{b})_i)) = x_i \tag{2.58}$$

What follows now is quite similar to what we derived in the LCP case. Once again, we introduce the splitting $\mathbf{A} = \mathbf{M} - \mathbf{N}$ and the iteration index $k$. Then we define $\mathbf{c}^k = \mathbf{b} - \mathbf{N}\mathbf{x}^k$. Using this we have

$$\min(u_i,\; \max(l_i,\; (\mathbf{x}^{k+1} - \mathbf{M}\mathbf{x}^{k+1} - \mathbf{c}^k)_i)) = x_i^{k+1} \tag{2.59}$$

When $\mathbf{x}^k$ converges, then (2.59) is equivalent to (2.57). Next we perform a case-by-case analysis. Three cases are possible:

$$(\mathbf{x}^{k+1} - \mathbf{M}\mathbf{x}^{k+1} - \mathbf{c}^k)_i < l_i \;\Rightarrow\; x_i^{k+1} = l_i \tag{2.60a}$$
$$(\mathbf{x}^{k+1} - \mathbf{M}\mathbf{x}^{k+1} - \mathbf{c}^k)_i > u_i \;\Rightarrow\; x_i^{k+1} = u_i \tag{2.60b}$$
$$l_i \leq (\mathbf{x}^{k+1} - \mathbf{M}\mathbf{x}^{k+1} - \mathbf{c}^k)_i \leq u_i \;\Rightarrow\; x_i^{k+1} = (\mathbf{x}^{k+1} - \mathbf{M}\mathbf{x}^{k+1} - \mathbf{c}^k)_i \tag{2.60c}$$

Case (2.60c) reduces to

$$(\mathbf{M}\mathbf{x}^{k+1})_i = -c_i^k \tag{2.61}$$

which, for a suitable choice of $\mathbf{M}$ and back substitution of $\mathbf{c}^k$, gives

$$x_i^{k+1} = \left(\mathbf{M}^{-1}(\mathbf{N}\mathbf{x}^k - \mathbf{b})\right)_i \tag{2.62}$$

Thus, our iterative splitting scheme becomes

$$\min\left(u_i,\; \max\left(l_i,\; \left(\mathbf{M}^{-1}(\mathbf{N}\mathbf{x}^k - \mathbf{b})\right)_i\right)\right) = x_i^{k+1} \tag{2.63}$$
[Figure 2.3 plot: Error ($10^{-7}$ to $10^{-1}$, log scale) versus Iterations (0 to 100); legend: PGS, Impulse, Coordinate, Greedy Staggered, Extreme Staggered.]
Figure 2.3: Convergence rates taken from [Poulsen et al., 2010]. Different heuristics are applied for permuting the order of variables inside the PGS loop. This specific study is for dense structured rigid body scenes, such as brick walls. Notice the non-monotone behavior.
Now let $\mathbf{x}' = \mathbf{M}^{-1}(\mathbf{N}\mathbf{x}^k - \mathbf{b})$; then we have the projection method

$$\mathbf{x}^{k+1} = \min(\mathbf{u},\; \max(\mathbf{l},\; \mathbf{x}')) \tag{2.64}$$

where the $(k+1)$th iterate is obtained by projecting the vector $\mathbf{x}'$ onto the box given by $\mathbf{l}$ and $\mathbf{u}$. Valid splittings of $\mathbf{A}$ are the same as in Table 2.1. By comparing Equation (2.63) to (2.40), we notice the relationship between LCP and BLCP for splitting methods. In [Poulsen et al., 2010], a PGS variant of the above splitting is applied to a BLCP model of the interactive contact model from Section 1.3.5. In this work, $\mathbf{l}$ and $\mathbf{u}$ are affine functions of $\mathbf{x}$. The algorithm framework we outlined can easily deal with this, simply by updating the $\mathbf{l}$ and $\mathbf{u}$ values whenever a change is made to $\mathbf{x}$. Figure 2.3 shows typical convergence plots from [Poulsen et al., 2010] using different heuristics for permuting the order inside the PGS loop. Notice the non-monotone behavior, which strongly suggests the missing convergence guarantees of the BLCP contact model from Section 1.3.5.
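A minimal sketch of the boxed projection scheme with the $\mathbf{M} = \mathbf{D} + \mathbf{L}$ (Gauss-Seidel) splitting follows; the name `pgs_blcp` is ours, and fixed bounds are assumed for simplicity (for the affine-bound case described above, one would re-evaluate $\mathbf{l}$ and $\mathbf{u}$ inside the loop whenever $\mathbf{x}$ changes).

```python
import numpy as np

def pgs_blcp(A, b, l, u, x, sweeps=100):
    """PGS for the BLCP: clamp the unconstrained Gauss-Seidel update to [l_i, u_i].

    Componentwise this realizes min(u_i, max(l_i, x_i - r_i / A_ii)), the boxed
    counterpart of the LCP update, with r_i the residual at the freshest iterate.
    """
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            r_i = A[i, :].dot(x) + b[i]
            x[i] = min(u[i], max(l[i], x[i] - r_i / A[i, i]))
    return x
```

When a component hits its bound, the remaining free components adjust around it; on a small example with an active upper bound the iterates settle on the boxed solution.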
2.2.2 USING A QUADRATIC PROGRAMMING PROBLEM

In the following derivation of the PGS and PSOR iterative methods, we make use of the QP reformulation. Our derivation follows in the steps of [Mandel, 1984]. The reformulation allows us to prove convergence properties of the PGS and PSOR methods. Assume that $\mathbf{A}$ is symmetric and PSD; then the LCP can be restated as a minimization problem of a constrained convex QP
problem

$$\mathbf{x}^* = \operatorname*{arg\,min}_{\mathbf{x} \geq 0} f(\mathbf{x}) \tag{2.65}$$

where

$$f(\mathbf{x}) \equiv \tfrac{1}{2}\mathbf{x}^T\mathbf{A}\mathbf{x} + \mathbf{x}^T\mathbf{b} \tag{2.66}$$

Given the $i$th unit axis vector $\hat{\mathbf{e}}_i$, defined as

$$(\hat{\mathbf{e}}_i)_j = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \tag{2.67}$$

the $i$th relaxation step consists in solving the one-dimensional problem

$$\tau^* = \operatorname*{arg\,min}_{\mathbf{x} + \tau\hat{\mathbf{e}}_i \geq 0} f(\mathbf{x} + \tau\hat{\mathbf{e}}_i) \tag{2.68}$$

and then setting

$$\mathbf{x} \leftarrow \mathbf{x} + \tau^*\hat{\mathbf{e}}_i \tag{2.69}$$

which is the same as the coordinate-wise update

$$x_i \leftarrow x_i + \tau^* \tag{2.70}$$

One relaxation cycle consists of one sequential sweep over all $i$th components in increasing order, that is, for $i = 1$ to $n$. The one-dimensional objective function can be rewritten as

$$f(\mathbf{x} + \tau\hat{\mathbf{e}}_i) = \tfrac{1}{2}(\mathbf{x} + \tau\hat{\mathbf{e}}_i)^T\mathbf{A}(\mathbf{x} + \tau\hat{\mathbf{e}}_i) + (\mathbf{x} + \tau\hat{\mathbf{e}}_i)^T\mathbf{b} \tag{2.71a}$$
$$= \tfrac{1}{2}\tau^2 A_{ii} + \tau(\mathbf{A}\mathbf{x} + \mathbf{b})_i + \underbrace{\tfrac{1}{2}\mathbf{x}^T\mathbf{A}\mathbf{x} + \mathbf{x}^T\mathbf{b}}_{f(\mathbf{x})\ \text{const.}} \tag{2.71b}$$
$$= \underbrace{\tfrac{1}{2}\tau^2 A_{ii} + \tau\underbrace{(\mathbf{A}\mathbf{x} + \mathbf{b})_i}_{r_i}}_{g(\tau)} + f(\mathbf{x}) \tag{2.71c}$$

So,

$$f(\mathbf{x} + \tau\hat{\mathbf{e}}_i) = g(\tau) + f(\mathbf{x}) \tag{2.72}$$

From this we observe that we only need to minimize the second-degree polynomial $g(\tau)$. For now we assume $A_{ii} > 0$; then we find the unconstrained minimizer of $g(\tau)$ to be

$$\tau_u = -\frac{r_i}{A_{ii}} \tag{2.73}$$
Figure 2.4: Illustrating the shape of $g(\tau)$ for three cases: $A_{ii} > 0$, $A_{ii} = 0$, and $A_{ii} < 0$. Observe that $\tau = 0$ is a trivial root of $g$ in all three cases.
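The closed-form relaxation step $x_i \leftarrow \max(0, x_i - r_i/A_{ii})$ can be checked numerically against a brute-force search over the feasible coordinate values. This is a sanity-check sketch under the assumption $A_{ii} > 0$; the helper names `f` and `coordinate_update` are ours.

```python
import numpy as np

def f(A, b, x):
    """The QP objective f(x) = 0.5 x^T A x + x^T b from (2.66)."""
    return 0.5 * x.dot(A.dot(x)) + x.dot(b)

def coordinate_update(A, b, x, i):
    """Constrained minimizer of f along e_i subject to x_i + tau >= 0,
    i.e., the relaxation update x_i <- max(0, x_i - r_i / A_ii)."""
    r_i = A[i, :].dot(x) + b[i]
    return max(0.0, x[i] - r_i / A[i, i])
```

Scanning a fine grid of feasible values for one coordinate (all others held fixed) should never find a lower objective value than the closed-form update.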
Next we consider the constraint $x_i + \tau \geq 0$. Applying this, we find the constrained minimizer to be

$$\tau_c = \max(\tau_u,\; -x_i) \tag{2.74}$$

which yields the final update rule for the relaxation step

$$x_i \leftarrow \max\left(0,\; x_i - \frac{r_i}{A_{ii}}\right) \tag{2.75}$$

This is algebraically equivalent to the $i$th component in the PGS update (2.40). Let us briefly revisit the possibility that $A_{ii} \leq 0$; then $g(\tau)$ is unbounded from below. The three cases of $g(\tau)$ are illustrated in Figure 2.4. If the unboundedness occurs for $\tau \to -\infty$, then all is good, as (2.75) indicates $x_i = 0$ is a sensible update that respects the constraint. This is the case whenever $A_{ii} < 0$. However, if unboundedness occurs as $\tau \to \infty$, only then have we found a direction that we cannot minimize in, and our problem is ill-posed. This may occur in case $A_{ii} = 0$ and $r_i < 0$, where $g(\tau)$ is a line with negative slope. If $A_{ii} = 0$ and $r_i \geq 0$, then we have a line with non-negative slope, and $x_i = 0$ yields a feasible minimizer, as $\tau = -\infty$ is an unconstrained minimizer for this case. Should we have $A_{ii} = r_i = 0$, then trivially we have $x_i = 0$ as a solution too. Consider again the polynomial, and assume we have $A_{ii} > 0$:

$$g(\tau) = \tfrac{1}{2}A_{ii}\tau^2 + r_i\tau \tag{2.76}$$

This means the legs of the second-order polynomial are pointing upward. The polynomial has one trivial root

$$\tau_1 = 0 \tag{2.77}$$
and a unique global minimum at

$$\tau_u = -\frac{r_i}{A_{ii}} \tag{2.78}$$
where

$$g\left(-\frac{r_i}{A_{ii}}\right) = -\frac{r_i^2}{2A_{ii}}$$

The other root is found at

$$\tau_2 = -\frac{2r_i}{A_{ii}}$$

[…]

$$\mathcal{L} \equiv \{i \mid v_i > 0\} \tag{2.91a}$$
$$\mathcal{U} \equiv \{i \mid v_i < 0\} \tag{2.91b}$$
$$\mathcal{A} \equiv \{i \mid v_i = 0\} \tag{2.91c}$$
Again, we assume $l_i \leq 0 < u_i$ for all $i$. The definition in (2.91) is based on the $\mathbf{v}$-vector. However, we could just as well use the $\mathbf{x}$-vector,

$$\mathcal{L} \equiv \{i \mid x_i = l_i\} \tag{2.92a}$$
$$\mathcal{U} \equiv \{i \mid x_i = u_i\} \tag{2.92b}$$
$$\mathcal{A} \equiv \{i \mid l_i < x_i < u_i\} \tag{2.92c}$$

When the PGS method terminates, we know $\mathbf{x}$ is feasible (although not necessarily the correct solution). However, $\mathbf{v}$ may be infeasible due to the projection on $\mathbf{x}$ made by the PGS method. In an actual implementation, the set definition (2.92) is practical for variable bookkeeping of set memberships. We use a permutation of the indices to create the imaginary partitioning of the system of linear equations (1.186)

$$\begin{bmatrix} \mathbf{v}_\mathcal{A} \\ \mathbf{v}_\mathcal{L} \\ \mathbf{v}_\mathcal{U} \end{bmatrix} = \begin{bmatrix} \mathbf{A}_{\mathcal{A}\mathcal{A}} & \mathbf{A}_{\mathcal{A}\mathcal{L}} & \mathbf{A}_{\mathcal{A}\mathcal{U}} \\ \mathbf{A}_{\mathcal{L}\mathcal{A}} & \mathbf{A}_{\mathcal{L}\mathcal{L}} & \mathbf{A}_{\mathcal{L}\mathcal{U}} \\ \mathbf{A}_{\mathcal{U}\mathcal{A}} & \mathbf{A}_{\mathcal{U}\mathcal{L}} & \mathbf{A}_{\mathcal{U}\mathcal{U}} \end{bmatrix} \begin{bmatrix} \mathbf{x}_\mathcal{A} \\ \mathbf{x}_\mathcal{L} \\ \mathbf{x}_\mathcal{U} \end{bmatrix} + \begin{bmatrix} \mathbf{b}_\mathcal{A} \\ \mathbf{b}_\mathcal{L} \\ \mathbf{b}_\mathcal{U} \end{bmatrix} \tag{2.93}$$

We know that what is true for one set is theoretically equally true for the other set. Therefore, we can use a combination of the two definitions to eliminate some of the unknowns. We know that $\forall i \in \mathcal{A} \Rightarrow v_i = 0$, as well as $\forall i \in \mathcal{L} \Rightarrow x_i = l_i$ and $\forall i \in \mathcal{U} \Rightarrow x_i = u_i$, which gives us

$$\begin{bmatrix} \mathbf{0} \\ \mathbf{v}_\mathcal{L} \\ \mathbf{v}_\mathcal{U} \end{bmatrix} = \begin{bmatrix} \mathbf{A}_{\mathcal{A}\mathcal{A}} & \mathbf{A}_{\mathcal{A}\mathcal{L}} & \mathbf{A}_{\mathcal{A}\mathcal{U}} \\ \mathbf{A}_{\mathcal{L}\mathcal{A}} & \mathbf{A}_{\mathcal{L}\mathcal{L}} & \mathbf{A}_{\mathcal{L}\mathcal{U}} \\ \mathbf{A}_{\mathcal{U}\mathcal{A}} & \mathbf{A}_{\mathcal{U}\mathcal{L}} & \mathbf{A}_{\mathcal{U}\mathcal{U}} \end{bmatrix} \begin{bmatrix} \mathbf{x}_\mathcal{A} \\ \mathbf{l}_\mathcal{L} \\ \mathbf{u}_\mathcal{U} \end{bmatrix} + \begin{bmatrix} \mathbf{b}_\mathcal{A} \\ \mathbf{b}_\mathcal{L} \\ \mathbf{b}_\mathcal{U} \end{bmatrix} \tag{2.94}$$

To solve this system for the unknowns $\mathbf{v}_\mathcal{L}$, $\mathbf{v}_\mathcal{U}$, and $\mathbf{x}_\mathcal{A}$, we first compute $\mathbf{x}_\mathcal{A}$ from

$$\mathbf{A}_{\mathcal{A}\mathcal{A}}\mathbf{x}_\mathcal{A} = -(\mathbf{b}_\mathcal{A} + \mathbf{A}_{\mathcal{A}\mathcal{L}}\mathbf{l}_\mathcal{L} + \mathbf{A}_{\mathcal{A}\mathcal{U}}\mathbf{u}_\mathcal{U}) \tag{2.95}$$

Observe that $\mathbf{A}_{\mathcal{A}\mathcal{A}}$ is a symmetric principal submatrix of $\mathbf{A}$. Knowing $\mathbf{x}_\mathcal{A}$, we can easily compute $\mathbf{v}_\mathcal{L}$ and $\mathbf{v}_\mathcal{U}$:

$$\mathbf{v}_\mathcal{L} = \mathbf{A}_{\mathcal{L}\mathcal{A}}\mathbf{x}_\mathcal{A} + \mathbf{A}_{\mathcal{L}\mathcal{L}}\mathbf{l}_\mathcal{L} + \mathbf{A}_{\mathcal{L}\mathcal{U}}\mathbf{u}_\mathcal{U} + \mathbf{b}_\mathcal{L} \tag{2.96a}$$
$$\mathbf{v}_\mathcal{U} = \mathbf{A}_{\mathcal{U}\mathcal{A}}\mathbf{x}_\mathcal{A} + \mathbf{A}_{\mathcal{U}\mathcal{L}}\mathbf{l}_\mathcal{L} + \mathbf{A}_{\mathcal{U}\mathcal{U}}\mathbf{u}_\mathcal{U} + \mathbf{b}_\mathcal{U} \tag{2.96b}$$
Now, we check that the constraints are satisfied: $\mathbf{v}_\mathcal{L} \geq 0$, $\mathbf{v}_\mathcal{U} \leq 0$, and $\mathbf{l}_\mathcal{A} \leq \mathbf{x}_\mathcal{A} \leq \mathbf{u}_\mathcal{A}$. If all constraints are satisfied, we have reached a solution. Rather than testing the constraints explicitly, it is simpler to perform a projection on the reduced problem:

$$\mathbf{x}_\mathcal{A} \leftarrow \min(\mathbf{u}_\mathcal{A},\; \max(\mathbf{l}_\mathcal{A},\; \mathbf{x}_\mathcal{A})) \tag{2.97}$$

We assemble the full solution vector $\mathbf{x} \leftarrow [\mathbf{x}_\mathcal{A}^T\; \mathbf{l}_\mathcal{L}^T\; \mathbf{u}_\mathcal{U}^T]^T$ before re-estimating the index sets for the next iteration. The projection on the reduced problem will either leave the active set unchanged or reduce it further. See Algorithm 7 for full pseudocode.

Algorithm 7 Projected Gauss-Seidel subspace minimization (PGS-SM) method.
Input: k_sm, x, A, b
 1: while not converged do
 2:   x ← run PGS for at least k_pgs iterations
 3:   if termination criteria reached then
 4:     return x
 5:   end if
 6:   for k = 1 to k_sm do
 7:     L ← {i | x_i = l_i}
 8:     U ← {i | x_i = u_i}
 9:     A ← {i | l_i < x_i < u_i}
10:     solve: A_AA x_A = −(b_A + A_AL l_L + A_AU u_U)
11:     v_L ← A_LA x_A + A_LL l_L + A_LU u_U + b_L
12:     v_U ← A_UA x_A + A_UL l_L + A_UU u_U + b_U
13:     update: (l, u)
14:     x_A ← min(u_A, max(l_A, x_A))
15:     x ← [x_A^T l_L^T u_U^T]^T
16:     if termination criteria reached then
17:       return x
18:     end if
19:   end for
20: end while
Notice that Algorithm 7 does not specify which termination criteria to use. A particularly useful termination criterion for the PGS-SM method could be to monitor whether the set $\mathcal{A}$ has changed from the previous iteration,

$$\mathcal{A}(\mathbf{x}^{k+1}) = \mathcal{A}(\mathbf{x}^k) \tag{2.98}$$
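The inner subspace step of Algorithm 7 can be sketched as follows. This is a simplified one-step illustration with our own name `subspace_step`; it estimates the index sets from $\mathbf{x}$ as in (2.92), solves the reduced system (2.95) with a dense solver, and applies the reduced projection (2.97).

```python
import numpy as np

def subspace_step(A, b, l, u, x, tol=1e-10):
    """One subspace-minimization step: estimate sets, solve reduced system, project."""
    L = np.isclose(x, l, rtol=0.0, atol=tol)      # lower-bound set (2.92a)
    U = np.isclose(x, u, rtol=0.0, atol=tol)      # upper-bound set (2.92b)
    free = ~(L | U)                               # active (free) set (2.92c)
    if free.any():
        rhs = -(b[free]
                + A[np.ix_(free, L)].dot(l[L])
                + A[np.ix_(free, U)].dot(u[U]))   # right-hand side of (2.95)
        xA = np.linalg.solve(A[np.ix_(free, free)], rhs)
        x[free] = np.minimum(u[free], np.maximum(l[free], xA))   # projection (2.97)
    x[L] = l[L]
    x[U] = u[U]
    return x
```

Starting from an approximate PGS iterate whose first component sits on its upper bound, one subspace step recovers the exact value of the free component.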
Figure 2.9: Illustrated test cases used for the PGS-SM method: (a) an arched snake composed of boxes and hinge joints with limits; (b) a heavy box placed upon an arched snake; (c) a large stack of boxes of equal mass; (d) a heavy box resting on lighter smaller boxes; (e) boxes resting on an inclined surface resulting in static friction forces; (f) a small pyramid of gears; (g) a medium-scale pyramid of gears; and (h) a large configuration of boxes stacked in a friction-inducing manner.
For more thoughts on alternative termination criteria we refer to Section 2.4.4. Figure 2.9 shows different simulation results obtained with the PGS-SM method [Silcowitz et al., 2010a]. As observed in Figure 2.10, the PGS-SM method behaves rather well for small configurations and configurations with joints. For larger configurations, we obtain convergence results similar to the PGS method.
2.2.6 THE NONSMOOTH NONLINEAR CONJUGATE GRADIENT METHOD

The PGS iteration can be written in generic form using (2.63) as

$$\mathbf{x}^{k+1} = \min\Big(\underbrace{\mathbf{u}(\mathbf{x}^k)}_{\mathbf{T}_U\mathbf{x}^k + \mathbf{t}_U},\; \max\Big(\underbrace{\mathbf{l}(\mathbf{x}^k)}_{\mathbf{T}_L\mathbf{x}^k + \mathbf{t}_L},\; \underbrace{-(\mathbf{D} + \mathbf{L})^{-1}(\mathbf{U}\mathbf{x}^k + \mathbf{b})}_{\mathbf{T}\mathbf{x}^k + \mathbf{t}}\Big)\Big) \tag{2.99}$$
where the lower and upper bound functions $\mathbf{l}, \mathbf{u}: \mathbb{R}^n \mapsto \mathbb{R}^n$ are affine functions. The $\mathbf{T}_L$ and $\mathbf{T}_U$ matrices express the linear relations between the tangential friction forces and their associated normal forces. The $\mathbf{t}_L$ and $\mathbf{t}_U$ vectors can be used to express fixed bound constraints, such as a
[Figure 2.10 plots: $\Psi(\lambda^k)$ versus PGS iterations (0 to 1500, log scale on the $y$-axis) comparing PGS-SM against PGS on the test cases: Arched Snake; Heavy Box On Arched Snake; Large Stack of Boxes; Heavy Box Resting On Light Boxes; Boxes Resting On Inclined Surface; Small Pyramid of Gears; Medium Pyramid of Gears; Friction Dependent Structure.]
Figure 2.10: Corresponding convergence plots for the test cases in Figure 2.9. Observe the jaggedness in the PGS-SM plots in (b), (c), (d), and (g). The spikes indicate that the PGS-SM method guessed a wrong active set. This can cause the merit function to rise abruptly. The merit function $\Psi$ is the Fischer-Burmeister function from [Silcowitz et al., 2009]. The $x$-axis is measured in units of one PGS iteration to make comparison easier.
normal force constraint. Thus, the PGS iteration can be perceived as a selector function of three affine functions. Assuming a converging sequence $\mathbf{x}^k \to \mathbf{x}^*$ for $k \to \infty$, the solution of PGS can be written as the fixed point formulation

$$\mathbf{x}^* = \underbrace{\min(\mathbf{T}_U\mathbf{x}^* + \mathbf{t}_U,\; \max(\mathbf{T}_L\mathbf{x}^* + \mathbf{t}_L,\; \mathbf{T}\mathbf{x}^* + \mathbf{t}))}_{\equiv\; \mathbf{H}\mathbf{x}^* + \mathbf{h}} \tag{2.100}$$

The right-hand side of (2.100) can be conceptually considered as the evaluation of an affine function, $\mathbf{H}\mathbf{x}^* + \mathbf{h}$. This is true if the active set of constraints is known in advance. Therefore, we have

$$\mathbf{0} = (\mathbf{H} - \mathbf{I})\mathbf{x}^* + \mathbf{h} \tag{2.101}$$

Observe that explicit assembly is not needed for any of the matrices; instead, the PGS method can be used to implicitly evaluate the residual of any given iteration, $\mathbf{r}^k = (\mathbf{H} - \mathbf{I})\mathbf{x}^k + \mathbf{h}$. Thus, if we write one iteration of the standard PGS method as

$$\mathbf{x}^{k+1} = \mathrm{PGS}(\mathbf{x}^k) \tag{2.102}$$
then $\mathbf{r}^k = \mathbf{x}^{k+1} - \mathbf{x}^k = \mathrm{PGS}(\mathbf{x}^k) - \mathbf{x}^k$. This can be thought of as the gradient of a nonsmooth nonlinear quasi-quadratic function $f(\mathbf{x}^k) \equiv \tfrac{1}{2}\|\mathbf{r}^k\|^2$. We are essentially seeking a local minimizer of $f$; however, we only know its gradient $\nabla f(\mathbf{x}^k) = -\mathbf{r}^k$. The Fletcher-Reeves nonlinear conjugate gradient method is perfect for this [Nocedal and Wright, 1999]. In each iteration of the conjugate gradient method we perform the update

$$\mathbf{x}^{k+1} = \mathbf{x}^k + \alpha_k\mathbf{p}^k \tag{2.103}$$

where $\mathbf{p}^k$ is the search direction and $\alpha_k$ can be found using a line search method. Next, a new search direction is computed by

$$\beta^{k+1} = \frac{\|\nabla f^{k+1}\|^2}{\|\nabla f^k\|^2} \tag{2.104a}$$
$$\mathbf{p}^{k+1} = \beta^{k+1}\mathbf{p}^k - \nabla f^{k+1} \tag{2.104b}$$

In an interactive context, the line search on $f(\mathbf{x}^{k+1})$ is dropped. Here, the full step length, $\alpha = 1$, is used and the method is restarted whenever $\|\nabla f^{k+1}\|^2 > \|\nabla f^k\|^2$. When computing $\mathrm{PGS}(\mathbf{x}^k)$, it is practical to do so in-place, meaning that a PGS step is taken implicitly on $\mathbf{x}^k$. Therefore, the update of $\mathbf{x}^k$ is done separately in two places in each iteration. The full algorithm is stated in Algorithm 8. An implementation of the nonsmooth nonlinear conjugate gradient (NNCG) method can be found in [Silcowitz-Hansen, 2008-2010] and [Coumans, 2005]. Detailed convergence studies may be found in [Silcowitz et al., 2010b]. Figure 2.11 shows some simulation results taken from [Silcowitz et al., 2010b]. Figure 2.12 shows corresponding convergence rates. Notice the
Algorithm 8 Nonsmooth nonlinear conjugate gradient (NNCG) method.
Input: x, A, b
 1: x¹ ← PGS(x⁰)
 2: ∇f⁰ ← −(x¹ − x⁰)
 3: p⁰ ← −∇f⁰
 4: k ← 1
 5: while not converged do
 6:   x^{k+1} ← PGS(x^k)
 7:   ∇f^k ← −(x^{k+1} − x^k)
 8:   β^k ← ‖∇f^k‖² / ‖∇f^{k−1}‖²
 9:   if β^k > 1 then
10:     p^k ← 0   // restart
11:   else
12:     x^{k+1} ← x^{k+1} + β^k p^{k−1}
13:     p^k ← β^k p^{k−1} − ∇f^k
14:   end if
15:   k ← k + 1
16: end while
dramatic change in convergence behavior. The improved accuracy means the NNCG method is better equipped to deal with, for instance, large mass ratios. Usually PGS performs slowly on such problems, and NNCG will out-perform it. Furthermore, NNCG is overall computationally cheaper than applying a direct method that otherwise would be able to handle large mass ratios.
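Algorithm 8 can be sketched in a few lines once a PGS sweep is available as a building block. This is an illustrative translation with our own names `pgs_sweep` and `nncg`; it uses a boxed PGS sweep, a fixed number of outer iterations instead of a convergence test, and the restart rule from line 9 of the algorithm.

```python
import numpy as np

def pgs_sweep(A, b, l, u, x):
    """One boxed PGS sweep used as the inner building block PGS(x)."""
    y = x.copy()
    for i in range(len(b)):
        r_i = A[i, :].dot(y) + b[i]
        y[i] = min(u[i], max(l[i], y[i] - r_i / A[i, i]))
    return y

def nncg(A, b, l, u, x0, iters=200):
    """Fletcher-Reeves NNCG on the residual of the PGS map (Algorithm 8 flavor)."""
    x = pgs_sweep(A, b, l, u, x0)
    grad_old = -(x - x0)              # gradient estimate -r^0
    p = -grad_old
    for _ in range(iters):
        x_new = pgs_sweep(A, b, l, u, x)
        grad = -(x_new - x)           # gradient estimate -r^k
        denom = grad_old.dot(grad_old)
        beta = grad.dot(grad) / denom if denom > 0.0 else 0.0
        if beta > 1.0:
            p = np.zeros_like(x)      # restart
        else:
            x_new = x_new + beta * p
            p = beta * p - grad
        grad_old = grad
        x = x_new
    return x
```

On a small LCP (lower bounds zero, no upper bounds) the iterates reach the complementarity solution.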
2.3 THE INTERIOR POINT METHOD

The interior point method is an iterative method based on a rewrite of the original LCP, turning it into a root search problem. The root search problem is solved using a Newton method. Starting from the beginning, we have the LCP:

$$\mathbf{A}\mathbf{x} + \mathbf{b} \geq 0 \tag{2.105a}$$
$$\mathbf{x} \geq 0 \tag{2.105b}$$
$$\mathbf{x}^T(\mathbf{A}\mathbf{x} + \mathbf{b}) = 0 \tag{2.105c}$$
Figure 2.11: Illustrated test cases used for the nonlinear conjugate gradient method: (a) boxes resting on an inclined surface resulting in static friction forces; (b) a large configuration of boxes stacked in a friction-inducing manner; (c) a large stack of boxes of equal mass; (d) a small pyramid of gears; (e) a medium-scale pyramid of gears; (f) a small stack of boxes of equal mass; (g) a standing arched snake, composed of boxes and hinge joints; (h) a heavy box placed on top of lighter boxes with mass ratio 1/100; and (i) a snake suspended in the air by a terminating fixed link. Notice the resting friction dependence in (a) and (b).
Figure 2.12: Test results: up to 5,000 iterations of both the PGS method and the NNCG method. The NNCG method clearly converges faster, and often to a higher accuracy, than the PGS method. Notice the superior rate of convergence in (f)-(i).
We introduce the slack variable $\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{b}$ and rewrite the LCP

$$(\mathbf{A}\mathbf{x} - \mathbf{y} + \mathbf{b})_i = 0 \tag{2.106a}$$
$$x_i y_i = 0 \tag{2.106b}$$
$$x_i, y_i \geq 0 \tag{2.106c}$$

The idea is to solve this iteratively while keeping $x_i^k, y_i^k > 0$. To ensure this imposed invariant, we introduce a positive value $\gamma > 0$ such that

$$(\mathbf{A}\mathbf{x} - \mathbf{y} + \mathbf{b})_i = 0 \tag{2.107a}$$
$$x_i y_i = \gamma \tag{2.107b}$$
$$x_i, y_i > 0 \tag{2.107c}$$

The parameter $\gamma$ is a relaxed complementarity measure; for $\gamma \to 0$ we expect to find a limiting solution to the LCP. We may now introduce the so-called Kojima mapping, which we use to reformulate the relaxed LCP formulation above,

$$F(\mathbf{x}, \mathbf{y};\, \gamma) = \begin{bmatrix} \mathbf{A}\mathbf{x} - \mathbf{y} + \mathbf{b} \\ \mathbf{M}\mathbf{W}\mathbf{e} - \gamma\mathbf{e} \end{bmatrix} = \mathbf{0} \tag{2.108}$$

where $\mathbf{M}$ and $\mathbf{W}$ are diagonal matrices with $M_{ii} = x_i$ and $W_{ii} = y_i$, and $\mathbf{e}$ is the $n$-dimensional vector of ones. In the $k$th iteration we let the complementarity measure $\gamma$ be defined as

$$\gamma^k = \sigma\frac{(\mathbf{x}^k)^T\mathbf{y}^k}{n} \tag{2.109}$$

where $n$ is the dimension of the problem and the relaxation (centering) parameter $\sigma$ is chosen such that $0 < \sigma < 1$. In the $k$th iteration, we solve $F(\mathbf{x}, \mathbf{y};\, \gamma^k) = \mathbf{0}$ approximately. The solutions of the Kojima map for all positive values of $\gamma$ define the central path, which is a trajectory that leads to the solution of the LCP as $\gamma$ tends to zero. Using a Taylor series expansion, we define the Newton equation

$$\begin{bmatrix} \mathbf{A} & -\mathbf{I} \\ \mathbf{W}^k & \mathbf{M}^k \end{bmatrix}\begin{bmatrix} \Delta\mathbf{x}^k \\ \Delta\mathbf{y}^k \end{bmatrix} = \begin{bmatrix} -\mathbf{A}\mathbf{x}^k + \mathbf{y}^k - \mathbf{b} \\ -\mathbf{M}^k\mathbf{W}^k\mathbf{e} + \gamma^k\mathbf{e} \end{bmatrix} \tag{2.110}$$

Having solved for the Newton direction $[\Delta\mathbf{x}^k\; \Delta\mathbf{y}^k]^T$, we may now compute the Newton update

$$\begin{bmatrix} \mathbf{x}^{k+1} \\ \mathbf{y}^{k+1} \end{bmatrix} = \begin{bmatrix} \mathbf{x}^k \\ \mathbf{y}^k \end{bmatrix} + \tau^k\begin{bmatrix} \Delta\mathbf{x}^k \\ \Delta\mathbf{y}^k \end{bmatrix} \tag{2.111}$$

where $\tau^k$ is the step length. The step length is chosen as the maximum positive value such that

$$x_i^{k+1} > 0 \quad\text{and}\quad y_i^{k+1} > 0 \quad\text{for all } i \tag{2.112}$$
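A compact sketch of this scheme in the spirit of Algorithm 9 follows. The name `interior_point_lcp` is ours, and we simplify by using a fixed centering parameter `sigma` and a fixed fraction-to-boundary factor `alpha`, rather than the iteration-dependent schedules of the full algorithm; a dense solver stands in for whatever linear solver one would use in practice.

```python
import numpy as np

def interior_point_lcp(A, b, iters=30, sigma=0.1, alpha=0.99):
    """Basic interior point method for the LCP via the Kojima map (2.108)."""
    n = len(b)
    x = np.ones(n)
    y = np.maximum(A.dot(x) + b, 1.0)     # strictly positive starting pair
    for _ in range(iters):
        gamma = sigma * x.dot(y) / n       # relaxed complementarity (2.109)
        # Newton system (2.110)
        J = np.block([[A, -np.eye(n)], [np.diag(y), np.diag(x)]])
        F = np.concatenate([A.dot(x) - y + b, x * y - gamma])
        d = np.linalg.solve(J, -F)
        dx, dy = d[:n], d[n:]
        # largest step keeping x, y strictly positive, cf. (2.112)-(2.114)
        def step(v, dv):
            neg = dv < 0.0
            return min(1.0, alpha * np.min(-v[neg] / dv[neg])) if neg.any() else 1.0
        t = min(step(x, dx), step(y, dy))
        x, y = x + t * dx, y + t * dy
    return x, y
```

On a small positive definite example the pair $(\mathbf{x}, \mathbf{y})$ approaches a complementary solution with $\mathbf{x}^T\mathbf{y} \to 0$.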
That is, we let $\tau^k = \min(\tau_x, \tau_y)$, where

$$\tau_x = \max\left\{0 < \tau \leq 1 \;\middle|\; \mathbf{x}^k + \tau\Delta\mathbf{x}^k \geq (1 - \alpha)\mathbf{x}^k\right\} \tag{2.113}$$

and

$$\tau_y = \max\left\{0 < \tau \leq 1 \;\middle|\; \mathbf{y}^k + \tau\Delta\mathbf{y}^k \geq (1 - \alpha)\mathbf{y}^k\right\} \tag{2.114}$$

The parameter $0 < \alpha < 1$ controls how far we back away from the maximum step for which the conditions in Equation (2.112) are satisfied. To accelerate convergence, we let $\alpha$ go to 1 as the iterates approach the solution. The full algorithm is outlined in Algorithm 9.

Algorithm 9 Interior Point Method.
Input: A, b, x, N
 1: k ← 1
 2: while k < N do
 3:   α ← k/(N + 1)
 4:   γ ← (1/k) (x^k)^T (y^k)/n
 5:   M ← diag(x)
 6:   W ← diag(y)
 7:   J ← [A  −I; W  M]
 8:   F ← [A x^k − y^k + b; M W e − γ e]
 9:   solve [Δx^k; Δy^k] = −J⁻¹ F
10:   τ_x ← max{0 < τ ≤ 1 | x^k + τΔx^k ≥ (1 − α)x^k}
11:   τ_y ← max{0 < τ ≤ 1 | y^k + τΔy^k ≥ (1 − α)y^k}
12:   τ^k ← min(τ_x, τ_y)
13:   [x^{k+1}; y^{k+1}] ← [x^k; y^k] + τ^k [Δx^k; Δy^k]
14:   k ← k + 1
15: end while

2.4 NEWTON METHODS
In this section, we present three different Newton methods that share many similarities; the main differences stem from the fact that they are based on two different nonsmooth reformulations.
2.4.1 THE MINIMUM MAP NEWTON METHOD

Recalling the minimum map reformulation of Equations (1.20) and (1.60), we have the root search problem $\mathbf{H}: \mathbb{R}^n \mapsto \mathbb{R}^n$ given by

$$\mathbf{H}(\mathbf{x}) \equiv \begin{bmatrix} h(x_1, y_1) \\ \vdots \\ h(x_n, y_n) \end{bmatrix} = \mathbf{0} \tag{2.115}$$

Recall also that $\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{b}$, so $y_i = A_{ii}x_i + b_i + \sum_{j \neq i} A_{ij}x_j$; thus

$$H_i(\mathbf{x}) \equiv h(y_i, x_i) = \min\Big(\Big(A_{ii}x_i + b_i + \sum_{j \neq i} A_{ij}x_j\Big),\; x_i\Big) \tag{2.116}$$

The idea is to use a Newton method to solve the nonsmooth equation (2.115). To do so, we need to generalize the concept of derivative [Pang, 1990]. This requires some level of mathematical detail to set the scene correctly; for details we omit here, we refer to Appendix A.3 as a refresher on the basic mathematics of derivatives. The nonsmooth function $H_i(\mathbf{x})$ is a selection function of the affine functions $x_i$ and $(\mathbf{A}\mathbf{x} + \mathbf{b})_i$. Further, each $H_i$ is Lipschitz continuous (see Appendix A.2) and, since each of the components fulfills this requirement, then so does $\mathbf{H}(\mathbf{x})$ [Scholtes, 1994].

Definition 2.2 Consider any vector function $\mathbf{F}: \mathbb{R}^n \mapsto \mathbb{R}^n$. If there exists a function $B\mathbf{F}(\mathbf{x}, \Delta\mathbf{x})$ that is positive homogeneous in $\Delta\mathbf{x}$, that is, for any $\alpha \geq 0$

$$B\mathbf{F}(\mathbf{x}, \alpha\Delta\mathbf{x}) = \alpha B\mathbf{F}(\mathbf{x}, \Delta\mathbf{x}) \tag{2.117}$$

such that the limit

$$\lim_{\Delta\mathbf{x} \to \mathbf{0}} \frac{\mathbf{F}(\mathbf{x} + \Delta\mathbf{x}) - \mathbf{F}(\mathbf{x}) - B\mathbf{F}(\mathbf{x}, \Delta\mathbf{x})}{\|\Delta\mathbf{x}\|} = \mathbf{0} \tag{2.118}$$

exists, then we say that $\mathbf{F}$ is B-differentiable at $\mathbf{x}$, and the function $B\mathbf{F}(\mathbf{x}, \cdot)$ is called the B-derivative.

Notice that, since $\mathbf{H}(\mathbf{x})$ is Lipschitz and directionally differentiable, it is also B-differentiable. The B-derivative $B\mathbf{H}(\mathbf{x}, \cdot)$ is continuous, piecewise linear, and positive homogeneous. Observe that the B-derivative as a function of $\mathbf{x}$ is a set-valued mapping. We will use the B-derivative to calculate a descent direction for the merit function

$$\theta_{\min}(\mathbf{x}) = \tfrac{1}{2}\mathbf{H}(\mathbf{x})^T\mathbf{H}(\mathbf{x}) \tag{2.119}$$
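The minimum map (2.115)-(2.116) and its natural merit function (2.119) are straightforward to evaluate; a short sketch with our own names `min_map_H` and `min_map_merit`:

```python
import numpy as np

def min_map_H(A, b, x):
    """Minimum map reformulation: H_i(x) = min((Ax + b)_i, x_i), cf. (2.116)."""
    return np.minimum(A.dot(x) + b, x)

def min_map_merit(A, b, x):
    """Natural merit function theta(x) = 0.5 * H(x)^T H(x) from (2.119)."""
    H = min_map_H(A, b, x)
    return 0.5 * H.dot(H)
```

The merit value is zero exactly at an LCP solution and positive elsewhere, which is what makes it usable for convergence monitoring.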
Figure 2.13 shows surface plots of the minimum map reformulation and the corresponding natural merit function. Observe the sharp, non-differentiable lines on the natural merit function plot.
Figure 2.13: Illustration of the minimum map $\min(x, y)$ and the corresponding natural merit function.
[Figure: plots of $\min(x, ax + b)$ and $\tfrac{1}{2}\min(x, ax + b)^2$ for the sign cases of $a$ and $b$.]

[…] for $y_i > 0$ and $x_i = 0$,

$$p_i(\mathbf{x}) = \lambda\left(\frac{x_i}{\sqrt{x_i^2 + y_i^2}} - 1\right) + (1 - \lambda)\,y_i c_i \tag{2.194a}$$
$$q_i(\mathbf{x}) = \lambda\left(\frac{y_i}{\sqrt{x_i^2 + y_i^2}} - 1\right) + (1 - \lambda)\,x_i \tag{2.194b}$$

for any $0 \leq c_i \leq 1$. Symmetrically, we have for $y_i = 0$ and $x_i > 0$

$$p_i(\mathbf{x}) = \lambda\left(\frac{x_i}{\sqrt{x_i^2 + y_i^2}} - 1\right) + (1 - \lambda)\,y_i \tag{2.195a}$$
$$q_i(\mathbf{x}) = \lambda\left(\frac{y_i}{\sqrt{x_i^2 + y_i^2}} - 1\right) + (1 - \lambda)\,x_i d_i \tag{2.195b}$$

for any $0 \leq d_i \leq 1$. In [Chen and Kanzow, 2000] the choices $c_i = 0$ and $d_i = 0$ are used in the last two cases. Applying these choices, Theorem 2.8 tells us we are looking for an element $\mathbf{J} \in \partial\mathbf{G}$ such that: if $y_i \leq 0$ or $x_i \leq 0$, but $x_i$ and $y_i$ are not both 0 at the same time, then

$$p_i(\mathbf{x}) = \lambda\left(\frac{x_i}{\sqrt{x_i^2 + y_i^2}} - 1\right) \tag{2.196a}$$
$$q_i(\mathbf{x}) = \lambda\left(\frac{y_i}{\sqrt{x_i^2 + y_i^2}} - 1\right) \tag{2.196b}$$
else if $y_i > 0$ and $x_i > 0$,

$$p_i(\mathbf{x}) = \lambda\left(\frac{x_i}{\sqrt{x_i^2 + y_i^2}} - 1\right) + (1 - \lambda)\,y_i \tag{2.197a}$$
$$q_i(\mathbf{x}) = \lambda\left(\frac{y_i}{\sqrt{x_i^2 + y_i^2}} - 1\right) + (1 - \lambda)\,x_i \tag{2.197b}$$
else if $y_i = x_i = 0$, then

$$p_i(\mathbf{x}) = \lambda(a_i - 1) \tag{2.198a}$$
$$q_i(\mathbf{x}) = \lambda(b_i - 1) \tag{2.198b}$$

for any $a_i, b_i \in \mathbb{R}$ such that $\left\|[a_i\; b_i]^T\right\| \leq 1$.
This has the added benefit that any one of the strategies for picking $a_i$ and $b_i$ discussed in the previous section can be re-applied to the penalized Fischer-Burmeister formulation.
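A small sketch of the penalized Fischer-Burmeister function and one generalized-gradient element follows. We assume here the standard Chen–Chen–Kanzow form $\lambda\,\phi_{FB}(a,b) + (1-\lambda)\max(a,0)\max(b,0)$ with $\phi_{FB}(a,b) = \sqrt{a^2+b^2} - a - b$, which is consistent with the gradient cases above; the function names are ours, and the gradient element uses the $a_i = b_i = 0$ choice at the origin.

```python
import numpy as np

def penalized_fb(a, b, lam=0.95):
    """Penalized Fischer-Burmeister function (assumed CCK form):
    lam * (sqrt(a^2 + b^2) - a - b) + (1 - lam) * max(a, 0) * max(b, 0)."""
    fb = np.sqrt(a * a + b * b) - a - b
    return lam * fb + (1.0 - lam) * max(a, 0.0) * max(b, 0.0)

def penalized_fb_gradient(a, b, lam=0.95):
    """One element (p, q) of the generalized gradient, following (2.196)-(2.198)."""
    if a == 0.0 and b == 0.0:
        return (lam * (0.0 - 1.0), lam * (0.0 - 1.0))  # a_i = b_i = 0 in (2.198)
    r = np.sqrt(a * a + b * b)
    p = lam * (a / r - 1.0)
    q = lam * (b / r - 1.0)
    if a > 0.0 and b > 0.0:                            # penalty term active (2.197)
        p += (1.0 - lam) * b
        q += (1.0 - lam) * a
    return (p, q)
```

Away from the nonsmooth points the gradient element agrees with a finite-difference derivative, which is a convenient implementation check.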
2.4.4 TIPS, TRICKS AND IMPLEMENTATION HACKS

The Newton equations for all Newton methods we have outlined can be solved using an iterative linear system method. We have successfully applied the two Krylov subspace methods—projected conjugate gradient (PCG) and GMRES [Saad, 2003]. GMRES is more general than PCG and can be used for any non-singular matrix, whereas PCG requires the matrix to be symmetric positive definite (PD). PCG cannot be used for the full Newton equation in the case of the minimum map reformulation. However, for the Schur-reduced system it may be possible if the principal submatrix is symmetric PD. GMRES is more general purpose but will cost more in storage. The iterative linear system methods are an advantage in combination with the finite difference approximation strategy that we introduced for the Fischer-Newton method. The same trick could be applied for the minimum map Newton method. The advantage of this numerical approximation is an overall numerical method that does not have to assemble the global A-matrix. Instead, the method can work with this matrix in an implicit form, or using some known factorization of the matrix. In particular, for interactive rigid-body dynamics this is an advantage, as a global matrix-free method holds the possibility of linear computation time-scaling, rather than quadratic, in the number of variables [Silcowitz et al., 2009]. Often the storage will only be linear for a factorization, whereas the global matrix could be dense in the worst case and require quadratic storage complexity. For fluid problems, the global matrix is rarely assembled; instead, finite difference stencils are used on the regular fluid grid to implicitly compute matrix-vector products. Thus, for fluid problems, iterative linear system methods should be used. For the Newton methods, the PCG method could be used. In our implementation we use GMRES to keep our Newton methods more general.
Algorithm 13 Nonsmooth Gradient Descent Method.
Input: A, b, x
 1: k ← 0
 2: while forever do
 3:   solve: Δx^k = −J^T F^k
 4:   τ^k ← projected-line-search(…)
 5:   x^{k+1} ← x^k + τ^k Δx^k
 6:   if converged then
 7:     return x^{k+1}
 8:   end if
 9:   k ← k + 1
10: end while
We have not experimented with preconditioners. It should be noted that PCG can handle a PSD matrix if, for instance, an incomplete Cholesky preconditioner is applied. GMRES would most likely benefit from the same type of preconditioner as the PCG method.

Newton methods can benefit from a good starting iterate. For the Newton methods that we have derived, using $\mathbf{x} = \mathbf{0}$ as the starting iterate will often be sufficient. For problem cases where PGS applies, a hybrid solution could be applied, taking advantage of the robustness and low iteration cost of PGS to compute a starting iterate for a Newton method within a few PGS iterations. If PGS is not applicable, then a nonsmooth gradient descent type of method can be used instead. To derive a general nonsmooth descent method, let us first generalize our notation by introducing $\mathbf{F}_\bullet(\mathbf{x})$ where $\bullet \in \{\text{FB}, \min, \lambda\}$. We can then rename our three reformulations (Fischer-Burmeister, minimum map, and penalized Fischer-Burmeister):

$$\mathbf{F}_{\min}(\mathbf{x}) = \mathbf{H}(\mathbf{x}), \qquad \mathbf{F}_{\text{FB}}(\mathbf{x}) = \mathbf{F}(\mathbf{x}), \qquad \mathbf{F}_\lambda(\mathbf{x}) = \mathbf{G}(\mathbf{x})$$

This allows us to write a single general definition for the gradient of the three nonsmooth functions:

$$\nabla\theta_\bullet(\mathbf{x}) = \mathbf{J}^T\mathbf{F}_\bullet(\mathbf{x}) \tag{2.199}$$

The negative gradient will be a descent direction, and we can do a projected line search (see Algorithm 10) along this direction. This leads to Algorithm 13. The nonsmooth gradient descent method consists of simple, sparse matrix-vector products, vector-vector additions, and vector-scalar multiplications. It is naively massively parallelizable.
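For the minimum map reformulation, the gradient (2.199) has a particularly simple element: row $i$ of the active Jacobian is the $i$th row of $\mathbf{A}$ when $(\mathbf{A}\mathbf{x}+\mathbf{b})_i < x_i$, and the $i$th unit row otherwise. A sketch with our own names follows; as a simplification it uses a fixed damped step (as mentioned below for warm starting) instead of a projected line search, plus a projection onto $\mathbf{x} \geq 0$.

```python
import numpy as np

def grad_theta_minmap(A, b, x):
    """Gradient J^T F for the minimum map reformulation, cf. (2.199)."""
    y = A.dot(x) + b
    F = np.minimum(y, x)
    # Row i of the active Jacobian: A_i if y_i < x_i, else the i-th unit row.
    J = np.where((y < x)[:, None], A, np.eye(len(b)))
    return J.T.dot(F)

def nonsmooth_gradient_descent(A, b, x, iters=3000, tau=0.04):
    """Algorithm 13 flavor with a fixed damped step instead of a line search."""
    for _ in range(iters):
        x = np.maximum(0.0, x - tau * grad_theta_minmap(A, b, x))
    return x
```

The fixed step must be small enough for the local quadratic pieces (roughly $\tau < 2/\|\mathbf{A}^T\mathbf{A}\|$); with a line search this tuning disappears.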
When used as a warm start for another method, it may be sufficient to simply run the nonsmooth gradient descent method for a small, fixed number of iterations, or until the residual norm $\|\mathbf{F}\|$ has dropped to a fraction of the initial value, say one tenth. Further, in some cases, simply using a damped search direction and setting $\tau^k$ to some user-defined constant $0 < \tau < 1$ is sufficient for warm starting. One important observation we have made is that the warm starting method must be consistent with the method it warm starts. By this we mean that the underlying mathematical model for both must be the same. For instance, PGS is based on a minimum map reformulation and, hence, is consistent with the minimum map Newton method, but not with the Fischer-Burmeister Newton method. The above nonsmooth gradient descent method offers a generic approach, where the warm start method always will be consistent with any method that it warm starts.

All the iterative methods we have introduced will at some point require termination criteria to test if a method has converged, or if some unrecoverable situation has been encountered. To monitor this convergence process, a numerical method uses a merit function. We already saw two definitions of merit functions for the Newton methods. For PGS, the QP reformulation may serve as the definition of the merit function. However, a modified merit function definition can be preferable:

$$\theta_{\text{QPC}}(\mathbf{x}) = \mathbf{x}^T|\mathbf{A}\mathbf{x} + \mathbf{b}| \tag{2.200}$$

This has the benefit that the merit function is bounded from below by 0, but also the drawback that it does not work for $\mathbf{x} = \mathbf{0}$. It is often a good principle to use not only one termination criterion, but rather a combination of them. For instance, an absolute termination criterion can be

$$\theta(\mathbf{x}^{k+1}) < \varepsilon_{\text{abs}} \tag{2.201}$$

for some user-specified tolerance $0 < \varepsilon_{\text{abs}} \ll 1$. In some cases convergence may be too slow.
In these cases a relative convergence test is convenient,

$$\left|\theta(\mathbf{x}^{k+1}) - \theta(\mathbf{x}^k)\right| < \varepsilon_{\text{rel}}\left|\theta(\mathbf{x}^k)\right| \tag{2.202}$$

for some user-specified tolerance $0 < \varepsilon_{\text{rel}} \ll 1$. A simple guard against the number of iterations exceeding a prescribed maximum helps avoid infinite looping. A stagnation test helps identify numerical problems in the iterate values,

$$\max_i\left|x_i^{k+1} - x_i^k\right| < \varepsilon_{\text{stg}} \tag{2.203}$$
for some user-specified tolerance $0 < \varepsilon_{\text{stg}} \ll 1$. This test usually only works "numerically" well for numbers close to 1. A rescaled version may be better for large numbers. Besides the above termination criteria, numerical properties can also be verified. For the Newton-type methods it can be helpful to verify that the Newton direction is a descent direction. Global convergence of the Newton methods often implies that they will converge to an iterate
with a zero gradient. This is not the same as having found a global minimizer. Thus, it may be insightful to test if the gradient of the merit function is close to 0 and halt if this is the case. What if the method does not converge to something meaningful? In off-line simulations it may be possible to restart a Newton method with a new starting iterate and hope that the "bad" behavior will be avoided. This may not be the best way to spend computing time, or it may even be impossible in an interactive simulator. One remedy is to fall back on using the negative gradient direction as the search direction whenever the method fails to have a well-defined Newton direction to work with. It should be noted that nondescent directions can occur in practice for real problems, for instance, in cases of ill-conditioned Newton subsystems, or in cases where iterative linear solvers are applied to solve the Newton subsystem. If the iterative linear solvers do not return an accurate enough solution for the Newton direction, or did not get enough time (iterations) to converge to a solution, then one may easily end up with a nondescent direction.
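The combined termination tests (2.201)-(2.203) can be bundled into one routine; a sketch with our own name `check_termination` (the maximum-iteration guard would live in the calling loop):

```python
import numpy as np

def check_termination(theta_new, theta_old, x_new, x_old,
                      eps_abs=1e-8, eps_rel=1e-6, eps_stg=1e-12):
    """Combined absolute (2.201), relative (2.202), and stagnation (2.203) tests.

    Returns the name of the first criterion that fires, or None to keep iterating.
    """
    if theta_new < eps_abs:
        return "absolute"
    if abs(theta_new - theta_old) < eps_rel * abs(theta_old):
        return "relative"
    if np.max(np.abs(x_new - x_old)) < eps_stg:
        return "stagnation"
    return None
```

Ordering matters: the absolute test is checked first so that a genuinely converged iterate is not misreported as mere stagnation.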
CHAPTER 3
Guide for Software and Selecting Methods

As promised in Chapter 1, we will try and provide a more detailed road map for how to select and navigate through the mathematically dense material covered in this book.
3.1 OVERVIEW OF NUMERICAL PROPERTIES OF METHODS COVERED
For a quick overview, Table 3.1 summarizes the methods we have covered in this book, as well as the worst-case time and storage complexities of the methods. Here follows a more detailed textual overview.

Direct methods are usually only computationally affordable for small problem sizes like $n = 2$, $n = 3$, or $n = 4$. This class of methods guarantees finding a solution (if a solution exists). These types of methods are ideal building blocks for methods that apply blocking and/or staggering.

Pivoting methods are similar to direct methods in many aspects, but they often use some clever way to enumerate iterates through pivoting. The specific method we show in this book only applies to matrices that are symmetric positive semi-definite; however, more general methods do exist, as we mention in the section on pivoting methods. Both direct and pivoting methods tend to be very good at dealing with poorly conditioned problems, compared to some of the other methods we cover in this book. For certain types of problems, like the large mass-ratio problems known from rigid-body simulations, a pivoting method will be a very good choice.

Splitting methods are extremely robust methods, although often not very accurate, and they tend to suffer from slow convergence rates. Even so, because these methods often have a very low per-iteration cost in terms of time and complexity (and storage when sparsity is utilized), they are highly favored in interactive simulators. In this book, we also take a different approach to deriving the usual PGS/PSOR variants of splitting methods. The alternative derivation provides us with very strong convergence properties when the LCP problem is symmetric positive definite. Blocking and staggering methods build on the ideas of splitting methods and follow as natural extensions of such methods.
Subspace minimization and nonlinear nonsmooth conjugate gradient methods both share the idea of using a splitting method as an internal building block. Both methods rely on a splitting method to either predict an active set or to define a residual. As we show, such concepts are rather powerful and can result in quite interesting algorithms that are simple to implement and inherit many of the robustness traits known from splitting methods.

Interior point methods are, to our knowledge, not yet fully explored. In our experience, they bear many similarities to the Newton methods but tend to have nicer subproblems, in the sense that they are numerically better conditioned.

Newton methods are not quite as robust as the splitting methods, but they come close. Newton methods tend to provide better convergence rates than the splitting methods, yielding overall very accurate methods. In our experience, Newton methods are often comparable in accuracy to that achieved by pivoting methods in interactive simulations. The Newton methods are very sensitive to the starting iterate, and it is hard to give a guarantee on the maximum number of iterations needed to provide a "good enough" feasible iterate. This makes Newton methods less attractive for interactive simulations. The book explores using projected line searches and nonsmooth gradient descent methods in combination with Newton methods, which altogether creates very nice numerical methods.

The different convergence properties of the methods covered in this book are listed in Table 3.2. Table 3.3 summarizes the most important numerical properties of the problem class.
3.2 EXISTING PRACTICE ON MAPPING MODELS TO METHODS
Often, one problem can be reformulated as another problem in order to find a suitable numerical method for solving the original problem. Table 3.4 contains an overview of the problem reformulations that we have covered in this book. For instance, in contact force problems, stating the model as a BLCP allows using the MLCP formulation of the dynamics, which avoids the inversion of the mass matrix needed to condense the MLCP into an LCP; see [Bender et al., 2012] for details. It is often a tradeoff of sparsity pattern vs. number of variables. Condensing MLCPs or BLCPs (when possible, see Section 1.1) often reduces the number of variables greatly but increases the fill-in of the coefficient matrix. This can affect conditioning, too. So, even though halving the number of variables reduces the size of the A-matrix to one quarter, it can change the fill-in from being linear to near quadratic. Hence, memory complexity goes from linear to quadratic in terms of problem size. The BLCP solvers are the most general and, as such, a more general-purpose hammer in the toolbox. However, their convergence theory can be rather non-supportive or even unknown. The LCP is, in our view, much more explored and provides essential convergence proofs in many cases; hence, it might be a heavier hammer in the tool-belt, but it is guaranteed to work whenever it can be applied.
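The condensation tradeoff just described can be sketched with a Schur complement: eliminating the unconstrained (free) variables of an MLCP yields a pure LCP whose matrix is typically much denser. This is an illustrative sketch; the function name and block layout are our own assumptions, and M11 must be invertible:

```python
import numpy as np

def condense_mlcp(M11, M12, M21, M22, q1, q2):
    """Condense a mixed LCP with free variables u (M11 u + M12 x + q1 = 0)
    and complementary variables x into a pure LCP (A, b) via the
    Schur complement. A is usually much denser than the blocks."""
    M11_inv_M12 = np.linalg.solve(M11, M12)
    M11_inv_q1 = np.linalg.solve(M11, q1)
    A = M22 - M21 @ M11_inv_M12   # Schur complement of M11
    b = q2 - M21 @ M11_inv_q1
    return A, b
```

In rigid-body contact problems, M11 plays the role of the (block-sparse) mass matrix, and the resulting A inherits the near-quadratic fill-in discussed above.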
Table 3.1: Overview of methods presented in this book with a quick summary of complexity and problem classes. (*) refers to per-iteration costs, as the number of iterations depends on a stopping criterion. Complexities are worst-case; in many cases, known matrix properties (like sparsity) can be used to achieve better complexity. (**) Problem classes not shown explicitly in this book, but the extension is not hard to make.

Category | Book Reference | Time Complexity | Storage Complexity | Problem Class
Direct Methods | Section 2.1.1 | O(2^n) | O(n^2) | LCP
Pivoting Methods | Section 2.1.2 | O(n^3) | O(n^2) | LCP
Splitting Methods | Section 2.2.1 | O(n)(*) | O(n^2)(*) | LCP, BLCP
Blocking Methods | Section 2.2.3 | O(n)(*) | O(n^2)(*) | LCP, BLCP
Staggering Method | Section 2.2.4 | O(n)(*) | O(n^2)(*) | LCP, BLCP
Subspace Minimization | Section 2.2.5 | O(n) | O(n^2) | BLCP
Nonlinear Nonsmooth Conjugate Gradient | Section 2.2.6 | O(n) | O(n^2) | BLCP
Interior Point Method | Section 2.3 | O(n^3) | O(n^2) | LCP
Minimum Map Newton Method | Section 2.4.1 | O(n^3) | O(n^2) | LCP, BLCP
Fischer-Burmeister Newton Method | Section 2.4.2 | O(n^3) | O(n^2) | LCP, BLCP
Penalized Fischer Newton Method | Section 2.4.3 | O(n^3) | O(n^2) | LCP
Nonsmooth Gradient Descent | Section 2.4.4 | O(n)(*) | O(n^2)(*) | LCP, BLCP(**)
Projected Line Search | Section 2.4.1 | O(n)(*) | O(n^2)(*) | LCP, BLCP(**)
3.2.1 EXISTING SOFTWARE SOLUTIONS
All methods covered in this book are implemented in, and freely available from, the Num4LCP library [Erleben, 2011]. This library is used by [RPI, 2014]. See Appendix D for more details on Num4LCP. As of this writing, the most general and most robust solver is considered by many to be PATH [Path, 2005]. This solver targets general nonlinear complementarity problems. The PATH solver is based on a Fischer-Burmeister reformulation and uses a Newton method with a non-monotone line-search method. Our method from Section 2.4.2 has many similarities but differs in that it is tailored for LCPs and BLCPs and uses a projected line search method. A Matlab/Python implementation of PATH exists in the library known as COMPASS [Schmelzer, 2014].
Table 3.2: Overview of best-known theoretical convergence behavior of methods presented in this book. A "—" means a convergence rate is not really applicable for that method. For most Newton-like methods the convergence behavior depends on the starting iterate and the accuracy of the solution of the Newton system. (*) No known theoretical convergence rate for the NNCG method; empirical studies show linear to superlinear convergence, and even locally quadratic convergence.

Category | Convergence Rate
Direct Methods | —
Pivoting Methods | —
Splitting Methods | Linear
Blocking Methods | Linear
Staggering Method | Linear
Subspace Minimization | —
Nonlinear Nonsmooth Conjugate Gradient | (*)
Interior Point Method | Superlinear
Minimum Map Newton Method | Superlinear/Quadratic
Fischer-Burmeister Newton Method | Superlinear/Quadratic
Penalized Fischer Newton Method | Superlinear/Quadratic
Nonsmooth Gradient Descent | Linear
Projected Line Search | —
Many simulation engines exist that have solvers implemented as subroutines. These solvers are often not easily isolated for use as general-purpose solvers for other problems. OpenTissue (www.opentissue.org) contains BLCP PSOR-type solvers supporting blocking and staggering, as well as a generic minimum map Newton method using an SVD sub-system solver. The SOFA project (http://www.sofa-framework.org) contains dense LCP solvers for GPUs, among other solvers for contact force problems. Jinngine (https://code.google.com/p/jinngine/) contains Java implementations of the PGS subspace minimization method and the NNCG method. Bullet (http://bulletphysics.org/wordpress/) contains blocked PGS-style solvers for BLCPs, and NNCG and Lemke's method are part of this library. ODE (http://www.ode.org) contains similar solvers. Siconos (http://siconos.gforge.inria.fr) provides more general nonlinear solvers. LMGC90 (https://subver.lmgc.univ-montp2.fr/trac_LMGC90v2/) is yet another simulator, offering splitting types of methods for solving mechanics. Many more simulators exist; a great many of these are surveyed at http://homepages.rpi.edu/~luy4/survey.html.
Table 3.3: Overview of the most important numerical properties that the matrix A must fulfill. For the (*) cases, the principal minor corresponding to the active set has to (in most situations) be non-singular.

Category | A-matrix Assumptions
Direct Methods | No assumptions
Pivoting Methods | Symmetric Positive Definite
Splitting Methods | No assumptions
QP Reformulation | Symmetric
Blocking Methods | No assumptions
Staggering Method | No assumptions
Subspace Minimization | No assumptions
Nonlinear Nonsmooth Conjugate Gradient | No assumptions
Interior Point Method | No assumptions(*)
Minimum Map Newton Method | No assumptions(*)
Fischer-Burmeister Newton Method | No assumptions(*)
Penalized Fischer Newton Method | No assumptions(*)
Nonsmooth Gradient Descent | No assumptions

3.3 FUTURE OF LCPS IN COMPUTER GRAPHICS
The computer graphics community seems to have decided that work on contact force modeling has matured to a point where not much more can be added. In the same spirit, most industrial production use-cases consider rigid bodies and dry isotropic Coulomb friction to be adequate models. Even given the approximations made, as described in Section 1.3.5, the less realistic models are not a big concern. The challenge appears more in finding numerical methods that are faster, more accurate, and robust, and which can work out of the box without too much parameter tuning. Most of the work referenced in this book explores different numerical methods. We speculate that multilevel methods will likely gain focus in the future, as they hold the promise of preserving robustness from splitting methods while obtaining convergence rates more similar to quasi-Newton methods. Multilevel methods are already being researched in the context of linear equations for fluid problems and for elastic materials. Interactive simulators from the computer graphics community are finding other uses in related fields, such as robotics and even computational contact mechanics. The fast computation times enable interactive design, virtual prototyping, and online planning for highly nonlinear motion planners. These new application domains may spark an interest in changing the contact models used to increase physical correctness. From a modeling perspective, more careful modeling of friction is an obvious avenue. Most current work is limited to dry isotropic Coulomb friction, which is rather limiting compared to the behavior of friction in the real world. Given the community's interest in developing numerical methods for solving the LCP or BLCP contact-force problem, there exists an increasing need for being able to validate and
Table 3.4: Overview of the problem reformulations covered in this book. Linear equations (LE), linear complementarity problems (LCP), mixed linear complementarity problems (MLCP), boxed linear complementarity problems (BLCP), quadratic programming (QP), variational inequalities (VI), proximal mappings (PROX), minimum map and Fischer-Burmeister functions.

Dimension | Reformulation | Equation
1D | LE to BLCP | (1.26)
1D | LCP to QP | (1.9)
1D | LCP to Minimum Map | (1.17)
1D | LCP to Fischer-Burmeister | (1.21)
1D | LCP to BLCP | (1.25)
1D | LCP to VI | (1.50)
1D | LCP to PROX | (1.53)
1D | BLCP to LCP | (1.68)
1D | BLCP to QP | (1.38)
1D | BLCP to Minimum Map | (1.43)
1D | BLCP to Fischer-Burmeister | (1.48)
nD | LE to BLCP | (1.71)
nD | LCP to QP | (1.58)
nD | LCP to Minimum Map | (1.60)
nD | LCP to Fischer-Burmeister | (1.62)
nD | LCP to BLCP | (1.72)
nD | BLCP to QP | (1.80)
nD | BLCP to Minimum Map | (1.92)
nD | BLCP to Fischer-Burmeister | (1.62)
nD | BLCP to LCP | (1.84)
nD | LCP to MLCP | (1.77)
nD | MLCP to BLCP | (1.77)
compare such methods. We know of no predominant methodology for comparing contact-force simulators, although attempts exist. The research community is starting to collect data and contemplate how to benchmark the various methods [BPMD, 2014]. At the BIRS workshop in 2014 [Acary et al., 2014], the world's leading researchers in contact force problems discussed and addressed the need for benchmarking. They concluded this to be a community effort that would go on for years. The group initiated a collection of benchmark cases, but only the future will tell if this will become the benchmarking "methodology." It is not trivial to compare the different numerical methods; it easily becomes a comparison of bananas to apples. Even for cases where models are exactly the same, two different numerical methods may use different merit functions. Hence, accuracy/efficiency cannot be compared only by comparing the numerical values of merit
functions. Nor would it make sense to force the same merit function to be used in both simulators, as that could cause an inconsistency with the problem reformulation being used. Most work on LCPs in computer graphics has revolved around contact force problems. However, as we have tried to motivate and inspire the reader in Section 1.3, complementarity is a quite general macroscopic phenomenon, and there may exist a whole world of problems to explore where LCPs or BLCPs form a natural and convenient model and where our book provides efficient solvers for such new emerging models. We hope the graphics community will take us up on this modeling challenge and provide interesting problem instances for further study in the future.
APPENDIX A

Basic Calculus

A.1 ORDER NOTATION
Order notation may appear confusing when encountered in textbooks about optimization or partial differential equations. This text is written to help students gain a better intuition about order notation. Hopefully, the text will not do the contrary.
A.1.1 WHAT IS A LIMIT?
We will start by defining what we mean by taking a limit in the general sense.

Definition A.1  Given the real vector function f(x): R^n → R^m and the constant c ∈ R^m. If, for any given real number ε > 0, there exists a real number δ > 0 such that

    if ‖x − x*‖ ≤ ε then ‖c − f(x)‖ ≤ δ    (A.1)

holds, then we write

    lim_{x→x*} f(x) = c.    (A.2)
The above definition may seem a bit mathematical. However, what the definition expresses is that a limit of f(x) means that, no matter how we let x approach x*, we will always have that f(x) approaches c. As such, it is a very strong statement about how f(x) behaves near x*. In terms of geometry, what this definition says is that, if we look at a ball around c of radius δ and a ball around x* of radius ε,

    B_c(δ) = { y | ‖y − c‖ ≤ δ },    (A.3a)
    B_{x*}(ε) = { x | ‖x − x*‖ ≤ ε },    (A.3b)

then f(x) ∈ B_c(δ) for all x ∈ B_{x*}(ε). The reader should notice that the number ε is a "magic" number in the sense that it is chosen arbitrarily. Given any such ε-number, one has to prove the existence of a δ that satisfies (A.1).

Example A.2  Consider the statement

    lim_{x→0} x^n = 0    (A.4)
for any n ∈ Z where n ≥ 1. By making the choice x* = 0 and c = 0, we recognize Definition A.1. Let us determine if the statement is true. Given ε > 0 and

    ‖x‖ ≤ ε,    (A.5)

we must find a δ such that

    ‖x^n‖ ≤ δ.    (A.6)

We note that this holds if we set

    δ = ε^n.    (A.7)
Examples like the one above lead to known limits of certain archetype functions. One can create a table of such known results:

    lim_{h→0} h^n = 0,    (A.8a)
    lim_{h→0} 1/h^n = ∞.    (A.8b)
As the reader may observe, Definition A.1 works very well as a tool for proving whether a given limit is true.

Example A.3  Given

    lim_{x→x*} f(x) = a    (A.9)

and

    lim_{x→x*} g(x) = b,    (A.10)

consider h(x) = f(x) + g(x). Let x → x*; then the limit is

    lim_{x→x*} h(x) = lim_{x→x*} (f(x) + g(x)) = a + b.    (A.11)

Examples like the above lead to a set of rules of calculus we can apply when working with limits. The two most often encountered are

    lim_{x→x*} (f(x) + g(x)) = lim_{x→x*} f(x) + lim_{x→x*} g(x),    (A.12a)
    lim_{x→x*} (k f(x)) = k lim_{x→x*} f(x).    (A.12b)

Example A.4
For the first rule, given ε such that

    ‖x − x*‖ ≤ ε,    (A.13)

then, by the existence of the limits of f(x) and g(x), we know

    ‖f(x) − a‖ ≤ δ_f,    (A.14a)
    ‖g(x) − b‖ ≤ δ_g.    (A.14b)

Consider the limit of f(x) + g(x):

    ‖f(x) + g(x) − a − b‖ = ‖(f(x) − a) + (g(x) − b)‖    (A.15a)
                          ≤ ‖f(x) − a‖ + ‖g(x) − b‖    (A.15b)
                          ≤ δ_f + δ_g.    (A.15c)

Thus, we can set δ = δ_f + δ_g. This proves the first rule. For the second rule, given ε such that

    ‖x − x*‖ ≤ ε,    (A.16)

then, by the limit of f(x), we have

    ‖f(x) − a‖ ≤ δ_f.    (A.17)

For kf(x):

    ‖k f(x) − k a‖ = |k| ‖f(x) − a‖ ≤ |k| δ_f.    (A.18)

Thus, by setting δ = |k| δ_f we have proven the second rule. Once we have built a set of rules of calculus with limits and a table of known limit values, we may exploit this knowledge to take shortcuts when deriving and proving limits.
A.1.2 THE SMALL-o NOTATION
We will use the small-o notation, o(·). In the following, we will define the notation and give some examples of its usage.

Definition A.5  Let φ(x): R → R be a real function. We write

    φ(x) = o(x − x*)  for  x → x*    (A.19)

if

    lim_{x→x*} φ(x)/(x − x*) = 0.    (A.20)

Example A.6
Let h ∈ R and m, n ∈ Z⁺ with n > m; let us prove

    h^n = o(h^m).    (A.21)

By the definition of the small-o notation, we have

    lim_{h→0} h^n/h^m = lim_{h→0} h^{n−m} = 0;    (A.22)

thus, h^n = o(h^m) is true.

Example A.7
Let φ: R → R and assume that

    φ(h) = o(1);    (A.23)

by definition that means

    lim_{h→0} φ(h)/1 = lim_{h→0} φ(h) = 0.    (A.24)

In other words, φ(h) = o(1) means that φ → 0 when h → 0. Let us study another example.

Example A.8
Let φ(h): R → R, ψ(h): R → R, and assume

    φ(h) = o(ψ(h)).    (A.25)

Given the constant c ∈ R, we will prove that

    c o(ψ(h)) = o(ψ(h)).    (A.26)

By definition, c o(ψ(h)) = c φ(h). We divide this by ψ(h) and check that the limit is 0:

    lim_{h→h*} c φ(h)/ψ(h) = c lim_{h→h*} φ(h)/ψ(h) = 0.    (A.27)

The last step can be done since φ(h) = o(ψ(h)), so we know the limit is 0. The example suggests that we can discover a set of rules for making calculations with the small-o notation simply by applying the definition of the small-o notation. The following rules can be derived using the same line of thought:

    c o(ψ(h)) = o(ψ(h)),    (A.28a)
    o(ψ(h)) + o(ψ(h)) = o(ψ(h)),    (A.28b)
    ψ(h) o(ψ(h)) = o(ψ(h)²),    (A.28c)
    (1/ψ(h)) o(ψ(h)) = o(1).    (A.28d)

Example A.9
Given f(x): R → R and the Taylor series expansion around x,

    f(x + h) = f(x) + f′(x)h + (1/2) f″(x)h² + (1/6) f‴(x)h³ + ⋯    (A.29)

By re-arranging, we have

    f(x + h) − f(x) − f′(x)h = (1/2) f″(x)h² + ⋯    (A.30)
We will consider the case h → 0. We observe that, for the right-hand side, we have

    lim_{h→0} ((1/2) f″(x)h² + ⋯)/h = lim_{h→0} ((1/2) f″(x)h + (1/6) f‴(x)h² + ⋯) = 0.    (A.31)

Thus, by the definition of the small-o notation, we have

    f(x + h) = f(x) + f′(x)h + o(h)    (A.32)
for h → 0.

Remark A.10  The last example is interesting, since it implicitly holds the definition of the derivative. Re-arranging, we have

    (f(x + h) − f(x))/h = f′(x) + o(1).    (A.33)

Now take the limit h → 0:

    f′(x) = lim_{h→0} (f(x + h) − f(x))/h.    (A.34)
Higher-Dimensional Definitions
The small-o notation can be extended to work for vector functions as well. The general definition can be stated as follows.

Definition A.11  Let φ(x): R^n → R^m be a real vector function. We write

    φ(x) = o(‖x − x*‖)  for  x → x*    (A.35)

if

    lim_{x→x*} φ(x)/‖x − x*‖ = 0.    (A.36)
The purpose of using the norm is to create a real number, since we cannot divide by the vector x − x*; for the case m = n = 1 this has a side-effect.

Definition A.12  Let φ(x): R → R be a real function. We write

    φ(x) = o(|x − x*|)  for  x → x*    (A.37)

if

    lim_{x→x*} φ(x)/|x − x*| = 0.    (A.38)

This is not wrong. However, for the case m = n = 1, we want to use the more general definition.
Definition A.13  Let φ(x): R → R be a real function. We write

    φ(x) = o(x − x*)  for  x → x*    (A.39)

if

    lim_{x→x*} φ(x)/(x − x*) = 0.    (A.40)

Remark A.14  The reader should observe:
• the limit is always taken for something going to 0; and
• that something can go to 0 in any way.

This is important since other definitions of the o-notation only allow something positive to go to 0. Thus, our definition is more general.

Example A.15
Assume we are given

    z_k − x* = t_k w + o(t_k)    (A.41)

together with a function f: R^n → R. The first-order Taylor series expansion around x* is

    f(z_k) = f(x*) + f′(x*)ᵀ(z_k − x*) + o(‖z_k − x*‖).    (A.42)

Substitution yields

    f(z_k) = f(x*) + f′(x*)ᵀ(t_k w + o(t_k)) + o(‖t_k w + o(t_k)‖).    (A.43)

A constant multiplied by o(t_k) is still o(t_k), so

    f(z_k) = f(x*) + t_k f′(x*)ᵀw + o(t_k) + o(‖t_k w + o(t_k)‖).    (A.44)

Assume that γ_k = o(‖t_k w + o(t_k)‖) for w ≠ 0; then γ_k = o(t_k), since

    0 = lim_{k→∞} γ_k/‖t_k w + o(t_k)‖ = lim_{k→∞} γ_k/(t_k ‖w + o(1)‖) = (1/‖w‖) lim_{k→∞} γ_k/t_k,    (A.45)

and since o(t_k) + o(t_k) = o(t_k), we have

    f(z_k) = f(x*) + t_k f′(x*)ᵀw + o(t_k).    (A.46)
Handling Sequences
The small-o notation we have introduced can be specialized for sequences of real numbers.

Definition A.16  Given two infinite sequences of reals, {a_k} and {b_k}, we write

    a_k = o(b_k)    (A.47)

if the sequence of ratios {a_k/b_k} approaches 0 as k approaches ∞. That is, we have

    lim_{k→∞} a_k/b_k = 0.    (A.48)
It is helpful to use (A.48) as the defining equation of the small-o notation.

Example A.17  As an example, consider what it means if we have two infinite sequences of reals, {z_k} and {h_k}, and we write

    z_k = o(h_k)/h_k.    (A.49)

By algebraic manipulation we have

    z_k h_k = o(h_k).    (A.50)

By the defining equation (A.48) we have

    lim_{k→∞} z_k h_k/h_k = lim_{k→∞} z_k/1 = 0.    (A.51)

We reach the conclusion that, by the definition of small-o notation, we must have

    z_k = o(1).    (A.52)

In other words, we have discovered the calculus rule (A.28d) in this special context:

    o(h_k)/h_k = o(1).    (A.53)

Taking the limit as k goes to infinity of (A.52), we have by (A.51)

    lim_{k→∞} o(1) = lim_{k→∞} z_k = 0,    (A.54)

which means that o(1) goes to 0 in the limit as k goes to ∞.

Example A.18  As another example, consider what it means if we have two non-negative infinite sequences of scalars, {z_k} and {h_k}, and we write

    z_k = h_k o(h_k).    (A.55)
Again, by algebraic manipulation, we have

    z_k/h_k = o(h_k).    (A.56)

By the definition of the small-o notation, we have

    lim_{k→∞} (z_k/h_k)/h_k = lim_{k→∞} z_k/h_k² = 0,    (A.57)

which, by definition of small-o notation, means

    z_k = o(h_k²).    (A.58)

From this we have derived the calculus rule (A.28c) for this special context:

    h_k o(h_k) = o(h_k²).    (A.59)
The above examples generalize, and without further proof we state them in general form as the following theorems.

Theorem A.19  Given two infinite sequences of scalars, {a_k} and {b_k}, we have

    a_k = b_k^q o(b_k^p) = o(b_k^{p+q})    (A.60)

for any p, q ∈ Z.

Theorem A.20  Given an infinite sequence of scalars, {a_k}, where a_k = o(1), then

    lim_{k→∞} o(1) = 0.    (A.61)

Of course, a similar approach can be used for dealing with sequences of vectors rather than just real numbers. We leave this for the reader.
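The sequence form of the definition is easy to probe numerically. For instance, a_k = 1/k² is o(b_k) with b_k = 1/k, because the ratio in (A.48) is 1/k, which shrinks toward 0 (a small illustrative check):

```python
# a_k = 1/k^2 is o(b_k) with b_k = 1/k: the ratios a_k/b_k = 1/k tend to 0
ratios = [(1.0 / k**2) / (1.0 / k) for k in (10, 100, 1000)]
```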
A.1.3 THE BIG-O NOTATION
We also use the big-O notation, O(·). In the following, we define the notation and give some examples of its usage. The big-O notation can be phrased similarly to the small-o notation.

Definition A.21  Let φ(x): R → R be a real function. We write

    φ(x) = O(x − x*)  for  x → x*    (A.62)
if there exist C > 0 and ε > 0 such that

    |φ(x)| ≤ C|x − x*|  for all  x ∈ [x* − ε, x* + ε].    (A.63)

When φ(x): R^n → R^m, we write

    φ(x) = O(‖x − x*‖)  for  x → x*    (A.64)

if there exist C > 0 and ε > 0 such that

    ‖φ(x)‖ ≤ C‖x − x*‖  for all x where ‖x − x*‖ ≤ ε.    (A.65)
If we always let the argument of the small-o and big-O notation go to 0, then we can omit writing the x → x* part in the definitions. As with the small-o notation, the big-O notation also extends straightforwardly to sequences.

Definition A.22  Given two infinite sequences of real numbers, {a_k} and {b_k}, we write

    a_k = O(b_k)    (A.66)

if there is a positive constant C such that

    ‖a_k‖ ≤ C‖b_k‖    (A.67)
for all k sufficiently large. In a manner of speaking, this means that a_k can be sandwiched by ±C b_k for all k sufficiently large.

Example A.23  As an example, consider what it means if we have an infinite sequence of real numbers, {z_k}, and we write

    z_k = O(1).    (A.68)

By definition, there exists a positive constant C such that

    ‖z_k‖ ≤ C‖1‖ = C,    (A.69)

and this must hold for all k. We often use the big-O notation to say something about the worst behavior of a quantity. If, for instance, we say that the error is O(h²), that actually means that the error can grow as badly as C h² for some positive constant C. However, it does not say anything about how "good" the error can be. As such, the big-O notation is a very loose bound on how a sequence or function grows.
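This growth-rate reading can be seen numerically: by (A.32), the forward-difference error of a smooth function behaves like O(h), so halving h should roughly halve the error (illustrative check; the helper name is ours):

```python
import numpy as np

def forward_diff_error(f, df, x, h):
    """Absolute error of the forward difference (f(x+h) - f(x))/h,
    which Taylor expansion predicts to be O(h) as h -> 0."""
    return abs((f(x + h) - f(x)) / h - df(x))

# halving h roughly halves the O(h) forward-difference error
errs = [forward_diff_error(np.sin, np.cos, 1.0, h) for h in (1e-2, 5e-3, 2.5e-3)]
ratios = [errs[i] / errs[i + 1] for i in range(2)]
```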
A.2 LIPSCHITZ FUNCTIONS

Theorem A.24  Triangle Inequality: Let x and y be vectors. Then the triangle inequality is given by

    ‖x‖ − ‖y‖ ≤ ‖x + y‖ ≤ ‖x‖ + ‖y‖.    (A.70)

Geometrically, the right-hand part of the triangle inequality states that the sum of the lengths of any two sides of a triangle is greater than the length of the remaining side. A generalization is

    ‖Σ_{k=1}^n a_k‖ ≤ Σ_{k=1}^n ‖a_k‖.    (A.71)
In mathematics, more specifically in real analysis, Lipschitz continuity, named after Rudolf Lipschitz, is a smoothness condition for functions that is stronger than regular continuity. Intuitively, a Lipschitz continuous function is limited in how fast it can change; a line joining any two points on the graph of this function will never have a slope steeper than a certain number, called the Lipschitz constant of the function.

Definition A.25  Lipschitz Continuity: Let a real function f(x) be defined on an interval I and suppose we can find two positive constants C and α such that

    ‖f(x) − f(y)‖ ≤ C‖x − y‖^α    (A.72)

for all x and y in I. Then f is called Lipschitz continuous, or is said to satisfy a Lipschitz condition of order α, and we say that f ∈ Lip(α). The smallest such C is called the Lipschitz constant of the function f. The function is called locally Lipschitz continuous if, for every x in I, there exists a neighborhood N(x) so that f restricted to N(x) is Lipschitz continuous. A generalization of Lipschitz continuity is called Hölder continuity.

Example A.26
Take f(x) = x on the interval I = [a, b]. Then, for x and y in I,

    ‖f(x) − f(y)‖ = ‖x − y‖.    (A.73)

From this we see that f ∈ Lip(1) with C = 1.

Example A.27  Take f(x) = x² on the interval I = [a, b]. Then, for x and y in I,

    ‖f(x) − f(y)‖ = ‖x² − y²‖ = ‖x − y‖ ‖x + y‖ ≤ C‖x − y‖,    (A.74)

where C = 2 max(‖a‖, ‖b‖). In conclusion, f ∈ Lip(1).
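The constants in the two examples above can be probed numerically by maximizing secant slopes over sample points (an illustrative check; the helper name is ours):

```python
import numpy as np

def slope_bound(f, xs):
    """Largest secant slope |f(x) - f(y)| / |x - y| over the sample
    points; a lower bound on the Lipschitz constant of f there."""
    best = 0.0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            best = max(best, abs(f(xs[i]) - f(xs[j])) / abs(xs[i] - xs[j]))
    return best

xs = np.linspace(-1.0, 1.0, 201)
abs_slope = slope_bound(np.abs, xs)     # f(x) = |x| has Lipschitz constant 1
sq_slope = slope_bound(np.square, xs)   # f(x) = x^2 on [-1, 1]: constant 2
```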
Example A.28  Take f(x) = 1/x on the interval I = ]0, 1[. Then, for x and y in I,

    |f(x) − f(y)| = |1/x − 1/y|    (A.75a)
                  = |(y − x)/(xy)|    (A.75b)
                  = |1/(xy)| |y − x|    (A.75c)
                  = |1/(xy)| |x − y|    (A.75d)
                  = C |x − y|.    (A.75e)

If x or y goes to 0, then C goes to ∞. Thus, f ∉ Lip(α) for any value of α.

Example A.29  The function f(x) = |x|, defined on the interval of all the real numbers, satisfies

    |f(x) − f(y)| = ||x| − |y|| ≤ |x − y|.    (A.76)

Thus, f is Lipschitz continuous with the Lipschitz constant equal to 1. This is an example of a Lipschitz continuous function that is not differentiable (f′(0) does not exist).

Theorem A.30
Lipschitz Condition is Linear: Lip(α) is a linear space. That is, given f, g ∈ Lip(α), then

    (f + g)(x) ∈ Lip(α).    (A.77)

Proof. Since f ∈ Lip(α),

    |f(x) − f(y)| ≤ C_f |x − y|^α,    (A.78)

and since g ∈ Lip(α), we have

    |g(x) − g(y)| ≤ C_g |x − y|^α.    (A.79)

If (f + g) ∈ Lip(α), then we must prove

    |(f(x) + g(x)) − (f(y) + g(y))| ≤ C|x − y|^α.    (A.80)

We re-write the left-hand side:

    |(f(x) + g(x)) − (f(y) + g(y))| = |(f(x) − f(y)) + (g(x) − g(y))|.    (A.81)

By the triangle inequality (A.70), we have

    |(f(x) + g(x)) − (f(y) + g(y))| ≤ |f(x) − f(y)| + |g(x) − g(y)|.    (A.82)

Since f, g ∈ Lip(α), we can rewrite the right-hand side to obtain

    |(f(x) + g(x)) − (f(y) + g(y))| ≤ C_f|x − y|^α + C_g|x − y|^α = (C_f + C_g)|x − y|^α.    (A.83)

Setting C = C_f + C_g proves (A.80) as wanted.
Theorem A.31  Constantness: If f ∈ Lip(α) with α > 1, then f is constant.

Proof. Let us examine what happens with the term |x − y|^α as x and y approach each other:

    lim_{x→y} |x − y|^α = lim_{y→x} |x − y|^α = 0.    (A.84)

Thus, the right-hand side of

    |f(x) − f(y)| ≤ C|x − y|^α    (A.85)

will be 0 in the limit regardless of the value of C. The only possible way the left-hand side can be less than or equal to 0 when x ≠ y is if f(x) = f(y). This implies that f must be constant for all values of x and y.

Theorem A.32
Lipschitz Condition implies Continuity: If f ∈ Lip(α) on some interval I, then f is continuous on I.

Proof. For any two points x and c in I, we have

    |f(x) − f(c)| ≤ C|x − c|^α.    (A.86)

We will now prove that f is continuous at the point c by taking the limit as x → c:

    lim_{x→c} |f(x) − f(c)| ≤ C lim_{x→c} |x − c|^α = 0.    (A.87)

This implies

    lim_{x→c} f(x) = f(c).    (A.88)

This proves that f is continuous at c; since c was chosen arbitrarily in I, f is continuous on all of I.
is proves that f is continous at c and c was chosen arbitrarily in I , thus, f is continous on all of I . eorem A.33 Bounded derivative implies Lipschitz Condition: If f possesses a derivative satisfying jf 0 j C , then f 2 Lip.1/.
Proof. By the Mean-Value eorem f .x/ x
f .y/ D f 0 .c/ y
(A.89)
for some c in the interval x; yŒ. From this we have k f .x/ kx
f .y/ k Dk f 0 .c/ k yk
(A.90)
is implies k f .x/
f .y/ kDk f 0 .c/ kk x
yk
(A.91)
A.3. DERIVATIVES
127
0
If f .c/ exists and is bounded by C k f 0 .c/ k C
(A.92)
then k f .x/
f .y/ k C k x
yk
(A.93)
which implies that f 2 Lip.1/. Definition A.34
Lipschitz Function: A real function f such that

    |f(x) − f(y)| ≤ C|x − y|    (A.94)

for all x and y, where C is a constant independent of x and y, is called a Lipschitz function.

Theorem A.35  Any function with a bounded first derivative must be Lipschitz.

Proof. Follows from Theorem A.33, since if f ∈ Lip(1) then f is Lipschitz.

Definition A.36  Directional Derivative: The directional derivative of the real-valued vector function f(x): R^n → R^m at the point x in the direction d is denoted by

    f′(x; d) = lim_{τ→0⁺} (f(x + τd) − f(x))/τ,    (A.95)

provided the limit exists.
A.3 DERIVATIVES

Definition of the derivative of a real-valued function f(x): R → R: there exists a ∈ R such that

    lim_{t→0} (f(x + t) − f(x) − at)/t = 0.    (A.96)

The G-derivative of F(x): R^m → R^n: given h ∈ R^m, there exists a linear operator A ∈ R^{n×m} such that

    lim_{t→0} (F(x + th) − F(x) − tAh)/t = 0.    (A.97)

Observe that the G-derivative is a directional derivative. The F-derivative of F(x): R^m → R^n: for any h ∈ R^m, there exists a linear operator A ∈ R^{n×m} such that

    lim_{h→0} (F(x + h) − F(x) − Ah)/‖h‖ = 0.    (A.98)

Note that the F-derivative limit is uniform in h; that is, the limit holds no matter how h approaches 0. In the case of the G-derivative, x + th approaches x along the direction of h; this corresponds to simply shrinking th to 0.

A function is said to be positive homogeneous of degree k if and only if

    F(αv) = α^k F(v),    (A.99)

where 0 < k ∈ N and 0 < α ∈ R. Notice that for k = 1 we get "half" the definition of a linear function,

    F(αv + βw) = αF(v) + βF(w).    (A.100)

The Bouligand derivative of φ(x): R² → R: for any x ∈ R², there exists a positively homogeneous function Bφ(x, ·): R² → R such that

    lim_{‖Δx‖→0} (φ(x + Δx) − φ(x) − Bφ(x, Δx))/‖Δx‖ = 0.    (A.101)

The Bouligand derivative is almost the same as the Fréchet derivative, except that the Fréchet derivative is linear while the Bouligand derivative is only positively homogeneous. Notice that the different kinds of differentiability are related:

    F-differentiable ⟹ B-differentiable  and  F-differentiable ⟹ G-differentiable.    (A.102)
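These one-sided derivatives matter for the nonsmooth functions in this book; e.g., f(x) = max(x₁, x₂), the building block of the minimum/maximum map, has direction-dependent one-sided derivatives at its kink. A numerical probe of (A.95) (illustrative sketch; the helper name is ours):

```python
import numpy as np

def directional_derivative(f, x, d, tau=1e-6):
    """One-sided numerical estimate of f'(x; d), cf. (A.95)."""
    return (f(x + tau * d) - f(x)) / tau

# f(x) = max(x1, x2) is only directionally differentiable where x1 == x2
f = lambda x: max(x[0], x[1])
x = np.array([1.0, 1.0])
g1 = directional_derivative(f, x, np.array([1.0, 0.0]))   # moving x1 up raises the max
g2 = directional_derivative(f, x, np.array([-1.0, 0.0]))  # moving x1 down leaves max at x2
```

The two estimates differ (roughly 1 and 0), showing that no single linear operator, and hence no F-derivative, exists at the kink.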
APPENDIX B

First-Order Optimality Conditions

We include this appendix to help this book be self-contained. For a thorough treatment of first-order optimality conditions, we recommend Nocedal and Wright [1999]. We will start with the "mother" problem of all problems, the constrained minimization problem.

Definition B.1  Given a continuously differentiable objective function f(x): R^n → R and a continuously differentiable constraint function c(x): R^n → R^m, the problem

    min_x f(x)    (B.1)

subject to

    c_i(x) = 0  ∀i ∈ E,    (B.2a)
    c_i(x) ≥ 0  ∀i ∈ I,    (B.2b)

is called a constrained minimization problem. We call f the objective function, while c_i, i ∈ E, are equality constraints and c_i, i ∈ I, are inequality constraints. The constraint index sets E and I are subsets,

    E ⊆ {1, …, m},    (B.3a)
    I ⊆ {1, …, m},    (B.3b)

where

    E ∩ I = ∅,    (B.4a)
    E ∪ I = {1, …, m}.    (B.4b)

A special index set, called the active set, is often used; it includes the indices of all constraints where equality currently holds. Thus, the active set is dependent on the value of x.

Definition B.2  The active set, A, at x is defined as

    A(x) = E ∪ {i ∈ I | c_i(x) = 0}.    (B.5)

A point x is said to be feasible if the point satisfies the constraints of the problem,
    c_i(x) = 0  ∀i ∈ E,    (B.6a)
    c_i(x) ≥ 0  ∀i ∈ I,    (B.6b)

and non-feasible otherwise. The inequality constraint i ∈ I is said to be active at a feasible point x if c_i(x) = 0, and inactive if strict inequality holds, c_i(x) > 0. The set of all feasible points is called the feasible region.

Definition B.3  The feasible region is defined as

    Ω = {x | c_i(x) = 0 ∀i ∈ E ∧ c_i(x) ≥ 0 ∀i ∈ I}.    (B.7)
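Definition B.2 and the feasibility test translate directly into code; a small sketch (function names are ours) that classifies constraints given their evaluated values:

```python
import numpy as np

def active_set(c_vals, eq_idx, ineq_idx, tol=1e-10):
    """Active set A(x) of Definition B.2: all equality constraints plus
    the inequality constraints that hold with equality at x."""
    return sorted(set(eq_idx) | {i for i in ineq_idx if abs(c_vals[i]) <= tol})

def is_feasible(c_vals, eq_idx, ineq_idx, tol=1e-10):
    """Feasibility in the sense of (B.6): equalities hold, inequalities >= 0."""
    return all(abs(c_vals[i]) <= tol for i in eq_idx) and \
           all(c_vals[i] >= -tol for i in ineq_idx)

# constraint values at some x: c0 = 0 (equality), c1 >= 0 active, c2 >= 0 inactive
c_vals = np.array([0.0, 0.0, 3.5])
```

The tolerance is needed in practice because floating-point constraint values are rarely exactly zero.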
The set of non-feasible points is called the non-feasible region. We can define different types of solutions x* for the minimization problem as follows.

• A point x* is a global minimizer if f(x*) ≤ f(x) for all x ∈ Ω.
• A point x* is a local minimizer if there is a neighborhood N of x* such that f(x*) ≤ f(x) for all x ∈ N ∩ Ω.
• A point x* is a strict local minimizer if there is a neighborhood N of x* such that f(x*) < f(x) for all x ∈ N ∩ Ω with x ≠ x*.

The necessary conditions for a first-order optimal solution are often used as a building block for methods that solve the minimization problem, or as a tool for verifying whether a given solution is truly a solution under first-order optimality.

Definition B.4  Given a local solution x* for the minimization problem in Definition B.1, then there is a Lagrange multiplier vector λ* such that the following conditions are satisfied at (x*, λ*):

    ∇f(x*) − Σ_{i∈E∪I} λ_i* ∇c_i(x*) = 0,    (B.8a)
    c_i(x*) = 0  ∀i ∈ E,    (B.8b)
    c_i(x*) ≥ 0  ∀i ∈ I,    (B.8c)
    λ_i* ≥ 0  ∀i ∈ I,    (B.8d)
    λ_i* c_i(x*) = 0  ∀i ∈ E ∪ I.    (B.8e)
Remark B.5  The conditions (B.8e) are known as the complementarity conditions. They state that either the constraint c_i(x*) is active, or λ_i* = 0, or possibly both. These conditions act as a kind of switch, only turning on the constraints that have influence at x*, corresponding to λ_i* ≠ 0. All other constraints are turned off, corresponding to λ_i* = 0.
Strict complementarity means that exactly one of λ_i* and c_i(x*) is 0 for each i ∈ I.

Definition B.6  Given a local solution x* and a vector λ* satisfying Definition B.4, then we have strict complementarity if

    λ_i* > 0  ∀i ∈ I ∩ A(x*).    (B.9)

Definition B.7  The Lagrangian function is defined as

    L(x, λ) = f(x) − Σ_{i∈E∪I} λ_i c_i(x).    (B.10)
Using the Lagrangian function we observe that rx L.x; / D rf .x/
X i2E [I
i rci .x/
(B.11)
us, this shorthand notation is often used for writing the first condition (B.8a) for first-order optimality as rx L.x ; / D 0 (B.12) One may write the first-order optimality conditions in a more compact form using matrix-vector notation, Remark B.8
$\nabla f(x^*) - \nabla c(x^*)^T \lambda^* = 0$   (B.13a)
$c(x^*) \ge 0$   (B.13b)
$\lambda^* \ge 0$   (B.13c)
$(\lambda^*)^T c(x^*) = 0$   (B.13d)

Here we choose $\mathcal{E} = \emptyset$. Inequalities are component-wise.

Remark B.9 The first-order optimality conditions may not look very intuitive at first sight. In the following, we try to present some intuition behind the conditions. For now, we only consider the case of a single equality constraint, $c(x)$. We have a feasible point $x$ and want to verify whether the point is a local minimizer. We do this by determining whether we can take a small step $\Delta x$ and add it to $x$ such that the new point $x + \Delta x$ is feasible and has a lower function value than $x$. If no such step $\Delta x$ can be found, then $x$ must be a local minimizer. By Taylor's Theorem,
$c(x + \Delta x) = c(x) + \nabla c(x)^T \Delta x + O(\|\Delta x\|^2)$   (B.14a)
$f(x + \Delta x) = f(x) + \nabla f(x)^T \Delta x + O(\|\Delta x\|^2)$   (B.14b)
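The role of the remainder term in (B.14) can be seen numerically. The following sketch (an illustrative example chosen here, not taken from the text) shows that the first-order remainder of $f(x) = x^2$ shrinks quadratically with the step size:

```python
# Numeric illustration of the first-order Taylor expansions (B.14):
# for f(x) = x^2 the remainder f(x+dx) - f(x) - f'(x)*dx equals dx^2
# (up to floating-point rounding), so it shrinks quadratically.

def f(x):
    return x * x

def grad_f(x):
    return 2.0 * x

x = 3.0
remainders = []
for dx in (1.0, 0.1, 0.01):
    r = f(x + dx) - (f(x) + grad_f(x) * dx)
    remainders.append(r)

print(remainders)  # approximately [1.0, 0.01, 0.0001]
```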
B. FIRST-ORDER OPTIMALITY CONDITIONS
giving us the first-order approximations
$c(x + \Delta x) \approx c(x) + \nabla c(x)^T \Delta x$   (B.15a)
$f(x + \Delta x) \approx f(x) + \nabla f(x)^T \Delta x$   (B.15b)
If we have an equality constraint, then $c(x) = 0$ and we want $\Delta x$ to be such that the constraint is not violated, meaning that $c(x + \Delta x) = 0$. Inserting this into the first-order approximation yields
$0 = \nabla c(x)^T \Delta x$   (B.16)
If $x + \Delta x$ is to have a lower function value than $x$, then $f(x + \Delta x) < f(x)$. By our approximation,
$f(x + \Delta x) = f(x) + \nabla f(x)^T \Delta x$   (B.17a)
$f(x + \Delta x) - f(x) = \nabla f(x)^T \Delta x$   (B.17b)
$\nabla f(x)^T \Delta x < 0$   (B.17c)
We now have two conditions that $\Delta x$ must fulfill:
$\nabla c(x)^T \Delta x = 0$   (B.18a)
$\nabla f(x)^T \Delta x < 0$   (B.18b)
If such a $\Delta x$ exists, then $x + \Delta x$ is a feasible point with a lower function value than $x$, meaning that $x$ cannot be a feasible local minimizer. The only way to ensure that no such $\Delta x$ can be found is if $\nabla c(x)$ and $\nabla f(x)$ are parallel, that is, if
$\nabla f(x) = \lambda \nabla c(x)$   (B.19)
for some multiplier $\lambda$.

Remark B.10 To gain more intuition about the first-order optimality conditions, we now consider the case of a single inequality constraint. We use the same approach as for the equality case. First-order feasibility means
$0 \le c(x + \Delta x) \approx c(x) + \nabla c(x)^T \Delta x$   (B.20)
that is,
$0 \le c(x) + \nabla c(x)^T \Delta x$   (B.21)
Two cases exist depending on the point $x$: either $c(x) = 0$ or $c(x) > 0$. We will examine both cases in turn. In the first case, we have $c(x) = 0$ and the first-order feasibility condition reduces to
$0 \le \nabla c(x)^T \Delta x$   (B.22)
As in the case of the equality constraint, the first-order approximation to $f(x)$ together with $f(x + \Delta x) < f(x)$ leads to
$\nabla f(x)^T \Delta x < 0.$   (B.23)
Here we have assumed that $\nabla f(x) \ne 0$. The only case where we cannot find a $\Delta x$ fulfilling both conditions is when the descent direction $-\nabla f(x)$ and $\nabla c(x)$ point in opposite directions,
$\nabla f(x) = \lambda \nabla c(x) \quad \text{and} \quad \lambda > 0$   (B.24)
In the second case, $c(x) > 0$ means that we can find a sufficiently small neighborhood $\mathcal{N}$ of $x$ such that any $\Delta x \in \mathcal{N}$ fulfills (B.21). Thus, whenever $\nabla f(x) \ne 0$, we can find a step
$\Delta x = -\alpha \nabla f(x)$   (B.25)
where $\alpha$ is a sufficiently small positive number such that $\Delta x \in \mathcal{N}$. This step will result in $f(x + \Delta x) < f(x)$. Thus, the only possibility for $x$ to be a local minimizer is that $\nabla f(x) = 0$. In summary, we have discovered
$c(x) = 0 \;\Rightarrow\; \nabla f(x) = \lambda \nabla c(x) \quad \text{and} \quad \lambda > 0$   (B.26a)
$c(x) > 0 \;\Rightarrow\; \nabla f(x) = 0$   (B.26b)
which is equivalent to
$\nabla f(x) - \lambda \nabla c(x) = 0$   (B.27a)
$\lambda \ge 0$   (B.27b)
$c(x) \ge 0$   (B.27c)
$\lambda\, c(x) = 0$   (B.27d)
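The conditions (B.27) are exactly the structure of a linear complementarity problem: with the bound constraint $x \ge 0$ (so $\nabla c = I$) and $y := \nabla f(x) = Ax + q$ for a quadratic objective, they read $y \ge 0$, $x \ge 0$, $x^T y = 0$. A hand-built two-variable instance (chosen here purely for illustration) can be checked directly:

```python
# Hand-built 2x2 LCP illustrating the complementarity form (B.27):
#   y = A x + q,   x >= 0,   y >= 0,   x^T y = 0.
# With A = I and q = (-1, 2) the solution is x = (1, 0): the first
# component is "on" (y_1 = 0) and the second is "off" (x_2 = 0).

A = [[1.0, 0.0],
     [0.0, 1.0]]
q = [-1.0, 2.0]
x = [1.0, 0.0]                     # solution picked by hand

y = [sum(A[i][j] * x[j] for j in range(2)) + q[i] for i in range(2)]

primal_feasible = all(v >= 0.0 for v in x)
dual_feasible   = all(v >= 0.0 for v in y)
complementarity = sum(x[i] * y[i] for i in range(2))

print(y, primal_feasible, dual_feasible, complementarity)
```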
Remark B.11 We have omitted one important detail in our presentation of the first-order optimality conditions: we have not mentioned anything about constraint qualifications. In most cases in physics-based modeling and simulation, one uses linear bounds or constant bounds on $x$. This means that every $c_i(x)$ can implicitly be represented by a linear function and, as such, one type of constraint qualification is always fulfilled. Therefore, we will usually omit saying anything about constraint qualifications unless it is needed. Constraint qualifications are assumptions that guarantee a similarity between the geometry of the feasible region and the linearized approximations of the constraints in a neighborhood of $x$. Basically, when looking at a point $x$, we need to make sure that the algebraic linearized approximations, the constraint gradients of the active constraints, can be used as a true representative of the feasible region in that neighborhood. First, we need a tool to describe the geometry of the feasible region in a neighborhood of a point $x$ without considering any algebraic representation. For this we need the definition of a tangent vector.
Definition B.12 The vector $d$ is said to be a tangent vector of the feasible region, $\Omega$, at a feasible point $x$, if there is a sequence of feasible points $\{z_k\}$ converging to $x$ and a sequence of positive scalars, $\{t_k\}$, with $\lim_{k\to\infty} t_k = 0$, such that
$\lim_{k\to\infty} \frac{z_k - x}{t_k} = d$   (B.28)
Next, we define the set of all tangent vectors.

Definition B.13 The set of all tangent vectors of $\Omega$ at $x$ is called the tangent cone and is denoted by $T_\Omega(x)$.
Remark B.14 The tangent cone is also pointed, meaning that $0 \in T_\Omega(x)$. This is shown by choosing the sequence of feasible points as $z_k = x$ in the definition of a tangent vector. Unlike the tangent cone, the set of linearized feasible directions is based completely on the algebraic representation of the constraint functions. The definition of the set is as follows.
Definition B.15 Given a feasible point $x$ and the active constraint set, $\mathcal{A}(x)$, the set of linearized feasible directions, $\mathcal{F}(x)$, is
$\mathcal{F}(x) = \{ d \mid d^T \nabla c_i(x) = 0 \;\forall i \in \mathcal{E}, \quad d^T \nabla c_i(x) \ge 0 \;\forall i \in \mathcal{A}(x) \cap \mathcal{I} \}$   (B.29)
Remark B.16 Observe that the definition of $\mathcal{F}(x)$ is based on the algebraic representation of $\nabla c_i(x)$ for all $i \in \mathcal{A}(x)$. This means that, if we write the same constraint in a different algebraic form, then the set of feasible directions may look different, even though it is the same problem we are trying to solve. Constraint qualifications are assumptions ensuring that $\mathcal{F}(x)$ is similar to $T_\Omega(x)$ in a neighborhood of $x$. For many constraint qualifications the sets are identical, $\mathcal{F}(x) = T_\Omega(x)$.

Definition B.17 If $x^* \in \Omega$ and all active constraint functions $c_i(x^*)$ for $i \in \mathcal{A}(x^*)$ are linear functions, then $\mathcal{F}(x^*) = T_\Omega(x^*)$.

Definition B.18 Linear independence constraint qualification (LICQ): if $x^* \in \Omega$ and the gradients of all active constraint functions, $\nabla c_i(x^*)$ for $i \in \mathcal{A}(x^*)$, are linearly independent, then $\mathcal{F}(x^*) = T_\Omega(x^*)$.
APPENDIX C

Convergence, Performance and Robustness Experiments

We have implemented all the numerical methods in Matlab (R2010a) [Erleben, 2011] and run our experiments on a MacBook with a 2.4 GHz Intel Core 2 Duo and 4 GB RAM, running Mac OS X 10.6.8. All tests took approximately 100 computing hours. To quickly generate a large number of test runs and automate our experiments, the Matlab code contains a small suite of functions that can generate synthetic "fake" fluid or contact LCPs with the properties we discussed in Section 1.3. Fig. C.1 shows two examples of generated matrices for the LCPs. This has the added benefit of being able to quickly re-run experiments under varying parameters, and offers a great deal of control. Our synthetic tests are no substitute for real-world problems and only serve to demonstrate the inherent convergence properties. As a remark, we note that all numerical methods have been tested on RPI's Matlab rigid-body simulator [Williams et al., 2012] and compared against PATH [Ferris and Munson, 1999]. The Fischer-Newton method rivals PATH in robustness but scales better, due to its iterative sub-solver. PATH seems slightly more robust, due to its nonmonotone line-search method. We examine the convergence rate of the iterative methods using a 1,000-variable fluid problem and a 300-variable contact problem. The problem sizes are limited by the Matlab programming environment. For the PSOR and PGS methods, we use the modified merit function (2.200). The Fischer-Newton and minimum map Newton methods use their natural merit functions. Our results are shown in Fig. C.2. As expected, we observe linear convergence rates for PSOR and PGS, while the Newton methods show quadratic convergence rates. Due to a singular Newton equation matrix, the minimum map Newton method does not work on contact problems. It fails after a few iterations, when it reaches a non-descent iterate. This leaves only the Fischer-Newton method as a real alternative for contact problems.
For fluid problems, the minimum map Newton method finished in fewer iterations. Finally, we observe that PSOR using a relaxation value of 1.4 converges faster than PGS. Next, we will examine how the different Newton equation strategies for the Fischer-Newton method affect the overall convergence behavior. For this we have generated a fluid LCP corresponding to 1,000 variables and a contact LCP with 180 variables. Our results are shown in Fig. C.3. For contact problems, we have observed that the perturbation strategy at times uses fewer iterations than the other strategies. For fluid problems, all strategies appear similar in convergence behavior. The approximation strategy always results in a less accurate solution than the other strategies.

Figure C.1: Fill patterns of a fluid matrix (a, nz = 436) and a contact matrix (b, nz = 1600). Observe that the matrices are extremely sparse and that the fluid matrix is symmetric, whereas the contact matrix is non-symmetric.

Figure C.2: Convergence rates for a fluid (a) and a contact problem (b). For the fluid problem we observe high accuracy of the Newton methods within few iterations. For the contact problem only the Fischer–Newton method works.
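The PGS and PSOR iterations compared above can be sketched in a few lines. This is a plain-Python illustration of the standard projected update $x_i \leftarrow \max(0, x_i - \omega\, r_i / A_{ii})$, not the book's Matlab implementation; `relax = 1.0` gives PGS and `relax > 1.0` gives PSOR:

```python
# Minimal projected SOR sweep for the LCP
#   y = A x + b,  x >= 0,  y >= 0,  x^T y = 0.
# Plain-Python sketch, not the book's Matlab implementation.
# relax = 1.0 gives projected Gauss-Seidel (PGS); relax > 1.0 gives PSOR.

def psor(A, b, iterations=100, relax=1.0):
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            # residual of row i at the current iterate
            r = b[i] + sum(A[i][j] * x[j] for j in range(n))
            # relaxed Gauss-Seidel update, projected onto x_i >= 0
            x[i] = max(0.0, x[i] - relax * r / A[i][i])
    return x

# Small symmetric positive definite test problem
A = [[4.0, 1.0], [1.0, 3.0]]
b = [-1.0, -2.0]
x = psor(A, b, relax=1.4)
print(x)  # close to the exact solution (1/11, 7/11), where A x + b = 0
```

The relaxation value 1.4 matches the setting found to work best in the parameter study reported in this appendix.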
Figure C.3: Convergence rates of Fischer–Newton method using different strategies. A fluid (a) and a contact (b) problem are displayed. Observe that the perturbation strategy works a little better for the contact case and that the approximation strategy is less accurate.
Figure C.4: Parameter study of PSOR. Notice that a value of approximately 1.4 seems to be best.
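The two Newton methods compared throughout this appendix are built on NCP-function reformulations. As a sketch (plain Python, using the standard textbook formulas rather than the library's code), both the pointwise Fischer-Burmeister function and the minimum map vanish exactly when a pair satisfies $x \ge 0$, $y \ge 0$, and $xy = 0$:

```python
import math

# Pointwise NCP functions behind the two Newton methods: each is zero
# exactly when x >= 0, y >= 0 and x*y = 0 hold for the pair (x, y).

def fischer_burmeister(x, y):
    return math.sqrt(x * x + y * y) - x - y

def minimum_map(x, y):
    return min(x, y)

# Complementary pairs (one of the two is zero, both nonnegative)
print(fischer_burmeister(0.0, 2.0), minimum_map(0.0, 2.0))   # 0.0 0.0
print(fischer_burmeister(3.0, 0.0), minimum_map(3.0, 0.0))   # 0.0 0.0

# A violating pair (both strictly positive) gives nonzero values
print(fischer_burmeister(1.0, 1.0), minimum_map(1.0, 1.0))
```

Stacking these componentwise over an LCP gives the natural merit functions whose norms are plotted in the convergence figures.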
We performed a parameter study of PSOR for a 1,000-variable fluid problem. We plotted the convergence rate for various relaxation parameter values, as shown in Fig. C.4. Our results show that a value of 1.4 seems to work best. For all iterative methods, we have measured the wall-clock time for 10 iterations. The results of our performance measurements are shown in Fig. C.5. Not surprisingly, we observe the cubic scaling of Lemke's method. This will quickly make it intractable for large problems.¹ In the case of the fluid problem, we found the Fischer-Newton method to scale worse than linearly. This is unexpected, as we use GMRES for solving the Newton equation. The explanation of the behavior is that our implementation always assembles the Jacobian matrix J. We do this to add extra safeguards against non-descent iterates. If dense matrices are used, the assembly of the Jacobian scales quadratically; if sparse matrices are used, the assembly scales in the number of non-zero values. Theoretically, if the safeguards were omitted and the approximation strategy were used, the Fischer-Newton iteration should scale linearly.
Figure C.5: Performance measurements for fluid problems (a) and contact problems (b).
We examined the robustness of our implemented methods. For this we generated 100 fluid problems with 1,000 variables and 100 contact problems with 180 variables. In all cases we used a relative tolerance of $10^{-6}$, an absolute tolerance of $10^{-3}$, and an upper limit of 100 iterations. For each invocation, we recorded the final state as one of: relative convergence, absolute convergence, stagnation of iterates, local minimum iterate, non-descent iterate, or maximum iteration limit reached. Table C.1 displays our results. The minimum map Newton method does not work at all for contact problems. We observe that the remaining methods are robust for this test setup. We investigated the number of iterations required for absolute and relative convergence; the results are shown in Table C.2. We observe a low standard deviation for the Newton-type methods. This suggests that it is possible to determine suitable maximum iteration limits for these types of methods a priori. The PSOR method is, on average, twice as fast as PGS. The iteration counts of PGS vary wildly, which implies that PGS does not have predictable performance if accurate solutions are required.

¹A multilevel method [Erleben, 2011] is also shown. Due to space considerations and poor behavior, we have omitted all details about this method in our survey of numerical methods.
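The termination bookkeeping described above can be sketched as follows. The tolerances match the text, but the exact tests used in the book's code may differ, so treat this as an assumed illustration:

```python
# Hedged sketch of the termination bookkeeping described above.  The
# tolerances follow the text; the exact tests in the book's code may
# differ.  psi_values holds the merit value at each iterate.

ABS_TOL, REL_TOL = 1e-3, 1e-6

def final_state(psi_values):
    for k in range(1, len(psi_values)):
        if psi_values[k] < ABS_TOL:
            return "absolute convergence"
        if abs(psi_values[k] - psi_values[k - 1]) < REL_TOL * abs(psi_values[k - 1]):
            return "relative convergence"
    return "max iteration limit"

print(final_state([1.0, 1e-1, 1e-4]))          # absolute convergence
print(final_state([1.0, 0.5, 0.4999999999]))   # relative convergence
```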
Table C.1: Robustness for 100 fluid and contact problems

(a) Final state on fluid problems:

Method    Relative   Absolute
Fischer   0          100
Min Map   0          100
PGS       0          100
PSOR      0          100

(b) Final state on contact problems:

Method    Absolute   Non-descent
Fischer   100        0
Min Map   0          100

Table C.2: Statistics on number of iterations

(a) Absolute convergence, contact problem:

Method    Mean    Min    Max     Std
Fischer   14.54   9.00   26.00   2.79

(b) Absolute convergence, fluid problem:

Method    Mean    Min     Max     Std
Fischer   5.00    5.00    5.00    0.00
Min Map   4.07    3.00    5.00    0.29
PGS       49.38   9.00    83.00   17.48
PSOR      25.29   16.00   35.00   3.77
Newton methods can benefit from a good initial starting iterate. To illustrate the impact of this, we have generated 100 dense PD problems with 100 variables. We solved the problems using a zero-valued starting iterate and a starting iterate obtained from 10 iterations of PGS. In these tests we used a relative tolerance of $10^{-6}$, an absolute tolerance of $10^{-2}$, and an upper limit of 30 iterations for the Newton methods. Fig. C.6 summarizes our findings. To reach absolute convergence, we observed that warm starting reduces the number of iterations to at most half of the number needed without warm starting. Considering the computational cost of PGS, this is a good tradeoff. We have also tested the numerical methods' ability to deal with overdetermined systems, which is numerically equivalent to zero eigenvalues. For an increasing ratio of zero eigenvalues, we generated 100 dense PSD problems. In all cases, we used a relative tolerance of $10^{-4}$, an absolute tolerance of $10^{-2}$, and a maximum iteration bound of 100. In our initial trial runs we observed that the approximation strategy works better for PSD problems, whereas the other strategies behave similarly to the minimum map Newton method. We therefore applied the approximation strategy for this experiment. We observed that, overall, the Fischer-Newton method seems better at reaching relative convergence, whereas the minimum map Newton method is less successful. For large ratios of zero eigenvalues we clearly see that both Newton methods get into trouble: a high number of zero eigenvalues causes ill-conditioning or even singularity of the Newton equations and results in a local minimum or a non-descent iterate. In all test runs, the PGS and PSOR methods end up reaching their maximum iteration limit. This is a little misleading, as it does not mean they end up with a bad iterate; rather, it implies they converge slowly.

Figure C.6: The number of iterations used to reach absolute convergence with (w) and without (wo) warm starting using PGS.
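The effect of warm starting can be imitated on a tiny example. The following sketch (an assumed setup, not the book's experiment) compares the natural merit $\max_i |\min(x_i, (Ax+b)_i)|$ of a zero iterate against one produced by ten projected Gauss-Seidel sweeps:

```python
# Sketch of warm starting (assumed setup, not the book's experiment):
# compare the natural merit  max_i |min(x_i, (A x + b)_i)|  of a zero
# iterate against an iterate produced by ten PGS sweeps.

def merit(A, b, x):
    n = len(b)
    y = [b[i] + sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return max(abs(min(x[i], y[i])) for i in range(n))

def pgs(A, b, x0, sweeps):
    n = len(b)
    x = list(x0)
    for _ in range(sweeps):
        for i in range(n):
            r = b[i] + sum(A[i][j] * x[j] for j in range(n))
            x[i] = max(0.0, x[i] - r / A[i][i])
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [-1.0, -2.0]
cold = [0.0, 0.0]
warm = pgs(A, b, cold, 10)
print(merit(A, b, cold), merit(A, b, warm))  # warm-start merit is far smaller
```

A Newton method started from the warm iterate begins much closer to the solution, which is why the iteration counts in Fig. C.6 drop.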
APPENDIX D

Num4LCP

Numerical methods (Num) for (4) linear complementarity problems (LCP) in physics-based animation, in short Num4LCP, is a library of numerical methods. It is hosted at Google Projects and can be accessed from https://code.google.com/p/num4lcp/
This library contains a toolbox of Matlab functions that implement numerical methods for computing solutions to linear complementarity problems (LCPs). The intent with this library is that users can get to know, and study, the algorithms behind the numerical methods before selecting their favorite method and porting it to whatever target language or platform they prefer. The hope is that this will allow other researchers and practitioners to easily adopt new numerical methods into their own projects. Table D.1 gives an overview of the methods included in the project at the time of writing. The context for the numerical methods is physics-based animation, which means the library has only been verified and tested on typical LCPs encountered in physics-based animation. The library was developed for academic and educational reasons; for this reason, Matlab was chosen as the main platform. If Matlab is not available, the methods should be easy to convert to NumPy, SciPy, or Octave code. Please use this reference when citing this project:

@misc{num4lcp.11,
  author = {Erleben, K. and Andersen, M. and Niebe, S. and Silcowitz, M.},
  title = {num4lcp},
  howpublished = {Published online at code.google.com/p/num4lcp/},
  month = {October},
  year = 2011,
  note = {Open source project for numerical methods for linear complementarity problems in physics-based animation}
}
Table D.1: Overview of methods in the Num4LCP numerical method library.

• Minimum Map Newton. A-matrix properties: No assumptions*. Available subsolvers: N/A. Globalization strategies: Projected Armijo back-tracking line search. Implementation: Matlab, C++/CUSP.
• Fischer-Burmeister Newton. A-matrix properties: No assumptions. Available subsolvers: Random, Zero, Perturbation, Approximation, Penalized (Chen and Kanzow style)**. Globalization strategies: Projected Armijo back-tracking line search. Implementation: Matlab, Python, C++/CUSP.
• Projected Gauss-Seidel / Projected SOR (QP). A-matrix properties: Symmetric PSD. Available subsolvers: N/A. Globalization strategies: N/A. Implementation: Matlab, C++/CUSP, Python.
• Interior Point. A-matrix properties: No assumptions. Available subsolvers: Central Trajectory***. Globalization strategies: N/A. Implementation: Matlab, Python.
• Multilevel. A-matrix properties: Symmetric PD. Available subsolvers: N/A. Globalization strategies: N/A. Implementation: Matlab.
• Quadratic Program. A-matrix properties: Symmetric. Available subsolvers: N/A. Globalization strategies: N/A. Implementation: Matlab.

(*) The C++/CUSP implementation requires the A-matrix to be PSD, because it uses PCG to find the search direction. (**) Matlab implements all five subsolvers; Python implements Random, Perturbation, and Zero; C++/CUSP implements Perturbation. (***) The Newton subsystem is currently solved with a Moore-Penrose pseudo-inverse if ill-conditioned and a direct solver otherwise (Matlab's built-in).
D.1 USING NUM4LCP
The Num4LCP library can be obtained via svn:

svn checkout http://num4lcp.googlecode.com/svn/trunk/num4lcp
This should create a Num4LCP folder on the machine. The folder contains a Matlab folder with all the methods in it. When running the code, one Matlab function/script may call another function inside the svn repository. For this to work, make sure that Matlab's current folder is changed to the newly checked-out Matlab folder. Alternatively, the newly checked-out Matlab folder can be added to Matlab's search path by writing the following at the Matlab prompt:

addpath /my/full/path/to/check-out/destination/num4lcp/matlab

This should take care of the path issues for the current Matlab session. The repository is organized in a flat structure where functionality is divided into individual files:
• Files named make_XXX.m are factory functions used to generate matrices and LCP problems.
• Files named test_XXX.m are test scripts; the XXX part of the name indicates which method or properties are being tested. All test scripts dump images into a subfolder named "output". If the folder does not exist, all "print" commands will fail; in that case one should simply create an empty output folder.
• All remaining Matlab files are functions implementing the numerical methods of the library. There should be plenty of inline code comments to help get an overview of the individual implementations.

The test scripts provide a good start in terms of boiler-plate code, showing how the numerical methods in the library should be called from the reader's own Matlab code.
Bibliography

Vincent Acary, Robert Bridson, Danny Kaufman, Jong-Shi Pang, and Jeff Trinkle. Computational contact mechanics: Advances and frontiers in modeling contact. Workshop at Banff International Research Station (BIRS) for Mathematical Innovation and Discovery, Alberta, Canada, February 2014. 112

Iván Alduán and Miguel A. Otaduy. Sph granular flow with friction and cohesion. In Proc. of the ACM SIGGRAPH / Eurographics Symposium on Computer Animation, 2011. http://www.gmrv.es/Publications/2011/AO11. DOI: 10.1145/2019406.2019410. 1, 31

M. Anitescu and F. A. Potra. Formulating dynamic multi-rigid-body contact problems with friction as solvable linear complementarity problems. Nonlinear Dynamics, 14:231–247, 1997. ISSN 0924-090X. DOI: 10.1023/A:1008292328909. 36, 39

Mihai Anitescu and Alessandro Tasora. An iterative approach for cone complementarity problems for nonsmooth dynamics. Computational Optimization and Applications, Nov 2008. DOI: 10.1007/s10589-008-9223-4. 2, 36

David Baraff. Analytical methods for dynamic simulation of non-penetrating rigid bodies. SIGGRAPH Comput. Graph., 23(3):223–232, 1989. ISSN 0097-8930. DOI: 10.1145/74334.74356. 1

David Baraff. Issues in computing contact forces for nonpenetrating rigid bodies. Algorithmica. An International Journal in Computer Science, 10(2-4):292–352, 1993. Computational robotics: the geometric theory of manipulation, planning, and control. DOI: 10.1007/BF01891843. 1

David Baraff. Fast contact force computation for nonpenetrating rigid bodies. In SIGGRAPH '94: Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pages 23–34, New York, NY, USA, 1994. ACM. ISBN 0-89791-667-0. DOI: 10.1145/192161.192168. 1, 51

David Baraff. Interactive simulation of solid rigid bodies. IEEE Comput. Graph. Appl., 15(3):63–75, 1995. ISSN 0272-1716. DOI: 10.1109/38.376615. 1

Christopher Batty, Florence Bertails, and Robert Bridson. A fast variational framework for accurate solid-fluid coupling.
ACM Trans. Graph., 26, July 2007. ISSN 0730-0301. DOI: 10.1145/1276377.1276502. 1, 27, 28
Jan Bender, Kenny Erleben, Jeff Trinkle, and Erwin Coumans. Interactive Simulation of Rigid Body Dynamics in Computer Graphics. In Marie-Paule Cani and Fabio Ganovelli, editors, EG 2012 - State of the Art Reports, pages 95–134, Cagliari, Sardinia, Italy, 2012. Eurographics Association. 37, 108

Florence Bertails-Descoubes, Florent Cadoux, Gilles Daviet, and Vincent Acary. A nonsmooth newton solver for capturing exact coulomb friction in fiber assemblies. ACM Trans. Graph., 30:6:1–6:14, February 2011. ISSN 0730-0301. DOI: 10.1145/1899404.1899410. 36

Stephen Clyde Billups. Algorithms for complementarity problems and generalized equations. PhD thesis, University of Wisconsin at Madison, Madison, WI, USA, 1995. 2

BPMD. Benchmark problem for multibody dynamics (bpmd) database. Accessed 2014 online at https://grasp.robotics.cs.rpi.edu/bpmd/, 2014. 112

Robert Bridson. Fluid Simulation for Computer Graphics. A K Peters, 2008. DOI: 10.1201/b10635. 27
Bintong Chen, Xiaojun Chen, and Christian Kanzow. A penalized fischer-burmeister ncp-function. Math. Program, 88:211–216, 2000. DOI: 10.1007/PL00011375. 89, 94, 101

Nuttapong Chentanez and Matthias Müller. A multigrid fluid pressure solver handling separating solid boundary conditions. In Proceedings of the 2011 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '11, pages 83–90, New York, NY, USA, 2011. ACM. ISBN 978-1-4503-0923-3. DOI: 10.1145/2019406.2019418. 1, 27, 28

F.H. Clarke. Optimization and Nonsmooth Analysis. Society for Industrial Mathematics, 1990. DOI: 10.1137/1.9781611971309. 89

Richard Cottle, Jong-Shi Pang, and Richard E. Stone. The Linear Complementarity Problem. Computer Science and Scientific Computing. Academic Press, February 1992. 7, 10, 57

Erwin Coumans. The bullet physics library. http://www.continuousphysics.com, 2005. 75

Hadrien Courtecuisse and Jérémie Allard. Parallel Dense Gauss-Seidel Algorithm on Many-Core Processors. In High Performance Computation Conference (HPCC). IEEE CS Press, jun 2009. http://allardj/pub/2009/CA09. DOI: 10.1109/HPCC.2009.51. 1

Gilles Daviet, Florence Bertails-Descoubes, and Laurence Boissieux. A hybrid iterative solver for robustly capturing coulomb friction in hair dynamics. ACM Trans. Graph., 30(6):139:1–139:12, December 2011. ISSN 0730-0301. DOI: 10.1145/2070781.2024173. 37

Tecla De Luca, Francisco Facchinei, and Christian Kanzow. A theoretical and numerical comparison of some semismooth algorithms for complementarity problems. Computational Optimization and Applications, 16:173–205, 2000. ISSN 0926-6003. DOI: 10.1023/A:1008705425484. 83, 86, 89
Christian Duriez, Frederic Dubois, Abderrahmane Kheddar, and Claude Andriot. Realistic haptic rendering of interacting deformable objects in virtual environments. IEEE Transactions on Visualization and Computer Graphics, 12(1):36–47, January 2006. ISSN 1077-2626. DOI: 10.1109/TVCG.2006.13. 1 David H. Eberly. Game Physics. Morgan Kaufmann, 2 edition, April 2010. 1 Morten Pol Engell-Nørregård. Interactive Modelling and Simulation of Human Motion. PhD thesis, Department of Computer Science, University of Copenhagen, Denmark, 2012. 34 Kenny Erleben. Stable, Robust, and Versatile Multibody Dynamics Animation. PhD thesis, Department of Computer Science, University of Copenhagen (DIKU), 2005. 27, 68 Kenny Erleben. Velocity-based shock propagation for multibody dynamics animation. ACM Transactions on Graphics (TOG), 26(2):12, 2007. ISSN 0730-0301. DOI: 10.1145/1243980.1243986. 1, 37, 38, 43, 67, 68 Kenny Erleben. num4lcp. Published online at code.google.com/p/num4lcp/, October 2011. Open source project for numerical methods for linear complementarity problems in physicsbased animation. vi, 2, 109, 135, 138 Kenny Erleben and Ricardo Ortiz. A non-smooth newton method for multibody dynamics. In ICNAAM 2008. International conference on numerical analysis and applied mathematics 2008, 2008. DOI: 10.1063/1.2990885. 86, 88 Kenny Erleben, Jon Sporring, and Henrik Dohlmann. Opentissue - an open source toolkit for physics-based animation. In Stephen Aylward, Tina Kapur, and Luis Ibanez, editors, ISC / NA-MIC / MICCAI Workshop on Open-Source Software, 2005. http://hdl.handle.net /1926/34. 67 Michael C. Ferris and Christian Kanzow. Complementarity and related problems; a survey. Technical report, University of Wisconsin – Madison, 1998. 83, 86, 89 Michael C. Ferris and Todd S. Munson. Interfaces to path 3.0: Design, implementation and usage. Comput. Optim. Appl., 12:207–227, January 1999. ISSN 0926-6003. DOI: 10.1023/A:1008636318275. 135 A. Fischer. 
A special newton-type optimization method. Optimization, 24(3-4):269–284, 1992. DOI: 10.1080/02331939208843795. 10, 89 D.M. Flickinger, J. Williams, and J.C. Trinkle. What’s wrong with collision detection in multibody dynamics simulation? In IEEE International Conference on Robotics and Automation (ICRA), May 2013. DOI: 10.1109/ICRA.2013.6630689. 37
D.M. Flickinger, J. Williams, and J.C. Trinkle. Performance of a method for formulating geometrically exact complementarity constraints in multibody dynamic simulation. Journal of Computational and Nonlinear Dynamics, 10:011010, 2015. DOI: 10.1115/1.4027314. 37

Jorge Gascón, Javier S. Zurdo, and Miguel A. Otaduy. Constraint-based simulation of adhesive contact. In Proc. of the ACM SIGGRAPH / Eurographics Symposium on Computer Animation, 2010. http://www.gmrv.es/Publications/2010/GSO10. DOI: 10.2312/SCA. 1

Dan Gerszewski and Adam W. Bargteil. Physics-based animation of large-scale splashing liquids. ACM Trans. Graph., 32(6):185:1–6, November 2013. Proceedings of ACM SIGGRAPH Asia 2013, Hong Kong. DOI: 10.1145/2508363.2508430. 27, 33, 47

Chris Hecker. Lemke's algorithm: The hammer in your math toolbox? Online slides from Game Developer Conference, accessed 2011, 2004. 1

Michel Jean. The non-smooth contact dynamics method. Computer Methods in Applied Mechanics and Engineering, 177(3–4):235–257, July 1999. DOI: 10.1016/S0045-7825(98)00383-1. 67

Christian Kanzow and Helmut Kleinmichel. A new class of semismooth newton-type methods for nonlinear complementarity problems. Comput. Optim. Appl., 11(3):227–251, December 1998. ISSN 0926-6003. DOI: 10.1023/A:1026424918464. 89

Danny M. Kaufman, Shinjiro Sueda, Doug L. James, and Dinesh K. Pai. Staggered projections for frictional contact in multibody systems. ACM Trans. Graph., 27(5), 2008. DOI: 10.1145/1409060.1409117. 68

Peter Kipfer. LCP Algorithms for Collision Detection Using CUDA, chapter 33, pages 723–730. Number 3 in GPU Gems. Addison-Wesley, 2007. 1

Claude Lacoursiere and Mattias Linde. Spook: a variational time-stepping scheme for rigid multibody systems subject to dry frictional contacts. Technical report, HPC2N and Department of Computer Science, Umeå University, Sweden, 2011. 69

Per Lötstedt. Numerical simulation of time-dependent contact and friction problems in rigid body mechanics.
SIAM journal on scientific and statistical computing, 5(2):370–393, 1984. DOI: 10.1137/0905028. 36, 68 Jan Mandel. A multilevel iterative method for symmetric, positive definite linear complementarity problems. Applied Mathematics and Optimization, 11(1):77–95, February 1984. DOI: 10.1007/BF01442171. 62 Jean Jacques Moreau. Numerical aspects of the sweeping process. Computer Methods in Applied Mechanics and Engineering, 177(3–4):329–349, July 1999. DOI: 10.1016/S00457825(98)00387-9. 67
Katta G. Murty. Linear Complementarity, Linear and Nonlinear Programming. Heldermann Verlag, 1988. 7, 10, 57

Rahul Narain, Abhinav Golas, and Ming C. Lin. Free-flowing granular materials with two-way solid coupling. ACM Trans. Graph., 29(6):173:1–173:10, December 2010. ISSN 0730-0301. DOI: 10.1145/1882261.1866195. 31, 33

Sarah Maria Niebe. Rigid Bodies in Contact: and Everything in Between. PhD thesis, University of Copenhagen, 2014. 2, 19, 37

Jorge Nocedal and Stephen J. Wright. Numerical optimization. Springer Series in Operations Research. Springer-Verlag, New York, 1999. ISBN 0-387-98793-2. DOI: 10.1007/b98874. 5, 7, 16, 21, 47, 75, 85, 129

Ricardo Ortiz. Newton/AMG algorithm for solving complementarity problems arising in rigid body dynamics with frictional impacts. PhD thesis, University of Iowa, July 2007. 88

Miguel A. Otaduy, Daniel Germann, Stephane Redon, and Markus Gross. Adaptive deformations with fast tight bounds. In Proceedings of the 2007 ACM SIGGRAPH/Eurographics symposium on Computer animation, SCA '07, pages 181–190, Aire-la-Ville, Switzerland, Switzerland, 2007. Eurographics Association. ISBN 978-1-59593-624-0. http://dl.acm.org/citation.cfm?id=1272690.1272715. DOI: 10.1145/1272690.1272715. 1

Miguel A. Otaduy, Rasmus Tamstorf, Denis Steinemann, and Markus Gross. Implicit contact handling for deformable objects. Computer Graphics Forum (Proc. of Eurographics), 28(2), apr 2009. http://www.gmrv.es/Publications/2009/OTSG09. DOI: 10.1111/j.1467-8659.2009.01396.x. 1

Jong-Shi Pang. Newton's method for b-differentiable equations. Math. Oper. Res., 15(2):311–341, 1990. ISSN 0364-765X. DOI: 10.1287/moor.15.2.311. 7, 81, 83

Path. Path cpnet software, 2005. www.cs.wisc.edu/cpnet/cpnetsoftware/. 2, 109

Morten Poulsen, Sarah Niebe, and Kenny Erleben. Heuristic convergence rate improvements of the projected gauss-seidel method for frictional contact problems. In Proceedings of WSCG, 2010. 62

Liqun Qi and Jie Sun. A nonsmooth version of newton's method. Math.
Programming, 58(3): 353–367, 1993. ISSN 0025-5610. DOI: 10.1007/BF01581275. 83 RPI. rpi-matlab-simulator a physics engine and simulator in matlab. Accessed 2014 online at https://code.google.com/p/rpi-matlab-simulator/, 2014. 109 Yousef Saad. Iterative Methods for Sparse Linear Systems, 2nd edition. SIAM, Philadelpha, PA, 2003. DOI: 10.1137/1.9780898718003. 102
Stefan Schmelzer. COMPASS MCP solver. Accessed in 2014 at https://github.com/haraldschilly/compass-solver-matlab, 2014. 109

Stefan Scholtes. Introduction to piecewise differentiable equations. Preprint No. 53, May 1994. 81, 84

Morten Silcowitz, Sarah Niebe, and Kenny Erleben. Nonsmooth Newton method for Fischer function reformulation of contact force problems for interactive rigid body simulation. In Proceedings of Virtual Reality Interaction and Physical Simulation (VRIPHYS), November 2009. 43, 69, 74, 94, 102

Morten Silcowitz, Sarah Niebe, and Kenny Erleben. Projected Gauss-Seidel subspace minimization method for interactive rigid body dynamics. In Proceedings of the Fifth International Conference on Computer Graphics Theory and Applications, Angers, France, May 2010a. INSTICC Press. 54, 70, 73

Morten Silcowitz, Sarah Niebe, and Kenny Erleben. A nonsmooth nonlinear conjugate gradient method for interactive contact force problems. The Visual Computer, 2010b. DOI: 10.1007/s00371-010-0502-6. 37, 38, 43, 54, 75

Morten Silcowitz-Hansen. Jinngine, a physics engine written in Java, 2008–2010. http://code.google.com/p/jinngine. 75

Jos Stam. Stable fluids. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '99, pages 121–128, New York, NY, USA, 1999. ACM Press/Addison-Wesley Publishing Co. ISBN 0-201-48560-5. DOI: 10.1145/311535.311548. 28

David E. Stewart. Dynamics with Inequalities: Impacts and Hard Constraints. Society for Industrial & Applied Mathematics, 2011. DOI: 10.1137/1.9781611970715. 19

David E. Stewart and Jeff C. Trinkle. An implicit time-stepping scheme for rigid body dynamics with inelastic collisions and Coulomb friction. International Journal of Numerical Methods in Engineering, 39(15):2673–2691, 1996. DOI: 10.1002/(SICI)1097-0207(19960815)39:15%3C2673::AID-NME972%3E3.0.CO;2-I. 36, 39

Jed Williams, Ying Lu, Dan M. Flickinger, and Jeff Trinkle. RPI-MATLAB-Simulator. Published online at code.google.com/p/rpi-matlab-simulator/, 2012. DOI: 10.2312/PE.vriphys.vriphys13.071-080. 37, 94, 95, 135
Authors' Biographies

SARAH NIEBE

Dr. Niebe is a postdoc at the Department of Computer Science, University of Copenhagen. Her research focuses on the application of numerical methods in physical simulation, in particular the use of complementarity problems in formulating the dynamics of rigid bodies. Niebe obtained her Ph.D. in 2014 at the Department of Computer Science, University of Copenhagen, Denmark.
KENNY ERLEBEN

Dr. Erleben is Associate Professor at the Department of Computer Science, University of Copenhagen. For more than a decade, Erleben has worked in the research field of physical simulation, during which he has taught several courses on rigid- and soft-body dynamics, game physics, and numerical optimization, and co-authored the textbook Physics-Based Animation. Erleben obtained his Ph.D. in 2005 at the Department of Computer Science, University of Copenhagen, Denmark.