


Ellipse Fitting for Computer Vision: Implementation and Applications
Kenichi Kanatani, Yasuyuki Sugaya, and Yasushi Kanazawa

Ellipse Fitting for Computer Vision Implementation and Applications

Synthesis Lectures on Computer Vision
Editors: Gérard Medioni, University of Southern California; Sven Dickinson, University of Toronto
Synthesis Lectures on Computer Vision is edited by Gérard Medioni of the University of Southern California and Sven Dickinson of the University of Toronto. The series publishes 50- to 150-page publications on topics pertaining to computer vision and pattern recognition. The scope will largely follow the purview of premier computer science conferences, such as ICCV, CVPR, and ECCV. Potential topics include, but are not limited to:

• Applications and Case Studies for Computer Vision
• Color, Illumination, and Texture
• Computational Photography and Video
• Early and Biologically-inspired Vision
• Face and Gesture Analysis
• Illumination and Reflectance Modeling
• Image-Based Modeling
• Image and Video Retrieval
• Medical Image Analysis
• Motion and Tracking
• Object Detection, Recognition, and Categorization
• Segmentation and Grouping
• Sensors
• Shape-from-X
• Stereo and Structure from Motion
• Shape Representation and Matching


• Statistical Methods and Learning
• Performance Evaluation
• Video Analysis and Event Recognition

Ellipse Fitting for Computer Vision: Implementation and Applications Kenichi Kanatani, Yasuyuki Sugaya, and Yasushi Kanazawa 2016

Background Subtraction: eory and Practice Ahmed Elgammal 2014

Vision-Based Interaction Matthew Turk and Gang Hua 2013

Camera Networks: e Acquisition and Analysis of Videos over Wide Areas Amit K. Roy-Chowdhury and Bi Song 2012

Deformable Surface 3D Reconstruction from Monocular Images Mathieu Salzmann and Pascal Fua 2010

Boosting-Based Face Detection and Adaptation Cha Zhang and Zhengyou Zhang 2010

Image-Based Modeling of Plants and Trees Sing Bing Kang and Long Quan 2009

Copyright © 2016 by Morgan & Claypool

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations in printed reviews, without the prior permission of the publisher.

Ellipse Fitting for Computer Vision: Implementation and Applications Kenichi Kanatani, Yasuyuki Sugaya, and Yasushi Kanazawa www.morganclaypool.com

ISBN: 9781627054584 (paperback)
ISBN: 9781627054980 (ebook)

DOI 10.2200/S00713ED1V01Y201603COV008

A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON COMPUTER VISION, Lecture #8
Series Editors: Gérard Medioni, University of Southern California; Sven Dickinson, University of Toronto
Series ISSN: Print 2153-1056, Electronic 2153-1064

Ellipse Fitting for Computer Vision Implementation and Applications

Kenichi Kanatani Okayama University, Okayama, Japan

Yasuyuki Sugaya Toyohashi University of Technology, Toyohashi, Aichi, Japan

Yasushi Kanazawa Toyohashi University of Technology, Toyohashi, Aichi, Japan

SYNTHESIS LECTURES ON COMPUTER VISION #8

Morgan & Claypool Publishers

ABSTRACT
Because circular objects are projected to ellipses in images, ellipse fitting is a first step for 3-D analysis of circular objects in computer vision applications. For this reason, the study of ellipse fitting began as soon as computers came into use for image analysis in the 1970s, but it is only recently that optimal computation techniques based on the statistical properties of noise were established. These include renormalization (1993), which was then improved as FNS (2000) and HEIV (2000). Later, further improvements, called hyperaccurate correction (2006), HyperLS (2009), and hyper-renormalization (2012), were presented. Today, these are regarded as the most accurate fitting methods among all known techniques. This book describes these algorithms as well as implementation details and applications to 3-D scene analysis. We also present general mathematical theories of statistical optimization underlying all ellipse fitting algorithms, including rigorous covariance and bias analyses and the theoretical accuracy limit. The results can be directly applied to other computer vision tasks including computing fundamental matrices and homographies between images. This book can serve not simply as a reference of ellipse fitting algorithms for researchers, but also as learning material for beginners who want to start computer vision research. The sample program codes are downloadable from the website: https://sites.google.com/a/morganclaypool.com/ellipse-fitting-for-computer-vision-implementation-and-applications/.

KEYWORDS
geometric distance minimization, hyperaccurate correction, HyperLS, hyper-renormalization, iterative reweight, KCR lower bound, maximum likelihood, renormalization, robust fitting, Sampson error, statistical error analysis, Taubin method


Contents

Preface

1  Introduction
   1.1  Ellipse Fitting
   1.2  Representation of Ellipses
   1.3  Least Squares Approach
   1.4  Noise and Covariance Matrices
   1.5  Ellipse Fitting Approaches
   1.6  Supplemental Note

2  Algebraic Fitting
   2.1  Iterative Reweight and Least Squares
   2.2  Renormalization and the Taubin Method
   2.3  Hyper-renormalization and HyperLS
   2.4  Summary
   2.5  Supplemental Note

3  Geometric Fitting
   3.1  Geometric Distance and Sampson Error
   3.2  FNS
   3.3  Geometric Distance Minimization
   3.4  Hyperaccurate Correction
   3.5  Derivations
   3.6  Supplemental Note

4  Robust Fitting
   4.1  Outlier Removal
   4.2  Ellipse-specific Fitting
   4.3  Supplemental Note

5  Ellipse-based 3-D Computation
   5.1  Intersections of Ellipses
   5.2  Ellipse Centers, Tangents, and Perpendiculars
   5.3  Perspective Projection and Camera Rotation
   5.4  3-D Reconstruction of the Supporting Plane
   5.5  Projected Center of Circle
   5.6  Front Image of the Circle
   5.7  Derivations
   5.8  Supplemental Note

6  Experiments and Examples
   6.1  Ellipse Fitting Examples
   6.2  Statistical Accuracy Comparison
   6.3  Real Image Examples 1
   6.4  Robust Fitting
   6.5  Ellipse-specific Methods
   6.6  Real Image Examples 2
   6.7  Ellipse-based 3-D Computation Examples
   6.8  Supplemental Note

7  Extension and Generalization
   7.1  Fundamental Matrix Computation
        7.1.1  Formulation
        7.1.2  Rank Constraint
        7.1.3  Outlier Removal
   7.2  Homography Computation
        7.2.1  Formulation
        7.2.2  Outlier Removal
   7.3  Supplemental Note

8  Accuracy of Algebraic Fitting
   8.1  Error Analysis
   8.2  Covariance and Bias
   8.3  Bias Elimination and Hyper-renormalization
   8.4  Derivations
   8.5  Supplemental Note

9  Maximum Likelihood and Geometric Fitting
   9.1  Maximum Likelihood and Sampson Error
   9.2  Error Analysis
   9.3  Bias Analysis and Hyperaccurate Correction
   9.4  Derivations
   9.5  Supplemental Note

10  Theoretical Accuracy Limit
    10.1  KCR Lower Bound
    10.2  Derivation of the KCR Lower Bound
    10.3  Expression of the KCR Lower Bound
    10.4  Supplemental Note

Answers
Bibliography
Authors' Biographies
Index


Preface

Because circular objects are projected to ellipses in images, ellipse fitting is a first step for 3-D analysis of circular objects in computer vision applications. For this reason, the study of ellipse fitting began as soon as computers came into use for image analysis in the 1970s. The basic principle was to compute the parameters so that the sum of squares of expressions that should ideally be zero is minimized, which is today called least squares or algebraic distance minimization. In the 1990s, the notion of optimal computation based on the statistical properties of noise was introduced by researchers including the authors. The first notable example was the authors' renormalization (1993), which was then improved as FNS (2000) and HEIV (2000) by researchers in Australia and the U.S. Later, further improvements, called hyperaccurate correction (2006), HyperLS (2009), and hyper-renormalization (2012), were presented by the authors. Today, these are regarded as the most accurate fitting methods among all known techniques. This book describes these algorithms as well as underlying theories, implementation details, and applications to 3-D scene analysis.

Most textbooks on computer vision begin with mathematical fundamentals followed by the resulting computational procedures. This book, in contrast, immediately describes computational procedures after a short statement of the purpose and the principle. The theoretical background is briefly explained as Comments. Thus, readers need not worry about mathematical details, which often annoy those who only want to build their vision systems. Rigorous derivations and detailed justifications are given later in separate sections, but they can be skipped if the interest is not in theories. Sample program codes of the authors are provided via the website¹ of the publisher. At the end of each chapter is given a section called Supplemental Note, describing historical backgrounds, related issues, and reference literature.

Chapters 1–4 specifically describe ellipse fitting algorithms. Chapter 5 discusses 3-D analysis of circular objects in the scene extracted by ellipse fitting. In Chapter 6, performance comparison experiments are conducted among the methods described in Chapters 1–4. Also, some real image applications of the 3-D analysis of Chapter 5 are shown. In Chapter 7, we point out how procedures of ellipse fitting can straightforwardly be extended to fundamental matrix and homography computation, which play a central role in 3-D analysis by computer vision. Chapters 8 and 9 give general mathematical theories of statistical optimization underlying all ellipse fitting algorithms. Finally, Chapter 10 gives a rigorous analysis of the theoretical accuracy limit. However, beginners and practice-oriented readers can skip these last three chapters.

The authors used the materials in this book as student projects for introductory computer vision research at Okayama University, Japan, and Toyohashi University of Technology, Japan. By implementing the algorithms themselves, students can learn basic programming know-how and also understand the theoretical background of vision computation as their interest deepens. We are hoping that this book can serve not simply as a reference of ellipse fitting algorithms for researchers, but also as learning material for beginners who want to start computer vision research.

The theories in this book are the fruit of the authors' collaborations and interactions with their colleagues and friends for many years. The authors thank Takayuki Okatani of Tohoku University, Japan, Mike Brooks and Wojciech Chojnacki of the University of Adelaide, Australia, Peter Meer of Rutgers University, U.S., Wolfgang Förstner of the University of Bonn, Germany, Michael Felsberg of Linköping University, Sweden, Rudolf Mester of the University of Frankfurt, Germany, Prasanna Rangarajan of Southern Methodist University, U.S., Ali Al-Sharadqah of University of East Carolina, U.S., and Alexander Kukush of the University of Kiev, Ukraine. Special thanks are due to the late Professor Nikolai Chernov of the University of Alabama at Birmingham, U.S., without whose inspiration and assistance this work would not have been possible.

Kenichi Kanatani, Yasuyuki Sugaya, and Yasushi Kanazawa
March 2016

¹ https://sites.google.com/a/morganclaypool.com/ellipse-fitting-for-computer-vision-implementation-and-applications/

CHAPTER 1

Introduction

This chapter describes the basic mathematical formulation to be used in subsequent chapters for fitting an ellipse to observed points. The main focus is on the description of statistical properties of noise in the data in terms of covariance matrices. We point out that two approaches exist for ellipse fitting: "algebraic" and "geometric." Also, some historical background is mentioned, and related mathematical topics are discussed.

1.1 ELLIPSE FITTING

Ellipse fitting means fitting an ellipse equation to points extracted from an image. This is one of the fundamental tasks of computer vision for various reasons. First, we observe many circular objects in man-made scenes indoors and outdoors, and a circle is projected as an ellipse in camera images. If we extract elliptic segments, say, by an edge detection filter, and fit an ellipse equation to them, we can compute the 3-D position of the circular object in the scene (we will discuss such applications in Chapter 5). Figure 1.1a shows edges extracted from an indoor scene, using an edge detection filter. This scene contains many elliptic arcs, as indicated there. Figure 1.1b shows ellipses fitted to them superimposed on the original image. We observe that fitted ellipses are not necessarily exact object shapes, in particular when the observed arc is only a small part of the circumference or when it is continuously connected to a non-elliptic segment (we will discuss this issue in Chapter 4).


Figure 1.1: (a) An edge image and selected elliptic arcs. (b) Ellipses are fitted to the arcs in (a) and superimposed on the original image.


Ellipse fitting is also used for detecting not only circular objects in the image but also objects of approximately elliptic shape, e.g., human faces. An important application of ellipse fitting is camera calibration for determining the position and internal parameters of a camera by taking images of a reference pattern, for which circles are often used for the ease of image processing. Ellipse fitting is also important as a mathematical prototype of various geometric estimation problems for computer vision. Typical problems include the computation of fundamental matrices and homographies (we will briefly describe these in Chapter 7).

1.2 REPRESENTATION OF ELLIPSES

The equation of an ellipse has the form

$$Ax^2 + 2Bxy + Cy^2 + 2f_0(Dx + Ey) + f_0^2 F = 0, \qquad (1.1)$$

where f0 is a constant for adjusting the scale. Theoretically, it can be 1, but for finite-length numerical computation it should be chosen so that x/f0 and y/f0 have approximately the order of 1; this increases the numerical accuracy, avoiding the loss of significant digits. In view of this, we take the origin of the image xy coordinate system at the center of the image, rather than the upper-left corner as is customarily done, and take f0 to be the length of the side of a square which we assume to contain the ellipse to be extracted. For example, if we know that an ellipse exists in a 600 × 600 pixel region, we let f0 = 600. Since Eq. (1.1) has scale indeterminacy, i.e., the same ellipse is represented if A, B, C, D, E, and F are simultaneously multiplied by a nonzero constant, we need some kind of normalization. Various types of normalizations have been considered in the past, including

$$F = 1, \qquad (1.2)$$
$$A + C = 1, \qquad (1.3)$$
$$A^2 + B^2 + C^2 + D^2 + E^2 + F^2 = 1, \qquad (1.4)$$
$$A^2 + B^2 + C^2 + D^2 + E^2 = 1, \qquad (1.5)$$
$$A^2 + 2B^2 + C^2 = 1, \qquad (1.6)$$
$$AC - B^2 = 1. \qquad (1.7)$$

Among these, Eq. (1.2) is the simplest and most familiar one, but Eq. (1.1) with F = 1 cannot express an ellipse that passes through the origin (0, 0). Equation (1.3) remedies this. Each of the above normalization equations has its own reasoning, but in this book we adopt Eq. (1.4) (see Supplemental Note (page 6) below for the background).


If we define the 6-D vectors

$$\xi = \begin{pmatrix} x^2 \\ 2xy \\ y^2 \\ 2f_0 x \\ 2f_0 y \\ f_0^2 \end{pmatrix}, \qquad \theta = \begin{pmatrix} A \\ B \\ C \\ D \\ E \\ F \end{pmatrix}, \qquad (1.8)$$

Eq. (1.1) can be written as

$$(\xi, \theta) = 0, \qquad (1.9)$$

where and hereafter we denote the inner product of vectors a and b by (a, b). Since the vector θ in Eq. (1.9) has scale indeterminacy, it must be normalized in correspondence with Eqs. (1.2)–(1.7). Note that the left sides of Eqs. (1.2)–(1.7) can be seen as quadratic forms in A, ..., F; Eqs. (1.2) and (1.3) are linear equations, but we may regard them as F² = 1 and (A + C)² = 1, respectively. Hence, Eqs. (1.2)–(1.7) are all written in the form

$$(\theta, N\theta) = 1, \qquad (1.10)$$

for some normalization matrix N. The use of Eq. (1.4) corresponds to N = I (the identity), in which case Eq. (1.10) is simply ‖θ‖ = 1, i.e., normalization to unit norm.
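To make the representation concrete, the following is a minimal sketch in Python with NumPy of how the vector ξ of Eq. (1.8) might be assembled from a point (x, y). It is our own illustration, not the authors' distributed sample code; the function name xi_vector and the default f0 = 600 are our assumptions.

import numpy as np

def xi_vector(x, y, f0=600.0):
    """6-D vector xi of Eq. (1.8) for a point (x, y).

    Coordinates are assumed to be measured from the image center,
    and f0 is the scale constant (e.g., 600 for a 600 x 600 region).
    """
    return np.array([x * x, 2.0 * x * y, y * y,
                     2.0 * f0 * x, 2.0 * f0 * y, f0 * f0])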

1.3 LEAST SQUARES APPROACH

Fitting an ellipse in the form of Eq. (1.1) to a sequence of points (x1, y1), ..., (xN, yN) in the presence of noise (Fig. 1.2) is to find A, B, C, D, E, and F such that

$$Ax_\alpha^2 + 2Bx_\alpha y_\alpha + Cy_\alpha^2 + 2f_0(Dx_\alpha + Ey_\alpha) + f_0^2 F \approx 0, \qquad \alpha = 1, ..., N. \qquad (1.11)$$

If we write ξα for the value obtained by replacing x and y in the 6-D vector ξ of Eq. (1.8) by xα and yα, respectively, Eq. (1.11) can be equivalently written as

$$(\xi_\alpha, \theta) \approx 0, \qquad \alpha = 1, ..., N. \qquad (1.12)$$

Figure 1.2: Fitting an ellipse to a noisy point sequence.


Our task is to compute such a unit vector θ. The simplest and the most naive method is the following least squares.

Procedure 1.1 (Least squares)

1. Compute the 6 × 6 matrix
$$M = \frac{1}{N}\sum_{\alpha=1}^{N} \xi_\alpha \xi_\alpha^\top. \qquad (1.13)$$

2. Solve the eigenvalue problem
$$M\theta = \lambda\theta, \qquad (1.14)$$
and return the unit eigenvector θ for the smallest eigenvalue λ.

Comments. This is a straightforward generalization of line fitting to a point sequence (→ Problem 1.1); we minimize the sum of the squares

$$J = \frac{1}{N}\sum_{\alpha=1}^{N} (\xi_\alpha, \theta)^2 = \frac{1}{N}\sum_{\alpha=1}^{N} \theta^\top \xi_\alpha \xi_\alpha^\top \theta = \Bigl(\theta, \frac{1}{N}\sum_{\alpha=1}^{N} \xi_\alpha \xi_\alpha^\top \theta\Bigr) = (\theta, M\theta), \qquad (1.15)$$

subject to ‖θ‖ = 1. As is well known in linear algebra, the minimum of this quadratic form in θ is given by the unit eigenvector θ of M for the smallest eigenvalue. Equation (1.15) is often called the algebraic distance, and Procedure 1.1 is also known as algebraic distance minimization. It is sometimes called DLT (direct linear transformation). Since the computation is very easy and the solution is immediately obtained, this method has been widely used. However, when the input point sequence covers only a small part of the ellipse circumference, it often produces a small and flat ellipse very different from the true shape (we will see such examples in Chapter 6). Still, this is a prototype of all existing ellipse fitting algorithms. How we can improve this method is the main theme of this book.
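As an illustration of Procedure 1.1, here is a compact sketch in the same style, assuming the xi_vector helper introduced in Section 1.2; it simply returns the unit eigenvector of the symmetric matrix M for the smallest eigenvalue as θ.

import numpy as np

def fit_ellipse_least_squares(points, f0=600.0):
    """Procedure 1.1 (least squares): unit vector theta = (A, B, C, D, E, F)."""
    xi = np.array([xi_vector(x, y, f0) for x, y in points])  # N x 6
    M = xi.T @ xi / len(points)                               # Eq. (1.13)
    eigvals, eigvecs = np.linalg.eigh(M)                      # eigenvalues in ascending order
    return eigvecs[:, 0]                                      # eigenvector for the smallest eigenvalue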

1.4 NOISE AND COVARIANCE MATRICES

The reason for the poor accuracy of Procedure 1.1 is that the properties of image noise are not considered; for accurate fitting, we need to take the statistical properties of noise into consideration. Suppose the data xα and yα are disturbed from their true values x̄α and ȳα by Δxα and Δyα:

$$x_\alpha = \bar{x}_\alpha + \Delta x_\alpha, \qquad y_\alpha = \bar{y}_\alpha + \Delta y_\alpha. \qquad (1.16)$$

Substituting this into ξα, we can write

$$\xi_\alpha = \bar{\xi}_\alpha + \Delta_1\xi_\alpha + \Delta_2\xi_\alpha, \qquad (1.17)$$

where ξ̄α is the value of ξα obtained by replacing xα and yα by their true values x̄α and ȳα, respectively, while Δ₁ξα and Δ₂ξα are, respectively, the first-order noise term (the linear expression in Δxα and Δyα) and the second-order noise term (the quadratic expression in Δxα and Δyα). From Eq. (1.8), we obtain the following expressions:

$$\Delta_1\xi_\alpha = \begin{pmatrix} 2\bar{x}_\alpha \Delta x_\alpha \\ 2(\Delta x_\alpha \bar{y}_\alpha + \bar{x}_\alpha \Delta y_\alpha) \\ 2\bar{y}_\alpha \Delta y_\alpha \\ 2f_0 \Delta x_\alpha \\ 2f_0 \Delta y_\alpha \\ 0 \end{pmatrix}, \qquad \Delta_2\xi_\alpha = \begin{pmatrix} \Delta x_\alpha^2 \\ 2\Delta x_\alpha \Delta y_\alpha \\ \Delta y_\alpha^2 \\ 0 \\ 0 \\ 0 \end{pmatrix}. \qquad (1.18)$$

We regard the noise terms Δxα and Δyα as random variables and define the covariance matrix of ξα by

$$V[\xi_\alpha] = E[\Delta_1\xi_\alpha \Delta_1\xi_\alpha^\top], \qquad (1.19)$$

where E[·] denotes expectation over the noise distribution. If we assume that Δxα and Δyα are sampled from independent Gaussian distributions of mean 0 and standard deviation σ, we obtain

$$E[\Delta x_\alpha] = E[\Delta y_\alpha] = 0, \qquad E[\Delta x_\alpha^2] = E[\Delta y_\alpha^2] = \sigma^2, \qquad E[\Delta x_\alpha \Delta y_\alpha] = 0. \qquad (1.20)$$

Substituting Eq. (1.18) and using this relationship, we obtain the covariance matrix in Eq. (1.19) in the following form:

$$V[\xi_\alpha] = \sigma^2 V_0[\xi_\alpha], \qquad V_0[\xi_\alpha] = 4\begin{pmatrix} \bar{x}_\alpha^2 & \bar{x}_\alpha\bar{y}_\alpha & 0 & f_0\bar{x}_\alpha & 0 & 0 \\ \bar{x}_\alpha\bar{y}_\alpha & \bar{x}_\alpha^2+\bar{y}_\alpha^2 & \bar{x}_\alpha\bar{y}_\alpha & f_0\bar{y}_\alpha & f_0\bar{x}_\alpha & 0 \\ 0 & \bar{x}_\alpha\bar{y}_\alpha & \bar{y}_\alpha^2 & 0 & f_0\bar{y}_\alpha & 0 \\ f_0\bar{x}_\alpha & f_0\bar{y}_\alpha & 0 & f_0^2 & 0 & 0 \\ 0 & f_0\bar{x}_\alpha & f_0\bar{y}_\alpha & 0 & f_0^2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}. \qquad (1.21)$$

Since all the elements of V[ξα] have the multiple σ², we factor it out and call V0[ξα] the normalized covariance matrix. We also call the standard deviation σ the noise level. The diagonal elements of the covariance matrix V[ξα] indicate the noise susceptibility of each component of ξα, and the off-diagonal elements measure their pair-wise correlation. The covariance matrix of Eq. (1.19) is defined in terms of the first-order noise term Δ₁ξα alone. It is known that incorporation of the second-order term Δ₂ξα has little influence over final results. This is because Δ₂ξα is very small as compared with Δ₁ξα. Note that the elements of V0[ξα] in Eq. (1.21) contain the true values x̄α and ȳα. They are replaced by observed values xα and yα in actual computation. It is known that this replacement has practically no effect on the final results.
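Equation (1.21) translates directly into code. The sketch below is our own helper (not the authors' sample code), with the true values replaced by the observed coordinates as described above; the later sketches in this book assume it.

import numpy as np

def normalized_covariance(x, y, f0=600.0):
    """Normalized covariance matrix V0[xi] of Eq. (1.21), evaluated at an observed point."""
    return 4.0 * np.array([
        [x * x,  x * y,          0.0,    f0 * x,  0.0,     0.0],
        [x * y,  x * x + y * y,  x * y,  f0 * y,  f0 * x,  0.0],
        [0.0,    x * y,          y * y,  0.0,     f0 * y,  0.0],
        [f0 * x, f0 * y,         0.0,    f0 * f0, 0.0,     0.0],
        [0.0,    f0 * x,         f0 * y, 0.0,     f0 * f0, 0.0],
        [0.0,    0.0,            0.0,    0.0,     0.0,     0.0]])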


1.5 ELLIPSE FITTING APPROACHES

In the following chapters, we describe typical ellipse fitting methods that incorporate the above noise properties. We will see that all the methods we consider do not require knowledge of the noise level σ, which is very difficult to estimate in real problems. The qualitative properties of noise are all encoded in the normalized covariance matrix V0[ξα], which gives sufficient information for designing high accuracy fitting schemes. In general terms, there exist two approaches for ellipse fitting: algebraic and geometric.

Algebraic methods: We solve some algebraic equation for computing θ. The resulting solution may or may not minimize some cost function. In other words, the equation need not have the form ∇θJ = 0 for some cost function J. Rather, we can modify the equation in any way so that the resulting solution θ is as close to its true value θ̄ as possible. Thus, our task is to find a good equation to solve. To this end, we need detailed statistical error analysis.

Geometric methods: We minimize some cost function J. Hence, the solution is uniquely determined once the cost J is defined. Thus, our task is to find a good cost to minimize. For this, we need to consider the geometry of the ellipse and the data points. We also need to devise a convenient minimization algorithm, since minimization of a given cost is not always easy.

The meaning of these two approaches will be better understood by seeing the actual procedures described in the subsequent chapters. There are, however, a lot of overlaps between the two approaches.

1.6 SUPPLEMENTAL NOTE

The study of ellipse fitting began as soon as computers came into use for image analysis in the 1970s. Since then, numerous fitting techniques have been proposed, and even today new methods appear one after another. Since we are unable to cite all the literature, we mention only some of the earliest work: Albano [1974], Bookstein [1979], Cooper and Yalabik [1979], Gnanadesikan [1977], Nakagawa and Rosenfeld [1979], Paton [1970]. Besides fitting an ellipse to data points, a voting scheme called the Hough transform for accumulating evidence in the parameter space was also studied as a means of ellipse fitting [Davis, 1989].

In the 1990s, a paradigm shift occurred. It was first thought that the purpose of ellipse fitting was to find an ellipse that approximately passes near observed points. However, some researchers, including the authors, turned their attention to finding an ellipse that exactly passes through the true points that would be observed in the absence of noise. Thus, the problem turned into a statistical problem of estimating the true points subject to the constraint that they are on some ellipse. It follows that the goodness of the fitted ellipse is measured not by how close it is to the observed points but by how close it is to the true shape. This type of paradigm shift has also occurred in other problems including fundamental matrix and homography computation for 3-D analysis. Today, statistical analysis is one of the main tools for accurate geometric computation for computer vision.

The ellipse fitting techniques discussed in this book are naturally extended to general curves and surfaces in a general space in the form

$$\xi_1\theta_1 + \xi_2\theta_2 + \cdots + \xi_n\theta_n = 0, \qquad (1.22)$$

where ξ1, ..., ξn are functions of coordinates x1, x2, ..., and θ1, ..., θn are unknowns to be determined. This equation is written in the form (ξ, θ) = 0 in terms of the vector ξ = (ξi) of observations and the vector θ = (θi) of unknowns. Then, all techniques and analysis for ellipse fitting can apply. Evidently, Eq. (1.22) includes all polynomial curves in 2-D and all polynomial surfaces in 3-D, but all algebraic functions also can be expressed in the form of Eq. (1.22) after canceling denominators. For general nonlinear surfaces, too, we can usually write the equation in the form of Eq. (1.22) after an appropriate reparameterization, as long as the problem is to estimate the "coefficients" of linear/nonlinear terms. Terms that are not multiplied by unknown coefficients are regarded as being multiplied by 1, which is also regarded as an unknown. The resulting set of coefficients can be viewed as a vector of unknown magnitude, or a "homogeneous vector," and we can write the equation in the form (ξ, θ) = 0. Thus, the theory in this book has wide applicability beyond ellipse fitting.

In Eq. (1.1), we introduce the scaling constant f0 to make x/f0 and y/f0 have the order of 1, and this also makes the vector ξ in Eq. (1.8) have magnitude O(f0²) so that ξ/f0² is approximately a unit vector. The necessity and effects of such scaling for numerical computation are discussed by Hartley [1997] in relation to fundamental matrix computation (we will discuss this in Chapter 7), which is also a fundamental problem of computer vision and has the same mathematical structure as ellipse fitting. In this book, we introduce the scaling constant f0 and take the coordinate origin at the center of the image based on the same reasoning.

The normalization using Eq. (1.2) was adopted by Albano [1974], Cooper and Yalabik [1979], and Rosin [1993]. Many authors used Eq. (1.4), but some authors preferred Eq. (1.5) [Gnanadesikan, 1977]. The use of Eq. (1.6) was proposed by Bookstein [1979], who argued that it leads to "coordinate invariance" in the sense that the ellipse fitted by least squares after the coordinate system is translated and rotated is the same as the originally fitted ellipse translated and rotated accordingly. In this respect, Eqs. (1.3) and (1.7) also have that invariance. The normalization using Eq. (1.7) was proposed by Fitzgibbon et al. [1999] so that the resulting least-squares fit is guaranteed to be an ellipse, while other equations can theoretically produce a parabola or a hyperbola (we will discuss this in Chapter 4). Today, we need not worry about the coordinate invariance, which is a concern of the past. As long as we regard ellipse fitting as statistical estimation and use Eq. (1.10) for normalization, all statistically meaningful methods are automatically invariant to the choice of the coordinate system. This is because the normalization matrix N in Eq. (1.10) is defined as a function of the covariance matrix V[ξα] of Eq. (1.19).


If we change the coordinate system, e.g., adding translation, rotation, and other arbitrary mappings, the covariance matrix V[ξα] defined by Eq. (1.19) also changes, and the fitted ellipse after the coordinate change using the transformed covariance matrix is the same as the ellipse fitted in the original coordinate system using the original covariance matrix and transformed afterwards.

The fact that all the normalization equations of Eqs. (1.2)–(1.7) are written in the form of Eq. (1.10) poses an interesting question: what N is the "best," if we are to minimize the algebraic distance of Eq. (1.15) subject to (θ, Nθ) = 1? This problem was studied by the authors' group [Kanatani and Rangarajan, 2011, Kanatani et al., 2011, Rangarajan and Kanatani, 2009], and the matrix N that gives rise to the highest accuracy was found after a detailed error analysis. The method was named HyperLS (this will be described in the next chapter).

Minimizing the sum of squares in the form of Eq. (1.15) is a natural idea, but readers may wonder why we minimize Σα(ξα, θ)². Why not minimize, say, the absolute sum Σα|(ξα, θ)| or the maximum maxα|(ξα, θ)|? This is the issue of the choice of the norm. A class of criteria, called Lp-norms, exists for measuring the magnitude of an n-D vector x = (xi):

$$\|x\|_p \equiv \Bigl(\sum_{i=1}^{n} |x_i|^p\Bigr)^{1/p}. \qquad (1.23)$$

The L2-norm, or the square norm,

$$\|x\|_2 \equiv \sqrt{\sum_{i=1}^{n} |x_i|^2}, \qquad (1.24)$$

is widely used for linear algebra. If we let p → ∞ in Eq. (1.23), it approaches

$$\|x\|_\infty \equiv \max_{i=1}^{n} |x_i|, \qquad (1.25)$$

called the L∞-norm, or the maximum norm, where components with large |xi| have a dominant effect and those with small |xi| are ignored. Conversely, if we let p → 0 in Eq. (1.23), those components with large |xi| are ignored. The L1-norm

$$\|x\|_1 \equiv \sum_{i=1}^{n} |x_i| \qquad (1.26)$$

is often called the average norm, because it effectively measures the average (1/N)Σi|xi|. In the limit of p → 0, we obtain

$$\|x\|_0 \equiv |\{\, i \mid |x_i| \neq 0 \,\}|, \qquad (1.27)$$

where the right side means the number of nonzero components. This is called the L0-norm, or the Hamming distance.

Least squares for minimizing the L2-norm is the most widely used approach for statistical optimization for two reasons. One is the computational simplicity: differentiation of a sum of squares leads to linear expressions, so the solution is immediately obtained by solving linear equations or an eigenvalue problem. The other reason is that it corresponds to maximum likelihood estimation (we will discuss this in Chapter 9), provided the discrepancies to be minimized arise from independent and identical Gaussian noise. Indeed, the scheme of least squares was invented by Gauss, who introduced the "Gaussian distribution," which he himself called the "normal distribution," as the standard noise model (physicists usually use the former term, while statisticians prefer the latter). In spite of added computational complexity, however, minimization of the Lp-norm for p < 2, typically p = 1, has its merit, because the effect of those terms with large absolute values is suppressed. This suggests that terms irrelevant for estimation, called outliers, are automatically ignored. Estimation methods that are not very susceptible to outliers are said to be robust (we will discuss robust ellipse fitting in Chapter 4). For this reason, L1-minimization is frequently used in some computer vision applications.

PROBLEMS

1.1. Fitting a line in the form n1·x + n2·y + n3·f0 = 0 to points (xα, yα), α = 1, ..., N, can be viewed as the problem of computing n = (ni), which can be normalized to a unit vector, such that (ξα, n) ≈ 0, α = 1, ..., N, with

$$\xi_\alpha = \begin{pmatrix} x_\alpha \\ y_\alpha \\ f_0 \end{pmatrix}, \qquad n = \begin{pmatrix} n_1 \\ n_2 \\ n_3 \end{pmatrix}. \qquad (1.28)$$

(1) Write down the least-squares procedure for this computation.
(2) If the noise terms Δxα and Δyα are sampled from independent Gaussian distributions of mean 0 and variance σ², how is their covariance matrix V[ξα] defined?

CHAPTER 2

Algebraic Fitting

We now describe computational procedures for typical algebraic fitting methods: "iterative reweight," the "Taubin method," "renormalization," "HyperLS," and "hyper-renormalization." We point out that all these methods reduce to solving a generalized eigenvalue problem of the same form; different choices of the matrices involved result in different methods.
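As a preview of that common form, the generalized eigenvalue step Mθ = λNθ can be coded once and reused. The sketch below is ours, not the authors' sample code; following the discussion later in this chapter around Eq. (2.10), it assumes that M is positive definite for noisy data while N need not be, and therefore solves the problem as Nθ = (1/λ)Mθ.

import numpy as np

def solve_generalized_eig(M, N):
    """Unit vector theta with M theta = lambda N theta for the lambda of
    smallest absolute value.  Rewritten as inv(M) N theta = (1/lambda) theta,
    so we take the eigenvalue of inv(M) N of largest absolute value."""
    w, v = np.linalg.eig(np.linalg.inv(M) @ N)
    k = int(np.argmax(np.abs(w)))         # largest |1/lambda|  <=>  smallest |lambda|
    theta = np.real(v[:, k])
    return theta / np.linalg.norm(theta)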

2.1 ITERATIVE REWEIGHT AND LEAST SQUARES

The following iterative reweight is an old and well-known method:

Procedure 2.1

(Iterative reweight)

1. Let θ0 = 0 and W˛ = 1, ˛ = 1, ..., N . 2. Compute the 6  6 matrix

MD

N 1 X W˛ ξ˛ ξ˛> ; N ˛D1

(2.1)

where ξ˛ is the vector defined in Eq. (1.8) for the ˛ th point. 3. Solve the eigenvalue problem (2.2)

M θ D θ ; and compute the unit eigenvector θ for the smallest eigenvalue . 4. If θ  θ0 up to sign, return θ and stop. Else, update W˛

1 ; .θ ; V0 Œξ˛ θ /

θ0

θ;

(2.3)

and go back to Step 2. Comments. is method is motivated to minimize the weighted sum of squares PN 2 ˛D1 W˛ .ξ˛ ; θ / ; the approach known as weighted least squares. In fact, we see that N N N 1 X  1 X 1 X 2 > W˛ .ξ˛ ; θ / D W˛ .θ ; ξ˛ ξ˛ θ / D .θ ; W˛ ξ˛ ξ˛> θ / D .θ ; M θ /; N ˛D1 N ˛D1 N ˛D1

(2.4)

12


and this quadratic form is minimized by the unit eigenvector of M for the smallest eigenvalue. According to the theory of statistics, the weights W˛ are optimal if they are inversely proportional to the variance of each term, being small for uncertain terms and large for certain terms. Since .ξ˛ ; θ / = .ξN˛ ; θ / C .1 ξ˛ ; θ / C .2 ξ˛ ; θ / and .ξN˛ ; θ / = 0, the variance is EŒ.ξ˛ ; θ /2  D EŒ.θ ; 1 ξ˛ 1 ξ˛> θ / D .θ ; EŒ1 ξ˛ 1 ξ˛> θ / D  2 .θ ; V0 Œξ˛ θ /;

(2.5)

omitting higher-order noise terms. us, we should let W˛ = 1=.θ ; V0 Œξ˛ θ /, but θ is not known yet. So, we instead use the weights W˛ determined in the preceding iteration to compute θ and update the weights as in Eq. (2.3). Let us call the solution θ computed in the initial iteration the P 2 initial solution. Initially, we let W˛ = 1, so Eq. (2.4) implies that we are minimizing N ˛D1 .ξ˛ ; θ / . In other words, the initial solution is the least-squares solution, from which the iterations start. e phrase “up to sign” in Step 4 reflects the fact that eigenvectors have sign indeterminacy. So, we align θ and θ0 by reversing the sign θ θ whenever .θ ; θ0 / < 0.
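For illustration, the reweighting loop can be sketched as follows. This is our own sketch, not the authors' reference code, and it assumes the xi_vector and normalized_covariance helpers sketched in Chapter 1.

import numpy as np

def fit_ellipse_iterative_reweight(points, f0=600.0, tol=1e-8, max_iter=100):
    """Procedure 2.1 sketch: weights W = 1 / (theta, V0[xi] theta), updated each pass."""
    xi = np.array([xi_vector(x, y, f0) for x, y in points])
    V0 = np.array([normalized_covariance(x, y, f0) for x, y in points])
    W = np.ones(len(points))
    theta0 = np.zeros(6)
    for _ in range(max_iter):
        M = (W[:, None, None] * np.einsum('ai,aj->aij', xi, xi)).mean(axis=0)   # Eq. (2.1)
        eigvals, eigvecs = np.linalg.eigh(M)
        theta = eigvecs[:, 0]                        # smallest eigenvalue of M
        if theta @ theta0 < 0:
            theta = -theta                           # resolve the sign indeterminacy
        if np.linalg.norm(theta - theta0) < tol:
            break
        W = 1.0 / np.einsum('i,aij,j->a', theta, V0, theta)                     # Eq. (2.3)
        theta0 = theta
    return theta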

2.2

RENORMALIZATION AND THE TAUBIN METHOD

It is well known that the iterative reweight method has a large bias, in particular when the input elliptic arc is short. As a result, we often obtain a smaller ellipse than expected. The following renormalization was introduced to remedy this.

Procedure 2.2

(Renormalization)

1. Let θ0 = 0 and W˛ = 1, ˛ = 1, ..., N . 2. Compute the 6  6 matrices

MD

N 1 X W˛ ξ˛ ξ˛> ; N ˛D1

ND

N 1 X W˛ V0 Œξ˛ : N ˛D1

(2.6)

3. Solve the generalized eigenvalue problem (2.7)

M θ D N θ ;

and compute the unit generalized eigenvector θ for the generalized eigenvalue  of the smallest absolute value. 4. If θ  θ0 up to sign, return θ and stop. Else, update W˛

and go back to Step 2.

1 ; .θ ; V0 Œξ˛ θ /

θ0

θ;

(2.8)

2.3. HYPER-RENORMALIZATION AND HYPERLS

13

Comments. is method was motivated as follows. Since the true values ξN˛ satisfy .ξN˛ ; θ / = 0, we N θ = 0, where M N is the true value of the matrix M in Eq. (2.6) defined can immediately see that M N , the solution θ is obtained as its unit eigenvector for by the true values ξN˛ . us, if we know M N eigenvalue 0. However, M is unknown, so we estimate it. e expectation of M is EŒM  D EŒ

N N X 1 X N C EŒ 1 W˛ .ξN˛ C ξ˛ /.ξN˛ C ξ˛ />  D M W˛ ξ˛ ξ˛>  N ˛D1 N ˛D1

N N 1 X 1 X > N N DMC W˛ EŒξ˛ ξ˛  D M C W˛  2 V0 Œξ˛  N ˛D1 N ˛D1 N C  2N : DM

(2.9)

N = EŒM   2 N  M  2 N , we solve .M  2 N /θ = 0, or M θ =  2 N θ , for Since M some unknown  2 , which is supposed to be very small. us, we solve Eq. (2.7) for the smallest absolute value . As is well known in linear algebra, solving Eq. (2.7) is equivalent to minimizing the quadratic form .θ ; M θ / subject to the constraint .θ ; N θ / = constant. Since we start from P PN N 2 W˛ = 1, the initial solution minimizes ˛D1 .ξ˛ ; θ / subject to .θ ; ˛D1 V0 Œξ˛  θ / = constant. is is known as the Taubin method (,! Problem 2.1). Standard numerical tools for solving the generalized eigenvalue problem in the form of Eq. (2.7) assume that N is positive definite (some eigenvalues are negative). However, Eq. (1.21) implies that the sixth column and the sixth row of the matrix V0 Œξ˛  all consist of zero, so N is not positive definite. On the other hand, Eq. (2.7) is equivalently written as

Nθ D

1 M θ: 

(2.10)

If the data contain noise, the matrix M is positive definite, so we can apply a standard numerical tool to compute the unit generalized eigenvector θ for the generalized eigenvalue 1= of the largest absolute value. e matrix M is not positive definite only when there is no noise. We need not consider that case in practice, but if M happens to have eigenvalue 0, which implies that the data are all exact, the corresponding unit eigenvector θ is exactly the true solution.

2.3

HYPER-RENORMALIZATION AND HYPERLS

According to experiments, the accuracy of the Taubin method is higher than iterative reweight, and renormalization has even higher accuracy. The accuracy can be further improved by the following hyper-renormalization.

Procedure 2.3

(Hyper-renormalization)

1. Let θ0 = 0 and W˛ = 1, ˛ = 1, ..., N .

14


2. Compute the 6  6 matrices N 1 X MD W˛ ξ˛ ξ˛> ; N ˛D1 N   1 X ND W˛ V0 Œξ˛  C 2S Œξ˛ e>  N ˛D1 N  1 X 2 > W . ξ ; M ξ /V Œ ξ  C 2 S ŒV Œ ξ  M ξ ξ  ; ˛ ˛ 0 ˛ 0 ˛ ˛ 5 5 ˛ N 2 ˛D1 ˛

(2.11)

where S Œ   is the symmetrization operator (S ŒA = .A C A> /=2), and e is the vector

e D .1; 0; 1; 0; 0; 0/> :

(2.12)

e matrix M5 is the pseudoinverse of M of truncated rank 5. 3. Solve the generalized eigenvalue problem (2.13)

M θ D N θ ;

and compute the unit generalized eigenvector θ for the generalized eigenvalue  of the smallest absolute value. 4. If θ  θ0 up to sign, return θ and stop. Else, update W˛

1 ; .θ ; V0 Œξ˛ θ /

θ0

θ;

(2.14)

and go back to Step 2. Comments. is method was obtained in an effort to improve renormalization by modifying the matrix N in Eq. (2.7) so that the resulting solution θ has the highest accuracy. It has been found by rigorous error analysis that the choice of N in Eq. (2.11) attains the highest accuracy (the derivation of Eq. (2.11) will be given in Chapter 8). e vector e in Eq. (2.12) is defined in such a way that EŒ2 ξ˛  D  2 e (2.15)

holds for the second-order noise term 2 ξ˛ . From Eqs. (1.18) and (1.20), we obtain Eq. (2.12). e pseudoinverse M5 of truncated rank 5 is computed from the eigenvalues 1      6 of M and the corresponding unit eigenvectors θ1 , ..., θ6 of M in the form

M5 D

1 1 θ1 θ1> C    C θ5 θ5> ; 1 5

(2.16)

2.4. SUMMARY

15

where the term for 6 and θ6 is removed. e matrix N in Eq. (2.11) is not positive definite, but we can use a standard numerical tool by rewriting Eq. (2.13) in the form of Eq. (2.10) and computing the unit generalized eigenvector for the generalized eigenvalue 1= of the largest absolute P 2 value. e initial solution minimizes N ˛D1 .ξ˛ ; θ / subject to .θ ; N θ / = constant for the matrix N obtained by letting W˛ = 1 in Eq. (2.11). is corresponds to the method called HyperLS (,! Problem 2.2).

2.4

SUMMARY

We have seen that all the above methods compute the θ that satisfies

M θ D N θ ;

(2.17)

where the matrices M and N are defined in terms of the observations ξ˛ and the unknown θ . Different choices of them lead to different methods: 8 N ˆ 1X ˆ ˆ ξ˛ ξ˛> ; (least squares, Taubin, HyperLS) ˆ < N ˛D1 MD N ˆ 1X ξ˛ ξ˛> ˆ ˆ ˆ ; (iterative reweight, renormalization, hyper-renormalization) : N .θ ; V0 Œξ˛ θ / ˛D1 (2.18) 8 I (identity), (least squares, iterative reweight) ˆ ˆ ˆ ˆ N ˆ X 1 ˆ ˆ ˆ V0 Œξ˛ ; (Taubin) ˆ ˆ N ˛D1 ˆ ˆ ˆ ˆ N ˆ X ˆ V0 Œξ˛  ˆ ˆ 1 (renormalization) ˆ ˆ ˆ N ˛D1 .θ ; V0 Œξ˛ θ /; ˆ ˆ ˆ ˆ N  ˆ ˆ 1 X ˆ > ˆ V Œ ξ  C 2 S Œ ξ e  0 ˛ ˛ ˆ ˆ ˆ N ˛D1 < N  ND 1 X > ˆ ˆ . ξ ; M ξ /V Œ ξ  C 2 S ŒV Œ ξ  M ξ ξ  ; ˛ ˛ 0 ˛ 0 ˛ ˛ ˆ ˛ 5 5 ˆ N 2 ˛D1 ˆ ˆ ˆ ˆ ˆ (HyperLS) ˆ ˆ ˆ N ˆ   X ˆ 1 1 ˆ ˆ ˆ V0 Œξ˛  C 2S Œξ˛ e>  ˆ ˆ N ˛D1 .θ ; V0 Œξ˛ θ / ˆ ˆ ˆ ˆ N   ˆ ˆ 1 X 1 ˆ > ˆ . ξ ; M ξ /V Œ ξ  C 2 S ŒV Œ ξ  M ξ ξ  : ˆ ˛ ˛ 0 ˛ 0 ˛ ˛ ˛ 5 5 ˆ ˆ N 2 ˛D1 .θ ; V0 Œξ˛ θ /2 ˆ ˆ : (hyper-renormalization) (2.19)

16

2. ALGEBRAIC FITTING

For least squares, Taubin, and HyperLS, the matrices M and N do not contain the unknown θ , so Eq. (2.17) is a generalized eigenvalue problem, which can be directly solved without iterations. For other methods (iterative reweight, renormalization, and hyper-renormalization), the unknown θ is contained in the denominators in the expressions of M and N . Letting the part that contains θ be 1=W˛ , we compute W˛ using the value of θ obtained in the preceding iteration and solve the generalized eigenvalue problem in the form of Eq. (2.17). We then use the resulting θ to update W˛ and repeat this process. According to experiments, HyperLS has comparable accuracy to renormalization, and hyper-renormalization has even higher accuracy. Since the hyper-renormalization iteration starts from the HyperLS solution, the convergence is very fast; usually, three to four iterations are sufficient. Numerical comparison of the accuracy and efficiency of these methods is shown in Chapter 6.

2.5

SUPPLEMENTAL NOTE

e iterative reweight was first presented by Sampson [1982]. e renormalization scheme was proposed by Kanatani [1993b]; the details are discussed in Kanatani [1996]. e method of Taubin was proposed by Taubin [1991], but his derivation was rather heuristic without considering the statistical properties of image noise. As mentioned in the Supplemental Note of Chapter 1 (page 6), the HyperLS was obtained in an effort to improve the algebraic distance minimization P 2 of N .θ ; N θ / = 1. e least squares is for N = I , while ˛D1 .ξ˛ ; θ / subject to the normalization P the Taubin method is for N = .1=N / N V Œ ˛D1 0 ξ˛ . en, what N is the best? After detailed error analysis, the optimal N found by the authors’ group [Kanatani and Rangarajan, 2011, Kanatani et al., 2011, Rangarajan and Kanatani, 2009] turned out to be the value of N in Eq. (2.11) with W˛ = 1. is HyperLS method was then generalized to the iterative hyper-renormalization scheme also by the authors’ group [Kanatani et al., 2012, 2014]. e fact that all known algebraic methods can be written in the form of Eq. (2.17) was pointed out in Kanatani et al. [2014]. Since noise is regarded as random, the observed data are random variables having probability distributions. Hence, the computed value θ is also a random variable having its probability distribution. e choice of the matrices M and N affects the probability distribution of the computed θ . According to the detailed error analysis of Kanatani et al. [2014] (which we will discuss in Chapter 8), the choice of M controls the covariance of θ (Fig. 2.1a), while the choice of N controls its bias (Fig. 2.1b). If we choose M to be the second row of Eq. (2.18), the covariance matrix of θ attains the theoretical accuracy limit called the KCR lower bound (this will be discussed in Chapter 10) except for O. 4 / terms. If we choose N to be the fifth row of Eq. (2.19), the bias of θ is 0 except for O. 4 / terms (this will be shown in Chapter 8). Hence, the accuracy of hyper-renormalization cannot be improved any further except for higher order terms in  . It indeed fits a very accurate ellipse, as will be demonstrated in the experiments in Chapter 6.

2.5. SUPPLEMENTAL NOTE

θ

θ

(a)

θ

θ

(b)

Figure 2.1: (a) e matrix M controls the covariance of θ. (b) e matrix N controls the bias of θ.

PROBLEMS 2.1.

Write down the procedure of the Taubin method.

2.2.

Write down the HyperLS procedure.

17

19

CHAPTER 3

Geometric Fitting

We consider geometric fitting, i.e., computing an ellipse that passes near data points as closely as possible. We first show that the closeness to the data points is measured by a function called the "Sampson error." Then, we give a computational procedure, called "FNS," that minimizes it. Next, we describe a procedure for exactly minimizing the sum of squares from the data points, called the "geometric distance," iteratively using the FNS procedure. Finally, we show how the accuracy can be further improved by a scheme called "hyperaccurate correction."

3.1

GEOMETRIC DISTANCE AND SAMPSON ERROR

Geometric fitting refers to computing the ellipse that minimizes the sum of squares of the distance d˛ from observed points .x˛ ; y˛ / to the ellipse (Fig. 3.1). Let .xN ˛ ; yN˛ / be the point on the ellipse closest to .x˛ ; y˛ /. We compute the ellipse that minimizes N 1 X SD .x˛ N ˛D1

2

xN ˛ / C .y˛

2

yN˛ /



N 1 X 2 D d ; N ˛D1 ˛

(3.1)

which is known as the geometric distance. Strictly speaking, it should be called the “square geometric distance,” because it has the dimension of square length. However, the term “geometric distance” is commonly used for simplicity. Here, the notations xN ˛ and yN˛ are used in a slightly different sense from the previous chapters. In the geometric fitting context, they are the variables for estimating the true values of x˛

(x α , yα)

(x α , yα )

Figure 3.1: We fit an ellipse such that the geometric distance, i.e., the sum of the square distances of the data points to the ellipse, is minimized.

20


and y˛ , rather than their true values themselves. We use this convention for avoiding overly introducing new symbols is also applies to Eq. (1.16), i.e., we now regard x˛ and y˛ as the variables for estimating the discrepancy between observed and estimated positions, rather than P 2 2 stochastic quantities. Hence, Eq. (3.1) is also written as S = .1=N / N ˛D1 .x˛ C y˛ /. 2 If the point .x˛ ; y˛ / is close to the ellipse, the square distance d˛ is written except for high order terms in x˛ and y˛ as follows (see Section 3.5 below): d˛2 D .x˛

xN ˛ /2 C .y˛

yN˛ /2 

.ξ˛ ; θ /2 : .θ ; V0 Œξ˛ θ /

(3.2)

Hence, the geometric distance in Eq. (3.1) can be approximated by J D

N 1 X .ξ˛ ; θ /2 ; N ˛D1 .θ ; V0 Œξ˛ θ /

(3.3)

which is known as the Sampson error .
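For reference, the Sampson error of Eq. (3.3) is easy to evaluate numerically. The small sketch below is our own illustration (the function name is ours), assuming the xi_vector and normalized_covariance helpers sketched in Chapter 1.

import numpy as np

def sampson_error(theta, points, f0=600.0):
    """Sampson error J of Eq. (3.3) for an ellipse theta = (A, B, C, D, E, F)."""
    J = 0.0
    for x, y in points:
        xi = xi_vector(x, y, f0)
        V0 = normalized_covariance(x, y, f0)
        J += (xi @ theta) ** 2 / (theta @ V0 @ theta)
    return J / len(points)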

3.2

FNS

A well-known method for minimizing the Sampson error of Eq. (3.3) is the FNS (Fundamental Numerical Scheme). It goes as follows. Procedure 3.1

(FNS)

1. Let θ = θ0 = 0 and W˛ = 1, ˛ = 1, ..., N . 2. Compute the 6  6 matrices

MD

N 1 X W˛ ξ˛ ξ˛> ; N ˛D1

3. Compute the 6  6 matrix

LD

XDM

N 1 X 2 W .ξ˛ ; θ /2 V0 Œξ˛ : N ˛D1 ˛

(3.4)

L:

(3.5)

4. Solve the eigenvalue problem (3.6)

Xθ D θ ; and compute the unit eigenvector θ for the smallest eigenvalue . 5. If θ  θ0 up to sign, return θ and stop. Else , update W˛

and go back to Step 2.

1 ; .θ ; V0 Œξ˛ θ /

θ0

θ;

(3.7)

3.3. GEOMETRIC DISTANCE MINIMIZATION

21

Comments. is scheme solves rθ J = 0 for the Sampson error J , where rθ J is the gradient of J , i.e., the vector whose i th component is @J =@i . Differentiating Eq. (3.3), we obtain the following expression (,! Problem 3.1): rθ J D 2.M

(3.8)

L/θ D 2Xθ :

Here, M , L, and X are the matrices given by Eqs. (3.4) and (3.5). After the iterations have converged,  = 0 holds (see Proposition 3.5 below). Hence, Eq. (3.8) implies that we obtain the value θ that satisfies rθ J = 0. Since we start with θ = 0, the matrix L in Eq. (3.4) is initially the zero matrix O (all elements are 0), so Eq. (3.6) reduces to M θ = θ . us, the FNS iterations start from the least-squares solution.

3.3

GEOMETRIC DISTANCE MINIMIZATION

Once we have computed θ by the above FNS, we can iteratively modify it so that it strictly minimizes the geometric distance of Eq. (3.1). To be specific, we modify the data ξ˛ , using the current solution θ , and minimize the resulting Sampson error to obtain the new solution θ . After a few iterations, the Sampson error coincides with the geometric distance. e procedure goes as follows. Procedure 3.2

(Geometric distance minimization)

1. Let J0 = 1 (a sufficiently large number), xO ˛ = x˛ , yO˛ = y˛ , and xQ ˛ = yQ˛ = 0, ˛ = 1, ..., N . 2. Compute the normalized covariance matrix V0 ŒξO˛  obtained by replacing xN ˛ and yN˛ in the definition of V0 Œξ˛  in Eq. (1.21) by xO ˛ and yO˛ , respectively. 3. Computing the following modified data ξ˛ : 0 xO ˛2 C 2xO ˛ xQ ˛ B 2.xO yO C yO xQ C xO yQ / ˛ ˛ ˛ ˛ ˛ ˛ B B 2 y O C 2 y O y Q B ˛ ˛ ξ˛ D B ˛ B 2f0 .xO ˛ C xQ ˛ / B @ 2f0 .yO˛ C yQ˛ / f02

1

C C C C C: C C A

(3.9)

4. Compute the value θ that minimizes the following modified Sampson error : N 1 X .ξ˛ ; θ /2 J D : N ˛D1 .θ ; V0 ŒξO˛ θ / 

(3.10)

22

3. GEOMETRIC FITTING

5. Update xQ ˛ and yQ˛ as follows: 

xQ ˛ yQ˛



2.ξ˛ ; θ / .θ ; V0 ŒξO˛ θ /



1 2

2 3

4 5



0

1 xO ˛ @ yO˛ A : f0

(3.11)

6. Update xO ˛ and yO˛ as follows: xO ˛



xQ ˛ ;

yO˛



(3.12)

yQ˛ :

7. Compute J D

N 1 X 2 .xQ C yQ˛2 /: N ˛D1 ˛

If J   J0 , return θ and stop. Else, let J0

(3.13)

J  , and go back to Step 2.

Comments. Since xO ˛ = x˛ , yO˛ = y˛ , and xQ ˛ = yQ˛ = 0 in the initial iteration, the value ξ˛ of Eq. (3.9) is the same as ξ˛ . Hence, the modified Sampson error J  in Eq. (3.10) is the same as the Sampson error J in Eq. (3.3). Now, we rewrite Eq. (3.1) in the form N 1 X .xO ˛ C .x˛ xO ˛ / xN ˛ /2 C .yO˛ C .y˛ yO˛ / N ˛D1 N  1 X .xO ˛ C xQ ˛ xN ˛ /2 C .yO˛ C yQ˛ yN˛ /2 ; D N ˛D1

SD

yN˛ /2



(3.14)

where xQ ˛ D x˛

xO ˛ ;

yQ˛ D y˛

yO˛ :

(3.15)

In the next iteration, we regard the corrected values .xO ˛ ; yO˛ / as the input data and minimize Eq. (3.14). Let .xOO ˛ ; yOO˛ / be the resulting solution. Ignoring high-order small terms in xO ˛ xN ˛ and yO˛ yN˛ and rewriting Eq. (3.14), we obtain the modified Sampson error J  in Eq. (3.10) (see Proposition 3.7 below). We minimize this, regarding .xOO ˛ ; yOO˛ / as .xO ˛ ; yO˛ /, and repeat the same process. Since the current .xO ˛ ; yO˛ / are the best approximation of .xN ˛ ; yN˛ /, Eq. (3.15) implies that xQ ˛2 C yQ˛2 is the corresponding approximation of .xN ˛ x˛ /2 C .yN˛ y˛ /2 . So, we use (3.13) to evaluate the geometric distance S . Since the ignored high order terms decrease after each iteration, we obtain in the end the value θ that minimizes the geometric distance S , and Eq. (3.13) coincides with S . According to experiments, however, the correction of θ by this procedure is very small: the three or four significant digits are unchanged in typical problems; the corrected ellipse is indistinguishable from the original one when displayed or plotted. us, we can practically identify FNS with a method to minimize the geometric distance.

3.4. HYPERACCURATE CORRECTION

3.4

23

HYPERACCURATE CORRECTION

e geometric method, whether it is geometric distance minimization or Sampson error minimization, is known to have small statistical bias, meaning that the expectation EŒθ  does not completely agree with the true value θN . We express the computed solution θO in the form

θO D θN C 1 θ C 2 θ C    ;

(3.16)

where k θ is a k th order expression in x˛ and y˛ . Since 1 θ is linear in x˛ and y˛ , we have EŒ1 θ  = 0. However, we have EŒ2 θ  ¤ 0 in general. If we are able to evaluate EŒ2 θ  in an analytic form by doing error analysis, it is expected that the subtraction

θQ D θO

EŒ2 θ 

(3.17)

will result in a higher accuracy than θO ; its expectation is EŒθQ  = θN C O. 4 /, where  is the noise level. Note that 3 θ is a third-order expression in x˛ and y˛ , so EŒ3 θ  = 0. is operation for increasing accuracy by subtracting EŒ2 θ  is called hyperaccurate correction. e procedure goes as follows. Procedure 3.3

(Hyperaccurate correction)

1. Compute θ by FNS (Procedure 3.1). 2. Estimate  2 in the form

.θ ; M θ / ; (3.18) 1 5=N where M is the value of the matrix M in Eq. (3.4) after the FNS iterations have converged. O 2 D

3. Compute the correction term c θ D

N N X X O 2 O 2 W˛ .e; θ /ξ˛ C 2 M5 W˛2 .ξ˛ ; M5 V0 Œξ˛ θ /ξ˛ ; M5 N N ˛D1 ˛D1

(3.19)

where W˛ is the value of W˛ in Eq. (3.7) after the FNS iterations have converged, and e is the vector in Eq. (2.12). e matrix M5 is the pseudoinverse of M of truncated rank 5 given by Eq. (2.16). 4. Correct θ into

θ

N Œθ

c θ ;

(3.20)

where N Œ   denotes normalization to unit norm (N Œa = a=kak). Comments. e derivation of Eq. (3.19) is given in Chapter 9. According to experiments, the accuracy of geometric distance minimization is higher than renormalization but lower than hyperrenormalization. It is known, however, that after the above hyperaccurate correction the accuracy improves and is comparable to hyper-renormalization, as will be demonstrated in Chapter 6.

24

3. GEOMETRIC FITTING

3.5

DERIVATIONS

If we define the vector x and the symmetric matrix Q by 0

1 x=f0 x D @ y=f0 A ; 1

0

A QD@ B D

B C E

1 D E A; F

(3.21)

the ellipse equation of Eq. (1.1) is written in the form .x; Qx/ D 0:

(3.22)

Note that the scaling factor f0 in Eq. (3.21) makes the three component have the order O.1/, stabilizing finite-length numerical computation. In the following, we define 1 0 k  @ 0 A; 1 0

0

1 1 0 0 Pk  @ 0 1 0 A : 0 0 0

(3.23)

us, k is the unit vector orthogonal to the xy plane, and Pk v for any vector v is the projection of v onto the xy plane, replacing the third component of v by 0. Evidently, Pk k = 0.

e square distance d˛2 of point .x˛ ; y˛ / from the ellipse of Eq. (3.22) is written, up to high-order small noise terms, in the form Proposition 3.4 (Distance to ellipse)

d˛2 

f02 .x˛ ; Qx˛ / : 4 .Qx˛ ; Pk Qx˛ /

(3.24)

Proof . Let .xN ˛ ; yN˛ / be the closest point on the ellipse from .x˛ ; y˛ /. Let xN ˛ and x˛ be their vector representations as in Eq. (3.21). If we write x˛ D x˛

xN ˛ ;

(3.25)

the distance d˛ between the two points is f0 kx˛ k. We compute the value x˛ that minimizes kx˛ k2 . Since the point xN ˛ is on the ellipse, .x˛

x˛ ; Q.x˛

x˛ // D 0

(3.26)

holds. Expanding the left side and ignoring second-order terms in x˛ , we obtain .Qx˛ ; x˛ / D

1 .x˛ ; Qx˛ /: 2

(3.27)

3.5. DERIVATIONS

25

Since the third components of x˛ and xN ˛ are both 1, the third component of x˛ is 0. is constraint is written as .k; x˛ / = 0, using the vector k in Eq. (3.23). Introducing Lagrange multipliers, we differentiate kx˛ k2

 ˛ .Qx˛ ; x˛ /

 1 .x˛ ; Qx˛ / 2

.k; x˛ /;

(3.28)

with respect to x˛ . Letting the result be 0, we obtain 2x˛

˛ Qx˛

(3.29)

k D 0:

Multiplying Pk on both sides and noting that Pk x˛ = x˛ and Pk k = 0, we obtain ˛ Pk Qx˛ : 2

(3.30)

1 ˛ Pk Qx˛ / D .x˛ ; Qx˛ /; 2 2

(3.31)

.x˛ ; Qx˛ / : .Qx˛ ; Pk Qx˛ /

(3.32)

.x˛ ; Qx˛ /Pk Qx˛ : 2.Qx˛ ; Pk Qx˛ /

(3.33)

x˛ D

Substituting this into Eq. (3.27), we have .Qx˛ ;

from which ˛ is given in the form ˛ D

Hence, x˛ is given by x˛ D

us, we obtain kx˛ k2 D

.x˛ ; Qx˛ /2 kPk Qx˛ k2 .x˛ ; Qx˛ /2 ; D 2 4.Qx˛ ; Pk Qx˛ / 4.Qx˛ ; Pk Qx˛ /

(3.34)

where we have noted that Pk2 = Pk from the definition of Pk . We have also used the identity kPk Qx˛ k2 D .Pk Qx˛ ; Pk Qx˛ / D .Qx˛ ; Pk2 Qx˛ / D .Qx˛ ; Pk Qx˛ /:

Hence, d˛2 = f02 kx˛ k2 is expressed in the form of Eq. (3.24).

(3.35) 

From the definition of the vectors ξ and θ and the matrices Q and V0 Œξ˛  (.xN ˛ ; yN˛ / in Eq. (1.21) are replaced by .x˛ ; y˛ /), we can confirm after substitution and expansion that the following identities hold: .x˛ ; Qx˛ / D

1 .ξ˛ ; θ /; f02

.Qx˛ ; Pk Qx˛ / D

1 .θ ; V0 Œξ˛ θ /: 4f02

(3.36)

26

3. GEOMETRIC FITTING

is means that Eq. (3.24) is equivalently written as Eq. (3.2).

e eigenvalue  in Eq. (3.6) is 0 after the FNS iterations have converged. Proof . Computing the inner product of Eq. (3.6) and θ on both sides, we see that .θ ; Xθ / = kθ k2 = . If the iterations have converged, Eq. (3.7) means that W˛ = 1=.θ ; V0 Œξ˛ θ / holds. Hence, Proposition 3.5 (Eigenvalue of FNS)

.θ ; Xθ / D .θ ; M θ / D

N 1 X .θ ; ξ˛ ξ˛> θ / .θ ; Lθ / D N ˛D1 .θ ; V0 Œξ˛ θ /

N 1 X .ξ˛ ; θ /2 N ˛D1 .θ ; V0 Œξ˛ θ /

N 1 X .ξ˛ ; θ /2 .θ ; V0 Œξ˛ θ / N ˛D1 .θ ; V0 Œξ˛ θ /2

N 1 X .ξ˛ ; θ /2 D 0; N ˛D1 .θ ; V0 Œξ˛ θ /

which means  = 0.



Lemma 3.6 (Geometric distance minimization) For the ellipse given by Eq. (3.22), the position of the point .xN ˛ ; yN˛ / that minimizes Eq. (3.14), which we write .xOO ˛ ; yOO˛ /, is written as

xOO ˛ yOO˛

!

D



x˛ y˛



.xO ˛ ; QxO 0˛ / C 2.QxO 0˛ ; xQ ˛ / 2.QxO ˛ ; Pk QxO ˛ /



A B B C

D E



1 xO ˛ @ yO˛ A ; f0 0

(3.37)

if high-order terms in xO ˛ xN ˛ and yO˛ yN˛ are ignored, where A, B , C , D , and E are the elements of the matrix Q in Eq. (3.21). e vectors xO ˛ and xQ ˛ are defined by 1 0 1 0 xQ ˛ =f0 xO ˛ =f0 (3.38) xQ ˛ D @ yQ˛ =f0 A ; xO ˛ D @ yO˛ =f0 A ; 0 1 where .xQ ˛ ; yQ˛ / are defined by Eq. (3.15). Proof . If we let xO ˛ D xO ˛ xN ˛ ; (3.39) PN the sum of squares ˛D1 kxQ ˛ C xO ˛ k2 equals the geometric distance S of Eq. (3.14) divided by f02 . Since xN ˛ satisfies the ellipse equation, we have .xO ˛

xO ˛ ; Q.xO ˛

xO ˛ // D 0:

(3.40)

Expanding this and ignoring quadratic terms in xO ˛ , we obtain .QxO ˛ ; xO ˛ / D

1 .xO ˛ ; QxO 0˛ /: 2

(3.41)

3.5. DERIVATIONS

27

Since the third components of xO ˛ and xN ˛ are both 1, the third component of xO ˛ is 0, i.e., .k; xO ˛ / = 0. Introducing Lagrange multipliers, we differentiate N X ˛D1

kxQ ˛ C xO ˛ k

2

N X ˛D1

 ˛ .QxO ˛ ; xO ˛ /

 1 .xO ˛ ; QxO ˛ / 2

N X ˛D1

˛ .k; xO ˛ /:

(3.42)

with respect to xO ˛ . Letting the result be 0, we obtain 2.xQ ˛ C xO ˛ /

˛ QxO ˛

(3.43)

˛ k D 0:

Multiplying both sides by the matrix Pk of Eq. (3.23) and noting that Pk xQ ˛ = xQ ˛ from the definition of xQ ˛ , we obtain 2xQ ˛ C 2xO ˛

from which we obtain xO ˛ D

(3.44)

˛ Pk QxO ˛ D 0;

˛ Pk QxO ˛ 2

xQ ˛ :

(3.45)

1 xQ ˛ / D .xO ˛ ; QxO ˛ /; 2

(3.46)

Substituting this into Eq. (3.41), we obtain .QxO ˛ ;

˛ Pk QxO ˛ 2

from which ˛ is given in the form ˛ D

Hence,

 xO ˛ D

.xO ˛ ; QxO ˛ / C 2.QxO ˛ ; xQ ˛ / : .QxO ˛ ; Pk QxO ˛ /

 .xO ˛ ; QxO ˛ / C 2.QxO ˛ ; xQ ˛ / Pk QxO ˛ 2.QxO ˛ ; Pk QxO ˛ /

(3.47)

xQ ˛ :

It follows that xN ˛ is estimated to be   .xO ˛ ; QxO ˛ / C 2.QxO ˛ ; xQ ˛ / Pk QxO ˛ xOO ˛ D x˛ : 2.QxO ˛ ; Pk QxO ˛ / Rewriting this, we obtain Eq. (3.37).

(3.48)

(3.49) 

If we replace .xN ˛ ; yN˛ / in Eq. (3.1) by .xOO ˛ ; yOO˛ / and use the 9-D vector in Eq. (3.9), the geometric distance S is written in the form of Eq. (3.10). Proof . In correspondence with Eq. (3.36), we can confirm, by substitution and expansion, the identities 1 .xO ˛ ; QxO ˛ / C 2.QxO ˛ ; xQ ˛ / D 2 .ξ˛ ; θ / (3.50) f0 Proposition 3.7 (Modified Sampson error)

ξ˛

28

3. GEOMETRIC FITTING

.QxO ˛ ; Pk QxO ˛ / D

1 .θ ; V0 ŒξO˛ θ / 4f02

(3.51)

hold, where V0 ŒξO˛  is the normalized covariance matrix obtained by replacing .x; N y/ N in Eq. (1.21) by .xO ˛ ; yO˛ /. Hence, Eq. (3.37) can be written in the form 0 1 !     xO ˛  OxO ˛ 2.ξ˛ ; θ / x˛ 1 2 4 @ D (3.52) yO˛ A : O O y  yO˛ ˛ 2 3 5 .θ ; V0 Œξ˛ θ / f0 Note that 0 1 0 10 1   xO ˛ 1 2 4 xO ˛ =f0 1 2 4 @ yO˛ A D f0 @ 2 3 5 A @ yO˛ =f0 A D f0 Pk QxO ˛ : 2 3 5 f0 0 0 0 1

(3.53)

From Eq. (3.51), the square norm of this is f02 kPk QxO ˛ k2 D f02 .Pk QxO ˛ ; Pk QxO ˛ / D f02 .QxO ˛ ; Pk2 QxO ˛ / 1 D f02 .QxO ˛ ; Pk QxO ˛ / D .θ ; V0 ŒξO˛ θ /: (3.54) 4 Hence, if we replace .xN ˛ ; yN˛ / in Eq. (3.1) by .xOO ˛ ; yOO˛ / and substitute Eq. (3.37), we can approxi-

mate the geometric distance S in the form N 1 X S .x˛ N ˛D1

D

xOO ˛ /2 C .y˛

yOO˛ /2



N 1 X

D

N ˛D1

xOO ˛ yOO˛

!

N N 1 X 4.ξ˛ ; θ /2 .θ ; V0 ŒξO˛ θ / 1 X .ξ˛ ; θ /2 D : N ˛D1 .θ ; V0 ŒξO˛ θ /2 4 N ˛D1 .θ ; V0 ŒξO˛ θ /

us, we obtain Eq. (3.35).

3.6



x˛ y˛



2

(3.55) 

SUPPLEMENTAL NOTE

e FNS scheme for minimizing the Sampson error was introduced by Chojnacki et al. [2000]. ey called Eq. (3.3) the AML (approximated maximum likelihood ), but later the term “Sampson error” came to be used widely. is name originates from Sampson [1982], who proposed the iterative reweight scheme (Procedure 2.1 in Chapter 2) for minimizing it. It was later pointed out, however, that iterative reweight does not minimize Eq. (3.3), because it uses the value of W˛ computed in the preceding step of the iteration, which means that we are minimizing the numerator part with the denominators fixed. Hence, after the convergence of iterative reweight, we obtain a solution θ such that for any θ 0 ¤ θ N N 1 X .ξ˛ ; θ 0 /2 1 X .ξ˛ ; θ /2  : N ˛D1 .θ ; V0 Œξ˛ θ / N ˛D1 .θ ; V0 Œξ˛ θ /

(3.56)

3.6. SUPPLEMENTAL NOTE

29

However, this does not mean that N N 1 X .ξ˛ ; θ /2 1 X .ξ˛ ; θ 0 /2  : N ˛D1 .θ ; V0 Œξ˛ θ / N ˛D1 .θ 0 ; V0 Œξ˛ θ 0 /

(3.57)

e FNS solution θ is guaranteed to satisfy this inequality. In the Step 4 of Procedure 3.1, we can compute the unit eigenvector for the eigenvalue of the smallest absolute value, but it was experimentally found by Chojnacki et al. [2000] and Kanatani and Sugaya [2007b] that the iterations converge faster using the (possibly negative) smallest eigenvalue. is is explained as follows. Equation (3.6) is written as .M

L

I /θ D 0:

(3.58)

e matrix L in Eq. (3.4) is generally positive semidefinite and approaches the zero matrix O in the course of the iterations; L is equal to O if the constraint .ξ˛ ; θ / = 0 exactly holds. Initially, however, L can be very different from O . is effect is better canceled if we choose a negative . ere exists an alternative iterative scheme closely resembling FNS called HEIV (heteroscedastic errors-in-variables method ) [Leedan and Meer, 2000, Matei and Meer, 2006] for minimizing the Sampson error of Eq. (3.3); the computed solution is the same as FNS. e general strategy of minimizing the geometric distance by repeating Sampson error minimization was presented by Kanatani and Sugaya [2010a], and its use for ellipse fitting was presented in Kanatani and Sugaya [2008]. When the noise distribution is Gaussian, the geometric distance minimization is equivalent to what is called in statistics maximum likelihood estimation (ML) (we discuss this in detail in Chapter 9). e fact that the maximum likelihood solution has statistical bias has been pointed out by many people. e cause of the bias is the fact that an ellipse is a convex curve: if a point on the curve is randomly displaced, the probability of falling outside is larger than the probability of falling inside. Given a data point, its closest point on the ellipse is the foot of the perpendicular line from it to the ellipse, and the probability as to on which side on the curve its true position is more likely to be depends on the shape and the curvature of the ellipse in the neighborhood. Okatani and Deguchi [2009a] analyzed this for removing the bias. ey also used a technique called projected score to remove bias [Okatani and Deguchi, 2009b]. e hyperaccurate correction described in this chapter was introduced by Kanatani [2006, 2008], and Kanatani and Sugaya [2013].

PROBLEMS 3.1.

Show that the derivative of Eq. (3.3) is given by Eq. (3.8).

31

CHAPTER

4

Robust Fitting We consider two typical cases where standard ellipse fitting techniques fail: existence of superfluous data and scarcity of information. In the former case, we discuss how to remove unwanted segments, or “outliners,” that do not belong to the ellipse under consideration. In the latter case, which occurs when the segment is too short or too noisy, a hyperbola can be fit. We describe methods that force the fit to be an ellipse, although accuracy is compromised.

4.1

OUTLIER REMOVAL

For fitting an ellipse to real image data, we must first extract a sequence of edge points from an elliptic arc. However, edge segments obtained by an edge detection filter may not belong to the same ellipse; some may be parts of other object boundaries. We call points of such segments outliers; those coming from the ellipse under consideration are called inliers. Note that we are not talking abou random dispersions of edge pixels due to image processing inaccuracy, although such pixels, if they exist, are also-called outliers. Our main concern is the parts of the extracted edge sequence that do not belong to the ellipse we are interested in (see Fig. 1.1). Methods that are not sensitive to the existence of outliers are said to be robust . e general idea of robust fitting is to find an ellipse such that the number of points close to it is as large as possible. en, those points that are not close to it are removed as outliers. A typical such method is RANSAC (Random Sample Consensus). e procedure goes as follows. Procedure 4.1

(RANSAC)

1. Randomly select five points from the input sequence, and let ξ1 , ξ2 , ..., ξ5 be their vector representations in the form of Eq. (1.8). 2. Compute the unit eigenvector θ of the matrix

M5 D

5 X

ξ˛ ξ˛> ;

(4.1)

˛D1

for the smallest eigenvalue, and store it as a candidate. 3. Let n be the number of points in the input sequence that satisfy .ξ ; θ /2 < d 2; .θ ; V0 Œξ θ /

(4.2)

32

4. ROBUST FITTING

where ξ is the vector representation of the point in the sequence in the form of Eq. (1.8), V0 Œξ  is the normalized covariance matrix defined by Eq. (1.21), and d is a threshold for admissible deviation from the fitted ellipse, e.g., d = 2 (pixels). Store that n. 4. Select a new set of five points from the input sequence, and do the same. Repeat this many times, and return from among the stored candidate ellipses the one for which n is the largest.

Comments. In Step 1, we select five integers over Œ1; N , using uniform random sampling; overlapped values are discarded. Step 2 computes the ellipse that passes through the five point. We may solve the simultaneous linear equations obtained by substituting the point coordinates to Eq. (1.1) for A, B , ..., F ; they are determined up to scale (and normalized afterward). However, using least squares as shown above is easier to program: we compute the unit eigenvector of the matrix M in the first row of Eq. (2.18); the coefficient 1=N is omitted, but the computed eigenvectors are the same. In Step 3, we measure the distance of the points from the ellipse by Eq. (3.2); by definition, both ξ and V0 Œξ  have the dimension of square length, so the left side of Eq. (4.2) also has the dimension of square length. We can directly go on to the next sampling if the count n is smaller than the stored count, and we can stop if the current value of n is not updated over a fixed number of sampling. In the end, those pixels that do not satisfy Eq. (4.2) for the chosen ellipse are removed as outliers. In Chapter 6, some image examples are shown to demonstrate how the RANSAC procedure fits a correct ellipse to partly elliptic (i.e., inlying) and partly non-elliptic (i.e., outlying) edge segments.

4.2

ELLIPSE-SPECIFIC FITTING

Equation (1.1) does not necessarily represent an ellipse; depending on the coefficients, it can represent an hyperbola or a parabola. In special cases, it may degenerate into two lines, or no real point .x; y/ may satisfy Eq. (1.1). It can be shown that Eq. (1.1) represents an ellipse if and only if AC

B 2 > 0:

(4.3)

We will discuss this further in Chapter 5. Usually, an ellipse is obtained when Eq. (1.1) is fitted to an edge point sequence obtained from an ellipse, but if the sequence does not have sufficient information, typically if it is too short or too noisy, a hyperbola may result; a parabola results only when the solution exactly satisfies AC B 2 = 0, but this possibility can be excluded in real situations. If a hyperbola results, we can ignore it, because it indicates data insufficiency. From a theoretical interest, however, schemes to force the fit to be an ellipse have been studied in the past. e best known is the following method of Fitzgibbon et al. [1999].

4.2. ELLIPSE-SPECIFIC FITTING

Procedure 4.2

33

(Method of Fitzgibbon et al.)

1. Compute the 6  6 matrices

0

B B B B N DB B B @

N 1 X MD ξ˛ ξ˛> ; N ˛D1

0 0 1 0 0 0

0 2 0 0 0 0

1 0 0 0 0 0

0 0 0 0 0 0

0 0 0 0 0 0

0 0 0 0 0 0

1

C C C C C: C C A

(4.4)

2. Solve the generalized eigenvalue problem

M θ D N θ ;

(4.5)

and return the unit generalized eigenvector θ for the generalized eigenvalue  of the smallest absolute value. P 2 Comments. is is an algebraic method, minimizing the algebraic distance N ˛D1 .ξ˛ ; θ / subject to the normalization AC B 2 = 1 so that Eq. (4.3) is always satisfied. is normalization is written as .θ ; N θ / = 1 using the matrix N in Eq. (4.4). Hence, the solution is obtained by solving Eq. (4.5), as pointed out in Chapter 2. However, the normalization matrix N is not positive definite. So, as in the case of renormalization and hyper-renormalization, we rewrite Eq. (4.5) in the form of Eq. (2.10) and use a standard numerical tool to compute the unit eigenvector θ for the generalized eigenvalue 1= of the largest absolute value. It is known that the accuracy of this method is rather low; often a small and flat ellipse is returned, as demonstrated in the experiments in Chapter 6.

Another way to always obtain an ellipse is first to apply a standard method, e.g., by hyperrenormalization, and modify the result to an ellipse if it is not an ellipse. A simple and effective method is the use of random sampling in the same way as RANSAC: we randomly sample five points from the input sequence, fit an ellipse to them, and repeat this many times to find the best ellipse. e actual procedure goes as follows. Procedure 4.3

(Random sampling)

1. Fit an ellipse without considering the ellipse condition. If the solution θ satisfies 1 3

22 > 0;

(4.6)

return it as the final result. 2. Else, randomly select five points from the input sequence, and let ξ1 , ξ2 , ..., ξ5 be the corresponding vectors of the form of Eq. (1.8).

34

4. ROBUST FITTING

3. Compute the unit eigenvector θ of the following matrix for the smallest eigenvalue:

M5 D

5 X

ξ˛ ξ˛> :

(4.7)

˛D1

4. If that θ does not satisfy Eq. (4.6), discard it and select a new set of five points randomly for fitting an ellipse. 5. If θ satisfies Eq. (4.6), store it as a candidate. Also, evaluate the corresponding Sampson error J of Eq. (3.3), and store its value. 6. Repeat this process many times, and return the value θ whose Sampson error J is the smallest. Comments. e random sampling in Step 2 is done in the same way as RANSAC. Step 2 computes the ellipse that passes through the five point as in RANSAC. In theory, this could fail in a pathological case, e.g., all the points are on a hyperbola so that any five points define a hyperbola. We ignore such an extreme case. In Step 5, if the evaluated J is larger than the stored J , we can directly go on to the next sampling without storing θ and J . We can stop the sampling if the current value of J is not updated consecutively after a fixed number of iterations. is method fits a better ellipse than the method of Fitzgibbon, et al., as shown in the experiment in Chapter 6, but still the resulting fit is not very close to the true shape when a standard method, such as hyper-renormalization, returns a hyperbola.

4.3

SUPPLEMENTAL NOTE

e RANSAC technique for outlier removal was proposed by Fischler and Bolles [1981]. Its principle is to randomly select a minimum set of data points to fit a curve (or any shape) many times, each time counting the number of the data points nearby, and to output the curve to which the largest number of data points are close. In other words, a small number of data points are allowed to be far from the curve; they are regarded as outliers. Alternatively, instead of counting the number of points nearby, we may sort the square distances of all the data points from the curve, evaluate the median, and output the curve for which the median is the smallest. is is called LMedS (least median of squares) [Rousseeuw and Leroy, 1987]. is also ignores those data points that are far from the curve. Instead of ignoring far apart points, we may use a distance function that does not penalize far apart points very much. Such a method is generally called M-estimation, for which various types of distance function have been studied [Huber, 2009]. An alternative approach for the same purpose is the use of the Lp -norm for p < 2, typically p = 1, as mentioned in the Supplemental Note of Chapter 1 (page 6). e ellipse-specific method of Fitzgibbon et al. was proposed by Fitzgibbon et al. [1999], and the method of random sampling was proposed by Masuzaki et al. [2013]. As an alternative

4.3. SUPPLEMENTAL NOTE

35

ellipse-specific method, Szpak et al. [2015] minimized the Sampson error of (3.3) with a penalty term such that it diverges to infinity if the solution approaches a hyperbola. As the experiment examples in Chapter 6 show, however, all these methods do not produce a satisfactory fit when standard methods return a hyperbola. If a hyperbola is obtained by, say, hyper-renormalization, it indicates the lack of information so that it does not make sense to fit an ellipse in the first place.

37

CHAPTER

5

Ellipse-based 3-D Computation Circular objects in the scene are projected as ellipses in the image. We show here how we can compute the 3-D properties of the circular objects from their images. We start with techniques for computing attributes of ellipses such as intersections, centers, tangents, and perpendiculars. en, we describe how to compute the position and orientation of a circle and its center in the scene from its image. is allows us to generate an image of the circle seen from the front. e underlying principle is the analysis of image transformations induced by hypothetical camera rotations around its viewpoint.

5.1

INTERSECTIONS OF ELLIPSES

e ellipse equation Ax 2 C 2Bxy C Cy 2 C 2f0 .Dx C Ey/ C f02 F D 0

(5.1)

can be written in the following vector form: 1 x=f0 x D @ y=f0 A ; 1 0

.x; Qx/ D 0;

0

A @ QD B D

B C E

1 D E A: F

(5.2)

However, Eq. (5.1) may not always represent an ellipse. It defines a smooth curve only when the matrix Q is nonsingular, i.e., jQj ¤ 0, where jQj denotes the determinant of Q; if jQj = 0, two real or imaginary lines result. If jQj ¤ 0, Eq. (5.1) describes an ellipse (possibly an imaginary ellipse such as x 2 C y 2 = 1), a parabola, or a hyperbola (,! Problem 5.1). In a general term, these curves are called conics. Suppose two ellipses .x; Q1 x/ = 0 and .x; Q2 x/ = 0 intersect. Here, we consider only real intersections; in the following, we do not consider imaginary points or imaginary curves/lines. e intersections of two ellipses are computed as follows. Procedure 5.1

(Ellipse intersections)

1. Compute one solution of the following cubic equation in : jQ1 C Q2 j D 0:

(5.3)

38

5. ELLIPSE-BASED 3-D COMPUTATION

D C A

B

Figure 5.1: e three pairs of lines {AC , BD }, {AB , CD }, and {AD , BC } passing through the four intersections A, B , C , and D of two ellipses.

2. Compute the two lines represented by the following quadratic equation in x and y for that : (5.4)

.x; .Q1 C Q2 /x/ D 0:

3. Return the intersection of each of the lines with the ellipse .x; Q1 x/ = 0 (or .x; Q2 x/ = 0). Comments. If two ellipses .x; Q1 x/ = 0 and .x; Q2 x/ = 0 intersect at x, it satisfies .x; Q1 x/ C .x; Q2 x/ = 0 for an arbitrary . is is a quadratic equation in x in the form of Eq. (5.4), which describes a curve or a line pair (possibly imaginary) that passes through all the intersections of the two ellipses. If we choose the  so that Eq. (5.3) is satisfied, Eq. (5.4) represents two real or imaginary lines (,! Problem 5.2). Since we are assuming that real intersections exist, we obtain two real lines by factorizing Eq. (5.4) (,! Problem 5.3). Hence, we can locate the intersections by computing the intersections of each line with either of the ellipses (,! Problem 5.4). If the two ellipses intersect at four points (Fig. 5.1), the cubic equation of Eq. (5.3) has three real roots, each of which defines a pair of lines; a pair of lines intersecting inside the ellipses and two pairs of lines intersecting outside the ellipses (or being parallel to each other). If the two ellipses intersect at two points, at least one real line among the six computed lines pass through them. Evidently, the above procedure can be applied if one (or both) of .x; Q1 x/ = 0 and .x; Q2 x/ = 0 is a hyperbola or a parabola, returning their intersections if they exist.

5.2

ELLIPSE CENTERS, TANGENTS, AND PERPENDICULARS

e center .xc ; yc / of the ellipse of Eq. (5.1) has the following coordinates: x c D f0

CD C BE ; AC B 2

y c D f0

BD AC

AE : B2

(5.5)

5.2. ELLIPSE CENTERS, TANGENTS, AND PERPENDICULARS

39

(x, y)

n1 x+n2 y+n3 f = 0 (x0, y0)

(xc, yc)

Figure 5.2: e center .xc ; yc / of the ellipse, the tangent line n1 x C n2 y C n3 f = 0 at point .x0 ; y0 / on the ellipse, and the perpendicular line from point .a; b/ to point .x0 ; y0 /.

ese are obtained by differentiating the ellipse equation F .x; y/ = 0 of Eq. (5.1) and solving @F=@x = 0 and @F=@y = 0: Ax C By C f0 D D 0;

(5.6)

Bx C Cy C f0 E D 0:

Using Eq. (5.5), we can write Eq. (5.1) in the form A.x

xc /2 C 2B.x

xc /.y

yc / C C.y

yc /2 D Axc2 C 2Bxc yc C Cyc2

f0 F:

(5.7)

e tangent line n1 x C n2 y C n3 f0 = 0 to the ellipse of Eq. (5.1) at point .x0 ; y0 / on it is given as follows (Fig. 5.2): n1 D Ax0 C By0 C Df0 ;

n2 D Bx0 C Cy0 C Ef0 ;

n3 D Dx0 C Ey0 C Ff0 :

(5.8)

Let x0 be the vector obtained from the vector x of Eq. (5.2) by replacing x and y by x0 and y0 , respectively, and let n be the vector with components n1 , n2 , and n3 . We can see that Eq. (5.8) implies n ' Qx0 , where ' denotes equality up to a nonzero constant; note that the same line is represented if multiplied by arbitrary nonzero constant. e expressions in Eq. (5.8) are easily obtained by differentiating Eq. (5.1) (,! Problem 5.5). e foot of the perpendicular line from point .a; b/ to the ellipse of Eq. (5.1), i.e., the closest point on it, is computed as follows (Fig. 5.2). Procedure 5.2

(Perpendicular to Ellipse)

1. Define the matrix 0

B DD@ .C A/=2 .Ab Ba C Ef0 /=2f0

.C .Bb

A/=2 B C a Df0 /=2f0

.Ab .Bb

1 Ba C Ef0 /=2f0 C a Df0 /=2f0 A : .Db Ea/=f0 (5.9)

40

5. ELLIPSE-BASED 3-D COMPUTATION

2. Compute the intersections .x0 ; y0 / of the ellipse .x; Qx/ = 0 and the quadratic curve .x; Dx/ = 0 by Procedure 5.1. 3. From among the computed intersections, return the point .x0 ; y0 / for which .a .b y0 /2 is the smallest.

x0 /2 C

Comments. Let .x0 ; y0 / be the foot of the perpendicular line from .a; b/ to the ellipse. e line passing through .a; b/ and .x0 ; y0 / is .b

y0 /x C .x0

a/y C .ay0

bx0 / D 0:

(5.10)

Let n1 x C n2 y C n3 f0 = 0 be the tangent line to the ellipse at .x0 ; y0 / on it. e condition that this tangent line is orthogonal to the above line is .b

y0 /n1 C .x0

a/n2 D 0:

(5.11)

Substituting Eq. (5.8), this condition is rearranged in the form Bx02 C .C A/x0 y0

By02 C .Ab BaCEf0 /x0 C .Bb C a Df0 /y0 C .Db Ea/f0 D 0: (5.12) is means that if we define the matrix D of Eq. (5.9), Eq. (5.12) states that the point .x0 ; y0 / is on the quadratic curve .x; Dx/ = 0. Since .x0 ; y0 / is a point on the ellipse .x; Qx/ = 0, we can compute it as the intersection of the two quadratic curves by Procedure 5.1, which applies even if .x; Dx/ = 0 does not represent an ellipse. e above procedure is an analytical computation, but we can also compute the same result iteratively by modifying the geometric distance minimization procedure for ellipse fitting described in Chapter 3 (,! Problem 5.6).

5.3

PERSPECTIVE PROJECTION AND CAMERA ROTATION

For analyzing the relationship between a circle in the scene and its image, we need to consider the camera imaging geometry. It is modeled by perspective projection as idealized in Fig. 5.3. (X, Y, Z)

X x

Z (x, y) o O

y

f Y

Figure 5.3: Idealized perspective projection.

5.3. PERSPECTIVE PROJECTION AND CAMERA ROTATION

41

(X, Y, Z) R (x, y) (x', y')

f O

Figure 5.4: If the camera is rotated by R around its viewpoint O , the scene appears to rotate relative to the camera by R

1

(= R> ). As a result, the ray direction of .x; y/ is rotated to the ray direction of .x 0 ; y 0 /.

We regard the origin O of the XY Z coordinate system as the center of lens, which we call the viewpoint , and the Z -axis as the optical axis, i.e., the axis of symmetry through the lens. We identify the image plane as Z = f , i.e., the plane orthogonal to the optical axis apart from the view point by distance f , which is commonly called the focal length. e intersection of the image plane with the optical axis is called the principal point . We define an xy image coordinate system such that its origin o is at the principal point and the x - and the y -axes are parallel to the X - and Y -axes, respectively. us, a point .X; Y; Z/ in the scene is projected to a position .x; y/ in the image given by X Y xDf ; yDf : (5.13) Z Z Suppose the camera is rotated around the viewpoint O . As shown in Fig. 5.3, the vector .x; y; f /> points to the point .X; Y; Z/ in the scene that we are viewing. If the camera is rotated by R, the scene appears to rotate relative to the camera in the opposite sense by R 1 (= R> ). It follows that the direction of the vector .x; y; f /> rotates to the direction of R> .x; y; f /> (Fig. 5.4), which is the ray direction .x 0 ; y 0 ; f /> of the point .x 0 ; y 0 / we observe after the rotation. Hence, we obtain 1 0 1 10 0 1 0 0 0 1 0 1=f0 0 0 x 1=f0 0 0 x x =f0 @ y 0 =f0 A D @ 0 1=f0 0 A R> @ y A 1=f0 0 A @ y0 A ' @ 0 f f 0 0 1=f 0 0 1=f 1 0 1 0 10 1 0 1 1=f0 0 0 f0 0 0 x=f0 x=f0 D@ 0 1=f0 0 A R> @ 0 f0 0 A @ y=f0 A D H @ y=f0 A ; 0 0 1=f 0 0 f 1 1 (5.14) where

0

1=f0 @ HD 0 0

0 1=f0 0

1 0 0 f0 >@ A R 0 0 1=f 0

0 f0 0

1 0 0 A: f

(5.15)

42

5. ELLIPSE-BASED 3-D COMPUTATION

roughout this book, we follow our convention that image coordinates are always normalized by the scaling factor f0 . In general, an image transformation that maps a point .x; y/ to a point .x 0 ; y 0 / is called a homography (or projective transformation) if

x0 ' Hx;

(5.16)

for some nonsingular matrix H , where x and x0 are the vector representations of points .x; y/ and .x 0 ; y 0 / as defined in Eq. (5.2). Evidently, compositions and inverses of homographies are also homographies, and the set of all homographies constitue a group of transformations. From Eq. (5.14), we see that rotation of the camera induces a homography in the image if the image plane is sufficiently large. Lemma 5.3 (Conic mapping by homography) A conic is mapped to a conic by a homography. Proof . Consider a conic .x; Qx/ = 0, which may be an ellipse, a parabola, or a hyperbola. We map it by a homography x0 ' Hx. Since x ' H 1 x0 , we have .H 1 x0 ; QH 1 x/ = .x0 ; H > QH 1 x/ = 0, where H > denotes .H 1 /> (= .H > / 1 ) (,! Problem 5.7). is means the point x0 is on the conic .x0 ; Q0 x0 / = 0 with

Q0 ' H

>

QH

1

Since H is nonsingular, we see that jQ0 j ¤ 0 if jQj ¤ 0.

:

(5.17) 

Consider a circle in the scene. We call the plane on which it lies the supporting plane of the circle.

A circle in the scene is projected to a conic in the image. Proof . Let n be the unit surface normal to the supporting plane of the circle. Suppose we rotate the camera so that its optical axis becomes parallel to n (Fig. 5.5). Now that the supporting plane is front parallel to the image plane, we should observe the circle as a circle in the image if the image plane is sufficiently large. is camera rotation induces a homography, which maps a circle to a general conic. Hence, a circle is generally imaged as a conic by a camera in general position. Proposition 5.4 (Projection of a circle)

n

O

R

Figure 5.5: If we rotate the camera so that the optical axis is perpendicular to the supporting plane, the image of the circle becomes a circle.

5.4. 3-D RECONSTRUCTION OF THE SUPPORTING PLANE

5.4

43

3-D RECONSTRUCTION OF THE SUPPORTING PLANE

Using the camera imaging geometry described above, we can compute from an ellipse image the 3-D position of the supporting plane of the circle. Let n be its unit surface normal, and h its distance from the origin O (= the viewpoint). If we know the radius r of the circle, we can compute n and h as follows. Procedure 5.5

(Supporting plane reconstruction)

1. Transform the coefficient matrix Q of the observed ellipse .x; Qx/ = 0 as follows: 1 0 1 0 1=f0 0 0 1=f0 0 0 QN D @ 0 (5.18) 1=f0 0 A: 1=f0 0 AQ@ 0 0 0 1=f 0 0 1=f en, normalize it to determinant 1:

QN

q 3

QN

:

(5.19)

Nj jQ

N and arrange them in the order 2  1 > 0 2. Compute the eigenvalues 1 , 2 , and 3 of Q > 3 . Let u1 , u2 , and u3 be the corresponding unit eigenvectors.

3. Compute the unit surface normal n to the supporting plane in the form p p n D N Œ 2 1 u2 C 1 3 u3 ;

(5.20)

where N Œ   is normalization to unit norm. 4. Compute the distance h to the supporting plane in the form h D 3=2 1 r: N is defined by Eq. (5.18), we see from Eq. (5.2) that Comments. If the matrix Q 0 1 0 1 A=f02 B=f02 D=f0 f A B .f0 =f /D QN D @ B=f02 C =f02 E=f0 f A ' @ B C .f0 =f /E A : 2 D=f0 f E=f0 f F=f .f0 =f /D .f0 =f /E .f0 =f /2 F

(5.21)

(5.22)

N as AN, BN , ..., it is easy to see that the equation If we write the elements of Q N 2 C 2Bxy N N C Ey/ N C f 2 FN D 0 Ax C CN y 2 C 2f .Dx

(5.23)

44

5. ELLIPSE-BASED 3-D COMPUTATION

N instead of Q means replacing the constant is identical to Eq. (5.1). Namely, using the matrix Q f0 with the focal length f . Suppose the camera is rotated by R. From Eq. (5.15), we see that

H

1

0

1=f0 D@ 0 0

0 1=f0 0

1 0 0 f0 0 AR@ 0 1=f 0

0 f0 0

1 0 0 A; f

(5.24)

which maps the ellipse in the form of Eq. (5.17). Hence, 0

f0 Q0 ' @ 0 0

0

f0 D@ 0 0

0 f0 0

0 f0 0

1 0 1 0 1=f0 0 0 0 A R> @ 0 1=f0 0 A f 0 0 1=f 0 1 0 1=f0 0 0 f0 @ A @ Q R 0 1=f0 0 0 0 0 1=f 0 1 0 1 f0 0 0 0 N @ 0 f0 0 A ; 0 A R> QR 0 0 f f

0 f0 0

1 0 0 A f

(5.25)

N . However, Q N and Q N 0 are both normalized to determinant 1, N 0 ' R> QR and consequently Q and the determinant is unchanged by multiplication of rotation matrices R and R> . Hence, we have N : QN 0 D R> QR

(5.26)

Procedure 5.5 is obtained, using this relationship. e derivation is given in Section 5.7 below, where we first consider the case where the ellipse is in canonical form (Lemma 5.8). en, we rotate the camera so that the ellipse has the canonical form and apply the result (Proposition 5.9). e distance to the supporting plane is determined by using the radius r of the circle we observe. If r is unknown, the supporting plane is determined up to the distance from the origin O . e 3-D position of the circle can be computed by back-projection, i.e., as the intersection of the ray of individual points on the ellipse with the supporting plane (,! Problem 5.8).

5.5

PROJECTED CENTER OF CIRCLE

If we know the unit surface normal n = .ni / to the supporting plane, we can compute the image position of the center .xC ; yC / of the circle, which does not necessarily coincide with the center .xc ; yc / of the observed ellipse given by Eqs. (5.5). Let .x; Qx/ = 0 be the observed ellipse. e procedure is as follows.

5.6. FRONT IMAGE OF THE CIRCLE

Procedure 5.6

45

(Projected center of circle)

1. Compute the following vector m = .mi /: 0 1 f0 0 0 m D @ 0 f0 0 A Q 0 0 f

0

f0 1@ 0 0

0 f0 0

10 1 0 n1 0 A @ n2 A : f n3

(5.27)

2. e projected center .xC ; yC / of the circle is given by xC D f

m1 ; m3

yC D f

m2 : m3

(5.28)

N in Eq. (5.18), the right side of Eq. (5.27) can be written as Comments. In terms of the matrix Q 1 N Q n. is implies from Eq. (5.28) the following relationship: 1 xC n ' QN @ yC A : f 0

(5.29)

e proof of Procedure 5.6 is given in Section 5.7 below, where we first show that Eq. (5.29) holds if the camera is rotated so that the optical axis is perpendicular to the supporting plane (Lemma 5.10). en, we show that the relation holds if the camera is rotated arbitrarily (Proposition 5.11).

5.6

FRONT IMAGE OF THE CIRCLE

If we compute the supporting plane of the circle in the scene, we can generate its image as if seen from the front in such a way that the center of the circle is at the image origin. Note that translation of the image is also a homography. In fact, translation x 0 = x C a and y 0 = y C b by a and b in the x and y directions, respectively, is the following homography: 10 1 1 0 1 0 0 0 1 0 a=f0 x=f0 .x C a/=f0 x =f0 @ y 0 =f0 A D @ .y C b/=f0 A D @ 0 1 b=f0 A @ y=f0 A : (5.30) 0 0 1 1 1 1 Its inverse is obtained by replacing a and b by a and b , respectively. e hypothetical front image of the circle is computed by the following procedure. Procedure 5.7

(Front image of the circle)

1. Compute the projected center .xC ; yC / of the circle by Procedure 5.6.

46

5. ELLIPSE-BASED 3-D COMPUTATION

2. Compute a camera rotation R that makes the supporting plane parallel to the image plane, and determine the corresponding homography matrix H by Eq. (5.15). 3. Compute the projected center .xC0 ; yC0 / of the circle after that camera rotation as follows: 0

1 0 1 xC0 =f0 xC =f0 @ y 0 =f0 A ' H @ yC =f0 A : C 1 1

(5.31)

4. Compute the following homography matrix that translates the point .xC0 ; yC0 / to the image origin: 1 0 1 0 xC =f0 H0 D @ 0 1 (5.32) yC =f0 A : 0 0 1 5. Define a new image buffer, and compute for each pixel .x; y/ of it the point .x; N y/ N that satisfies 1 1 0 0 x=f0 x=f N 0 @ y=f (5.33) N 0 A ' H 1 H0 1 @ y=f0 A : 1 1 en, copy the image value of the pixel .x; N y/ N of the input image to the pixel .x; y/ of the buffer. If the resulting coordinates .x; N y/ N are not integers, its image value is interpolated from surrounding pixels. Comments. e camera rotation that rotates the unit vector n to the optical axis direction is not unique, since there is an indeterminacy of rotations around the optical axis (Fig. 5.5). We can choose any such rotation, but the simplest one is the rotation around an axis perpendicular to both n and the optical axis direction (= the Z -axis) by the angle ˝ made by n and the Z -axis (,! Problem 5.9). Combining it with the translation of Eq. (5.32), we can obtain the homography matrix H0 H , which maps the ellipse to a circle centered at the image origin as if we are viewing the circle from the front. As is well known, in order to generate a new image from a given image by image processing, the transformation equation that maps an old image to a new image is not sufficient. For digital image processing, we first define a new image buffer in which the generated image is to be stored and then give a computational procedure to define the image value at each pixel of this buffer. is is done by first computing the inverse of the image generating transformation and then copying to the buffer the image value of the inversely transformed pixel position. In the above procedure, the image generating transformation is the composite homography H0 H , whose inverse is H 1 H0 1 . e inverse H 1 is given by Eq. (5.24), and the inverse H0 1 is obtained from Eq. (5.32) by changing the signs of xC and yC . When the computed image coordinates are not

5.7. DERIVATIONS

47

integers, we may simply round them to integers, but a more accurate way is to use bilinear interpolation, i.e., proportional allocation in the x and y directions, of the values of the surrounding pixels. (,! Problem 5.10).

5.7

DERIVATIONS

We first consider the 3-D analysis of an ellipse in canonical form, i.e., centered at the image origin o having the major and minor axes in the x and y direction. Lemma 5.8 (Reconstruction in canonical form)

If the projected circle in the image is an ellipse

in the canonical form of x 2 C ˛y 2 D ;

˛  1;

> 0;

the inclination angle  of the supporting plane is given by s s ˛ 1 1 C =f 2 sin  D ˙ ; cos  D ; 2 ˛ C =f ˛ C =f 2

(5.34)

(5.35)

and the distance h to the supporting plane is fr hD p : ˛

(5.36)

N have the following form: Proof . e matrix Q and its normalization Q 1 0 1 0 1 0 0 1 0 0 p A; A;  D .f = ˛ /2=3 : QD@ 0 ˛ QN D  @ 0 ˛ 0 0 0 0

=f02 0 0

=f 2

(5.37)

Since the major axis is along the x -axis, the supporting plane is inclined along the y -axis. If the inclination angle is  , the unit surface normal to the supporting plane is n = .0; sin ; cos  /> with the sign of  indeterminate. If the camera is rotated around the X -axis by  , the supporting plane becomes parallel to the image plane. Such a camera rotation is given by 0 1 1 0 0 RD@ 0 (5.38) cos  sin  A : 0 sin  cos  After the camera rotation, we observe a circle in the form x 2 C .y C c/2 D 2 ;

c  0;

 > 0:

(5.39)

48

5. ELLIPSE-BASED 3-D COMPUTATION

e corresponding normalized matrix is 0 1 1 0 0 A; QN 0 D  0 @ 0 1 c=f 2 2 2 0 c=f .c  /=f

 0 D .f =/2=3 :

Hence, we obtain from Eq. (5.26) the equality 1 0 1 0 1 0 0 1 0 0 A D @ 0 cos  0 @ 0 1 c=f sin  A 2 2 2 0 c=f .c  /=f 0 sin  cos  0 10 1 0 0 1 @ 0 ˛ A@ 0 0 0 0

=f 2 0

1 0 0 cos  sin  A : sin  cos 

(5.40)

(5.41)

p Comparing the .1; 1/ elements on both sides, we see that  =  0 and hence  = ˛ . Comparing the remaining elements, we obtain       cos  sin  ˛ 0 cos  sin  1 c=f : (5.42) D sin  cos  0

=f 2 sin  cos  c=f .c 2 2 /=f 2

is implies that the matrix on the left side has eigenvalues ˛ and =f 2 with corresponding unit eigenvectors .cos ; sin /> and . sin ; cos  /> . Since the trace and the determinant are invariant under the congruent transformation of Eq. (5.42), which is also the singular value decomposition, we obtain 1C

c2

2 f

2



; f2

c2

2 f

2

c2 D f2

˛ : f2

(5.43)

p p Hence,  = ˛ and c = .˛ 1/. C f 2 /. Since .cos ; sin /> is the eigenvector for eigenvalue ˛ , the following holds:      cos  cos  1 c=f : (5.44) D ˛ c=f .c 2 2 /=f 2 sin  sin 

us, we obtain ˛ 1 tan  D D c=f

s

˛ 1 ; 1 C =f 2

(5.45)

from which we obtain Eq. (5.35). Comparing the radius  of the circle on the image plane, which is in distance f from the viewpoint O (Fig. 5.3), and the true radius r on the supporting plane, the distance h to the supporting plane is h = f r=. Hence, we obtain Eq. (5.36). 

5.7. DERIVATIONS

49

Next, we consider the general case. We rotate the camera so that the ellipse has the canonical form. is is done by first rotating the camera so that the optical axis passes through the center of the ellipse and next rotating the camera around the optical axis so that the major and minor axes are in the x and y direction. en, we apply Lemma 5.8. Proposition 5.9 (Reconstruction in general position)

Equations (5.20) and (5.21) hold in the

general case. Proof . If the observed ellipse is not in the from of Eq. (5.34), we rotate the camera by some rotation R so that it has the form of Eq. (5.34). is process corresponds to the following diagN in the form of Eq. (5.26): onalization of the normalized matrix Q 0

1 N D@ 0 R> QR 0

0 2 0

1 0 0 A: 3

(5.46)

N is normalized to 1, we have 1 2 3 = 1. We are assuming that Since the determinant of Q the three eigenvalues are arranged in the order 2  1 > 0 > 3 , so Eq. (5.23) implies that this ellipse has the form 2 3 x2 C y2 D f 2 : (5.47) 1 1

Comparing this with Eq. (5.34), we see that ˛ = 2 =1 ( 1) and = f 2 3 =1 (> 0). us, the inclination angle  is given by Eq. (5.35). e y - and the z -axes after the camera rotation are, respectively, in the directions u2 and u3 , which are the second and the third columns of the matrix R in Eq. (5.46). Hence, the unit surface normal n to the supporting plane is given by n = u2 sin  C u3 cos  . Rewriting the ˛ and in Eqs. (5.35) and (5.36) in terms of 1 , 2 , and 3 , we obtain Eq. (5.21).  We now consider the projection of the center of the circle in the scene. We first consider the front parallel case and then the general case by applying the camera rotation homography. Lemma 5.10 (Front parallel projection)

Equation (5.29) holds when the supporting plane of the

circle is parallel to the image plane. Proof . If the supporting plane is parallel to the image plane, the circle is imaged in the form of N in .x xc /2 C .y yc /2 = 2 for some .xc ; yc / and . is circle corresponds to the matrix Q the form 0 1 1 0 xc =f A: QN ' @ (5.48) 0 1 yc =f 2 2 2 2 xc =f yc =f .xc C yc  /=f

50

5. ELLIPSE-BASED 3-D COMPUTATION

Since .xc ; yc / = .xC ; yC /, i.e., the center of the circle in the scene is projected to the center of the imaged circle, the right side of Eq. (5.29) is .0; 0; 2 =f /> , which is a vector orthogonal to the image plane.  Proposition 5.11 (General projection)

Equation (5.29) holds if the camera is arbitrarily rotated

around the viewpoint. Proof . Suppose we observe a circle on the supporting plane with unit surface normal n. If the camera is rotated by R around the viewpoint, the unit surface normal is n0 = R> n relative to the camera after the rotation. Suppose the projected center .xC ; yC / moves to .xC0 ; yC0 / after the camera rotation. eir ray directions .xC ; yC ; f /> and .xC0 ; yC0 ; f /> are related by 0 0 1 0 1 xC xC @ y 0 A ' R> @ yC A (5.49) C f f (see Fig. 5.4). Multiplying Eq. (5.29) by R> on both sides and noting Eq. (5.26), we obtain 1 0 0 1 1 0 0 xC xC xC 0@ > N >@ 0 > > N @ N A A 'Q (5.50) ' .R QR/R n D R n ' R Q yC yC0 A ; yC f f f which means that Eq. (5.29) also holds after the camera rotation.

5.8



SUPPLEMENTAL NOTE

As described in most geometry textbooks, the figures that Eq. (5.1) represents include: (i) a real quadratic curve (ellipse, hyperbola, or parabola); (ii) an imaginary ellipse; (iii) two real lines that are intersecting or parallel; (iv) a real line; and (v) imaginary two lines that are intersecting at a real point or parallel. From the standpoint of projective geometry, however, two ellipses always intersect at four points, which may be real or imaginary. Similarly, an ellipse and a line always intersect at two points; a tangent point is regarded as a degenerate double intersection, and non-existence of intersections is interpreted to be intersecting at imaginary points. A well-known textbook on classical projective geometry is Semple and Roth [1949]. e analysis of general algebraic curves in the complex number domain is called algebraic geometry; see the classical textbook of Semple and Kneebone [1952].

5.8. SUPPLEMENTAL NOTE Poo

51

l oo

P A

P1

l l1

C

P2 l2

B (a)

(b)

Figure 5.6: (a) e pole P and the polar l of an ellipse. (b) e ellipse center, tangent lines, vanishing point, and vanishing line.

e expression of the tangent line in Eq. (5.8) holds for a general conic. If the point .x0 ; y0 / is not on the conic of Eq. (5.1), then Eq. (5.8), or n ' Qx0 , defines a line, called the polar of the conic of Eq. (5.1) at .x0 ; y0 /; the point .x0 ; y0 / is called the pole of the line n1 x C n2 y C n1 f0 = 0. For a point .x0 ; y0 / outside an ellipse, there exist two lines passing through it and tangent to the ellipse (Fig. 5.6a). e line passing through the two tangent points is the polar of .x0 ; y0 /. e pole is on the polar, if and only if the pole is on the conic and the polar is the tangent line there. e tangent line (and the polar in general) is obtained by the polar decomposition of the conic, the process of replacing x 2 , xy , y 2 , x , and y of the conic equation by x0 x , .x0 y C xy0 /=2, y0 y , .x C x0 /=2, and .y C y0 /=2, respectively. e poles and polars of an ellipse are closely related to the 3-D interpretation of the ellipse. Suppose an ellipse in the image is an oblique view of an ellipse (or a circle) in scene. Let C be the true center of the ellipse, where by “true” we mean the projection of 3-D properties. Consider an arbitrary chord P1 P2 passing through C (Fig. 5.6b). Let l1 and l2 be the tangent line to the ellipse at P1 and P2 , respectively. ese are parallel tangent lines to the “true” ellipse. eir intersection P1 in the image is the vanishing point , which is infinitely far away on the “true” supporting plane. e chord P1 P2 is arbitrary, so each choice of it defines its vanishing point. All such vanishing points are on a line called the vanishing line, which is the “horizon” or the infinitely far away boundary of the supporting plane. In the image, the vanishing line l1 is the polar of the center C , which is the pole of the vanishing line l1 . Procedure 5.6 is a consequence of this interpretation. Let A and B be the intersection of the ellipse with the line passing through the center C and the vanishing point P1 . en, the set ŒP1 , A, C , B is an harmonic range, meaning that it has cross ratio 1; the cross-ratio of four distinct collinear (i.e., on a common line) points A, B , C , D in that order is defined by  AC AD ; (5.51) BC BD where AC , BD , etc. are signed distance along an arbitrarily fixed direction (hence CA = AC , etc.). A similar relation holds for the vanishing line l1 , the tangent lines l1 and l2 , and the axis P1 C , and the set Œl1 , l1 , P1 C , l2  is a harmonic pencil , meaning that it has cross-ratio 1; the

52

5. ELLIPSE-BASED 3-D COMPUTATION

cross-ratio of four distinct concurrent (i.e., having a common intersection) lines is defined by the cross-ratio of their intersections with an arbitrary line (the value does not depend on that line as long as it intersects at four distinct points). For more details, see Kanatani [1993a]. Procedure 5.5 for 3-D reconstruction of a circle was first presented by Forsyth et al. [1991]. e result was extended to 3-D reconstruction of an ellipse by Kanatani and Liu [1993]. ese analyses are based on the image transformation induced by camera rotation, for which see Kanatani [1990, 1993a] for the details.

PROBLEMS 5.1.

Show that when jQj ¤ 0, the equation .x; Qx/ = 0 represents an ellipse (including an imaginary ellipse) if and only if AC B 2 > 0: (5.52) Also, show that AC bola, respectively.

5.2.

B 2 = 0 and AC

B 2 < 0 correspond to a parabola and a hyper-

(1) Show that for an arbitrary square matrix A, the identity jI C Aj D 3 C 2 trŒA C trŒAŽ  C jAj

(5.53)

holds, where trŒ   denotes the matrix trace and AŽ is the cofactor matrix of A, i.e., its .i; j / equals the determinant of the matrix obtained from A by removing the row and the column containing Aj i and multiplying it by . 1/iCj : 0

A22 A33 AŽ D @ A31 A23 A21 A32

A32 A23 A21 A33 A31 A22

A32 A13 A11 A33 A31 A12

A12 A33 A31 A13 A11 A32

A12 A23 A12 A23 A11 A22

1 A22 A13 A21 A13 A : (5.54) A21 A12

(2) Show that the following identity holds for arbitrary square matrices A and B : jA C B j D 3 jAj C 2 trŒAŽ B  C trŒAB Ž  C jB j:

5.3.

Show that if jQj = 0 and B 2

AC > 0, Eq. (5.1) defines the following two lines:

p B2 p Ax C .B C B 2 Ax C .B

5.4.

(5.55)

 AC /y C D  AC /y C D C

BD AE  f0 D 0; p B 2 AC  BD AE f0 D 0: p B 2 AC

(5.56)

Describe the procedure for computing the two intersections .x1 ; y1 / and .x2 ; y2 / of the ellipse of Eq. (5.1) with the line n1 x C n2 y C n3 f0 = 1.

5.8. SUPPLEMENTAL NOTE

53

5.5.

Show that the tangent line n1 x C n2 y C n3 f0 = 0 to the quadratic curve of Eq. (5.1) at .x0 ; y0 / is given by Eq. (5.8).

5.6.

Describe the procedure for iteratively computing the foot of the perpendicular to an ellipse drawn from a given point by modifying the ellipse fitting procedure by geometric distance minimization given in Chapter 3.

5.7.

Show that .A 1 /> = .A> /

5.8.

Show that a point .x; y/ in the image is back-projected onto a plane with unit surface normal n and distance h from the origin O in the following form: 1 0 0 1 X x h @ Y AD @ y A: (5.57) n1 x C n2 y C n3 f Z f

5.9.

Find a rotation matrix R that rotates a unit vector n into the direction of the Z -axis.

1

holds for an arbitrary nonsingular matrix A.

5.10. Show how the value of the pixel with non-integer image coordinates .x; y/ is determined from surrounding pixels by bilinear interpolation.

55

CHAPTER

6

Experiments and Examples We show some simulation examples of different ellipse fitting methods described in Chapters 1–3 and do statistical accuracy evaluation, repeating a large number of trials using different artificial noise. We also see some real image examples. Next, we show how the RANSAC procedure described in Chapter 4 can robustly fit an ellipse to edge segments that contain non-elliptic parts. We then see how the ellipse-specific methods described in Chapter 4 enforce the fit to be an ellipse in the presence of large noise and conclude that they do not have much practical value. Finally, we show some application examples of the ellipse-based 3-D computation described in Chapter 5.

6.1

ELLIPSE FITTING EXAMPLES

We first show some simulated ellipse fitting examples to see how in the presence of noise different methods described in Chapters 1–3 can produce different ellipses for the same point sequence. We also compare the number of iterations for different iterative methods. We define 30 equidistant points on the ellipse shown in Fig. 6.1a. e major and minor axis are set to 100 and 50 pixels, respectively. We add independent Gaussian noise of mean 0 and standard deviation  (pixels) to the x and y coordinates of each point and fit an ellipse by the following methods: 1. least squares (LS) (Procedure 1.1), 2. iterative reweight (Procedure 2.1), 3. Taubin method (,! Problem 2.1),

1

1

2 7 5

(a)

2

3

8

5

6

3 (b)

4

4

6 7

8

(c)

Figure 6.1: (a) 30 points on an ellipse. (b), (c) Fitted ellipses for different noise instances of  = 0.5 (pixels): (1) LS, (2) iterative reweight, (3) Taubin, (4) renormalization, (5) HyperLS, (6) hyper-renormalization, (7) FMS, (8) hyperaccurate correction. e dotted lines indicate the true shape.

56

6. EXPERIMENTS AND EXAMPLES

4. renormalization (Procedure 2.2), 5. HyperLS (,! Problem 2.2), 6. hyper-renormalization (Procedure 2.3), 7. FNS (Procedure 3.1), and 8. hyperaccurate correction (Procedure 3.3). Since the solution of geometric distance minimization (Procedure 3.2) is nearly identical to the FNS solution, it is not picked out as a different method. Figure 6.1b,c shows fitted ellipses for different noise instances of  = 0.5 (pixels); different ellipses result for different noise, even though the noise level is the same. e dotted lines indicate the true shape. We see that least squares and iterative reweight have large bias, producing much smaller ellipses than the true shape. e closest ellipse is given by hyper-renormalization in Fig. 6.1b and by hyperaccurate correction in Fig. 6.1c. e number of iterations of these methods is: method # of iterations

(b) (c)

2

4

6

7/8

4 4

4 4

4 4

9 8

Note that least squares, the Taubin method, and HyperLS are analytically computed without iterations. Also, hyperaccurate correction is an analytical procedure, so it does not add any iterations. We see that FNS and its hyperaccurate correction require about twice as many iterations as iterative reweight, renormalization, and hyper-renormalization.

6.2

STATISTICAL ACCURACY COMPARISON

Since every method sometimes returns a good fit and sometimes a poor fit, we need for fair comparison statistical evaluation over a large number of repeated trials, each time using different noise. From Fig. 6.1, we see that all the ellipses fit fairly well to the data points, meaning that not much difference exists among their geometric distances, i.e., the sums of the square distances to the data points from the fitted ellipse. Rather, the deviation is large in the part where no data points exists. is implies that the geometric distance is not a good measure of ellipse fitting. On the other hand, θ expresses the coefficients of the ellipse equation, so the error θ evaluates how the ellipse equation, i.e., the ellipse itself, differs. We are also interested in the accuracy of the center, the radii, and the area of the fitted ellipse specified by θ as well as the accuracy of its 3-D interpretation computed from θ (see Chapter 5). In view of this, we evaluate the error in θ . Since the computed θ and the true θN are both normalized to unit norm, we measure their discrepancy θ by the orthogonal component to θN (Fig. 6.2), ? θ D PθN θ ;

PθN  I

θN θN > ;

(6.1)

6.2. STATISTICAL ACCURACY COMPARISON

57

Δθ θ

θ O

Figure 6.2: e true value θN , the computed value θ, and its orthogonal component ? θ of θN .

where PθN is the projection matrix onto the plane perpendicular to θN (,! Problem 6.1). We generate M independent noise instances and evaluate the bias B and the root-mean-square (RMS) error D defined by v u M M

1 X

u 1 X

? .a/ DDt BD  θ ; k? θ .a/ k2 ; (6.2) M aD1 M aD1 where θ .a/ is the solution in the ath trial. e KCR lower bound, which we will discuss in Chapter 10, implies that the RMS error D is bounded by q  N ; Dp trŒM (6.3) N N is the true value of M in the second row where trŒ   denotes the matrix trace. e matrix M N is the pseudoinverse of M N , which has rank 5 with the true θN as its of Eq. (2.18), and M 0.3

0.1

0.08

2

8

0.2 7

0.06

2

1 5

6 4

6

5

1

0.04

3

0.1

7 4 0.02

3

0

0.2

8

0.4

(a)

0.6

σ

0.8

0

0.2

0.4

0.6

σ

0.8

(b)

Figure 6.3: e bias (a) and the RMS error (b) of the fitted ellipse for the standard deviation  of the noise added to the data in Fig. 6.1a over 10,000 independent trials. (1) LS, (2) iterative reweight, (3) Taubin, (4) renormalization, (5) HyperLS, (6) hyper-renormalization, (7) FNS, (8) hyperaccurate correction. e dotted line in (b) indicates the KCR lower bound.

58

6. EXPERIMENTS AND EXAMPLES 0.02

0.1 2

0.08

2

6

1

4

0.06

0.01

3

8

7

5

1 0.04

7 4 3

8

5 0

0.1

0.2

0.3

0.02

6

σ

0.4

0

0.1

(a)

0.2

0.3

σ

0.4

(b)

Figure 6.4: (a) Enlargement of Fig. 6.3a. (b) Enlargement of Fig. 6.3b.

null vector (we discuss these in more details in Chapters 8–10). Figure 6.3a,b plots the bias B and the RMS error D , respectively, over 10,000 independent trials for each  . e dotted line in Fig. 6.3b is the KCR lower bound of Eq. (6.3). e interrupted plots for iterative reweight, FNS, and hyperaccurate correction indicate that the iterations did not converge beyond that noise level. Our convergence criterion is kθ θ0 k < 10 6 for the current value θ and the value θ0 in the preceding iteration; their signs are adjusted before subtraction. If this criterion is not satisfied after 100 iterations, we stopped. For each  , we regarded the iterations as not convergent if any among the 10,000 trials did not converge. We see from Fig. 6.3a that least squares and iterative reweight have very large bias, while the bias of the Taubin method and renormalization is very small (pay attention to the scale of the vertical axis; the unit is pixels). e bias of HyperLS, hyper-renormalization, and hyperaccurate correction is still smaller and is even smaller than FNS. eoretically, their bias is O. 4 /, so their plots touch the horizontal axis near   0 with a fourthorder contact. In contrast, the bias of the Taubin method, renormalization, and FNS is O. 2 /, so their plots make a second-order contact. It can be shown that the leading covariance is common to iterative reweight, renormalization, and hyper-renormalization (we will discuss this in Chapters 8 and 9). Hence, the RMS error directly reflects the influence of the bias, as shown in Fig. 6.3b. Figure 6.4 enlarges Fig. 6.3 for the small  part. A close examination reveals that hyper-renormalization outperforms FNS. e highest accuracy is achieved, although the difference is very small, by hyperaccurate correction. However, it first requires the FNS solution, but the FNS iterations may not always converge above a certain noise level, as shown in Fig. 6.3. On the other hand, hyper-renormalization is robust to noise in the sense that it requires a much smaller number of iterations, as demonstrated in the example of the preceding section. is is because the initial solution is HyperLS, which is itself highly accurate as seen from Figs. 6.3 and 6.4. For this reason, it is concluded that hyperrenormalization is the best method for practical computations in terms of accuracy and noise robustness.

6.3. REAL IMAGE EXAMPLES 1

7

6 8

1

59

3 5

2 4

(a)

(b)

Figure 6.5: (a) An edge image of a scene with a circular object. An ellipse is fitted to the 160 edge points indicated. (b) Fitted ellipses superimposed on the original image. e occluded part is artificially composed for visual ease. (1) LS, (2) iterative reweight, (3) Taubin method, (4) renormalization, (5) HyperLS, (6) hyperrenormalization, (7) FNS, (8) hyperaccurate correction.

6.3

REAL IMAGE EXAMPLES 1

Figure 6.5a shows an edge image of a scene with a circular object. We fitted an ellipse to the 160 edge points indicated there, using different methods. Figure 6.5b shows the fitted ellipses superimposed on the original image, where the occluded part is artificially composed for visual ease. We see that least squares and iterative reweight produce much smaller ellipses than the true shape as in Fig. 6.1b,c. All other fits are very close to the true ellipse, and FNS gives the best fit in this particular instance. e number of iterations before convergence was: method # of iterations

2 4

4 3

6 3

7/8 6

Again, FNS and hyperaccurate correction required about twice as many iterations as other methods.

6.4

ROBUST FITTING

We now see how the RANSAC procedure described in Chapter 4 can robustly fit an ellipse to edge segments that are not necessarily elliptic arcs. e upper row of Fig. 6.6 shows examples of edge segments that do not entirely consist of elliptic arcs. Regarding those non-elliptic parts as outliers, we fit an ellipse to inlier arcs, using RANSAC. e lower row of Fig. 6.6 depicts fitted ellipses. We see that the segments that constitute an ellipse are automatically selected.

6.5

ELLIPSE-SPECIFIC METHODS

We test the ellipse-specific methods discussed in Chapter 4, i.e., the methods that enforce the fit to be an ellipse when standard methods produce a hyperbola in the presence of large noise. We compare the following four methods:

60

6. EXPERIMENTS AND EXAMPLES

Figure 6.6: Upper row: Input edge segments. Lower row: Ellipses fitted by RANSAC.

(a)

(b)

(c)

(d)

Figure 6.7: Point sequences for our experiment. e number of points is 30, 30, 15, and 30 and the average distance between neighboring points is 2.96, 3.31, 2.72, and 5.72 pixels for (a), (b), (c), and (d), respectively.

1. the method of Fitzgibbon et al. [1999] (Procedure 4.2), 2. hyper-renormalization (Procedure 2.3), 3. the method of Szpak et al. [2015], and 4. random sampling (Procedure 4.3). As mentioned in the Supplemental Note in Chapter 4 (page 34), the method of Szpak et al. minimizes the Sampson error with a penalty term that diverges as the solution approach a hyperbola. We generated four-point sequences shown in Fig. 6.7. e points have equal arc length separations on each ellipse. Random Gaussian noise of mean 0 and standard deviation  is added independently to the x and y coordinates of each point, and an ellipse is fitted. Figure 6.8 shows fitting examples for a particular noise when hyper-renormalization returns a hyperbola. e relative noise level at which that occurred is indicated in the caption, where we define it to be the noise level  divided by the average distance between neighboring points. What is conspicuous is that the method of Fitzgibbon et al. always fits a very small and flat ellipse, just as

6.6. REAL IMAGE EXAMPLES 2

61

2

1

2

1

3

4 4 3

(a)

(b)

2

2

4

1

1

3 4

3

(c)

(d)

Figure 6.8: Fitting examples for a particular noise when hyper-renormalization returns a hyperbola. (1) Fitzgibbon et al., (2) hyper-renormalization, (3) Szpak et al., (4) random sampling. e dotted lines indicate the true shapes. e relative noise level  is 0.169, 0.151, 0.092, and 0.087 for (a), (b), (c), and (d), respectively.

least squares and iterative reweight in Sections 6.1 and 6.3. In contrast, the method of Szpak et al. fits a large ellipse close to the fitted hyperbola. Random sampling returns an ellipse is in between. However, the fact that hyper-renormalization returns a parabola indicates that the segment does not have sufficient information for fitting an ellipse. Even though random sampling returns a relatively reasonable ellipse, we cannot expect to obtain a good fit by forcing the solution to be an ellipse using ellipse-specific methods.

6.6

REAL IMAGE EXAMPLES 2

Figure 6.9a shows edge images of real scenes. We fitted an ellipse to the edge points indicated there (200 pixels above and 140 pixels below) by different methods. In Fig. 6.9b, fitted ellipses are superimposed on the original images. In the above example, hyper-renormalization returns an ellipse, in which case random sampling returns the same ellipse. e fit is very close to the true shape as expected. e method of Szpak et al. [2015] also fits an ellipse close to it. However, the method of Fitzgibbon et al. [1999] fits a small and flat ellipse. In the lower example, hyperrenormalization returns a hyperbola, and the method of Szpak et al. fits a very large ellipse close

62

6. EXPERIMENTS AND EXAMPLES

2 4

1

3

3 4 1 2

(a)

(b)

Figure 6.9: (a): Edges extracted from real images. We used 200 pixels above and 140 pixels below indicated there for fitting an ellipse. (b) Fitted ellipses superimposed on the original image. (1) Fitzgibbon et al., (2) hyper-renormalization, (3) Szpak et al., (4) random sampling.

to that hyperbola. Again, the method of Fitzgibbon et al. fits a very small and flat ellipse. e random sampling fit is somewhat in between. Generally speaking, if hyper-renormalization fits an ellipse, we should adopt it as the best fit. If a hyperbola is returned, it indicates that the data points do not have sufficient information for fitting an ellipse. Hence, the effort to fit an ellipse to them does not make sense; forcing the solution to be an ellipse, using ellipse-specific method, does not produce any meaningful result. If, for some reason, we want an ellipse by all means, perhaps random sampling may give the least intolerable result. As mentioned in Section 4.2, pathological cases in which any five points define a hyperbola could be conceivable in theory. In practice, we can ignore such points as incapable of approximating an ellipse. In fact, it is even not easy to find an elliptic segment obtained by applying an edge filter to a real image such that a standard method fits a hyperbola, unless parts of it are artificially cut out (Fig. 6.9(b) was created in that way), much less a segment incapable of approximating an ellipse (we have never encountered one).

6.7

ELLIPSE-BASED 3-D COMPUTATION EXAMPLES

We show some applications of the ellipse-based 3-D computation techniques described in Chapter 5.

6.7. ELLIPSE-BASED 3-D COMPUTATION EXAMPLES

(a)

63

(b)

Figure 6.10: (a) An image of a circle on a planar surface. (b) A 3-D graphics object is displayed in a position compatible to the surface in (a).

(a)

(b)

Figure 6.11: (a) An image of a planar board, on which a circular mark (encircled by a white line) is painted. (b) A front image of the board generated by computing its relative orientation to the camera by Procedure 5.5, using the circular mark, and virtually rotating the camera by Procedure 5.7.

Figure 6.10a is an image of a circle drawn on a planar surface. Detecting the circle boundary by edge detection and fitting an ellipse to it, we can compute the position and orientation of the surface relative to the camera by Procedure 5.5. Using that knowledge, Fig. 6.10b displays a 3-D graphics object placed as if it were placed on the planar surface. Figure 6.11a is an outdoor scene of a planar board, on which a circular mark (encircled by a white line in the figure) is painted. Detecting its boundary by edge detection and fitting an ellipse to it, we can compute the position and orientation of the board relative to the camera by

64

6. EXPERIMENTS AND EXAMPLES

Procedure 5.5. Using that knowledge, we can generate its front image by virtual camera rotation, using Procedure 5.7; for visual ease, the image is appropriately translated.

6.8

SUPPLEMENTAL NOTE

Possible non-convergence is the main disadvantage of iterative methods, and efforts have been made to guarantee convergence. For minimization search, however, non-convergence is in reality an indication that the cost function does not have a clear minimum. If the FNS for Sampson error minimization does not converge, for example, we tend to think that we should use instead, say, the Levenberg-Marquadt method [Press et al., 2007], which is guaranteed to reduce the cost at each step. According to the authors’ experiences, however, if FNS does not converge in the presence of large noise, we do not obtain a solution in most cases even by the Levenberg-Marquadt method, because convergence is too slow. For example, although the cost is constantly decreasing by a tiny amount at each step, the step size, which is also very small, does not reduce much, as if always proceeding at a snail’s pace after hundreds and thousands of steps. In such a case, we have observed that the FNS iterations oscillate around some value back and forth indefinitely. Such a phenomenon is not the defect of the algorithm but rather the property of the problem itself. If the cost function has a very flat bottom without a clear minimum, any iterative search has a problem. If that is the case, we should use a non-iterative algebraic method and accept the value as an approximation to the true minimum. One interesting issue of HyperLS, hyper-renormalization, and hyperaccurate correction is the effect of the vector e defined by Eq. (2.12). According to the authors’ observation, letting e = 0 did not cause differences in that the plots in Figs. 6.3 and 6.4 did not show any visible change. We can see from Fig. 6.6 that using RANSAC elliptic arcs are automatically selected in the presence of non-elliptic segments, i.e., outliers. However, this does not always work when nonelliptic segments are dominant. e limitation of RANSAC is that it only deals with individual points without considering the fact that inliers and outliers are both contiguous segments. e RANSAC principle is the same if the data points are continuously connected or not. In view of this, various techniques have been proposed for classifying edge segments into elliptic and non-elliptic arcs, e.g., see Masuzaki et al. [2015].

PROBLEMS 6.1.

Show that the projection of a vector v onto a plane with unit surface normal u is given by Pu v (Fig. 6.12), where Pu is the projection matrix given by

Pu  I 6.2.

uu> :

(6.4)

e projection matrix Pu of Eq. (6.4) is symmetric by definition. Show that it is also idempotent , i.e., Pu2 D Pu ; (6.5)

6.8. SUPPLEMENTAL NOTE

65

v (u, v) u

θ Pu v

Figure 6.12: Projection Pu v of vector v onto the plane with unit surface normal u.

which states that once projected, things remain there if projected again, as geometric interpretation implies.

67

CHAPTER

7

Extension and Generalization Study of ellipse fitting is important not merely for extracting elliptic shapes in the image. It is also a prototype of many geometric estimation problems for computer vision applications. To illustrate this, we show two typical problems that have the same mathematical structure as ellipse fitting: computing the fundamental matrix from two images and computing the homography between two planar surface images. ey are both themselves indispensable tasks for 3-D scene analysis by computer vision. We show how they are computed by extending and generalizing the ellipse fitting procedure.

7.1

FUNDAMENTAL MATRIX COMPUTATION

7.1.1 FORMULATION Suppose we take images of the same scene using two cameras. If a particular point in the scene is imaged at .x; y/ by one camera and at .x 0 ; y 0 / by the other, it can be shown that the two points satisfy 11 1 0 0 x =f0 x=f0 @@ y=f0 A ; F @ y 0 =f0 AA D 0; 1 1 00

(7.1)

where f0 is a constant for adjusting the scale, just as in the ellipse fitting case. e 3  3 matrix F is called the fundamental matrix. It is determined solely by the relative position of the two cameras and their internal parameters, such as the focal length, independent of the scene content. Equation (7.1) is called the epipolar constraint or the epipolar equation. Since any nonzero scalar multiple of the fundamental matrix F defines the same epipolar equation, we normalize it to unit matrix norm: 0 1 s X kF k @ Fij2 A D 1: (7.2) i;j D1;3

If we define 9-D vectors

68

7. EXTENSION AND GENERALIZATION

(x α', y α')

(x α , yα )

Figure 7.1: Computing the fundamental matrix from noisy point correspondences.

0

B B B B B B B ξDB B B B B B @

xx 0 xy 0 f0 x yx 0 yy 0 f0 y f0 x 0 f0 y 0 f02

1

C C C C C C C C; C C C C C A

0

B B B B B B B θDB B B B B B @

F11 F12 F13 F21 F22 F23 F31 F32 F33

1

C C C C C C C C; C C C C C A

(7.3)

it is easy to see that the left side of Eq. (7.1) is .ξ ; θ /=f02 . Hence, the epipolar equation of Eq. (7.1) is written in the form .ξ ; θ / D 0: (7.4) e vector θ has scale indeterminacy, and Eq. (7.2) is equivalent to normalizing θ to unit norm: kθ k = 1. us, the epipolar equation has the same form as the ellipse equation of Eq. (1.9). Computing the fundamental matrix F from point correspondences over two image is the first step of many computer vision applications. In fact, from the computed F , we can determine the relative position and orientation of the two cameras and their focal lengths. Finding a matrix F that satisfies Eq. (7.1) for corresponding points .x˛ ; y˛ / and .x˛0 ; y˛0 /, ˛ = 1, ..., N , in the presence of noise (Fig. 7.1) can be mathematically formulated as finding a unit vector θ such that .ξ˛ ; θ /  0;

(7.5)

˛ D 1; :::; N;

where ξ˛ is the value of the 9-D vector ξ obtained by replacing x , y , x 0 , and y 0 in Eq. (7.3) by x˛ , y˛ , x˛0 , and y˛0 , respectively. We regard the data x˛ , y˛ , x˛0 , and y˛0 as the corrupted values of their true values xN ˛ , yN˛ , xN ˛0 , and yN˛0 by noise terms x˛ , y˛ , x˛0 , and y˛0 , respectively, and write x˛ D xN ˛ C x˛ ;

y˛ D yN˛ C y˛ ;

x˛0 D xN ˛0 C x˛0 ;

y˛0 D yN˛0 C y˛0 :

(7.6)

Substituting these into ξ˛ obtained from Eq. (7.3), we obtain

ξ˛ D ξN˛ C 1 ξ˛ C 2 ξ˛ ;

(7.7)

7.1. FUNDAMENTAL MATRIX COMPUTATION

69

where ξN˛ is the value of ξ˛ obtained by replacing x˛ , y˛ , x˛0 , and y˛0 in its components by their true values xN ˛ , yN˛ , xN ˛0 , and yN˛0 , respectively, while 1 ξ˛ and 2 ξ˛ are, respectively, the first- and the second-order noise terms. After expansion, we obtain 1 0 0 0 1 x˛ x˛0 0 xN ˛ x˛ C xN ˛ x˛ B x˛ y 0 C ˛ C B B yN 0 x˛ C xN ˛ y 0 C C B ˛ B ˛ C 0 C B C B f0 x˛ B C B y x 0 C C B 0 B ˛ ˛ C C B xN y C yN˛ x˛0 C B 1 ξ˛ D B ˛ ˛ 2 ξ˛ D B y˛ y˛0 C : (7.8) C; B f0 y˛ C B C C B C B 0 C C B B f0 x˛0 C B C B 0 0 B C @ A f0 y˛ @ A 0 0 0 We regard x˛ and y˛ as random variables and define the covariance matrix of ξ˛ by V Œξ˛  D EŒ1 ξ˛ 1 ξ˛> ;

(7.9)

where EŒ   denotes expectation over their distribution. If x˛ , y˛ , x˛0 , and y˛0 are subject to an independent Gaussian distribution of mean 0 and variance  2 , we have EŒx˛  D EŒy˛  D EŒx˛0  D EŒy˛0  D 0; 2

2

EŒx˛2  D EŒy˛2  D EŒx˛0  D EŒy˛0  D  2 ;

EŒx˛ y˛  D EŒx˛0 y˛0  D EŒx˛ y˛0  D EŒx˛0 y˛  D 0:

(7.10)

Hence, we can write from Eq. (7.8) the covariance matrix of Eq. (7.9) in the form V Œξ˛  D  2 V0 Œξ˛ ;

(7.11)

where we have taken out the common multiplier  2 and written the remaining part as V0 Œξ˛ . is matrix, which we call the normalized covariance matrix as in ellipse fitting, has the form 0 2 1 xN ˛ C xN ˛02 xN ˛0 yN˛0 f0 xN ˛0 xN ˛ yN˛ 0 0 f0 xN ˛ 0 0 B xN 0 yN 0 xN ˛2 C yN˛02 f0 yN˛0 0 xN ˛ yN˛ 0 0 f0 xN ˛ 0 C ˛ ˛ B C B f xN 0 f0 yN˛0 f02 0 0 0 0 0 0 C 0 ˛ B C B xN yN 0 0 yN˛2 C xN ˛02 xN ˛0 yN˛0 f0 xN ˛0 f0 yN˛ 0 0 C B C ˛ ˛ B C V0 Œξ˛  D B 0 xN ˛ yN˛ 0 xN ˛0 yN˛0 yN˛2 C yN˛02 f0 yN˛0 0 f0 yN˛ 0 C : B C B 0 0 0 f0 xN ˛0 f0 yN˛0 f02 0 0 0 C B C B f0 xN ˛ 0 0 f0 yN˛ 0 0 f02 0 0 C B C @ 0 f0 xN ˛ 0 0 f0 yN˛ 0 0 f02 0 A 0 0 0 0 0 0 0 0 0 (7.12)

70

7. EXTENSION AND GENERALIZATION

As in the ellipse fitting case, we need not consider the second-order term 2 ξ˛ in the covariance matrix of Eq. (7.9). Also, the true values xN ˛ , yN˛ , xN ˛0 , and yN˛0 in Eq. (7.12) are, in actual computation, replaced by the observed values x˛ , y˛ , x˛0 , and y˛0 , respectively. us, computing the fundamental matrix F from noisy correspondences can be formalized as computing the unit vector θ that satisfies Eq. (7.5) by considering the noise properties described by the covariance matrices V Œξ˛ . Mathematically, this problem is identical to ellipse fitting. Hence, the algebraic methods (least squares, the Taubin method, HyperLS, iterative reweight, renormalization, and hyper-renormalization) and the geometric methods (FNS, geometric distance minimization, and hyperaccurate correction) can be applied directly.

7.1.2 RANK CONSTRAINT ere is, however, one aspect for fundamental matrix computation that does not exist for ellipse fitting: it can be shown from a geometric consideration that the fundamental matrix F must have rank 2 and hence det F D 0; (7.13) which is called the rank constraint . Generally, three approaches exist for enforcing this rank constraint: A posteriori correction: We compute θ without considering the rank constraint, using a standard (algebraic or geometric) method. If the data are exact, Eq. (7.13) is automatically satisfied whatever method is used, and if the point correspondences are sufficiently accurate, we obtain a solution such that det F  0. So, we modify the solution F afterward to enforce the rank constraint. Internal access: Introducing hidden variables u, we express each component of θ in terms of u in such a way that θ .u/ identically satisfies the rank condition. en, we search for an optimal value of u, e.g., minimize the Sampson error. External access: We do iterations without using hidden variables in such a way that θ approaches the true value at each iteration and at the same time the rank constraint is automatically satisfied when the iterations have converged.

Geometrically, these approaches are interpreted as illustrated in Fig.7.2: we seek a solution that minimizes some cost, e.g., the least-square error, the Sampson error, or the geometric distance, subject to the rank constraint det F = 0. For a posteriori correction, we first go to the point in the solution space where the cost is minimum, without considering the rank constraint, and then move to the hypersurface defined by the rank constraint det F = 0 (Fig. 7.2a). It is easy to find the closest point in the hypersurface using the singular value decomposition (SVD) (,! Problem 7.1), but this is not an optimal solution. e optimal solution is obtained by moving in the direction along which the cost increases the least. e internal access approach first parameterizes the hypersurface using the hidden variable u, which can be seen as the surface coordinates of the

7.1. FUNDAMENTAL MATRIX COMPUTATION

71

SVD det F = 0

(a)

det F = 0

det F = 0

(b)

(c)

Figure 7.2: (a) A posteriori correction: We first go to the minimum of the cost without considering the rank constraint and then move to the hypersurface defined by det F = 0. (b) Internal access: We parameterize the hypersurface of det F = 0 and optimize the surface coordinates on it. (c) External access: We do iterations in the outside space, moving in the direction of decreasing the cost at each step in such a way that in the end the rank constraint is exactly satisfied.

hypersurface. en, we search the hypersurface to minimize the cost (Fig. 7.2b). e external access, on the other hand, does iterations in the outside space, moving in the direction of decreasing the cost at each step in such a way that in the end the rank constraint is exactly satisfied (Fig. 7.2c). In each approach, various different schemes have been studied in the past; see Supplemental Note (page 77).

7.1.3 OUTLIER REMOVAL For extracting point correspondences between two images of the same scene, we look for similarly appearing points in them and match them, using some similarity measure. For this, various methods have been studied. However, existing methods are not all perfect, sometimes matching wrong points. Such false matches are called outliers; correct matches are called inliers. Since fundamental matrix computation has mathematically the same form as ellipse fitting, the RANSAC procedure in Section 4.1 can be used for outlier removal. e underlying idea is, for a given set of point pairs, to find a fundamental matrix such that the number of point pairs that approximately satisfy the epipolar equation is as large as possible. Note that the fundamental matrix F has nine elements. Hence, if eight point pairs are given, we can determine F from the resulting eight epipolar equations of the form of Eq. (7.1) up to scale. e RANSAC procedure goes as follows.

Procedure 7.1

(RANSAC)

1. From among the input point pairs randomly select eight, and let ξ1 , ξ2 , ..., ξ8 be their vector representations in the form of ξ of Eq. (7.3).

72

7. EXTENSION AND GENERALIZATION

2. Compute the unit eigenvector θ of the matrix

M8 D

8 X

ξ˛ ξ˛> ;

(7.14)

˛D1

for the smallest eigenvalue. 3. Store that θ as a candidate, and count the number n of point pairs in the input that satisfy .ξ ; θ /2 < 2d 2 ; .θ ; V0 Œξ θ /

(7.15)

where d is a threshold for admissible deviation of each point of a corresponding pair, e.g., d = 2 (pixels). Store that n. 4. Select a new set of eight point pairs from the input, and do the same. Repeat this many times, and return from among the stored candidates the one for which n is the largest. Comments. Step 2 computes the fundamental matrix by least squares from the eight point pairs selected in Step 1. e rank constraint of Eq. (7.13) is not considered, because speed is more important than accuracy for repeated voting. Step 3 tests to what extent each point pair satisfies the epipolar equation for the candidate θ . e left side of Eq. (7.15), which has the dimension of square length (note that both ξ and V0 Œξ  have the dimension of square length), is an approximation of the square sum of the minimum distances necessary to move the point pair .x; y/ and .x 0 ; y 0 / to .x; N y/ N and .xN 0 ; yN 0 / so that the epipolar equation is satisfied (the derivation is the same as that of Eq. (3.2) for ellipse fitting). As in ellipse fitting, we can directly go on to the next sampling if the count n is smaller than the stored count, and we can stop if the current value of n is not updated over a fixed number of sampling. In the end, those point pairs that do not satisfy Eq. (7.15) for the chosen θ are removed as outliers.

7.2

HOMOGRAPHY COMPUTATION

7.2.1 FORMULATION Suppose we take images of a planar surface by two cameras. If point .x; y/ in one images corresponds to point .x 0 ; y 0 / in the other, it can be shown that they are related in the form x 0 D f0

H11 x C H12 y C H13 f0 ; H31 x C H32 y C H33 f0

y 0 D f0

H21 x C H22 y C H23 f0 ; H31 x C H32 y C H33 f0

(7.16)

for some constants Hij . As in ellipse fitting and fundamental matrix computation, f0 is a scaling constant. Equation (7.16) can be equivalently written as 0 0 1 0 10 1 x =f0 H11 H12 H13 x=f0 @ y 0 =f0 A ' @ H21 H22 H23 A @ y=f0 A ; (7.17) 1 H31 H32 H33 1

7.2. HOMOGRAPHY COMPUTATION

73

where ' denotes equality up to a nonzero constant. As mentioned in Chapter 6, this mapping from .x; y/ to .x 0 ; y 0 / is called a homography (or projective transformation), when the matrix H = .Hij / is nonsingular (see Eq. (5.16)). It can be shown that the matrix H is determined by the relative positions of the two cameras and their intrinsic parameters, such as the focal length, and the position and orientation of the planar scene. Since the same homography is represented if H multiplied by a nonzero constant, we normalize it to 0 1 s X kH k @ Hij2 A D 1: (7.18) i;j D1;3

Equation (7.17) implies the left and right sides are parallel vectors, so we may rewrite it as 0

1 0 H11 x 0 =f0 @ y 0 =f0 A  @ H21 1 H31

If we define the 9-D vectors 0 1 0 0 H11 B 0 B H12 C B C B B 0 B H C B B 13 C B f x B H C B B 21 C 0 B C B θ D B H22 C ; ξ.1/ D B f0 y B C B B f02 B H23 C B C B B xy 0 B H31 C B C B @ yy 0 @ H32 A f0 y 0 H33

H12 H22 H32

10 1 0 1 H13 x=f0 0 H23 A @ y=f0 A D @ 0 A : 1 0 H33 0

1

C C C C C C C C; C C C C C A

ξ.2/

B B B B B B B DB B B B B B @

f0 x f0 y f02 0 0 0 xx 0 yx 0 f0 x 0

0

1

C C C C C C C C; C C C C C A

(7.19)

ξ.3/

B B B B B B B DB B B B B B @

xy 0 yy 0 f0 y 0 xx 0 yx 0 f0 x 0 0 0 0

1

C C C C C C C C; C C C C C A

(7.20) the vector equation of Eq. (7.19) has the following three components: .ξ .1/ ; θ / D 0;

.ξ .2/ ; θ / D 0;

.ξ .3/ ; θ / D 0:

(7.21)

e vector θ has scale indeterminacy, and Eq. (7.18) is equivalent to normalizing θ to unit norm: kθ k = 1. However, we should note that the three equations of Eq. (7.21) are not linearly independent (,! Problem 7.2). is means that if the first and the second equations in Eq. (7.21) are satisfied, for example, the third one is automatically satisfied. Computing the homography matrix H from point correspondences of two planar scene images is the first step of many computer vision applications. In fact, from the computed H , we can determine the relative position and orientation of the two cameras and the positions the planar surface relative to the two cameras. Mathematically, computing the matrix H that satisfies Eq. (7.16) for corresponding points .x˛ ; y˛ / and .x˛0 ; y˛0 /, ˛ = 1, ..., N , in the presence of noise

74

7. EXTENSION AND GENERALIZATION

(xα', y α')

(xα , yα )

Figure 7.3: Computing a homography from noisy point correspondences.

(Fig. 7.3) is equivalent to computing a unit vector θ such that .ξ˛.1/ ; θ /  0;

.ξ˛.2/ ; θ /  0;

.ξ˛.3/ ; θ /  0;

˛ D 1; :::; N;

(7.22)

where ξ˛.k/ is the value of the 9-D vector ξ .k/ in Eq. (7.20) obtained by substituting x˛ , y˛ , x˛0 , and y˛0 for x , y , x 0 , and y 0 , respectively. We regard the data x˛ , y˛ , x˛0 , and y˛0 as deviations of their true values xN ˛ , yN˛ , and xN ˛0 , by noise terms x˛ , y˛ , x˛0 , and y˛0 , respectively, and write x˛ D xN ˛ C x˛ ;

x˛0 D xN ˛0 C x˛0 ;

y˛ D yN˛ C y˛ ;

y˛0 D yN˛0 C y˛0 :

(7.23)

If we substitute these into ξ˛.k/ obtained from Eq. (7.20), we have

ξ˛.k/ D ξN˛.k/ C 1 ξ˛.k/ C 2 ξ˛.k/ ;

(7.24)

where ξN˛.k/ is the true value of ξ˛.k/ obtained by replacing x˛ , y˛ , x˛0 , and y˛0 by their true values xN ˛ , yN˛ , xN ˛0 , and yN˛0 , respectively, while 1 ξ˛.k/ and 2 ξ˛.k/ are, respectively, the first- and the secondorder noise terms. We regard the noise terms x˛ and y˛ as random variables and define the covariance matrices of ξ˛.k/ and ξ˛.l/ by V .kl/ Œξ˛  D EŒ1 ξ˛.k/ 1 ξ˛.l/> ;

(7.25)

where EŒ   denotes expectation for their probability distribution. If we regard x˛ , y˛ , x˛0 , and y˛0 as independent Gaussian variables of mean 0 and standard deviation  , Eq. (7.10) holds. Using this relationship, we can express the covariance matrix V .kl/ Œξ˛  in the form V .kl/ Œξ˛  D  2 V0.kl/ Œξ˛ ;

where T˛.k/ is the 9  4 matrix defined by  @ξ .k/ T˛.k/ D @x

@ξ .k/ @y

V0.kl/ Œξ˛  D T˛.k/ T˛.l/> ;

@ξ .k/ @x 0

@ξ .k/ @y 0

ˇ ˇ ˇ : ˇ

(7.26)

(7.27)

˛

Here, .  /j˛ means that the expression is evaluated at x = x˛ , y = y˛ , x 0 = x˛0 , and y 0 = y˛0 . As in the ellipse and fundamental matrix cases, we call V0.kl/ Œξ˛  the normalized covariance matrices. Computing the homography matrix H from noisy point correspondences means computing the unit vector θ that satisfies (7.22) by considering the noise properties described by the

7.2. HOMOGRAPHY COMPUTATION

75

.kl/

covariance matrices V Œξ˛ . is is the same as ellipse fitting and fundamental matrix computation except that the number of equations is three, rather than 1. It can be shown that after appropriate modification, we can apply the algebraic methods (least squares, the Taubin method, HyperLS, iterative reweight, renormalization, and hyper-renormalization) and the geometric methods (FNS, geometric distance minimization, and hyperaccurate correction). For instance, all algebraic methods solve Eq. (2.17), but the matrix M is modified to

MD

N 3 1 X X W˛.kl/ ξ˛.k/ ξ˛.l/> : N ˛D1

(7.28)

k;lD1

For least squares, the Taubin method, ana HyperLS, the weight W˛.kl/ is the Kronecker delta ıkl , taking 1 for k = l and 0 otherwise. For iterative reweight, renormalization, and hyperrenormalization, W˛.kl/ is given by   W˛.kl/ D .θ ; V0.kl/ θ / : (7.29) 2

e right side is a symbolic abbreviation, meaning the .kl/ element of the pseudoinverse of truncated rank 2 of the matrix whose .kl/ element is .θ ; V0.kl/ θ /. e use of the pseudoinverse of truncated rank 2 reflects the fact that only two equations in Eq. (7.21) are linearly independent. e form of the matrix N depends on the method. For example, N = I (the identity) for least squares and iterative reweight. For renormalization, it has the form

ND

N 3 1 X X W˛.kl/ V0.kl/ V0 Œξ˛  N ˛D1

(7.30)

k;lD1

where the weight W˛.kl/ is given by Eq. (7.29); the Taubin method results if W˛.kl/ is replaced by ıkl . For HyperLS and hyper-renormalization, N has a slightly complicated form (see the Supplemental Note of Chapter 8) (page 90). For geometric fitting, the Sampson error of Eq. (3.3) is modified to N 3 1 X X J D W˛.kl/ .ξ˛.k/ ; θ /.ξ˛.l/ ; θ /; (7.31) N ˛D1 k;lD1

with the weight W˛.kl/ define by Eq. (7.29). e FNS procedure for minimizing this goes basically the same as Procedure 3.1, where the matrix M in Eq. (3.4) is given by Eq. (7.28) with the weight W˛.kl/ of Eq. (7.29). e matrix L in Eq. (3.4) is modified to

LD

N 3 1 X X .k/ .l/ .kl/ v˛ v˛ V0 Œξ˛ ; N ˛D1 k;lD1

(7.32)

76

7. EXTENSION AND GENERALIZATION

where v˛.k/ is defined by v˛.k/

D

3 X

W˛.kl/ .ξ˛.l/ ; θ /;

(7.33)

lD1

using the weight W˛.kl/ in Eq. (7.29). e geometric distance minimization also goes basically the same as Procedure 3.2 with slight modifications (we omit the details).

7.2.2 OUTLIER REMOVAL Homography computation requires extraction of corresponding points between two images, but, as in fundamental matrix computation, existing extraction algorithms often produce wrong matches, i.e., outliers. Since the mathematical principle is the same for fundamental matrix computation and homography computation, we can apply the RANSAC of Procedure 7.1 with appropriate modifications. Note that the homography matrix H has nine elements. Hence, if four point pairs are given, we can determine H from the resulting four pairs of homography equations of the form of Eq. (7.16) up to scale. Procedure 7.1 is modified as follows. Procedure 7.2

(RANSAC)

1. From among the input point pairs randomly select four, and let ξ1.k/ , ξ2.k/ , ξ3.k/ , and ξ4.k/ , k = 1, 2, 3, be their vector representations in the form of ξ .k/ in Eq. (7.20). 2. Compute the eigenvector θ of the matrix

M4 D

4 X 3 X

ξ˛.k/ ξ˛.k/> ;

(7.34)

˛D1 kD1

for the smallest eigenvalue. 3. Store that θ as a candidate, and count the number n of point pairs in the input that satisfy 3 X

W .kl/ .ξ .k/ ; θ /.ξ .l/ ; θ / < 2d 2 ;

(7.35)

k;lD1

where the weight W .kl/ is defined by Eq. (7.29) for that pair, and d is a threshold for admissible deviation of each point of a corresponding pair, e.g., d = 2 (pixels). Store that n. 4. Select a new set of four point pairs from the input, and do the same. Repeat this many times, and return from among the stored candidates the one for which n is the largest. Comments. Step 2 computes the homography matrix by least squares from the four point pairs selected in Step 1. Step 3 evaluates, for each point pair .x; y/ and .x 0 ; y 0 /, the square distances

7.3. SUPPLEMENTAL NOTE 0

77

0

necessary to move them to the positions .x; N y/ N and .xN ; yN / that satisfy the homography equation for the candidate value of θ . Since both ξ and V0 Œξ  have the dimension of square length, the left side of Eq. (7.34) has the dimension of square length. As in ellipse fitting and fundamental matrix computation, we can directly go on to the next sampling if the count n is smaller than the stored count, and we can stop if the current value of n is not updated over a fixed number of sampling. In the end, those point pairs that do not satisfy Eq. (7.35) for the chosen θ are removed as outliers.

7.3

SUPPLEMENTAL NOTE

e epipolar equation of Eq. (7.1) is one of the most important foundations of computer vision and is explained in many textbooks, e.g., Hartley and Zisserman [2003], Kanatani [1993a, 1996]. e fundamental matrix F can be determined up to scale by solving eight epipolar equations of the form of Eq. (7.1) determined by eight corresponding point pairs, as shown in the first two steps of the RANSAC (Procedure 7.1). e rank constraint can be imposed by SVD. is is the most classical method for fundamental matrix computation and is known as the 8-point algorithm. e same procedure can be applied to any number of point pairs, using least squares followed by SVD, so this is also-called the “8-point algorithm.” e principle of optimal a posteriori correction is stated in Kanatani [1996], and its application to fundamental matrix computation is described in Kanatani and Ohta [2003], Kanatani and Sugaya [2007a,b]. Many variants exist for parameterize the rank constraint, but the most direct approach is the use of the SVD of F ; see Sugaya and Kanatani [2007a], Sugaya [2007b]. A typical method of the external access is the extended FNS , or EFNS (see Kanatani and Matsunaga [2013] for the details), which is an extension of the FNS of Chojnacki et al. [2000]. Application of EFNS to fundamental matrix computation is given in Kanatani and Sugaya [2010b]. e homography of Eq. (7.16) between two images of a planar scene is also one of the most fundamental principles of computer vision and is discussed in many textbooks, e.g., Hartley and Zisserman [2003], Kanatani [1993a, 1996]. e method of renormalization, which was initially considered for ellipse fitting and fundamental matrix computation [Kanatani, 1993b, 1996], was extended to homographies by Kanatani et al. [2000, 2014]. e Taubin method, which was originally intended for curve and surface fitting [Taubin, 1991], was extended to homographies by Rangarajan and Papamichalis [2009]. e HyperLS for homography computation was presented by Kanatani et al. [2011] and later generalized to hyper-renormalization by Kanatani et al. [2014]. e FNS of Chojnacki et al. [2000], which was initially considered for ellipse fitting and fundamental matrix computation, was extended symbolically to homographies by Scoleri et al. [2005]. It was reduced to a compact computational scheme by Kanatani and Niitsuma [2011]. e hyperaccurate correction for ellipse fitting [Kanatani, 2006, 2008] was extended to homographies by Kanatani and Sugaya [2013]. For ellipse fitting, sophisticate methods based on high order error analysis, such as HyperLS, hyper-renormalization, and hyperaccurate correction, exhibit superior performance to other methods, as demonstrated in the preceding chapter. However, these methods do not lead

78

7. EXTENSION AND GENERALIZATION

to accuracy improvement very much for fundamental matrices and homographies; the accuracy is nearly the same as FNS or EFNS. It is surmised that this is partly due to the consequence of the constraint on fundamental matrices and homographies being bilinear in the variables: the quadratic terms in Eqs. (7.3) and (7.20) are xx 0 , xy 0 , x 0 y , etc., but no terms of x 2 , y 2 , x 02 , and y 02 are included. Since noise in different images is regarded as independent, the noise in xx 0 , xy 0 , x 0 y , etc. has expectation 0, i.e., these terms are unbiased. is is a big contrast to the ellipse equation, which contains the square terms x 2 and y 2 . One of the consequences of this for fundamental matrix computation is that the vector e defined by Eq. (2.15) is 0, as easily seen from Eqs. (7.8) and (7.10).

PROBLEMS 7.1.

(1) Show that if the matrix norm is defined by Eq. (7.2), we can write kF k2 for an 3  3 matrix F as kF k2 D 12 C 22 C 32 ; (7.36) where i , i = 1, 2, 3, are the singular values of F . (2) Describe the procedure to force a nonsingular matrix F to det F = 0 and kF k = 1, using SVD.

7.2.

Show that the three equations of Eq. (7.21) are linearly dependent.

79

CHAPTER

8

Accuracy of Algebraic Fitting We analyze the accuracy of algebraic methods for ellipse fitting in a generalized framework. We do a detailed error analysis and derive explicite expressions for the covariance and bias of the solution. e hyper-renormalization procedure is derived in this framework. In order that the result directly applies to the fundamental matrix computation described in Section 7.1, we treat θ and ξ˛ as n-D vectors (n = 6 for ellipse fitting, and n = 9 for fundamental matrix computation) and do not use particular properties of ellipse fitting.

8.1

ERROR ANALYSIS

As pointed out in Section 2.4, all algebraic methods for estimating θ solve the equation in the form: M θ D N θ ; (8.1)

where the n  n matrices M and N are defined in terms of the observations ξ˛ , ˛ = 1, ..., N , and the unknown θ . Here, we consider iterative reweight, renormalization, and hyperrenormalization, so M is given by (see Eq. (2.18))

MD

N 1 X W˛ ξ˛ ξ˛> ; N ˛D1

W˛ D

1 : .θ ; V0 Œξ˛ θ /

(8.2)

We assume that the observation ξ˛ is perturbed from its true value ξN˛ in the form of Eq. (1.17). Accordingly, the solution θ of Eq. (8.1) is also perturbed from its true value θN in the form

θ D θN C 1 θ C 2 θ C    ;

(8.3)

where i denotes the i th order term in the standard deviation  of the noise in the data. e scalar  and the matrix N on the right side of Eq. (8.1) depend on particular methods, but we do error analysis, regarding them as yet to be determined. Substituting Eqs. (1.17) and (8.3) into Eq. (8.2), we can expand M and W˛ in the form N C 1 M C 2 M C    ; M DM

(8.4)

W˛ D WN ˛ C 1 W˛ C 2 W˛ C    ;

(8.5)

N N  1 X N  1 X > > N N 1 M D W˛ 1 ξ˛ ξ˛ C ξ˛ 1 ξ˛ C 1 WN ˛ ξN˛ ξN˛> ; N ˛D1 N ˛D1

(8.6)

where 1 M and 2 M are given by

80

8. ACCURACY OF ALGEBRAIC FITTING

2 M D

N  1 X N  W˛ 1 ξ˛ 1 ξ˛> C 2 ξ˛ ξN˛> C ξN˛ 2 ξ˛> N ˛D1 N N   1 X 1 X > > N N C 1 W˛ 1 ξ˛ ξ˛ C ξ˛ 1 ξ˛ C 2 W˛ ξN˛ ξN˛> ; N ˛D1 N ˛D1

and 1 W˛ and 2 W˛ are given by   1 W˛ D 2WN ˛2 1 θ ; V0 Œξ˛ θN ;   .1 W˛ /2 2 W ˛ D WN ˛2 .1 θ ; V0 Œξ˛ 1 θ / C 2.2 θ ; V0 Œξ˛ θN / : WN ˛

(8.7)

(8.8) (8.9)

(See Proposition 8.1 below.) Similarly expanding  and N , we can write Eq. (8.1) in the following form: N C 1 M C 2 M C    /.θN C 1 θ C 2 θ C    / .M N C 1 N C 2 N C    /.θN C 1 θ C 2 θ C    /: D .N C 1  C 2  C    /.N

(8.10)

e true solution θN satisfies the constraint .ξN˛ ; θN / = 0 for the true observations ξN˛ . Hence, N θN = 0. Equating the noiseless terms on both sides of Eq. (8.10), we Eq. (8.2) implies that M N N N N N obtain M θ = N θ . We assume that NN θN ¤ 0, so N = 0. Equating the first-order terms on both sides of Eq. (8.10), we obtain N 1 θ C 1 M θN D 1 N N θN : M

Computing the inner product with θN on both sides, we obtain       N 1 θ C θN ; 1 M θN D 1  θN ; N N θN : θN ; M

(8.11)

(8.12)

N 1 θ / = .M N θN ; 1 θ / = 0. Equation (8.6) implies .θN ; 1 M θN / = 0. Since we are Note that .θN ; M assuming that .θN ; NN θN / ¤ 0, we see that 1  = 0. Noting this and equating the second-order terms on both sides of Eq. (8.10), we obtain N 2 θ C 1 M 1 θ C 2 M θN D 2 N N θN : M

8.2

(8.13)

COVARIANCE AND BIAS

N θN = 0, so the matrix M N We want to solve Eq. (8.11) (with 1  = 0) for 1 θ . However, M N has rank n 1, θ being its null vector. Hence, its inverse does not exist. Instead, we consider its N (also rank n 1). Note that the product M N M N (,! Problem 8.4) equals the pseudoinverse M projection matrix PθN in the direction of θN (,! Problem 6.1). It follows that, after multiplication N on both sides, 1 θ is expressed in the form of Eq. (8.11) by the pseudoinverse M 1 θ D

N 1 M θN ; M

(8.14)

8.2. COVARIANCE AND BIAS

81

where we have noted that since θ is normalized to unit norm, the first-order error 1 θ is orthogonal to θN and hence PθN 1 θ = 1 θ . From Eq. (8.14), the covariance matrix V Œθ  of the solution θ is given to a first approximation by   2 N V Œθ  D E 1 θ 1 θ > D M : (8.15) N (See Proposition 8.2 below.) is coincides with the theoretical accuracy limit called the KCR lower bound to be discussed in Chapter 10. From Eq. (8.15), we conclude that the solution θ of iterative reweight, renormalization, and hyper-renormalization has the same covariance matrix, which agrees with the theoretical accuracy limit to a first approximation. Hence, we cannot substantially improve the covariance any further. However, the total error is the sum of the covariance terms and the bias terms, and we may reduce the bias by a clever choice of the matrix N . Evidently, the first-order term 1 θ in Eq. (8.14) is unbiased, since the expectation of oddorder error terms is zero: EŒ1 θ  = 0. So, we focus on the second-order bias EŒ2 θ . Substituting Eq. (8.14) into Eq. (8.13), we obtain N θN D M N 2 θ 2 N

N 1 M θN C 2 M θN D M N 2 θ C T θN ; 1 M M

(8.16)

where we define the matrix T to be

T  2 M

N 1 M : 1 M M

(8.17)

Because θ is a unit vector, it has no error in the direction of itself; we are interested in the error orthogonal to it. So, we investigate the second-order error component orthogonal to θN (see Fig. 6.2): N N ? (8.18) 2 θ  PθN 2 θ D M M 2 θ : Note that the first-order error 1 θ in Eq. (8.14) is itself orthogonal to θN . Multiplying Eq. (8.16) N on both, we obtain ? θ in the following form: by M 2  N N T θN : ? 2 N (8.19) 2θ DM N 2 θ / = Computing the inner product of Eq. (8.16) and θN on both sides and noting that .θN ; M

0, we obtain 2  in the form

  θN ; T θN : 2  D  θN ; NN θN

Hence, Eq. (8.19) is rewritten as follows: ? 2θ

N DM

.θN ; T θN / N N Nθ N θN / .θN ; N

us, the second-order bias is N EŒ? 2 θ D M

.θN ; EŒT θN / N N Nθ N θN / .θN ; N

(8.20)

!

T θN :

(8.21) !

EŒT θN  :

(8.22)

82

8. ACCURACY OF ALGEBRAIC FITTING

8.3

BIAS ELIMINATION AND HYPER-RENORMALIZATION

From Eq. (8.22), we find a crucial fact: if we can choose such an N that its noiseless value NN satisfied N θN for some constant c , we will have EŒT θN  = c N   N E ? 2θ DM

N θ/ .θN ; c N NN θN N θN / .θN ; N

N θN cN

!

D 0:

(8.23)

en, the bias will be O. 4 /, because the expectation of odd-order noise terms is zero. In order to choose such an N , we need to evaluate the expectation EŒT θN . We evaluate the expectation N 1 M θN term by term, using the identity of T θN = 2 M θN 1 M M h i E 1 ξ˛ 1 ξˇ> D  2 ı˛ˇ V0 Œξ˛ ; (8.24) which results from our assumption of independent noise for different points and the definition of the normalized covariance matrix V0 Œξ˛ , where ı˛ˇ is the Kronecker delta, taking 1 for ˛ = ˇ and 0 otherwise. We also use the definition (see Eq. (2.15)) EŒ2 ξ˛  D  2 e

(8.25)

of the vector e (we do not use Eq. (2.12), which holds only for ellipse fitting). In the end, we find that EŒT θN  =  2 NN θN holds if we define NN in the form (see Lemmas 8.3 and 8.4 and Proposition 8.5 below) N h i 1 X N  NN D W˛ V0 Œξ˛  C 2S ξN˛ e> N ˛D1 N  h i 1 X N 2  N N ξN˛ V0 Œξ˛  C 2S V0 Œξ˛ M N ξN˛ ξN˛ ; W ξ ; M ˛ ˛ N 2 ˛D1

(8.26)

where S Œ   denotes symmetrization (S ŒA = .A C A> /=2). Since the matrix NN in Eq. (8.26) is defined in terms of true values, we replace them by observations in actual computation. However, the matrix M defined by noisy observations ξ˛ is no longer of rank n 1; it is generally positive definite with positive eigenvalues. So, we replace N by the pseudoinverse M M 1 (see Eq. (2.16)). Using the resulting n 1 of truncated rank n matrix N , we obtain the hyper-renormalization scheme of Procedure 2.3. e bias of the resulting N by ξ˛ and M introduces only errors of O. 4 /, solution θ is O. 4 /, because replacing ξN˛ and M since the third-order noise terms have expectation 0.

8.4. DERIVATIONS

8.4

83

DERIVATIONS

(Errors of weights) e first and the second error terms of W˛ are given by Eqs. (8.8) and (8.9). Proof . e definition of W˛ in Eq. (8.2) is written as W˛ .θ ; V0 Œξ˛ θ / = 1. Expanding W˛ and θ , we obtain   WN ˛ C1 W˛ C2 W˛ C   .θN C1 θ C2 θ C   /; V0 Œξ˛ .θN C1 θ C2 θ C   / D 1: (8.27) N N N Equating the noiseless terms on both sides, we obtain W˛ .θ ; V0 Œξ˛ θ / = 1. Equating the firstorder terms on both sides, we obtain Proposition 8.1

1 W˛ .θN ; V0 Œξ˛ θN / C WN ˛ .1 θ ; V0 Œξ˛ θN / C WN ˛ .θN ; V0 Œξ˛ 1 θN / D 0;

(8.28)

from which we obtain 1 W˛ D

2WN ˛ .1 θ ; V0 Œξ˛ θN / D .θN ; V0 Œξ˛ θN /

2WN ˛2 .1 θ ; V0 Œξ˛ θN /:

(8.29)

us, we obtain Eq. (8.8). Equating the second-order terms on both sides of Eq. (8.27), we obtain   2 W˛ .θN ; V0 Œξ˛ θN / C 1 W˛ .1 θ ; V0 Œξ˛ θN / C .θN V0 Œξ˛ ; 1 θ /   CWN ˛ .1 θ ; V0 Œξ˛ 1 θ / C .2 θ ; V0 Œξ˛ θN / C .θN ; V0 Œξ˛ 2 θ / D 0; (8.30) from which we obtain   21 W˛ .1 θ ; V0 Œξ˛ θN / C WN ˛ .1 θ ; V0 Œξ˛ 1 θ / .θN ; V0 Œξ˛ θN /  C 2.2 θ ; V0 Œξ˛ θN /      1 W˛ N N ˛ .1 θ ; V0 Œξ˛ 1 θ / C 2.2 θ ; V0 Œξ˛ θN / D W˛ 21 W˛ C W 2WN ˛2   2 .1 W˛ / WN ˛2 .1 θ ; V0 Œξ˛ 1 θ / C 2.2 θ ; V0 Œξ˛ θN / : D (8.31) WN ˛

2 W˛ D

1

us, we obtain Eq. (8.9).



(Covariance of the solution) e covariance matrix of θ is given by Eq. (8.15). Proof . Substituting Eq. (8.1) into Eq. (8.14) and noting that ξ˛ θ = 0, we can write 1 θ in the form ! N X 1 N 1 M θN D M N WN ˛ .1 ξ˛ ; θN /ξN˛ : (8.32) 1 θ D M N ˛D1

Proposition 8.2

84

8. ACCURACY OF ALGEBRAIC FITTING

e covariance matrix V Œθ  = EŒ1 θ 1 θ >  is evaluated to be 1 2 0 3 N N X X 1 1 N @ N 5 WN ˛ .1 ξ˛ ; θN /ξN˛ WN ˇ .1 ξˇ ; θN /ξNˇ A M V Œθ  D E 4M N ˛D1 N ˇ D1 2 1 3 0 N X N @ 1 N 5 D E 4M WN ˛ WN ˇ .θN ; 1 ξ˛ /.1 ξˇ ; θN /ξN˛ ξNˇ A M N2 ˛;ˇ D1 1 0 N X N N @ 1 DM WN ˛ WN ˇ .θN ; EŒ1 ξ˛ 1 ξˇ θN /ξN˛ ξNˇ A M N2 ˛;ˇ D1 0 1 N X 1 N @ N DM WN ˛ WN ˇ .θN ;  2 ı˛ˇ V0 Œξ˛ θN /ξN˛ ξNˇ A M N2 ˛;ˇ D1 ! N 2 X  2 N N DM WN .θN ; V0 Œξ˛ θN /ξN˛ ξN˛ M N 2 ˛D1 ˛ ! N 2 2 2 N 1 X N N N N D M N M NM N D M N ; D M W˛ ξ˛ ξ˛ M N N ˛D1 N N NM N N M where we have used the identity M lem 8.3).

N = M

N for the pseudoinverse M

(8.33) (,! Prob

In order to derive Eq. (8.26), we evaluate the expectation of Eq. (8.17) term by term.

Lemma 8.3

(Expectation of 2 M θN )

N N  2 2 X   2 X N  N N N N N V0 Œξ˛ θN ξN˛ : EŒ2 M θ  D W˛ V0 Œξ˛ θ C .e; θ /ξ˛ C 2 WN ˛ WN ˛ ξN˛ ; M N ˛D1 N ˛D1 (8.34)

8.4. DERIVATIONS

85

Proof . From .ξN˛ ; θN / = 0 and Eq. (8.7), we obtain 2 M θN D

N  1 X N  W˛ 1 ξ˛ 1 ξ˛> C 2 ξ˛ ξN˛> C ξN˛ 2 ξ˛> θN N ˛D1 N N   1 X 1 X > > N N N C 1 W˛ 1 ξ˛ ξ˛ C ξ˛ 1 ξ˛ θ C 2 W˛ ξN˛ ξN˛> θN N ˛D1 N ˛D1

D

N  1 X N  W˛ .1 ξ˛ ; θN /1 ξ˛ C .2 ξ˛ ; θN /ξN˛ N ˛D1

C

N   1 X 1 W˛ 1 ξ˛ ; θN ξN˛ : N ˛D1

(8.35)

Hence, h

N i 1 X       N E 2 M θ D WN ˛ E 1 ξ˛ 1 ξ˛> θN C EŒ2 ξ˛ ; θN ξN˛ N ˛D1

C D

N  1 X EŒ1 W˛ 1 ξ˛ ; θN ξN˛ N ˛D1

N N   2 X N  1 X W˛ V0 Œξ˛ θN C .e; θN /ξN˛ C EŒ1 W˛ 1 ξ˛ ; θN ξN˛ : N ˛D1 N ˛D1

(8.36)

Consider the expectation of 1 W˛ 1 ξ˛ . From Eqs. (8.1), (8.8), and (8.14), we see that     N 1 M θN ; V0 Œξ˛ θN 2WN ˛2 1 θ ; V0 Œξ˛ θN D 2WN ˛2 M 0 0 N   X N @1 D2WN ˛2 @M WN ˇ 1 ξˇ ξNˇ> C ξNˇ 1 ξˇ> N ˇ D1 1 1 N 1 X 1 WN ˇ ξNˇ ξNˇ> A θN ; V0 Œξ˛ θN A C N

1 W˛ D

ˇ D1

N X

D

2 N

D

N 2 X N2 N N N V0 Œξ˛ θN /.1 ξˇ ; θN /: W˛ Wˇ .ξˇ ; M N

ˇ D1

ˇ D1

N ξNˇ ; V0 Œξ˛ θN / WN ˛2 WN ˇ .1 ξˇ ; θN /.M

(8.37)

86

8. ACCURACY OF ALGEBRAIC FITTING

Hence, 2

3 N     X 2 N V0 Œξ˛ θN 1 ξˇ ; θN 1 ξ˛ 5 EŒ1 W˛ 1 ξ˛  D E 4 WN ˛2 WN ˇ ξNˇ ; M N ˇ D1

N  h i 2 X N2 N N N V0 Œξ˛ θN E 1 ξ˛ 1 ξ > θN W˛ Wˇ ξˇ ; M D ˇ N

D

2 N

ˇ D1 N X ˇ D1

N V0 Œξ˛ θN / 2 ı˛ˇ V0 Œξ˛ θN WN ˛2 WN ˇ .ξNˇ ; M

2 2 N 3 N N V0 Œξ˛ θN /V0 Œξ˛ θN : W .ξ˛ ; M D N ˛

(8.38)

It follows that N N    1 X 1 X  2 2 N 3 N N V0 Œξ˛ θN /V0 Œξ˛ θN ; θN ξN˛ W˛ .ξ˛ ; M .EŒ1 W˛ 1 ξ˛ ; θN /ξN˛ D N ˛D1 N ˛D1 N

D

N N 2 X 2 2 X N 3 N N V0 Œξ˛ θN /.θN ; V0 Œξ˛ θN /ξN˛ D 2 N V0 Œξ˛ θN /ξN˛ : (8.39) W . ξ ; M WN 2 .ξN˛ ; M ˛ N 2 ˛D1 ˛ N 2 ˛D1 ˛

Substituting Eq. (8.39) into Eq. (8.36), we obtain Eq. (8.34).

Lemma 8.4



N 1 M θN ) (Expectation of 1 M M 2

N 1 M θN  D  EŒ1 M M N2 C

N X ˛D1

N ξN˛ /V0 Œξ˛ θN WN ˛ WN ˛ .ξN˛ ; M

N 3 2 X N N N N V0 Œξ˛ θN /ξN˛ : W˛ W˛ .ξ˛ ; M N 2 ˛D1

(8.40)

Proof . From Eq. (8.1), we can write N N  1 X N  1 X W˛ 1 ξ˛ ξN˛> C ξN˛ 1 ξ˛> θN C 1 WN ˛ ξN˛ ξN˛> θN 1 M θN D N ˛D1 N ˛D1 N 1 X N D W˛ .1 ξ˛ ; θN /ξN˛ : N ˛D1

(8.41)

8.4. DERIVATIONS

87

We can also write N 1 X N W˛ .1 ξ˛ ; θN /ξN˛ N ˛D1

!

N 1 M θN D 1 M M N 1 M M 1 0 N N   X X 1 1 WN ˇ 1 ξˇ ξNˇ> C ξNˇ 1 ξˇ> C 1 WN ˇ ξNˇ ξNˇ> A D@ N N ˇ D1 ˇ D1 ! N 1 X N N M W˛ .1 ξ˛ ; θN /ξN˛ N ˛D1 N  1 X N N  N .1 ξ˛ ; θN /ξN˛ W˛ Wˇ 1 ξˇ ξNˇ> C ξNˇ 1 ξˇ> M D 2 N C D

1 N2

1 N2 C

D

˛;ˇ D1 N X

  N ξN˛ WN ˛ WN ˇ .1 ξ˛ ; θN / 1 ξˇ ξNˇ> C ξNˇ 1 ξˇ> M

˛;ˇ D1 N X

1 N2

1 N2

˛;ˇ D1

N X

˛;ˇ D1

N X

1 N2

C

1 N2

N ξN˛ WN ˛ 1 WN ˇ .1 ξ˛ ; θN /ξNˇ ξNˇ> M

N ξN˛ /1 ξˇ . t1 / WN ˛ WN ˇ .1 ξ˛ ; θN /.ξNˇ ; M

˛;ˇ D1 N X

C

N .1 ξ˛ ; θN /ξN˛ WN ˛ 1 WN ˇ ξNˇ ξNˇ> M

˛;ˇ D1 N X ˛;ˇ D1

N ξN˛ /ξNˇ . t2 / WN ˛ WN ˇ .1 ξ˛ ; θN /.1 ξˇ ; M N ξN˛ /ξNˇ . t3 /: WN ˛ 1 WN ˇ .1 ξ˛ ; θN /.ξNˇ ; M

(8.42)

We evaluate the expectation of the three terms t1 , t2 , and t3 separately. e expectation of t1 is

EŒt1  D D

N   1 X N N N N ξN˛ /E 1 ξˇ 1 ξ > θN W˛ Wˇ .ξˇ ; M ˛ 2 N

1 N2 2

D

 N2

˛;ˇ D1 N X ˛;ˇ D1 N X

˛D1

N ξN˛ / 2 ı˛ˇ V0 Œξ˛ θN WN ˛ WN ˇ .ξNˇ ; M

N ξN˛ /V0 Œξ˛ θN : WN ˛2 .ξN˛ ; M

(8.43)

88

8. ACCURACY OF ALGEBRAIC FITTING

e expectation of t2 is

EŒt2  D D

N  1 X N N N N ξN˛ ξNˇ W W θ ; EŒ ξ  ξ  M ˛ 1 ˛ 1 ˇ ˇ N2

1 N2 2

D

 N2

˛;ˇ D1 N X ˛;ˇ D1 N X ˛D1

  N ξN˛ ξNˇ WN ˛ WN ˇ θN ;  2 ı˛ˇ V0 Œξ˛ M

  N V0 Œξ˛ θN ξN˛ : WN ˛3 ξN˛ ; M

(8.44)

Finally, we consider the expectation of t3 . From Eq. (8.37), we can write

1 Wˇ D

N 2 X N2 N N N V0 Œξˇ θN /.1 ξ ; θN /: W W .ξ ; M N D1 ˇ

(8.45)

Hence, 2

3 N X 2 N V0 Œξˇ θN /.1 ξ ; θN /1 ξ˛ 5 EŒ1 Wˇ 1 ξ˛  D E 4 WN 2 WN .ξN ; M N D1 ˇ D D D

N 2 X N2 N N N V0 Œξˇ θN /EŒ1 ξ˛ 1 ξ θN W W .ξ ; M N D1 ˇ N 2 X N2 N N N V0 Œξˇ θN / 2 ı˛ V0 Œξ˛ θN W W .ξ ; M N D1 ˇ

2 2 N 2 N N N V0 Œξˇ θN /V0 Œξ˛ θN : W W˛ .ξ˛ ; M N ˇ

(8.46)

It follows that     2 2 N V0 Œξˇ θN /V0 Œξ˛ θN ; θN EŒ1 WN ˇ 1 ξ˛ ; θN D WN ˛ WN ˇ2 .ξN˛ ; M N 2 2 2 N N 2 N N V0 Œξˇ θN /.θN ; V0 Œξ˛ θN / D 2 WN 2 .ξN˛ ; M N V0 Œξˇ θN /: D W˛ Wˇ .ξ˛ ; M N N ˇ

(8.47)

8.4. DERIVATIONS

89

us, the expectation of t3 is   2 N 1 X N 2 N 2 N N N N N N W .ξ˛ ; M V0 Œξˇ θ /.ξˇ ; M ξ˛ / ξNˇ EŒt3  D 2 W˛ N N ˇ 2

D D

2 N3

2

˛;ˇ D1 N X

2

˛;ˇ D1 N X

2

ˇ D1 N X

2 N3

2 D 2 N D

˛;ˇ D1 N X

2 N2

ˇ D1

N V0 Œξˇ θN /.ξNˇ ; M N ξN˛ /ξNˇ WN ˛ WN ˇ2 .ξN˛ ; M N ξN˛ ξN> M N V0 Œξˇ θN ξNˇ WN ˛ WN ˇ2 ξNˇ> M ˛

N WN ˇ2 ξNˇ> M

! N 1 X N N N N V0 Œξˇ θN ξNˇ W˛ ξ˛ ξ˛ M N ˛D1

N 2 X N M NM N V0 Œξˇ θN /ξNˇ D 2 N V0 Œξˇ θN /ξNˇ ; (8.48) WN ˇ2 .ξNˇ ; M WN ˇ .ξNˇ ; M N2 ˇ D1

N M NM N =M N for the pseudoinverse M N (,! Problem 8.3). where we have used the identity M N 1 M θN Adding the expectations of t1 , t2 , and t3 together, we obtain the expectation of 1 M M in the form of Eq. (8.40). 

(Optimal NN ) e identity EŒT θN  =  2 NN θN holds if NN is defined by Eq. (8.26). N 1 M θN , we obtain the Proof . Combining the above expectations EŒ2 M θN  and EŒ1 M M N expectation of T θ in the form Proposition 8.5

N N  2 2 X 2 X N  N N N N N V0 Œξ˛ θN /ξN˛ EŒT θ  D W˛ V0 Œξ˛ θ C .e; θ /ξ˛ C 2 WN 2 .ξN˛ ; M N ˛D1 N ˛D1 ˛ N N 2 X N 2 N 3 2 X N 2 N N N N N V0 Œξ˛ θN /ξN˛ W . ξ ; M ξ /V Œ ξ  θ W .ξ˛ ; M ˛ ˛ 0 ˛ N 2 ˛D1 ˛ N 2 ˛D1 ˛ N N  2 X 2 X N  > N N N N ξN˛ /V0 Œξ˛ θN D W˛ V0 Œξ˛ θ C ξ˛ e θ WN 2 .ξN˛ ; M N ˛D1 N 2 ˛D1 ˛ N 2 X N 2 N N N W ξ˛ ξ˛ M V0 Œξ˛ θN N 2 ˛D1 ˛ N  2 X N  D W˛ V0 Œξ˛ θN C .ξN˛ e> C eξN˛ /θN N ˛D1

N 2 X N 2 N N ξN˛ /V0 Œξ˛ θN W .ξ˛ ; M N 2 ˛D1 ˛

N 2 X N 2 N N N N ξN˛ ξN˛ /θN : W .ξ˛ ξ˛ M V0 Œξ˛  C V0 Œξ˛ M N 2 ˛D1 ˛

(8.49)

90

8. ACCURACY OF ALGEBRAIC FITTING

Hence, if we define the matrix NN by Eq. (8.26), the above equation is written as EŒT θN  = N θN .  2N 

8.5

SUPPLEMENTAL NOTE

e covariance and bias analysis of algebraic ellipse fitting was first presented in Kanatani [2008]. Based on this analysis, the hyper-renormalization procedure was obtained by Kanatani et al. [2012]. e result is generalized by Kanatani et al. [2014] to multiple constraints in the form N of .ξ .k/ ; θ / = 0, k = 1, ..., L (see Eq. (7.21)). In this case, Eq. (8.15) still holds if the matrix M is replaced by   W˛.kl/ D .θ ; V0.kl/ θ / ;

N L 1 X X MD W˛.kl/ ξ˛.k/ ξ˛.l/> ; N ˛D1

r

k;lD1

(8.50)

where r is the number of independent constraints (see Eqs. (7.28) and (7.29)). Equation (8.26) is generalized to N L  1 X X N .kl/  .kl/ W˛ V0 Œξ˛  C 2S ŒξN˛.k/ e.l/>  NN D ˛ N ˛D1

1 N2

k;lD1 N L X X

WN ˛.kl/ WN ˛.mn/

  N ξN.m/ V .ln/ Œξ˛  ξN˛.k/ ; M ˛ 0

h˛D1 k;l;m;nD1 i N ξN.l/ ξN.n/> ; C2S V0.km/ Œξ˛ M ˛ ˛

(8.51)

where WN ˛.kl/ is the true value of the W˛.kl/ in Eq. (8.50), and e.k/ ˛ is the vector defined by the relation h i E 2 ξ˛.k/ D  2 e.k/ (8.52) ˛ : us, the hyper-renormalization scheme can be applied to many other problems including homography computation (L = 3, r = 2) described in Section 7.2. For homography computation, however, we have e.k/ ˛ = 0, because we see from Eqs. (7.18) that 0 0 0 1 1 1 0 y˛0 x˛ 0 B B y 0 y˛ C B C C 0 0 ˛ B B B C C C B B B C C C 0 0 0 B B B C C C B B B C C C 0 0 0 B B x˛ x˛ C B C C B B B C C C .2/ .3/ .1/ 0 2 ξ˛ D B 2 ξ˛ D B x˛ y˛ C : 2 ξ˛ D B 0 0 C; C; B B B C C C B B B C C C 0 0 0 B B B C C C B x˛0 x˛ C B B x˛ y˛0 C C 0 B B B C C C @ x˛0 y˛ A @ @ y˛ y˛0 A A 0 0 0 0 (8.53)

8.5. SUPPLEMENTAL NOTE

All these have expectation 0, since we regard variables of mean 0.

x˛ , y˛ , x˛0 ,

and

y˛0

91

as independent random

PROBLEMS 8.1.

Let

0

B ADU@

1

1 ::

: r

C > AV

(8.54)

be the singular value decomposition of an m  n matrix A, where the second matrix on the right side is an r  r diagonal matrix, r  min.m; n/, with singular values 1      r (> 0) as diagonal elements in that order (all non-diagonal elements are 0). e matrices U and V are, respectively, m  r and n  r matrices consisting of orthonormal columns. Show that Eq. (8.54) can be written in the form

A D 1 u1 v1> C 2 u2 v2> C    C r ur vr> ;

(8.55)

where ui and vi are the i th columns of U and V , respectively. 8.2.

e pseudoinverse of the matrix A of Eq. (8.54) is defined to be 1 0 1=1 C > B :: A DV @ AU : :

(8.56)

1=r

Show that this can be written in the form

A D 8.3.

1 1 1 v1 u> v2 u> vr u> 1 C 2 C  C r : 1 2 r

Show that the following identities holds for the pseudoinverse:

AA A D A; 8.4.

(8.57)

A AA D A :

(8.58)

For the matrix A of Eq. (8.54), show that the product AA is the projection matrix onto the space spanned by u1 , ..., ur and that A A is the projection matrix onto the space spanned by v1 , ..., vr .

93

CHAPTER

9

Maximum Likelihood and Geometric Fitting We discuss maximum likelihood estimation and Sampson error minimization and do high order error analysis to derive explicit expressions for the covariance and bias of the solution. e hyperaccurate correction procedure is derived in this framework.

9.1

MAXIMUM LIKELIHOOD AND SAMPSON ERROR

We are assuming that each point .x˛ ; y˛ / is perturbed from its true position .xN ˛ ; yN˛ / by independent Gaussian noise of mean 0 and variance  2 . In statistical terms, the geometric distance between them N  1 X SD .x˛ xN ˛ /2 C .y˛ yN˛ /2 (9.1) N ˛D1 is called the Mahalanobis distance. Strictly speaking, it should be called the “square Mahalanobis distance,” because it has the dimension of square length, but the term “Mahalanobis distance” is widely used for simplicity, just as the term “geometric distance.” Finding an ellipse that minimizes this is called maximum likelihood (ML) estimation. In fact, the probability density, or the likeli2 hood , of the total noise is .constant/  e  NS=2 according to our noise model, so minimizing S is equivalent to maximizing the likelihood. By “minimizing S ,” we mean minimizing S with respect to the ellipse parameter θ . However, Eq. (9.1) itself does not contain θ ; it is included in the ellipse equation .ξN; θ / = 0, subject to which Eq. (9.1) is minimized. is is a major difficulty for direct maximum likelihood estimation. is could be handled by introducing auxiliary variables (we will discuss this later), but the difficulty is resolved if we do maximum likelihood, using not the raw point coordinates .x˛ ; y˛ / but their nonlinear mapping ξ˛ , whose covariance matrix V Œξ˛  is given by Eq. (1.21). If we assume that the observed ξ˛ is perturbed from its true value ξN˛ by independent Gaussian noise of mean 0 and covariance matrix V Œξ˛  (=  2 V0 Œξ˛ ), maximum likelihood estimation is to minimize the Mahalanobis distance in the ξ -space N 1 X ξ˛ J D N ˛D1

ξN˛ ; V0 Œξ˛  .ξ˛

 ξN˛ / ;

(9.2)

94

9. MAXIMUM LIKELIHOOD AND GEOMETRIC FITTING

subject to the constraint .ξN˛ ; θ / D 0;

(9.3)

˛ D 1; :::; N: 1

In Eq. (9.2), we use the pseudoinverse V0 Œξ˛  rather than the inverse V0 Œξ˛  , because the covariance matrix V Œξ˛  is not positive definite, the last column and row consisting of 0 (see Eq. (1.21)). According to the Gaussian noise assumption for ξ˛ , the probability density, or the 2 likelihood, of the data fξ˛ g is .constant/  e  NJ =2 , so maximizing the likelihood is equivalent to minimizing Eq. (9.2) subject to Eq. (9.3). In this formulation, the constraint is linear in ξN˛ , so that we can easily eliminate ξN˛ , using Lagrange multipliers, to obtain J D

N 1 X W˛ .ξ˛ ; θ /2 ; N ˛D1

W˛ D

1 ; .θ ; V0 Œξ˛ θ /

(9.4)

which is simply the Sampson error of Eq. (3.3) (see Proposition 9.1 below). is is a function of θ , so we can minimize this by searching the θ -space. However, ξ˛ is a nonlinear mapping of .x˛ ; y˛ /, so the assumption that ξ˛ is a Gaussian variable of mean 0 and covariance matrix V Œξ˛  (=  2 V0 Œξ˛ ) does not strictly hold, even if .x˛ ; y˛ / are independent Gaussian variables of mean 0 and variance  2 . Still, if the noise in .x˛ ; y˛ / is small, the noise distribution of ξ˛ is expected to have an approximately Gaussian-like form, concentrating around 0 with covariance matrix V Œξ˛  (=  2 V0 Œξ˛ ). It has been experimentally confirmed that it is indeed the case; the Sampson error minimization solution and the geometric distance minimization are almost identical. In fact, we can minimize the geometric distance by repeating the Sampson error minimization, as shown in Chapter 3, but the resulting solution is the same over several significant digits. us, we can in practice identify the Sampson error with the geometric distance. Moreover, the Gaussian noise assumption for ξ˛ makes mathematical error analysis very easy, enabling us to evaluate higher order covariance and bias and derive a theoretical accuracy limit. In the following we adopt this Gaussian noise assumption for ξ˛ .

9.2

ERROR ANALYSIS

We now analyze the error of the solution θ that minimizes the Sampson error J of Eq. (9.4). As in the preceding chapter, we treat θ and ξ˛ as n-D vectors so that the result also holds for the fundamental matrix computation described in Section 7.1. e derivative of Eq. (9.4) with respect to θ has the form of Eq. (3.8), i.e., rθ J D 2.M

L/θ ;

(9.5)

where the n  n matrices M and L are given by N 1 X MD W˛ ξ˛ ξ˛> ; N ˛D1

(9.6)

9.2. ERROR ANALYSIS

LD

N 1 X 2 W .ξ˛ ; θ /2 V0 Œξ˛ : N ˛D1 ˛

95

(9.7)

As in the preceding chapter, we substitute Eq. (1.17) into Eq. (9.4) and (9.6) and expand M and W˛ in the form of Eqs. (8.4) and (8.5), where i M and i W˛ , i = 1, 2, are given by Eqs. (8.1), (8.7), (8.8), and (8.9). Similarly, we expand the matrix L in Eq. (9.7) in the form N C 1 L C 2 L C    LDL

(9.8)

N = 1 L = O . e Since .ξN˛ ; θN / = 0 in the absence of noise, we can see from Eq. (9.7) that L second-order term 2 L is given by 2 L D

N 1 X N 2 N W .ξ˛ ; 1 θ /2 C .ξN˛ ; 1 θ /.1 ξ˛ ; θN / N ˛D1 ˛  C .1 ξ˛ ; θN /.ξN˛ ; 1 θ / C .1 ξ˛ ; θN /2 V0 Œξ˛ :

(9.9)

At the value θ that minimizes the Sampson error J , the derivative rθ J in Eq. (9.5) vanishes, so

M θ D Lθ :

(9.10)

Substituting Eqs. (8.4), (8.3), and (9.8), we expand this equality in the form N C 1 M C 2 M C    /.θN C 1 θ C 2 θ C    / D .2 L C    /.θN C 1 θ C 2 θ C    /: .M (9.11) Equating terms of the same order on both sides, we obtain N 1 θ C 1 M θN D 0; M

(9.12)

N 2 θ C 1 M 1 θ C 2 M θN D 2 LθN : M (9.13) N θN = 0 in the absence of noise. So M N has rank n 1, θN being its null vector. Multiply Recall that M N N M N = P N (,! Problems 6.1 Eq. (9.12) by the pseudoinverse M on both sides. Noting that M θ and 8.4) and that 1 θ is orthogonal to θN , we obtain 1 θ D

N 1 M θN ; M

(9.14)

which is the same as Eq. (8.14). is means that the algebraic solution and the Sampson error minimization solution agree to a first approximation. Hence, the covariance matrix V Œθ  is also given by Eq. (8.15), achieving the KCR lower bound up to O. 4 /. So, we focus on the second-order N on both sides, we obtain term. Multiplying Eq. (9.13) by M ? 2θ D

N 1 M 1 θ M

where we define ? 2 θ by Eq. (8.18).

N 2 M θN C M N 2 LθN ; M

(9.15)

96

9. MAXIMUM LIKELIHOOD AND GEOMETRIC FITTING

9.3

BIAS ANALYSIS AND HYPERACCURATE CORRECTION

e expectation of odd-order error terms is zero. In particular, EŒ1 θ  = 0. Hence, we focus on the second-order bias EŒ> 2 θ . From Eq. (9.15), we obtain EŒ? 2 θ D

N 1 M 1 θ  EŒM

N 2 M θN  C EŒM N 2 LθN : EŒM

(9.16)

We evaluate each term separately, using our noise assumption of Eq. (8.24) and the definition of the vector e in Eq. (8.25). In the end, we obtain the following expression (see Lemmas 9.2, 9.3, and 9.4 below): EŒ? 2 θ D

N N 2 N X N 2 N 2 N X N N V0 Œξ˛ θN /ξN˛ : W˛ .e; θN /ξN˛ C 2 M W˛ .ξ˛ ; M M N N ˛D1 ˛D1

(9.17)

Hence, we can improve the accuracy of the computed θ by subtracting this expression, which we call hyperaccurate correction. Since Eq. (9.17) is defined in terms of true values, we replace them by observations for actual computation. e matrix M defined by observed ξ˛ is generally of full N by the pseudoinverse M rank, so we replace M 1 (see Eq. (2.16)). n 1 of truncated rank n us, we obtain the following expression for hyperaccurate correction (see Eq. (3.19)): c θ D

2 Mn N

1

N X ˛D1

W˛ .e; θ /ξ˛ C

2 M N2 n

1

N X

W˛2 .ξ˛ ; Mn

1 V0 Œξ˛ θ /ξ˛ :

(9.18)

˛D1

Subtracting this, we correct θ to

θ

N Œθ

(9.19)

c θ ;

where N Œ   denotes normalization to unit norm. is operation is necessary, since θ is defined to be a unit vector. For evaluating Eq. (9.18), we need to estimate  2 . is is done, using the wellknown fact in statistics: if Smin is the minimum of Eq. (9.1), then NSmin = 2 is a 2 variable of N .n 1/ degrees of freedom, and the expectation of a 2 variable equals its degree of freedom. Note that the parameter θ has n components but is normalized to unit norm, so it has n 1 degrees of freedom. Hence, if we identify EŒSmi n  with the minimum of the Sampson error J in Eq. (9.4), we can estimate  2 in the following form (see Eq. (3.18)): O 2 D

J 1

.n

1/=N

:

(9.20)

By this hyperaccurate correction, the bias is removed up to O. 4 /; replacing true values by observations in Eq. (9.18) and estimating  2 by Eq. (9.20) introduces only errors of O. 4 /.

9.4

DERIVATIONS

We first derive Eq. (9.4). From Eqs. (1.8) and (7.3), we see that the last component of ξ is f02 , so no noise perturbation occurs in the direction of k  .0; :::; 0; 1/> . is means that V0 Œξ˛  has

9.4. DERIVATIONS

97

rank n 1, k being its null vector. Hence, both V0 Œξ˛ V0 Œξ˛  and V0 Œξ˛  V0 Œξ˛  are equal to the projection matrix onto the subspace orthogonal to k (,! Problem 8.4), having the form of Pk = I kk> (,! Problem 6.1). (However, all the following results also hold if V0 Œξ˛  is nonsingular). Proposition 9.1 (Sampson error) Minimizing Eq. (9.2) subject to Eq. (9.3) reduces to minimization

of the Sampson error of Eq. (9.4). Proof . In minimizing Eq. (9.2) with respect to ξN˛ , we must note that ξN˛ is constrained not only by Eq. (9.3) but also by .ξN˛ ; k/ = f02 . So, We introducing Lagrange multipliers ˛ and ˛ and consider N 1 X .ξ˛ N ˛D1

ξN˛ ; V0 Œξ˛  .ξ˛

N X

ξN˛ //

˛D1

˛ .ξN˛ ; θ /

N X ˛D1

˛ ..ξN˛ ; k/

f02 /:

(9.21)

Differentiating this with respect to ξN˛ and setting the result to 0, we obtain 2 V0 Œξ˛  .ξ˛ N

ξN˛ /

˛ k D 0:

(9.22)

Multiply this by V0 Œξ˛ . Note that V0 Œξ˛ V0 Œξ˛  = Pk and Pk .ξ˛ V0 Œξ˛ k = 0. Hence, we obtain

ξN˛ / = 0. Also, note that

2 .ξ˛ N

ξN˛ /

from which we obtain

ξN˛ D ξ˛

˛ θ

˛ V0 Œξ˛ θ D 0; N ˛ V0 Œξ˛ θ : 2

(9.23)

(9.24)

Substituting this into Eq. (9.3), we have .ξ˛ ; θ /

N ˛ .θ ; V0 Œξ˛ θ / D 0; 2

so ˛ D

2.ξ˛ ; θ / : N.θ ; V0 Œξ˛ θ /

(9.25)

(9.26)

Substituting Eq. (9.24) into Eq. (9.2), we obtain J D D

 N  N N ˛ N X 2 1 X N ˛ V0 Œξ˛ θ ; V0 Œξ˛  V0 Œξ˛ θ D  .θ ; V0 Œξ˛ V0 Œξ˛  V0 Œξ˛ θ / N ˛D1 2 2 4 ˛D1 ˛ N N 1 X .ξ˛ ; θ /2 N X 2 ˛ .θ ; V0 Œξ˛ θ / D ; 4 ˛D1 N ˛D1 .θ ; V0 Œξ˛ θ /

where we have used the identity V0 Œξ˛ V0 Œξ˛  V0 Œξ˛  = V0 Œξ˛  (,! Problem 8.3).

(9.27) 

98

9. MAXIMUM LIKELIHOOD AND GEOMETRIC FITTING

We now derive Eq. (9.17) by evaluating the right side of Eq. (9.16) term by term. To do this, we split the expression of 1 M and 2 M in Eqs. (8.1) and (8.7) into the following terms: 1 M D 01 M C 1 M ; 2 M D 02 M C 2 M C Ž2 M ;

(9.28) (9.29)

01 M 

N 1 X N W˛ .1 ξ˛ ξN˛> C ξN˛ 1 ξ˛> /; N ˛D1

(9.30)

1 M 

N 1 X 1 W˛ ξN˛ ξN˛> ; N ˛D1

(9.31)

02 M 

N 1 X N W˛ .1 ξ˛ 1 ξ˛> C 2 ξ˛ ξN˛> C ξN˛ 2 ξ˛> /; N ˛D1

(9.32)

2 M 

N 1 X 1 W˛ .1 ξ˛ ξN˛> C ξN˛ 1 ξ˛> /; N ˛D1

(9.33)

Ž2 M

N 1 X  2 W˛ ξN˛ ξN˛> : N ˛D1

(9.34)

Equation (9.17) is obtained by combining the following Lemmas 9.2, 9.3, and 9.4: Lemma 9.2

N 1 M 1 θ ) (Expectation of M N 2 X N 1 M 1 θ  D  M N N ξN˛ /V0 Œξ˛ θN EŒ M WN ˛2 .ξN˛ ; M N2 ˛D1

C

N 3 2 N X N 2 N N V0 Œξ˛ θN /ξN˛ : M W˛ .ξ˛ ; M N2 ˛D1

(9.35)

Proof . From Eq. (9.28), we can write N 1 M 1 θ  D EŒM N 0 M M N 0 M θN  C EŒM N  M M N 0 M θN : EŒM 1 1 1 1

e first term on the right side is written as follows: h i N 0 M M N 0 M θN E M 1 1 2 3 N   X 1 N .1 ξˇ ; θN /ξNˇ 5 N D E 4 2M WN ˛ WN ˇ 1 ξ˛ ξN˛> C ξN˛ 1 ξ˛> M N ˛;ˇ D1

(9.36)

9.4. DERIVATIONS

D

N N 2 X 2 N X N 2 N N ξN˛ /V0 Œξ˛ θN C  M N N ξN˛ /ξN˛ : . ξ ; M W WN ˛2 .θN ; V0 Œξ˛ M M ˛ ˛ 2 N2 N ˛D1 ˛D1

99

(9.37)

From Eqs. (9.33) and (8.9), we obtain 1 M D

N 2 X N2 N W .M 01 M θN ; V0 Œξ˛ θN /ξN˛ ξN˛> : N ˛D1 ˛

(9.38)

Hence, the second term on the right side of Eq. (9.36) is N  M M N 0 M θN  EŒM 1 " 1 # N 2 N X N2 N N 0 M θN /ξN˛ W˛ .M 01 M θN ; V0 Œξ˛ θN /.ξN˛ ; M DE M 1 N ˛D1 D D

N 2 N X N2 N M W˛ .ξ˛ ; EŒ1 θ 1 θ > V0 Œξ˛ θN /ξN˛ N ˛D1 N 2 2 N X N 2 N N V0 Œξ˛ θN /ξN˛ ; M W˛ .ξ˛ ; M N2 ˛D1

(9.39)

where we have used using our noise assumption of Eq. (8.24). Adding Eqs. (9.33) and (9.39), we obtain Eq. (9.35).  Lemma 9.3

N 2 M θN ) (Expectation of M N 2 M θN  D EŒ M

N  2 N X N  M W˛ V0 Œξ˛ θN C .e; θN /ξN˛ N ˛D1 N  2 2 N X N 2  N N V0 Œξ˛ θN ξN˛ : M W ξ ; M ˛ ˛ N2 ˛D1

(9.40)

Proof . Using Eq. (9.29) and noting that Ž2 M θN = 0, we can write N 2 M θN  D EŒM

N 0 M θN  EŒM 2

N  M θN : EŒM 2

(9.41)

e first term on the right side is written as N  1 X N  W˛ EŒ1 ξ˛ 1 ξ˛>  C ξN˛ EŒ2 ξ˛>  θN N ˛D1 N  2 N X N  M W˛ V0 Œξ˛ θN C .e; θN /ξN˛ ; N ˛D1

N 0 M θN  D M N EŒM 2 D

(9.42)

100

9. MAXIMUM LIKELIHOOD AND GEOMETRIC FITTING

where we used the definition of the vector e in Eq. (8.25). e second term on the right side of Eq. (9.41) is written as

N  M θN  D EŒM 2 D

"

N E M N 2M

N 1 X 1 W˛ .1 ξ˛ ; θN /ξN˛ N ˛D1

#

N  1 X N 2N N V0 Œξ˛ θN ξN˛ : W˛ θ ; EŒ1 ξ˛ .01 M θN /> M N ˛D1

(9.43)

e expression EŒ1 ξ˛ .01 M θN />  in the above equation is evaluated as follows: 2 1> 3 0 N  h i  X 1 7 6 E 1 ξ˛ .01 M θN /> DE 41 ξ˛ @ WN ˇ 1 ξˇ ξNˇ> C ξNˇ 1 ξˇ> θN A 5 N ˇ D1

N h i 1 X N 2 N D Wˇ E 1 ξ˛ 1 ξˇ> θN ξNˇ> D W˛ V0 Œξ˛ θN ξN˛> : N N

(9.44)

ˇ D1

Hence, Eq. (9.43) has the following form:

N  M θN  D EŒM 2 D

N 2 2 N X N 3 N N V0 Œξ˛ θN /ξN˛ M W˛ .θ ; V0 Œξ˛ θN /.ξN˛ ; M N2 ˛D1 N 2 2 N X N 2 N N V0 Œξ˛ θN /ξN˛ : M W˛ .ξ˛ ; M N2 ˛D1

us, Eq. (9.41) can be written in the form of Eq. (9.40).

Lemma 9.4

(9.45)



N 2 LθN ) (Expectation of M

N 2 LθN  D EŒM

N N 2 N X N 2 N 2 N X N N N N M W˛ .ξ˛ ; M ξ˛ /V0 Œξ˛ θ C M W V0 Œξ˛ θN : N2 N ˛D1 ˛D1

(9.46)

9.5. SUPPLEMENTAL NOTE

101

Proof . Substituting Eq. (9.9), we can write N 2 LθN  EŒM " # " # N N X 1 N X N2 N 1 N DE M W˛ .ξ˛ ; 1 θ /2 V0 Œξ˛ θN C E M WN ˛2 .ξN˛ ; 1 θ /.1 ξ˛ ; θN /V0 Œξ˛ θN N N ˛D1 ˛D1 # " # " N N 1 N X N2 1 N X N2 2 N N N N N W˛ .1 ξ˛ ; θ /.ξ˛ ; 1 θ /V0 Œξ˛ θ C E W˛ .1 ξ˛ ; θ / V0 Œξ˛ θ CE M M N N ˛D1 ˛D1 D

N N 2 N X N 2 N 1 N X N2 N N N N ξ /V Œ ξ  θ C M W . ξ ; M M W˛ .ξ˛ ; EŒ1 θ 1 ξ˛> θN /V0 Œξ˛ θN ˛ 0 ˛ ˛ ˛ N2 N ˛D1 ˛D1 N N 1 N X N2 N 2 N X N > N N C M W˛ .θ ; EŒ1 ξ˛ 1 θ ξ˛ /V0 Œξ˛ θ C W V0 Œξ˛ θN ; M N N ˛D1 ˛D1

(9.47)

where we have used Eq. (8.15). e expression EŒ1 θ 1 ξ˛>  in the above equation can be evaluated as follows: N 0 M θN 1 ξ >  D EŒM 1 ˛

EŒ1 θ 1 ξ˛>  D

 2 N N N N> M W˛ ξ˛ θ V0 Œξ˛ : N

Hence, Eq. (9.47) has the form of Eq. (9.46).

(9.48) 

Substituting Eqs. (9.35), (9.40), and (9.46) into Eq. (9.16), we obtain Eq. (9.17).

9.5

SUPPLEMENTAL NOTE

e hyperaccurate correction technique was first presented by Kanatani [2006], and detailed covariance and bias analysis was given by Kanatani [2008]. However, the terms that contain the vector e were omitted in these analyses. It has been confirmed by experiments that the vector e plays no significant role in the final results. Later, the hyperaccurate correction technique was generalized by Kanatani and Sugaya [2013] to the problem with multiple constraints .ξ .k/ ; θ / = 0, k = 1, ..., L, in which case Eq. (9.18) is replaced by 2 Mn N

c θ D

2

C

 M N2 n

1

N X L X

.l/ W˛.kl/ .e.k/ ˛ ; θ /ξ˛

˛D1 k;lD1 N X L X

1

W˛.km/ W˛.ln/ .ξ˛.l/ ; Mn

.mn/ Œξ˛ θ /ξ˛.k/ ; 1 V0

(9.49)

˛D1 k;lD1

where the matrix M and the weights W˛.kl/ are given by Eq. (8.50). e vector e.k/ ˛ is defined by Eq. (8.52). us, the hyperaccurate correction can be applied to a wide range of problems

102

9. MAXIMUM LIKELIHOOD AND GEOMETRIC FITTING

including the homography computation (L = 3, r = 2) described in Section 7.2. However, e.k/ ˛ = 0 for homography computation, as pointed out in the Supplemental Note (page 90) of the preceding chapter.

103

CHAPTER

10

eoretical Accuracy Limit We derive a theoretical limit on ellipse fitting accuracy, assuming Gaussian noise for ξ˛ . It is given in the form of a bound, called the “KCR lower bound,” on the covariance matrix of the solution θ . e resulting form indicates that all iterative algebraic and geometric methods achieve this bound up to higher order terms in  , meaning that these are all optimal with respect to covariance. As in Chapters 8 and 9, we treat θ and ξ˛ as n-D vectors for generality, and the result of this chapter applies to a wide variety of problems including the fundamental matrix computation described in Chapter 7.

10.1 KCR LOWER BOUND We want to estimate from noisy observations ξ˛ of the true values ξN˛ the ellipse parameter θ that should satisfy .ξN˛ ; θ / = 0, ˛ = 1, ..., N , in the absence of noise. Evidently, whatever method we use, the estimate θO that we compute is a “function” of the observations ξ˛ in the form θO = θO .fξ˛ gN ˛D1 /. Such a function is called an estimator of θ . It is an unbiased estimator if EŒθO  D θ ;

(10.1)

where EŒ   is the expectation over the noise distribution. e important fact is that Eq. (10.1) must be satisfied identically in ξN˛ and θ , because unbiasedness is a property of the “estimation method,” not the data or the ellipse. Adopting the Gaussian noise assumption for ξ˛ (though this is not strictly true; see Section 9.1), we regard ξ˛ as Gaussian random variables of mean 0 and covariance matrices V Œξ˛  (=  2 V0 Œξ˛ ), independent for different ˛ . Let the computed estimate be θO = θ C θ , and define its covariance matrix by   V Œθ  D Pθ E θ θ > Pθ :

(10.2)

We introduce the projection operation Pθ = I θθ > to the space orthogonal to θ (,! Problem 6.1), because θ is normalized to unit norm. To be specific, the n-D vector θ is constrained to be on the .n 1/-D unit sphere S n 1 centered at the origin, and we are interested in the er-

104

10. THEORETICAL ACCURACY LIMIT

ror behavior of θ in the tangent space Tθ .S n 1 / to the unit sphere S n Chapter 6). We now derive the following result. eorem 10.1

1

at θ (see Fig. 6.2 in

(KCR lower bound) If θO is an unbiased estimator of θ , the inequality ! N 1  2 1 X N N N> W˛ ξ˛ ξ˛ ; WN ˛ D V Œθ   N N ˛D1 .θ ; V0 Œξ˛ θ /

(10.3)

holds for the covariance matrix V Œθ  of θ irrespective of the estimation method, where the inequality A  B means that A B is positive semidefinite. e right side of Eq. (10.3) has rank n 1 and is called the KCR (Kanatani-Cramer-Rao) lower bound . Comparing Eq. (10.3) with Eqs. (8.2) and (8.15), we see that the solution of iterative reweight, renormalization, and hyper-renormalization achieves this bound to a first approximation. Since Eq. (8.14) is the same as Eq. (9.14), the Sampson error minimization solution also achieves this bound to a first approximation. us, all iterative algebraic and geometric methods are optimal as far as covariance is concerned. We now give the proof of eorem 10.1.

10.2 DERIVATION OF THE KCR LOWER BOUND From our noise assumption, the probability density of ξ˛ is  1 p.ξ˛ / D C˛ exp .ξ˛ ξN˛ /; V Œξ˛  .ξ˛ 2

 ξN˛ / ;

(10.4)

where C˛ is a normalization constant. Here, we use the pseudoinverse V Œξ˛  rather than the inverse V Œξ˛  1 , considering a general case of the covariance matrix V Œξ˛  not being positive definite (for ellipse fitting and homography computation, the last column and row consist of 0; see Eqs. (1.21) and (7.12)). If θO is an unbiased estimator, the equality EŒθO

(10.5)

θ D 0

is an identity in ξN˛ and θ . Hence, Eq. (10.5) is invariant to infinitesimal variations in ξN˛ and θ . In other words, ı

Z

.θO

θ /p1    pN d ξ D

Z

.ı θ /p1    pN d ξ C

D ıθ C

Z

.θO

θ/

N X

N Z X ˛D1

.θO

θ /p1    ıp˛    pN d ξ

.p1    ıp˛    pN /d ξ D 0;

(10.6)

˛D1

R R R where p˛ is an abbreviation of p.ξ˛ / and we use the shorthand d ξ =    d ξ1    d ξN . We are considering variations in ξN˛ (not ξ˛ ) and θ . Since the estimator θO is a function of the observations

10.2. DERIVATION OF THE KCR LOWER BOUND

105

ξ˛ , it is not affected by such variations. Note that the variation ı θ is independent of ξ˛ , so it can R R be moved outside the integral d ξ . Also note that p1    pN d ξ = 1. Using the logarithmic differentiation formula, we can write the infinitesimal variation of Eq. (10.4) with respect to ξN˛ in the form ıp˛ D p˛ ı log p˛ D p˛ .rξN˛ log p˛ ; ı ξ˛ / D .l˛ ; ı ξ˛ /p˛ ; (10.7) where we define the score functions l˛ by

l˛  rξN˛ log p˛ :

(10.8)

Substituting Eq. (10.7) into Eq. (10.6), we obtain ıθ D

Z

.θO

N X

θ/

"

.l˛ ; ı ξN˛ /p1    pN d ξ D E .θO

θ/

˛D1

N X

#

.l˛ ; ı ξN˛ / :

(10.9)

˛D1

Due to the constraint .ξN˛ ; θ / = 0, the infinitesimal variations ı ξN˛ and ı θ are constrained to be .θN ; ı ξN˛ / C .ξN˛ ; ı θ / D 0:

(10.10)

Consider the following particular variations ı ξN˛ : ı ξN˛ D

L X l;mD1

WN ˛ .ξN˛ ; ı θ /V0 Œξ˛ θN :

(10.11)

For this ı ξN˛ , Eq. (10.10) is satisfied for any variation ı θ of θ . In fact, .θN ; ı ξN˛ / D

WN ˛ .ξN˛ ; ı θ /.θN ; V0 Œξ˛ θN / D

.ξN˛ ; ı θ /:

(10.12)

Substituting Eq. (10.11) into Eq. (10.9), we obtain " # N X ı θ D E .θO θ / WN ˛ .ξN˛ ; ı θ /.l˛ ; V0 Œξ˛ θN / D

"

E .θO

˛D1

θ/

N X ˛D1

# > N N WN ˛ .l˛ ; V0 Œξ˛ θ /ξ˛ ı θ D

"

E .θO

θ/

N X

m> ˛

#

ıθ;

(10.13)

˛D1

where we define

m˛ D WN ˛ .l˛ ; V0 Œξ˛ θN /ξN˛ :

(10.14)

We now derive the core inequality. Proposition 10.2

(Bound on the covariance matrix V Œθ ) V Œθ   M ;

(10.15)

106

10. THEORETICAL ACCURACY LIMIT

where

2

1> 3 !0 N N X X 6 7 M DE4 m˛ @ mˇ A 5 : ˛D1

(10.16)

ˇ D1

Proof . Equation (10.13) must hold for arbitrary infinitesimal variation ı θ of θ . However, we must not forget that θ is normalized to unit norm. Hence, its infinitesimal variation ı θ is constrained to be orthogonal to θ . So, we can write ı θ = Pθ ı u, where ı u is an arbitrary infinitesimal variation and Pθ ( I θθ > ) is the orthogonal projection matrix onto the tangent space Tθ .S n 1 / to the unit sphere S n 1 at θ . We obtain from Eq. (10.13) " # N X > Pθ E .θO θ / m Pθ ı u D Pθ ı u: (10.17) ˛

˛D1

Note that the constraint .ξN˛ ; θ / = 0 implies Pθ ξN˛ = ξN˛ , so from Eq. (10.14) we see that Pθ m˛ = m˛ . Since Eq. (10.17) must hold for arbitrary (unconstrained) ı u, we conclude that " # N X > O Pθ E .θ θ / m D Pθ ; (10.18) ˛

˛D1

from which we obtain the identity 2 !> 3 ! O O P .θ θ / P .θ θ / 5 PθN E 4 PθN m ˛ ˇ D1 mˇ ˛D1

P > θ /.θO θ /> Pθ Pθ EŒ.θO θ / N ˛D1 m˛  i h PN P P D N > > θ / ˛D1 m> E . N ˛ / ˛D1 m˛ /. ˇ D1 mˇ /   V Œθ  Pθ D : Pθ M

Pθ EŒ.θO .Pθ EŒ.θO

!

(10.19)

Since the inside of the expectation EŒ   is positive semidefinite, so is the rightmost side. A positive semidefinite matrix sandwiched by any matrix and its transpose is also positive semidefinite, so the following is also positive semidefinite:     Pθ M V Œθ  Pθ Pθ M Pθ M M M   2 2 Pθ2 M C M M M Pθ V Œθ Pθ Pθ M C M Pθ D M MM M Pθ2 C M M M   V Œθ  M D : (10.20) M

10.3. EXPRESSION OF THE KCR LOWER BOUND

107

Pθ2

Here, we have noted that = Pθ (,! Problems 6.2) and Pθ m˛ = m˛ , hence Pθ M = M Pθ = M from Eq. (10.16). We have also used the properties M M = M M = Pθ and M M M = M of the matrix M whose domain is Tθ .S n 1 / (,! Problems 8.3 and 8.4). e fact that Eq. (10.20) is positive semidefinite implies the inequality of Eq. (10.15). 

10.3 EXPRESSION OF THE KCR LOWER BOUND What remains is to show that the pseudoinverse M of the matrix M in Eq. (10.16) has the expression in Eq. (10.3). To show this, we use the following facts. Lemma 10.3

(Expectation of score) e score functions l˛ have expectation 0: EŒl˛  D 0:

(10.21)

R Proof . Since p˛ d ξ = 1 is an identity in ξN˛ , the gradient of the left side with respect to ξN˛ (not ξ˛ ) is identically 0. Using the logarithmic differentiation formula (see Eq. (10.7)), we have Z Z Z Z rθN˛ p˛ d ξ D rθN˛ p˛ d ξ D p˛ rθN˛ log p˛ d ξ D p˛ l˛ d ξ D EŒl˛ : (10.22)

Hence, l˛ has expectation 0.



e matrix EŒl˛ l˛>  is called the Fisher information matrix of ξN˛ . It is related to the covariance matrix of ξ˛ as follows. Lemma 10.4

(Fisher information matrix) EŒl˛ l˛>  D

1 V0 Œξ˛  : 2

(10.23)

Proof . From Eq. (10.4), the score functions l˛ in Eq. (10.8) satisfy

l˛ D V Œξ˛  ξ˛ D

1 V0 Œξ˛  ξ˛ : 2

(10.24)

Hence, 1 V0 Œξ˛  EŒξ˛ ξ˛> V0 Œξ˛  4 1 1 D 2 V0 Œξ˛  V0 Œξ˛ V0 Œξ˛  D 2 V0 Œξ˛  ;  

EŒl˛ l˛>  D

where we have noted the identity V0 Œξ˛  V0 Œξ˛ V0 Œξ˛  = V0 Œξ˛  (,! Problem 8.3).

(10.25) 

108

10. THEORETICAL ACCURACY LIMIT

Now, we are ready to show the following result. Proposition 10.5

(Bound expression) e matrix M in Eq. (10.16) has the following pseudoin-

verse:

M

2 D N

N 1 X N N N> W˛ ξ˛ ξ˛ N ˛D1

!

:

(10.26)

Proof . Since EŒl˛  = 0, we see from Eq. (10.14) that EŒm˛  = 0. Noise in different observations is independent, so EŒm˛ m>  = EŒm˛ EŒmˇ > = O for ˛ ¤ ˇ . Hence, ˇ

MD D

N X ˛;ˇ D1 N X ˛D1

EŒm˛ m> ˇ D

N X ˛D1

EŒm˛ m> ˛ D

N X ˛D1

WN ˛2 EŒ.l˛ ; V0 Œξ˛ θN /2 ξN˛ ξN˛>

WN ˛2 .θN ; V0 Œξ˛ EŒl˛ l˛> V0 Œξ˛ θN /ξN˛ ξN˛>

N 1 X N2 N D 2 W .θ ; V0 Œξ˛ V0 Œξ˛  V0 Œξ˛ θN /ξN˛ ξN˛>  ˛D1 ˛

D

N N 1 X N2 N N 1 X N N N> > N N N W . θ ; W˛ ξ˛ ξ˛ ; V Œ ξ  θ / ξ ξ D 0 ˛ ˛ ˛  2 ˛D1 ˛  2 N ˛D1

(10.27)

where we have used Eq. (10.23) and the identity V0 Œξ˛ V0 Œξ˛  V0 Œξ˛  = V0 Œξ˛  (,! Problem 8.3). us, we obtain Eq. (10.26). 

10.4 SUPPLEMENTAL NOTE e KCR lower bound of Eq. (10.3) was first derived by Kanatani [1996] in a much more general framework of nonlinear constraint fitting. is KCR lower bound is closely related to the CramerRao lower bound in traditional statistics, so in Kanatani [1996] the bound of Eq. (10.3) was simply called the “Cramer-Rao lower bound.” Pointing out its difference from the Cramer-Rao lower bound, Chernov and Lesort [2004] named it the “KCR (Kanatani-Cramer-Rao) lower bond” and gave a rigorous proof in a linear estimation framework. e proof of this chapter was based on Kanatani [2008]. In Kanatani [1996], the KCR lower bound of Eq. (10.3) is generalized to the problem with multiple constraints, .ξ˛.k/ ; θ / = 0, k = 1, ..., L, among which only r are independent, in the following form: 1 0 N X L   2 X  @1 WN ˛.kl/ D .θ ; V0.kl/ Œξ˛ θ / : (10.28) V Œθ   WN ˛.kl/ ξN˛.k/ ξN˛.l/> A ; N N ˛D1 k;lD1

(e matrix on the right side of the second equation has rank r .) us, the KCR lower bound for homography computation (L = 3, r = 2) described in Section 7.2 is given in this form.

10.4. SUPPLEMENTAL NOTE

109

e difference of the KCR lower bound from the Cramer-Rao lower bound originates from the difference of geometric estimation, such as ellipse fitting, from traditional statistical estimation. In traditional statistics, we model a random phenomenon as a parameterized probability density p.ξ jθ /, which is called the statistical model , and estimate the parameter θ from noisy instances ξ1 , ..., ξN of ξ obtained by repeated sampling. In ellipse fitting, in contrast, the constraint .ξ ; θ / = 0 is an implicit equation of θ . If we are to deal with this problem in the traditional framework of statistics, the procedure goes as follows. Suppose the observation ξ˛ and the parameter θ are both n-D vectors. We introduce m-D auxiliary variables X1 , ..., XN so that the true value ξN˛ of ξ˛ can be expressed as a function of X˛ and θ in the form of ξN˛ = ξN˛ .X˛ ; θ /. For example, if ξ˛ are some image measurements of the ˛ th point X˛ in the scene and θ encodes the camera parameters, then ξN˛ .X˛ ; θ / is the measurements of that point we would obtain in the absence of noise by using the camera specified by θ . For ellipse fitting, X˛ can be chosen to be the angle ˛ (from a fixed direction) of the moving radius from the center of the ellipse specified by θN . Our goal is to jointly estimate X1 , ..., XN , and θ from noisy observations ξ˛ = ξN˛ C ε˛ , where ε˛ indicates random noise. Let p.fε˛ gN ˛D1 / be the probability density of the noise. e Cramer-Rao lower bound is obtained as follows. First, consider log p.fξ˛ ξN˛ .X˛ ; θ /gN ˛D1 /. Next, evaluate its second derivatives with respect to X1 , ..., XN , and θ (or multiply its first derivatives each other) to define an .mN C n/  .mN C n/ matrix. en, take expectation of that matrix with respect to the density p.fξ˛ ξN˛ .X˛ ; θ /gN ˛D1 /. e resulting matrix is called the Fisher information matrix, and its inverse is called the Cramer-Rao lower bound on the (joint) covariance matrix of X1 , ..., XN , and θ . e auxiliary variables X1 , ..., XN are also known as the nuisance parameters, while θ is called the structural parameter or parameter of interest . Usually, we are interested only in the covariance of θ , so we take out from the .mN C n/  .mN C n/ joint covariance matrix the n  n lower right submatrix, which can be shown to coincide with the KCR lower bound of Eq. (10.3). However, evaluating and inverting the large Fisher information matrix, whose size can grow quickly as the number N of observations increases, is a heavy burden both analytically and computationally. e KCR lower bound in Eq. (10.3) is expressed only in terms of the parameter θ of interest without involving auxiliary nuisance parameters, resulting in significant analytical and computational efficiency, yet giving the same value as the Cramer-Rao lower bound.

111

Answers Chapter 1 P 2 1.1. (1) We minimize .1=N / N ˛D1 .ξ˛ ; n/ subject to knk = 1. Corresponding to Procedure 1.1, the computation goes as follows.

1. Compute the 3  3 matrix

MD

N 1 X ξ˛ ξ˛> : N ˛D1

2. Solve the eigenvalue problem M n = n, and return the unit eigenvector n for the smallest eigenvalue . (2) We obtain the following expression: 1> 10 x˛ x˛ V Œξ˛  D EŒξ˛ ξ˛>  D EŒ@ y˛ A @ y˛ A  0 0 1 0 1 0 2 1 0 0 EŒx˛  EŒx˛ y˛  0 D @ EŒy˛ y˛  EŒy˛2  0 A D 2 @ 0 1 0 A : 0 0 0 0 0 0 0

Chapter 2

2.1. e procedure goes as follows. 1. Compute the 6  6 matrices

MD

N 1 X ξ˛ ξ˛> ; N ˛D1

ND

N 1 X V0 Œξ˛ : N ˛D1

2. Solve the generalized eigenvalue problem M θ = N θ , and return the unit generalized eigenvector θ for the smallest generalized eigenvalue . 2.2. e procedure goes as follows.

112

ANSWERS

1. Compute the 6  6 matrices N 1 X MD ξ˛ ξ˛> ; N ˛D1 N  1 X V0 Œξ˛  C 2S Œξ˛ e>  ND N ˛D1  C2S ŒV0 Œξ˛ M5 ξ˛ ξ˛>  :

N 1 X .ξ˛ ; M5 ξ˛ /V0 Œξ˛  N 2 ˛D1

2. Solve the generalized eigenvalue problem M θ = N θ , and return the unit generalized eigenvector for the eigenvalue  of the smallest absolute value. Chapter 3

3.1. e gradient of Eq. (3.3) with respect to θ is written as rθ J D

N 1 X 2.ξ˛ ; θ /ξ˛ N ˛D1 .θ ; V0 Œξ˛ θ /

N 1 X 2.ξ˛ ; θ /2 V0 Œξ˛ θ D 2.M N ˛D1 .θ ; V0 Œξ˛ θ /2

L/θ D 2Xθ ;

where M , L, and X are the matrices defined in Eqs. (3.4) and (3.5). Chapter 5

5.1. If the curve of Eq. (5.1) is translated in the x and y directions by xc and yc , respectively, where xc and yc are defined by Eq. (5.5), the ellipse is centered at the origin O in the form Ax 2 C 2Bxy C Cy 2 D c;

c D Axc2 C 2Bxc yc C Cyc2

f0 F:

If the xy coordinate system is rotated around the origin by angle  , there exists a  such that the curve equation has the form 1 x 02 C 2 y 02 D c;

in the rotated x 0 y 0 coordinate system. As is well known in linear algebra, the values 1 and       A B cos  sin  2 are the eigenvalues of the matrix , and and are the B C sin  cos  corresponding eigenvectors. is curve is an ellipse when 1 and 2 have the same sign, i.e., when 1 2 > 0. Since the eigenvalues 1 and 2 are the solutions of the secular equation ˇ ˇ ˇ  A ˇ B ˇ ˇ D 2 .A C C / C .AC B 2 / D 0; ˇ B  C ˇ

ANSWERS 2

113

2

we have 1 2 = AC B . Hence, the curve is an ellipse when AC B > 0. It is a real ellipse if 1 and 2 have the same sign as c , and an imaginary ellipse otherwise. e curve is a parabola if one of 1 and 2 is 0, i.e., 1 2 = 0, and is a hyperbola if 1 and 2 have different signs, i.e., 1 2 < 0. ese correspond to AC B 2 = 0 and AC B 2 < 0, respectively. 5.2. (1) Expanding the left side, we can see that the cubic term in  is 3 . We also see that the quadratic term is .A11 C A22 C A33 /2 , i.e., 2 trŒA, and that the coefficient of the linear term is the sum of the minors of A obtained by removing the row and column containing each diagonal element in turn, i.e., AŽ11 C AŽ22 C AŽ33 = trŒAŽ . e constant term is obtained by letting  = 0, i.e., jAj. (2) Since A

1

= AŽ =jAj, we obtain

jA C B j D jAj  jI C A

1

ˇ AŽ ˇˇ ˇ B j D jAj  ˇI C B ˇ: jAj

Evidently, the cubic term in  is jAj3 , and the coefficient of the quadratic term is, from the above result of (1), jAjtrŒ.AŽ =jAj/B  = trŒAŽ B . Letting  = 0, we see that the constant term is jB j. On the other hand, note the equality ˇ ˇ ˇ ˇ 3 ˇB jA C B j D  ˇ C Aˇˇ :  e coefficient of  equals the coefficient of 1=2 in jB = C Aj, which equals trŒB Ž A (= trŒAB Ž ) from the above result. us, we obtain Eq. (5.55). e above argument assumes that A has its inverse A 1 and that  ¤ 0. However, Eq. (5.55) is a polynomial in A and . Hence, this is a “polynomial identity,” holding for all A and  including the case of jAj = 0 and  = 0. 5.3. Equation (1.1) is rearranged in terms of x in the form Ax 2 C 2.By C Df0 /x C .Cy 2 C 2f0 Ey C f02 F / D 0;

which is factorized as follows:

˛; ˇ D

A.x ˛/.y ˇ/ D 0; q .By C Df0 / ˙ .By C Df0 /2 A.Cy 2 C 2f0 Ey C f02 F /

: A Since jQj = 0, the quadratic equation degenerates into the product of two linear terms. Hence, the inside of the square root .B 2

AC /y 2 C 2f0 .BD

AE/y C f02 .D 2

AF /

114

ANSWERS

is a square of a linear expression in y . If B 2 AC > 0, it has the form p  BD AE 2 B 2 AC y C f0 p : B 2 AC us, we obtain the following factorization: ! p p .By C Df0 / C B 2 AC y C f0 .BD AE/= B 2 AC A x A ! p p .By C Df0 / B 2 AC y f0 .BD AE/= B 2 AC  x D 0: A Equation (5.56) results from this. 5.4. If n2 ¤ 0, we substitute y = .n1 x C n3 f0 /=n2 into Eq. (5.1) to obtain Ax 2 2Bx.n1 x C n3 f0 /=n2 C C.n1 x C n3 f0 /2 =n22 C2f0 .Dx E.n1 x C n3 f0 /=n2 / C f02 F D 0:

Expanding this, we obtain the following quadratic equation in x : .An22

2Bn1 n2 C C n21 /x 2 C 2f0 .Dn22 C C n1 n3 C.C n23 2En2 n3 C F n22 /f02 D 0:

Bn2 n3

En1 n2 /x

If we let xi , i = 1, 2, be the two roots, yi is given by yi = .n1 xi C n3 f0 /=n2 . If n2  0, we substitute x = .n2 y C n3 f0 /=n1 into Eq. (5.1) to obtain A.n2 x C n3 f0 /2 =n21 2Bx.n2 y C n3 f0 /=n1 C Cy 2 C2f0 . D.n2 y C n3 f0 /=n1 C Ey/ C f02 F D 0:

Expanding this, we obtain the following quadratic equation in y : .An22

2Bn1 n2 C C n21 /y 2 C 2f0 .En21 C An2 n3 C.An23 2Dn1 n3 C F n21 /f02 D 0:

Bn1 n3

Dn1 n2 /y

If we let yi , i = 1, 2, be the two roots, xi is given by xi = .n2 xi C n3 f0 /=n1 . In actual computation, in stead of considering if n2 ¤ 0 or n2  0, we compute the former solution whenjn2 j  jn1 j and the latter solution when jn2 j < jn1 j. If the quadratic equation has imaginary roots, the line does not intersect the ellipse. If it has a double root, we obtain the tangent point. 5.5. e normal to the line n1 x C n2 y C n3 f0 = 0 is given by .n1 ; n2 /> . e normal to a curve F .x; y/ = 0 at .x0 ; y0 /> is given up to scale by rF = .@F=@x; @F=@y/. Hence, the normal .n1 ; n2 /> to the curve of Eq. (5.1) at .x0 ; y0 / is given up to scale by n1 D Ax0 C By0 C f0 D;

n2 D Bx0 C Cy0 C f0 E:

ANSWERS

115

>

e line passing through .x0 ; y0 / having normal .n1 ; n2 / is given by n1 .x

x0 / C n2 .x

x0 / C n3 f0 D 0:

After expansion using Ax02 C 2Bx0 y0 C Cy02 C 2f0 .Dx0 C Ey0 / C f02 F = 0, we obtain the line n1 x C n2 y C n3 f0 = 0 given by Eq. (5.8). 5.6. We only need to read .x˛ ; y˛ / in Procedure 3.2 as .a; b/ and remove the step of updating the ellipse. Hence, we obtain the following procedure. 1. Represent the ellipse by the 6-D vector θ as in Eq. (1.8), and let J0 = 1 (a sufficiently large number), aO = a, bO = b , and aQ = bQ = 0.

2. Compute the normalized covariance matrix V0 ŒξO obtained by replacing xN ˛ and yN˛ of V0 Œξ˛  in Eq. (1.21) by aO and bO , respectively. 3. Compute

0

B B B B ξ D B B B @

aO 2 C 2aO aQ Q 2.aO bO C bO aQ C aO b/ Ob 2 C 2bO bQ 2f0 .aO C a/ Q O Q 2f0 .b C b/ f0

4. Update aQ , bQ , aO , and bO as follows: 

aQ bQ





2

2.ξ ; θ / .θ ; V0 ŒξOθ /



1 2

2 3

4 5



1 aO @ bO A ; f0

1

C C C C C: C C A

0

aO

a

a; Q

bO

b

Q b:

5. Compute J  D aQ 2 C bQ 2 : O and stop. Else, let J0 If J   J0 , return .a; O b/

J  , and go back to Step 2.

5.7. Transposition of AA 1 = I on both side gives .A 1 /> A> = I . is means that .A 1 /> is the inverse of A> , i.e., .A 1 /> = .A> / 1 . 5.8. Since .x; y; f /> is the direction of the line of sight, we can write X = cx , Y = cy , Z = cf for some c . is point is on the supporting plane n1 X C n2 Y C n3 Z = h, so c.n1 x C n2 y C n3 f / = h holds. Hence, c = h=.n1 x C n2 y C n3 f /, and X , Y , and Z are given by Eq. (5.57).

116

ANSWERS

5.9. e unit vector along the Z -axis is k = .0; 0; 1/> . Let ˝ be the angle made by n and k (positive for rotating n toward k screw-wise). It is computed by ˝ D sin

1

kn  kk:

e unit vector orthogonal to both n and k is given by

l D N Œn  k: e rotion around l by angle ˝ screw-wise is given by the matrix

RD 0

cos ˝ C l12 .1 cos ˝/ @ l2 l1 .1 cos ˝/ C l3 sin ˝ l3 l1 .1 cos ˝/ l2 sin ˝

l1 l2 .1 cos ˝/ l3 sin ˝ cos ˝ C l22 .1 cos ˝/ l3 l2 .1 cos ˝/ C l1 sin ˝

1 l1 l3 .1 cos ˝/ C l2 sin ˝ l2 l3 .1 cos ˝/ l1 sin ˝ A : cos ˝ C l32 .1 cos ˝/

5.10. Let I.i; j / be the value of the pixel with integer coordinates .i; j /. For non-integer coordinates .x; y/, let .i; j / be their integer parts, and .; / (= .x i; y j /) their fraction parts. We compute I.x; y/ by the following bilinear interpolation: I.x; y/ D .1 /.1 /I.i; j / C .1 /I.i C 1; j / C.1 /I.i; j C 1/ C I.i C 1; j C 1/:

is means that we first do linear interpolation in the j -direction, combining I.i; j / and I.i; j C 1/ in the ratio  W 1 , combining I.i C 1; j / and I.i; j C 1/ in the ratio  W 1 , and then combining these in the i -direction in the ratio  W 1  . We obtain the same result if we change the order, linearly interpolating first in the i -direction and then in the j -direction. Chapter 6

6.1 Let  be the angle made by u and v (Fig. 6.12). e length of the projection of v onto the line in the direction of the unit vector u is given by kv k cos  D kukkv k cos  D .u; v /:

Hence, v has the component .u; v /u in the u direction. It follows that the projection of v onto the plane is given by

u

.u; v /u D u

uu> v D .I

uu> /v D Pu v :

6.2 We see that

Pu2 D .I uu> /.I uu> / D I uu> uu> C uu> uu> D I D I 2uu> C uu> D I uu> D Pu :

2uu> C u.u; u/u>

ANSWERS

117

Chapter 7

7.1 (1) Let

0

1 F DU@ 0 0

0 2 0

1 0 0 A V >; 3

1  2  3 > 0;

./

be the singular value decomposition of the matrix F . Using the identities trŒA> A = kAk2 and trŒAB  = trŒBA about the matrix trace and noting that U and V are orthogonal matrices, we obtain the following expression: 0 1 0 1 1 0 0 1 0 0 kF k2 D trŒF > F  D trŒV @ 0 2 0 A U > U @ 0 2 0 A V >  0 0 3 0 0 3 0 2 1 0 2 1 1 0 0 1 0 0 D trŒV @ 0 22 0 A V >  D trŒ@ 0 22 0 A V > V  0 0 32 0 0 32 1 0 2 1 0 0 D tr @ 0 22 0 A 0 0 32 D 12 C 22 C 32 :

(2) We first compute the singular value decomposition of F in the form of the above ./ and replace the smallest eigenvalue 3 by 0 to make det F = 0. en, we change the scale so that kF k = 1. In other words, we correct F to q 0 1 1 = 12 C 22 0 0 B C > q C F UB 0  = 2 C 2 0 A V : @ 2

0

1

0

2

0

7.2 From Eq. (7.20), we see that x 0 ξ .1/ y 0 ξ .2/ = ξ .3/ . Hence, the three equations of Eq. (7.21) are related by x 0 .ξ .1/ ; θ / y 0 .ξ .2/ ; θ / = .ξ .3/ ; θ /, meaning that if the first and the second equations are satisfied, the third is automatically satisfied.

118

ANSWERS

Chapter 8

8.1 From the rule of matrix multiplication, we observe that 0 10 1 B CB :: A D u1    ur @ A@ : r

1 v1> r :: C D X  u v > : A i i i : iD1 vr

8.2 is is immediately obtained from the above result. 8.3 Since fui g and fvi g are orthonormal sets, .ui ; uj / = ıij and .vi ; vj / = ıij hold, where ıij is the Kronecker delta (1 for i = j and 0 otherwise). Hence, we observe that

AA A D D

A AA D D

r X

i ui vi>

iD1 r X i;j;kD1 r X

r r r X X X 1 i k k uk vk> D vj uj> ui .vi ; vj /.uj ; uk /vk> j j

j D1

kD1 r X

i k ıij ıj k ui vk> D j

iD1

i;j;kD1

i ui vi> D A;

r r r X X X 1 j 1 > > vi u>  u v v u D vi .ui ; uj /.vj ; vk /u> j j j k k i k i k i k

iD1 r X

i;j;kD1

j D1

kD1 r

i;j;kD1

X 1 j D vi u> ıij ıj k vi u> i DA : k i k i iD1

8.4 We see that

AA D

r X iD1

i ui vi>

r r r r X X X X 1 i i vj uj> D ui .vi ; vj /uj> D ıij ui uj> D ui u> i ; j j j

j D1

i;j D1

i;j D1

iD1

j D1

i;j D1

i;j D1

iD1

r r r r r X X X X X 1 j j > > >  u v D vi vi> : A AD vi u> v . u ; u / v D ı v v D j j i i j ij i j i j j i i i iD1

Hence,

AA uk D A Avk D

r X iD1

r X iD1

ui .ui ; uk / D vi .vi ; vk / D

r X iD1

r X iD1

ıi;j ui D

ıi;j vi D





uk 0 vk 0

1kr ; r