Generalized Gaussian Error Calculus (ISBN 3642033040, 9783642033049)


English · 314 pages · 2010




Generalized Gaussian Error Calculus

Michael Grabe

With 47 Figures

Dr. rer. nat. Michael Grabe Am Hasselteich 5 38104 Braunschweig, Germany [email protected]

ISBN 978-3-642-03304-9
e-ISBN 978-3-642-03305-6
DOI 10.1007/978-3-642-03305-6
Springer Heidelberg Dordrecht London New York
Library of Congress Control Number: 2009940174
© Springer-Verlag Berlin Heidelberg 2010

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cover design: eStudio Calamar Steinen
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)

To Lucy, Niklas, Finley, and Rafael

Preface

The book of nature is written in the language of mathematics Galileo Galilei, 1623

Metrology strives to supervise the flow of the measurand's true values through consecutive, arbitrarily interlocking series of measurements. To highlight this feature the term traceability has been coined. Traceability is said to be achieved if the true values of each of the physical quantities entering and leaving the measurement are localized by specified measurement uncertainties.

The classical Gaussian error calculus is known to be confined to the treatment of random errors. Hence, there is no distinction between the true value of a measurand on the one side and the expectation of the respective estimator on the other. This did not become apparent until metrologists considered the effect of so-called unknown systematic errors. Unknown systematic errors are time-constant quantities, unknown with respect to magnitude and sign. While random errors are treated via distribution densities, unknown systematic errors can only be assessed via intervals of estimated lengths.

Unknown systematic errors were, in fact, addressed and discussed by Gauss himself. Gauss, however, argued that it was up to the experimenter to eliminate their causes and free the measured values from their influence. Unfortunately, this is not possible. Considering the present state of measurement technique, unknown systematic errors are of an order of magnitude comparable to that of random errors, and this causes the Gaussian error calculus to break down. Consequently, the metrological community needs to consider how a future error calculus should address the coexistence of random errors and unknown systematic errors.

In the late 1970s, a seminar entitled On the Statement of the Measuring Uncertainty [14] was held at the Physikalisch-Technische Bundesanstalt in Braunschweig which, regrettably enough, induced a bifurcation of error calculus: one branch attempted to save the Gaussian approach by formally randomizing unknown systematic errors, thus producing the Guide to the Expression of Uncertainty in Measurement, GUM for short [15]; the other proposed a revision from scratch and issued an essentially new, generalized version of the Gaussian error calculus. This latter approach will be discussed here. The approach considers time-constant unknown systematic errors to spawn biased estimators, thus preventing the true values of the measurands and the expectations of the respective estimators from coinciding. Eventually, this physically founded distinction gave birth to the term traceability.

Independently of the question of how to treat unknown systematic errors, the author devotes attention to another point of interest relating to the treatment of random errors. Commonly, random errors are considered normally distributed, at least approximately, and we shall, as a matter of course, retain this assumption. The inclusion of the multidimensional model in error calculus, an obvious extension, seems as yet to have been overlooked. For this model to be deployed properly, each of the variables involved is required to hold the same number of repeated measurements. This apparently banal request turns out to be exceedingly beneficial: it solves the cumbersome problem error calculus traditionally suffers from when it comes to assigning confidence intervals in error propagation. Renouncing the stipulation of equal numbers of repeated measurements puts experimenters beyond the validity of the distribution density of the empirical moments of second order, and this very observation causes the trouble.

The generalized Gaussian approach presented here produces reliable, easy-to-obtain measurement uncertainties meeting the demands of traceability. At the same time, the approach features the properties of a building kit: any overall uncertainty is the sum of the contribution of random errors, taken from a confidence interval as provided by Student, and the contribution of unknown systematic errors, as expressed by an appropriate worst-case estimation.
The book supplements the author's monograph Measurement Uncertainties in Science and Technology [28]. Its principles have, however, been tautened and cast into an order attaching prior-ranking importance to metrology's starting point, the traceability of physical units, physical constants and physical quantities at large. In addition, the previously discussed formalism has been extended in some respects. Finally, the procedures are now illustrated by scores of numerically based diagrams helping the reader to pursue the idea of traceability.

Braunschweig, November 2009

Michael Grabe

Contents

Part I Basics of Metrology

1 True Values and Traceability . . . 3
1.1 Metrology . . . 3
1.2 Traceability . . . 3
1.3 Measurement Errors . . . 4
1.4 Precision and Accuracy . . . 6
1.5 Measurement Uncertainty . . . 6
1.6 Measuring Result . . . 6
1.7 Rivaling Physical Approaches . . . 8

2 Models and Approaches . . . 9
2.1 Gaussian Error Model . . . 9
2.2 Generalized Gaussian Approach . . . 10
2.3 Robust Testing Conditions . . . 14
2.4 Linearizations . . . 15
2.5 Quiddity of Least Squares . . . 16
2.6 Analysis of Variance . . . 20
2.7 Road Map . . . 21

Part II Generalized Gaussian Error Calculus

3 The New Uncertainties . . . 25
3.1 Gaussian Versus Generalized Gaussian Approach . . . 25
3.2 Uncertainty and True Value . . . 25
3.3 Designing Uncertainties . . . 26
3.4 Quasi Safeness . . . 29

4 Treatment of Random Errors . . . 31
4.1 Well-Defined Measuring Conditions . . . 31
4.2 Multidimensional Normal Model . . . 32
4.3 Permutation of Repeated Measurements . . . 33

5 Treatment of Systematic Errors . . . 35
5.1 Repercussion of Biases . . . 35
5.2 Uniqueness of Worst-Case Assessments . . . 36

Part III Error Propagation

6 Means and Means of Means . . . 39
6.1 Arithmetic Mean . . . 39
6.2 Extravagated Averages . . . 41
6.3 Mean of Means . . . 41
6.4 Individual Mean Versus Grand Mean . . . 47

7 Functions of Erroneous Variables . . . 53
7.1 One Variable . . . 53
7.2 Two Variables . . . 56
7.3 More Than Two Variables . . . 61
7.4 Concatenated Functions . . . 66
7.5 Elementary Examples . . . 68
7.6 Test of Hypothesis . . . 74

8 Method of Least Squares . . . 79
8.1 Empirical Variance–Covariance Matrix . . . 79
8.2 Propagation of Systematic Errors . . . 82
8.3 Uncertainties of the Estimators . . . 83
8.4 Weighting Factors . . . 84
8.5 Example . . . 87

Part IV Essence of Metrology

9 Dissemination of Units . . . 91
9.1 Working Standards . . . 91
9.2 Key Comparisons . . . 95

10 Multiples and Sub-multiples . . . 101
10.1 Calibration Chains . . . 101
10.2 Pairwise Comparisons . . . 110

11 Founding Pillars . . . 113
11.1 Consistency . . . 113
11.2 Traceability . . . 114

Part V Fitting of Straight Lines

12 Preliminaries . . . 117
12.1 Distinction of Cases . . . 117
12.2 True Straight Line . . . 118

13 Straight Lines: Case (i) . . . 121
13.1 Fitting Conditions . . . 121
13.2 Orthogonal Projection . . . 121
13.3 Uncertainties of the Input Data . . . 123
13.4 Uncertainties of the Components of the Solution Vector . . . 124
13.5 Uncertainty Band . . . 126
13.6 EP-Region . . . 127

14 Straight Lines: Case (ii) . . . 131
14.1 Fitting Conditions . . . 131
14.2 Orthogonal Projection . . . 132
14.3 Uncertainties of the Components of the Solution Vector . . . 132
14.4 Uncertainty Band . . . 135
14.5 EP-Region . . . 137

15 Straight Lines: Case (iii) . . . 141
15.1 Fitting Conditions . . . 141
15.2 Orthogonal Projection . . . 142
15.3 Series Expansion of the Solution Vector . . . 143
15.4 Uncertainties of the Components of the Solution Vector . . . 145
15.5 Uncertainty Band . . . 147
15.6 EP-Region . . . 148

Part VI Fitting of Planes

16 Preliminaries . . . 155
16.1 Distinction of Cases . . . 155
16.2 True Plane . . . 155

17 Planes: Case (i) . . . 157
17.1 Fitting Conditions . . . 157
17.2 Orthogonal Projection . . . 157
17.3 Uncertainties of the Input Data . . . 158
17.4 Uncertainties of the Components of the Solution Vector . . . 159
17.5 EPC-Region . . . 161

18 Planes: Case (ii) . . . 165
18.1 Fitting Conditions . . . 165
18.2 Orthogonal Projection . . . 166
18.3 Uncertainties of the Components of the Solution Vector . . . 166
18.4 Confidence Intervals and Overall Uncertainties . . . 168
18.5 Uncertainty Bowls . . . 169
18.6 EPC-Region . . . 171

19 Planes: Case (iii) . . . 179
19.1 Fitting Conditions . . . 179
19.2 Orthogonal Projection . . . 180
19.3 Series Expansion of the Solution Vector . . . 181
19.4 Uncertainties of the Components of the Solution Vector . . . 183
19.5 Uncertainty Bowls . . . 185
19.6 EPC-Region . . . 187

Part VII Fitting of Parabolas

20 Preliminaries . . . 193
20.1 Distinction of Cases . . . 193
20.2 True Parabola . . . 193

21 Parabolas: Case (i) . . . 195
21.1 Fitting Conditions . . . 195
21.2 Orthogonal Projection . . . 195
21.3 Uncertainties of the Input Data . . . 196
21.4 Uncertainties of the Components of the Solution Vector . . . 197
21.5 Uncertainty Band . . . 199
21.6 EPC-Region . . . 199

22 Parabolas: Case (ii) . . . 203
22.1 Fitting Conditions . . . 203
22.2 Orthogonal Projection . . . 204
22.3 Uncertainties of the Components of the Solution Vector . . . 204
22.4 Uncertainty Band . . . 207
22.5 EPC-Region . . . 209

23 Parabolas: Case (iii) . . . 213
23.1 Fitting Conditions . . . 213
23.2 Orthogonal Projection . . . 214
23.3 Series Expansion of the Solution Vector . . . 215
23.4 Uncertainties of the Components of the Solution Vector . . . 217
23.5 Uncertainty Band . . . 219
23.6 EPC-Region . . . 220

Part VIII Non-linear Fitting

24 Series Truncation . . . 227
24.1 Homologous True Function . . . 227
24.2 Fitting Conditions . . . 228
24.3 Orthogonal Projection . . . 228
24.4 Iteration . . . 230
24.5 Uncertainties of the Components of the Solution Vector . . . 231

25 Transformation . . . 237
25.1 Homologous True Function . . . 237
25.2 Fitting Conditions . . . 237
25.3 Orthogonal Projection . . . 238
25.4 Uncertainties of the Components of the Solution Vector . . . 239

Part IX Appendices

A Graphical Scale Transformations . . . 245
B Expansion of Solution Vectors . . . 251
C Special Confidence Ellipses and Ellipsoids . . . 257
D Extreme Points of Ellipses and Ellipsoids . . . 261
E Drawing Ellipses and Ellipsoids . . . 265
F Security Polygons and Polyhedra . . . 267
G EP Boundaries and EPC Hulls . . . 277
H Student's Density . . . 283
I Uncertainty Band Versus EP-Region . . . 287
J Quantiles of Hotelling's Density . . . 295

References . . . 297
Index . . . 299

Part I

Basics of Metrology

1 True Values and Traceability

As much as we expect the laws of physics to be true, we expect the constants of physics to possess true values. Metrology starts from here, from the very grasping of true values and, following the modus operandi of physicists, their traceability to the primary standards of the system of physical units agreed on.

1.1 Metrology

Metrology aims at the entirety of experimental procedures needed to assess the true values of physical quantities and physical constants. At the same time, metrology asks for world-wide uniformity of measuring results. Uniformity in turn presupposes the traceability of metrological standards to the International System of Units, the SI for short [2]. As a matter of fact, due to measuring errors, metrology is not in a position to uncover the true values of physical quantities or constants directly. All metrologists can do is attempt to assess, or localize, the true values of physical quantities by means of appropriately positioned intervals of reliable extension.

1.2 Traceability

Traceability is the fundamental concept of metrology and refers to the handing over of true values in the course of a measurement. Basically, the transfer is accomplished via a comparator mediating the difference between the standard and the measurand, Fig. 1.1. Consider two measuring objects of equal physical quality, say β0,j and β0,k. Traceability expresses itself through

    β0,j − β0,k = x0   (1.1)

where x0 designates the true indication of the comparator. Let us state:


Fig. 1.1. In its most basic form traceability connects the true values of two objects, say, β0,j and β0,k with the true indication of the comparator, say, x0

Traceability controls the transfer of the true value of one measuring object to the true value of another measuring object, mediated by the true indication of a comparator countervailing the action.

Likewise, traceability is established through mathematical concatenations such as functions and algorithms, e.g. least squares. If accomplished, traceability controls the flow of true values all about within the widely ramified net of physical quantities. If not established, the mutual compatibility of measuring results turns out ill-defined.
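The transfer rule (1.1) lends itself to a toy numerical illustration: knowing the standard's value and the comparator's indication, the standard's value is handed over to the measurand. A minimal sketch; all numbers (the two true values, the comparator's scatter) are invented for illustration only:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# True values of two measuring objects of equal physical quality,
# e.g. two masses in grams (hypothetical numbers).
beta_0j = 1000.000   # the standard
beta_0k = 999.850    # the measurand

# True indication of the comparator according to (1.1).
x_0 = beta_0j - beta_0k

# A real comparator returns x_0 corrupted by measurement errors;
# a small random scatter stands in for them here.
reading = x_0 + random.gauss(0.0, 0.002)

# Traceability: the standard's value and the comparator reading
# together localize the measurand's value.
beta_k_estimate = beta_0j - reading

assert abs(beta_k_estimate - beta_0k) < 0.01
```

In real metrology the comparator's errors are exactly what the uncertainty analysis of the following chapters is about; here they merely perturb the hand-over.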

1.3 Measurement Errors

Let x denote a quantity to be measured and x0 its true value. Then the first differs from the second due to the joint action of a random error and a time-constant unknown systematic error, Fig. 1.2.

Fig. 1.2. A measuring device, operating stationarily, separates by itself random from systematic errors; x0 true value, μ center of scattering of the series of repeated measurements, x̄ arithmetic mean. Prior to the measurements the device somehow allocates a fixed, time-constant value f to the systematic error where −fs,x ≤ f ≤ fs,x

Random errors are due to uncontrollable, irregular, statistically borne microscopic processes arising during the taking of a series of repeated measurements. A common attempt to quantify their dispersion is to allot an empirical standard deviation to them. By contrast, unknown systematic errors are evoked during the configuration of the measuring device. There are alignments to be made, be they optical, mechanical, electrical or otherwise. For technical reasons, none of them will be thoroughly perfect. Rather, there will be smaller or larger residual deviations from the ideal positioning which the experimenter cannot clear up. Furthermore, the experimenter will have to account for environmental and boundary conditions being controllable only with finite accuracy, mechanical or electrical switching-on effects, inherently different approaches to measuring one and the same physical quantity and, if need be, varying theoretical approaches. According to their nature, we address unknown systematic errors as biases.

Here, the idea of non-drifting or stationarily operating measuring devices is of primary importance. Any such device separates by itself random and unknown systematic errors. When it comes to error propagation, this separation channels the flow of random and systematic errors into categorically differing branches. Obviously, an unknown systematic error shifts the bulk of measured data collectively. Unfortunately, neither the direction nor the amount of any such shift is known or ever knowable. Rather, the measurements are burdened by an unknown, non-eliminable quantity, or bias for short, staying hidden as long as the measuring device betrays no drift. Unknown systematic errors can only be assessed through intervals, their boundaries being taken from an analysis of the buildup, the functioning and the behavior of the measuring device as a whole. From there, experimenters are in a position to specify intervals limiting their possible ranges. As has been shown elsewhere, these intervals may always be taken to be symmetric to zero [28]. After all, unknown systematic errors are considered time-constant, unknown with respect to magnitude and sign, and confined by intervals symmetric to zero. Though biasing the true values of the measurands, they are not seen to affect the scattering of the random errors.
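The stationarily operating device described above can be mimicked numerically: one fixed bias f is allocated once, before the measurements, while every repeat picks up a fresh random error. A sketch under invented numbers (the true value, the error bound and the scatter are all illustrative assumptions):

```python
import random

random.seed(1)  # reproducible sketch

x_0 = 10.0       # preset true value (possible only in a simulation)
f_s = 0.05       # bound of the unknown systematic error, -f_s <= f <= f_s
sigma = 0.02     # scatter of the random errors

# The device allocates one fixed, time-constant bias before measuring ...
f = random.uniform(-f_s, f_s)

# ... whereas each of the n repeated measurements carries its own random error.
n = 1000
data = [x_0 + f + random.gauss(0.0, sigma) for _ in range(n)]

mean = sum(data) / n

# The arithmetic mean scatters about mu = x_0 + f, not about x_0:
# the bias shifts the bulk of the data collectively and stays hidden.
assert abs(mean - (x_0 + f)) < 0.01
```

However many repeats are taken, averaging cannot reveal f; only the interval −f_s ≤ f ≤ f_s, obtained from an analysis of the device itself, bounds it.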


1.4 Precision and Accuracy

The term precision quantifies the random scattering of measured data. A common measure for precision is the empirical standard deviation. In addition to random errors there are biases, being, by their very nature, time-constant non-statistical perturbations. In figuring out the potential difference between the estimate of the true value and the true value itself, random errors and biases enter collectively. Only the combined action of random errors and biases constitutes what is called the measuring accuracy. As the Gaussian error calculus dismisses unknown systematic errors,1 we shall have to break new ground.

1.5 Measurement Uncertainty

The measuring uncertainty expresses the measuring accuracy quantitatively: the lower the uncertainty, the higher the accuracy of the measurement. For formal reasons, uncertainties emerge as positive quantities which afterward have to be provided with a ± sign. A measuring uncertainty should not be practiced as an end in itself; rather, it should localize the position of the true value of the measurand by the smallest reliable interval, whatever size this actually may have. While high-level metrology strives for extremely low uncertainties, everyday metrology gets by with uncertainties of moderate size, depending on the particular kind of application. Remarkably enough, legal metrology addresses the properties of uncertainties as follows: neither the seller nor the buyer should be in a position to derive one-sided advantages from metrological quantifications. Depending on the case, uncertainties may or may not carry a physical dimension.

1.6 Measuring Result

Any measurement should issue an estimate of the true value of the measurand and a measuring uncertainty expressed as estimator ± uncertainty,

estimator − uncertainty ≤ true value ≤ estimator + uncertainty .  (1.2)

¹ Just to recall: Gauss identified and discussed unknown systematic errors, however excluded them from his formalism.

Fig. 1.3. The interval estimator ± uncertainty is required to localize the true value of the measurand

See Fig. 1.3. In general, estimators happen to be mean values, the pertaining symbols carrying a bar on top. Given x designates a measurand, its mean reads x̄. The common symbol of the measuring uncertainty is u. To establish a relationship to the measurand, an appropriate subscript should be added; the final account reads

x̄ ± u_x̄ .  (1.3)

Example

Let (10.31 ± 0.05) g denote the result of a weighing. Here, 10.31 g designates the estimate of the unknown true value of the mass and ±0.05 g the estimate's uncertainty.

Uncertainties depend on the error model drawn on. This aspect asks experimenters to verify the localization properties of their uncertainties. But as nature strictly hides true values, formal checks can only be done via numerical simulations. Clearly, only a data simulation puts us in a position to first preset a true value and to subsequently test whether the uncertainty issued by the (simulated) measuring process localizes it.
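A minimal sketch of such a simulation: all numbers below are invented for illustration, the generating model anticipates the error equation (2.2), and the simple worst-case uncertainty s + f_s anticipates (3.5).

```python
import random
import statistics

def simulate_measurement(x0, sigma, f, n, seed=1):
    """Simulate n repeated measurements x_l = x0 + eps_l + f:
    normal random errors plus one fixed, hidden bias f."""
    rng = random.Random(seed)
    return [x0 + rng.gauss(0.0, sigma) + f for _ in range(n)]

# Preset quantities -- knowable only inside a simulation
x0 = 10.31        # true value
sigma = 0.02      # scatter of the random errors
f_s = 0.03        # bound of the bias interval
f = 0.025         # actual bias, |f| <= f_s

data = simulate_measurement(x0, sigma, f, n=10)
mean = statistics.mean(data)
s = statistics.stdev(data)

# Worst-case style uncertainty: scatter term plus bias bound
u = s + f_s
localized = (mean - u) <= x0 <= (mean + u)
print(f"{mean:.4f} +/- {u:.4f}, localizes x0: {localized}")
```

Since x0 is preset, the final comparison is exactly the ex-post check that real data never permit.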

1.7 Rivaling Physical Approaches

Suppose any two rivaling physical theories are to be distinguished by measurement. Any decision as to which approach is more likely to be correct is necessarily confined to the limits of the measurement accuracy. As has been emphasized repeatedly, measurement uncertainties should express the smallest intervals localizing the true values with sufficient reliability. Uncertainties being either too narrow or too wide might spoil experimenters' efforts and obscure their conclusions.

2 Models and Approaches

Asking the error model to map the physical properties of the experimental set-up abolishes the Gaussian error calculus.

2.1 Gaussian Error Model

Following Gauss [27], experimenters are faced with random errors and time-constant unknown systematic errors. The former, being due to random perturbations, fluctuate from one measurement to the next; the latter, stemming from time-constant effects, remain unaltered. Gauss, as a matter of fact, dismissed unknown systematic errors arguing that it is to be left to the experimenter to get rid of them. Roughly two centuries later, due to enhanced measuring accuracies and international measurement comparisons, this very reasoning issued a dilemma: unknown systematic errors, proving non-eliminable, caused Gauss' error calculus to break down. To an observer, repeated measurements scatter randomly with respect to some center, say, μ. At the same time a potential unknown systematic error remains invisible. Denoting random errors by the symbol ε, the Gaussian error equation, in case of n repeated measurements, reads

x_l = μ + ε_l ;  l = 1, …, n .  (2.1)

Obviously, this equation reflects what the experimenter observes but does not necessarily cover the actual operating principle of the measuring device. Rather, experience urges us to discern the center of scattering, μ, and the true value, x_0, of the measurand. As a rule, these quantities are separated by an unknown systematic error, or bias. On that score, Gauss framed the conditions of the error calculus incompletely, and his formalism was doomed to failure: as far as empirical data embed unknown systematic errors or biases, the Gaussian error calculus should be considered obsolete, and the same applies to other statistically borne procedures burdened by unknown systematic errors, e.g. the analysis of variance.

2.2 Generalized Gaussian Approach

In contrast to the original Gaussian approach, the generalized Gaussian approach introduces unknown systematic errors right from the outset. Errors of this, say, second kind probably (re)entered the stage on account of a paper issued by Eisenhart [6] in the early 1950s, pondering their consequences in regard to quoting realistic measurement uncertainties. Systematic errors, Eisenhart argued, would displace the center of the scattering of the random errors relative to the true value of the measurand. Remarkably enough, the aftermath of Eisenhart's paper did not trigger off an impetus revolutionizing error calculus. Rather, the metrological community had to wait 30 more years for an initiative to evolve. The starting point happened to be set on February 20, 1978 when a seminar was held at the Physikalisch-Technische Bundesanstalt in Braunschweig entitled On the Statement of the Measuring Uncertainty¹ [14]. Regrettably enough, the seminar initiated a bifurcation of error calculus giving rise to the "Guide to the Expression of Uncertainty in Measurement" [15], GUM for short, and to what the author calls "Generalized Gaussian Error Calculus" which we shall exclusively address in the following. To this end, we confine our considerations to non-drifting measuring devices operating, as Eisenhart [6] vividly explained, in a state of statistical control. Such a working condition reflects what may also be termed a stationary random process. Akin to (2.1), we notionally decompose each of the measured values, say, x_l; l = 1, …, n, into three terms: the true value x_0, a specific l-dependent random error ε_l and a common, time-constant unknown systematic error f,

x_l = x_0 + ε_l + f ,  f = const. ;  l = 1, …, n .  (2.2)

This equation, depicting the standard situation of metrology, marks the starting point of the generalized Gaussian error calculus. It maps the physical behavior of the measuring device, suggesting the random dispersion of the data to be charged by one and the same time-constant unknown systematic error. Further, we consider the random errors to be normally distributed and the unknown systematic error to be limited by an interval symmetric to zero,

N(μ, σ²) ;  −f_s ≤ f ≤ f_s .  (2.3)

While μ designates the center, σ stands for the spread or dispersion of the density. Assuming f to be confined to a symmetric interval does in no way limit the scope of application of our discussions. As has been shown in [28], any non-symmetric interval may easily be transformed into a symmetric one. Due to formal reasons, we shall understand measured data as realizations of random variables.

¹ Über die Angabe der Messunsicherheit.

Consider, e.g., a sequence of repeated measurements

x_1, x_2, …, x_n. Here, the x_l are considered successive realizations of some random variable X,

X = {x_1, x_2, …, x_n} .

We shall assume the measured data to be normally distributed, i.e. to follow

p_X(x) dx = [1/(σ√(2π))] exp[ −(x − μ)²/(2σ²) ] dx .

Quite obviously, the expectation of the normally distributed random variable X itself reproduces the center of scattering,

μ = ∫_{−∞}^{∞} x p_X(x) dx .

Similarly, the expectation of the random variable (X − μ)² reproduces the theoretical variance,

σ² = ∫_{−∞}^{∞} (x − μ)² p_X(x) dx ,

which, remarkably enough, is in no way affected by the bias f. To shorten the notation of expected values, we shall resort to curly brackets [28], e.g. E{X} = μ and E{(X − μ)²} = σ². From (2.2) we draw

μ = x_0 + f ;  −f_s ≤ f ≤ f_s .  (2.4)

The distinction between the expectation μ and the true value x_0 discloses the breakdown of the Gaussian error calculus. The insertion of (2.4) into (2.2) leads us back to the Gaussian approach

x_l = μ + ε_l ;  l = 1, …, n

which we observe to hide the unknown systematic error f. Substituting x_l − μ for ε_l turns (2.2) into the identity

x_l = x_0 + (x_l − μ) + f ;  l = 1, …, n  (2.5)

the handling of which proves much more convenient than (2.2). Summing over l issues a formal decomposition of the arithmetic mean

x̄ = x_0 + (x̄ − μ) + f .  (2.6)

For convenience, this notation, deliberately detached from a sum sign,² will be used whenever appropriate. In the following we shall strictly adhere to a persistent distinction between theoretical parameters on the one hand and empirical estimators on the other. On this note, the counterpart of the theoretical variance is the empirical variance

s² = [1/(n−1)] Σ_{l=1}^{n} (x_l − x̄)²  (2.7)

where the bias f affects neither the theoretical nor the empirical variance. In case of more than one variable, the generalized Gaussian approach asks for equal numbers of repeated measurements for each of the variables considered. Otherwise, the so-called empirical covariances would get lost. This, in turn, would obstruct the interaction of the empirical variances and those, as yet provisionally addressed, empirical covariances as expressed via the multivariate density of the empirical moments of second order, which will be discussed in Sect. 4.2. To summarize, the generalized Gaussian error calculus

– considers unknown systematic errors time-constant quantities,
– formalizes random and systematic errors according to the physical behavior of the measuring device, i.e. strictly separated,
– asks, in the multivariate case, for equal numbers of repeated measurements for each of the variables implied and, finally,
– persistently distinguishes theoretical parameters from empirical estimators.

The distinction between theoretical parameters and empirical estimators is known to go without saying. Nevertheless, it might be advantageous to address an example: confidence ellipses may either be based on the theoretical variances and the theoretical covariance or, alternatively, on the empirical variances and the empirical covariance. Ellipses of the first kind are to be taken from the exponent of the two-dimensional normal probability density and are deployable if and only if the theoretical parameters are known, which in metrology they are not. Ellipses of the second kind, by contrast, are issued on the part of Hotelling's density and prove metrology-tailored as they get along with directly accessible empirical estimators. Figure 2.1 visualizes the error model's scenario. The left hand diagram illustrates the theoretical density, N(μ, σ²), as well as its empirical counterpart,

² More precisely: sigma sign.

Fig. 2.1. Error model for stationarily operating devices. On the left: the normal density the center of which is shifted by f from x_0 to μ. On the right: localization of x_0 by an interval μ − f_s … μ + f_s

p(x_i), taken out of N = 1000 simulated data sorted into boxes of width Δx_i according to

p(x_i) = ΔN_i / (N Δx_i) ;  i = 1, …, r .

The values p(x_i); i = 1, …, r are marked by crosses. The right hand diagram relates the quantities μ and f to the true value x_0. We ask the interval extending from μ − f_s to μ + f_s to localize the true value x_0. The diagrams recall Eisenhart's idea to separate random errors from unknown systematic errors.

2.3 Robust Testing Conditions

Data simulations prove indispensable when it comes to vetting a new proposal aiming at assessing measurement uncertainties. As has been emphasized repeatedly, nature hides the true values of the measurands. Consequently, experimenters are not in a position to verify their uncertainties via empirical data. The only way out is to simulate data emulating real experiments. As here the true values are known a priori, being preset, the localization properties of uncertainties are open to be scrutinized ex post. As practice suggests to assume random and systematic errors to range in like orders of magnitude, we shall tie down numerical simulations to this premise. Just for the purposes of data simulation, and only to this end, we notionally assign a rectangular distribution density to f,

p(f) = 1/(2 f_s) ;  −f_s ≤ f ≤ f_s .  (2.8)

The postulated density formally produces a theoretical variance

σ_f² = f_s²/3 .  (2.9)

The random number generator which, according to our assumptions, produces normally distributed errors is to be provided with a suitable theoretical standard deviation σ_ε. Setting arbitrarily

σ_f = σ_ε  (2.10)

issues an error interval

−f_s ≤ f ≤ f_s ,  f_s = √3 σ_ε .  (2.11)

Obviously, this appears a reasonable choice in order to keep random and systematic errors at comparable sizes. To emphasize, (2.8) only refers to data simulations. Experimenters can neither choose nor manipulate f. Rather, unable to judge the actual value of a particular unknown systematic error, experimenters have no other choice but to rely on intervals expressed in the form of −f_s ≤ f ≤ f_s, the limits ±f_s of which stem from the mode of operation of the measuring device [28].

Fig. 2.2. Data simulations refer to values of f taken either out of the left or the right shaded region, −f_s ≤ f ≤ −f_s/√3 or f_s/√3 ≤ f ≤ f_s

For the purposes of data simulation, we strive to demonstrate the robustness of our approach. Thus, we deliberately provoke critical situations letting f assume values out of either of the two ranges

−f_s ≤ f ≤ −f_s/√3  or  f_s/√3 ≤ f ≤ f_s ,  (2.12)

Fig. 2.2. This will, the author envisages, codify robust testing conditions. Also, robust testing conditions prove serviceable to scrutinize the localization properties of the ISO Guide. Here, the observation is: the more the unknown systematic errors tend to exhaust the limits of the intervals they are confined to, i.e. the nearer they are either to the lower or to the upper bounds, the sooner the Guide's uncertainties fail to localize the quested true values. This, if it actually happened to occur, would interrupt the flow of true values and thus the traceability of measures. Incidentally, excluding critical measuring conditions right from the outset would degrade the outstanding rank of metrology within the natural sciences. Ultimately, the Guide's so-called extension factor k_P turns out to be a weak, somewhat obscure concept and does not seem apt to belatedly stabilize measurement uncertainties.
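A sketch of such a robust draw: σ_ε below is invented, f_s is tied to it via (2.11), and the sampling deliberately restricts f to the two outer ranges of (2.12).

```python
import math
import random

def draw_robust_bias(f_s, rng):
    """Draw f from one of the two outer ranges of (2.12):
    -f_s <= f <= -f_s/sqrt(3)  or  f_s/sqrt(3) <= f <= f_s."""
    magnitude = rng.uniform(f_s / math.sqrt(3), f_s)
    return magnitude if rng.random() < 0.5 else -magnitude

rng = random.Random(42)
sigma_eps = 0.02                    # spread handed to the random number generator
f_s = math.sqrt(3) * sigma_eps      # keeps sigma_f = sigma_eps, cf. (2.10)/(2.11)

draws = [draw_robust_bias(f_s, rng) for _ in range(1000)]
# Every draw avoids the benign inner region around zero
print(min(abs(f) for f in draws), max(abs(f) for f in draws))
```

Each simulated data set would then be generated with one such f held fixed over all n repeated measurements, as (2.2) demands.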

2.4 Linearizations

In general, error calculus is based on linearizations. The lower the uncertainties of the input data, the more we are encouraged to consider linearizations admissible. Nevertheless, in individual cases verifications may stand to reason.

Under linearization the error model (2.2) produces a linear sum of three more or less bulky terms which, for convenience, will be ordered as follows: we let the first term express the respective true value, the second the influence of random errors and, finally, the third the influence of the unknown systematic errors. We shall meet this formal decomposition in any kind of error propagation, be this a functional relationship, a fitting procedure, or a system of linear equations. The generalized Gaussian error model (2.2) and the attempted linearizations maintain a persistent separation of random errors and unknown systematic errors. Should the magnitudes of the measurement errors rule linearizations out, coarser methods to assess measurement uncertainties would have to be deployed. This, necessarily, would produce coarser results.
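The three-term decomposition can be checked numerically on a toy function of one erroneous variable; the function φ, its derivative and all numbers below are invented for illustration.

```python
# Toy function of one erroneous variable (invented for illustration)
def phi(x):
    return x * x

def dphi(x):
    return 2.0 * x

x0 = 5.0          # true value, preset for the check
eps_bar = 0.01    # residual random error of the mean
f = -0.02         # unknown systematic error

exact = phi(x0 + eps_bar + f)
# Linearized decomposition: true-value term + random term + systematic term
linear = phi(x0) + dphi(x0) * eps_bar + dphi(x0) * f
print(exact, linear, abs(exact - linear))
```

The random and systematic contributions stay in strictly separated terms, which is precisely what the subsequent error propagation exploits.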

2.5 Quiddity of Least Squares

Submitting a linear system to a least squares adjustment involves three steps:

– to verify that the linear system is readable in terms of true values,
– to design a solution vector via orthogonal projection and, finally,
– to assign uncertainties to the components of the solution vector.

True System

The inconsistencies of the linear system should solely be due to measurement errors. Let

A β ≈ x  (2.13)

be an inconsistent, overdetermined, linear system to be submitted to least squares. Here, A denotes an (m × r), m > r, matrix of real coefficients a_ik, rank(A) = r, x an (m × 1) column vector of erroneous input data x_i and, finally, β an (r × 1) column vector of unknowns β_k,

A = (a_ik) ; i = 1, …, m ; k = 1, …, r ,  x = (x_1 x_2 … x_m)ᵀ ,  β = (β_1 β_2 … β_r)ᵀ .

The aim is to estimate the r unknowns β_k; k = 1, …, r. Clearly, the inconsistencies of (2.13) should relate to measurement errors and not to a physically

ill-posed problem, which the method of least squares could not cure. Thus, we ask the system to be readable in terms of true values. If, instead of the erroneous data x_i; i = 1, …, m, we notionally inserted the respective true values x_0,i; i = 1, …, m, the linear system would read

A β₀ = x₀ .  (2.14)

Here β₀ denotes the true solution vector with components β_0,k; k = 1, …, r and x₀ the vector of the true input data x_0,i; i = 1, …, m being hidden under measuring errors. Should (2.14) not happen to apply, the adjustment would attempt to treat a physically ill-defined problem. Remarkably enough, even then the method of least squares would yield a "solution vector," given rank(A) = r. So there is nothing else for it but to state that the method of least squares per se does not let experimenters off the hook. In the following, we shall always assume (2.14) to apply. Though there are more equations than unknowns, we may formally solve (2.14) for β₀,

β₀ = Bᵀ x₀ ;  B = A(AᵀA)⁻¹ .  (2.15)

The flow of true values or maintenance of traceability is illustrated in Fig. 2.3. Let us once more refer to (2.14) which formally equates the vector x₀ of true values with a linear combination of the column vectors of the matrix A. We observe that x₀ lies in the column space of the matrix A. This is to be confronted with (2.13). There, due to measuring errors, the vector x of observations lies outside the column space of the matrix A.

Orthogonal Projection

The method of least squares is just a trick: the vector x, thrown out of the column space of A, is simply projected back, thus rendering the inconsistent system (2.13) solvable by brute force. Whether or not this procedural method is wise will be discussed soon. For the moment, suffice it to state that the projection per se is in no way in a position to sense any properties of the measurement errors of x. The orthogonal projection is brought about by a projection operator

P = A(AᵀA)⁻¹ Aᵀ .  (2.16)

The operator formally produces a mathematically consistent system

A β̄ = P x  (2.17)

the least squares solution vector of which is

β̄ = Bᵀ x ;  B = A(AᵀA)⁻¹ .  (2.18)
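The projection and the solution vector (2.16)-(2.18) can be reproduced on a small invented system; the matrix helpers below are ad hoc and the straight-line data are made up for illustration.

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(M, N):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*N)] for row in M]

def inv2(M):
    # Inverse of a 2 x 2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Invented toy system: fit beta_1 + beta_2 * t, m = 4 observations, r = 2 unknowns
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
x = [[0.1], [1.9], [4.1], [5.9]]

At = transpose(A)
B = matmul(A, inv2(matmul(At, A)))   # B = A (A^T A)^{-1}
beta = matmul(transpose(B), x)       # solution vector, cf. (2.18)

# P = A (A^T A)^{-1} A^T projects x back into the column space of A,
# so the projected system A beta = P x is consistent, cf. (2.16)/(2.17)
P = matmul(B, At)
Px = matmul(P, x)
Ab = matmul(A, beta)
print(beta)
```

Note that the projection itself never inspects the errors of x; it merely forces consistency, exactly as described above.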

Fig. 2.3. Traceability in least squares

Remarkably enough, the vector β̄ is linear in the input data. In a sense, the input gets weighted by the coefficients of the matrix B. The orthogonal projection minimizes the sum of squared residuals [28]

Q̄ = (x − A β̄)ᵀ (x − A β̄) .  (2.19)

In case the input data x_1, x_2, …, x_m are free of systematic errors and the random errors are uncorrelated and come from the same parent distribution N(μ, σ²), the minimized sum of squared residuals provides an estimate

s² = Q̄/(m − r) ;  E{S²} = σ²  (2.20)

of the theoretical variance σ².

Assignment of Uncertainties

Let us ask: Is it really wise to rely on a solution vector basically conjured from a cross-grained linear system? Conceding the method of least squares is a trick, what eventually matters is the question whether or not the uncertainties of the components β̄_k; k = 1, …, r of the solution vector are apt to localize

the true values β_0,k; k = 1, …, r. Let the u_β̄k denote the uncertainties of the β̄_k. Then, the adjustment's result should comply with

β̄_k − u_β̄k ≤ β_0,k ≤ β̄_k + u_β̄k ;  k = 1, …, r .  (2.21)

Remarkably enough, the error model (2.2) guarantees this very outcome, hence proving Gauss' trick successful. However, the observation that Gauss confined himself to the error equation (2.1) while we, by contrast, rely on (2.2) renders things a bit more intricate. For the time being let us observe: the narrower the intervals (2.21), the more information is available. However, the intervals' lengths do not only depend on the properties of the input data but also on the structure of the design matrix A. Given the experimenter is in a position to choose differing matrices, he accordingly encounters differing uncertainties. Moreover, he may deploy weighting factors in order to boost the influence of those input data he considers more accurate and to reduce the influence of other input data he judges less accurate. After all, we may not expect uncertainty statements to be unique. In any case, however, they should localize the true values.

Weighting Procedures

The classical version of the method of least squares is based on the error model (2.1). In particular, as long as there are no biases, the Gauss–Markoff theorem is valid and provides weight factors. These weights are said to be optimal and produce what is called minimum variance estimators. As a matter of fact, systematic errors or biases abolish the Gauss–Markoff theorem. This is why weight factors are no longer readily available. They may, however, be introduced heuristically via trial and error. But would such a proceeding maintain the localization of the true values β_0,k; k = 1, …, r? To recall, weight factors cause two things:

– they numerically shift estimators and
– they shrink uncertainties.

Hence, it might happen that a set of weights shifts the estimators β̄_k; k = 1, …, r away from the true values β_0,k; k = 1, …, r while at the same time tightened uncertainties abolish the localization of the true values. Basically, in case of experimental data, we are not in a position to retrace such effects.
However, deploying simulated data, delocalizations appear well detectable as has been demonstrated in [28]. To emphasize, the experimenter can never and will never be in a position to reveal whether or not the intervals (2.21) incur delocalizations. Remarkably enough, the situation changes, given the adjustment refers to the error model (2.2) mapping the physical properties of stationary experiments. We then observe:

Within the assumptions of the error model referred to, the localizations of true values are maintained under any kind of weighting. In particular, weights may be chosen by trial and error in order to let uncertainties breathe in a way left to the judgement of the experimenter. In practice, the experimenter will choose a first set of weights, somehow drawn from the uncertainties of the input data. Inspecting the newly produced uncertainties, he may deploy a second set of more or less similar weights thus being in a position to control uncertainties either selectively or flatly at least within certain ranges. In a sense, this procedure may be seen to replace the Gauss–Markoff theorem.
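That weights numerically shift the estimators can be seen on a sketch of a diagonally weighted adjustment; the toy system and the ad-hoc matrix helpers below are invented, not taken from the book.

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(M, N):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*N)] for row in M]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Invented toy system: straight-line fit with m = 4, r = 2
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
x = [[0.1], [1.9], [4.1], [5.9]]

def weighted_ls(weights):
    """Solve (A^T W A) beta = A^T W x for a diagonal weight matrix W."""
    W = [[w if i == j else 0.0 for j in range(len(weights))]
         for i, w in enumerate(weights)]
    At = transpose(A)
    AtW = matmul(At, W)
    beta = matmul(inv2(matmul(AtW, A)), matmul(AtW, x))
    return [row[0] for row in beta]

b_equal = weighted_ls([1.0, 1.0, 1.0, 1.0])
b_skewed = weighted_ls([10.0, 1.0, 1.0, 1.0])  # boost the first observation
print(b_equal, b_skewed)
```

Both weight sets yield legitimate estimators; within the error model (2.2) the associated uncertainty intervals are expected to keep localizing the true values either way.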

2.6 Analysis of Variance

Considering the physical effects unknown systematic errors inflict on series of repeated measurements, the analysis of variance is doomed to run into a dead end. Clearly, the empirical variances to be fed into the analysis of

Fig. 2.4. Road map toward a new error calculus. The procedures printed in dark are to be revised, those in light become obsolete

variance are never less than zero. Unknown systematic errors, in contrast, by their very nature being either negative or positive, cannot be extracted from the input data and then somehow re-fed as positive variances. This is why empirical data burdened by unknown systematic errors may not be submitted to the analysis of variance. – Supposedly, this classical and sophisticated tool of data analysis happened to be conceived while unknown systematic errors lay hidden under a veil of oblivion.

2.7 Road Map

Biased empirical data, and as a rule we should expect empirical data to be biased, induce a breakdown of the majority of procedures of classical error calculus. While most of them appear to be reparable, at least two of them, the Gauss–Markoff theorem and the analysis of variance, are doomed to failure. Certainly, both aspects, the repair of those which prove reparable and the final breakdown of those which are not, are expected to provoke extensive discussions, Fig. 2.4.

Part II

Generalized Gaussian Error Calculus

3 The New Uncertainties

The generalized Gaussian error calculus expresses the influence of random errors and unknown systematic errors by means of confidence intervals and worst-case estimations, respectively. These are the basic building blocks the new approach relies on.

3.1 Gaussian Versus Generalized Gaussian Approach

The Gaussian error calculus addresses the expected values of unbiased estimators, in the simplest case the expectation μ of the arithmetic mean x̄. The Gaussian approach takes the expectations to be the true values of the measurands. By contrast, the generalized Gaussian approach considers biased estimators thus introducing a distinction between expectations and true values:

Gaussian focus             –  expected values of unbiased estimators
Generalized Gaussian focus –  true values of the measurands

3.2 Uncertainty and True Value

Let there be n repeated measurements of some measurand. The simplest least squares estimator of the true value x_0 is known to be the arithmetic mean

x̄ = (1/n) Σ_{l=1}^{n} x_l .  (3.1)

The presence of systematic errors causes the mean to be biased. To that effect, the uncertainty u_x̄ should be apt to localize the true value x_0 of the measurand,

x̄ − u_x̄ ≤ x_0 ≤ x̄ + u_x̄ .

In general, arithmetic means aiming at one and the same physical quantity and coming from different laboratories will not coincide. This, per se,

Fig. 3.1. Arithmetic means x̄_1, x̄_2, … aiming at the same physical quantity, say, x_0, and measured in different laboratories are not required to coincide. However, the intersection of their uncertainties should localize the true value of the measurand

is acceptable. However, we expect their uncertainties to overlap and their intersection to localize the true value of the measurand, Fig. 3.1.
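The overlap requirement reduces to a tiny interval computation; the two laboratory results below are invented for illustration.

```python
def interval(mean, u):
    return (mean - u, mean + u)

def intersection(a, b):
    """Overlap of two closed intervals, or None if they are disjoint."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# Hypothetical results of two laboratories measuring the same quantity
lab1 = interval(10.31, 0.05)
lab2 = interval(10.27, 0.04)

common = intersection(lab1, lab2)
print(common)   # the true value is expected to lie in this overlap
```

An empty intersection would signal that at least one of the quoted uncertainties fails to localize the common true value.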

3.3 Designing Uncertainties

From now on, whenever appropriate, we shall provide the systematic error f and the expected value μ with indices. For example, instead of (2.2) and (2.4) we shall write

x_l = x_0 + ε_l + f_x  and  μ_x = x_0 + f_x ;  −f_s,x ≤ f_x ≤ f_s,x ,

respectively. Given the measuring device works stationarily, the empirical variance

s_x² = [1/(n−1)] Σ_{l=1}^{n} (x_l − x̄)²  (3.2)

proves useful to quantify the scattering of random errors. Obviously, the difference

Fig. 3.2. The measured data do not scatter with respect to the true value x0 of the measurand, they rather scatter throughout a neighborhood of the parameter μx = x0 + fx , where fx denotes the actual value of the systematic error. As this value is unknown with respect to magnitude and sign, the uncertainty has to make allowance for x0 ≤ μx as well as for x0 ≥ μx

x_l − x̄ = ε_l − (1/n) Σ_{l=1}^{n} ε_l  (3.3)

eliminates the bias f_x. As the bias is limited to an interval symmetric to zero [28],

−f_s,x ≤ f_x ≤ f_s,x  (3.4)

the localization of the true value x_0 appears plausible if the uncertainty u_x̄ of the arithmetic mean x̄ is the sum of the empirical standard deviation s_x and the worst case estimation f_s,x of f_x following

u_x̄ = s_x + f_s,x ,  (3.5)

Fig. 3.2. The idea of confidence intervals suggests to localize the position of the unknown expectation

E{X̄} = μ_x  (3.6)

via

x̄ − (t_P(ν)/√n) s_x ≤ μ_x ≤ x̄ + (t_P(ν)/√n) s_x ,  (3.7)

where t_P(ν) denotes the Student factor of ν = n − 1 degrees of freedom.

Fig. 3.3. The intervals μ_x ± f_s,x and x̄ ± u_x̄ are meant to localize the true value x_0 of the measurand; no probability is given. The interval x̄ ± (t_P/√n) s_x is expected to localize the parameter μ_x with probability P

Indeed, we may expect this interval to localize the parameter μ_x with probability, or at confidence level, P. In general, we shall put P = 95%. Then, instead of (3.5), the uncertainty takes the form

u_x̄ = (t_P(ν)/√n) s_x + f_s,x .  (3.8)
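A minimal numerical sketch of (3.8): the series below is invented, the bias bound f_s,x is assumed given, and 2.262 is the two-sided 95 % Student factor for ν = 9.

```python
import math
import statistics

# Hypothetical series of n = 10 repeated measurements
data = [10.28, 10.33, 10.30, 10.35, 10.29, 10.31, 10.34, 10.30, 10.32, 10.33]
f_s = 0.03      # bias bound, taken from an analysis of the measuring device
t_P = 2.262     # Student factor, P = 95 %, nu = n - 1 = 9

n = len(data)
mean = statistics.mean(data)
s = statistics.stdev(data)

# Uncertainty according to (3.8): confidence term plus worst-case bias term
u = t_P / math.sqrt(n) * s + f_s
print(f"{mean:.3f} +/- {u:.3f}")
```

The two summands keep the random and the systematic contributions strictly separated, mirroring the error model (2.2).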

After all, the measuring result

x̄ ± u_x̄  or  x̄ − u_x̄ ≤ x_0 ≤ x̄ + u_x̄  (3.9)

is meant to localize the true value x_0, Fig. 3.3. Often two or more means aiming at the same physical quantity are averaged to produce, say, a grand mean. Here, the argument is that any metrological information at hand should be rendered useful. On the other hand, averaging a given set of arithmetic means might seem a contradiction in terms: why not take the mean the uncertainty of which is minimal and dismiss the others? In a sense, this question aims at the heart of metrology: the true values of the measurands are unknown. Selecting the mean with the smallest uncertainty might be wrong and possibly even momentous. Ultimately, it is our ignorance about true values which suggests pooling procedures, however different the uncertainties of the means might be. Pooling stands to reason at least as long as the respective uncertainties mutually overlap. On the other hand, pooling of means might lure us into trouble, if either

– one interval misses, or even several intervals miss, the common true value or
– the respective true values differ due to physical reasons.

In the latter case, the uncertainty of the grand mean would imply the static deviation between varying physical properties, say, inherently differing true values. Suchlike deviations, however, defy error calculus. Pooling appears advisable if and only if the true values of the means coincide. If metrology’s dominant question aims at the localization of true values, how can we be sure that the quotation (3.9) localizes the measurand’s true value x0 ?

3.4 Quasi Safeness

While the confidence interval (3.7) proposes a probability, the quotation of the overall uncertainty (3.9) does not. Nevertheless, we might at least wish to somehow qualify the localization of the true value x_0. In the first instance, random errors emerging from real experiments appear to lack the outermost tails of the theoretical normal density N(μ_x, σ_x²). This in turn means that we hardly have to expect (3.7) to miss the parameter μ_x. Furthermore, as we have brought to bear the unknown systematic error f_x via a worst case estimation, we may expect the interval (3.9) to localize the true value x_0 quasi-safely, given everything has gone well.

4 Treatment of Random Errors

The multidimensional normal model covers the behavior of a set of random variables related to different measuring devices. We shall, in particular, refer to the density of the empirical variances and empirical covariances, commonly summed up as empirical moments of second order.

4.1 Well-Defined Measuring Conditions

By approximation, we shall consider random errors normally distributed. This assumption suggests to deploy the multidimensional normal model. There is, however, a technical problem inasmuch as experimenters are used to deciding freely on the numbers of repeated measurements. By contrast, the multidimensional normal model premises equal numbers of repeated measurements. Commonly, measuring results are not expected to depend significantly on the numbers of repeated measurements. Consider just one variable: given n is not extremely small, whether there are n or n + 1 repeated measurements will hardly induce a different scenario.¹ For two variables, say, X and Y with n_x and n_y repeated measurements, we may either have n_x = n_y or n_x ≠ n_y. Reducing the case n_x ≠ n_y to n_x = n_y will barely issue a significantly different result, given n_x and n_y do not differ abnormally. To emphasize, only n_x = n_y clears the way for the multidimensional normal model and this is why we shall call n_x = n_y well-defined measuring conditions. Consider two series of repeated measurements

x_1, x_2, …, x_n

and

y_1, y_2, …, y_n  (4.1)

with arithmetic means

x̄ = (1/n) Σ_{l=1}^{n} x_l ,  ȳ = (1/n) Σ_{l=1}^{n} y_l  (4.2)

¹ On the other hand, for practical reasons, n cannot be made arbitrarily large as the experimental set-up might sometime start to drift.


and moments of second order

s_x^2 = \frac{1}{n-1}\sum_{l=1}^{n} (x_l - \bar{x})^2 , \qquad s_y^2 = \frac{1}{n-1}\sum_{l=1}^{n} (y_l - \bar{y})^2 ,    (4.3)

s_{xy} = \frac{1}{n-1}\sum_{l=1}^{n} (x_l - \bar{x})(y_l - \bar{y}) .

The empirical variances and the empirical covariance are to be well distinguished from their theoretical counterparts, the theoretical variances and the theoretical covariance. Denoting by S_x^2, S_y^2, S_{xy} random variables with realizations s_x^2, s_y^2, s_{xy}, we have

E\{S_x^2\} = \sigma_x^2 , \qquad E\{S_y^2\} = \sigma_y^2 , \qquad E\{S_{xy}\} = \sigma_{xy} .
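As a numerical companion, the empirical moments (4.3) can be sketched in a few lines of pure Python; the two series below are invented purely for illustration:

```python
# Empirical moments of second order, Eq. (4.3), for two series of equal
# length n ("well-defined measuring conditions").  Data are made up.

def moments(x, y):
    """Return x-bar, y-bar, s_x^2, s_y^2, s_xy."""
    n = len(x)
    assert n == len(y), "well-defined measuring conditions require n_x = n_y"
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xl - xbar) ** 2 for xl in x) / (n - 1)
    syy = sum((yl - ybar) ** 2 for yl in y) / (n - 1)
    sxy = sum((xl - xbar) * (yl - ybar) for xl, yl in zip(x, y)) / (n - 1)
    return xbar, ybar, sxx, syy, sxy

x = [10.1, 9.8, 10.0, 10.3, 9.9]
y = [5.2, 5.0, 5.1, 5.3, 5.0]
xbar, ybar, sxx, syy, sxy = moments(x, y)
# the empirical covariance always lies within -s_x s_y <= s_xy <= s_x s_y
assert abs(sxy) <= (sxx ** 0.5) * (syy ** 0.5) + 1e-12
```

The final assertion mirrors the interval −s_x s_y ≤ s_{xy} ≤ s_x s_y invoked below.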

4.2 Multidimensional Normal Model

The joint density of the random variables X̄, Ȳ, S_x^2, S_y^2, S_{xy} factorizes into a density p_1 of the arithmetic means and a density p_2 of the empirical moments of second order [26],

p(\bar{x}, \bar{y}, s_x^2, s_{xy}, s_y^2) = p_1(\bar{x}, \bar{y})\, p_2(s_x^2, s_{xy}, s_y^2) .    (4.4)

Remarkably enough, the density

p_2(s_x^2, s_{xy}, s_y^2) = \frac{(n-1)^{n-1}}{4\pi\,\Gamma(n-2)\,|\sigma|^{(n-1)/2}} \left[ s_x^2 s_y^2 - s_{xy}^2 \right]^{(n-4)/2} \exp\left\{ -\frac{n-1}{2|\sigma|}\, h(s_x^2, s_{xy}, s_y^2) \right\} ;    (4.5)

h(s_x^2, s_{xy}, s_y^2) = \sigma_y^2 s_x^2 - 2\sigma_{xy} s_{xy} + \sigma_x^2 s_y^2

applies to dependent as well as to independent random variables X̄, Ȳ. In case of independence, we have σ_{xy} = 0. Nevertheless, even then p_2(s_x^2, s_{xy}, s_y^2) does not factorize. Experimenters are used to answering the question of how to interpret an empirical covariance, given its expectation vanishes, by declaring it dispensable. On the other hand, this obviously conflicts with (4.5), which tells us that the empirical covariance S_{xy} is demonstrably an integral part of the density and that its neglect would mutilate the statistical coherence of the three random variables S_x^2, S_y^2, S_{xy}. In particular, there should exist a smooth


transition from correlated to weakly correlated and finally to uncorrelated variables, implying σ_{xy} → 0. Regardless of this, the empirical covariance s_{xy}, being sample dependent, would continue to take values from the interval −s_x s_y ≤ s_{xy} ≤ s_x s_y. Ultimately, this is the message conveyed by the density of the moments of second order. To make the point as clear as possible: given the variables considered are known to be independent, he who wants to exclude empirical covariances right from the outset is entitled to do so. As will be shown, however, the formal consequences thereof turn out unfavorable. On the other hand, accepting the properties of the density of the moments of second order asks experimenters to always consider equal numbers of repeated measurements, as unequal numbers would prevent the formalization of empirical covariances. Concatenated measurands should therefore always be subjected to the same number of repeated measurements.

4.3 Permutation of Repeated Measurements

After all, there is still a point of concern. Given the random variables are known to be independent, the sequences of the x_l and y_l appear permutable, each permutation producing a different empirical covariance. But which one is valid? Obviously, under the premise of independence, each permutation is equally admissible. As an example, consider the sequences

x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9, x_{10}
y_1, y_2, y_3, y_4, y_5, y_6, y_7, y_8, y_9, y_{10}

and

x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9, x_{10}
y_7, y_4, y_6, y_8, y_2, y_{10}, y_3, y_5, y_1, y_9 .

While permutations do not affect the empirical variances s_x^2, s_y^2, the empirical covariance s_{xy} will in general take different values. But this is quite in order and does in no way affect the consequences of using (4.5). Rather, the ambiguousness of the empirical covariance in case of independent random variables reflects the basic operating principle of the density per se. Considering empirical covariances will be shown to generalize Student's confidence intervals to multivariate error propagation. Both in the one-dimensional and the multidimensional case, the lengths of confidence intervals are sample dependent.
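A quick numerical check of this permutation argument; the data are simulated, and the shuffle stands in for an arbitrary permutation:

```python
import random

# Permuting one of two independent series leaves s_x^2 and s_y^2
# untouched, while s_xy moves around inside -s_x s_y <= s_xy <= s_x s_y.

def var(z):
    n = len(z)
    zbar = sum(z) / n
    return sum((zl - zbar) ** 2 for zl in z) / (n - 1)

def cov(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    return sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / (n - 1)

rng = random.Random(1)
x = [rng.gauss(10.0, 0.2) for _ in range(10)]
y = [rng.gauss(5.0, 0.1) for _ in range(10)]

y_perm = y[:]
rng.shuffle(y_perm)                               # one admissible permutation

assert abs(var(y_perm) - var(y)) < 1e-12          # variances: permutation-invariant
bound = (var(x) * var(y)) ** 0.5
assert abs(cov(x, y)) <= bound + 1e-12            # both covariances respect the bound
assert abs(cov(x, y_perm)) <= bound + 1e-12
```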

5 Treatment of Systematic Errors

Unknown systematic errors are proposed to be propagated via worst-case estimations. This proceeding expresses our state of knowledge and meets the concerns of metrology.

5.1 Repercussion of Biases

Averages are seen to reduce the influence of measuring errors. In the first place, averages shall abate the fluctuations of random errors. The question as to how averages affect time-constant systematic errors depends on the particular situation: simple arithmetic means leave them unaffected; within least squares estimators, however, they get reduced. Let us reconsider the arithmetic mean (3.1). Inserting the error equation (2.2), we recognize the impact of the bias f_x,

\bar{x} = x_0 + \frac{1}{n}\sum_{l=1}^{n} (x_l - \mu_x) + f_x .    (5.1)

The arithmetic mean x̄ is biased with respect to the true value x_0. This observation insistently asks us to shape the measuring uncertainty appropriately. In Sect. 6.3 we shall consider a weighted mean β̄ of some individual means x̄_i; i = 1, ..., m,

\bar{\beta} = \sum_{i=1}^{m} w_i \bar{x}_i , \qquad \sum_{i=1}^{m} w_i = 1 ,

the w_i > 0 denoting so-called weight factors. The systematic errors f_i of the individual means, confined to intervals

-f_{s,i} \le f_i \le f_{s,i} ; \quad i = 1, \ldots, m ,

issue an uncertainty component

f_{s,\bar{\beta}} = \sum_{i=1}^{m} w_i f_{s,i} .    (5.2)


Indeed, f_{s,β̄} will hardly overestimate the influence of systematic errors. Just to have an example, let us put m = 3 and w_1 = w_2 = w_3 = 1/3. This issues

f_{s,\bar{\beta}} = \left( f_{s,1} + f_{s,2} + f_{s,3} \right) / 3 .    (5.3)

Metrology’s Achilles’ heel anchors in the traceability of units, calibration chains, individual standards, physical constants and measures in a wider sense. Hence, we emphasize: The attempt to implement traceability should consider biases.

5.2 Uniqueness of Worst-Case Assessments

Worst-case estimations provide a natural basis for assessing unknown systematic errors or biases. However, the proceeding should be handled with care, as we strive for unique quotations. This is why we should never propagate worst-case estimations themselves, since this might produce ambiguous results. To assess the overall influence of a set of biases, the triangle inequality is the appropriate tool. However, prior to its becoming operative, those biases which occur more than once should be factored out. Not until then should the triangle inequality be applied; this observation strictly excludes the propagation of worst-case estimations. But this is the only precaution we have to take in order to ensure unique and meaningful measurement uncertainties.
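A minimal numerical sketch of this rule, with hypothetical quantities: suppose z = x − y, where x = a + b and y = a + c share the bias component f_a. Propagating the worst-case assessments of x and y themselves counts f_{s,a} twice; factoring out the shared bias first lets it cancel and yields the unique, tighter assessment.

```python
# Illustration of Sect. 5.2 with made-up bias bounds.
f_s_a, f_s_b, f_s_c = 0.3, 0.1, 0.2

# naive route: propagate the worst-case estimates of x and y themselves
f_s_x = f_s_a + f_s_b
f_s_y = f_s_a + f_s_c
naive = f_s_x + f_s_y            # = 2*f_s,a + f_s,b + f_s,c -- ambiguous, too large

# proper route: in z = x - y the common bias f_a cancels, f_z = f_b - f_c,
# and only then is the triangle inequality applied
proper = f_s_b + f_s_c

assert proper < naive
```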

Part III

Error Propagation

6 Means and Means of Means

We shall confine ourselves to unweighted and weighted arithmetic means as issued by the method of least squares.

6.1 Arithmetic Mean

Let us consider n independent repeated measurements x_1, x_2, ..., x_n with formal decompositions

x_l = x_0 + (x_l - \mu_x) + f_x , \qquad -f_{s,x} \le f_x \le f_{s,x} ; \quad l = 1, \ldots, n .

The measurements produce a mean

\bar{x} = \frac{1}{n}\sum_{l=1}^{n} x_l    (6.1)

and an empirical variance

s_x^2 = \frac{1}{n-1}\sum_{l=1}^{n} (x_l - \bar{x})^2 .    (6.2)

As shown in Appendix H, Student's t exists in two versions

T(n-1) = \frac{X - \mu_x}{S_x} \quad\text{and}\quad T(n-1) = \frac{\bar{X} - \mu_x}{S_x/\sqrt{n}} .

Correspondingly, the inequality −t_P ≤ T ≤ t_P issues confidence intervals

x_l - t_P(n-1)\, s_x \le \mu_x \le x_l + t_P(n-1)\, s_x ; \quad l = 1, \ldots, n    (6.3)

Fig. 6.1. Biased arithmetic mean x̄ with uncertainty u_x̄

and

\bar{x} - \frac{t_P(n-1)}{\sqrt{n}}\, s_x \le \mu_x \le \bar{x} + \frac{t_P(n-1)}{\sqrt{n}}\, s_x .    (6.4)

In equal measure, both inequalities localize the expected value μ_x, the first with respect to the n input data and the second with respect to the arithmetic mean. To localize the true value x_0 we may either refer to

x_l \pm u_x , \qquad u_x = t_P(n-1)\, s_x + f_{s,x} ; \quad l = 1, \ldots, n    (6.5)

or

\bar{x} \pm u_{\bar{x}} , \qquad u_{\bar{x}} = \frac{t_P(n-1)}{\sqrt{n}}\, s_x + f_{s,x} .    (6.6)

Data Simulation

Figure 6.1 depicts n = 10 repeated measurements with f_x = const. We observe the biased arithmetic mean, the confidence interval expressing the influence of random errors and localizing the expectation μ_x, and, finally, the uncertainty u_x̄ of the mean x̄ encompassing the random errors and the unknown systematic error, likewise superimposed on every x_l. To recall, the true value x_0 is only known due to data simulation; in reality, it remains hidden.
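The quotation (6.1), (6.2), (6.6) can be sketched directly; the data, the bias bound f_s and the Student factor t_P (here the two-sided 95% quantile for n − 1 = 9 degrees of freedom, taken from a table) are assumptions for illustration:

```python
# Arithmetic mean with overall uncertainty, Eqs. (6.1), (6.2), (6.6).

def mean_with_uncertainty(x, t_P, f_s):
    n = len(x)
    xbar = sum(x) / n                                   # (6.1)
    s2 = sum((xl - xbar) ** 2 for xl in x) / (n - 1)    # (6.2)
    u = t_P * s2 ** 0.5 / n ** 0.5 + f_s                # (6.6)
    return xbar, u

data = [9.98, 10.02, 10.01, 9.97, 10.03, 10.00, 9.99, 10.02, 9.98, 10.00]
xbar, u = mean_with_uncertainty(data, t_P=2.262, f_s=0.05)
# the result is quoted as xbar +/- u, meant to localize the true value x0
```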

6.2 Extravagated Averages

Means to be averaged should relate to identical true values. Disregarding this precondition means attempting to adjust an inconsistent linear system, which in fact violates the basic idea of the method of least squares. Measurement uncertainties are meant to express measurement errors; they are hardly apt to quantify static disparities between physically differing constants.

6.3 Mean of Means The pooling of a set of means is directed by the question of whether the individuals carry equal weight.


Unweighted Grand Mean

Given m arithmetic means and uncertainties

\bar{x}_i = \frac{1}{n}\sum_{l=1}^{n} x_{il} , \qquad u_{\bar{x}_i} = \frac{t_P(n-1)}{\sqrt{n}}\, s_i + f_{s,i} ; \quad i = 1, \ldots, m ,

we assume the x̄_i ± u_{x̄_i} to localize the common true value x_0 = x_{0,1} = ... = x_{0,m}. It stands to reason to ask for a grand mean β̄ and its uncertainty u_β̄. To this end we submit the inconsistent linear system

a\beta \approx \bar{x} , \qquad a = (1 \;\; 1 \;\; \ldots \;\; 1)^T , \quad \bar{x} = (\bar{x}_1 \;\; \bar{x}_2 \;\; \ldots \;\; \bar{x}_m)^T    (6.7)

to least squares, a denoting an auxiliary vector. Multiplying (6.7) on the left by a^T produces the grand mean

\bar{\beta} = \frac{1}{m}\sum_{i=1}^{m} \bar{x}_i .    (6.8)

The itemization

x_{11}, x_{12}, \ldots, x_{1n} \;\longrightarrow\; \bar{x}_1 = \frac{1}{n}\sum_{l=1}^{n} x_{1l}

x_{21}, x_{22}, \ldots, x_{2n} \;\longrightarrow\; \bar{x}_2 = \frac{1}{n}\sum_{l=1}^{n} x_{2l}    (6.9)

\cdots

x_{m1}, x_{m2}, \ldots, x_{mn} \;\longrightarrow\; \bar{x}_m = \frac{1}{n}\sum_{l=1}^{n} x_{ml}

suggests to introduce n grand means

\bar{\beta}_l = \frac{1}{m}\sum_{i=1}^{m} x_{il} ; \quad l = 1, \ldots, n    (6.10)

based on the respective l-th measurements. Of course, the β̄_l lead us back to

\bar{\beta} = \frac{1}{n}\sum_{l=1}^{n} \bar{\beta}_l .    (6.11)

The error equations

x_{il} = x_{0,i} + (x_{il} - \mu_i) + f_i , \qquad \bar{x}_i = x_{0,i} + (\bar{x}_i - \mu_i) + f_i

produce

\bar{\beta}_l = \frac{1}{m}\sum_{i=1}^{m} \left[ x_{0,i} + (x_{il} - \mu_i) + f_i \right] , \qquad \bar{\beta} = \frac{1}{m}\sum_{i=1}^{m} \left[ x_{0,i} + (\bar{x}_i - \mu_i) + f_i \right] .

Subtraction yields the n differences

\bar{\beta}_l - \bar{\beta} = \frac{1}{m}\sum_{i=1}^{m} (x_{il} - \bar{x}_i) ; \quad l = 1, \ldots, n .    (6.12)

From this we take the empirical variance of the β̄_l with respect to the grand mean β̄,

s_{\bar\beta}^2 = \frac{1}{n-1}\sum_{l=1}^{n} \left( \bar{\beta}_l - \bar{\beta} \right)^2
      = \frac{1}{m^2 (n-1)} \sum_{l=1}^{n} \left[ \sum_{i=1}^{m} (x_{il} - \bar{x}_i) \right] \left[ \sum_{j=1}^{m} (x_{jl} - \bar{x}_j) \right]
      = \frac{1}{m^2} \sum_{i,j}^{m} s_{ij}    (6.13)

in which the

s_{ij} = \frac{1}{n-1}\sum_{l=1}^{n} (x_{il} - \bar{x}_i)(x_{jl} - \bar{x}_j) ; \quad i, j = 1, \ldots, m ; \quad s_{ii} \equiv s_i^2

denote the empirical variances and covariances of the input data. After all, putting

s = \begin{pmatrix} s_{11} & s_{12} & \ldots & s_{1m} \\ s_{21} & s_{22} & \ldots & s_{2m} \\ \ldots & \ldots & \ldots & \ldots \\ s_{m1} & s_{m2} & \ldots & s_{mm} \end{pmatrix} , \qquad s_{ii} \equiv s_i^2 ,    (6.14)

relation (6.13) takes the form

s_{\bar\beta}^2 = \frac{1}{m^2}\, a^T s\, a .    (6.15)


The expected value

E\{\bar{\beta}\} = \mu_{\bar\beta} = x_0 + \frac{1}{m}\sum_{i=1}^{m} f_i    (6.16)

issues the propagated systematic error

f_{\bar\beta} = \frac{1}{m}\sum_{i=1}^{m} f_i    (6.17)

with worst-case estimation

f_{s,\bar\beta} = \frac{1}{m}\sum_{i=1}^{m} f_{s,i} .    (6.18)

This relation, averaging systematic errors, affirms that an increasing number m of means does not entail a surge of the propagated systematic error f_{s,β̄}, as discussed in (5.3). The confidence interval

\bar\beta - \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar\beta} \le \mu_{\bar\beta} \le \bar\beta + \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar\beta}    (6.19)

localizes the expected value μ_β̄ with probability P. Hence, the overall uncertainty of the grand mean β̄ turns out to be

u_{\bar\beta} = \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar\beta} + f_{s,\bar\beta} .    (6.20)
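The chain (6.8), (6.15), (6.18), (6.20) can be sketched in pure Python. The toy data, bias bounds and Student factor (here t for n − 1 = 2 degrees of freedom at 95%, taken from a table) are assumptions for illustration:

```python
# Unweighted grand mean with overall uncertainty.  x[i][l] is the l-th
# repeated measurement of the i-th series; f_s[i] are the bias bounds.

def grand_mean(x, f_s, t_P):
    m, n = len(x), len(x[0])
    xbar = [sum(row) / n for row in x]
    beta = sum(xbar) / m                                     # (6.8)
    # empirical variance-covariance matrix of the input data
    s = [[sum((x[i][l] - xbar[i]) * (x[j][l] - xbar[j])
              for l in range(n)) / (n - 1)
          for j in range(m)] for i in range(m)]
    s_beta2 = sum(map(sum, s)) / m ** 2                      # (6.15): a^T s a / m^2
    f_s_beta = sum(f_s) / m                                  # (6.18)
    u_beta = t_P * s_beta2 ** 0.5 / n ** 0.5 + f_s_beta      # (6.20)
    return beta, u_beta

x = [[1.00, 1.20, 0.80],
     [0.90, 1.10, 1.00]]
beta, u_beta = grand_mean(x, f_s=[0.10, 0.20], t_P=4.303)
```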

Weighted Grand Mean

To find a weighted grand mean, we left multiply (6.7) by a weight matrix. We choose such matrices to be diagonal,

G = \mathrm{diag}\left( g_1, g_2, \ldots, g_m \right) ; \quad g_i = 1/u_{\bar{x}_i} .    (6.21)

Submitting the weighted system

G a \beta \approx G \bar{x}    (6.22)

to least squares yields

\bar\beta = \sum_{i=1}^{m} w_i \bar{x}_i .    (6.23)

For convenience, we gather the weights

w_i = \frac{g_i^2}{\sum_{i=1}^{m} g_i^2} , \qquad \sum_{i=1}^{m} w_i = 1

within an auxiliary vector

w = (w_1 \;\; w_2 \;\; \ldots \;\; w_m)^T .

This suggests to formalize the empirical variance according to

s_{\bar\beta}^2 = w^T s\, w .    (6.24)

Finally, we consider the worst-case estimation of the propagated systematic error

f_{s,\bar\beta} = \sum_{i=1}^{m} w_i f_{s,i} .    (6.25)

Hence, the overall uncertainty of the weighted grand mean is given by

u_{\bar\beta} = \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar\beta} + f_{s,\bar\beta} , \qquad \bar\beta - u_{\bar\beta} \le x_0 \le \bar\beta + u_{\bar\beta} .    (6.26)

As is known, biases invalidate the Gauss–Markov theorem. This is why a unique weight matrix does not exist.¹ The stipulation g_i = 1/u_{x̄_i} may be considered a start and subsequently be varied by trial and error. As (6.26) suggests, the experimenter is entitled to this proceeding, as any weight matrix safeguards the localization of the true value x_0, be the final uncertainty favorable or not. Ultimately, it is up to the experimenter to decide which measurements to boost and which to belittle. The more weight he assigns to the mean with the smallest uncertainty, the more will the grand mean's uncertainty resemble this very uncertainty. Thus, the uncertainty of the grand mean is free to float between the smallest and the largest uncertainty of the set of means considered. Whatever choice is taken, the final uncertainty will localize the true value x_0, given this has applied to each of the results x̄_i ± u_{x̄_i}; i = 1, ..., m. To experience the weighting property of (6.25), let us add another example and put u_{x̄_1} = a, u_{x̄_2} = 2a, u_{x̄_3} = 3a so that w_1 = 36/49, w_2 = 9/49, w_3 = 4/49. Hence,

¹ Even if the classical error calculus were applicable, the ideal Gauss–Markov weight matrix would not exist, as the theorem refers to theoretical variances and theoretical covariances which, at least in metrology, prove inaccessible. Strictly speaking, the theorem was applicable at no time.

Fig. 6.2. Unweighted and weighted mean of means, β̄_u and β̄_w, respectively

f_{s,\bar\beta} = \left( 36 f_{s,1} + 9 f_{s,2} + 4 f_{s,3} \right) / 49 .

The example discloses that the weight factors w_i; i = 1, ..., m drive the propagated systematic error f_{s,β̄} into a reasonable order of magnitude, however large m may be.

Data Simulation

Figure 6.2 pools m = 4 arithmetic means under the premise of a given common true value x_0. We observe the unweighted and the weighted grand mean, β̄_u and β̄_w, respectively.
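The weights of (6.21) ff. are easy to verify in code; the scale a below is an arbitrary assumption, and the check reproduces the example w = (36/49, 9/49, 4/49):

```python
# Weight factors w_i = g_i^2 / sum g_i^2 with g_i = 1/u_i.

def weights(u):
    g2 = [1.0 / ui ** 2 for ui in u]
    total = sum(g2)
    return [gi / total for gi in g2]

a = 0.5                      # arbitrary scale; the weights do not depend on it
w = weights([a, 2 * a, 3 * a])
assert abs(w[0] - 36 / 49) < 1e-12
assert abs(w[1] - 9 / 49) < 1e-12
assert abs(w[2] - 4 / 49) < 1e-12
assert abs(sum(w) - 1.0) < 1e-12
```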

6.4 Individual Mean Versus Grand Mean

Let us elaborate the uncertainties u_{d̄_i} of the differences

\bar{d}_i = \bar{x}_i - \bar{\beta} ; \quad i = 1, \ldots, m    (6.27)

between any of the individual means x̄_i; i = 1, ..., m and the grand mean β̄. Denoting by d_{0,i} the true values of the d̄_i, we are looking for

\left| \bar{d}_i - d_{0,i} \right| \le u_{\bar{d}_i} ; \quad i = 1, \ldots, m .

As the d_{0,i} vanish, we should observe

\left| \bar{x}_i - \bar{\beta} \right| \le u_{\bar{d}_i} ; \quad i = 1, \ldots, m .    (6.28)

Inserting the error equations

\bar{x}_i = x_{0,i} + (\bar{x}_i - \mu_i) + f_i ; \quad i = 1, \ldots, m

into (6.27) produces

\bar{d}_i = x_{0,i} + (\bar{x}_i - \mu_i) + f_i - \sum_{j=1}^{m} w_j \left[ x_{0,j} + (\bar{x}_j - \mu_j) + f_j \right]    (6.29)

which firstly reissues

d_{0,i} = 0 ; \quad i = 1, \ldots, m .

Reverting to individual measurements x_{il} we find

\bar{d}_i = \frac{1}{n}\sum_{l=1}^{n} \left[ (x_{il} - \mu_i) - \sum_{j=1}^{m} w_j (x_{jl} - \mu_j) \right] + f_i - \sum_{j=1}^{m} w_j f_j    (6.30)


so that

\bar{d}_{il} = (x_{il} - \mu_i) - \sum_{j=1}^{m} w_j (x_{jl} - \mu_j) + f_i - \sum_{j=1}^{m} w_j f_j ; \quad l = 1, \ldots, n    (6.31)

and

\bar{d}_i = \frac{1}{n}\sum_{l=1}^{n} \bar{d}_{il} .    (6.32)

The differences

\bar{d}_{il} - \bar{d}_i = (x_{il} - \bar{x}_i) - \sum_{j=1}^{m} w_j (x_{jl} - \bar{x}_j)

suggest the empirical variance

s_{\bar{d}_i}^2 = \frac{1}{n-1}\sum_{l=1}^{n} \left( \bar{d}_{il} - \bar{d}_i \right)^2
       = \frac{1}{n-1}\sum_{l=1}^{n} \left[ (x_{il} - \bar{x}_i)^2 - 2\,(x_{il} - \bar{x}_i) \sum_{j=1}^{m} w_j (x_{jl} - \bar{x}_j) + \left( \sum_{j=1}^{m} w_j (x_{jl} - \bar{x}_j) \right)^2 \right]

which we turn into

s_{\bar{d}_i}^2 = s_i^2 - 2 \sum_{j=1}^{m} w_j s_{ij} + w^T s\, w .    (6.33)

Relation (6.29) issues the propagated systematic error

f_{\bar{d}_i} = (1 - w_i) f_i - \sum_{j=1,\, j \ne i}^{m} w_j f_j    (6.34)

with worst-case estimation

f_{s,\bar{d}_i} = (1 - w_i) f_{s,i} + \sum_{j=1,\, j \ne i}^{m} w_j f_{s,j} .    (6.35)

Here, we refer to Sect. 5.2, where we addressed the need to keep worst-case assessments unique, i.e. to factor out those biases which occur more than once. Adding ±w_i f_{s,i} on the right-hand side of (6.35) we arrive at

f_{s,\bar{d}_i} = (1 - 2 w_i) f_{s,i} + \sum_{j=1}^{m} w_j f_{s,j} .    (6.36)


Hence, the uncertainty of the difference (6.27) takes the form

u_{\bar{d}_i} = \frac{t_P(n-1)}{\sqrt{n}} \sqrt{ s_i^2 - 2 \sum_{j=1}^{m} w_j s_{ij} + w^T s\, w } + (1 - 2 w_i) f_{s,i} + \sum_{j=1}^{m} w_j f_{s,j} .    (6.37)

Data Simulation

To recall, the common true value x_0 is unknown. Data simulation, however, gives us the chance to preset the true value and thus to scrutinize the properties of uncertainties.

Let us consider a set of means x̄_i ± u_{x̄_i}; i = 1, ..., m and assume they localize a common true value x_0. We firstly compare the means x̄_i; i = 1, ..., m directly. The results are shown in the upper diagrams of Fig. 6.3. We then address (6.28), putting however

\bar{x}_i - u_{\bar{d}_i} \le \bar{\beta} \le \bar{x}_i + u_{\bar{d}_i} ; \quad i = 1, \ldots, m .

The lower diagrams confront the x̄_i ± u_{d̄_i}; i = 1, ..., m with β̄. Though the grand mean promises more stability than the individual means, the uncertainties u_{d̄_i} prove intricate and somewhat sensitive constructions, as they are interdependent. For an illustration, let us consider two further examples, Figs. 6.4 and 6.5.

Even if the results x̄_i ± u_{x̄_i}; i = 1, ..., m mutually overlap, it may happen that one mean or even more means abstain from localizing the true value x_0. As an example, we let x̄_i ± u_{x̄_i}; i = 2, 3 fail to localize x_0. Clearly, the experimenter himself could not notice that, and if he resorted to the grand mean in order to dispel pending doubts, he unfortunately might not detect the given irregularities, Fig. 6.4. Ultimately, trial and error leads us to Fig. 6.5, in which the same two means fail to localize x_0. But as further data have been altered, now two of the u_{d̄_i} point to irregularities. Remarkably enough, while x̄_i ± u_{x̄_i}; i = 2, 3 fail to localize x_0, u_{d̄_i}; i = 1, 5 suggest a review.

Fig. 6.3. Check of consistency of results x̄_i ± u_{x̄_i}; i = 1, ..., 5; x_0 common true value; β̄ grand mean, u_{d̄_i} uncertainty of x̄_i − β̄

Fig. 6.4. Though x̄_i ± u_{x̄_i}; i = 2, 3 fail to localize x_0, none of the uncertainties u_{d̄_i}; i = 1, ..., 5 beckons inconsistency

Fig. 6.5. Under additional modifications, the same means x̄_i ± u_{x̄_i}; i = 2, 3 continue to fail to localize x_0. Now, however, the uncertainties u_{d̄_1} and u_{d̄_5} beckon inconsistency

7 Functions of Erroneous Variables

To assess the influence of measurement errors propagated via functions, we confine ourselves to linearized series expansions. From there, we expect the functions to behave sufficiently smoothly throughout a neighborhood of the expansion point. Also, the measuring errors themselves should not exceed reasonable sizes.

7.1 One Variable

Consider a measuring result

\bar{x} \pm u_{\bar{x}} , \qquad u_{\bar{x}} = \frac{t_P(n-1)}{\sqrt{n}}\, s_x + f_{s,x}    (7.1)

and a function Φ(x). The uncertainty u_Φ̄ of Φ(x̄) is obviously given by

u_{\bar\Phi} = \left| \frac{d\Phi}{d\bar{x}} \right| u_{\bar{x}} .

Let us take this result to explore the toolkit of the new error calculus.

Series Expansions

In terms of the error equations

x_l = x_0 + (x_l - \mu_x) + f_x ; \quad l = 1, \ldots, n
\bar{x} = x_0 + (\bar{x} - \mu_x) + f_x ; \quad -f_{s,x} \le f_x \le f_{s,x}

we expand Φ(x) throughout a neighborhood of the true value x_0, firstly with respect to the n points x_1, x_2, ..., x_n,

\Phi(x_l) = \Phi(x_0) + \frac{d\Phi}{dx_0}(x_l - \mu_x) + \frac{d\Phi}{dx_0}\, f_x + \cdots ; \quad l = 1, 2, \ldots, n

and secondly with respect to the particular realization x̄ of the notionally defined random variable X̄,


\Phi(\bar{x}) = \Phi(x_0) + \frac{d\Phi}{dx_0}(\bar{x} - \mu_x) + \frac{d\Phi}{dx_0}\, f_x + \cdots .

By approximation, we assume that

– the expansions may be linearized,
– a derivative dΦ/dx̄ may be substituted for the derivative dΦ/dx_0,
– dΦ/dx̄ may be considered constant.

Just to simplify the notation, we additionally provide the truncated expansions with equality signs, which is, of course, incorrect. Thus we have

\Phi(x_l) = \Phi(x_0) + \frac{d\Phi}{d\bar{x}}(x_l - \mu_x) + \frac{d\Phi}{d\bar{x}}\, f_x ; \quad l = 1, 2, \ldots, n    (7.2)

and

\Phi(\bar{x}) = \Phi(x_0) + \frac{d\Phi}{d\bar{x}}(\bar{x} - \mu_x) + \frac{d\Phi}{d\bar{x}}\, f_x .    (7.3)

Subtracting (7.3) from (7.2) produces

\Phi(x_l) - \Phi(\bar{x}) = \frac{d\Phi}{d\bar{x}}(x_l - \bar{x}) .    (7.4)

Summing this over l we find

\Phi(\bar{x}) = \frac{1}{n}\sum_{l=1}^{n} \Phi(x_l) .    (7.5)

Random Errors

Relation (7.4) provides the empirical variance of the Φ(x_l) with respect to the mean Φ(x̄),

s_\Phi^2 = \frac{1}{n-1}\sum_{l=1}^{n} \left[ \Phi(x_l) - \Phi(\bar{x}) \right]^2 = \left( \frac{d\Phi}{d\bar{x}} \right)^2 s_x^2 .    (7.6)

Systematic Error

Relation (7.3) issues the propagated systematic error

f_\Phi = \frac{d\Phi}{d\bar{x}}\, f_x , \qquad -f_{s,x} \le f_x \le f_{s,x} .    (7.7)


Confidence Interval and Overall Uncertainty

Let us formally associate random variables X and S_Φ with the measured data x_1, x_2, ..., x_n and the empirical variance s_Φ. Then, the expected value of (7.3) takes the form

\mu_\Phi = E\left\{ \Phi(\bar{X}) \right\} = \Phi(x_0) + \frac{d\Phi}{d\bar{x}}\, f_x .    (7.8)

Given the x_l are independent and normally distributed, this also applies to the Φ(x_l); l = 1, ..., n as given in (7.2). Then, in view of (7.5), we are in a position to define Student's

T(n-1) = \frac{\Phi(\bar{X}) - \mu_\Phi}{S_\Phi/\sqrt{n}} .    (7.9)

Thus, we may expect the confidence interval

\Phi(\bar{X}) - \frac{t_P(n-1)}{\sqrt{n}}\, S_\Phi \le \mu_\Phi \le \Phi(\bar{X}) + \frac{t_P(n-1)}{\sqrt{n}}\, S_\Phi    (7.10)

to localize the unknown parameter μ_Φ with probability P. A realization of (7.10) is, of course,

\Phi(\bar{x}) - \frac{t_P(n-1)}{\sqrt{n}}\, s_\Phi \le \mu_\Phi \le \Phi(\bar{x}) + \frac{t_P(n-1)}{\sqrt{n}}\, s_\Phi .

Still, we have to assess (7.7). As we stick to worst-case estimations we find

f_{s,\Phi} = \left| \frac{d\Phi}{d\bar{x}} \right| f_{s,x} , \qquad -f_{s,\Phi} \le f_\Phi \le f_{s,\Phi} .    (7.11)

Hence, the overall uncertainty is given by

u_{\bar\Phi} = \left| \frac{d\Phi}{d\bar{x}} \right| \frac{t_P(n-1)}{\sqrt{n}}\, s_x + \left| \frac{d\Phi}{d\bar{x}} \right| f_{s,x} = \left| \frac{d\Phi}{d\bar{x}} \right| u_{\bar{x}} .    (7.12)

The quantity u_x̄ obviously matches (7.1). The final measuring result reads

\Phi(\bar{x}) \pm u_{\bar\Phi}    (7.13)

meaning

\Phi(\bar{x}) - u_{\bar\Phi} \le \Phi(x_0) \le \Phi(\bar{x}) + u_{\bar\Phi} .

The magnitude of the derivative |dΦ/dx̄| has modified the magnitude of the initial uncertainty u_x̄, though the derivative per se is not connected to the measurements.
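In code, the one-variable rule (7.12) is a one-liner. The function Φ(x) = x², for which dΦ/dx̄ = 2x̄, and the numbers are assumed for illustration:

```python
# One-variable propagation, Eq. (7.12): u_Phi = |dPhi/dx_bar| * u_x_bar.

def propagate_one(u_xbar, dPhi_dxbar):
    return abs(dPhi_dxbar) * u_xbar

xbar, u_xbar = 3.0, 0.02
u_Phi = propagate_one(u_xbar, 2.0 * xbar)   # Phi(x) = x**2, dPhi/dx_bar = 2*x_bar
```

Note that the derivative merely rescales u_x̄; it carries no statistical content of its own.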


7.2 Two Variables

Let us consider two series of measurements

x_1, x_2, \ldots, x_{n_1} \quad\text{and}\quad y_1, y_2, \ldots, y_{n_2}    (7.14)

and some function Φ(x, y). The Gaussian error calculus tacitly admits n_1 ≠ n_2. Though this choice cannot be forbidden, it excludes the exploitation of the multidimensional normal model as discussed in Sect. 4.1. From there, we shall assume n_1 = n_2 = n. Hence, the problem is to find u_Φ̄ from

x_1, x_2, \ldots, x_n \quad\text{and}\quad y_1, y_2, \ldots, y_n .    (7.15)

For simplicity, we might prefer the two series to be independent. Clearly, were they dependent, they would come in pairs

(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n) .    (7.16)

But fancy the dependence getting steadily weaker and weaker. Then, in the end, we expect the uncertainty due to the data (7.16) to coincide with the uncertainty due to the data (7.15). Let us elaborate the uncertainty in case of dependence and ponder subsequently about the transition to independence. The means and uncertainties of the input data are

\bar{x} = \frac{1}{n}\sum_{l=1}^{n} x_l , \qquad u_{\bar{x}} = \frac{t_P(n-1)}{\sqrt{n}}\, s_x + f_{s,x} ,
\bar{y} = \frac{1}{n}\sum_{l=1}^{n} y_l , \qquad u_{\bar{y}} = \frac{t_P(n-1)}{\sqrt{n}}\, s_y + f_{s,y} .    (7.17)

Series Expansions

With a view to the error equations

x_l = x_0 + (x_l - \mu_x) + f_x , \qquad y_l = y_0 + (y_l - \mu_y) + f_y ; \quad l = 1, \ldots, n
\bar{x} = x_0 + (\bar{x} - \mu_x) + f_x , \qquad \bar{y} = y_0 + (\bar{y} - \mu_y) + f_y

we consider the expansion of Φ(x, y) throughout a neighborhood of the point (x_0, y_0), firstly with reference to the n pairs (x_l, y_l); l = 1, ..., n,

\Phi(x_l, y_l) = \Phi(x_0, y_0) + \frac{\partial\Phi}{\partial x_0}(x_l - \mu_x) + \frac{\partial\Phi}{\partial y_0}(y_l - \mu_y) + \frac{\partial\Phi}{\partial x_0}\, f_x + \frac{\partial\Phi}{\partial y_0}\, f_y + \cdots ; \quad l = 1, \ldots, n


and secondly with respect to the sample means x̄, ȳ,

\Phi(\bar{x}, \bar{y}) = \Phi(x_0, y_0) + \frac{\partial\Phi}{\partial x_0}(\bar{x} - \mu_x) + \frac{\partial\Phi}{\partial y_0}(\bar{y} - \mu_y) + \frac{\partial\Phi}{\partial x_0}\, f_x + \frac{\partial\Phi}{\partial y_0}\, f_y + \cdots .

By approximation, we assume that

– the expansions may be linearized via truncation,
– the derivatives ∂Φ/∂x̄ and ∂Φ/∂ȳ may be substituted for the derivatives ∂Φ/∂x_0 and ∂Φ/∂y_0,
– ∂Φ/∂x̄ and ∂Φ/∂ȳ may be considered constant.

Just to simplify the notation, we provide the truncated expansions with equality signs,

\Phi(x_l, y_l) = \Phi(x_0, y_0) + \frac{\partial\Phi}{\partial\bar{x}}(x_l - \mu_x) + \frac{\partial\Phi}{\partial\bar{y}}(y_l - \mu_y) + \frac{\partial\Phi}{\partial\bar{x}}\, f_x + \frac{\partial\Phi}{\partial\bar{y}}\, f_y ; \quad l = 1, \ldots, n    (7.18)

and

\Phi(\bar{x}, \bar{y}) = \Phi(x_0, y_0) + \frac{\partial\Phi}{\partial\bar{x}}(\bar{x} - \mu_x) + \frac{\partial\Phi}{\partial\bar{y}}(\bar{y} - \mu_y) + \frac{\partial\Phi}{\partial\bar{x}}\, f_x + \frac{\partial\Phi}{\partial\bar{y}}\, f_y .    (7.19)

Subtracting (7.19) from (7.18) produces

\Phi(x_l, y_l) - \Phi(\bar{x}, \bar{y}) = \frac{\partial\Phi}{\partial\bar{x}}(x_l - \bar{x}) + \frac{\partial\Phi}{\partial\bar{y}}(y_l - \bar{y}) ; \quad l = 1, \ldots, n .    (7.20)

Summing over l we have

\bar\Phi = \frac{1}{n}\sum_{l=1}^{n} \Phi(x_l, y_l) .    (7.21)

Random Errors

Relation (7.20) produces the empirical variance of the Φ(x_l, y_l) with respect to the mean value Φ(x̄, ȳ),


s_\Phi^2 = \frac{1}{n-1}\sum_{l=1}^{n} \left[ \Phi(x_l, y_l) - \Phi(\bar{x}, \bar{y}) \right]^2 .

On elaborating we find

s_\Phi^2 = \frac{1}{n-1} \left[ \left( \frac{\partial\Phi}{\partial\bar{x}} \right)^2 \sum_{l=1}^{n} (x_l - \bar{x})^2 + 2\, \frac{\partial\Phi}{\partial\bar{x}} \frac{\partial\Phi}{\partial\bar{y}} \sum_{l=1}^{n} (x_l - \bar{x})(y_l - \bar{y}) + \left( \frac{\partial\Phi}{\partial\bar{y}} \right)^2 \sum_{l=1}^{n} (y_l - \bar{y})^2 \right]
     = \left( \frac{\partial\Phi}{\partial\bar{x}} \right)^2 s_x^2 + 2\, \frac{\partial\Phi}{\partial\bar{x}} \frac{\partial\Phi}{\partial\bar{y}}\, s_{xy} + \left( \frac{\partial\Phi}{\partial\bar{y}} \right)^2 s_y^2 .    (7.22)

Let us compress (7.22) by means of the empirical variance–covariance matrix

s = \begin{pmatrix} s_{xx} & s_{xy} \\ s_{yx} & s_{yy} \end{pmatrix} ; \qquad s_{xx} \equiv s_x^2 , \quad s_{xy} = s_{yx} , \quad s_{yy} \equiv s_y^2    (7.23)

and an auxiliary vector

b = \left( \frac{\partial\Phi}{\partial\bar{x}} \;\; \frac{\partial\Phi}{\partial\bar{y}} \right)^T .    (7.24)

This issues

s_\Phi^2 = b^T s\, b .    (7.25)

Systematic Errors

Again, we formally introduce random variables X and Y and assume their realizations to be the measured data x_l, y_l; l = 1, ..., n. Then, the expected value of (7.19),

\mu_\Phi = E\left\{ \Phi(\bar{X}, \bar{Y}) \right\} = \Phi(x_0, y_0) + \frac{\partial\Phi}{\partial\bar{x}}\, f_x + \frac{\partial\Phi}{\partial\bar{y}}\, f_y ,

issues the propagated systematic error

f_\Phi = \frac{\partial\Phi}{\partial\bar{x}}\, f_x + \frac{\partial\Phi}{\partial\bar{y}}\, f_y ; \qquad -f_{s,x} \le f_x \le f_{s,x} , \quad -f_{s,y} \le f_y \le f_{s,y} .    (7.26)

Confidence Interval and Overall Uncertainty

In error calculus, it is common to consider consecutive realizations (x_l, y_l), (x_{l+1}, y_{l+1}), ...; l = 1, 2, ..., n of the pair of random variables (X, Y) independent. Thus, the same applies to realizations Φ(x_l, y_l), Φ(x_{l+1}, y_{l+1}), ...; l = 1, 2, ..., n of Φ(X, Y).


However, with respect to the same l, x_l and y_l may or may not be dependent. As X and Y are jointly normally distributed, the sequence Φ(x_l, y_l); l = 1, 2, ..., n follows a one-dimensional normal distribution, be X and Y dependent or not [28, 30]. Furthermore, let us recall that s_Φ^2 is an unbiased estimator of

\sigma_\Phi^2 = E\left\{ \left( \Phi(X, Y) - \mu_\Phi \right)^2 \right\} = \left( \frac{\partial\Phi}{\partial\bar{x}} \right)^2 \sigma_x^2 + 2\, \frac{\partial\Phi}{\partial\bar{x}} \frac{\partial\Phi}{\partial\bar{y}}\, \sigma_{xy} + \left( \frac{\partial\Phi}{\partial\bar{y}} \right)^2 \sigma_y^2 .    (7.27)

In view of (7.21), and as successive realizations of Φ(X, Y) are considered independent, we are entitled to define a

\chi^2(n-1) = \frac{(n-1) S_\Phi^2}{\sigma_\Phi^2}    (7.28)

and hence Student's

T(n-1) = \frac{\Phi(\bar{X}, \bar{Y}) - \mu_\Phi}{S_\Phi/\sqrt{n}} .    (7.29)

After all,

\Phi(\bar{X}, \bar{Y}) - \frac{t_P(n-1)}{\sqrt{n}}\, S_\Phi \le \mu_\Phi \le \Phi(\bar{X}, \bar{Y}) + \frac{t_P(n-1)}{\sqrt{n}}\, S_\Phi    (7.30)

specifies a confidence interval of probability P with respect to the expectation μ_Φ of the arithmetic mean Φ(X̄, Ȳ).

Combining this with the worst-case estimation of (7.26),

f_{s,\Phi} = \left| \frac{\partial\Phi}{\partial\bar{x}} \right| f_{s,x} + \left| \frac{\partial\Phi}{\partial\bar{y}} \right| f_{s,y} ,    (7.31)

produces the final measuring result

\Phi(\bar{x}, \bar{y}) \pm u_{\bar\Phi} , \qquad u_{\bar\Phi} = \frac{t_P(n-1)}{\sqrt{n}} \sqrt{b^T s\, b} + \left| \frac{\partial\Phi}{\partial\bar{x}} \right| f_{s,x} + \left| \frac{\partial\Phi}{\partial\bar{y}} \right| f_{s,y} .    (7.32)

Remarkably enough, the formalism does not require us to distinguish between dependent and independent random variables. As has been discussed, in case of dependence the data appear in ordered pairs, while in case of independence pairing is of no relevance whatsoever, though the numerical value of the empirical covariance sxy does depend on the actual kind of pairing. This,


however, does in no way affect the reliability of the confidence interval (7.30). To recall, the lengths of confidence intervals are subject to fluctuations, as expressed by Student's distribution density, whether the empirical variance reflects the fluctuations of one, of two, or of more than two variables. In particular, in case of independence, any value of the empirical covariance out of −s_x s_y < s_{xy} < s_x s_y is likewise acceptable. Even purposeful modifications of the sequence of the data would not invalidate (7.30). Had we, however, mutilated the empirical variance (7.22) by dismissing the empirical covariance, our prospect of properly assessing the influence of random errors would have been rendered moot. After all, we expect the interval

\Phi(\bar{x}, \bar{y}) - u_{\bar\Phi} \le \Phi(x_0, y_0) \le \Phi(\bar{x}, \bar{y}) + u_{\bar\Phi}    (7.33)

to localize the true value Φ(x_0, y_0), and this result is due to our decision in favor of what we have called well-defined measuring conditions.

Finally, let us redirect our attention to the role of the partial derivatives. Though being in no way connected to the measurements themselves, they are nevertheless involved in the momentousness of the random and systematic errors of the respective input data. Assume, e.g., the uncertainties u_x̄ and u_ȳ to be equal but that |∂Φ/∂x̄| exceeds |∂Φ/∂ȳ|. Then, the contribution to the uncertainty u_Φ̄ due to the errors in x will exceed the contribution due to the errors in y.

Robust Assessment

Let us address (7.32) and consider the empirical variance

s_\Phi^2 = b^T s\, b = \left( \frac{\partial\Phi}{\partial\bar{x}} \right)^2 s_x^2 + 2\, \frac{\partial\Phi}{\partial\bar{x}} \frac{\partial\Phi}{\partial\bar{y}}\, s_{xy} + \left( \frac{\partial\Phi}{\partial\bar{y}} \right)^2 s_y^2 .

In case of independence, we may use the upper boundary of the interval −s_x s_y < s_{xy} < s_x s_y to write

\left( \frac{\partial\Phi}{\partial\bar{x}} \right)^2 s_x^2 + 2\, \frac{\partial\Phi}{\partial\bar{x}} \frac{\partial\Phi}{\partial\bar{y}}\, s_{xy} + \left( \frac{\partial\Phi}{\partial\bar{y}} \right)^2 s_y^2
\le \left( \frac{\partial\Phi}{\partial\bar{x}} \right)^2 s_x^2 + 2 \left| \frac{\partial\Phi}{\partial\bar{x}} \right| \left| \frac{\partial\Phi}{\partial\bar{y}} \right| s_x s_y + \left( \frac{\partial\Phi}{\partial\bar{y}} \right)^2 s_y^2
= \left( \left| \frac{\partial\Phi}{\partial\bar{x}} \right| s_x + \left| \frac{\partial\Phi}{\partial\bar{y}} \right| s_y \right)^2    (7.34)

which transfers (7.32) into

u_{\bar\Phi} = \frac{t_P(n-1)}{\sqrt{n}} \sqrt{ \left( \frac{\partial\Phi}{\partial\bar{x}} \right)^2 s_x^2 + 2\, \frac{\partial\Phi}{\partial\bar{x}} \frac{\partial\Phi}{\partial\bar{y}}\, s_{xy} + \left( \frac{\partial\Phi}{\partial\bar{y}} \right)^2 s_y^2 } + \left| \frac{\partial\Phi}{\partial\bar{x}} \right| f_{s,x} + \left| \frac{\partial\Phi}{\partial\bar{y}} \right| f_{s,y}
\le \left| \frac{\partial\Phi}{\partial\bar{x}} \right| \left( \frac{t_P(n-1)}{\sqrt{n}}\, s_x + f_{s,x} \right) + \left| \frac{\partial\Phi}{\partial\bar{y}} \right| \left( \frac{t_P(n-1)}{\sqrt{n}}\, s_y + f_{s,y} \right)
= \left| \frac{\partial\Phi}{\partial\bar{x}} \right| u_{\bar{x}} + \left| \frac{\partial\Phi}{\partial\bar{y}} \right| u_{\bar{y}} .

We consider

u_{\bar\Phi} \le \left| \frac{\partial\Phi}{\partial\bar{x}} \right| u_{\bar{x}} + \left| \frac{\partial\Phi}{\partial\bar{y}} \right| u_{\bar{y}}    (7.35)

a robust assessment of the uncertainty u_Φ̄. Clearly, in this context, a singular empirical variance–covariance matrix s does not matter.
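Both (7.32) and the robust bound (7.35) are easy to sketch. The function Φ(x, y) = x·y, whose gradient at (x̄, ȳ) is (ȳ, x̄), the data moments and the bias bounds are illustrative assumptions:

```python
# Two-variable propagation, Eqs. (7.32) and (7.35).

def u_two(b, s, f_s, t_P, n):
    """(7.32): t_P/sqrt(n) * sqrt(b^T s b) + sum |b_i| f_s,i."""
    bsb = sum(b[i] * s[i][j] * b[j] for i in range(2) for j in range(2))
    return t_P * bsb ** 0.5 / n ** 0.5 + sum(abs(bi) * fi for bi, fi in zip(b, f_s))

def u_robust(b, u_bars):
    """(7.35): covariance-free worst-case bound."""
    return sum(abs(bi) * ui for bi, ui in zip(b, u_bars))

n, t_P = 10, 2.262
xbar, ybar = 4.0, 2.5
sx2, sy2, sxy = 0.0004, 0.0001, 0.00005      # |sxy| <= sx*sy holds
s = [[sx2, sxy], [sxy, sy2]]
f_s = [0.01, 0.005]
b = [ybar, xbar]                             # gradient of Phi = x*y

u = u_two(b, s, f_s, t_P, n)
u_bx = t_P * sx2 ** 0.5 / n ** 0.5 + f_s[0]
u_by = t_P * sy2 ** 0.5 / n ** 0.5 + f_s[1]
assert u <= u_robust(b, [u_bx, u_by]) + 1e-12   # (7.35) bounds (7.32)
```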

7.3 More Than Two Variables

For more than two variables, little remains to be added. To set up the error propagation, we either consider m series of data, each comprising the same number n of repeated measurements,

x_{11}, x_{12}, \ldots, x_{1n} \quad (\text{device } 1)
x_{21}, x_{22}, \ldots, x_{2n} \quad (\text{device } 2)
\cdots
x_{m1}, x_{m2}, \ldots, x_{mn} \quad (\text{device } m)

or, alternatively, n data tuples

(x_{11}, x_{21}, \ldots, x_{m1}) , \; (x_{12}, x_{22}, \ldots, x_{m2}) , \; \ldots , \; (x_{1n}, x_{2n}, \ldots, x_{mn}) ,

each of which is m-dimensional. We consider the n data tuples independent. However, with respect to the same l, the x_{1l}, x_{2l}, ..., x_{ml} may or may not be dependent. Let us assess the uncertainty of some function Φ(x_1, x_2, ..., x_m).


As usual, we denote by x_{0,1}, x_{0,2}, ..., x_{0,m} the true values of the measurands. The error equations read

x_{il} = x_{0,i} + (x_{il} - \mu_i) + f_i ; \quad i = 1, \ldots, m ; \quad l = 1, \ldots, n
\bar{x}_i = x_{0,i} + (\bar{x}_i - \mu_i) + f_i ; \quad -f_{s,i} \le f_i \le f_{s,i} .

Series Expansion

There are series expansions of Φ(x_1, x_2, ..., x_m) throughout a neighborhood of the point x_{0,1}, x_{0,2}, ..., x_{0,m}, firstly, with respect to the tuples (x_{1l}, x_{2l}, ..., x_{ml}); l = 1, 2, ..., n, producing

\Phi(x_{1l}, x_{2l}, \ldots, x_{ml}) = \Phi(x_{0,1}, x_{0,2}, \ldots, x_{0,m}) + \sum_{i=1}^{m} \frac{\partial\Phi}{\partial x_{0,i}} (x_{il} - \mu_i) + \sum_{i=1}^{m} \frac{\partial\Phi}{\partial x_{0,i}}\, f_i + \cdots ; \quad l = 1, \ldots, n

and, secondly, with respect to the m sample means x̄_1, x̄_2, ..., x̄_m, yielding

\Phi(\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_m) = \Phi(x_{0,1}, x_{0,2}, \ldots, x_{0,m}) + \sum_{i=1}^{m} \frac{\partial\Phi}{\partial x_{0,i}} (\bar{x}_i - \mu_i) + \sum_{i=1}^{m} \frac{\partial\Phi}{\partial x_{0,i}}\, f_i + \cdots .

Substituting ∂Φ/∂x̄_i for ∂Φ/∂x_{0,i}, truncating and recasting the expansions issues

\Phi(x_{1l}, x_{2l}, \ldots, x_{ml}) = \Phi(x_{0,1}, x_{0,2}, \ldots, x_{0,m}) + \sum_{i=1}^{m} \frac{\partial\Phi}{\partial\bar{x}_i} (x_{il} - \mu_i) + \sum_{i=1}^{m} \frac{\partial\Phi}{\partial\bar{x}_i}\, f_i ; \quad l = 1, \ldots, n    (7.36)

and

\Phi(\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_m) = \Phi(x_{0,1}, x_{0,2}, \ldots, x_{0,m}) + \sum_{i=1}^{m} \frac{\partial\Phi}{\partial\bar{x}_i} (\bar{x}_i - \mu_i) + \sum_{i=1}^{m} \frac{\partial\Phi}{\partial\bar{x}_i}\, f_i .    (7.37)

Subtraction of (7.37) from (7.36) produces

\Phi(x_{1l}, x_{2l}, \ldots, x_{ml}) - \Phi(\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_m) = \sum_{i=1}^{m} \frac{\partial\Phi}{\partial\bar{x}_i} (x_{il} - \bar{x}_i) ; \quad l = 1, \ldots, n .    (7.38)


Summing over $l$ yields

$$
\bar{\Phi} = \frac{1}{n} \sum_{l=1}^{n} \Phi(x_{1l}, x_{2l}, \ldots, x_{ml}). \tag{7.39}
$$

Random Errors

From (7.38) we draw the empirical variance

$$
s_\Phi^2 = \sum_{i,j}^{m} \frac{\partial \Phi}{\partial \bar{x}_i} \frac{\partial \Phi}{\partial \bar{x}_j} s_{ij} = \boldsymbol{b}^{\mathrm{T}} \boldsymbol{s}\, \boldsymbol{b} \tag{7.40}
$$

in which we let

$$
\boldsymbol{s} = \begin{pmatrix}
s_{11} & s_{12} & \ldots & s_{1m} \\
s_{21} & s_{22} & \ldots & s_{2m} \\
\ldots & \ldots & \ldots & \ldots \\
s_{m1} & s_{m2} & \ldots & s_{mm}
\end{pmatrix}; \quad s_{ii} \equiv s_i^2, \quad s_{ij} = s_{ji} \tag{7.41}
$$

denote the empirical variance–covariance matrix of the input data with elements

$$
s_{ij} = \frac{1}{n-1} \sum_{l=1}^{n} (x_{il} - \bar{x}_i)(x_{jl} - \bar{x}_j); \quad i, j = 1, \ldots, m
$$

and

$$
\boldsymbol{b} = \left( \frac{\partial \Phi}{\partial \bar{x}_1} \;\; \frac{\partial \Phi}{\partial \bar{x}_2} \;\; \cdots \;\; \frac{\partial \Phi}{\partial \bar{x}_m} \right)^{\mathrm{T}}
$$

an auxiliary vector.
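The quadratic form $s_\Phi^2 = \boldsymbol{b}^{\mathrm{T}}\boldsymbol{s}\,\boldsymbol{b}$ of (7.40) is straightforward to evaluate numerically. The following sketch is not from the book; the function and variable names are ours, and the auxiliary vector $\boldsymbol{b}$ is approximated by central differences:

```python
import numpy as np

def propagated_variance(data, phi, eps=1e-6):
    """Empirical variance s_Phi^2 = b^T s b of Eq. (7.40).

    data: (m, n) array -- m series of n repeated measurements;
    phi:  function of the m sample means; its gradient (the auxiliary
          vector b) is approximated by central differences.
    """
    xbar = data.mean(axis=1)
    s = np.cov(data)                 # (m, m) matrix of Eq. (7.41), ddof = n - 1
    m = len(xbar)
    b = np.empty(m)
    for i in range(m):
        step = np.zeros(m)
        step[i] = eps
        b[i] = (phi(xbar + step) - phi(xbar - step)) / (2 * eps)
    return b @ s @ b

# hypothetical data with s_1^2 = 1, s_2^2 = 4, s_12 = 2; for the sum
# Phi = x1 + x2, Eq. (7.40) reduces to s_1^2 + 2 s_12 + s_2^2 = 9
data = np.array([[1.0, 2.0, 3.0],
                 [2.0, 4.0, 6.0]])
assert abs(propagated_variance(data, lambda v: v[0] + v[1]) - 9.0) < 1e-4
```

For the sum of two variables the quadratic form reduces to $s_x^2 + 2s_{xy} + s_y^2$, in agreement with (7.61) below; for the difference, the covariance term changes sign.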

Systematic Errors

Reading (7.37) in random variables $\bar{X}_1, \bar{X}_2, \ldots, \bar{X}_m$ yields the expected value

$$
\mu_\Phi = E\left\{ \Phi(\bar{X}_1, \bar{X}_2, \ldots, \bar{X}_m) \right\} = \Phi(x_{0,1}, x_{0,2}, \ldots, x_{0,m}) + \sum_{i=1}^{m} \frac{\partial \Phi}{\partial \bar{x}_i} f_i.
$$

Hence, the propagated systematic error is

$$
f_\Phi = \sum_{i=1}^{m} \frac{\partial \Phi}{\partial \bar{x}_i} f_i; \quad -f_{s,i} \le f_i \le f_{s,i}. \tag{7.42}
$$


Confidence Interval and Overall Uncertainty

The confidence interval

$$
\Phi(\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_m) - \frac{t_P(n-1)}{\sqrt{n}} s_\Phi \le \mu_\Phi \le \Phi(\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_m) + \frac{t_P(n-1)}{\sqrt{n}} s_\Phi \tag{7.43}
$$

is expected to localize the parameter $\mu_\Phi$ with probability $P$. Combining the worst-case estimation

$$
f_{s,\Phi} = \sum_{i=1}^{m} \left| \frac{\partial \Phi}{\partial \bar{x}_i} \right| f_{s,i} \tag{7.44}
$$

of the propagated systematic error $f_\Phi$ with the confidence interval (7.43) for the expected value $\mu_\Phi$ produces the final result

$$
\Phi(\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_m) \pm u_{\bar\Phi}, \quad u_{\bar\Phi} = \frac{t_P(n-1)}{\sqrt{n}} \sqrt{\boldsymbol{b}^{\mathrm{T}} \boldsymbol{s}\, \boldsymbol{b}} + \sum_{i=1}^{m} \left| \frac{\partial \Phi}{\partial \bar{x}_i} \right| f_{s,i} \tag{7.45}
$$

which is meant to localize the true value $\Phi(x_{0,1}, x_{0,2}, \ldots, x_{0,m})$.

Robust Assessment

Considering two variables at a time, the respective empirical covariance may be treated just as in the case of two variables. This leads to

$$
u_{\bar\Phi} \le \sum_{i=1}^{m} \left| \frac{\partial \Phi}{\partial \bar{x}_i} \right| u_{\bar{x}_i} \tag{7.46}
$$

which, obviously, renders the consideration of empirical covariances irrelevant.
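As a numerical illustration of (7.45) and (7.46) — a sketch under the assumption that the gradient of $\Phi$ at the sample means and the Student factor $t_P(n-1)$ are supplied by the caller; all names are ours, not the book's:

```python
import numpy as np

def overall_uncertainty(data, grad, f_s, t_P):
    """u_Phi of Eq. (7.45): Student part t_P/sqrt(n) * sqrt(b^T s b)
    plus worst-case systematic part sum_i |dPhi/dx_i| * f_{s,i}, Eq. (7.44).

    data: (m, n) repeated measurements; grad: gradient b of Phi at the means;
    f_s:  bounds f_{s,i} of the unknown systematic errors;
    t_P:  Student's t_P(n-1) quantile, supplied by the caller.
    """
    m, n = data.shape
    b = np.asarray(grad, dtype=float)
    s = np.cov(data)                              # empirical variance-covariance matrix
    random_part = t_P / np.sqrt(n) * np.sqrt(b @ s @ b)
    systematic_part = np.abs(b) @ np.asarray(f_s)
    return random_part + systematic_part

def robust_bound(grad, u_means):
    """Coarser estimate (7.46): sum_i |dPhi/dx_i| * u_{x_i}; covariances drop out."""
    return np.abs(np.asarray(grad)) @ np.asarray(u_means)

# hypothetical check: zero scatter leaves only the systematic part
data = np.array([[1.0, 1.0, 1.0],
                 [2.0, 2.0, 2.0]])
assert abs(overall_uncertainty(data, [1.0, -1.0], [0.1, 0.2], t_P=2.9) - 0.3) < 1e-12
```

In practice the quantile $t_P(n-1)$ would be taken from a table or a statistics library; it is deliberately left as a parameter here.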

Flow of True Values

Ultimately, let us visualize the flow of true values. As an example, we consider three true values, say, $\beta_{0,1}, \beta_{0,2}, \beta_{0,3}$, estimators $\bar\beta_1, \bar\beta_2, \bar\beta_3$, and uncertainties $u_{\bar\beta_1}, u_{\bar\beta_2}, u_{\bar\beta_3}$. Let there be some relationship defining a fourth true value $\beta_{0,4}$ via

$$
\phi(\beta_{0,1}, \beta_{0,2}, \beta_{0,3}) = \beta_{0,4}.
$$

Given that the intervals $\bar\beta_1 \pm u_{\bar\beta_1}$, $\bar\beta_2 \pm u_{\bar\beta_2}$, $\bar\beta_3 \pm u_{\bar\beta_3}$ localize the true values $\beta_{0,1}, \beta_{0,2}, \beta_{0,3}$, then, with respect to

$$
\phi(\bar\beta_1, \bar\beta_2, \bar\beta_3) = \bar\beta_4 \quad \text{or} \quad \phi(\bar\beta_1 \pm u_{\bar\beta_1}, \bar\beta_2 \pm u_{\bar\beta_2}, \bar\beta_3 \pm u_{\bar\beta_3}) \Rightarrow \bar\beta_4 \pm u_{\bar\beta_4},
$$

we should have

$$
\bar\beta_4 - u_{\bar\beta_4} \le \beta_{0,4} \le \bar\beta_4 + u_{\bar\beta_4}.
$$

This is what traceability asks for. The result is illustrated in Fig. 7.1.

Fig. 7.1. The merging of measuring results, say, $\bar\beta_1, \bar\beta_2, \bar\beta_3$ into $\bar\beta_4 = \phi(\bar\beta_1, \bar\beta_2, \bar\beta_3)$ should maintain traceability

Just to extend the idea, assume there is a separate experiment allowing $\beta_4$ to be measured directly. Then the uncertainty intervals of the preceding result and of the directly measured result,

$$
\bar\beta_4' - u_{\bar\beta_4'} \le \beta_{0,4} \le \bar\beta_4' + u_{\bar\beta_4'},
$$

should overlap.

7.4 Concatenated Functions

More often than not, the propagation of measuring errors pervades several stages. This means that functional relationships, burdened by measuring errors, enter other functions. To map the underlying physical processes, we have to steadily channel the flow of random and systematic errors. Let us consider a variable $y$ and a function $\Phi(x, y)$, both entering some function $\Gamma[y, \Phi(x, y)]$. As $y$ enters twice, we expect a covariance to pop up. A covariance due to the multiple appearance of the same variable may, however, be removed. As an example, let us consider

$$
\Phi(x, y) = \frac{x}{y} \quad \text{and} \quad \Gamma[y, \Phi(x, y)] = y \exp[\Phi(x, y)].
$$

Series Expansions

Recalling (7.20) and (7.26) we have

$$
\Phi(x_l, y_l) - \Phi(\bar{x}, \bar{y}) = \frac{\partial \Phi}{\partial \bar{x}} (x_l - \bar{x}) + \frac{\partial \Phi}{\partial \bar{y}} (y_l - \bar{y}); \quad l = 1, \ldots, n \tag{7.47}
$$

and

$$
f_\Phi = \frac{\partial \Phi}{\partial \bar{x}} f_x + \frac{\partial \Phi}{\partial \bar{y}} f_y; \quad -f_{s,x} \le f_x \le f_{s,x}, \quad -f_{s,y} \le f_y \le f_{s,y}, \tag{7.48}
$$

respectively. Let us put $\Phi_l \equiv \Phi(x_l, y_l)$; $l = 1, \ldots, n$. With this we have

$$
\Gamma(y_l, \Phi_l) - \Gamma(\bar{y}, \bar{\Phi}) = \frac{\partial \Gamma}{\partial \bar{y}} (y_l - \bar{y}) + \frac{\partial \Gamma}{\partial \bar{\Phi}} (\Phi_l - \bar{\Phi}); \quad l = 1, \ldots, n \tag{7.49}
$$

and

$$
f_\Gamma = \frac{\partial \Gamma}{\partial \bar{y}} f_y + \frac{\partial \Gamma}{\partial \bar{\Phi}} f_\Phi; \quad -f_{s,y} \le f_y \le f_{s,y}, \quad -f_{s,\Phi} \le f_\Phi \le f_{s,\Phi}. \tag{7.50}
$$

Relation (7.49) is apt to produce an empirical covariance $s_{y,\Phi}$ which, however, disappears upon inserting (7.47):

$$
\Gamma(y_l, \Phi_l) - \Gamma(\bar{y}, \bar{\Phi}) = \frac{\partial \Gamma}{\partial \bar{\Phi}} \frac{\partial \Phi}{\partial \bar{x}} (x_l - \bar{x}) + \left( \frac{\partial \Gamma}{\partial \bar{y}} + \frac{\partial \Gamma}{\partial \bar{\Phi}} \frac{\partial \Phi}{\partial \bar{y}} \right) (y_l - \bar{y}). \tag{7.51}
$$

In order to keep the propagation of systematic errors unique, we put

$$
f_\Gamma = \frac{\partial \Gamma}{\partial \bar{\Phi}} \frac{\partial \Phi}{\partial \bar{x}} f_x + \left( \frac{\partial \Gamma}{\partial \bar{y}} + \frac{\partial \Gamma}{\partial \bar{\Phi}} \frac{\partial \Phi}{\partial \bar{y}} \right) f_y. \tag{7.52}
$$

Introducing

$$
b_x = \frac{\partial \Gamma}{\partial \bar{\Phi}} \frac{\partial \Phi}{\partial \bar{x}}, \quad b_y = \frac{\partial \Gamma}{\partial \bar{y}} + \frac{\partial \Gamma}{\partial \bar{\Phi}} \frac{\partial \Phi}{\partial \bar{y}} \quad \text{and} \quad \boldsymbol{b} = \begin{pmatrix} b_x \\ b_y \end{pmatrix} \tag{7.53}
$$

we arrive at

$$
\Gamma(y_l, \Phi_l) - \Gamma(\bar{y}, \bar{\Phi}) = b_x (x_l - \bar{x}) + b_y (y_l - \bar{y}) \tag{7.54}
$$

and

$$
f_\Gamma = b_x f_x + b_y f_y. \tag{7.55}
$$

Overall Uncertainty

Obviously, the uncertainty $u_{\bar\Gamma}$ of $\Gamma(\bar{y}, \bar{\Phi})$ turns out as

$$
\Gamma(\bar{y}, \bar{\Phi}) \pm u_{\bar\Gamma}, \quad u_{\bar\Gamma} = \frac{t_P(n-1)}{\sqrt{n}} \sqrt{b_x^2 s_x^2 + 2 b_x b_y s_{xy} + b_y^2 s_y^2} + |b_x| f_{s,x} + |b_y| f_{s,y} \tag{7.56}
$$

which is meant to localize the true value $\Gamma(y_0, \Phi_0)$.

Remark

Let us subject

$$
f_\Gamma = \frac{\partial \Gamma}{\partial \bar{y}} f_y + \frac{\partial \Gamma}{\partial \bar{\Phi}} f_\Phi
$$

to a worst-case estimation

$$
f^*_{s,\Gamma} = \left| \frac{\partial \Gamma}{\partial \bar{y}} \right| f_{s,y} + \left| \frac{\partial \Gamma}{\partial \bar{\Phi}} \right| f_{s,\Phi}.
$$

Inserting the worst-case estimation of (7.48),

$$
f_{s,\Phi} = \left| \frac{\partial \Phi}{\partial \bar{x}} \right| f_{s,x} + \left| \frac{\partial \Phi}{\partial \bar{y}} \right| f_{s,y},
$$

produces

$$
f^*_{s,\Gamma} = \left| \frac{\partial \Gamma}{\partial \bar{\Phi}} \frac{\partial \Phi}{\partial \bar{x}} \right| f_{s,x} + \left( \left| \frac{\partial \Gamma}{\partial \bar{y}} \right| + \left| \frac{\partial \Gamma}{\partial \bar{\Phi}} \frac{\partial \Phi}{\partial \bar{y}} \right| \right) f_{s,y}
$$

which clearly differs from the worst-case estimation of (7.52),

$$
f_{s,\Gamma} = \left| \frac{\partial \Gamma}{\partial \bar{\Phi}} \frac{\partial \Phi}{\partial \bar{x}} \right| f_{s,x} + \left| \frac{\partial \Gamma}{\partial \bar{y}} + \frac{\partial \Gamma}{\partial \bar{\Phi}} \frac{\partial \Phi}{\partial \bar{y}} \right| f_{s,y}.
$$

To stress: $f^*_{s,\Gamma}$, expressing an inconsistent propagation of systematic errors, conflicts with the procedure stated in Sect. 5.2. By contrast, $f_{s,\Gamma}$ reflects a consistent procedure. In order to keep the propagation of unknown systematic errors unique, worst-case estimations should not be carried out until those errors which occur more than once have been factored out.
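The inequality $f_{s,\Gamma} \le f^*_{s,\Gamma}$, which follows from the triangle inequality, can be checked numerically for the example $\Phi = x/y$, $\Gamma = y\exp\Phi$ of this section. The sketch below uses our own naming and hypothetical numbers; it factors out the doubly occurring error $f_y$ before taking absolute values:

```python
import math

def worst_cases(xbar, ybar, fsx, fsy):
    """Consistent f_{s,Gamma} (from the factored form (7.52)) versus the
    premature f*_{s,Gamma} for Phi = x/y, Gamma = y*exp(Phi)."""
    phi = xbar / ybar
    dG_dy = math.exp(phi)              # partial Gamma / partial y
    dG_dphi = ybar * math.exp(phi)     # partial Gamma / partial Phi
    dphi_dx = 1.0 / ybar
    dphi_dy = -xbar / ybar**2
    bx = dG_dphi * dphi_dx             # Eq. (7.53)
    by = dG_dy + dG_dphi * dphi_dy
    consistent = abs(bx) * fsx + abs(by) * fsy
    fs_phi = abs(dphi_dx) * fsx + abs(dphi_dy) * fsy
    premature = abs(dG_dy) * fsy + abs(dG_dphi) * fs_phi
    return consistent, premature

fs, fs_star = worst_cases(2.0, 1.0, 0.01, 0.01)
assert fs <= fs_star   # premature worst-casing over-assesses the error bound
```

For these hypothetical inputs the cancellation inside $b_y$ is substantial; the premature bound is about twice the consistent one.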

7.5 Elementary Examples

Let us consider two measuring results

$$
\bar{x} \pm \left( \frac{t_P(n-1)}{\sqrt{n}} s_x + f_{s,x} \right), \quad \bar{y} \pm \left( \frac{t_P(n-1)}{\sqrt{n}} s_y + f_{s,y} \right) \tag{7.57}
$$

and look at their sum, difference, product, and quotient.

Sum

The error equations of the sum

$$
\Phi(x, y) = x + y \tag{7.58}
$$

read

$$
\Phi(x_l, y_l) = x_0 + y_0 + (x_l - \mu_x) + (y_l - \mu_y) + f_x + f_y; \quad l = 1, 2, \ldots, n
$$

$$
\Phi(\bar{x}, \bar{y}) = x_0 + y_0 + (\bar{x} - \mu_x) + (\bar{y} - \mu_y) + f_x + f_y.
$$

The difference

$$
\Phi(x_l, y_l) - \Phi(\bar{x}, \bar{y}) = (x_l - \bar{x}) + (y_l - \bar{y}); \quad l = 1, 2, \ldots, n \tag{7.59}
$$

yields the mean

$$
\bar{\Phi} = \frac{1}{n} \sum_{l=1}^{n} \Phi(x_l, y_l) \tag{7.60}
$$

and the empirical variance

$$
s_\Phi^2 = \frac{1}{n-1} \sum_{l=1}^{n} \left[ \Phi(x_l, y_l) - \Phi(\bar{x}, \bar{y}) \right]^2 = s_x^2 + 2 s_{xy} + s_y^2. \tag{7.61}
$$

The expected value of $\Phi(\bar{X}, \bar{Y})$,

$$
\mu_\Phi = E\{\Phi(\bar{X}, \bar{Y})\} = x_0 + y_0 + f_x + f_y, \tag{7.62}
$$

discloses the propagated systematic error

$$
f_\Phi = f_x + f_y, \quad -f_{s,x} \le f_x \le f_{s,x}, \quad -f_{s,y} \le f_y \le f_{s,y}. \tag{7.63}
$$

Hence, we expect the interval

$$
\Phi(\bar{x}, \bar{y}) \pm u_{\bar\Phi}, \quad u_{\bar\Phi} = \frac{t_P(n-1)}{\sqrt{n}} \sqrt{s_x^2 + 2 s_{xy} + s_y^2} + (f_{s,x} + f_{s,y}) \tag{7.64}
$$

to localize the true value $x_0 + y_0$. Perhaps we might wish to pursue the variation of the length of the confidence interval

$$
\Phi(\bar{x}, \bar{y}) - \frac{t_P(n-1)}{\sqrt{n}} s_\Phi \le \mu_\Phi \le \Phi(\bar{x}, \bar{y}) + \frac{t_P(n-1)}{\sqrt{n}} s_\Phi
$$

as the empirical covariance $s_{xy}$ runs from $-s_x s_y$ to $s_x s_y$, maintaining $-s_x s_y < s_{xy} < s_x s_y$. The effect on the empirical standard deviation

$$
s_\Phi = \sqrt{s_x^2 + 2 s_{xy} + s_y^2}
$$

is illustrated in Fig. 7.2.

Difference

To ask for the difference of two arithmetic means, say, $\bar{x}$ and $\bar{y}$, aiming at a common true value, say, $z_0$, is tantamount to scrutinizing their mutual compatibility, Fig. 7.3. Considering

$$
\Phi(x, y) = x - y \tag{7.65}
$$


Fig. 7.2. Sweeping the empirical covariance $s_{xy}$ through the interval $-s_x s_y \cdots s_x s_y$ alters the length of the confidence interval

the error equations

$$
\Phi(x_l, y_l) = x_0 - y_0 + (x_l - \mu_x) - (y_l - \mu_y) + f_x - f_y; \quad l = 1, 2, \ldots, n
$$

$$
\Phi(\bar{x}, \bar{y}) = x_0 - y_0 + (\bar{x} - \mu_x) - (\bar{y} - \mu_y) + f_x - f_y
$$

produce

$$
\Phi(x_l, y_l) - \Phi(\bar{x}, \bar{y}) = (x_l - \bar{x}) - (y_l - \bar{y}); \quad l = 1, 2, \ldots, n. \tag{7.66}
$$

Hence, we have

$$
\bar{\Phi} = \frac{1}{n} \sum_{l=1}^{n} \Phi(x_l, y_l) \tag{7.67}
$$

and

$$
s_\Phi^2 = s_x^2 - 2 s_{xy} + s_y^2. \tag{7.68}
$$

With respect to the expectation


Fig. 7.3. Difference between two arithmetic means aiming at a common true value z0

$$
\mu_\Phi = E\{\Phi(\bar{X}, \bar{Y})\} = (x_0 + f_x) - (y_0 + f_y) = \mu_x - \mu_y \tag{7.69}
$$

there is a Student's

$$
T(n-1) = \frac{(\bar{X} - \bar{Y}) - (\mu_x - \mu_y)}{S_\Phi / \sqrt{n}}. \tag{7.70}
$$

As (7.69) yields the propagated systematic error

$$
f_\Phi = f_x - f_y, \quad -f_{s,x} \le f_x \le f_{s,x}, \quad -f_{s,y} \le f_y \le f_{s,y}, \tag{7.71}
$$

the two means appear compatible given

$$
|\bar{x} - \bar{y}| \le \frac{t_P(n-1)}{\sqrt{n}} \sqrt{s_x^2 - 2 s_{xy} + s_y^2} + (f_{s,x} + f_{s,y}). \tag{7.72}
$$

Condoning a singular empirical variance–covariance matrix, a coarser estimation would be

$$
|\bar{x} - \bar{y}| \le u_{\bar{x}} + u_{\bar{y}}.
$$

Round robins, as discussed in Sect. 9.2, circulate a measuring standard throughout a group of laboratories. Given that the true value of the standard


remains constant during the circulation, each of the participants has the chance to assess the true value of one and the same physical quantity. But then, any two uncertainties should mutually overlap.
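The compatibility criterion (7.72) amounts to a one-line check. The sketch below is ours, not the book's; the numbers are hypothetical and the Student quantile $t_P(n-1)$ must be supplied by the caller:

```python
import math

def compatible(xbar, ybar, sx, sy, sxy, fsx, fsy, n, t_P):
    """Compatibility of two means, Eq. (7.72):
    |xbar - ybar| <= t_P/sqrt(n) * sqrt(sx^2 - 2 sxy + sy^2) + (fsx + fsy)."""
    bound = t_P / math.sqrt(n) * math.sqrt(sx**2 - 2 * sxy + sy**2) + (fsx + fsy)
    return abs(xbar - ybar) <= bound

# hypothetical round-robin pair: small discrepancy -> compatible,
# gross discrepancy -> incompatible
assert compatible(10.002, 10.005, 0.004, 0.005, 0.0, 0.002, 0.002, n=10, t_P=2.26)
assert not compatible(10.0, 10.5, 0.004, 0.005, 0.0, 0.002, 0.002, n=10, t_P=2.26)
```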

Product

Expanding and linearizing

$$
\Phi(x, y) = x\,y \tag{7.73}
$$

produces

$$
\Phi(x_l, y_l) = x_0 y_0 + \bar{y}(x_l - \mu_x) + \bar{x}(y_l - \mu_y) + \bar{y} f_x + \bar{x} f_y; \quad l = 1, 2, \ldots, n
$$

$$
\Phi(\bar{x}, \bar{y}) = x_0 y_0 + \bar{y}(\bar{x} - \mu_x) + \bar{x}(\bar{y} - \mu_y) + \bar{y} f_x + \bar{x} f_y.
$$

From

$$
\Phi(x_l, y_l) - \Phi(\bar{x}, \bar{y}) = \bar{y}(x_l - \bar{x}) + \bar{x}(y_l - \bar{y}); \quad l = 1, 2, \ldots, n \tag{7.74}
$$

we take

$$
\bar{\Phi} = \frac{1}{n} \sum_{l=1}^{n} \Phi(x_l, y_l) \tag{7.75}
$$

and

$$
s_\Phi^2 = \bar{y}^2 s_x^2 + 2 \bar{x} \bar{y} s_{xy} + \bar{x}^2 s_y^2. \tag{7.76}
$$

The expected value

$$
\mu_\Phi = E\{\Phi(\bar{X}, \bar{Y})\} = x_0 y_0 + \bar{y} f_x + \bar{x} f_y \tag{7.77}
$$

yields the propagated systematic error

$$
f_\Phi = \bar{y} f_x + \bar{x} f_y, \quad -f_{s,x} \le f_x \le f_{s,x}, \quad -f_{s,y} \le f_y \le f_{s,y}. \tag{7.78}
$$

Hence, the interval

$$
\Phi(\bar{x}, \bar{y}) \pm u_{\bar\Phi}, \quad u_{\bar\Phi} = \frac{t_P(n-1)}{\sqrt{n}} \sqrt{\bar{y}^2 s_x^2 + 2 \bar{x}\bar{y} s_{xy} + \bar{x}^2 s_y^2} + |\bar{y}| f_{s,x} + |\bar{x}| f_{s,y} \tag{7.79}
$$

is meant to localize the true value $\Phi(x_0, y_0) = x_0 y_0$.


Quotient

Finally, let us consider

$$
\Phi(x, y) = \frac{x}{y}. \tag{7.80}
$$

Expanding and linearizing we have¹

$$
\Phi(x_l, y_l) = \frac{x_0}{y_0} + \frac{1}{\bar{y}}(x_l - \mu_x) - \frac{\bar{x}}{\bar{y}^2}(y_l - \mu_y) + \frac{1}{\bar{y}} f_x - \frac{\bar{x}}{\bar{y}^2} f_y; \quad l = 1, 2, \ldots, n
$$

$$
\Phi(\bar{x}, \bar{y}) = \frac{x_0}{y_0} + \frac{1}{\bar{y}}(\bar{x} - \mu_x) - \frac{\bar{x}}{\bar{y}^2}(\bar{y} - \mu_y) + \frac{1}{\bar{y}} f_x - \frac{\bar{x}}{\bar{y}^2} f_y.
$$

Thus from

$$
\Phi(x_l, y_l) - \Phi(\bar{x}, \bar{y}) = \frac{1}{\bar{y}}(x_l - \bar{x}) - \frac{\bar{x}}{\bar{y}^2}(y_l - \bar{y})
$$

we draw

$$
\bar{\Phi} = \frac{1}{n} \sum_{l=1}^{n} \Phi(x_l, y_l) \tag{7.81}
$$

and

$$
s_\Phi^2 = \frac{1}{\bar{y}^2} s_x^2 - 2 \frac{\bar{x}}{\bar{y}^3} s_{xy} + \frac{\bar{x}^2}{\bar{y}^4} s_y^2. \tag{7.82}
$$

The expected value

$$
\mu_\Phi = E\{\Phi(\bar{X}, \bar{Y})\} = \frac{x_0}{y_0} + \frac{1}{\bar{y}} f_x - \frac{\bar{x}}{\bar{y}^2} f_y \tag{7.83}
$$

suggests a propagated systematic error

$$
f_\Phi = \frac{1}{\bar{y}} f_x - \frac{\bar{x}}{\bar{y}^2} f_y, \quad -f_{s,x} \le f_x \le f_{s,x}, \quad -f_{s,y} \le f_y \le f_{s,y}. \tag{7.84}
$$

After all, we consider the result

$$
\Phi(\bar{x}, \bar{y}) \pm u_{\bar\Phi}, \quad u_{\bar\Phi} = \frac{t_P(n-1)}{\sqrt{n}} \sqrt{\frac{1}{\bar{y}^2} s_x^2 - 2 \frac{\bar{x}}{\bar{y}^3} s_{xy} + \frac{\bar{x}^2}{\bar{y}^4} s_y^2} + \frac{1}{|\bar{y}|} f_{s,x} + \frac{|\bar{x}|}{\bar{y}^2} f_{s,y} \tag{7.85}
$$

to localize the true value $x_0/y_0$.

¹ If $X$ and $Y$ were standardized, normally distributed variables, the quotient $U = X/Y$ would follow a Cauchy density. Then, as is known, $E\{U\}$ and $E\{U^2\}$ would not exist. We, however, confine ourselves to truncated Taylor series. In this approximation, the $\Phi(x_l, y_l)$; $l = 1, \ldots, n$ may be considered normally distributed.
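The elementary propagations can be bundled into small helper functions; the sketch below uses our own naming and hypothetical inputs, with the Student factor supplied by the caller. Note that for $s_{xy} = 0$, (7.79) and (7.85) yield identical relative overall uncertainties for product and quotient:

```python
import math

def u_sum(sx, sy, sxy, fsx, fsy, n, t_P):
    """Eq. (7.64)."""
    return t_P / math.sqrt(n) * math.sqrt(sx**2 + 2 * sxy + sy**2) + (fsx + fsy)

def u_product(xbar, ybar, sx, sy, sxy, fsx, fsy, n, t_P):
    """Eq. (7.79)."""
    r = math.sqrt(ybar**2 * sx**2 + 2 * xbar * ybar * sxy + xbar**2 * sy**2)
    return t_P / math.sqrt(n) * r + abs(ybar) * fsx + abs(xbar) * fsy

def u_quotient(xbar, ybar, sx, sy, sxy, fsx, fsy, n, t_P):
    """Eq. (7.85)."""
    r = math.sqrt(sx**2 / ybar**2 - 2 * xbar * sxy / ybar**3
                  + xbar**2 * sy**2 / ybar**4)
    return t_P / math.sqrt(n) * r + fsx / abs(ybar) + abs(xbar) / ybar**2 * fsy

# hypothetical inputs; with s_xy = 0 the relative uncertainties of
# x*y (true value 8) and x/y (true value 0.5) coincide
args = dict(xbar=2.0, ybar=4.0, sx=0.02, sy=0.04, sxy=0.0,
            fsx=0.01, fsy=0.02, n=10, t_P=2.26)
assert abs(u_product(**args) / 8.0 - u_quotient(**args) / 0.5) < 1e-12
```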


7.6 Test of Hypothesis

Let us return to the difference between two arithmetic means directed at one and the same true value. Time and again, it has been emphasized that the classical error calculus is used to addressing different numbers of repeated measurements,

$$
\bar{x}_1 = \frac{1}{n_1} \sum_{l=1}^{n_1} x_{1l}, \quad \bar{x}_2 = \frac{1}{n_2} \sum_{l=1}^{n_2} x_{2l}; \quad n_1 \ne n_2. \tag{7.86}
$$

Coherently, the empirical covariance cannot be set up, simply because there are excess measurements of one of the variables. For the moment, let us disregard systematic errors and assume equal expectations and equal theoretical variances, $\mu_i = \mu$ and $\sigma_i^2 = \sigma^2$; $i = 1, 2$, respectively. Due to $n_1 \ne n_2$, the classical error calculus examines the difference between the two means through the theoretical variance of the random variable $\bar{Z} = \bar{X}_1 - \bar{X}_2$,

$$
\sigma_{\bar{z}}^2 = E\left\{ \left[ (\bar{X}_1 - \mu) - (\bar{X}_2 - \mu) \right]^2 \right\}
= E\{(\bar{X}_1 - \mu)^2\} - 2E\{(\bar{X}_1 - \mu)(\bar{X}_2 - \mu)\} + E\{(\bar{X}_2 - \mu)^2\}
= \frac{\sigma^2}{n_1} + \frac{\sigma^2}{n_2} = \frac{n_1 + n_2}{n_1 n_2} \sigma^2.
$$

Thus, the random variable $\bar{Z}$ follows an

$$
N\!\left(0, \; \sigma \sqrt{\frac{n_1 + n_2}{n_1 n_2}}\right)
$$

density. The associated standardized variable is

$$
\frac{\bar{X}_1 - \bar{X}_2}{\sigma \sqrt{\dfrac{n_1 + n_2}{n_1 n_2}}}.
$$

The sum of the two $\chi^2$-s,

$$
\chi^2(\nu^*) = \frac{(n_1 - 1)S_1^2 + (n_2 - 1)S_2^2}{\sigma^2}
$$

with degrees of freedom

$$
\nu^* = n_1 + n_2 - 2,
$$


suggests a Student's

$$
T(\nu^*) = \frac{\dfrac{\bar{X}_1 - \bar{X}_2}{\sigma \sqrt{(n_1 + n_2)/(n_1 n_2)}}}{\sqrt{\chi^2(\nu^*)/\nu^*}}.
$$

Written explicitly, we have

$$
T(\nu^*) = \frac{\bar{X}_1 - \bar{X}_2}{\sigma \sqrt{(n_1 + n_2)/(n_1 n_2)}} \Bigg/ \sqrt{\frac{(n_1 - 1)S_1^2 + (n_2 - 1)S_2^2}{\sigma^2 \, (n_1 + n_2 - 2)}}. \tag{7.87}
$$

Obviously, this statement lacks an empirical covariance. To compare (7.87) with (7.70), we put $n_1 = n_2 = n$:

$$
T(\nu^*) = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{S_1^2 + S_2^2}} \sqrt{n}. \tag{7.88}
$$

Rewriting (7.70) we have

$$
T(\nu) = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{S_1^2 - 2 S_{12} + S_2^2}} \sqrt{n}. \tag{7.89}
$$

While (7.88) refers to degrees of freedom $\nu^* = 2(n-1)$, relation (7.89) addresses degrees of freedom $\nu = n - 1$, where $t_P(2\nu) < t_P(\nu)$. Both quantities possess equal statistical properties. For an illustration, we consider the respective confidence intervals

$$
(\bar{X}_1 - \bar{X}_2) - t_P(2\nu) \sqrt{S_1^2 + S_2^2}\big/\sqrt{n} \;\le\; 0 \;\le\; (\bar{X}_1 - \bar{X}_2) + t_P(2\nu) \sqrt{S_1^2 + S_2^2}\big/\sqrt{n} \tag{7.90}
$$

and

$$
(\bar{X}_1 - \bar{X}_2) - t_P(\nu) \sqrt{S_1^2 - 2 S_{12} + S_2^2}\big/\sqrt{n} \;\le\; 0 \;\le\; (\bar{X}_1 - \bar{X}_2) + t_P(\nu) \sqrt{S_1^2 - 2 S_{12} + S_2^2}\big/\sqrt{n}. \tag{7.91}
$$

Interestingly enough, a dependence between the two means would promptly invalidate (7.90), but would not affect (7.91) in any way. To be more general, we now admit different expectations, $\mu_1 \ne \mu_2$, and different theoretical variances, $\sigma_1^2 \ne \sigma_2^2$, briefly touching the so-called Fisher–Behrens problem which, traditionally, refers to $n_1 \ne n_2$. To the knowledge of the author, no exact solution has ever been proposed.


However, assuming $n_1 = n_2$, (7.70) doubtlessly yields an exact

$$
T(\nu) = \frac{(\bar{X}_1 - \bar{X}_2) - (\mu_1 - \mu_2)}{\sqrt{S_1^2 - 2 S_{12} + S_2^2}} \sqrt{n}. \tag{7.92}
$$

Evidently, the demand for well-defined measuring conditions brings the empirical covariance $S_{12}$ into play, and this renders the Fisher–Behrens problem an ill-defined issue.²

Data Simulation

Under data simulation, (7.90) and (7.91) confirm equal statistical behavior, as illustrated in Figs. 7.4 and 7.5, respectively.
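A minimal Monte Carlo sketch of this comparison — our own code, not the book's simulation; the Student quantiles are hard-coded for $P = 95\,\%$, $n = 10$ ($t_{0.95}(18) \approx 2.101$, $t_{0.95}(9) \approx 2.262$) and would normally come from a statistics library — reproduces the equal statistical behavior of (7.90) and (7.91) for independent data:

```python
import numpy as np

def covers(n=10, trials=2000, t_2nu=2.101, t_nu=2.262, seed=0):
    """Fraction of trials in which the intervals (7.90) and (7.91)
    localize the true difference mu1 - mu2 = 0."""
    rng = np.random.default_rng(seed)
    hits_classical = hits_generalized = 0
    for _ in range(trials):
        x1 = rng.normal(0.0, 1.0, n)
        x2 = rng.normal(0.0, 1.0, n)          # independent, same expectation
        d = x1.mean() - x2.mean()
        s1, s2 = x1.var(ddof=1), x2.var(ddof=1)
        s12 = np.cov(x1, x2)[0, 1]
        if abs(d) <= t_2nu * np.sqrt(s1 + s2) / np.sqrt(n):           # (7.90)
            hits_classical += 1
        if abs(d) <= t_nu * np.sqrt(s1 - 2 * s12 + s2) / np.sqrt(n):  # (7.91)
            hits_generalized += 1
    return hits_classical / trials, hits_generalized / trials

p_classical, p_generalized = covers()
assert 0.90 < p_classical < 0.99 and 0.90 < p_generalized < 0.99
```

Both coverage fractions hover around the nominal 95 %; introducing a dependence between the two series would degrade only the first one.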

² From a metrological point of view, the mere notation $\mu_1 \ne \mu_2$ conceals the underlying physical situation. Firstly, desisting from unknown systematic errors and assuming $\mu_1 \ne \mu_2$ presents us with a contradiction in terms. Secondly, admitting unknown systematic errors, $\mu_1 \ne \mu_2$ renders the Fisher–Behrens issue meaningless, as we are now asked to probe compatibility via (7.72), and not via a purely statistical approach.


Fig. 7.4. Difference between two arithmetic means, classical approach, 100 simulations


Fig. 7.5. Difference between two arithmetic means, generalized approach, 100 simulations

8 Method of Least Squares

The method of least squares controls the flow of errors via the elements of the design matrix. Hence, assuming linear systems with differing design matrices aiming at the same set of unknowns, the adjustment’s uncertainties would differ even if the uncertainties of the input data were the same.

8.1 Empirical Variance–Covariance Matrix

As discussed in (2.18), the solution vector

$$
\bar{\boldsymbol\beta} = \boldsymbol{B}^{\mathrm{T}} \bar{\boldsymbol{x}} \tag{8.1}
$$

is given by the matrix

$$
\boldsymbol{B} = \boldsymbol{A}(\boldsymbol{A}^{\mathrm{T}} \boldsymbol{A})^{-1} = (b_{ik}); \quad i = 1, \ldots, m; \; k = 1, \ldots, r \tag{8.2}
$$

and the vector of input data

$$
\bar{\boldsymbol{x}} = (\bar{x}_1 \;\; \bar{x}_2 \;\; \ldots \;\; \bar{x}_m)^{\mathrm{T}}, \quad \bar{x}_i = \frac{1}{n} \sum_{l=1}^{n} x_{il}; \quad i = 1, \ldots, m. \tag{8.3}
$$

Inserting the formal decomposition

$$
\bar{\boldsymbol{x}} = \boldsymbol{x}_0 + (\bar{\boldsymbol{x}} - \boldsymbol{\mu}) + \boldsymbol{f} \tag{8.4}
$$

resolves $\bar{\boldsymbol\beta}$ according to

$$
\bar{\boldsymbol\beta} = \boldsymbol\beta_0 + \boldsymbol{B}^{\mathrm{T}} (\bar{\boldsymbol{x}} - \boldsymbol{\mu}) + \boldsymbol{B}^{\mathrm{T}} \boldsymbol{f}. \tag{8.5}
$$

The elements $b_{ik}$ of the matrix $\boldsymbol{B}$ produce the components

$$
\bar\beta_k = \sum_{i=1}^{m} b_{ik} \bar{x}_i; \quad k = 1, \ldots, r. \tag{8.6}
$$

As each of the $\bar\beta_k$ relies on the same set of input data, the $\bar\beta_k$ are necessarily dependent. The means (8.6) yield


$$
\bar\beta_k = \sum_{i=1}^{m} b_{ik} \left( \frac{1}{n} \sum_{l=1}^{n} x_{il} \right) = \frac{1}{n} \sum_{l=1}^{n} \left( \sum_{i=1}^{m} b_{ik} x_{il} \right) = \frac{1}{n} \sum_{l=1}^{n} \bar\beta_{kl}; \quad k = 1, \ldots, r. \tag{8.7}
$$

We consider the sums

$$
\bar\beta_{kl} = \sum_{i=1}^{m} b_{ik} x_{il}; \quad k = 1, \ldots, r \tag{8.8}
$$

to be least squares estimators, the input data of which are the $m$ individual measurements $x_{1l}, x_{2l}, \ldots, x_{ml}$. There, each of the $m$ means $\bar{x}_i$ contributes just the $l$-th measured value. To clarify the approach, we draw upon an illustration:

$$
\begin{array}{ccccccc}
x_{11} & x_{12} & \ldots & x_{1l} & \ldots & x_{1n} & \Rightarrow \bar{x}_1 \\
x_{21} & x_{22} & \ldots & x_{2l} & \ldots & x_{2n} & \Rightarrow \bar{x}_2 \\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\
x_{m1} & x_{m2} & \ldots & x_{ml} & \ldots & x_{mn} & \Rightarrow \bar{x}_m \\
\Downarrow & \Downarrow & & \Downarrow & & \Downarrow & \\
\bar\beta_{k1} & \bar\beta_{k2} & \ldots & \bar\beta_{kl} & \ldots & \bar\beta_{kn} & \Rightarrow \bar\beta_k
\end{array}
$$

According to (8.7), the $\bar\beta_k$ are the arithmetic means of the $n$ estimators $\bar\beta_{kl}$. The differences $\bar\beta_{kl} - \bar\beta_k$ establish the empirical variances and covariances

$$
s_{\bar\beta_k \bar\beta_{k'}} = \frac{1}{n-1} \sum_{l=1}^{n} \left( \bar\beta_{kl} - \bar\beta_k \right) \left( \bar\beta_{k'l} - \bar\beta_{k'} \right); \quad k, k' = 1, \ldots, r. \tag{8.9}
$$

For convenience, we let them define the empirical variance–covariance matrix

$$
\boldsymbol{s}_{\bar\beta} = \begin{pmatrix}
s_{\bar\beta_1\bar\beta_1} & s_{\bar\beta_1\bar\beta_2} & \ldots & s_{\bar\beta_1\bar\beta_r} \\
s_{\bar\beta_2\bar\beta_1} & s_{\bar\beta_2\bar\beta_2} & \ldots & s_{\bar\beta_2\bar\beta_r} \\
\ldots & \ldots & \ldots & \ldots \\
s_{\bar\beta_r\bar\beta_1} & s_{\bar\beta_r\bar\beta_2} & \ldots & s_{\bar\beta_r\bar\beta_r}
\end{pmatrix}, \quad s_{\bar\beta_k\bar\beta_k} \equiv s_{\bar\beta_k}^2. \tag{8.10}
$$

Inserting the differences

$$
\bar\beta_{kl} - \bar\beta_k = \sum_{i=1}^{m} b_{ik} (x_{il} - \bar{x}_i); \quad l = 1, \ldots, n; \quad k = 1, \ldots, r
$$

into (8.9) we obtain

$$
s_{\bar\beta_k \bar\beta_{k'}} = \frac{1}{n-1} \sum_{l=1}^{n} \left[ \sum_{i=1}^{m} b_{ik} (x_{il} - \bar{x}_i) \right] \left[ \sum_{j=1}^{m} b_{jk'} (x_{jl} - \bar{x}_j) \right] = \sum_{i,j}^{m} b_{ik} b_{jk'} s_{ij} \tag{8.11}
$$

in which the quantities

$$
s_{ij} = \frac{1}{n-1} \sum_{l=1}^{n} (x_{il} - \bar{x}_i)(x_{jl} - \bar{x}_j); \quad i, j = 1, \ldots, m \tag{8.12}
$$

designate the elements of the empirical variance–covariance matrix

$$
\boldsymbol{s} = \begin{pmatrix}
s_{11} & s_{12} & \ldots & s_{1m} \\
s_{21} & s_{22} & \ldots & s_{2m} \\
\ldots & \ldots & \ldots & \ldots \\
s_{m1} & s_{m2} & \ldots & s_{mm}
\end{pmatrix} \tag{8.13}
$$

of the input data. By means of the column vectors $\boldsymbol{b}_k$; $k = 1, \ldots, r$ of the matrix $\boldsymbol{B}$,

$$
\boldsymbol{B} = (\boldsymbol{b}_1 \;\; \boldsymbol{b}_2 \;\; \cdots \;\; \boldsymbol{b}_r), \tag{8.14}
$$

we may cast the variances and covariances (8.11) into

$$
s_{\bar\beta_k \bar\beta_{k'}} = \boldsymbol{b}_k^{\mathrm{T}} \boldsymbol{s}\, \boldsymbol{b}_{k'}; \quad k, k' = 1, \ldots, r. \tag{8.15}
$$

Hence, (8.10) turns into

$$
\boldsymbol{s}_{\bar\beta} = \begin{pmatrix}
\boldsymbol{b}_1^{\mathrm{T}} \boldsymbol{s}\,\boldsymbol{b}_1 & \boldsymbol{b}_1^{\mathrm{T}} \boldsymbol{s}\,\boldsymbol{b}_2 & \ldots & \boldsymbol{b}_1^{\mathrm{T}} \boldsymbol{s}\,\boldsymbol{b}_r \\
\boldsymbol{b}_2^{\mathrm{T}} \boldsymbol{s}\,\boldsymbol{b}_1 & \boldsymbol{b}_2^{\mathrm{T}} \boldsymbol{s}\,\boldsymbol{b}_2 & \ldots & \boldsymbol{b}_2^{\mathrm{T}} \boldsymbol{s}\,\boldsymbol{b}_r \\
\ldots & \ldots & \ldots & \ldots \\
\boldsymbol{b}_r^{\mathrm{T}} \boldsymbol{s}\,\boldsymbol{b}_1 & \boldsymbol{b}_r^{\mathrm{T}} \boldsymbol{s}\,\boldsymbol{b}_2 & \ldots & \boldsymbol{b}_r^{\mathrm{T}} \boldsymbol{s}\,\boldsymbol{b}_r
\end{pmatrix} = \boldsymbol{B}^{\mathrm{T}} \boldsymbol{s}\, \boldsymbol{B}. \tag{8.16}
$$

For a moment, let the input data themselves be independent. Thus, for any fixed $i$, the order of the $x_{il}$ is indeterminate, so that individuals may be interchanged. Clearly, permutations do not alter the $\bar\beta_k$; they do, however, alter the $\bar\beta_{kl}$. At the same time, the diagonal elements of the matrix (8.13) are not affected; the off-diagonal elements, however, are. Moreover, the diagonal as well as the off-diagonal elements of the matrix (8.16) will change. Again, this complies perfectly with the properties of the underlying statistical model and, in particular, with our aim to introduce confidence intervals according to Student.


Each of the quantities $\bar\beta_{kl}$, as defined in (8.8), implies a set of input data $x_{1l}, x_{2l}, \ldots, x_{ml}$, the elements of which may or may not be dependent. But let us recall that the consecutive sets $l = 1, \ldots, n$ are considered independent. From there, the $\bar\beta_{kl}$; $l = 1, 2, \ldots, n$ are normally distributed and independent. Hence, we expect the intervals

$$
\bar\beta_k \pm \frac{t_P(n-1)}{\sqrt{n}} \sqrt{\sum_{i,j}^{m} b_{ik} b_{jk} s_{ij}}; \quad k = 1, \ldots, r \tag{8.17}
$$

to localize the components $\mu_k$; $k = 1, \ldots, r$ of the vector

$$
\boldsymbol{\mu}_{\bar\beta} = (\mu_1 \;\; \mu_2 \;\; \ldots \;\; \mu_r)^{\mathrm{T}} = \boldsymbol\beta_0 + \boldsymbol{B}^{\mathrm{T}} \boldsymbol{f} \tag{8.18}
$$

with probability $P$ as given by Student's $t_P(n-1)$.
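The chain (8.1), (8.13), (8.16) translates directly into matrix code. The following sketch uses our own naming; the Student factor and the error bounds $f_{s,i}$ are supplied by the caller, and the uncertainty corresponding to (8.23) below is formed from the diagonal of $\boldsymbol{B}^{\mathrm{T}}\boldsymbol{s}\,\boldsymbol{B}$:

```python
import numpy as np

def lsq_with_uncertainty(A, data, f_s, t_P):
    """Least squares sketch along Eqs. (8.1)-(8.23).

    A:    (m, r) design matrix; data: (m, n) repeated measurements;
    f_s:  (m,) worst-case bounds of the unknown systematic errors;
    t_P:  Student's t_P(n-1), supplied by the caller.
    """
    m, n = data.shape
    B = A @ np.linalg.inv(A.T @ A)                 # Eq. (8.2)
    xbar = data.mean(axis=1)
    beta = B.T @ xbar                              # Eq. (8.1)
    s_beta = B.T @ np.cov(data) @ B                # Eqs. (8.13), (8.16)
    u = t_P / np.sqrt(n) * np.sqrt(np.diag(s_beta)) + np.abs(B).T @ f_s  # Eq. (8.23)
    return beta, u

# hypothetical 3x2 system with true solution beta_0 = (1, 2)^T
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
data = np.array([[1.000, 1.001, 0.999, 1.000, 1.000],
                 [2.000, 2.001, 1.999, 2.000, 2.000],
                 [3.000, 3.001, 2.999, 3.000, 3.000]])
beta, u = lsq_with_uncertainty(A, data, f_s=np.array([0.01, 0.01, 0.01]), t_P=2.78)
assert np.allclose(beta, [1.0, 2.0], atol=1e-3)
assert np.all(np.abs(beta - np.array([1.0, 2.0])) <= u)  # localization holds here
```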

8.2 Propagation of Systematic Errors

From (8.5) we read the vector

$$
\boldsymbol{f}_{\bar\beta} = \boldsymbol{B}^{\mathrm{T}} \boldsymbol{f} \tag{8.19}
$$

of the propagated systematic errors, written in components

$$
f_{\bar\beta_k} = \sum_{i=1}^{m} b_{ik} f_i; \quad k = 1, \ldots, r. \tag{8.20}
$$

The worst-case estimations are

$$
f_{s,\bar\beta_k} = \sum_{i=1}^{m} |b_{ik}| f_{s,i}. \tag{8.21}
$$

For equal systematic errors, $f_i = f$, $f_{s,i} = f_s$; $i = 1, \ldots, m$, the estimations turn out more favorable,

$$
f_{s,\bar\beta_k} = f_s \left| \sum_{i=1}^{m} b_{ik} \right|. \tag{8.22}
$$

According to (8.21), the formalism seems to respond to a rising number $m$ of input data with a possibly inappropriate growth in $f_{s,\bar\beta_k}$, thus blotting out the gain of input information. This, however, does not apply, since the method of least squares brings about as much as a weighting of errors, thus preventing pile-ups. An example was given in (6.25).


8.3 Uncertainties of the Estimators

Combining the confidence intervals (8.17) with the worst-case estimations (8.21) yields the results

$$
\bar\beta_k - u_{\bar\beta_k} \le \beta_{0,k} \le \bar\beta_k + u_{\bar\beta_k}; \quad k = 1, \ldots, r
$$

$$
u_{\bar\beta_k} = \frac{t_P(n-1)}{\sqrt{n}} \sqrt{\sum_{i,j}^{m} b_{ik} b_{jk} s_{ij}} + \sum_{i=1}^{m} |b_{ik}| f_{s,i} \tag{8.23}
$$

or, if instead of (8.21) reference is taken to (8.22),

$$
\bar\beta_k - u_{\bar\beta_k} \le \beta_{0,k} \le \bar\beta_k + u_{\bar\beta_k}; \quad k = 1, \ldots, r
$$

$$
u_{\bar\beta_k} = \frac{t_P(n-1)}{\sqrt{n}} \sqrt{\sum_{i,j}^{m} b_{ik} b_{jk} s_{ij}} + f_s \left| \sum_{i=1}^{m} b_{ik} \right|. \tag{8.24}
$$

As illustrated in Fig. 8.1, we should not be exclusively fixated on the numerical values of the least squares estimates. Rather, considering one and the same physical quantity, say, $\beta_{0,k}$, there may be a set of varying uncertainty intervals coming from different adjustments. As each adjustment will certainly imply differing physical and metrological details, the actual positions and lengths of the respective uncertainty intervals cannot be expected to be identical. Nevertheless, they should collectively localize the common true value.

Fig. 8.1. Uncertainty intervals coming from different experiments and aiming at one and the same physical quantity, say β0,k , should mutually overlap


8.4 Weighting Factors

To boost the influence of the more accurate input data and to weaken the influence of the less accurate ones, so-called weighting factors may be introduced. For convenience, weighting factors are gathered within a weight matrix. The classical Gaussian error calculus, which dismisses unknown systematic errors, defines the weight matrix of an inconsistent linear system via the Gauss–Markov theorem. In a sense, this weight matrix attempts to establish an objective weighting procedure. The uncertainties of the adjusted estimators are considered minimal and thus called "optimal." Unfortunately, the theorem has never been operative. Firstly, the so-defined weight matrix is based on the unknown theoretical variances and the unknown covariances¹ of the input data and thus does not exist. Secondly, the theorem is known to fail in the presence of biases. Thirdly, at any rate, the uncertainties still depend on the choice of the design matrix $\boldsymbol{A}$.

Nevertheless, weighting factors prove useful, given that the altered results continue to localize the true values. However, as there is no theorem producing a weight matrix, be it an "optimal" one or not, it is left to the experimenters to decide which of the input data to boost and which to weaken. Let us consider a diagonal matrix of weights

$$
\boldsymbol{G} = \mathrm{diag}\{g_1, g_2, \ldots, g_m\} \tag{8.25}
$$

where the $g_i$; $i = 1, \ldots, m$ are taken to be the reciprocals of the uncertainties of the input data,

$$
g_i = 1/u_{\bar{x}_i}; \quad i = 1, \ldots, m. \tag{8.26}
$$

The smaller the uncertainty $u_{\bar{x}_i}$, the larger the weight. Left-multiplying the inconsistent linear system $\boldsymbol{A}\boldsymbol\beta \approx \bar{\boldsymbol{x}}$ by $\boldsymbol{G}$,

$$
\boldsymbol{G}\boldsymbol{A}\boldsymbol\beta \approx \boldsymbol{G}\bar{\boldsymbol{x}}, \tag{8.27}
$$

produces the solution vector

$$
\bar{\boldsymbol\beta} = \tilde{\boldsymbol{B}}^{\mathrm{T}} \boldsymbol{G}\bar{\boldsymbol{x}}, \quad \tilde{\boldsymbol{B}} = (\boldsymbol{G}\boldsymbol{A}) \left[ (\boldsymbol{G}\boldsymbol{A})^{\mathrm{T}} (\boldsymbol{G}\boldsymbol{A}) \right]^{-1} \tag{8.28}
$$

of the weighted system. A weight matrix does not shift the components of the true solution vector $\boldsymbol\beta_0$ as

¹ To recall: theoretical covariances have a fixed sign; empirical covariances may be either positive or negative.


$$
\boldsymbol{G}\boldsymbol{A}\boldsymbol\beta_0 = \boldsymbol{G}\boldsymbol{x}_0
$$

re-establishes

$$
\boldsymbol\beta_0 = \left[ (\boldsymbol{G}\boldsymbol{A})^{\mathrm{T}} (\boldsymbol{G}\boldsymbol{A}) \right]^{-1} (\boldsymbol{G}\boldsymbol{A})^{\mathrm{T}} \boldsymbol{G}\boldsymbol{x}_0
$$

regardless of the weights chosen. This property per se proves self-evident; nevertheless, it turns out crucial in regard to the localization of the estimators' true values. To assign uncertainties to the solution vector of the weighted system we start from

$$
\bar{\boldsymbol\beta} = \tilde{\boldsymbol{B}}^{\mathrm{T}} \boldsymbol{G} \left[ \boldsymbol{x}_0 + (\bar{\boldsymbol{x}} - \boldsymbol{\mu}) + \boldsymbol{f} \right] = \boldsymbol\beta_0 + \tilde{\boldsymbol{B}}^{\mathrm{T}} \boldsymbol{G} (\bar{\boldsymbol{x}} - \boldsymbol{\mu}) + \tilde{\boldsymbol{B}}^{\mathrm{T}} \boldsymbol{G}\boldsymbol{f}. \tag{8.29}
$$

The empirical variance–covariance matrix of the solution vector takes the form

$$
\boldsymbol{s}_{\bar\beta} = \tilde{\boldsymbol{B}}^{\mathrm{T}} \boldsymbol{G}^{\mathrm{T}} \boldsymbol{s}\, \boldsymbol{G} \tilde{\boldsymbol{B}} \tag{8.30}
$$

with $\boldsymbol{s}$ as given in (8.13). Let

$$
\tilde{\boldsymbol\mu}_{\bar\beta} = \boldsymbol\beta_0 + \tilde{\boldsymbol{B}}^{\mathrm{T}} \boldsymbol{G}\boldsymbol{f} \tag{8.31}
$$

denote the expectation of the solution vector (8.29). Hence, there are confidence intervals

$$
\bar\beta_k \pm \frac{t_P(n-1)}{\sqrt{n}} \sqrt{\sum_{i,j}^{m} \tilde{b}_{ik} \tilde{b}_{jk} g_i g_j s_{ij}}; \quad k = 1, \ldots, r \tag{8.32}
$$

localizing the components of the vector $\tilde{\boldsymbol\mu}_{\bar\beta}$ with probability $P$ as given by Student's $t_P(n-1)$. Finally, we consider the vector of the propagated systematic errors

$$
\tilde{\boldsymbol{f}}_{\bar\beta} = \tilde{\boldsymbol{B}}^{\mathrm{T}} \boldsymbol{G}\boldsymbol{f} \tag{8.33}
$$

with components

$$
\tilde{f}_{\bar\beta_k} = \sum_{i=1}^{m} \tilde{b}_{ik} g_i f_i; \quad k = 1, \ldots, r \tag{8.34}
$$

the worst-case estimates of which are given by

$$
\tilde{f}_{s,\bar\beta_k} = \sum_{i=1}^{m} |\tilde{b}_{ik}| g_i f_{s,i}; \quad k = 1, \ldots, r. \tag{8.35}
$$


In case of equal systematic errors this turns into

$$
\tilde{f}_{s,\bar\beta_k} = f_s \left| \sum_{i=1}^{m} \tilde{b}_{ik} g_i \right|; \quad k = 1, \ldots, r. \tag{8.36}
$$

After all, the result of the weighted adjustment is given by

$$
\bar\beta_k - u_{\bar\beta_k} \le \beta_{0,k} \le \bar\beta_k + u_{\bar\beta_k}; \quad k = 1, \ldots, r
$$

$$
u_{\bar\beta_k} = \frac{t_P(n-1)}{\sqrt{n}} \sqrt{\sum_{i,j}^{m} \tilde{b}_{ik} \tilde{b}_{jk} g_i g_j s_{ij}} + \sum_{i=1}^{m} |\tilde{b}_{ik}| g_i f_{s,i} \tag{8.37}
$$

or, alternatively, by

$$
\bar\beta_k - u_{\bar\beta_k} \le \beta_{0,k} \le \bar\beta_k + u_{\bar\beta_k}; \quad k = 1, \ldots, r
$$

$$
u_{\bar\beta_k} = \frac{t_P(n-1)}{\sqrt{n}} \sqrt{\sum_{i,j}^{m} \tilde{b}_{ik} \tilde{b}_{jk} g_i g_j s_{ij}} + f_s \left| \sum_{i=1}^{m} \tilde{b}_{ik} g_i \right|. \tag{8.38}
$$

Remarkably enough, whether or not there is a weight matrix, and, in case there is one, independent of the weights chosen, the inequalities given in (8.23), (8.24), (8.37), and (8.38) encase the true values $\beta_{0,k}$; $k = 1, \ldots, r$. In a sense, this observation may be seen to re-establish the order of least squares.
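That the true solution vector is not shifted by the weights can be verified directly. A sketch with a hypothetical $3 \times 2$ system (our own example, not the book's):

```python
import numpy as np

# Eqs. (8.27), (8.28): for error-free input x0 = A beta0, any (invertible)
# diagonal weight matrix G reproduces the same true solution vector beta0.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # hypothetical design matrix
beta0 = np.array([1.0, 2.0])
x0 = A @ beta0

for g in ([1.0, 1.0, 1.0], [0.5, 2.0, 3.0]):          # weights chosen arbitrarily
    G = np.diag(g)
    GA = G @ A
    B_tilde = GA @ np.linalg.inv(GA.T @ GA)           # Eq. (8.28)
    beta = B_tilde.T @ (G @ x0)
    assert np.allclose(beta, beta0)                   # weights do not shift beta0
```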

Fig. 8.2. The method of least squares is tied down to the input data, the design matrix A, the weight matrix G, and the error model


As Fig. 8.2 illustrates, least squares adjustments are tied down to

– the input data,
– the design matrix A,
– the weight matrix G and, in particular, to
– the error model.

8.5 Example

Under the error model discussed, the method of least squares provides robust and reliable estimators. Nevertheless, it might be of interest to unravel the method's localization properties in detail and to explore where its safety might expire. To this end, we resort to a numerical example.

Data Simulation

Let us consider a linear system $\boldsymbol{A}\boldsymbol\beta_0 = \boldsymbol{x}_0$ with design matrix $\boldsymbol{A}$, true solution vector $\boldsymbol\beta_0$, and true vector of input data $\boldsymbol{x}_0$ as given by

$$
\boldsymbol{A} = \begin{pmatrix}
-1 & -3 & 1 & 0 & 2 \\
0 & -1 & 2 & -3 & 1 \\
3 & 1 & -2 & -1 & 4 \\
-2 & 1 & -1 & 3 & 2 \\
3 & -1 & 2 & 1 & 0 \\
-2 & 2 & 0 & -1 & 3 \\
0 & 2 & -2 & 3 & -1
\end{pmatrix}; \quad
\boldsymbol{x}_0 = \begin{pmatrix} 6 \\ -3 \\ 15 \\ 19 \\ 11 \\ 13 \\ 5 \end{pmatrix}; \quad
\boldsymbol\beta_0 = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \\ 5 \end{pmatrix}.
$$

We superpose random and systematic errors onto the components of the true vector $\boldsymbol{x}_0$. To visualize the influence of the design matrix, we assume the random errors to follow identical normal densities with theoretical variances

$$
\sigma_i = 2 \times 10^{-4}; \quad i = 1, \ldots, 5.
$$

To begin with, Fig. 8.3 considers random errors alone. We observe that the difference between $\bar\beta_3$ and $\beta_{0,3}$ nearly exhausts the confidence interval. Further, Fig. 8.4 explores the influence of systematic errors alone. Here, the interplay between the signs of the elements of the matrix $\boldsymbol{B}$ and the signs of the actual systematic errors $f_i$; $i = 1, \ldots, 5$ is crucial. We put

$$
f_{s,i} = \sqrt{3}\,\sigma_i; \quad i = 1, \ldots, 5
$$


Fig. 8.3. Data simulation, random errors alone

Fig. 8.4. Data simulation, systematic errors alone; $f_i = \mathrm{sign}(b_{i,3}) f_{s,i}$; $i = 1, \ldots, 5$

and, in order to fathom the worst possibility,

$$
f_i = \mathrm{sign}(b_{i,3}) \cdot f_{s,i}; \quad i = 1, \ldots, 5.
$$

Clearly, this choice will exhaust the systematic uncertainty of $\bar\beta_3$. It appears meaningless to speculate about the chances of a coincidence of the two events yanked out with respect to $\bar\beta_3$. We may, however, take the example to underscore the safety of the approach.

Part IV

Essence of Metrology

9 Dissemination of Units

Any system of physical units suffers from the distinction between defined and realized units. Defined units are theoretically conceived, faultless quantities. Realized units, by contrast, are implemented by means of measuring devices and are therefore affected by imperfections. This is why realized units are charged with uncertainties. The universally accepted system of units, maintained by the Bureau International des Poids et Mesures (BIPM) [1], is the SI.

9.1 Working Standards

So-called working standards are intended for everyday use. They are to be derived from the realized SI units or primary standards. Let us postpone the design of multiples and sub-multiples of units and first tackle the linking of standards of equal nominal values. To meet the demands of practice, hierarchies of standards of falling accuracies have been established. Let $H_k$; $k = 0, 1, 2, \ldots$ designate the "ranks" of the standards and assume increasing $k$-s to express decreasing accuracies. Let $H_0$ mark the primary standard. Figure 9.1 sketches the flow of the true values within a hierarchy of SI standards of equal nominal value. The uncertainty of each standard is meant to localize the pertaining true value $\beta_{0,k}$. Suppose the comparator operates according to

$$
\beta_k = \beta_{k-1} + x_k; \quad k = 1, 2, \ldots \tag{9.1}
$$

where $\beta_k$ designates a standard of rank $H_k$, which is to be calibrated, $\beta_{k-1}$ a standard of rank $H_{k-1}$, which accomplishes the link-up, and, finally, $x_k$ the difference displayed by the comparator. The link-up implies a loss of accuracy, as the uncertainty of the primary standard adds to the uncertainty of the linking procedure. Let us consider $n$ repeated measurements

$$
x_{kl} = x_{0,k} + (x_{kl} - \mu_k) + f_k; \quad l = 1, \ldots, n. \tag{9.2}
$$

As usual, $x_{0,k}$ designates the true value of the displayed difference $x_{kl}$, $x_{kl} - \mu_k$ a random error, and $f_k$ a systematic error. The $n$ repeated measurements yield an arithmetic mean and an empirical variance


Fig. 9.1. A hierarchy of SI standards of equal nominal value and ranks H1 , H2 , . . . implying growing uncertainties. The uncertainty of each standard should localize the pertaining true value β0,k

$$
\bar{x}_k = \frac{1}{n} \sum_{l=1}^{n} x_{kl}, \quad s_{x_k}^2 = \frac{1}{n-1} \sum_{l=1}^{n} (x_{kl} - \bar{x}_k)^2. \tag{9.3}
$$

Let the systematic error $f_k$ be confined to

$$
-f_{s,k} \le f_k \le f_{s,k}. \tag{9.4}
$$


Thus, we consider the interval

$$
\bar{x}_k - u_{\bar{x}_k} \le x_{0,k} \le \bar{x}_k + u_{\bar{x}_k}; \quad u_{\bar{x}_k} = \frac{t_P(n-1)}{\sqrt{n}} s_{x_k} + f_{s,k} \tag{9.5}
$$

to localize the true value $x_{0,k}$ of the displayed differences $x_{kl}$; $l = 1, \ldots, n$. Let $\beta_0$ denote the unknown true value of a primary standard of nominal value $N(\beta)$, say, $N(\beta) = 1\,\mathrm{kg}$, so that

$$
N(\beta) = \beta_0 + f_{N(\beta)}, \quad -u_{N(\beta)} \le f_{N(\beta)} \le u_{N(\beta)}. \tag{9.6}
$$

Fig. 9.2. Nominal value N (β) and true value β0 of a primary standard

Figure 9.2 illustrates the difference between $\beta_0$ and $N(\beta)$. Whenever the primary standard is used, $\beta_0$ is effective. Let us link a working standard $\beta_1$ of true value $\beta_{0,1}$ to the primary standard. The physical error equation reads

$$
\beta_{1l} = \beta_0 + x_{1l} = \beta_0 + x_{0,1} + (x_{1l} - \mu_1) + f_1; \quad l = 1, \ldots, n. \tag{9.7}
$$

The sum of $\beta_0$ and $x_{0,1}$ on the right-hand side expresses the traceability of $\beta_{0,1}$ to $\beta_0$ via the true indication $x_{0,1}$ of the comparator,

$$
\beta_{0,1} = \beta_0 + x_{0,1}. \tag{9.8}
$$

Averaging the $n$ values $\beta_{1l}$; $l = 1, \ldots, n$ as given in (9.7) produces

$$
\bar\beta_1 = \beta_0 + \bar{x}_1. \tag{9.9}
$$

Still, $\bar\beta_1$ is metrologically undefined. In order to arrive at a serviceable statement, we have to substitute $N(\beta)$ for $\beta_0$,

$$
\bar\beta_1 = N(\beta) + \bar{x}_1, \tag{9.10}
$$


and to add the associated systematic error fN (β) to the uncertainty of the modified mean uβ¯1 =

tP (n − 1) √ sx1 + fs,1 + uN (β) . n

(9.11)

According to β¯1 − uβ¯1 ≤ β0,1 ≤ β¯1 + uβ¯1

(9.12)

we consider the interval β¯1 ± uβ¯1 to localize the true value β0,1 of the lower ranked standard β¯1 . In general, the true value β0,1 of β¯1 will differ from the true value β0 of the primary standard N (β). Let us repeat the procedure for k=2 β2l = β0,1 + x2l = β0,1 + x0,2 + (x2l − μ2 ) + f2 ;

l = 1, . . . , n .

Again, the right-hand side expresses the traceability of β0,2 to β0,1 via x0,2 , the true indication of the comparator, β0,2 = β0,1 + x0,2 = β0 + x0,1 + x0,2 .

(9.13)

The average of the n values β2l ; l = 1, . . . , n yields β¯2 = β0,1 + x ¯2 . As β0,1 is unknown, we have to substitute β¯1 for β0,1 ¯2 β¯2 = β¯1 + x

(9.14)

and to add the uncertainty of β¯1 to the uncertainty of the modified mean β¯2 uβ¯2 =

tP (n − 1) tP (n − 1) √ √ sx2 + fs,2 + uN (β) . sx1 + fs,1 + n n

(9.15)

After all, we consider the interval β¯2 ± uβ¯2 to localize the true value β0,2 of the lower ranked standard β¯2 , written explicitly β¯2 − uβ¯2 ≤ β0,2 ≤ β¯2 + uβ¯2 .

(9.16)

Ultimately, a standard of rank k reveals traceability according to β0,k = β0 +

k  i=1

x0,i .

(9.17)
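The uncertainty build-up of (9.11) and (9.15) is a plain accumulation of stage terms on top of the primary standard's uncertainty. A minimal sketch, with purely hypothetical numbers and the quotient tP(n−1)/√n entered as a single precomputed factor:

```python
# Sketch of the chain (9.11), (9.15): every calibration stage adds its
# random-error term t_P(n-1)/sqrt(n) * s_x and its worst-case systematic
# bound f_s on top of the primary standard's uncertainty.

def chain_uncertainty(u_N, stages, t_factor):
    # u_N: uncertainty of the primary standard
    # stages: list of (s_x, f_s) pairs, one per calibration stage
    # t_factor: t_P(n-1)/sqrt(n), here assumed equal for all stages
    u = u_N
    for s_x, f_s in stages:
        u += t_factor * s_x + f_s
    return u

u1 = chain_uncertainty(5e-5, [(1e-5, 2e-5)], t_factor=1.0)                # (9.11)
u2 = chain_uncertainty(5e-5, [(1e-5, 2e-5), (2e-5, 1e-5)], t_factor=1.0)  # (9.15)
```

Each further rank only ever adds terms, so the uncertainty of a standard grows monotonically with its distance from the primary standard, mirroring the traceability chain (9.17).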

9.2 Key Comparisons


So-called key comparisons enable the national metrology institutes to rely on equivalent SI standards. Let β̄^(1), β̄^(2), …, β̄^(m) denote a selected group of standards of the same nominal value, of the same hierarchical rank, and with true values β0^(i); i = 1, …, m, Fig. 9.3. The true values of the standards need not coincide. However, it is indispensable that the standards' uncertainties localize the respective true values. Key comparisons are implemented via round robins. Here, a suitable transfer standard, say T, is passed on from one participant to the next. Each time, the transfer standard gets calibrated, Fig. 9.4. During the portage the physical properties of the transfer standard are expected to remain constant. If this happens to apply, each link-up relates to the same true value. Afterward, the mutual consistency of the results is to be scrutinized. The i-th participant, i ∈ {1, …, m}, links the transfer standard

Fig. 9.3. The true values of standards holding the same metrological rank need not coincide, nor must their uncertainties overlap. However, the respective uncertainties should localize the pertaining true values

T to his laboratory standard β̄^(i), where the uncertainties of the laboratory standards are considered to localize the belonging true values β0^(i),

    β̄^(i) − uβ̄^(i) ≤ β0^(i) ≤ β̄^(i) + uβ̄^(i) ;  i = 1, …, m .

For convenience, let us rewrite the inequalities as a set of equations,

    β̄^(i) = β0^(i) + fβ̄^(i) ,  −uβ̄^(i) ≤ fβ̄^(i) ≤ uβ̄^(i) ;  i = 1, …, m .

The physical error equations of the m link-ups read

    T_l^(i) = β0^(i) + x_l^(i) ;  l = 1, …, n ;  i = 1, …, m
            = β0^(i) + x0^(i) + (x_l^(i) − μx^(i)) + fx^(i)   (9.18)

Fig. 9.4. Round robin for a group of m standards β̄^(i); i = 1, …, m implemented via a transfer standard T

and

    T̄^(i) = β0^(i) + x̄^(i) ;  i = 1, …, m
          = β0^(i) + x0^(i) + (x̄^(i) − μx^(i)) + fx^(i) .

For practical reasons, we have to substitute estimators β̄^(i) for the inaccessible true values β0^(i) and to include the imported uncertainty in the uncertainty uT̄^(i) of the respective mean T̄^(i). Thus we have

    T̄^(i) = β̄^(i) + x̄^(i) ,  uT̄^(i) = uβ̄^(i) + ux̄^(i) ;  i = 1, …, m .   (9.19)

Let T0 designate the true value of the transfer standard. Then, as traceability suggests, we put

    T0 = β0^(i) + x0^(i) ;  i = 1, …, m   (9.20)

and have

    | T̄^(i) − T0 | ≤ uT̄^(i) ;  i = 1, …, m .   (9.21)

Figure 9.5 depicts the result of a consistent round robin. We might wish to quantify differences of the kind T̄^(i) − T̄^(j). Here, we should observe

    | T̄^(i) − T̄^(j) | ≤ uβ̄^(i) + ux̄^(i) + uβ̄^(j) + ux̄^(j) ;  i, j = 1, …, m .

For completeness, we finally discuss what is called the key comparison reference value, abbreviated KCRV. The KCRV is nothing else but the weighted grand mean¹ as defined in (6.23),

¹ As addressed in Sect. 6.2, a group of means with unequal true values should not be averaged.


Fig. 9.5. In case of consistency, one and the same horizontal line intersects each of the uncertainties uT̄^(i)

    β̄ = Σ_{i=1}^{m} wi T̄^(i) ,  wi = gi² / Σ_{i=1}^{m} gi² ,  gi = 1 / uT̄^(i) .   (9.22)
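The weights in (9.22) can be computed directly; the following sketch uses invented uncertainties uT̄^(i) and means T̄^(i):

```python
# Sketch of the KCRV (9.22): weights w_i = g_i^2 / sum_j g_j^2, g_i = 1/u_i.
# The three uncertainties and means below are hypothetical placeholders.

u_T = [0.004, 0.002, 0.008]        # u_T^(i), i = 1..m
T_bar = [10.012, 10.008, 10.020]   # calibration results T^(i)

g = [1.0 / u for u in u_T]
w = [gi**2 / sum(gj**2 for gj in g) for gi in g]   # weights, summing to one

kcrv = sum(wi * Ti for wi, Ti in zip(w, T_bar))    # weighted grand mean
```

The most precise participant, i.e. the one with the smallest uT̄^(i), receives the largest weight.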

We derive the uncertainties ud̄i of the m differences

    d̄i = T̄^(i) − β̄ ;  i = 1, …, m .   (9.23)

To this end, we insert (9.19) into (9.23), so that

    d̄i = β̄^(i) + x̄^(i) − Σ_{j=1}^{m} wj ( β̄^(j) + x̄^(j) ) .

Due to (9.18) we have

    d̄i = β0^(i) + fβ̄^(i) + x̄^(i) − Σ_{j=1}^{m} wj ( β0^(j) + fβ̄^(j) + x̄^(j) ) .

Inserting the error equations of the x̄^(i); i = 1, …, m yields

    d̄i = β0^(i) + fβ̄^(i) + x0^(i) + (x̄^(i) − μx^(i)) + fx^(i)
         − Σ_{j=1}^{m} wj [ β0^(j) + fβ̄^(j) + x0^(j) + (x̄^(j) − μx^(j)) + fx^(j) ] .

Putting

    fi = fx^(i) + fβ̄^(i)   (9.24)

and considering (9.20), we arrive at

    d̄i = T0 + (x̄^(i) − μx^(i)) + fi − Σ_{j=1}^{m} wj [ T0 + (x̄^(j) − μx^(j)) + fj ]

       = (x̄^(i) − μx^(i)) − Σ_{j=1}^{m} wj (x̄^(j) − μx^(j)) + fi − Σ_{j=1}^{m} wj fj .   (9.25)

Reverting to individual measurements x_l^(i) we find

    d̄i = (1/n) Σ_{l=1}^{n} (x_l^(i) − μx^(i)) − Σ_{j=1}^{m} wj [ (1/n) Σ_{l=1}^{n} (x_l^(j) − μx^(j)) ] + fi − Σ_{j=1}^{m} wj fj

       = (1/n) Σ_{l=1}^{n} [ (x_l^(i) − μx^(i)) − Σ_{j=1}^{m} wj (x_l^(j) − μx^(j)) ] + fi − Σ_{j=1}^{m} wj fj ,   (9.26)

i.e.

    d̄i,l = (x_l^(i) − μx^(i)) − Σ_{j=1}^{m} wj (x_l^(j) − μx^(j)) + fi − Σ_{j=1}^{m} wj fj ;  i = 1, …, m ;  l = 1, …, n .   (9.27)

Hence, we are in a position to define the n differences

    d̄i,l − d̄i = (x_l^(i) − x̄^(i)) − Σ_{j=1}^{m} wj (x_l^(j) − x̄^(j)) .   (9.28)

Let the empirical variances and covariances of the participants,

    sx^(i)x^(j) ≡ sij = (1/(n−1)) Σ_{l=1}^{n} (x_l^(i) − x̄^(i))(x_l^(j) − x̄^(j)) ;  i, j = 1, …, m ,

define an empirical variance–covariance matrix

    s = ( s11 s12 … s1m ; s21 s22 … s2m ; … ; sm1 sm2 … smm ) ,  sii ≡ si² ,

and the weights wi an auxiliary vector

    w = (w1 w2 … wm)^T .


Fig. 9.6. None of the quantities | T̄^(i) − β̄ | should exceed the respective uncertainty ud̄i

After all, (9.28) produces

    sdi² = (1/(n−1)) Σ_{l=1}^{n} (d̄i,l − d̄i)² = si² − 2 Σ_{j=1}^{m} wj sij + w^T s w .   (9.29)
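The quadratic form in (9.29) is easily spelled out; the matrix s and the weights w below are hypothetical placeholders:

```python
# Sketch of (9.29): s_d_i^2 = s_i^2 - 2 * sum_j w_j s_ij + w^T s w.
# s is a (hypothetical) empirical variance-covariance matrix, s_ii = s_i^2.

s = [[4.0, 1.0, 0.5],
     [1.0, 9.0, 0.2],
     [0.5, 0.2, 1.0]]
w = [0.3, 0.2, 0.5]

m = len(w)
wsw = sum(w[i] * s[i][j] * w[j] for i in range(m) for j in range(m))  # w^T s w

s_d_sq = [s[i][i] - 2.0 * sum(w[j] * s[i][j] for j in range(m)) + wsw
          for i in range(m)]
```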

The worst-case estimation of (9.24) yields

    fs,i = fs,x^(i) + uβ̄^(i) ;  i = 1, …, m .

Hence, the uncertainties of the differences (9.23) take the form

    ud̄i = [tP(n−1)/√n] √[ si² − 2 Σ_{j=1}^{m} wj sij + w^T s w ]
          + (1 − wi) fs,i + Σ_{j=1, j≠i}^{m} wj fs,j ;  i = 1, …, m .   (9.30)

We are free to add ±wi fs,i to the second term on the right,

    (1 − wi) fs,i + Σ_{j=1, j≠i}^{m} wj fs,j ± wi fs,i = fs,i − 2 wi fs,i + Σ_{j=1}^{m} wj fs,j ,

so that

    ud̄i = [tP(n−1)/√n] √[ si² − 2 Σ_{j=1}^{m} wj sij + w^T s w ]
          + fs,i − 2 wi fs,i + Σ_{j=1}^{m} wj fs,j ;  i = 1, …, m .   (9.31)

As the d0,i vanish, the quotations | d̄i − d0,i | ≤ ud̄i turn into

    | T̄^(i) − β̄ | ≤ ud̄i ;  i = 1, …, m .   (9.32)

Given everything has gone well, Fig. 9.6 would illustrate the result. In case of equal weights, we set wi = 1/m. Whether or not (9.32) proves practical appears somewhat questionable. Let us recall that the grand mean β̄ as defined in (9.22) is established through the calibrations T̄^(i) themselves. Consequently, we cannot rule out that, just by chance, correct calibrations come out incorrect and incorrect ones correct. After all, unrecognized effects might either compensate or boost each other. On the other hand, it appears simpler, and might possibly be even more beneficial, to check whether the uncertainties of the T̄^(i); i = 1, …, m mutually overlap. If so, this at least announces compatibility of the β̄^(1), β̄^(2), …, β̄^(m).
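The closing remark, checking whether the uncertainty intervals of the T̄^(i) mutually overlap, amounts to a pairwise test; a minimal sketch with invented numbers:

```python
# Two intervals a +/- u_a and b +/- u_b overlap iff |a - b| <= u_a + u_b.
# The check below runs over all pairs of participants; data are invented.

def mutually_overlap(T_bar, u):
    m = len(T_bar)
    return all(abs(T_bar[i] - T_bar[j]) <= u[i] + u[j]
               for i in range(m) for j in range(i + 1, m))

consistent = mutually_overlap([10.012, 10.008, 10.020], [0.010, 0.008, 0.012])
outlier    = mutually_overlap([10.012, 10.008, 10.100], [0.010, 0.008, 0.012])
```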

10 Multiples and Sub-multiples

To establish a group of standards of varying nominal values, either multiples or sub-multiples, requires sophisticated linking procedures. As an example, we shall consider the down-scaling of the base unit of mass, the kilogram.

10.1 Calibration Chains

Via regulation, the sub-multiples and multiples of the kilogram are

    … , 10 g , 20 g , 50 g , 100 g , 200 g , 500 g , 1 kg , 2 kg , 5 kg , 10 kg , … .   (10.1)

Let us confine ourselves to down-scaling and consider weights of nominal values

    N(m1) = 500 g ,  N(m2) = 200 g ,  N(m3) = 200 g ,
    N(m4) = 100 g ,  N(m5) = 100 g .   (10.2)

For convenience, we admit differences

    βk = mk − N(mk) ;  k = 1, 2, …   (10.3)

to exist between the physical masses mk and their nominal values N(mk). To control the link-up procedures, we additionally include a so-called check-weight, say, N(m6) = 300 g, of known mass m6 and known uncertainty um6, and expect the two intervals

    (m6 ± um6) before calibration  and  (m6 ± um6) after calibration

to overlap. We assume the link-up to be established through a given primary standard of nominal value N = 1 kg and physically true value N0. We let N̄ designate the associated mean with uncertainty uN̄, Fig. 10.1,


Fig. 10.1. Link-up standard with nominal value N = 1 kg, true value N0, and mean value N̄

    N̄ − uN̄ ≤ N0 ≤ N̄ + uN̄ ,
    N̄ = N0 + fN̄ ,  −uN̄ ≤ fN̄ ≤ uN̄ .   (10.4)

We shall always liken groups of weights of equal nominal values, e.g.

    {m1 + m2 + m3 + m6}   nominal value 1.2 kg
    {m4 + m5 + N0}        nominal value 1.2 kg        (1st comparison)

and assume l = 1, …, n repeated measurements. The observational equation reads

    {m1 + m2 + m3 + m6} − {m4 + m5 + N0} ≈ x̄1 ;  x̄1 = (1/n) Σ_{l=1}^{n} x1l .

In physical terms, the true value of the primary standard enters. However, as N0 is unknown, we have to substitute N̄ for N0 and to increase the uncertainty ux̄1 of the right-hand side by uN̄ up to uN̄ + ux̄1,

    {m1 + m2 + m3 + m6} − {m4 + m5 + N̄} ≈ x̄1 ,  u(right-hand side) = uN̄ + ux̄1 .

Referring to the deviations as defined in (10.3), we have

    β1 + β2 + β3 − β4 − β5 + β6 ≈ x̄1 + d̄   (10.5)


where

    d̄ = N̄ − N   (10.6)

denotes the difference between N̄ and the nominal value N. Obviously, the true value of d̄ is

    d0 = N0 − N   (10.7)

and the associated error equation reads

    d̄ = d0 + fN̄ ;  −uN̄ ≤ fN̄ ≤ uN̄ .   (10.8)

As we take from (10.5), the first row of the design matrix A is constituted by the numbers +1 or −1, depending on whether the weight is a member of the first or of the second group of weights. Since there are other comparisons which do not concern all weights of the set (10.2), as e.g.

    {m6}        nominal value 300 g
    {m3 + m5}   nominal value 300 g        (2nd comparison)

implying

    −β3 − β5 + β6 ≈ x̄2 ,

the design matrix also exhibits zeros. After all, the first two rows of the design matrix A read

    [ 1  1  1 −1 −1  1 ]
    [ 0  0 −1  0 −1  1 ]   (10.9)

As there are r = 6 unknowns, m > 6 observational equations are needed, and the rank of A has to be 6. The exemplary weighing scheme presented below is made up of m = 16 comparisons. The first five comparisons include the link-up standard N̄, which is indicated by a (+) sign to the right of the matrix. For convenience we introduce an auxiliary vector

    d̄ = ( d̄ … d̄ | 0 … 0 )^T ,   (10.10)

its first p entries being equal to d̄ and its remaining m − p entries being zero, where p = 5. Furthermore, the scheme assumes 14 comparisons to include the check-weight. Setting

    β = (β1 β2 … βr)^T ,  x̄ = (x̄1 x̄2 … x̄m)^T ;  r = 6 ,  m = 16 ,

and the design matrix

    A =
    [ 1  1  1 −1 −1  1 ]  (+)
    [ 1 −1  1  1  1  1 ]  (+)
    [ 1  1 −1  1  1  1 ]  (+)
    [ 1  1  0  1 −1  1 ]  (+)
    [ 1  0  1 −1  1  1 ]  (+)
    [ 1 −1  1 −1 −1 −1 ]
    [ 1 −1 −1  1  1 −1 ]
    [ 1  1 −1 −1 −1 −1 ]
    [ 1 −1  0  1 −1 −1 ]
    [ 1  0 −1 −1  1 −1 ]
    [ 0 −1  0 −1  0  1 ]
    [ 0 −1  0 −1  0  1 ]
    [ 0  0 −1  0 −1  1 ]
    [ 0  0 −1  0 −1  1 ]
    [ 0  0  0  1 −1  0 ]
    [ 0  0  0  1 −1  0 ] ,

the inconsistent linear system takes the form

    A β ≈ x̄ + d̄ .   (10.11)

The least squares solution vector is

    β̄ = (A^T A)^{-1} A^T (x̄ + d̄) = B^T (x̄ + d̄) ,
    B = A (A^T A)^{-1} = (bik) ;  i = 1, …, m ;  k = 1, …, r ,   (10.12)

in components

    β̄k = Σ_{i=1}^{m} bik x̄i + d̄ Σ_{i=1}^{p} bik ;  k = 1, …, r .   (10.13)

Inserting the error equations

    x̄i = x0,i + (x̄i − μi) + fi ,  d̄ = d0 + fN̄ ,

we find

    β̄k = Σ_{i=1}^{m} bik [ x0,i + (x̄i − μi) + fi ] + (d0 + fN̄) Σ_{i=1}^{p} bik ;  k = 1, …, r .

Obviously, the

    β0,k = Σ_{i=1}^{m} bik x0,i + d0 Σ_{i=1}^{p} bik ;  k = 1, …, r   (10.14)

designate the flow of the true values β0,k of the estimators β̄k. The propagated systematic errors turn out as

    fs,β̄k = Σ_{i=1}^{m} | bik | fs,i + uN̄ | Σ_{i=1}^{p} bik | ;  k = 1, …, r .   (10.15)

Returning to individual measurements, (10.13) yields

    β̄kl = Σ_{i=1}^{m} bik xil + d̄ Σ_{i=1}^{p} bik ;  k = 1, …, r .   (10.16)

Hence, the differences

    β̄kl − β̄k = Σ_{i=1}^{m} bik (xil − x̄i) ;  k = 1, …, r ;  l = 1, …, n

produce the empirical variances and covariances of the components of the solution vector

    sβ̄kβ̄k' = (1/(n−1)) Σ_{l=1}^{n} (β̄kl − β̄k)(β̄k'l − β̄k')

            = (1/(n−1)) Σ_{l=1}^{n} [ Σ_{i=1}^{m} bik (xil − x̄i) ] [ Σ_{j=1}^{m} bjk' (xjl − x̄j) ]

            = Σ_{i,j}^{m} bik bjk' sij ,  sβ̄kβ̄k ≡ sβ̄k² ;  k, k' = 1, …, r .   (10.17)

Here, the

    sij = (1/(n−1)) Σ_{l=1}^{n} (xil − x̄i)(xjl − x̄j) ;  i, j = 1, …, m

denote the empirical variances and covariances of the input data which we gather within a matrix s, as done in (8.13). Thus, we have

    sβ̄ = ( sβ̄1β̄1 sβ̄1β̄2 … sβ̄1β̄r ; sβ̄2β̄1 sβ̄2β̄2 … sβ̄2β̄r ; … ; sβ̄rβ̄1 sβ̄rβ̄2 … sβ̄rβ̄r ) = B^T s B .   (10.18)

After all, the uncertainties of the adjusted weights β̄k; k = 1, …, r take the form

    uβ̄k = [tP(n−1)/√n] sβ̄k + fs,β̄k

        = [tP(n−1)/√n] √[ Σ_{i,j}^{m} bik bjk sij ] + Σ_{i=1}^{m} | bik | fs,i + uN̄ | Σ_{i=1}^{p} bik | ;  k = 1, …, r .   (10.19)
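To make the mechanics of (10.11) through (10.13) concrete, here is a deliberately tiny, hypothetical link-up: two 100 g weights with deviations β1, β2 are tied to a 200 g standard. Only the first comparison involves the standard, so p = 1; all numbers are invented.

```python
# Miniature of the adjustment (10.12)/(10.13) with r = 2 unknowns and
# m = 3 comparisons: row 1 likens {m1 + m2} against the standard, rows 2
# and 3 liken m1 against m2. Everything is written out for the 2x2 case.

A = [[1, 1], [1, -1], [1, -1]]     # design matrix
p = 1                              # comparisons containing the standard
x_bar = [0.0059, 0.0140, 0.0140]   # mean comparator readings (invented)
d_bar = 0.00012                    # d = N_bar - N, cf. (10.6)

# (A^T A)^(-1), written out for the 2x2 case
a11 = sum(r[0] * r[0] for r in A)
a12 = sum(r[0] * r[1] for r in A)
a22 = sum(r[1] * r[1] for r in A)
det = a11 * a22 - a12 * a12
inv = [[a22 / det, -a12 / det], [-a12 / det, a11 / det]]

# B = A (A^T A)^(-1), cf. (10.12)
B = [[sum(row[j] * inv[j][k] for j in range(2)) for k in range(2)]
     for row in A]

# beta_k = sum_i b_ik x_i + d_bar * sum_{i=1..p} b_ik, cf. (10.13)
beta = [sum(B[i][k] * x_bar[i] for i in range(3))
        + d_bar * sum(B[i][k] for i in range(p))
        for k in range(2)]
```

With these invented readings, taken error-free, β̄ comes out as (0.01001, −0.00399): shifted against the underlying deviations (0.010, −0.004) by the bias d̄ − d0 entering through the link-up comparison, exactly the effect that (10.14) and (10.15) keep track of.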


Let us finally introduce weighting factors as discussed in Sect. 8.4. Left-multiplying the inconsistent linear system (10.11) by a weighting matrix G = diag{g1, g2, …, gm} produces

    G A β ≈ G (x̄ + d̄)   (10.20)

with d̄ as given in (10.10). Referring to

    B̃ = (GA) [ (GA)^T (GA) ]^{-1} = (b̃ik) ;  i = 1, …, m ;  k = 1, …, r ,

the components of the solution vector read

    β̄k = Σ_{i=1}^{m} b̃ik gi x̄i + d̄ Σ_{i=1}^{p} b̃ik gi ;  k = 1, …, r .   (10.21)

The error equations

    x̄i = x0,i + (x̄i − μi) + fi ,  d̄ = d0 + fN̄

yield

    β̄k = Σ_{i=1}^{m} b̃ik gi [ x0,i + (x̄i − μi) + fi ] + (d0 + fN̄) Σ_{i=1}^{p} b̃ik gi ;  k = 1, …, r .

As a weighting matrix does not shift the components of the true solution vector β0, we have

    β0,k = Σ_{i=1}^{m} b̃ik gi x0,i + d0 Σ_{i=1}^{p} b̃ik gi ;  k = 1, …, r .   (10.22)

Furthermore, we observe propagated systematic errors

    fs,β̄k = Σ_{i=1}^{m} | b̃ik | gi fs,i + uN̄ | Σ_{i=1}^{p} b̃ik gi | ;  k = 1, …, r .   (10.23)

Splitting (10.21) up into individual measurements,

    β̄kl = Σ_{i=1}^{m} b̃ik gi xil + d̄ Σ_{i=1}^{p} b̃ik gi ;  k = 1, …, r ,

we obtain the differences

    β̄kl − β̄k = Σ_{i=1}^{m} b̃ik gi (xil − x̄i) ;  k = 1, …, r ;  l = 1, …, n .   (10.24)

Thus, the empirical variances and covariances of the components of the solution vector take the form

    sβ̄kβ̄k' = (1/(n−1)) Σ_{l=1}^{n} (β̄kl − β̄k)(β̄k'l − β̄k')

            = (1/(n−1)) Σ_{l=1}^{n} [ Σ_{i=1}^{m} b̃ik gi (xil − x̄i) ] [ Σ_{j=1}^{m} b̃jk' gj (xjl − x̄j) ]

            = Σ_{i,j}^{m} b̃ik b̃jk' gi gj sij ,  sβ̄kβ̄k ≡ sβ̄k² ;  k, k' = 1, …, r ,   (10.25)

where

    sij = (1/(n−1)) Σ_{l=1}^{n} (xil − x̄i)(xjl − x̄j) ;  i, j = 1, …, m

denote the elements of the empirical variance–covariance matrix s of the input data. Hence,

    sβ̄ = ( sβ̄1β̄1 … sβ̄1β̄r ; … ; sβ̄rβ̄1 … sβ̄rβ̄r ) = B̃^T G^T s G B̃ .   (10.26)

After all, the weighted adjustment produces uncertainties

    uβ̄k = [tP(n−1)/√n] sβ̄k + fs,β̄k

        = [tP(n−1)/√n] √[ Σ_{i,j}^{m} b̃ik b̃jk gi gj sij ] + Σ_{i=1}^{m} | b̃ik | gi fs,i + uN̄ | Σ_{i=1}^{p} b̃ik gi | ;  k = 1, …, r .   (10.27)
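The invariance stated before (10.22), namely that a weighting matrix does not shift the true solution, can be checked numerically on a hypothetical 3-comparison, 2-unknown scheme fed with exact, error-free data:

```python
# Per (10.21)/(10.22): feeding exact true readings x_0 and d_0 through the
# weighted estimator reproduces beta_0 for ANY row weights g_i. The scheme
# and all numbers are hypothetical.

def weighted_solution(A, g, x, d, p):
    GA = [[g[i] * A[i][j] for j in range(2)] for i in range(len(A))]
    a11 = sum(r[0] * r[0] for r in GA)
    a12 = sum(r[0] * r[1] for r in GA)
    a22 = sum(r[1] * r[1] for r in GA)
    det = a11 * a22 - a12 * a12
    inv = [[a22 / det, -a12 / det], [-a12 / det, a11 / det]]
    Bt = [[sum(r[j] * inv[j][k] for j in range(2)) for k in range(2)]
          for r in GA]                       # B~ = GA [(GA)^T GA]^(-1)
    return [sum(Bt[i][k] * g[i] * x[i] for i in range(len(A)))
            + d * sum(Bt[i][k] * g[i] for i in range(p))
            for k in range(2)]               # cf. (10.21)

A = [[1, 1], [1, -1], [1, -1]]               # comparison 1 holds the standard
beta_0, d_0 = [0.010, -0.004], 0.0001
x_0 = [beta_0[0] + beta_0[1] - d_0, 0.014, 0.014]   # exact true readings

b_equal  = weighted_solution(A, [1, 1, 1], x_0, d_0, p=1)
b_skewed = weighted_solution(A, [1, 5, 2], x_0, d_0, p=1)
# both recover beta_0 = (0.010, -0.004)
```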

Data Simulation

Deploying the preceding weighing scheme we link the set of masses

    N(m1) = 500 g ,  N(m2) = 200 g ,  N(m3) = 200 g ,  N(m4) = 100 g ,  N(m5) = 100 g

and a check-weight N(m6) = 300 g, holding a formal position, to a standard of nominal value N = 1 kg. Let us assume

    N0 = 1000.000 080 g ,  N̄ = 1000.000 100 g ,  uN̄ = 0.000 050 g ,
    d0 = 0.000 080 g ,  d̄ = 0.000 100 g ,

    m0,1 = 500.010 000 g ,  m0,2 = 200.020 000 g ,  m0,3 = 199.970 000 g ,
    m0,4 = 100.005 000 g ,  m0,5 = 99.993 000 g ,  m0,6 = 300.050 000 g ,

and further that the first five comparisons, labeled by a (+) sign, are carried out on a balance with a standard deviation of σ1 = 0.000 010 g and the remaining eleven on a balance with σ2 = 0.000 000 2 g. As the link-up mass N̄ occurs likewise in each of the first five comparisons, the error propagation asks us to feed in the pertaining systematic error flatly five times. We suppose the remaining eleven comparisons to be free of systematic errors. Table 10.1 renders the simulated input data x̄i; i = 1, …, 16, the underlying true values x0,i, and the uncertainties ux̄i. Figure 10.2 suggests, here of course in a more formal sense, that the a priori interval m6 ± um6 and the a posteriori interval β̄6 ± uβ̄6, as allocated to the check-weight on the part of the adjustment, are expected to overlap. Ultimately, Table 10.2 summarizes the results.

Fig. 10.2. The a priori and a posteriori intervals m6 ± um6 and β̄6 ± uβ̄6, allocated to the check-weight, should localize the respective true values m0,6 and β0,6

Table 10.1. Data Simulation: Input Data, True Values, and Uncertainties

     i       x̄i              x0,i             ux̄i
     1    0.051 915 0     0.051 920 0     0.000 014 0
     2    0.007 914 0     0.007 920 0     0.000 015 0
     3    0.107 909 0     0.107 920 0     0.000 010 0
     4    0.091 923 0     0.091 920 0     0.000 015 0
     5    0.017 917 0     0.017 920 0     0.000 016 0
     6   −0.087 999 9    −0.088 000 0     0.000 001 3
     7   −0.032 000 0    −0.032 000 0     0.000 001 7
     8    0.011 998 4     0.012 000 0     0.000 001 2
     9   −0.047 999 6    −0.048 000 0     0.000 001 2
    10   −0.021 998 5    −0.022 000 0     0.000 001 6
    11    0.024 999 6     0.025 000 0     0.000 002 1
    12    0.025 001 2     0.025 000 0     0.000 002 2
    13    0.086 999 5     0.087 000 0     0.000 001 9
    14    0.086 999 7     0.087 000 0     0.000 001 1
    15    0.011 999 9     0.012 000 0     0.000 002 3
    16    0.011 999 0     0.012 000 0     0.000 001 1

Table 10.2. Link-up of a Mass Decade: Estimates of the Mass Differences, True Values, and Uncertainties

     k      β̄k            β0,k        uβ̄k
     1    0.010007       0.0100     0.000029
     2    0.020002       0.0200     0.000012
     3   −0.029997      −0.0300     0.000012
     4    0.005001       0.0050     0.000006
     5   −0.006998      −0.0070     0.000006
     6    0.050004       0.0500     0.000017


10.2 Pairwise Comparisons

Occasionally, within a set of physical quantities of the same quality (masses, lengths, voltages, etc.) only differences between two individuals are measurable. Consider, e.g., r = 4 quantities βk; k = 1, …, r. Then there are either r(r−1)/2 or r(r−1), i.e. six or twelve, measurable differences, depending on whether only the (βi − βj) or also their inversions (βj − βi) matter. Let us refer to the first, simpler situation,

    β1 − β2 ≈ x̄1
    β1 − β3 ≈ x̄2
    β1 − β4 ≈ x̄3
    β2 − β3 ≈ x̄4
    β2 − β4 ≈ x̄5
    β3 − β4 ≈ x̄6 .

Obviously, the first and the second relation establish the fourth one, the first and the third the fifth one and, finally, the second and the third the sixth one. Consequently, the design matrix

    A =
    [ 1 −1  0  0 ]
    [ 1  0 −1  0 ]
    [ 1  0  0 −1 ]
    [ 0  1 −1  0 ]
    [ 0  1  0 −1 ]
    [ 0  0  1 −1 ]

has rank 3. As has been outlined in [28], Sect. 7.3, constraints are needed. To have an example, let us consider

    h1 β1 + h2 β2 + h3 β3 + h4 β4 = y .   (10.28)

Introducing an auxiliary vector H = (h1 h2 h3 h4), we may write

    H β = y .

As has been shown in [28], the solution vector

    β̄ = (A^T A + H^T H)^{-1} (A^T x̄ + H^T y)

satisfies

    H β̄ = y .   (10.29)
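With the constraint vector chosen, the solution (10.29) is a single linear solve. In the sketch below, the choices h1 = … = h4 = 1 and y = 0 (deviations summing to zero) as well as the six measured differences are assumptions made for illustration:

```python
# Sketch of the constrained adjustment: beta = (A^T A + H^T H)^(-1)
# (A^T x + H^T y); A is the rank-3 design matrix of the six pairwise
# differences, the constraint H beta = y removes the rank defect.
import numpy as np

A = np.array([[1, -1,  0,  0],
              [1,  0, -1,  0],
              [1,  0,  0, -1],
              [0,  1, -1,  0],
              [0,  1,  0, -1],
              [0,  0,  1, -1]], dtype=float)
H = np.array([[1.0, 1.0, 1.0, 1.0]])   # assumed: h_1 = ... = h_4 = 1
y = np.array([0.0])                    # assumed: deviations sum to zero
x = np.array([0.5, 1.1, 1.9, 0.6, 1.4, 0.8])   # invented measurements

M = A.T @ A + H.T @ H                  # for this H, M = 4 * I, so regular
beta = np.linalg.solve(M, A.T @ x + H.T @ y)

assert abs(beta.sum() - y[0]) < 1e-10  # the constraint H beta = y holds
```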


To abbreviate the notation, we resort to auxiliary matrices

    Γ = A (A^T A + H^T H)^{-1}  and  Λ = H (A^T A + H^T H)^{-1} ,

so that

    β̄ = Γ^T x̄ + Λ^T y .

Putting m = r(r−1)/2 and

    Γ = (γik) ,  Λ = (λ1k) ;  i = 1, …, m ;  k = 1, …, r ,

the components of the solution vector turn out as

    β̄k = Σ_{i=1}^{m} γik x̄i + λ1k y ;  k = 1, …, r .   (10.30)

Inserting the error equations

    x̄i = x0,i + (x̄i − μi) + fi ;  i = 1, …, m

in (10.30),

    β̄k = Σ_{i=1}^{m} γik [ x0,i + (x̄i − μi) + fi ] + λ1k y ;  k = 1, …, r ,   (10.31)

issues the flow of true values β0,k of the components β̄k,

    β0,k = Σ_{i=1}^{m} γik x0,i + λ1k y ;  k = 1, …, r ,   (10.32)

and, beyond, the propagated systematic errors

    fβ̄k = Σ_{i=1}^{m} γik fi ;  k = 1, …, r .   (10.33)

Subtracting (10.30) from

    β̄kl = Σ_{i=1}^{m} γik xil + λ1k y ;  l = 1, …, n ;  k = 1, …, r

produces the differences

    β̄kl − β̄k = Σ_{i=1}^{m} γik (xil − x̄i) ;  l = 1, …, n ;  k = 1, …, r .   (10.34)


Thus, the solution vector's empirical variances and covariances are

    sβ̄kβ̄k' = (1/(n−1)) Σ_{l=1}^{n} (β̄kl − β̄k)(β̄k'l − β̄k')

            = (1/(n−1)) Σ_{l=1}^{n} [ Σ_{i=1}^{m} γik (xil − x̄i) ] [ Σ_{j=1}^{m} γjk' (xjl − x̄j) ]

            = Σ_{i,j}^{m} γik γjk' sij ,  sβ̄kβ̄k ≡ sβ̄k² ;  k, k' = 1, …, r ,   (10.35)

where, again, the

    sij = (1/(n−1)) Σ_{l=1}^{n} (xil − x̄i)(xjl − x̄j) ;  i, j = 1, …, m

denote the elements of the empirical variance–covariance matrix s of the input data as given in (8.13). For convenience, we gather the sβ̄kβ̄k' within a matrix

    sβ̄ = ( sβ̄1β̄1 … sβ̄1β̄r ; … ; sβ̄rβ̄1 … sβ̄rβ̄r ) = Γ^T s Γ .   (10.36)

After all, the overall uncertainties

    uβ̄k = [tP(n−1)/√n] sβ̄k + fs,β̄k

        = [tP(n−1)/√n] √[ Σ_{i,j}^{m} γik γjk sij ] + Σ_{i=1}^{m} | γik | fs,i ;  k = 1, …, r   (10.37)

produce intervals β̄k ± uβ̄k; k = 1, …, r which are meant to localize the true values β0,k as given in (10.32).

11 Founding Pillars

The consistency of physical units and physical constants on the one hand, and the traceability of measuring processes on the other, constitute the founding pillars of metrology.

11.1 Consistency

Since the days of James Clerk Maxwell, metrologists have striven to link physical units to atomic constants. Should this ever be accomplished, the realizations of the individual primary standards would still be charged with imperfections, rendering the system of physical units and physical constants inconsistent. To clear discrepancies and contradictions, a least squares adjustment procuring consistent units and constants proves beneficial. The idea is to submit an appropriate set of fundamental physical constants, as determined experimentally, to a least squares adjustment. To emphasize: the least squares procedure is commissioned to numerically shift those constants belatedly, so that any inconsistencies due to experimental imperfections vanish. Clearly, this is a gimmick, however a perfectly admissible one, as long as the intervals

    β̄k − uβ̄k ≤ β0,k ≤ β̄k + uβ̄k ;  k = 1, …, r ,

as issued by least squares, continue to localize the respective true values. Let us once more allude to the role of weighting factors which, as we know, shift estimators and shrink the belonging uncertainties, Sect. 2.5. Nevertheless, with respect to the error model pursued here, the setting of weighting factors is in order, as the intervals spanned by the shifted estimators and shrunken uncertainties maintain the localization of true values, which has been discussed extensively. Hence, a least squares adjustment of fundamental physical constants is in a position to provide physics with a consistent system of physical units and constants.


11.2 Traceability

The everlasting pursuit of true values carries into effect what metrology is per se. Table 11.1 summarizes the varying appearances of traceability met so far.

Table 11.1. Traceability in Error Propagation and Basic Link-Ups

    Case                   Transfer of true values                                           Reference
    Definition             β0,k − β0,i = x0                                                  (1.1)
    Functions              x0,1, x0,2, …  →  Φ(x0,1, x0,2, …)                                Fig. 7.1
    Working standards      β0,k = β0 + Σ_{i=1}^{k} x0,i ;  k = 1, 2, …                       (9.17)
    Key comparisons        T0 = β0^(i) + x0^(i) ;  i = 1, …, m                               (9.20)
    Least squares          β0 = B^T x0                                                       (2.15)
    Calibration chains     β0,k = Σ_{i=1}^{m} bik x0,i + d0 Σ_{i=1}^{p} bik ;  k = 1, …, r   (10.14)
    Pairwise comparisons   β0,k = Σ_{i=1}^{m} γik x0,i + λ1k y ;  k = 1, …, r                (10.32)

As metrology heavily rests on calibration chains, the keeping of traceability proves essential for the consistency of units, measures, and physical constants.

Part V

Fitting of Straight Lines

12 Preliminaries

The fitting of straight lines is based on data pairs

    (x1, y1), (x2, y2), …, (xm, ym) ;  m > 2 .

12.1 Distinction of Cases

We shall address three fitting situations, Table 12.1:

Table 12.1. Fitting of Straight Lines, Three Cases

    Case    Abscissas                 Ordinates
    (i)     Error-free                Individual measurements
    (ii)    Error-free                Repeated measurements
    (iii)   Repeated measurements     Repeated measurements

Case (i) considers the metrological standard situation. While the abscissas are assumed correct, the ordinates are taken to be individual measurements. Each ordinate is supposed to hold a particular random error, stemming from one and the same normal distribution, and a common unknown systematic error. Case (ii), likewise relying on exact abscissas, accounts for repeated measurements of the ordinates. Here, the scattering of the random errors as well as the actual values of the unknown systematic errors may vary from ordinate to ordinate. Case (iii), finally, considers erroneous abscissas and erroneous ordinates with varying theoretical variances and varying unknown systematic errors.

There are differing paths to fit a straight line. We may consider distances parallel to the y-axis, parallel to the x-axis, or perpendicular to the straight line itself. However, regardless of the approach pursued, we expect the error model to issue uncertainties which localize the straight line's true parameters, though the uncertainties themselves may differ. For an illustration, we consider the y-intercept of a straight line fit y(x) = β1 + β2 x. Estimators β̄1 and uncertainties uβ̄1 of the paths may differ; nevertheless, the results, say,

    β̄1 ± uβ̄1 |path 1 ;   β̄1 ± uβ̄1 |path 2 ;   β̄1 ± uβ̄1 |path 3 ,

are likewise expected to localize the true value β0,1, Fig. 12.1.

Fig. 12.1. Localization property of results β̄1 ± uβ̄1 of varying paths; β0,1 marking the true value

12.2 True Straight Line

Let

    y(x) = β0,1 + β0,2 x   (12.1)

be the equation of a straight line with parameters β0,1 and β0,2 designating the y-intercept and the slope, respectively. Let there be m data pairs

    (x0,1, y0,1), (x0,2, y0,2), …, (x0,m, y0,m)   (12.2)

fulfilling (12.1), in matrix form

    A β0 = y0   (12.3)

with

    A = ( 1 x0,1 ; 1 x0,2 ; ⋯ ; 1 x0,m ) ,  β0 = ( β0,1 ; β0,2 ) ,  y0 = ( y0,1 ; y0,2 ; ⋯ ; y0,m ) .   (12.4)

Given the design matrix A has rank 2, the linear system (12.3) reproduces

    β0 = B^T y0 ,  B = A (A^T A)^{-1} .   (12.5)


We address (12.1) as the true straight line, (12.2) as the true input data, and β0 as the true solution vector. On substituting erroneous input data for the true ones, (12.3) breaks down. As measured data suffer from measuring errors, the true straight line proves inaccessible. Nevertheless, we may fit a least squares line to the erroneous input data and assess the true parameters β0,1, β0,2 via estimators β̄1, β̄2 and associated uncertainties uβ̄1, uβ̄2. Should no set of true data exist, the adjustment would be ill-defined right from the outset.

This, in fact, breeds a problem: the least squares solution vector, being due to an orthogonal projection, is quite insensitive to the metrological properties of the input data. Thus, whenever the design matrix has full rank there is a solution vector, be the input data meaningful or not. This observation, however, does not abate the usefulness of the method of least squares per se; rather, it places the responsibility on the experimenter: the method of least squares is limited to accomplishing a smart reallocation of measurement errors and is in no way apt to cure inappropriate physical conditions. After all, the existence of a true straight line is a must.

13 Straight Lines: Case (i)

Case (i), Table 12.1 assumes correct abscissas and individually measured ordinates.

13.1 Fitting Conditions

Let there be given m > 2 data pairs

    (x0,1, y1), (x0,2, y2), …, (x0,m, ym)   (13.1)

with error-free abscissas x0,i and measured, i.e. erroneous, ordinates

    yi = y0,i + (yi − μyi) + fyi ;  i = 1, …, m .   (13.2)

As usual, we formally assign random variables Yi to the measured ordinates yi, so that the μyi denote the expectations E{Yi} of the Yi. We suppose the random errors (yi − μyi); i = 1, …, m to stem from one and the same normal distribution and thus to have a common theoretical variance, say, σy². At present, we let each of the ordinates be charged by the same unknown systematic error,

    fyi = fy ;  i = 1, …, m ,  −fs,y ≤ fy ≤ fs,y .   (13.3)

As yet the uncertainties uyi of the ordinates are unknown. Thus we are not in a position to localize the true values y0,i of the yi via

    yi ± uyi ;  i = 1, …, m .

We shall, however, be in a position to deliver the uyi belatedly.

13.2 Orthogonal Projection

The inconsistent, over-determined linear system to be submitted to least squares reads

    β1 + β2 x0,i ≈ yi ;  i = 1, …, m > 2 .   (13.4)

We are looking for a least squares line

    y(x) = β̄1 + β̄2 x   (13.5)

fitting the data (13.1). With

    A = ( 1 x0,1 ; 1 x0,2 ; ⋯ ; 1 x0,m ) ,  β = ( β1 ; β2 ) ,  y = ( y1 ; y2 ; ⋯ ; ym )

the matrix notation of (13.4) is

    A β ≈ y .   (13.6)

The orthogonal projection of the vector y of observations onto the column space of the matrix A produces the solution vector

    β̄ = B^T y ,  β̄k = Σ_{i=1}^{m} bik yi ;  k = 1, 2 ,   (13.7)

where B = A (A^T A)^{-1} = (bik); i = 1, …, m; k = 1, 2, the matrix elements being

    bi1 = (1/D) [ Σ_{j=1}^{m} x0,j² − x0,i Σ_{j=1}^{m} x0,j ] ,
    bi2 = (1/D) [ − Σ_{j=1}^{m} x0,j + m x0,i ] ;  i = 1, …, m ,   (13.8)

with

    D = | A^T A | = m Σ_{j=1}^{m} x0,j² − [ Σ_{j=1}^{m} x0,j ]² .


13.3 Uncertainties of the Input Data

The presence of unknown systematic errors, as specified in (13.3), does not suspend the minimized sum of squared residuals

    Q̄ = (y − Aβ̄)^T (y − Aβ̄)   (13.9)

from issuing an estimate

    sy² = Q̄ / (m − 2)   (13.10)

of the theoretical variance of the input data, E{Sy²} = σy². The empirical variance sy² has ν = m − 2 degrees of freedom. Let

    f = fy ( 1 ⋯ 1 )^T

designate an m × 1 vector. Then, for (13.10) to be valid, the condition

    f = P f ,  P = A (A^T A)^{-1} A^T

has to be fulfilled [28]. Disregarding random errors for the moment, we have

    A ( β0,1 + fy ; β0,2 )^T = ( y0,1 ; ⋯ ; y0,m )^T + fy ( 1 ; ⋯ ; 1 )^T

and thus

    A ( fy ; 0 )^T = fy ( 1 ; ⋯ ; 1 )^T = f .

Left-multiplying by P and observing P A = A yields

    A ( fy ; 0 )^T = P f ,

so that

    f = P f .   (13.11)

Hence, we may narrow down the positions of the expectations

    E{Yi} = μyi ;  i = 1, …, m

by confidence intervals

    yi − tP(m−2) sy ≤ μyi ≤ yi + tP(m−2) sy ;  i = 1, …, m   (13.12)

of probability P. But then, the overall uncertainties of the measured ordinates yi; i = 1, …, m turn out to be

    uyi = tP(m−2) sy + fs,y ;  i = 1, …, m .   (13.13)

After all, the intervals

    yi − uyi ≤ y0,i ≤ yi + uyi ;  i = 1, …, m   (13.14)

are intended to localize the true values y0,i. Albeit our initial knowledge was meager and implied no hints at suchlike statements, our requesting the yi to establish a straight line obviously revealed this information. To emphasize, the proceeding applies only given the yi; i = 1, …, m are charged by one and the same systematic error.

13.4 Uncertainties of the Components of the Solution Vector

Random Errors

Though we have got no repeated measurements, we may still endue β̄ with an empirical variance–covariance matrix. Let us first state the expectations of the components β̄_1 and β̄_2,

\[ E\{\bar\beta_1\} = \mu_{\bar\beta_1} = \sum_{i=1}^m b_{i1}\, \mu_{y_i}, \qquad E\{\bar\beta_2\} = \mu_{\bar\beta_2} = \sum_{i=1}^m b_{i2}\, \mu_{y_i} \tag{13.15} \]

where

\[ \mu_{y_i} = E\{Y_i\} = y_{0,i} + f_y . \tag{13.16} \]

This brings forth the theoretical variances and the theoretical covariance

\[ \sigma_{\bar\beta_1}^2 = E\big\{(\bar\beta_1 - \mu_{\bar\beta_1})^2\big\} = E\Big\{\Big[\sum_{i=1}^m b_{i1}(Y_i - \mu_{y_i})\Big]^2\Big\} = \sigma_y^2 \sum_{i=1}^m b_{i1}^2 \]

\[ \sigma_{\bar\beta_1\bar\beta_2} = \sigma_y^2 \sum_{i=1}^m b_{i1} b_{i2}, \qquad \sigma_{\bar\beta_2}^2 = \sigma_y^2 \sum_{i=1}^m b_{i2}^2 . \]

Obviously, the empirical counterparts are

\[ s_{\bar\beta_1}^2 = s_y^2 \sum_{i=1}^m b_{i1}^2, \qquad s_{\bar\beta_1\bar\beta_2} = s_y^2 \sum_{i=1}^m b_{i1} b_{i2}, \qquad s_{\bar\beta_2}^2 = s_y^2 \sum_{i=1}^m b_{i2}^2 . \tag{13.17} \]

After all, the empirical variance–covariance matrix of the solution vector turns out to be

\[ s_{\bar\beta} = \begin{pmatrix} s_{\bar\beta_1\bar\beta_1} & s_{\bar\beta_1\bar\beta_2} \\ s_{\bar\beta_2\bar\beta_1} & s_{\bar\beta_2\bar\beta_2} \end{pmatrix}; \qquad s_{\bar\beta_1\bar\beta_1} \equiv s_{\bar\beta_1}^2, \quad s_{\bar\beta_2\bar\beta_2} \equiv s_{\bar\beta_2}^2, \]

the elements of which have degrees of freedom ν = m − 2.

Systematic Errors

The formal decomposition

\[ \bar\beta_k = \sum_{i=1}^m b_{ik}\,\big[\, y_{0,i} + (y_i - \mu_{y_i}) + f_y \,\big]; \quad k = 1, 2 \tag{13.18} \]

issues the propagated systematic errors

\[ f_{\bar\beta_k} = f_y \sum_{i=1}^m b_{ik}; \quad k = 1, 2 . \]

As

\[ \sum_{i=1}^m b_{i1} = 1, \qquad \sum_{i=1}^m b_{i2} = 0, \tag{13.19} \]

we observe

\[ f_{\bar\beta_1} = f_y, \qquad f_{\bar\beta_2} = 0 . \tag{13.20} \]

Overall Uncertainties

We localize the parameters μ_{β̄_1} = β_{0,1} + f_y and μ_{β̄_2} = β_{0,2} as given in (13.15) via confidence intervals

\[ \bar\beta_1 - t_P(m-2)\, s_{\bar\beta_1} \le \mu_{\bar\beta_1} \le \bar\beta_1 + t_P(m-2)\, s_{\bar\beta_1} \]
\[ \bar\beta_2 - t_P(m-2)\, s_{\bar\beta_2} \le \mu_{\bar\beta_2} \le \bar\beta_2 + t_P(m-2)\, s_{\bar\beta_2} \]

of probability P. Hence, the uncertainties of the components β̄_1 and β̄_2 take the form

\[ u_{\bar\beta_1} = t_P(m-2)\, s_{\bar\beta_1} + f_{s,y}, \qquad u_{\bar\beta_2} = t_P(m-2)\, s_{\bar\beta_2} . \tag{13.21} \]

The unknown systematic error shifts the straight line parallel to itself. As this does not affect the slope, f_y does not enter u_{β̄_2}.
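The weights b_ik and the identities (13.19) can be checked numerically; for the straight-line design B = A(AᵀA)⁻¹ has the closed form used below. Data and the value of s_y are assumed for illustration.

```python
# Sketch of Sect. 13.4: the weights b_ik of the orthogonal projection, the
# identities sum b_i1 = 1 and sum b_i2 = 0 behind (13.19)/(13.20), and the
# empirical variances/covariance (13.17).
import math

x = [0.0, 1.0, 2.0, 3.0, 4.0]              # error-free abscissas (assumed)
m = len(x)
Sx, Sxx = sum(x), sum(xi * xi for xi in x)
D = m * Sxx - Sx * Sx                      # |A^T A|

b1 = [(Sxx - xi * Sx) / D for xi in x]     # b_{i1}
b2 = [(m * xi - Sx) / D for xi in x]       # b_{i2}

s_y = 0.175                                # assumed, e.g. from the residuals
s_beta1 = s_y * math.sqrt(sum(b * b for b in b1))
s_beta2 = s_y * math.sqrt(sum(b * b for b in b2))
s_b1b2 = s_y ** 2 * sum(p * q for p, q in zip(b1, b2))

print(sum(b1), sum(b2))   # 1.0 and 0.0: a common offset f_y shifts beta1
                          # one-to-one and leaves the slope beta2 untouched
```

The column sums explain (13.20) directly: propagating a common systematic error f_y through the weights reproduces f_y in the intercept and cancels in the slope.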


13.5 Uncertainty Band

Random Errors

Inserting the formal decompositions

\[ \bar\beta_k = \sum_{i=1}^m b_{ik}\, y_i = \beta_{0,k} + \sum_{i=1}^m b_{ik}(y_i - \mu_{y_i}) + f_y \sum_{i=1}^m b_{ik}; \quad k = 1, 2 \tag{13.22} \]

into the fitted straight line y(x) = β̄_1 + β̄_2 x produces

\[ y(x) = \beta_{0,1} + \beta_{0,2}\, x + \sum_{i=1}^m (b_{i1} + b_{i2} x)(y_i - \mu_{y_i}) + f_y . \tag{13.23} \]

The expectation E{Y(x)} = μ_{y(x)} = β_{0,1} + β_{0,2} x + f_y induces us to introduce the x-dependent theoretical variance

\[ \sigma_{y(x)}^2 = E\big\{(Y(x) - \mu_{y(x)})^2\big\} = \sigma_y^2 \sum_{i=1}^m (b_{i1} + b_{i2} x)^2; \qquad \sigma_y^2 = E\big\{(Y_i - \mu_{y_i})^2\big\} . \]

Thus

\[ t_P(m-2)\, s_y \sqrt{\sum_{i=1}^m (b_{i1} + b_{i2} x)^2} \tag{13.24} \]

addresses the uncertainty due to random errors.

Systematic Errors

The formal decomposition (13.23) issues

\[ f_{y(x)} = f_y . \tag{13.25} \]

Overall Uncertainty

Combining (13.24) with the worst-case estimation of (13.25) puts forth the uncertainty band y(x) ± u_{y(x)},

\[ u_{y(x)} = t_P(m-2)\, s_y \sqrt{\sum_{i=1}^m (b_{i1} + b_{i2} x)^2} + f_{s,y} . \tag{13.26} \]

We expect the bordering lines, set up in symmetry to the least squares line (13.5), to localize the true straight line (12.1).
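The x-dependent factor in (13.24) admits a closed form for the straight-line design, Σ_i (b_i1 + b_i2 x)² = 1/m + (x − x̄)²/S_xx, so the band is narrowest at x = x̄ and widens hyperbolically. A numeric check under assumed data, Student factor and systematic bound:

```python
# Sketch of the uncertainty band (13.26): verify the design identity
# sum_i (b_i1 + b_i2 x)^2 = 1/m + (x - xbar)^2 / S_xx and evaluate u_y(x).
import math

x = [0.0, 1.0, 2.0, 3.0, 4.0]
m = len(x)
Sx, Sxx = sum(x), sum(xi * xi for xi in x)
D = m * Sxx - Sx * Sx
xbar = Sx / m
sxx = Sxx - m * xbar ** 2                  # centred sum of squares

b1 = [(Sxx - xi * Sx) / D for xi in x]
b2 = [(m * xi - Sx) / D for xi in x]

def band_halfwidth(xq, s_y=0.175, t_P=3.18, f_s_y=0.2):
    """u_y(x) = t_P(m-2) s_y sqrt(sum (b_i1 + b_i2 x)^2) + f_{s,y} (assumed s_y, t_P, f_{s,y})."""
    g = math.sqrt(sum((p + q * xq) ** 2 for p, q in zip(b1, b2)))
    return t_P * s_y * g + f_s_y

for xq in (0.0, 2.0, 4.0):
    lhs = sum((p + q * xq) ** 2 for p, q in zip(b1, b2))
    rhs = 1.0 / m + (xq - xbar) ** 2 / sxx
    print(xq, lhs, rhs, band_halfwidth(xq))
```

The constant term f_{s,y} merely shifts both bordering lines outward by the same amount, in keeping with (13.25).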


13.6 EP-Region

The EP-region localizes the pair of true values (β_{0,1}, β_{0,2}) with respect to the pair of estimators (β̄_1, β̄_2).

Confidence Ellipse

Combining

\[ \mu_{\bar\beta_1} = \beta_{0,1} + f_y, \qquad \mu_{\bar\beta_2} = \beta_{0,2} \]

with

\[ \bar\beta_k = \sum_{i=1}^m b_{ik}\,\big[\, y_{0,i} + (y_i - \mu_{y_i}) + f_y \,\big]; \quad k = 1, 2 \]

yields

\[ \bar\beta_k = \mu_{\bar\beta_k} + \sum_{i=1}^m b_{ik}(y_i - \mu_{y_i}); \quad k = 1, 2 . \]

As discussed in Appendix C, Hotelling's ellipse is given by

\[ s_{\bar\beta_2\bar\beta_2}(\bar\beta_1 - \mu_{\bar\beta_1})^2 - 2 s_{\bar\beta_1\bar\beta_2}(\bar\beta_1 - \mu_{\bar\beta_1})(\bar\beta_2 - \mu_{\bar\beta_2}) + s_{\bar\beta_1\bar\beta_1}(\bar\beta_2 - \mu_{\bar\beta_2})^2 = t_P^2(2, m-2)\, |s_{\bar\beta}| . \]

From this we draw the confidence ellipse

\[ s_{\bar\beta_2\bar\beta_2}(\beta_1 - \bar\beta_1)^2 - 2 s_{\bar\beta_1\bar\beta_2}(\beta_1 - \bar\beta_1)(\beta_2 - \bar\beta_2) + s_{\bar\beta_1\bar\beta_1}(\beta_2 - \bar\beta_2)^2 = t_P^2(2, m-2)\, |s_{\bar\beta}|, \tag{13.27} \]

being centered in (β̄_1, β̄_2). We expect the confidence ellipse to localize the point

\[ \mu_{\bar\beta} = \begin{pmatrix} \mu_{\bar\beta_1} \\ \mu_{\bar\beta_2} \end{pmatrix} \tag{13.28} \]

with probability P.

Security Polygon

Due to the measuring conditions of error-free abscissas and erroneous ordinates charged by one and the same unknown systematic error, the security polygon degenerates into an interval

\[ -f_{s,y} \le f_{\bar\beta_1} \le f_{s,y}, \qquad f_{\bar\beta_2} = 0 . \tag{13.29} \]

For convenience, let us visualize this interval as a "stick" of length 2 f_{s,y}.


Mergence of Ellipse and Interval

The mergence of the confidence ellipse (13.27) and the degenerated security polygon (13.29) is discussed in Appendix G. Figure 13.1 displays the true straight line (12.1), the fitted straight line (13.5), the uncertainties (13.21) of the estimators β̄_1, β̄_2, the uncertainty band (13.26) and, finally, the EP-region, meant to localize the tuple of true values β_{0,1} and β_{0,2}. The illustrations are based on simulated data implying known true values and, in particular, extensive graphical scale transformations, Appendix A.

Equivalence of Uncertainty Band and EP-Region

Intuitively, we expect the uncertainty band and the EP-region as depicted in Fig. 13.1 to be equivalent. Remarkably enough, it proves doable to transfer one into the other. The procedure is outlined in Chap. 16.


Fig. 13.1. Straight lines, case (i). Left: least squares line, uncertainty band, and true straight line (dashed ). Top right: uncertainty intervals localizing the straight line’s true parameters β0,1 and β0,2 . Equal systematic errors do not affect β¯2 . Bottom right: EP-region, localizing the pair of true values β0,1 , β0,2

14 Straight Lines: Case (ii)

Case (ii), Table 12.1 assumes correct abscissas and repeatedly measured ordinates. As the empirical variances of the ordinates are now directly accessible, the scattering of the random errors may vary from ordinate to ordinate. Beyond that, varying unknown systematic errors are admitted. As a matter of course, the minimized sum of squared residuals still enters the construction of the solution vector; it proves, however, no longer serviceable with respect to assessing uncertainties.

14.1 Fitting Conditions

We refer to m > 2 data pairs

\[ (x_{0,1}, \bar y_1), \; (x_{0,2}, \bar y_2), \; \ldots, \; (x_{0,m}, \bar y_m) \tag{14.1} \]

with error-free abscissas and ordinates being arithmetic means

\[ \bar y_i = \frac{1}{n} \sum_{l=1}^n y_{il} = y_{0,i} + (\bar y_i - \mu_{\bar y_i}) + f_{\bar y_i}; \quad i = 1, \ldots, m \]

\[ \mu_{\bar y_i} = E\{\bar Y_i\}; \qquad -f_{s,\bar y_i} \le f_{\bar y_i} \le f_{s,\bar y_i} . \]

According to the idea to stick to well-defined measuring conditions, each of the ȳ_i implies the same number n of repeated measurements. The i-specific empirical variances

\[ s_{\bar y_i}^2 = \frac{1}{n-1} \sum_{l=1}^n (y_{il} - \bar y_i)^2; \quad i = 1, \ldots, m \]

and the limits ±f_{s,ȳ_i} of the unknown systematic errors issue the uncertainties u_{ȳ_i} of the input data. We have

\[ \bar y_i \pm u_{\bar y_i}, \qquad u_{\bar y_i} = \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar y_i} + f_{s,\bar y_i}; \quad i = 1, \ldots, m . \tag{14.2} \]

We expect the intervals

\[ \bar y_i - u_{\bar y_i} \le y_{0,i} \le \bar y_i + u_{\bar y_i}; \quad i = 1, \ldots, m \]

to localize the true values y_{0,i} of the ordinates.
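The input-data uncertainty (14.2) for a single ordinate can be sketched as follows; the readings, the Student factor t_P(n−1) and the systematic bound are illustrative assumptions.

```python
# Sketch of the case (ii) input data: each ordinate is the mean of n repeats;
# its uncertainty combines t_P(n-1) s_{ybar}/sqrt(n) with a systematic bound.
import math
import statistics

readings = [4.9, 5.2, 5.0, 5.1]            # n = 4 repeats of one ordinate (assumed)
n = len(readings)
ybar = statistics.fmean(readings)          # arithmetic mean ybar_i
s_ybar = statistics.stdev(readings)        # empirical std dev, nu = n - 1

t_P = 3.18                                 # assumed t_P(n - 1) for P = 95 %
f_s = 0.05                                 # assumed bound on the systematic error
u_ybar = t_P / math.sqrt(n) * s_ybar + f_s
print(ybar, u_ybar)
```

In contrast to case (i), each point may carry its own s_{ȳ_i} and its own bound f_{s,ȳ_i}, so this computation is repeated per ordinate.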


14.2 Orthogonal Projection

Let us transfer the inconsistent, over-determined, linear system

\[ \beta_1 + \beta_2\, x_{0,i} \approx \bar y_i; \quad i = 1, \ldots, m > 2 \tag{14.3} \]

to A β ≈ ȳ where

\[ A = \begin{pmatrix} 1 & x_{0,1} \\ 1 & x_{0,2} \\ \cdots & \cdots \\ 1 & x_{0,m} \end{pmatrix}, \qquad \beta = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}, \qquad \bar y = \begin{pmatrix} \bar y_1 \\ \bar y_2 \\ \cdots \\ \bar y_m \end{pmatrix} . \]

The orthogonal projection yields

\[ \bar\beta = B^{\mathrm T} \bar y, \qquad \bar\beta_k = \sum_{i=1}^m b_{ik}\, \bar y_i; \quad k = 1, 2 \tag{14.4} \]

with

\[ B = A\,(A^{\mathrm T} A)^{-1} = (b_{ik}); \quad i = 1, \ldots, m; \; k = 1, 2 . \]

The components β̄_k; k = 1, 2 of the solution vector β̄ appoint the least squares line

\[ y(x) = \bar\beta_1 + \bar\beta_2\, x . \tag{14.5} \]
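The projection (14.4) amounts to solving the 2×2 normal equations (AᵀA)β = Aᵀȳ; for consistent data it reproduces the line exactly. A minimal sketch with assumed data (Cramer's rule on the normal equations):

```python
# Sketch of the orthogonal projection (14.4): solve (A^T A) beta = A^T ybar.
# With consistent "means" the true line beta = (0.5, 2) is recovered exactly.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
ybar = [0.5 + 2.0 * xi for xi in x]         # assumed consistent ordinate means
m = len(x)

Sx = sum(x)
Sxx = sum(xi * xi for xi in x)
Sy = sum(ybar)
Sxy = sum(xi * yi for xi, yi in zip(x, ybar))
D = m * Sxx - Sx * Sx                       # |A^T A|

beta1 = (Sxx * Sy - Sx * Sxy) / D           # intercept
beta2 = (m * Sxy - Sx * Sy) / D             # slope
print(beta1, beta2)
```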

14.3 Uncertainties of the Components of the Solution Vector

Random Errors

In view of Sect. 7.3, we notionally introduce an ensemble of least squares lines based on the data sets (y_{1l}, y_{2l}, …, y_{ml}); l = 1, …, n, issuing estimators

\[ \bar\beta_{kl} = \sum_{i=1}^m b_{ik}\, y_{il}; \quad l = 1, \ldots, n; \; k = 1, 2 . \tag{14.6} \]

Inserting

\[ \bar y_i = \frac{1}{n} \sum_{l=1}^n y_{il}; \quad i = 1, \ldots, m \]

into (14.4) yields

\[ \bar\beta_k = \sum_{i=1}^m b_{ik} \Big( \frac{1}{n} \sum_{l=1}^n y_{il} \Big) = \frac{1}{n} \sum_{l=1}^n \sum_{i=1}^m b_{ik}\, y_{il} = \frac{1}{n} \sum_{l=1}^n \bar\beta_{kl}; \quad k = 1, 2 . \]

The differences

\[ \bar\beta_{kl} - \bar\beta_k = \sum_{i=1}^m b_{ik}(y_{il} - \bar y_i) \]

produce the elements

\[ s_{\bar\beta_k\bar\beta_{k'}} = \frac{1}{n-1} \sum_{l=1}^n (\bar\beta_{kl} - \bar\beta_k)(\bar\beta_{k'l} - \bar\beta_{k'}) = \frac{1}{n-1} \sum_{l=1}^n \Big[ \sum_{i=1}^m b_{ik}(y_{il} - \bar y_i) \Big] \Big[ \sum_{j=1}^m b_{jk'}(y_{jl} - \bar y_j) \Big] = \sum_{i,j=1}^m b_{ik}\, b_{jk'}\, s_{ij}; \quad k, k' = 1, 2 \]

of the empirical variance–covariance matrix of the solution vector, in which the

\[ s_{ij} = \frac{1}{n-1} \sum_{l=1}^n (y_{il} - \bar y_i)(y_{jl} - \bar y_j); \quad i, j = 1, \ldots, m \]

designate the elements

\[ s = (s_{ij}); \quad i, j = 1, \ldots, m \tag{14.7} \]

of the empirical variance–covariance matrix of the input data, each having degrees of freedom ν = n − 1. After all, we have

\[ s_{\bar\beta} = \begin{pmatrix} s_{\bar\beta_1\bar\beta_1} & s_{\bar\beta_1\bar\beta_2} \\ s_{\bar\beta_2\bar\beta_1} & s_{\bar\beta_2\bar\beta_2} \end{pmatrix} = B^{\mathrm T} s\, B, \qquad s_{\bar\beta_k\bar\beta_k} \equiv s_{\bar\beta_k}^2 . \tag{14.8} \]

Systematic Errors

The formal decompositions

\[ \bar\beta_k = \sum_{i=1}^m b_{ik}\,\big[\, y_{0,i} + (\bar y_i - \mu_{\bar y_i}) + f_{\bar y_i} \,\big]; \quad k = 1, 2 \tag{14.9} \]


issue the propagated systematic errors

\[ f_{\bar\beta_k} = \sum_{i=1}^m b_{ik}\, f_{\bar y_i}; \quad k = 1, 2, \tag{14.10} \]

their worst-case estimations being

\[ f_{s,\bar\beta_k} = \sum_{i=1}^m |\, b_{ik} \,|\, f_{s,\bar y_i}; \quad k = 1, 2 . \tag{14.11} \]

In case of equal f_{ȳ_i} and f_{s,ȳ_i},

\[ f_{\bar y_i} = f_y, \quad f_{s,\bar y_i} = f_{s,y}; \quad i = 1, \ldots, m; \qquad -f_{s,y} \le f_y \le f_{s,y}, \tag{14.12} \]

due to (13.19) we have

\[ f_{\bar\beta_1} = f_y, \qquad f_{\bar\beta_2} = 0 . \tag{14.13} \]

Confidence Intervals and Overall Uncertainties

We localize the expectations

\[ E\{\bar\beta_1\} = \mu_{\bar\beta_1} = \beta_{0,1} + f_{\bar\beta_1}, \qquad E\{\bar\beta_2\} = \mu_{\bar\beta_2} = \beta_{0,2} + f_{\bar\beta_2} \tag{14.14} \]

through confidence intervals

\[ \bar\beta_k - \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar\beta_k} \le \mu_{\bar\beta_k} \le \bar\beta_k + \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar\beta_k}; \quad k = 1, 2 \]

of probability P. Thus, the overall uncertainties of the β̄_k take the form

\[ u_{\bar\beta_k} = \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar\beta_k} + f_{s,\bar\beta_k}; \quad k = 1, 2 . \tag{14.15} \]

Equal systematic errors give way to

\[ u_{\bar\beta_1} = \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar\beta_1} + f_{s,y}, \qquad u_{\bar\beta_2} = \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar\beta_2} \tag{14.16} \]

as f_{β̄_2} = 0. At any rate, the result reads β̄_k ± u_{β̄_k}; k = 1, 2.
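The ensemble construction behind (14.8) can be verified numerically: fitting one notional line per repeat l and taking the empirical covariance of the resulting β̄_kl gives exactly BᵀsB, with s built from the raw data. All readings below are assumed values.

```python
# Sketch of Sect. 14.3: the covariance of the per-repeat estimators beta_kl
# equals B^T s B -- the identity behind (14.8).  m = 3 points, n = 3 repeats.
x = [0.0, 1.0, 2.0]
y = [[1.0, 1.2, 1.1],                      # y_{il}: row i = point, col l = repeat
     [3.1, 2.9, 3.0],
     [5.0, 5.3, 4.9]]
m, n = len(x), len(y[0])

Sx, Sxx = sum(x), sum(xi * xi for xi in x)
D = m * Sxx - Sx * Sx
b = [[(Sxx - xi * Sx) / D, (m * xi - Sx) / D] for xi in x]   # b_{ik}

ybar = [sum(row) / n for row in y]

# route 1: ensemble of per-repeat estimators beta_kl and their covariance
beta_l = [[sum(b[i][k] * y[i][l] for i in range(m)) for k in range(2)]
          for l in range(n)]
beta = [sum(bl[k] for bl in beta_l) / n for k in range(2)]
cov_ens = [[sum((bl[k] - beta[k]) * (bl[kk] - beta[kk]) for bl in beta_l) / (n - 1)
            for kk in range(2)] for k in range(2)]

# route 2: propagate the data's variance-covariance matrix s_ij through B
s = [[sum((y[i][l] - ybar[i]) * (y[j][l] - ybar[j]) for l in range(n)) / (n - 1)
      for j in range(m)] for i in range(m)]
cov_prop = [[sum(b[i][k] * b[j][kk] * s[i][j] for i in range(m) for j in range(m))
             for kk in range(2)] for k in range(2)]
print(cov_ens, cov_prop)
```

The uncertainty (14.15) then divides the square roots of the diagonal by √n via the factor t_P(n−1)/√n.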

14.4 Uncertainty Band

Random Errors

The β̄_{kl} as defined in (14.6) suggest an ensemble of n straight lines

\[ y_l(x) = \bar\beta_{1l} + \bar\beta_{2l}\, x; \quad l = 1, \ldots, n . \]

Summing over l reestablishes the least squares line

\[ y(x) = \frac{1}{n} \sum_{l=1}^n y_l(x) = \bar\beta_1 + \bar\beta_2\, x . \]

Due to

\[ \bar\beta_{kl} - \bar\beta_k = \sum_{i=1}^m b_{ik}(y_{il} - \bar y_i) \]

we have

\[ y_l(x) - y(x) = (\bar\beta_{1l} - \bar\beta_1) + (\bar\beta_{2l} - \bar\beta_2)\, x = \sum_{i=1}^m (b_{i1} + b_{i2} x)(y_{il} - \bar y_i) . \]

Hence, for any fixed x the empirical variance is given by

\[ s_{y(x)}^2 = \frac{1}{n-1} \sum_{l=1}^n \big( y_l(x) - y(x) \big)^2 = b^{\mathrm T} s\, b . \tag{14.17} \]

Here

\[ b = \big[\, (b_{11} + b_{12} x) \;\; (b_{21} + b_{22} x) \;\; \ldots \;\; (b_{m1} + b_{m2} x) \,\big]^{\mathrm T} \]

denotes an auxiliary vector and s the matrix (14.7). After all, the term

\[ \frac{t_P(n-1)}{\sqrt{n}}\, s_{y(x)} \tag{14.18} \]

devises the uncertainty due to random errors.


Systematic Errors

Inserting

\[ \bar\beta_k = \beta_{0,k} + \sum_{i=1}^m b_{ik}(\bar y_i - \mu_{\bar y_i}) + \sum_{i=1}^m b_{ik}\, f_{\bar y_i}; \quad k = 1, 2 \tag{14.19} \]

into y(x) = β̄_1 + β̄_2 x yields

\[ y(x) = \beta_{0,1} + \beta_{0,2}\, x + \sum_{i=1}^m (b_{i1} + b_{i2} x)(\bar y_i - \mu_{\bar y_i}) + \sum_{i=1}^m (b_{i1} + b_{i2} x)\, f_{\bar y_i} . \tag{14.20} \]

Hence, for any fixed x the propagated systematic error is issued as

\[ f_{y(x)} = \sum_{i=1}^m (b_{i1} + b_{i2} x)\, f_{\bar y_i}, \]

its worst-case estimation being

\[ f_{s,y(x)} = \sum_{i=1}^m |\, b_{i1} + b_{i2} x \,|\, f_{s,\bar y_i} . \tag{14.21} \]

Ultimately, due to (13.19), equal systematic errors as defined in (14.12) issue

\[ f_{y(x)} = f_y \quad \text{and hence} \quad f_{s,y(x)} = f_{s,y} . \tag{14.22} \]

Overall Uncertainty

From (14.18) and (14.21) we draw the uncertainty band y(x) ± u_{y(x)},

\[ u_{y(x)} = \frac{t_P(n-1)}{\sqrt{n}}\, s_{y(x)} + \sum_{i=1}^m |\, b_{i1} + b_{i2} x \,|\, f_{s,\bar y_i} . \tag{14.23} \]

In case of equal systematic errors this turns into

\[ u_{y(x)} = \frac{t_P(n-1)}{\sqrt{n}}\, s_{y(x)} + f_{s,y} . \tag{14.24} \]

At any rate, the uncertainty band is meant to localize the true straight line (12.1).

14.5 EP-Region

The EP-region localizes the pair of true values (β_{0,1}, β_{0,2}) with respect to the pair of estimators (β̄_1, β̄_2).

Confidence Ellipse

Considering (14.9),

\[ \bar\beta_k = \mu_{\bar\beta_k} + \sum_{i=1}^m b_{ik}(\bar y_i - \mu_{\bar y_i}), \qquad \mu_{\bar\beta_k} = \beta_{0,k} + \sum_{i=1}^m b_{ik}\, f_{\bar y_i}; \quad k = 1, 2, \]

and s_{β̄} as given in (14.8), Hotelling's ellipse takes the form

\[ (\bar\beta - \mu_{\bar\beta})^{\mathrm T} s_{\bar\beta}^{-1} (\bar\beta - \mu_{\bar\beta}) = \frac{t_P^2(2, n-1)}{n} . \]

The associated confidence ellipse reads

\[ (\beta - \bar\beta)^{\mathrm T} s_{\bar\beta}^{-1} (\beta - \bar\beta) = \frac{t_P^2(2, n-1)}{n} . \tag{14.25} \]

Inserting

\[ s_{\bar\beta}^{-1} = \big( B^{\mathrm T} s\, B \big)^{-1} = \frac{1}{|s_{\bar\beta}|} \begin{pmatrix} s_{\bar\beta_2\bar\beta_2} & -s_{\bar\beta_1\bar\beta_2} \\ -s_{\bar\beta_2\bar\beta_1} & s_{\bar\beta_1\bar\beta_1} \end{pmatrix}, \qquad |s_{\bar\beta}| = s_{\bar\beta_1\bar\beta_1}\, s_{\bar\beta_2\bar\beta_2} - s_{\bar\beta_1\bar\beta_2}^2, \]

we obtain

\[ s_{\bar\beta_2\bar\beta_2}(\beta_1 - \bar\beta_1)^2 - 2 s_{\bar\beta_1\bar\beta_2}(\beta_1 - \bar\beta_1)(\beta_2 - \bar\beta_2) + s_{\bar\beta_1\bar\beta_1}(\beta_2 - \bar\beta_2)^2 = |s_{\bar\beta}|\, \frac{t_P^2(2, n-1)}{n} . \tag{14.26} \]

The confidence ellipse is expected to localize the point

\[ \mu_{\bar\beta} = \begin{pmatrix} \mu_{\bar\beta_1} \\ \mu_{\bar\beta_2} \end{pmatrix} \tag{14.27} \]

with probability P.

Security Polygon

While the f_{ȳ_1}, f_{ȳ_2}, …, f_{ȳ_m} are "scanning" the set of points lying within or on the faces of the m-dimensional hypercuboid

\[ -f_{s,\bar y_i} \le f_{\bar y_i} \le f_{s,\bar y_i}; \quad i = 1, \ldots, m, \]

the propagated systematic errors (14.10),

\[ f_{\bar\beta_k} = \sum_{i=1}^m b_{ik}\, f_{\bar y_i}; \quad k = 1, 2, \]

span a polygon, Appendix F. Putting, however, f_{ȳ_i} = f_y; f_{s,ȳ_i} = f_{s,y}; i = 1, …, m, the polygon degenerates into an interval

\[ -f_{s,y} \le f_{\bar\beta_1} \le f_{s,y}, \qquad f_{\bar\beta_2} = 0 . \tag{14.28} \]

For convenience, we take it as a "stick" of length 2 f_{s,y}, Appendix G.

Mergence of the Confidence Ellipse Either with an Interval or with a Polygon

The mergence of a confidence ellipse either with a non-degenerated or with a degenerated security polygon is addressed in Appendix G. Figure 14.1 displays the true straight line (12.1), the fitted straight line (14.5), the uncertainties (14.15) of the estimators β̄_1, β̄_2, the uncertainty band (14.23) and, finally, the EP-region, meant to localize the tuple of true values β_{0,1} and β_{0,2}. Figure 14.2 considers equal systematic errors and renders the true straight line (12.1), the fitted straight line (14.5), the uncertainties (14.16) of the estimators β̄_1, β̄_2, the uncertainty band (14.24) and, ultimately, the EP-region, intended to localize the tuple of true values β_{0,1} and β_{0,2}. The illustrations are based on simulated data implying known true values and, in particular, extensive graphical scale transformations, Appendix A.


Fig. 14.1. Straight line, case (ii). Left: least squares line, uncertainty band, and true straight line (dashed ). Top right: uncertainty intervals localizing the straight line’s true parameters β0,1 and β0,2 . Bottom right: EP-region, localizing the tuple of true values β0,1 , β0,2


Fig. 14.2. Straight lines, case (ii), equal systematic errors. Left: least squares line, uncertainty band, and true straight line (dashed ). Top right: uncertainty intervals localizing the straight line’s true parameters β0,1 and β0,2 . Equal systematic errors do not affect β¯2 . Bottom right: EP-region, localizing the tuple of true values β0,1 , β0,2

15 Straight Lines: Case (iii)

Case (iii), Table 12.1 considers erroneous abscissas and erroneous ordinates. Presumably this is the most realistic situation; it renders the assessment of uncertainties, however, somewhat intricate.

15.1 Fitting Conditions

Let there be m > 2 pairs of arithmetic means

\[ (\bar x_1, \bar y_1), \; (\bar x_2, \bar y_2), \; \ldots, \; (\bar x_m, \bar y_m) \tag{15.1} \]

each covering n repeated measurements

\[ \bar x_i = \frac{1}{n} \sum_{l=1}^n x_{il} = x_{0,i} + (\bar x_i - \mu_{\bar x_i}) + f_{\bar x_i}; \quad i = 1, \ldots, m \]

\[ E\{\bar X_i\} = \mu_{\bar x_i}; \qquad -f_{s,\bar x_i} \le f_{\bar x_i} \le f_{s,\bar x_i} \]

and

\[ \bar y_i = \frac{1}{n} \sum_{l=1}^n y_{il} = y_{0,i} + (\bar y_i - \mu_{\bar y_i}) + f_{\bar y_i}; \quad i = 1, \ldots, m \]

\[ E\{\bar Y_i\} = \mu_{\bar y_i}; \qquad -f_{s,\bar y_i} \le f_{\bar y_i} \le f_{s,\bar y_i} . \]

The empirical variances

\[ s_{\bar x_i}^2 = \frac{1}{n-1} \sum_{l=1}^n (x_{il} - \bar x_i)^2, \qquad s_{\bar y_i}^2 = \frac{1}{n-1} \sum_{l=1}^n (y_{il} - \bar y_i)^2; \quad i = 1, \ldots, m \]

and the error limits ±f_{s,x̄_i} and ±f_{s,ȳ_i} issue the uncertainties of the input data

\[ \bar x_i \pm u_{\bar x_i}, \qquad u_{\bar x_i} = \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar x_i} + f_{s,\bar x_i}; \quad i = 1, \ldots, m \tag{15.2} \]

\[ \bar y_i \pm u_{\bar y_i}, \qquad u_{\bar y_i} = \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar y_i} + f_{s,\bar y_i} . \]

The intervals x̄_i ± u_{x̄_i} and ȳ_i ± u_{ȳ_i} are intended to localize the true values x_{0,i} and y_{0,i}, respectively.


15.2 Orthogonal Projection

The input data produce an inconsistent, over-determined, linear system

\[ \beta_1 + \beta_2\, \bar x_i \approx \bar y_i; \quad i = 1, \ldots, m > 2, \tag{15.3} \]

in matrix form A β ≈ ȳ, where

\[ A = \begin{pmatrix} 1 & \bar x_1 \\ 1 & \bar x_2 \\ \cdots & \cdots \\ 1 & \bar x_m \end{pmatrix}, \qquad \beta = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}, \qquad \bar y = \begin{pmatrix} \bar y_1 \\ \bar y_2 \\ \cdots \\ \bar y_m \end{pmatrix} . \]

The orthogonal projection yields

\[ \bar\beta = B^{\mathrm T} \bar y, \qquad \bar\beta_k = \sum_{i=1}^m b_{ik}\, \bar y_i; \quad k = 1, 2 \tag{15.4} \]

with

\[ B = A\,(A^{\mathrm T} A)^{-1} = (b_{ik}); \quad i = 1, \ldots, m; \; k = 1, 2 . \]

The matrix B is given by

\[ B = \frac{1}{D} \begin{pmatrix} \sum_{j=1}^m \bar x_j^2 - \bar x_1 \sum_{j=1}^m \bar x_j & \; -\sum_{j=1}^m \bar x_j + m \bar x_1 \\[4pt] \sum_{j=1}^m \bar x_j^2 - \bar x_2 \sum_{j=1}^m \bar x_j & \; -\sum_{j=1}^m \bar x_j + m \bar x_2 \\ \cdots & \cdots \\ \sum_{j=1}^m \bar x_j^2 - \bar x_m \sum_{j=1}^m \bar x_j & \; -\sum_{j=1}^m \bar x_j + m \bar x_m \end{pmatrix} \tag{15.5} \]

in which D = |AᵀA|. The b_{ik} bring to bear the errors of the abscissas. Clearly, these errors, as well as the errors of the ordinates, will have to be considered in the error propagation to come. After all, the components β̄_k; k = 1, 2 of the solution vector β̄ devise the least squares line

\[ y(x) = \bar\beta_1 + \bar\beta_2\, x . \tag{15.6} \]
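The explicit entries of (15.5) can be checked against the normal-equation solution: both routes give the same β̄. The abscissa and ordinate means below are assumed values.

```python
# Sketch of (15.5): the weights b_ik of B = A (A^T A)^{-1} for the straight-line
# design, checked against beta from the normal equations (Cramer's rule).
xbar = [0.1, 0.9, 2.1, 2.9, 4.0]            # assumed abscissa means
ybar = [1.0, 3.2, 5.1, 7.0, 9.1]            # assumed ordinate means
m = len(xbar)

Sx, Sxx = sum(xbar), sum(v * v for v in xbar)
D = m * Sxx - Sx * Sx                       # D = |A^T A|

# route 1: explicit rows of B per (15.5), then beta = B^T ybar
b1 = [(Sxx - xi * Sx) / D for xi in xbar]
b2 = [(m * xi - Sx) / D for xi in xbar]
beta1 = sum(p * yi for p, yi in zip(b1, ybar))
beta2 = sum(q * yi for q, yi in zip(b2, ybar))

# route 2: normal equations (A^T A) beta = A^T ybar
Sy = sum(ybar)
Sxy = sum(xi * yi for xi, yi in zip(xbar, ybar))
beta1_ne = (Sxx * Sy - Sx * Sxy) / D
beta2_ne = (m * Sxy - Sx * Sy) / D
print(beta1, beta2)
```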


15.3 Series Expansion of the Solution Vector

Let us submit the β̄_k; k = 1, 2 to series expansions throughout a neighborhood of the point (x_{0,1}, …, x_{0,m}; y_{0,1}, …, y_{0,m}), firstly, with respect to the n points

\[ (x_{1l}, x_{2l}, \ldots, x_{ml};\; y_{1l}, y_{2l}, \ldots, y_{ml}); \quad l = 1, \ldots, n, \]

yielding

\[ \bar\beta_{kl}(x_{1l}, \ldots, x_{ml}; y_{1l}, \ldots, y_{ml}) = \bar\beta_k(x_{0,1}, \ldots, x_{0,m}; y_{0,1}, \ldots, y_{0,m}) + \sum_{i=1}^m \frac{\partial \bar\beta_k}{\partial x_{0,i}}(x_{il} - \mu_{\bar x_i}) + \sum_{i=1}^m \frac{\partial \bar\beta_k}{\partial y_{0,i}}(y_{il} - \mu_{\bar y_i}) + \sum_{i=1}^m \frac{\partial \bar\beta_k}{\partial x_{0,i}} f_{\bar x_i} + \sum_{i=1}^m \frac{\partial \bar\beta_k}{\partial y_{0,i}} f_{\bar y_i} + \cdots; \quad k = 1, 2, \tag{15.7} \]

and, secondly, with respect to the sample-dependent means (x̄_1, x̄_2, …, x̄_m; ȳ_1, ȳ_2, …, ȳ_m), issuing

\[ \bar\beta_k(\bar x_1, \ldots, \bar x_m; \bar y_1, \ldots, \bar y_m) = \bar\beta_k(x_{0,1}, \ldots, x_{0,m}; y_{0,1}, \ldots, y_{0,m}) + \sum_{i=1}^m \frac{\partial \bar\beta_k}{\partial x_{0,i}}(\bar x_i - \mu_{\bar x_i}) + \sum_{i=1}^m \frac{\partial \bar\beta_k}{\partial y_{0,i}}(\bar y_i - \mu_{\bar y_i}) + \sum_{i=1}^m \frac{\partial \bar\beta_k}{\partial x_{0,i}} f_{\bar x_i} + \sum_{i=1}^m \frac{\partial \bar\beta_k}{\partial y_{0,i}} f_{\bar y_i} + \cdots; \quad k = 1, 2 . \tag{15.8} \]

As there is no other choice, we approximate the derivatives in x_{0,1}, …, y_{0,m} through derivatives in x̄_1, …, ȳ_m. Further, for convenience, we assign the coefficients

\[ c_{i1} = \frac{\partial \bar\beta_1}{\partial \bar x_i}, \quad c_{i+m,1} = \frac{\partial \bar\beta_1}{\partial \bar y_i}, \quad c_{i2} = \frac{\partial \bar\beta_2}{\partial \bar x_i}, \quad c_{i+m,2} = \frac{\partial \bar\beta_2}{\partial \bar y_i}; \quad i = 1, \ldots, m \]

to an auxiliary matrix

\[ C^{\mathrm T} = \begin{pmatrix} c_{11} & c_{21} & \cdots & c_{2m,1} \\ c_{12} & c_{22} & \cdots & c_{2m,2} \end{pmatrix} . \tag{15.9} \]

The coefficients c_{ik} are given in Appendix B. Beyond that, we put

\[ v_{il} = x_{il}, \quad v_{i+m,l} = y_{il}; \qquad \bar v_i = \bar x_i, \quad \bar v_{i+m} = \bar y_i; \qquad \mu_i = \mu_{\bar x_i}, \quad \mu_{i+m} = \mu_{\bar y_i}; \]
\[ f_i = f_{\bar x_i}, \quad f_{i+m} = f_{\bar y_i}; \qquad f_{s,i} = f_{s,\bar x_i}, \quad f_{s,i+m} = f_{s,\bar y_i} . \]

After all, linearizing the expansions (15.7) and (15.8) produces

\[ \bar\beta_{kl} = \beta_{0,k} + \sum_{i=1}^{2m} c_{ik}(v_{il} - \mu_i) + \sum_{i=1}^{2m} c_{ik} f_i; \quad k = 1, 2; \; l = 1, \ldots, n \tag{15.10} \]

\[ \bar\beta_k = \beta_{0,k} + \sum_{i=1}^{2m} c_{ik}(\bar v_i - \mu_i) + \sum_{i=1}^{2m} c_{ik} f_i, \]

as β̄_k(x_{0,1}, …, y_{0,m}) = β_{0,k}. Subtraction yields

\[ \bar\beta_{kl} - \bar\beta_k = \sum_{i=1}^{2m} c_{ik}(v_{il} - \bar v_i); \quad k = 1, 2 . \tag{15.11} \]

Notionally, we consider an ensemble of least squares lines based on the respective l-th repeated measurements of the m measuring points i = 1, …, m, i.e. on

\[ (x_{1l}, \ldots, x_{ml};\; y_{1l}, \ldots, y_{ml}); \quad l = 1, \ldots, n, \]

so that

\[ \bar\beta_k = \frac{1}{n} \sum_{l=1}^n \bar\beta_{kl}; \quad k = 1, 2 . \tag{15.12} \]

While (15.11) devises the propagation of random errors, the propagated systematic errors are to be taken from

\[ f_{\bar\beta_k} = \sum_{i=1}^{2m} c_{ik} f_i; \quad k = 1, 2 . \tag{15.13} \]


15.4 Uncertainties of the Components of the Solution Vector

Random Errors

Invoking (15.11), the elements s_{β̄_k β̄_{k'}} of the empirical variance–covariance matrix

\[ s_{\bar\beta} = \begin{pmatrix} s_{\bar\beta_1\bar\beta_1} & s_{\bar\beta_1\bar\beta_2} \\ s_{\bar\beta_2\bar\beta_1} & s_{\bar\beta_2\bar\beta_2} \end{pmatrix}, \qquad s_{\bar\beta_k\bar\beta_k} \equiv s_{\bar\beta_k}^2 \]

of the solution vector β̄ take the form

\[ s_{\bar\beta_k\bar\beta_{k'}} = \frac{1}{n-1} \sum_{l=1}^n (\bar\beta_{kl} - \bar\beta_k)(\bar\beta_{k'l} - \bar\beta_{k'}) = \frac{1}{n-1} \sum_{l=1}^n \Big[ \sum_{i=1}^{2m} c_{ik}(v_{il} - \bar v_i) \Big] \Big[ \sum_{j=1}^{2m} c_{jk'}(v_{jl} - \bar v_j) \Big] = \sum_{i,j=1}^{2m} c_{ik}\, c_{jk'}\, s_{ij}; \quad k, k' = 1, 2, \]

in which the

\[ s_{ij} = \frac{1}{n-1} \sum_{l=1}^n (v_{il} - \bar v_i)(v_{jl} - \bar v_j); \quad i, j = 1, \ldots, 2m \]

designate the elements of the empirical variance–covariance matrix

\[ s = (s_{ij}); \quad i, j = 1, \ldots, 2m \tag{15.14} \]

of the input data, each having degrees of freedom ν = n − 1. Eventually, deploying (15.9), the empirical variance–covariance matrix of the solution vector is issued by

\[ s_{\bar\beta} = C^{\mathrm T} s\, C . \tag{15.15} \]

Systematic Errors

In case of equal systematic errors and equal error bounds,

\[ f_{\bar x_i} = f_x, \quad f_{s,\bar x_i} = f_{s,x}, \quad -f_{s,x} \le f_x \le f_{s,x}; \qquad f_{\bar y_i} = f_y, \quad f_{s,\bar y_i} = f_{s,y}, \quad -f_{s,y} \le f_y \le f_{s,y}; \quad i = 1, \ldots, m, \tag{15.16} \]


(15.13) passes into

\[ f_{\bar\beta_k} = \sum_{i=1}^m c_{ik} f_{\bar x_i} + \sum_{i=1}^m c_{i+m,k} f_{\bar y_i} = f_x \sum_{i=1}^m c_{ik} + f_y \sum_{i=1}^m c_{i+m,k} . \]

But as

\[ \sum_{i=1}^m c_{i1} = -\bar\beta_2, \quad \sum_{i=1}^m c_{i2} = 0, \quad \sum_{i=1}^m c_{i+m,1} = 1, \quad \sum_{i=1}^m c_{i+m,2} = 0, \tag{15.17} \]

the f_{β̄_k} get reduced to

\[ f_{\bar\beta_1} = -f_x \bar\beta_2 + f_y, \qquad f_{\bar\beta_2} = 0 . \tag{15.18} \]

Confidence Intervals and Overall Uncertainties

We localize the expectations

\[ E\{\bar\beta_1\} = \mu_{\bar\beta_1} = \beta_{0,1} + \sum_{i=1}^{2m} c_{i1} f_i, \qquad E\{\bar\beta_2\} = \mu_{\bar\beta_2} = \beta_{0,2} + \sum_{i=1}^{2m} c_{i2} f_i \]

by confidence intervals

\[ \bar\beta_k - \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar\beta_k} \le \mu_{\bar\beta_k} \le \bar\beta_k + \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar\beta_k}; \quad k = 1, 2 \]

of probability P. Hence, the overall uncertainties of the components of the solution vector turn out to be

\[ u_{\bar\beta_k} = \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar\beta_k} + \sum_{i=1}^{2m} |\, c_{ik} \,|\, f_{s,i}; \quad k = 1, 2 . \tag{15.19} \]

Equal systematic errors, by contrast, suggest

\[ u_{\bar\beta_1} = \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar\beta_1} + f_{s,x}\, |\bar\beta_2| + f_{s,y}, \qquad u_{\bar\beta_2} = \frac{t_P(n-1)}{\sqrt{n}}\, s_{\bar\beta_2} . \tag{15.20} \]

Equal systematic errors do not affect the slope. They shift, however, the straight line parallel to itself. At any rate, the final result reads β̄_k ± u_{β̄_k}; k = 1, 2.
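The sensitivity sums (15.17) have a simple geometric reading: shifting every abscissa by the same amount changes the intercept by −β̄₂ times the shift and leaves the slope untouched, while shifting every ordinate moves the intercept one-to-one. This can be checked by finite differences on a sketch fit; the data below are assumed values.

```python
# Sketch of (15.17): check the sensitivity sums of beta1, beta2 with respect to
# a common abscissa shift and a common ordinate shift via finite differences.
def fit(xs, ys):
    """Closed-form least squares intercept and slope."""
    m = len(xs)
    Sx, Sxx = sum(xs), sum(v * v for v in xs)
    Sy = sum(ys)
    Sxy = sum(a * b for a, b in zip(xs, ys))
    D = m * Sxx - Sx * Sx
    return (Sxx * Sy - Sx * Sxy) / D, (m * Sxy - Sx * Sy) / D

xbar = [0.1, 0.9, 2.1, 2.9, 4.0]
ybar = [1.0, 3.2, 5.1, 7.0, 9.1]
beta1, beta2 = fit(xbar, ybar)

h = 1e-6
# sum_i c_i1 = d beta1 / d(common x-shift) -> -beta2;  sum_i c_i2 -> 0
b1x, b2x = fit([v + h for v in xbar], ybar)
# sum_i c_{i+m,1} = d beta1 / d(common y-shift) -> 1;  sum_i c_{i+m,2} -> 0
b1y, b2y = fit(xbar, [v + h for v in ybar])
print((b1x - beta1) / h, (b2x - beta2) / h, (b1y - beta1) / h, (b2y - beta2) / h)
```

These four numbers reproduce (15.17) and hence (15.18): a common pair of offsets (f_x, f_y) propagates as f_{β̄₁} = −f_x β̄₂ + f_y, f_{β̄₂} = 0.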

15.5 Uncertainty Band

Random Errors

Let the β̄_{kl} as addressed in (15.10) induce an ensemble of n least squares lines

\[ y_l(x) = \bar\beta_{1l} + \bar\beta_{2l}\, x; \quad l = 1, \ldots, n, \]

which, following (15.12), reestablishes (15.6),

\[ y(x) = \frac{1}{n} \sum_{l=1}^n y_l(x) . \]

For any fixed x, the differences

\[ y_l(x) - y(x) = (\bar\beta_{1l} - \bar\beta_1) + (\bar\beta_{2l} - \bar\beta_2)\, x = \sum_{i=1}^{2m} (c_{i1} + c_{i2} x)(v_{il} - \bar v_i); \quad l = 1, \ldots, n \]

bring forth an empirical variance

\[ s_{y(x)}^2 = \frac{1}{n-1} \sum_{l=1}^n \big( y_l(x) - y(x) \big)^2 = c^{\mathrm T} s\, c . \tag{15.21} \]

Here, s denotes the empirical variance–covariance matrix as quoted in (15.14) and c an auxiliary vector

\[ c = (c_{11} + c_{12} x \;\;\; c_{21} + c_{22} x \;\;\; \cdots \;\;\; c_{2m,1} + c_{2m,2} x)^{\mathrm T} . \]

Systematic Errors

Inserting (15.10) into (15.6) produces

\[ y(x) = \beta_{0,1} + \beta_{0,2}\, x + \sum_{i=1}^{2m} (c_{i1} + c_{i2} x)(\bar v_i - \mu_i) + \sum_{i=1}^{2m} (c_{i1} + c_{i2} x) f_i . \]

Thus, for a given x the systematic error is given by

\[ f_{y(x)} = \sum_{i=1}^{2m} (c_{i1} + c_{i2} x) f_i, \tag{15.22} \]

the worst-case estimation of which being

\[ f_{s,y(x)} = \sum_{i=1}^{2m} |\, c_{i1} + c_{i2} x \,|\, f_{s,i} . \tag{15.23} \]

Considering equal systematic errors as introduced in (15.16), due to (15.17) we have

\[ f_{s,y(x)} = |\bar\beta_2|\, f_{s,x} + f_{s,y} . \tag{15.24} \]

Overall Uncertainty

After all, the uncertainty band takes the form y(x) ± u_{y(x)},

\[ u_{y(x)} = \frac{t_P(n-1)}{\sqrt{n}}\, s_{y(x)} + \sum_{i=1}^{2m} |\, c_{i1} + c_{i2} x \,|\, f_{s,i} . \tag{15.25} \]

For equal systematic errors this passes into

\[ u_{y(x)} = \frac{t_P(n-1)}{\sqrt{n}}\, s_{y(x)} + |\bar\beta_2|\, f_{s,x} + f_{s,y} . \tag{15.26} \]

At any rate, we expect the uncertainty band to hold the true straight line (12.1).

15.6 EP-Region

The EP-region localizes the pair of true values (β_{0,1}, β_{0,2}) with respect to the pair of estimators (β̄_1, β̄_2).


Confidence Ellipse

Let us rewrite (15.10) according to

\[ \bar\beta_k = \mu_{\bar\beta_k} + \sum_{i=1}^{2m} c_{ik}(\bar v_i - \mu_i), \qquad \mu_{\bar\beta_k} = \beta_{0,k} + \sum_{i=1}^{2m} c_{ik} f_i; \quad k = 1, 2 . \]

With s_{β̄} as given in (15.15), Hotelling's ellipse reads

\[ (\bar\beta - \mu_{\bar\beta})^{\mathrm T} s_{\bar\beta}^{-1} (\bar\beta - \mu_{\bar\beta}) = \frac{t_P^2(2, n-1)}{n} . \]

Hence, the confidence ellipse

\[ (\beta - \bar\beta)^{\mathrm T} s_{\bar\beta}^{-1} (\beta - \bar\beta) = \frac{t_P^2(2, n-1)}{n} \tag{15.27} \]

localizes the point

\[ \mu_{\bar\beta} = \begin{pmatrix} \mu_{\bar\beta_1} \\ \mu_{\bar\beta_2} \end{pmatrix} \tag{15.28} \]

with probability P.

Security Polygon

Letting the f_i "scan" the set of points lying within or on the faces of the 2m-dimensional hypercuboid

\[ -f_{s,i} \le f_i \le f_{s,i}; \quad i = 1, \ldots, 2m, \]

the propagated systematic errors

\[ f_{\bar\beta_k} = \sum_{i=1}^{2m} c_{ik} f_i; \quad k = 1, 2 \tag{15.29} \]

span a polygon. In case of equal systematic errors we have, considering (15.18),

\[ f_{\bar\beta_1} = -f_x \bar\beta_2 + f_y, \qquad f_{\bar\beta_2} = 0 . \]

This causes the security polygon to degenerate into an interval

\[ -\big( f_{s,x}\, |\bar\beta_2| + f_{s,y} \big) \le f_{\bar\beta_1} \le f_{s,x}\, |\bar\beta_2| + f_{s,y} . \tag{15.30} \]

Just for convenience, we notionally conceive (15.30) as a "stick" of length 2(f_{s,x} |β̄_2| + f_{s,y}).


Mergence of Ellipse Either with Interval or with Polygon

The merging of a confidence ellipse either with a non-degenerated or with a degenerated security polygon is addressed in Appendix G. Figure 15.1 displays the true straight line (12.1), the fitted straight line (15.6), the uncertainties (15.19) of the estimators β̄_1, β̄_2, the uncertainty band (15.25) and, finally, the EP-region, meant to localize the tuple of true values β_{0,1} and β_{0,2}. Figure 15.2 considers equal systematic errors and renders the true straight line (12.1), the fitted straight line (15.6), the uncertainties (15.20) of the estimators β̄_1, β̄_2, the uncertainty band (15.26) and, ultimately, the EP-region, intended to localize the tuple of true values β_{0,1} and β_{0,2}. Again, the illustrations are based on simulated data implying known true values and, in particular, extensive graphical scale transformations, Appendix A.


Fig. 15.1. Straight lines, case (iii). Left: least squares line, uncertainty band, and true straight line (dashed ). Top right: uncertainty intervals localizing the straight line’s true parameters β0,1 and β0,2 . Bottom right: EP-region, localizing the tuple of true values β0,1 , β0,2


Fig. 15.2. Straight lines, case (iii), equal systematic errors. Left: least squares line, uncertainty band, and true straight line (dashed ). Top right: uncertainty intervals localizing the straight line’s true parameters β0,1 and β0,2 . Equal systematic errors do not affect β¯2 . Bottom right: EP-region, localizing the tuple of true values β0,1 , β0,2

Part VI

Fitting of Planes

16 Preliminaries

The adjustment of planes calls for data triples

\[ (x_1, y_1, z_1), \; (x_2, y_2, z_2), \; \ldots, \; (x_m, y_m, z_m); \quad m > 3 . \]

16.1 Distinction of Cases

We shall address three fitting situations, Table 16.1:

Table 16.1. Fitting of Planes, Three Cases

  Case    x and y coordinates        z coordinates
  (i)     Error-free                 Individual measurements
  (ii)    Error-free                 Repeated measurements
  (iii)   Repeated measurements      Repeated measurements

Case (i) assumes correct x, y-coordinates and erroneous, individually measured z-values. Each of the z-values is supposed to hold a particular random error, stemming from one and the same normal distribution, and a common unknown systematic error. Case (ii), likewise relying on exact x, y-coordinates, accounts for repeated measurements of the z-values. The scattering of the random errors as well as the actual values of the unknown systematic errors may vary from one measured z-value to the next. Case (iii), ultimately, admits measuring errors in all three coordinates with varying theoretical variances and varying unknown systematic errors.

16.2 True Plane

Let the equation of a plane

\[ z(x, y) = \beta_{0,1} + \beta_{0,2}\, x + \beta_{0,3}\, y \tag{16.1} \]

be fulfilled by the m data triples

\[ (x_{0,1}, y_{0,1}, z_{0,1}), \; (x_{0,2}, y_{0,2}, z_{0,2}), \; \ldots, \; (x_{0,m}, y_{0,m}, z_{0,m}) . \tag{16.2} \]

Written in matrices, this reads

\[ A \beta_0 = z_0 \tag{16.3} \]

where

\[ A = \begin{pmatrix} 1 & x_{0,1} & y_{0,1} \\ 1 & x_{0,2} & y_{0,2} \\ \cdots & \cdots & \cdots \\ 1 & x_{0,m} & y_{0,m} \end{pmatrix}, \qquad \beta_0 = \begin{pmatrix} \beta_{0,1} \\ \beta_{0,2} \\ \beta_{0,3} \end{pmatrix}, \qquad z_0 = \begin{pmatrix} z_{0,1} \\ z_{0,2} \\ \cdots \\ z_{0,m} \end{pmatrix} . \tag{16.4} \]

Given A has rank 3, (16.3) reproduces

\[ \beta_0 = B^{\mathrm T} z_0, \qquad B = A\,(A^{\mathrm T} A)^{-1} . \tag{16.5} \]

We address (16.1) as the true plane, (16.2) as the true data triples, and the vector β_0 as the true solution vector. Feeding in empirical data clearly suspends (16.3) and thus conceals β_0. Nevertheless, we may fit a least squares plane to the set of defective data and attempt to localize the components β_{0,1}, β_{0,2}, and β_{0,3} of β_0 via uncertainty intervals.

17 Planes: Case (i)

Case (i) of Table 16.1 assumes error-free x, y-coordinates and erroneous, individually measured z-coordinates.

17.1 Fitting Conditions

The adjustment relies on m > 3 data triples

\[ (x_{0,1}, y_{0,1}, z_1), \; (x_{0,2}, y_{0,2}, z_2), \; \ldots, \; (x_{0,m}, y_{0,m}, z_m) . \tag{17.1} \]

The x_{0,i}, y_{0,i} are considered correct and the z_i erroneous following

\[ z_i = z_{0,i} + (z_i - \mu_{z_i}) + f_{z_i}; \quad i = 1, \ldots, m \tag{17.2} \]

with expectations μ_{z_i} = E{Z_i}. We presume the random errors z_i − μ_{z_i}; i = 1, …, m to be due to a common normal density and hence liable to a uniform theoretical variance, say, σ_z². At present, we let the z_i be charged by the same systematic error

\[ f_{z_i} = f_z; \quad i = 1, \ldots, m; \qquad -f_{s,z} \le f_z \le f_{s,z} . \tag{17.3} \]

For the time being, we are not in a position to specify the uncertainties of the z_i-data.

17.2 Orthogonal Projection

The inconsistent, over-determined, linear system reads

\[ \beta_1 + \beta_2\, x_{0,i} + \beta_3\, y_{0,i} \approx z_i; \quad i = 1, \ldots, m > 3 . \tag{17.4} \]

We are in search of a least squares plane

\[ z(x, y) = \bar\beta_1 + \bar\beta_2\, x + \bar\beta_3\, y \tag{17.5} \]

fitting the data set (17.1). Putting

\[ A = \begin{pmatrix} 1 & x_{0,1} & y_{0,1} \\ 1 & x_{0,2} & y_{0,2} \\ \cdots & \cdots & \cdots \\ 1 & x_{0,m} & y_{0,m} \end{pmatrix}, \qquad \beta = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{pmatrix}, \qquad z = \begin{pmatrix} z_1 \\ z_2 \\ \cdots \\ z_m \end{pmatrix}, \]

the matrix form of (17.4) turns out to be

\[ A \beta \approx z . \tag{17.6} \]

The orthogonal projection produces the solution vector

\[ \bar\beta = B^{\mathrm T} z, \qquad \bar\beta_k = \sum_{i=1}^m b_{ik}\, z_i; \quad k = 1, 2, 3 \tag{17.7} \]

with

\[ B = A\,(A^{\mathrm T} A)^{-1} = (b_{ik}); \quad i = 1, \ldots, m; \; k = 1, 2, 3 . \tag{17.8} \]

The components β̄_k of the solution vector β̄ issue the least squares plane (17.5).
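The projection (17.7) again reduces to solving the normal equations, now a 3×3 system (AᵀA)β = Aᵀz. A minimal sketch with assumed, consistent coordinates (small Gaussian-elimination solver included to stay self-contained):

```python
# Sketch of the orthogonal projection (17.7) for planes: solve the 3x3 normal
# equations.  With consistent data the true plane beta = (2, 0.5, -1.5) is
# recovered exactly.  Coordinates and parameters are assumed values.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (1.0, 2.0)]
z = [2.0 + 0.5 * xi - 1.5 * yi for xi, yi in pts]     # consistent z-values
m = len(pts)

A = [[1.0, xi, yi] for xi, yi in pts]
N = [[sum(A[i][p] * A[i][q] for i in range(m)) for q in range(3)] for p in range(3)]
r = [sum(A[i][p] * z[i] for i in range(m)) for p in range(3)]

def solve3(M, v):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [v[k]] for k, row in enumerate(M)]
    for c in range(3):
        p = max(range(c, 3), key=lambda rr: abs(M[rr][c]))
        M[c], M[p] = M[p], M[c]
        for rr in range(c + 1, 3):
            f = M[rr][c] / M[c][c]
            M[rr] = [a - f * bb for a, bb in zip(M[rr], M[c])]
    out = [0.0, 0.0, 0.0]
    for c in (2, 1, 0):
        out[c] = (M[c][3] - sum(M[c][k] * out[k] for k in range(c + 1, 3))) / M[c][c]
    return out

beta = solve3(N, r)      # (beta1, beta2, beta3)
print(beta)
```

The rank-3 requirement on A shows up here as a nonsingular N; collinear (x, y) points would make the system unsolvable.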

17.3 Uncertainties of the Input Data

According to the assumptions made, the minimized sum of squared residuals

\[ \bar Q = (z - A\bar\beta)^{\mathrm T}(z - A\bar\beta) \]

provides an estimate

\[ s_z^2 = \frac{\bar Q}{m - 3} \tag{17.9} \]

of the common theoretical variance σ_z² = E{S_z²} of the input data. The empirical variance s_z² has degrees of freedom ν = m − 3. The extra information (17.9) obviously stems from our requesting the data set (17.1) to establish a least squares plane. Hence, we may firstly localize the expectations

\[ \mu_{z_i} = z_{0,i} + f_z; \quad i = 1, \ldots, m \tag{17.10} \]

by means of confidence intervals

\[ z_i - t_P(m-3)\, s_z \le \mu_{z_i} \le z_i + t_P(m-3)\, s_z; \quad i = 1, \ldots, m, \tag{17.11} \]

Appendix H, and secondly provide the z_i with overall uncertainties

\[ u_{z_i} = t_P(m-3)\, s_z + f_{s,z}; \quad i = 1, \ldots, m . \tag{17.12} \]

After all, we expect the intervals

\[ z_i - u_{z_i} \le z_{0,i} \le z_i + u_{z_i}; \quad i = 1, \ldots, m \tag{17.13} \]

to localize the true values z_{0,i} of the measured z_i.

17.4 Uncertainties of the Components of the Solution Vector

Random Errors

Albeit there are no repeated measurements, we may nevertheless assign an empirical variance–covariance matrix to the components of the solution vector β̄. Deploying the expectations

\[ E\{\bar\beta_k\} = \mu_{\bar\beta_k} = \sum_{i=1}^m b_{ik}\, \mu_{z_i}; \quad k = 1, 2, 3 \]

of the β̄_k, as a start, we formalize the theoretical variances and the theoretical covariances

\[ \sigma_{\bar\beta_k\bar\beta_{k'}} = E\big\{ (\bar\beta_k - \mu_{\bar\beta_k})(\bar\beta_{k'} - \mu_{\bar\beta_{k'}}) \big\} = E\Big\{ \Big[ \sum_{i=1}^m b_{ik}(Z_i - \mu_{z_i}) \Big] \Big[ \sum_{j=1}^m b_{jk'}(Z_j - \mu_{z_j}) \Big] \Big\} = \sigma_z^2 \sum_{i=1}^m b_{ik}\, b_{ik'}; \quad k, k' = 1, 2, 3, \]

their empirical counterparts being

\[ s_{\bar\beta_k\bar\beta_{k'}} = s_z^2 \sum_{i=1}^m b_{ik}\, b_{ik'}; \quad k, k' = 1, 2, 3, \]

each of degrees of freedom ν = m − 3. Hence, the empirical variance–covariance matrix of the solution vector reads

\[ s_{\bar\beta} = \begin{pmatrix} s_{\bar\beta_1\bar\beta_1} & s_{\bar\beta_1\bar\beta_2} & s_{\bar\beta_1\bar\beta_3} \\ s_{\bar\beta_2\bar\beta_1} & s_{\bar\beta_2\bar\beta_2} & s_{\bar\beta_2\bar\beta_3} \\ s_{\bar\beta_3\bar\beta_1} & s_{\bar\beta_3\bar\beta_2} & s_{\bar\beta_3\bar\beta_3} \end{pmatrix} . \tag{17.14} \]


Systematic Errors

The formal decomposition

\[ \bar\beta_k = \sum_{i=1}^m b_{ik}\,\big[\, z_{0,i} + (z_i - \mu_{z_i}) + f_z \,\big]; \quad k = 1, 2, 3 \]

suggests the systematic errors

\[ f_{\bar\beta_k} = f_z \sum_{i=1}^m b_{ik}; \quad k = 1, 2, 3 . \]

As

\[ \sum_{i=1}^m b_{i1} = 1, \qquad \sum_{i=1}^m b_{i2} = 0, \qquad \sum_{i=1}^m b_{i3} = 0, \tag{17.15} \]

we have

\[ f_{\bar\beta_1} = f_z, \qquad f_{\bar\beta_2} = 0, \qquad f_{\bar\beta_3} = 0, \tag{17.16} \]

which obviously is what we expect.

Confidence Intervals and Overall Uncertainties

Given we localize the expectations

\[ E\{\bar\beta_1\} = \mu_{\bar\beta_1} = \beta_{0,1} + f_z, \qquad E\{\bar\beta_2\} = \mu_{\bar\beta_2} = \beta_{0,2}, \qquad E\{\bar\beta_3\} = \mu_{\bar\beta_3} = \beta_{0,3} \tag{17.17} \]

of the components β̄_k by confidence intervals

\[ \bar\beta_k - t_P(m-3)\, s_{\bar\beta_k} \le \mu_{\bar\beta_k} \le \bar\beta_k + t_P(m-3)\, s_{\bar\beta_k}; \quad k = 1, 2, 3, \tag{17.18} \]

the uncertainties of the estimators β̄_1, β̄_2, β̄_3 take the form

\[ u_{\bar\beta_1} = t_P(m-3)\, s_{\bar\beta_1} + f_{s,z}, \qquad u_{\bar\beta_2} = t_P(m-3)\, s_{\bar\beta_2}, \qquad u_{\bar\beta_3} = t_P(m-3)\, s_{\bar\beta_3} . \tag{17.19} \]

Neither u_{β̄_2} nor u_{β̄_3} holds a systematic error; u_{β̄_1}, however, includes f_{s,z}, as f_z shifts the plane parallel to itself either down or up the z-axis.


Uncertainty Bowls

Inserting the

\[ \bar\beta_k = \sum_{i=1}^m b_{ik}\, z_i = \beta_{0,k} + \sum_{i=1}^m b_{ik}(z_i - \mu_{z_i}) + f_z \sum_{i=1}^m b_{ik}; \quad k = 1, 2, 3 \tag{17.20} \]

into the least squares plane z(x, y) = β̄_1 + β̄_2 x + β̄_3 y produces

\[ z(x, y) = \beta_{0,1} + \beta_{0,2}\, x + \beta_{0,3}\, y + \sum_{i=1}^m (b_{i1} + b_{i2} x + b_{i3} y)(z_i - \mu_{z_i}) + f_z . \]

For any fixed pair (x, y) the expectation of Z(x, y) is given by

\[ E\{Z(x, y)\} = \mu_{z(x,y)} = \beta_{0,1} + \beta_{0,2}\, x + \beta_{0,3}\, y + f_z . \]

Thus, the theoretical variance in (x, y) is given by

\[ \sigma_{z(x,y)}^2 = E\big\{ (Z(x, y) - \mu_{z(x,y)})^2 \big\} = \sigma_z^2 \sum_{i=1}^m (b_{i1} + b_{i2} x + b_{i3} y)^2, \qquad \sigma_z^2 = E\big\{ (Z_i - \mu_{z_i})^2 \big\} . \]

Substituting s_z² for σ_z² and submitting f_z to a worst-case estimation brings forth the uncertainty surfaces z(x, y) ± u_{z(x,y)},

\[ u_{z(x,y)} = t_P(m-3)\, s_z \sqrt{\sum_{i=1}^m (b_{i1} + b_{i2} x + b_{i3} y)^2} + f_{s,z} . \tag{17.21} \]

The surfaces, to be placed in symmetry to the least squares plane (17.5), tellingly suggest to be termed "uncertainty bowls." They span a spatial region which we expect to localize the true plane (16.1).

17.5 EPC-Region

The EPC-region localizes the triple of true values (β_{0,1}, β_{0,2}, β_{0,3}) with respect to the triple of estimators (β̄_1, β̄_2, β̄_3).


Confidence Ellipsoid

Inserting the expectations (17.17) into the decompositions (17.20) we find

\[ \bar\beta_k = \mu_{\bar\beta_k} + \sum_{i=1}^m b_{ik}(z_i - \mu_{z_i}); \quad k = 1, 2, 3 . \]

Following Appendix C, Hotelling's ellipsoid takes the form

\[ (\bar\beta - \mu_{\bar\beta})^{\mathrm T} s_{\bar\beta}^{-1} (\bar\beta - \mu_{\bar\beta}) = t_P^2(3, m-3) \]

with s_{β̄} as given in (17.14). Thus, we expect the confidence ellipsoid

\[ (\beta - \bar\beta)^{\mathrm T} s_{\bar\beta}^{-1} (\beta - \bar\beta) = t_P^2(3, m-3) \tag{17.22} \]

to localize the point

\[ \mu_{\bar\beta} = \begin{pmatrix} \mu_{\bar\beta_1} \\ \mu_{\bar\beta_2} \\ \mu_{\bar\beta_3} \end{pmatrix} \tag{17.23} \]

with probability P.

Security Polygon

The measuring conditions, implying the z-coordinates to be charged by one and the same unknown systematic error, prompt the security polygon to degenerate into an interval

\[ -f_{s,z} \le f_{\bar\beta_1} \le f_{s,z}, \qquad f_{\bar\beta_2} = 0, \qquad f_{\bar\beta_3} = 0, \tag{17.24} \]

which, for convenience, we consider a "stick" of length 2 f_{s,z}.

Mergence of Ellipsoid and Interval

The merging of a confidence ellipsoid with a degenerated security polyhedron is addressed in Appendix G. Figure 17.1 depicts the lattice-shaped true plane (16.1), the least squares plane (17.5), the uncertainties (17.19) of the estimators β̄_1, β̄_2, β̄_3, the upper and lower uncertainty bowls (17.21) and, finally, the EPC-region, meant to localize the triple of true values β_{0,1}, β_{0,2}, and β_{0,3}.


Fig. 17.1. Planes, case (i). Left: least squares plane with uncertainty bowls and true plane (lattice shaped). Top right: uncertainty intervals localizing the plane's true parameters \beta_{0,1}, \beta_{0,2}, and \beta_{0,3}. The systematic errors of the z-values are assumed equal and thus do not affect \bar\beta_2 and \bar\beta_3. Bottom right: EPC-region localizing the triple of true values \beta_{0,1}, \beta_{0,2}, and \beta_{0,3}

18 Planes: Case (ii)

Case (ii) of Table 16.1 refers to correct x, y-coordinates and erroneous, however repeatedly measured, z-values. As the empirical variances of the z-values are now directly accessible, the scattering of the random errors may vary from z-coordinate to z-coordinate. At the same time, varying unknown systematic errors are admitted. Naturally, the minimized sum of squared residuals still enters the construction of the solution vector; it proves, however, no longer serviceable with respect to assessing uncertainties.

18.1 Fitting Conditions

Suppose there are m > 3 data triples

    (x_{0,1}, y_{0,1}, \bar z_1), (x_{0,2}, y_{0,2}, \bar z_2), \ldots, (x_{0,m}, y_{0,m}, \bar z_m)   (18.1)

the x_{0,i} and y_{0,i} of which being correct and the \bar z_i arithmetic means

    \bar z_i = \frac{1}{n} \sum_{l=1}^{n} z_{il} = z_{0,i} + (\bar z_i - \mu_{\bar z_i}) + f_{\bar z_i} ;   i = 1, \ldots, m

    E\{\bar Z_i\} = \mu_{\bar z_i} ,   -f_{s,\bar z_i} \le f_{\bar z_i} \le f_{s,\bar z_i}

each comprising the same number n of repeated measurements. The empirical variances

    s^2_{\bar z_i} = \frac{1}{n-1} \sum_{l=1}^{n} (z_{il} - \bar z_i)^2 ;   i = 1, \ldots, m

and the error limits of the unknown systematic errors issue the uncertainties u_{\bar z_i} of the means \bar z_i; we have

    \bar z_i \pm u_{\bar z_i} ,   u_{\bar z_i} = \frac{t_P(n-1)}{\sqrt{n}} s_{\bar z_i} + f_{s,\bar z_i} ;   i = 1, \ldots, m .

We expect the true values z_{0,i} of the \bar z_i to be localized by

    \bar z_i - u_{\bar z_i} \le z_{0,i} \le \bar z_i + u_{\bar z_i} ;   i = 1, \ldots, m .   (18.2)
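As a numerical illustration of these input uncertainties, the means, empirical standard deviations, and the u_{\bar z_i} of (18.2) might be computed as in the following sketch; all data values and error bounds are hypothetical, and t_P(n-1) is taken as the two-sided 95% Student factor for n = 5 repeats (4 degrees of freedom), t = 2.776.

```python
import numpy as np

# Sketch of (18.2): means, empirical variances and overall uncertainties
# u_zbar_i from simulated repeated measurements. Data are hypothetical.
rng = np.random.default_rng(0)
m, n, tP = 6, 5, 2.776                          # t_P(4), two-sided 95%
z0 = np.array([1.0, 3.0, 5.0, 2.0, 4.0, 6.0])   # hypothetical true values
fs_z = np.full(m, 0.05)                         # bounds of the systematic errors

z = z0 + rng.normal(0.0, 0.1, (n, m))           # z[l, i] = z_il
zbar = z.mean(axis=0)                           # arithmetic means zbar_i
s_z = z.std(axis=0, ddof=1)                     # empirical standard deviations
u_z = tP / np.sqrt(n) * s_z + fs_z              # overall uncertainties (18.2)

print(np.column_stack([zbar, u_z]))
```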


18.2 Orthogonal Projection

Let us cast the inconsistent, over-determined, linear system

    \beta_1 + \beta_2 x_{0,i} + \beta_3 y_{0,i} \approx \bar z_i ;   i = 1, \ldots, m > 3   (18.3)

into matrix form A\beta \approx \bar z where

    A = \begin{pmatrix} 1 & x_{0,1} & y_{0,1} \\ 1 & x_{0,2} & y_{0,2} \\ \cdots & \cdots & \cdots \\ 1 & x_{0,m} & y_{0,m} \end{pmatrix} ,
    \beta = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{pmatrix} ,
    \bar z = \begin{pmatrix} \bar z_1 \\ \bar z_2 \\ \cdots \\ \bar z_m \end{pmatrix} .

The orthogonal projection produces

    \bar\beta = B^T \bar z ,   \bar\beta_k = \sum_{i=1}^{m} b_{ik} \bar z_i ;   k = 1, 2, 3   (18.4)

with

    B = A (A^T A)^{-1} = (b_{ik}) ;   i = 1, \ldots, m ;   k = 1, 2, 3 .

The components \bar\beta_k; k = 1, 2, 3 of the solution vector \bar\beta devise the least squares plane

    z(x, y) = \bar\beta_1 + \bar\beta_2 x + \bar\beta_3 y .   (18.5)
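The projection (18.4) may be sketched numerically; the grid and plane parameters below are hypothetical, and for error-free data lying exactly on a plane the projection reproduces the true coefficients.

```python
import numpy as np

# Sketch of (18.3)-(18.5): fit a least squares plane z = b1 + b2*x + b3*y.
x0 = np.array([0.0, 1.0, 2.0, 0.0, 1.0, 2.0])      # correct x-coordinates
y0 = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])      # correct y-coordinates
zbar = 1.0 + 2.0 * x0 + 3.0 * y0                   # exact plane, no errors here

A = np.column_stack([np.ones_like(x0), x0, y0])    # design matrix, rank 3
B = A @ np.linalg.inv(A.T @ A)                     # B = A (A^T A)^{-1}
beta = B.T @ zbar                                  # solution vector, (18.4)

print(beta)   # recovers (1, 2, 3) for error-free data
```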

18.3 Uncertainties of the Components of the Solution Vector

Random Errors

We set out to formally establish an ensemble of least squares planes as issued by the data (z_{1l}, z_{2l}, \ldots, z_{ml}), l = 1, \ldots, n, see Sect. 7.3. The pertaining estimators are

    \bar\beta_{kl} = \sum_{i=1}^{m} b_{ik} z_{il} ,   l = 1, \ldots, n .   (18.6)


Inserting the

    \bar z_i = \frac{1}{n} \sum_{l=1}^{n} z_{il} ;   i = 1, \ldots, m

into (18.4) reestablishes

    \bar\beta_k = \sum_{i=1}^{m} b_{ik} \left( \frac{1}{n} \sum_{l=1}^{n} z_{il} \right) = \frac{1}{n} \sum_{l=1}^{n} \sum_{i=1}^{m} b_{ik} z_{il} = \frac{1}{n} \sum_{l=1}^{n} \bar\beta_{kl} ;   k = 1, 2, 3

which, as a matter of course, we could just as well have taken directly from (18.6). We now deploy the differences

    \bar\beta_{kl} - \bar\beta_k = \sum_{i=1}^{m} b_{ik} (z_{il} - \bar z_i)

to formalize the elements

    s_{\bar\beta_k \bar\beta_{k'}} = \frac{1}{n-1} \sum_{l=1}^{n} (\bar\beta_{kl} - \bar\beta_k)(\bar\beta_{k'l} - \bar\beta_{k'})
                                   = \frac{1}{n-1} \sum_{l=1}^{n} \left[ \sum_{i=1}^{m} b_{ik} (z_{il} - \bar z_i) \right] \left[ \sum_{j=1}^{m} b_{jk'} (z_{jl} - \bar z_j) \right]
                                   = \sum_{i,j=1}^{m} b_{ik} b_{jk'} s_{ij} ;   k, k' = 1, 2, 3

of the empirical variance–covariance matrix s_{\bar\beta} of the solution vector. Here the

    s_{ij} = \frac{1}{n-1} \sum_{l=1}^{n} (z_{il} - \bar z_i)(z_{jl} - \bar z_j) ;   i, j = 1, \ldots, m

denote the elements of the empirical variance–covariance matrix

    s = (s_{ij}) ;   i, j = 1, \ldots, m   (18.7)

of the input data, each having degrees of freedom \nu = n - 1. After all, we may cast this into

    s_{\bar\beta} = \begin{pmatrix} s_{\bar\beta_1\bar\beta_1} & s_{\bar\beta_1\bar\beta_2} & s_{\bar\beta_1\bar\beta_3} \\ s_{\bar\beta_2\bar\beta_1} & s_{\bar\beta_2\bar\beta_2} & s_{\bar\beta_2\bar\beta_3} \\ s_{\bar\beta_3\bar\beta_1} & s_{\bar\beta_3\bar\beta_2} & s_{\bar\beta_3\bar\beta_3} \end{pmatrix} = B^T s\, B ,   s_{\bar\beta_k\bar\beta_k} \equiv s^2_{\bar\beta_k} .   (18.8)
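Since the projection is linear, the covariance matrix of the solution vector may equivalently be obtained from the ensemble of estimators (18.6) or as B^T s B per (18.8). The following sketch, on simulated data, checks that both routes agree.

```python
import numpy as np

# Sketch of (18.6)-(18.8): two equivalent routes to the covariance matrix
# of the solution vector. All data are simulated.
rng = np.random.default_rng(0)
m, n = 6, 8                                     # m triples, n repeats each
x0 = np.array([0., 1., 2., 0., 1., 2.])
y0 = np.array([0., 0., 0., 1., 1., 1.])
z = 1 + 2*x0 + 3*y0 + rng.normal(0, 0.1, (n, m))   # z[l, i] = z_il

A = np.column_stack([np.ones(m), x0, y0])
B = A @ np.linalg.inv(A.T @ A)

beta_l = z @ B                          # ensemble estimators beta_kl, (18.6)
s = np.cov(z, rowvar=False)             # input covariance matrix s, (18.7)
s_beta = B.T @ s @ B                    # solution covariance, (18.8)

# Both routes agree exactly, since the projection is linear:
print(np.allclose(s_beta, np.cov(beta_l, rowvar=False)))
```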


Systematic Errors

The propagated systematic errors are to be read from

    \bar\beta_k = \sum_{i=1}^{m} b_{ik} \left[ z_{0,i} + (\bar z_i - \mu_{\bar z_i}) + f_{\bar z_i} \right] ;   k = 1, 2, 3 .   (18.9)

We obviously have

    f_{\bar\beta_k} = \sum_{i=1}^{m} b_{ik} f_{\bar z_i} ;   k = 1, 2, 3   (18.10)

the worst-case estimations being

    f_{s,\bar\beta_k} = \sum_{i=1}^{m} | b_{ik} |\, f_{s,\bar z_i} ;   k = 1, 2, 3 .   (18.11)

Assuming the f_{\bar z_i} and the f_{s,\bar z_i} to be equal,

    f_{\bar z_i} = f_z ,   f_{s,\bar z_i} = f_{s,z} ;   i = 1, \ldots, m ;   -f_{s,z} \le f_z \le f_{s,z}   (18.12)

(17.15) issues

    f_{\bar\beta_1} = f_z ,   f_{\bar\beta_2} = 0 ,   f_{\bar\beta_3} = 0 .   (18.13)
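The propagation (18.10)/(18.11) and the degenerate case (18.13) can be checked numerically: the column sums of B equal (1, 0, 0), so equal systematic errors shift \bar\beta_1 only. Grid and error bounds below are hypothetical.

```python
import numpy as np

# Sketch of (18.10)-(18.13): worst-case propagation of systematic errors.
x0 = np.array([0., 1., 2., 0., 1., 2.])
y0 = np.array([0., 0., 0., 1., 1., 1.])
A = np.column_stack([np.ones(6), x0, y0])
B = A @ np.linalg.inv(A.T @ A)

fs_z = np.full(6, 0.05)                 # error bounds f_s,zbar_i
fs_beta = np.abs(B).T @ fs_z            # worst-case estimates (18.11)

print(B.sum(axis=0))                    # -> (1, 0, 0), reproducing (18.13)
print(fs_beta)
```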

18.4 Confidence Intervals and Overall Uncertainties

We confine the expectations

    E\{\bar\beta_k\} = \mu_{\bar\beta_k} = \beta_{0,k} + f_{\bar\beta_k} ;   k = 1, 2, 3

to confidence intervals

    \bar\beta_k - \frac{t_P(n-1)}{\sqrt{n}} s_{\bar\beta_k} \le \mu_{\bar\beta_k} \le \bar\beta_k + \frac{t_P(n-1)}{\sqrt{n}} s_{\bar\beta_k} ;   k = 1, 2, 3   (18.14)


of probability P. Hence, the uncertainties of the components \bar\beta_1, \bar\beta_2, \bar\beta_3 of the solution vector take the form

    u_{\bar\beta_k} = \frac{t_P(n-1)}{\sqrt{n}} s_{\bar\beta_k} + f_{s,\bar\beta_k} ;   k = 1, 2, 3 .   (18.15)

In case of equal f_{\bar z_i}, this turns into

    u_{\bar\beta_1} = \frac{t_P(n-1)}{\sqrt{n}} s_{\bar\beta_1} + f_{s,z}
    u_{\bar\beta_2} = \frac{t_P(n-1)}{\sqrt{n}} s_{\bar\beta_2}                               (18.16)
    u_{\bar\beta_3} = \frac{t_P(n-1)}{\sqrt{n}} s_{\bar\beta_3} .

18.5 Uncertainty Bowls

Random Errors

Let us consider the estimators \bar\beta_{kl} as given in (18.6) to produce an ensemble of n least squares planes

    z_l(x, y) = \bar\beta_{1l} + \bar\beta_{2l} x + \bar\beta_{3l} y ;   l = 1, \ldots, n .

Averaging over l reestablishes

    z(x, y) = \frac{1}{n} \sum_{l=1}^{n} z_l(x, y) = \bar\beta_1 + \bar\beta_2 x + \bar\beta_3 y .

Thus, for any fixed point (x, y), we have the differences

    z_l(x, y) - z(x, y) = (\bar\beta_{1l} - \bar\beta_1) + (\bar\beta_{2l} - \bar\beta_2) x + (\bar\beta_{3l} - \bar\beta_3) y
                        = \sum_{i=1}^{m} (b_{i1} + b_{i2} x + b_{i3} y)(z_{il} - \bar z_i) ;   l = 1, \ldots, n

which bring forth the empirical variance

    s^2_{z(x,y)} = \frac{1}{n-1} \sum_{l=1}^{n} (z_l(x, y) - z(x, y))^2 = b^T s\, b


where

    b = \left( (b_{11} + b_{12} x + b_{13} y), (b_{21} + b_{22} x + b_{23} y), \ldots, (b_{m1} + b_{m2} x + b_{m3} y) \right)^T   (18.17)

denotes an auxiliary vector. The matrix s is given in (18.7). After all, the term

    \frac{t_P(n-1)}{\sqrt{n}} s_{z(x,y)}   (18.18)

expresses the influence of random errors.

Systematic Errors

Inserting (18.9),

    \bar\beta_k = \beta_{0,k} + \sum_{i=1}^{m} b_{ik} (\bar z_i - \mu_{\bar z_i}) + \sum_{i=1}^{m} b_{ik} f_{\bar z_i} ;   k = 1, 2, 3 ,

into z(x, y) = \bar\beta_1 + \bar\beta_2 x + \bar\beta_3 y produces

    z(x, y) = \beta_{0,1} + \beta_{0,2} x + \beta_{0,3} y + \sum_{i=1}^{m} (b_{i1} + b_{i2} x + b_{i3} y)(\bar z_i - \mu_{\bar z_i}) + \sum_{i=1}^{m} (b_{i1} + b_{i2} x + b_{i3} y) f_{\bar z_i} .   (18.19)

Thus, for any (x, y) the propagated systematic error reads

    f_{z(x,y)} = \sum_{i=1}^{m} (b_{i1} + b_{i2} x + b_{i3} y) f_{\bar z_i}   (18.20)

its worst-case estimation being

    f_{s,z(x,y)} = \sum_{i=1}^{m} | b_{i1} + b_{i2} x + b_{i3} y |\, f_{s,\bar z_i} .   (18.21)

Due to (17.15), for equal systematic errors this reduces to

    f_{s,z(x,y)} = f_{s,z} .   (18.22)


Overall Uncertainty

Combining (18.18) and (18.21) issues the uncertainty bowls

    z(x, y) \pm u_{z(x,y)} ,
    u_{z(x,y)} = \frac{t_P(n-1)}{\sqrt{n}} s_{z(x,y)} + \sum_{i=1}^{m} | b_{i1} + b_{i2} x + b_{i3} y |\, f_{s,\bar z_i} .   (18.23)

For equal systematic errors we have

    z(x, y) \pm u_{z(x,y)} ,   u_{z(x,y)} = \frac{t_P(n-1)}{\sqrt{n}} s_{z(x,y)} + f_{s,z} .   (18.24)

The uncertainty bowls, being placed in symmetry to the least squares plane z(x, y), span a spatial region which we expect to localize the true plane (16.1).
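The bowls (18.23) can be evaluated pointwise as in the sketch below; data are simulated, and t_P(n-1) is taken as the two-sided 95% Student factor for n = 8 repeats (7 degrees of freedom), t = 2.365.

```python
import numpy as np

# Sketch of the uncertainty bowls (18.23): half-width u_z(x,y) at a point.
rng = np.random.default_rng(1)
m, n, tP = 6, 8, 2.365
x0 = np.array([0., 1., 2., 0., 1., 2.])
y0 = np.array([0., 0., 0., 1., 1., 1.])
z = 1 + 2*x0 + 3*y0 + rng.normal(0, 0.1, (n, m))
fs_z = np.full(m, 0.05)                          # systematic error bounds

A = np.column_stack([np.ones(m), x0, y0])
B = A @ np.linalg.inv(A.T @ A)
s = np.cov(z, rowvar=False)                      # matrix s of (18.7)

def u_z(x, y):
    b = B[:, 0] + B[:, 1]*x + B[:, 2]*y          # auxiliary vector (18.17)
    s_zxy = np.sqrt(b @ s @ b)                   # s_z(x,y), (18.18)
    return tP/np.sqrt(n) * s_zxy + np.abs(b) @ fs_z

print(u_z(1.0, 0.5))                             # half-width of the bowl there
```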

18.6 EPC-Region

The EPC-region localizes the triple of true values (\beta_{0,1}, \beta_{0,2}, \beta_{0,3}) with respect to the triple of estimators (\bar\beta_1, \bar\beta_2, \bar\beta_3).

Confidence Ellipsoid

Let us rewrite (18.9) as

    \bar\beta_k = \mu_{\bar\beta_k} + \sum_{i=1}^{m} b_{ik} (\bar z_i - \mu_{\bar z_i}) ;   k = 1, 2, 3

with

    \mu_{\bar\beta_k} = \beta_{0,k} + \sum_{i=1}^{m} b_{ik} f_{\bar z_i} .

The empirical variance–covariance matrix s_{\bar\beta} of the solution vector, as given in (18.8), provides Hotelling's ellipsoid

    (\bar\beta - \mu_{\bar\beta})^T s_{\bar\beta}^{-1} (\bar\beta - \mu_{\bar\beta}) = \frac{t_P^2(3, n-1)}{n} .

Hence, we expect the confidence ellipsoid

    (\beta - \bar\beta)^T s_{\bar\beta}^{-1} (\beta - \bar\beta) = \frac{t_P^2(3, n-1)}{n}   (18.25)


to localize the point

    \mu_{\bar\beta} = (\mu_{\bar\beta_1}, \mu_{\bar\beta_2}, \mu_{\bar\beta_3})^T   (18.26)

with probability P.

Security Polyhedron

While the f_{\bar z_1}, f_{\bar z_2}, \ldots, f_{\bar z_m} "scan" the set of points lying within or on the faces of the m-dimensional hypercuboid

    -f_{s,\bar z_i} \le f_{\bar z_i} \le f_{s,\bar z_i} ;   i = 1, \ldots, m

the propagated systematic errors

    f_{\bar\beta_k} = \sum_{i=1}^{m} b_{ik} f_{\bar z_i} ;   k = 1, 2, 3

span a polyhedron. However, assuming equal systematic errors as specified in (18.12), the polyhedron degenerates into an interval

    -f_{s,z} \le f_{\bar\beta_1} \le f_{s,z} ,   f_{\bar\beta_2} = 0 ,   f_{\bar\beta_3} = 0   (18.27)

which we notionally take as a "stick" of length 2 f_{s,z}.

Mergence of Ellipsoid Either with Interval or with Polyhedron

The merging of a confidence ellipsoid either with a non-degenerated or with a degenerated security polyhedron is addressed in Appendix G. Figure 18.1 depicts the lattice shaped true plane (16.1), the least squares plane (18.5), the uncertainties (18.15) of the estimators \bar\beta_1, \bar\beta_2, \bar\beta_3 and, finally, the upper and lower uncertainty bowl (18.23). Figure 18.2 refers to equal systematic errors and displays the lattice shaped true plane (16.1), the least squares plane (18.5), the uncertainties (18.16) of the estimators \bar\beta_1, \bar\beta_2, \bar\beta_3 and, finally, the upper and lower uncertainty bowl (18.24). Figure 18.3 illustrates the composition of the spaces given by the confidence ellipsoid and the security polyhedron. While the ellipsoid remains fixed,


Fig. 18.1. Planes, case (ii). Left: least squares plane, symmetrically placed uncertainty bowls, and true plane (lattice shaped). Right: uncertainty intervals localizing the plane's true parameters \beta_{0,1}, \beta_{0,2}, and \beta_{0,3}


Fig. 18.2. Planes, case (ii), equal systematic errors. Left: least squares plane, symmetrically placed uncertainty bowls, and true plane (lattice shaped). Right: uncertainty regions localizing the plane's true parameters \beta_{0,1}, \beta_{0,2}, and \beta_{0,3}. Equal systematic errors do not affect \bar\beta_2 and \bar\beta_3


Fig. 18.3. Planes, case (ii): composing the spaces of the confidence ellipsoid and the security polyhedron. The ellipsoid remains fixed; the center of the polyhedron moves step by step along the ellipsoid's skin, maintaining its spatial orientation


Fig. 18.4. Planes, case (ii): uncertainty space or EPC-hull given by composing the confidence ellipsoid and the security polyhedron. The hull emerges from moving the center of the polyhedron along the ellipsoid's skin and is meant to localize the triple (\beta_{0,1}, \beta_{0,2}, \beta_{0,3}) of true values with respect to the triple of estimators (\bar\beta_1, \bar\beta_2, \bar\beta_3)


the center of the polyhedron is moved step by step along the ellipsoid's skin; while this happens, the polyhedron maintains its spatial orientation. Figure 18.4 renders the composition of the spaces given by the confidence ellipsoid and the security polyhedron. The EPC-hull is intended to localize the triple (\beta_{0,1}, \beta_{0,2}, \beta_{0,3}) of true values with respect to the triple of estimators (\bar\beta_1, \bar\beta_2, \bar\beta_3).

19 Planes: Case (iii)

Case (iii) of Table 16.1 addresses measurement errors in all three coordinates. The scattering of the random errors as well as the actual values of the unknown systematic errors may vary from data triple to data triple. The now erroneous design matrix does not invalidate the orthogonal projection; it renders, however, the assessment of uncertainties more intricate.

19.1 Fitting Conditions

Consider m > 3 data triples

    (\bar x_1, \bar y_1, \bar z_1), (\bar x_2, \bar y_2, \bar z_2), \ldots, (\bar x_m, \bar y_m, \bar z_m)

with formal decompositions

    \bar x_i = \frac{1}{n} \sum_{l=1}^{n} x_{il} = x_{0,i} + (\bar x_i - \mu_{\bar x_i}) + f_{\bar x_i} ;   i = 1, \ldots, m
    E\{\bar X_i\} = \mu_{\bar x_i} ,   -f_{s,\bar x_i} \le f_{\bar x_i} \le f_{s,\bar x_i}

    \bar y_i = \frac{1}{n} \sum_{l=1}^{n} y_{il} = y_{0,i} + (\bar y_i - \mu_{\bar y_i}) + f_{\bar y_i} ;   i = 1, \ldots, m
    E\{\bar Y_i\} = \mu_{\bar y_i} ,   -f_{s,\bar y_i} \le f_{\bar y_i} \le f_{s,\bar y_i}

and

    \bar z_i = \frac{1}{n} \sum_{l=1}^{n} z_{il} = z_{0,i} + (\bar z_i - \mu_{\bar z_i}) + f_{\bar z_i} ;   i = 1, \ldots, m   (19.1)
    E\{\bar Z_i\} = \mu_{\bar z_i} ,   -f_{s,\bar z_i} \le f_{\bar z_i} \le f_{s,\bar z_i} .


The empirical variances

    s^2_{\bar x_i} = \frac{1}{n-1} \sum_{l=1}^{n} (x_{il} - \bar x_i)^2
    s^2_{\bar y_i} = \frac{1}{n-1} \sum_{l=1}^{n} (y_{il} - \bar y_i)^2 ;   i = 1, \ldots, m
    s^2_{\bar z_i} = \frac{1}{n-1} \sum_{l=1}^{n} (z_{il} - \bar z_i)^2

and the limits \pm f_{s,\bar x_i}, \pm f_{s,\bar y_i}, \pm f_{s,\bar z_i} of the unknown systematic errors suggest the uncertainties

    \bar x_i \pm u_{\bar x_i} ,   u_{\bar x_i} = \frac{t_P(n-1)}{\sqrt{n}} s_{\bar x_i} + f_{s,\bar x_i}
    \bar y_i \pm u_{\bar y_i} ,   u_{\bar y_i} = \frac{t_P(n-1)}{\sqrt{n}} s_{\bar y_i} + f_{s,\bar y_i} ;   i = 1, \ldots, m   (19.2)
    \bar z_i \pm u_{\bar z_i} ,   u_{\bar z_i} = \frac{t_P(n-1)}{\sqrt{n}} s_{\bar z_i} + f_{s,\bar z_i} .

The intervals \bar x_i \pm u_{\bar x_i}, \bar y_i \pm u_{\bar y_i}, \bar z_i \pm u_{\bar z_i} are meant to localize the true values x_{0,i}, y_{0,i}, z_{0,i} of the input data.

19.2 Orthogonal Projection

We transfer the linear, inconsistent, and over-determined system

    \beta_1 + \beta_2 \bar x_i + \beta_3 \bar y_i \approx \bar z_i ;   i = 1, \ldots, m > 3   (19.3)

into matrix form A\beta \approx \bar z where

    A = \begin{pmatrix} 1 & \bar x_1 & \bar y_1 \\ 1 & \bar x_2 & \bar y_2 \\ \cdots & \cdots & \cdots \\ 1 & \bar x_m & \bar y_m \end{pmatrix} ,
    \beta = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{pmatrix} ,
    \bar z = \begin{pmatrix} \bar z_1 \\ \bar z_2 \\ \cdots \\ \bar z_m \end{pmatrix} .


The orthogonal projection produces

    \bar\beta = B^T \bar z ,   \bar\beta_k = \sum_{i=1}^{m} b_{ik} \bar z_i ;   k = 1, 2, 3   (19.4)

with

    B = A (A^T A)^{-1} = (b_{ik}) ;   i = 1, \ldots, m ;   k = 1, 2, 3 .

The components \bar\beta_k; k = 1, 2, 3 of the solution vector \bar\beta devise the least squares plane

    z(x, y) = \bar\beta_1 + \bar\beta_2 x + \bar\beta_3 y .   (19.5)

19.3 Series Expansion of the Solution Vector

We submit the \bar\beta_k; k = 1, 2, 3 to series expansions throughout a neighborhood of the point (x_{0,1}, \ldots, x_{0,m}; y_{0,1}, \ldots, y_{0,m}; z_{0,1}, \ldots, z_{0,m}), firstly, with respect to the n points

    (x_{1l}, \ldots, x_{ml}; y_{1l}, \ldots, y_{ml}; z_{1l}, \ldots, z_{ml}) ;   l = 1, \ldots, n

which yields

    \bar\beta_{kl}(x_{1l}, \ldots, x_{ml}; y_{1l}, \ldots, y_{ml}; z_{1l}, \ldots, z_{ml})
      = \bar\beta_k(x_{0,1}, \ldots, x_{0,m}; y_{0,1}, \ldots, y_{0,m}; z_{0,1}, \ldots, z_{0,m})
      + \sum_{i=1}^{m} \frac{\partial \bar\beta_k}{\partial x_{0,i}} (x_{il} - \mu_{\bar x_i}) + \sum_{i=1}^{m} \frac{\partial \bar\beta_k}{\partial y_{0,i}} (y_{il} - \mu_{\bar y_i}) + \sum_{i=1}^{m} \frac{\partial \bar\beta_k}{\partial z_{0,i}} (z_{il} - \mu_{\bar z_i}) + \cdots   (19.6)
      + \sum_{i=1}^{m} \frac{\partial \bar\beta_k}{\partial x_{0,i}} f_{\bar x_i} + \sum_{i=1}^{m} \frac{\partial \bar\beta_k}{\partial y_{0,i}} f_{\bar y_i} + \sum_{i=1}^{m} \frac{\partial \bar\beta_k}{\partial z_{0,i}} f_{\bar z_i} + \cdots

and, secondly, with respect to the sample-dependent means

    (\bar x_1, \bar x_2, \ldots, \bar x_m; \bar y_1, \bar y_2, \ldots, \bar y_m; \bar z_1, \bar z_2, \ldots, \bar z_m)

which produces


    \bar\beta_k(\bar x_1, \ldots, \bar x_m; \bar y_1, \ldots, \bar y_m; \bar z_1, \ldots, \bar z_m)
      = \bar\beta_k(x_{0,1}, \ldots, x_{0,m}; y_{0,1}, \ldots, y_{0,m}; z_{0,1}, \ldots, z_{0,m})
      + \sum_{i=1}^{m} \frac{\partial \bar\beta_k}{\partial x_{0,i}} (\bar x_i - \mu_{\bar x_i}) + \sum_{i=1}^{m} \frac{\partial \bar\beta_k}{\partial y_{0,i}} (\bar y_i - \mu_{\bar y_i}) + \sum_{i=1}^{m} \frac{\partial \bar\beta_k}{\partial z_{0,i}} (\bar z_i - \mu_{\bar z_i}) + \cdots   (19.7)
      + \sum_{i=1}^{m} \frac{\partial \bar\beta_k}{\partial x_{0,i}} f_{\bar x_i} + \sum_{i=1}^{m} \frac{\partial \bar\beta_k}{\partial y_{0,i}} f_{\bar y_i} + \sum_{i=1}^{m} \frac{\partial \bar\beta_k}{\partial z_{0,i}} f_{\bar z_i} + \cdots

where \bar\beta_k(x_{0,1}, \ldots, x_{0,m}; y_{0,1}, \ldots, y_{0,m}; z_{0,1}, \ldots, z_{0,m}) \equiv \beta_{0,k}. As there is no other choice, we approximate the derivatives in x_{0,1}, \ldots, z_{0,m} through derivatives in \bar x_1, \ldots, \bar z_m. Furthermore, we linearize the expansions, abbreviate the partial derivatives following

    c_{ik} = \frac{\partial \bar\beta_k}{\partial \bar x_i} ,   c_{i+m,k} = \frac{\partial \bar\beta_k}{\partial \bar y_i} ,   c_{i+2m,k} = \frac{\partial \bar\beta_k}{\partial \bar z_i} ;   i = 1, \ldots, m ;   k = 1, 2, 3

and transfer the coefficients c_{ik} to an auxiliary matrix

    C^T = \begin{pmatrix} c_{11} & c_{21} & \cdots & c_{3m,1} \\ c_{12} & c_{22} & \cdots & c_{3m,2} \\ c_{13} & c_{23} & \cdots & c_{3m,3} \end{pmatrix} .   (19.8)

The c_{ik} are given in Appendix B. Ultimately, deploying the notations

    v_{il} = x_{il} ,   v_{i+m,l} = y_{il} ,   v_{i+2m,l} = z_{il}
    \bar v_i = \bar x_i ,   \bar v_{i+m} = \bar y_i ,   \bar v_{i+2m} = \bar z_i
    \mu_i = \mu_{\bar x_i} ,   \mu_{i+m} = \mu_{\bar y_i} ,   \mu_{i+2m} = \mu_{\bar z_i}   (19.9)
    f_i = f_{\bar x_i} ,   f_{i+m} = f_{\bar y_i} ,   f_{i+2m} = f_{\bar z_i}
    f_{s,i} = f_{s,\bar x_i} ,   f_{s,i+m} = f_{s,\bar y_i} ,   f_{s,i+2m} = f_{s,\bar z_i}

we may cast (19.6) and (19.7) into

    \bar\beta_{kl} = \beta_{0,k} + \sum_{i=1}^{3m} c_{ik} (v_{il} - \mu_i) + \sum_{i=1}^{3m} c_{ik} f_i ;   l = 1, \ldots, n
                                                                                                           (19.10)
    \bar\beta_k = \beta_{0,k} + \sum_{i=1}^{3m} c_{ik} (\bar v_i - \mu_i) + \sum_{i=1}^{3m} c_{ik} f_i ;   k = 1, 2, 3 .
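The closed-form c_{ik} are given in Appendix B; purely as an illustration, they may also be approximated numerically by central finite differences of the fit with respect to the input means. The sketch below uses hypothetical, error-free data and checks the column sums quoted later in (19.17).

```python
import numpy as np

# Numerical sketch of the coefficients c_ik of (19.8): partial derivatives of
# beta_k with respect to the input means, via central finite differences.
def fit_plane(xb, yb, zb):
    A = np.column_stack([np.ones_like(xb), xb, yb])
    return np.linalg.solve(A.T @ A, A.T @ zb)        # solution vector

xb = np.array([0., 1., 2., 0., 1., 2.])
yb = np.array([0., 0., 0., 1., 1., 1.])
zb = 1 + 2*xb + 3*yb                                 # exact plane, beta2 = 2

m, h = len(xb), 1e-6
C = np.zeros((3*m, 3))                               # rows: x-, y-, z-derivatives
v = np.concatenate([xb, yb, zb])
for i in range(3*m):
    vp, vm = v.copy(), v.copy()
    vp[i] += h; vm[i] -= h
    bp = fit_plane(vp[:m], vp[m:2*m], vp[2*m:])
    bm = fit_plane(vm[:m], vm[m:2*m], vm[2*m:])
    C[i] = (bp - bm) / (2*h)

# Column sums reproduce (19.17): sum_i c_i+2m,k = (1, 0, 0) and
# sum_i c_ik = (-beta2, 0, 0) = (-2, 0, 0) for these data.
print(C[2*m:].sum(axis=0), C[:m].sum(axis=0))
```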


Subtraction yields

    \bar\beta_{kl} - \bar\beta_k = \sum_{i=1}^{3m} c_{ik} (v_{il} - \bar v_i) ;   k = 1, 2, 3   (19.11)

which implies

    \bar\beta_k = \frac{1}{n} \sum_{l=1}^{n} \bar\beta_{kl} ;   k = 1, 2, 3 .   (19.12)

We take the propagation of random errors from (19.11) and the propagation of systematic errors from (19.10), spawning

    f_{\bar\beta_k} = \sum_{i=1}^{3m} c_{ik} f_i ;   k = 1, 2, 3 .   (19.13)

19.4 Uncertainties of the Components of the Solution Vector

Random Errors

Equation (19.11) issues the elements

    s_{\bar\beta_k \bar\beta_{k'}} = \frac{1}{n-1} \sum_{l=1}^{n} \left[ \sum_{i=1}^{3m} c_{ik} (v_{il} - \bar v_i) \right] \left[ \sum_{j=1}^{3m} c_{jk'} (v_{jl} - \bar v_j) \right]
                                   = \sum_{i,j=1}^{3m} c_{ik} c_{jk'} s_{ij} ;   k, k' = 1, 2, 3

of the empirical variance–covariance matrix

    s_{\bar\beta} = \begin{pmatrix} s_{\bar\beta_1\bar\beta_1} & s_{\bar\beta_1\bar\beta_2} & s_{\bar\beta_1\bar\beta_3} \\ s_{\bar\beta_2\bar\beta_1} & s_{\bar\beta_2\bar\beta_2} & s_{\bar\beta_2\bar\beta_3} \\ s_{\bar\beta_3\bar\beta_1} & s_{\bar\beta_3\bar\beta_2} & s_{\bar\beta_3\bar\beta_3} \end{pmatrix} ;   s_{\bar\beta_k\bar\beta_k} \equiv s^2_{\bar\beta_k}

of the solution vector. The

    s_{ij} = \frac{1}{n-1} \sum_{l=1}^{n} (v_{il} - \bar v_i)(v_{jl} - \bar v_j) ;   i, j = 1, \ldots, 3m

denote the empirical variances and covariances of the input data, each having degrees of freedom \nu = n - 1. For convenience, we let the s_{ij} establish a matrix


    s = (s_{ij}) ;   i, j = 1, \ldots, 3m .   (19.14)

Hence, s_{\bar\beta} may be written as

    s_{\bar\beta} = C^T s\, C .   (19.15)

Systematic Errors

For equal systematic errors and equal error bounds

    f_{\bar x_i} = f_x ,   f_{s,\bar x_i} = f_{s,x} ,   -f_{s,x} \le f_x \le f_{s,x}
    f_{\bar y_i} = f_y ,   f_{s,\bar y_i} = f_{s,y} ,   -f_{s,y} \le f_y \le f_{s,y} ;   i = 1, \ldots, m   (19.16)
    f_{\bar z_i} = f_z ,   f_{s,\bar z_i} = f_{s,z} ,   -f_{s,z} \le f_z \le f_{s,z}

(19.13) yields

    f_{\bar\beta_k} = \sum_{i=1}^{m} c_{ik} f_{\bar x_i} + \sum_{i=1}^{m} c_{i+m,k} f_{\bar y_i} + \sum_{i=1}^{m} c_{i+2m,k} f_{\bar z_i}
                    = f_x \sum_{i=1}^{m} c_{ik} + f_y \sum_{i=1}^{m} c_{i+m,k} + f_z \sum_{i=1}^{m} c_{i+2m,k} .

But as

    \sum_{i=1}^{m} c_{i1} = -\bar\beta_2 ,   \sum_{i=1}^{m} c_{i+m,1} = -\bar\beta_3 ,   \sum_{i=1}^{m} c_{i+2m,1} = 1
    \sum_{i=1}^{m} c_{i2} = 0 ,              \sum_{i=1}^{m} c_{i+m,2} = 0 ,              \sum_{i=1}^{m} c_{i+2m,2} = 0   (19.17)
    \sum_{i=1}^{m} c_{i3} = 0 ,              \sum_{i=1}^{m} c_{i+m,3} = 0 ,              \sum_{i=1}^{m} c_{i+2m,3} = 0

we have

    f_{\bar\beta_1} = -\bar\beta_2 f_x - \bar\beta_3 f_y + f_z ,   f_{\bar\beta_2} = 0 ,   f_{\bar\beta_3} = 0 .   (19.18)

Equal systematic errors only affect \bar\beta_1 and shift the plane parallel to itself either up or down the z-axis.


Confidence Intervals and Overall Uncertainties

We consider the expectations

    E\{\bar\beta_k\} = \mu_{\bar\beta_k} = \beta_{0,k} + f_{\bar\beta_k} ;   k = 1, 2, 3

to be localized by confidence intervals

    \bar\beta_k - \frac{t_P(n-1)}{\sqrt{n}} s_{\bar\beta_k} \le \mu_{\bar\beta_k} \le \bar\beta_k + \frac{t_P(n-1)}{\sqrt{n}} s_{\bar\beta_k} ;   k = 1, 2, 3

with probability P. Hence, the overall uncertainties of the components of the solution vector turn out to be

    u_{\bar\beta_k} = \frac{t_P(n-1)}{\sqrt{n}} s_{\bar\beta_k} + \sum_{i=1}^{3m} | c_{ik} |\, f_{s,i} ;   k = 1, 2, 3 .   (19.19)

For equal systematic errors, this reduces to

    u_{\bar\beta_1} = \frac{t_P(n-1)}{\sqrt{n}} s_{\bar\beta_1} + | \bar\beta_2 |\, f_{s,x} + | \bar\beta_3 |\, f_{s,y} + f_{s,z}
    u_{\bar\beta_2} = \frac{t_P(n-1)}{\sqrt{n}} s_{\bar\beta_2}                                                               (19.20)
    u_{\bar\beta_3} = \frac{t_P(n-1)}{\sqrt{n}} s_{\bar\beta_3} .

19.5 Uncertainty Bowls

Random Errors

Let the \bar\beta_{kl} constitute an ensemble of n least squares planes

    z_l(x, y) = \bar\beta_{1l} + \bar\beta_{2l} x + \bar\beta_{3l} y ;   l = 1, \ldots, n .   (19.21)


Averaged over l, the z_l(x, y) reproduce

    z(x, y) = \frac{1}{n} \sum_{l=1}^{n} z_l(x, y) .

Thus, for any fixed point (x, y), the differences

    z_l(x, y) - z(x, y) = (\bar\beta_{1l} - \bar\beta_1) + (\bar\beta_{2l} - \bar\beta_2) x + (\bar\beta_{3l} - \bar\beta_3) y
                        = \sum_{i=1}^{3m} (c_{i1} + c_{i2} x + c_{i3} y)(v_{il} - \bar v_i) ;   l = 1, \ldots, n

bring forth the empirical variance

    s^2_{z(x,y)} = \frac{1}{n-1} \sum_{l=1}^{n} (z_l(x, y) - z(x, y))^2 = c^T s\, c   (19.22)

in which

    c = \left( c_{11} + c_{12} x + c_{13} y ,   c_{21} + c_{22} x + c_{23} y ,   \ldots ,   c_{3m,1} + c_{3m,2} x + c_{3m,3} y \right)^T

denotes an auxiliary vector.

Systematic Errors

Inserting (19.10) into (19.5) issues

    z(x, y) = \beta_{0,1} + \beta_{0,2} x + \beta_{0,3} y + \sum_{i=1}^{3m} (c_{i1} + c_{i2} x + c_{i3} y)(\bar v_i - \mu_i) + \sum_{i=1}^{3m} (c_{i1} + c_{i2} x + c_{i3} y) f_i .

From this we read the propagated systematic error

    f_{z(x,y)} = \sum_{i=1}^{3m} (c_{i1} + c_{i2} x + c_{i3} y) f_i   (19.23)

its worst-case estimation being

    f_{s,z(x,y)} = \sum_{i=1}^{3m} | c_{i1} + c_{i2} x + c_{i3} y |\, f_{s,i} .   (19.24)

In case of equal systematic errors we have

    f_{z(x,y)} = -\bar\beta_2 f_x - \bar\beta_3 f_y + f_z .   (19.25)


Overall Uncertainty

After all, we expect the uncertainty bowls

    z(x, y) \pm u_{z(x,y)} ,
    u_{z(x,y)} = \frac{t_P(n-1)}{\sqrt{n}} \sqrt{c^T s\, c} + \sum_{i=1}^{3m} | c_{i1} + c_{i2} x + c_{i3} y |\, f_{s,i}   (19.26)

to localize the true plane (16.1). For equal systematic errors this turns into

    z(x, y) \pm u_{z(x,y)} ,
    u_{z(x,y)} = \frac{t_P(n-1)}{\sqrt{n}} \sqrt{c^T s\, c} + | \bar\beta_2 |\, f_{s,x} + | \bar\beta_3 |\, f_{s,y} + f_{s,z} .   (19.27)

19.6 EPC-Region

The EPC-region localizes the triple of true values (\beta_{0,1}, \beta_{0,2}, \beta_{0,3}) with respect to the triple of estimators (\bar\beta_1, \bar\beta_2, \bar\beta_3).

Confidence Ellipsoid

We rewrite (19.10) as

    \bar\beta_k = \mu_{\bar\beta_k} + \sum_{i=1}^{3m} c_{ik} (\bar v_i - \mu_i) ;   \mu_{\bar\beta_k} = \beta_{0,k} + \sum_{i=1}^{3m} c_{ik} f_i ;   k = 1, 2, 3 .

With this, the variance–covariance matrix s_{\bar\beta} of the solution vector \bar\beta, as given in (19.15), produces Hotelling's ellipsoid

    (\bar\beta - \mu_{\bar\beta})^T s_{\bar\beta}^{-1} (\bar\beta - \mu_{\bar\beta}) = \frac{t_P^2(3, n-1)}{n} .

Hence, we expect the confidence ellipsoid

    (\beta - \bar\beta)^T s_{\bar\beta}^{-1} (\beta - \bar\beta) = \frac{t_P^2(3, n-1)}{n}   (19.28)

to localize the point

    \mu_{\bar\beta} = (\mu_{\bar\beta_1}, \mu_{\bar\beta_2}, \mu_{\bar\beta_3})^T   (19.29)

with probability P.


Security Polyhedron

While the errors f_1, f_2, \ldots, f_{3m} "scan" the set of points lying within or on the faces of the 3m-dimensional hypercuboid

    -f_{s,i} \le f_i \le f_{s,i} ;   i = 1, \ldots, 3m

the propagated systematic errors

    f_{\bar\beta_k} = \sum_{i=1}^{3m} c_{ik} f_i ;   k = 1, 2, 3

span a polyhedron. In case of equal systematic errors, the polyhedron degenerates into an interval

    -(| \bar\beta_2 |\, f_{s,x} + | \bar\beta_3 |\, f_{s,y} + f_{s,z}) \le f_{\bar\beta_1} \le | \bar\beta_2 |\, f_{s,x} + | \bar\beta_3 |\, f_{s,y} + f_{s,z} ,
    f_{\bar\beta_2} = 0 ,   f_{\bar\beta_3} = 0   (19.30)

which we notionally consider a "stick" of length 2(| \bar\beta_2 |\, f_{s,x} + | \bar\beta_3 |\, f_{s,y} + f_{s,z}).

Mergence of Ellipsoid Either with Interval or with Polyhedron

The merging of a confidence ellipsoid either with a non-degenerated or with a degenerated security polyhedron is addressed in Appendix G. Figure 19.1 depicts the lattice shaped true plane (16.1), the least squares plane (19.5), the uncertainties (19.19) of the estimators \bar\beta_1, \bar\beta_2, \bar\beta_3 and, finally, the upper and lower uncertainty bowl (19.26). Figure 19.2 refers to equal systematic errors and displays the lattice shaped true plane (16.1), the least squares plane (19.5), the uncertainties (19.20) of the estimators \bar\beta_1, \bar\beta_2, \bar\beta_3 and, finally, the upper and lower uncertainty bowl (19.27).


Fig. 19.1. Planes, case (iii). Left: least squares plane, symmetrically placed uncertainty bowls, and true plane (lattice shaped). Right: uncertainty intervals localizing the plane's true parameters \beta_{0,1}, \beta_{0,2}, and \beta_{0,3}


Fig. 19.2. Planes, case (iii), equal systematic errors in x, y, z. Left: least squares plane, symmetrically placed uncertainty bowls, and true plane (lattice shaped). Right: uncertainty intervals localizing the true values \beta_{0,1}, \beta_{0,2}, and \beta_{0,3}. Equal systematic errors do not affect \bar\beta_2 and \bar\beta_3

Part VII

Fitting of Parabolas

20 Preliminaries

The fitting of parabolas relies on data pairs

    (x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m) ;   m > 3 .

20.1 Distinction of Cases

We shall discuss three fitting situations (Table 20.1):

Table 20.1. Fitting of Parabolas, Three Cases

    Case    Abscissas                Ordinates
    (i)     Error-free               Individual measurements
    (ii)    Error-free               Repeated measurements
    (iii)   Repeated measurements    Repeated measurements

Case (i) considers correct abscissas and individually measured ordinates. Each ordinate is supposed to hold a particular random error, stemming from one and the same normal distribution, and a common unknown systematic error. Case (ii) still assumes error-free abscissas but submits the ordinates to repeated measurements. Then, the scattering of the random errors as well as the actual values of the unknown systematic errors may vary from ordinate to ordinate. Case (iii), finally, addresses erroneous abscissas and erroneous ordinates, thus admitting variances and unknown systematic errors to vary from data point to data point.

20.2 True Parabola

Let the equation of a parabola

    y(x) = \beta_{0,1} + \beta_{0,2} x + \beta_{0,3} x^2   (20.1)


be fulfilled by the m data pairs

    (x_{0,1}, y_{0,1}), (x_{0,2}, y_{0,2}), \ldots, (x_{0,m}, y_{0,m}) .   (20.2)

Written in matrices, this reads

    A \beta_0 = y_0   (20.3)

where

    A = \begin{pmatrix} 1 & x_{0,1} & x_{0,1}^2 \\ 1 & x_{0,2} & x_{0,2}^2 \\ \cdots & \cdots & \cdots \\ 1 & x_{0,m} & x_{0,m}^2 \end{pmatrix} ,
    \beta_0 = \begin{pmatrix} \beta_{0,1} \\ \beta_{0,2} \\ \beta_{0,3} \end{pmatrix} ,
    y_0 = \begin{pmatrix} y_{0,1} \\ y_{0,2} \\ \cdots \\ y_{0,m} \end{pmatrix} .   (20.4)

Given the design matrix A has rank 3, the linear system (20.3) formally reproduces

    \beta_0 = B^T y_0 ,   B = A (A^T A)^{-1} .   (20.5)

We address (20.1) as the true parabola, (20.2) as the true input data, and \beta_0 as the true solution vector. On substituting a set of measured data for the true ones, (20.3) becomes inconsistent. This observation suggests to fit a least squares parabola to the defective input and to assess the true parameters \beta_{0,1}, \beta_{0,2}, and \beta_{0,3} by means of suitably shaped uncertainty intervals.
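A brief numerical check of (20.3)–(20.5), on hypothetical true parameters: for error-free data lying exactly on a parabola, the projection reproduces the true solution vector.

```python
import numpy as np

# Sketch of (20.3)-(20.5): error-free parabola data reproduce beta_0 exactly.
x0 = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
beta0 = np.array([1.0, -0.5, 0.25])                 # hypothetical true parameters
y0 = beta0[0] + beta0[1]*x0 + beta0[2]*x0**2        # true ordinates, (20.1)

A = np.column_stack([np.ones_like(x0), x0, x0**2])  # design matrix, rank 3
B = A @ np.linalg.inv(A.T @ A)                      # B = A (A^T A)^{-1}
print(B.T @ y0)                                     # recovers beta0
```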

21 Parabolas: Case (i)

Case (i) of Table 20.1 considers error-free abscissas and erroneous, individually measured ordinates.

21.1 Fitting Conditions

The adjustment relates to m > 3 data pairs

    (x_{0,1}, y_1), (x_{0,2}, y_2), \ldots, (x_{0,m}, y_m)   (21.1)

assuming the x_{0,i} correct and the y_i erroneous,

    y_i = y_{0,i} + (y_i - \mu_{y_i}) + f_{y_i} ;   i = 1, \ldots, m   (21.2)

with expectations \mu_{y_i} = E\{Y_i\}. We assume the random errors (y_i - \mu_{y_i}) to come from the same normal density and thus to have a common theoretical variance. Further, we suppose the y_i to be burdened by the same unknown systematic error

    f_{y_i} = f_y ;   i = 1, \ldots, m ;   -f_{s,y} \le f_y \le f_{s,y} .   (21.3)

For the time being, we are obviously not in a position to quote the uncertainties of the y_i.

21.2 Orthogonal Projection

We are faced with an inconsistent, over-determined, linear system

    \beta_1 + \beta_2 x_{0,i} + \beta_3 x_{0,i}^2 \approx y_i ;   i = 1, \ldots, m > 3   (21.4)

and ask for a parabola

    y(x) = \bar\beta_1 + \bar\beta_2 x + \bar\beta_3 x^2   (21.5)


fitting the input data in terms of least squares. Introducing

    A = \begin{pmatrix} 1 & x_{0,1} & x_{0,1}^2 \\ 1 & x_{0,2} & x_{0,2}^2 \\ \cdots & \cdots & \cdots \\ 1 & x_{0,m} & x_{0,m}^2 \end{pmatrix} ,
    \beta = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{pmatrix} ,
    y = \begin{pmatrix} y_1 \\ y_2 \\ \cdots \\ y_m \end{pmatrix} ,

the matrix form of (21.4) reads

    A \beta \approx y .   (21.6)

The orthogonal projection produces

    \bar\beta = B^T y ,   \bar\beta_k = \sum_{i=1}^{m} b_{ik} y_i ;   k = 1, 2, 3   (21.7)

where

    B = A (A^T A)^{-1} = (b_{ik}) ;   i = 1, \ldots, m ;   k = 1, 2, 3 .   (21.8)

The components \bar\beta_k; k = 1, 2, 3 of the solution vector (21.7) establish the least squares parabola (21.5).

21.3 Uncertainties of the Input Data

According to the assumption made, the minimized sum of squared residuals

    \bar Q = (y - A\bar\beta)^T (y - A\bar\beta)

issues an estimate

    s_y^2 = \frac{\bar Q}{m - 3}   (21.9)

of the common theoretical variance \sigma_y^2 = E\{S_y^2\} of the input data. The empirical variance s_y^2 has degrees of freedom \nu = m - 3. Obviously, this information is due to our requesting the input data to devise a parabola. Thus, we are in a position to localize the expectations

    \mu_{y_i} = y_{0,i} + f_y ;   i = 1, \ldots, m   (21.10)

through confidence intervals

    y_i - t_P(m-3)\, s_y \le \mu_{y_i} \le y_i + t_P(m-3)\, s_y ;   i = 1, \ldots, m ,   (21.11)


Appendix H, so that the overall uncertainties of the input data take the form

    u_{y_i} = t_P(m-3)\, s_y + f_{s,y} ;   i = 1, \ldots, m .   (21.12)

Hence, we consider the intervals

    y_i - u_{y_i} \le y_{0,i} \le y_i + u_{y_i} ;   i = 1, \ldots, m

to localize the true values y_{0,i} of the ordinates y_i; i = 1, \ldots, m.
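The fit (21.7) and the variance estimate (21.9) might be sketched as follows, on simulated ordinates with hypothetical parameters.

```python
import numpy as np

# Sketch of (21.7)-(21.9): least squares parabola from individually measured
# ordinates; the common variance is estimated from the minimized residuals.
rng = np.random.default_rng(2)
m, sigma = 12, 0.2
x0 = np.linspace(-2, 2, m)
y = 1 - 0.5*x0 + 0.25*x0**2 + rng.normal(0, sigma, m)   # erroneous ordinates

A = np.column_stack([np.ones(m), x0, x0**2])
beta = np.linalg.solve(A.T @ A, A.T @ y)                # solution vector (21.7)
Q = np.sum((y - A @ beta)**2)                           # minimized residuals
s_y2 = Q / (m - 3)                                      # estimate of sigma^2, (21.9)

print(beta, s_y2)
```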

21.4 Uncertainties of the Components of the Solution Vector

Random Errors

Though we are not endued with repeated measurements, we may nevertheless assign an empirical variance–covariance matrix to the components \bar\beta_k; k = 1, 2, 3 of the solution vector. To begin with, we refer to the expectations

    E\{\bar\beta_k\} = \mu_{\bar\beta_k} = \sum_{i=1}^{m} b_{ik} \mu_{y_i} ;   k = 1, 2, 3 .

Thus, the theoretical variances and covariances of the components of the solution vector take the form

    \sigma_{\bar\beta_k\bar\beta_{k'}} = E\{ (\bar\beta_k - \mu_{\bar\beta_k})(\bar\beta_{k'} - \mu_{\bar\beta_{k'}}) \}
      = E\left\{ \left[ \sum_{i=1}^{m} b_{ik} (Y_i - \mu_{y_i}) \right] \left[ \sum_{j=1}^{m} b_{jk'} (Y_j - \mu_{y_j}) \right] \right\} = \sigma_y^2 \sum_{i=1}^{m} b_{ik} b_{ik'} ;   k, k' = 1, 2, 3 .

Substituting s_y^2 for \sigma_y^2 issues the empirical counterparts

    s_{\bar\beta_k\bar\beta_{k'}} = s_y^2 \sum_{i=1}^{m} b_{ik} b_{ik'} .

Hence, the empirical variance–covariance matrix of the solution vector reads

    s_{\bar\beta} = \begin{pmatrix} s_{\bar\beta_1\bar\beta_1} & s_{\bar\beta_1\bar\beta_2} & s_{\bar\beta_1\bar\beta_3} \\ s_{\bar\beta_2\bar\beta_1} & s_{\bar\beta_2\bar\beta_2} & s_{\bar\beta_2\bar\beta_3} \\ s_{\bar\beta_3\bar\beta_1} & s_{\bar\beta_3\bar\beta_2} & s_{\bar\beta_3\bar\beta_3} \end{pmatrix}   (21.13)

the elements having degrees of freedom \nu = m - 3. Whenever appropriate, we denote

    s_{\bar\beta_1\bar\beta_1} = s^2_{\bar\beta_1} ,   s_{\bar\beta_2\bar\beta_2} = s^2_{\bar\beta_2} ,   s_{\bar\beta_3\bar\beta_3} = s^2_{\bar\beta_3} .


Systematic Errors

The formal decompositions

    \bar\beta_k = \sum_{i=1}^{m} b_{ik} \left[ y_{0,i} + (y_i - \mu_{y_i}) + f_y \right] ;   k = 1, 2, 3   (21.14)

suggest the propagated systematic errors

    f_{\bar\beta_k} = f_y \sum_{i=1}^{m} b_{ik} ;   k = 1, 2, 3 .

As

    \sum_{i=1}^{m} b_{i1} = 1 ,   \sum_{i=1}^{m} b_{i2} = 0 ,   \sum_{i=1}^{m} b_{i3} = 0   (21.15)

we have

    f_{\bar\beta_1} = f_y ,   f_{\bar\beta_2} = 0 ,   f_{\bar\beta_3} = 0 .   (21.16)

Confidence Intervals and Overall Uncertainties

We localize the expectations

    E\{\bar\beta_1\} = \mu_{\bar\beta_1} = \beta_{0,1} + f_y
    E\{\bar\beta_2\} = \mu_{\bar\beta_2} = \beta_{0,2}                                        (21.17)
    E\{\bar\beta_3\} = \mu_{\bar\beta_3} = \beta_{0,3}

through confidence intervals

    \bar\beta_k - t_P(m-3)\, s_{\bar\beta_k} \le \mu_{\bar\beta_k} \le \bar\beta_k + t_P(m-3)\, s_{\bar\beta_k} ;   k = 1, 2, 3   (21.18)

with probability P. Hence, the uncertainties of the estimators \bar\beta_1, \bar\beta_2, \bar\beta_3 are given by

    u_{\bar\beta_1} = t_P(m-3)\, s_{\bar\beta_1} + f_{s,y}
    u_{\bar\beta_2} = t_P(m-3)\, s_{\bar\beta_2}                                              (21.19)
    u_{\bar\beta_3} = t_P(m-3)\, s_{\bar\beta_3} .

While u_{\bar\beta_2} and u_{\bar\beta_3} do not exhibit systematic uncertainty components, u_{\bar\beta_1} is burdened by f_{s,y}. This is obvious, as f_y shifts the parabola parallel to itself either down or up the y-axis.


21.5 Uncertainty Band

Inserting the formal decompositions

    \bar\beta_k = \sum_{i=1}^{m} b_{ik} y_i = \beta_{0,k} + \sum_{i=1}^{m} b_{ik} (y_i - \mu_{y_i}) + f_y \sum_{i=1}^{m} b_{ik} ;   k = 1, 2, 3   (21.20)

into the least squares parabola y(x) = \bar\beta_1 + \bar\beta_2 x + \bar\beta_3 x^2 yields

    y(x) = \beta_{0,1} + \beta_{0,2} x + \beta_{0,3} x^2 + \sum_{i=1}^{m} (b_{i1} + b_{i2} x + b_{i3} x^2)(y_i - \mu_{y_i}) + f_y .

For any fixed x, the expectation of Y(x) brings forth

    E\{Y(x)\} = \mu_{y(x)} = \beta_{0,1} + \beta_{0,2} x + \beta_{0,3} x^2 + f_y .

Thus, there is an x-dependent theoretical variance

    \sigma^2_{y(x)} = E\{(Y(x) - \mu_{y(x)})^2\} = \sigma_y^2 \sum_{i=1}^{m} (b_{i1} + b_{i2} x + b_{i3} x^2)^2 ,   \sigma_y^2 = E\{(Y_i - \mu_{y_i})^2\} .

Substituting s_y^2 for \sigma_y^2 and submitting f_y to a worst-case estimation produces the uncertainty band

    y(x) \pm u_{y(x)} ,
    u_{y(x)} = t_P(m-3)\, s_y \sqrt{\sum_{i=1}^{m} (b_{i1} + b_{i2} x + b_{i3} x^2)^2} + f_{s,y} .   (21.21)

The boundary lines, running in symmetry to y(x), are expected to localize the true parabola (20.1).
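The band (21.21) can be evaluated pointwise as sketched below; data and the bound f_{s,y} are hypothetical, and t_P(m-3) is taken as the two-sided 95% Student factor for m = 12 (9 degrees of freedom), t = 2.262.

```python
import numpy as np

# Sketch of the uncertainty band (21.21): half-width u_y(x) of the band.
rng = np.random.default_rng(3)
m, tP, fs_y = 12, 2.262, 0.1
x0 = np.linspace(-2, 2, m)
y = 1 - 0.5*x0 + 0.25*x0**2 + rng.normal(0, 0.2, m)

A = np.column_stack([np.ones(m), x0, x0**2])
B = A @ np.linalg.inv(A.T @ A)
beta = B.T @ y
s_y = np.sqrt(np.sum((y - A @ beta)**2) / (m - 3))     # s_y of (21.9)

def band_halfwidth(x):
    b = B[:, 0] + B[:, 1]*x + B[:, 2]*x**2
    return tP * s_y * np.sqrt(np.sum(b**2)) + fs_y     # u_y(x), (21.21)

print(band_halfwidth(0.0))
```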

21.6 EP C-Region The EP C-region confines the triple of true values (β0,1 , β0,2 , β0,3 ) with respect to the triple of estimators (β¯1 , β¯2 , β¯3 ).


Confidence Ellipsoid

Let us consider the formal decompositions

    \bar\beta_k = \mu_{\bar\beta_k} + \sum_{i=1}^{m} b_{ik} (y_i - \mu_{y_i}) ;   k = 1, 2, 3

with \mu_{\bar\beta_k} as given in (21.17). As s_{\bar\beta} is known from (21.13), Hotelling's ellipsoid reads

    (\bar\beta - \mu_{\bar\beta})^T s_{\bar\beta}^{-1} (\bar\beta - \mu_{\bar\beta}) = t_P^2(3, m-3) ,

Appendix C. The pertaining confidence ellipsoid

    (\beta - \bar\beta)^T s_{\bar\beta}^{-1} (\beta - \bar\beta) = t_P^2(3, m-3)   (21.22)

is expected to localize the point

    \mu_{\bar\beta} = (\mu_{\bar\beta_1}, \mu_{\bar\beta_2}, \mu_{\bar\beta_3})^T   (21.23)

with probability P.

Security Polygon

The measuring conditions referred to entail the security polygon to degenerate into an interval

    -f_{s,y} \le f_{\bar\beta_1} \le f_{s,y} ,   f_{\bar\beta_2} = 0 ,   f_{\bar\beta_3} = 0 .   (21.24)

For convenience, we consider the interval a "stick" of length 2 f_{s,y}.

Mergence of Ellipsoid and Interval

The merging of a confidence ellipsoid with a degenerated security polyhedron is addressed in Appendix G. Figure 21.1 displays the true parabola (20.1), the fitted parabola (21.5), the uncertainties (21.19) of the estimators \bar\beta_1, \bar\beta_2, and \bar\beta_3 and, finally, the uncertainty band (21.21). The illustrations are based on simulated data implying known true values and, in particular, extensive graphical scale transformations, Appendix A.


Fig. 21.1. Parabolas, case (i). Left: adjusted parabola, uncertainty band, and true parabola (dashed line). Right: Uncertainty intervals localizing the parabola’s true parameters β0,1 , β0,2 , and β0,3 . Equal systematic errors do not affect β¯2 and β¯3

22 Parabolas: Case (ii)

Case (ii), Table 20.1, still assumes correct abscissas but now introduces repeated measurements of the ordinates. As the empirical variances of the ordinates are directly accessible, the scattering of the random errors may vary from ordinate to ordinate; at the same time, varying unknown systematic errors are admitted. The minimized sum of squared residuals still enters the construction of the solution vector; it proves, however, no longer serviceable with respect to assessing uncertainties.

22.1 Fitting Conditions

Given m > 3 data pairs

    (x_{0,1}, ȳ_1), (x_{0,2}, ȳ_2), …, (x_{0,m}, ȳ_m)    (22.1)

the x-coordinates being error-free and the y-coordinates measured quantities

    ȳ_i = (1/n) Σ_{l=1}^n y_{il} = y_{0,i} + (ȳ_i − μ_{ȳi}) + f_{ȳi} ;    i = 1, …, m
    E{Ȳ_i} = μ_{ȳi} ,    −f_{s,ȳi} ≤ f_{ȳi} ≤ f_{s,ȳi}

each comprising the same number n of repeated measurements. Combining the empirical variances of the ordinates

    s²_{ȳi} = 1/(n−1) Σ_{l=1}^n (y_{il} − ȳ_i)² ;    i = 1, …, m

with the boundaries of the unknown systematic errors, the uncertainties of the input data are given by

    ȳ_i ± u_{ȳi} ,    u_{ȳi} = t_P(n−1)/√n · s_{ȳi} + f_{s,ȳi} ;    i = 1, …, m .    (22.2)

We expect the intervals

    ȳ_i − u_{ȳi} ≤ y_{0,i} ≤ ȳ_i + u_{ȳi} ;    i = 1, …, m

to localize the true ordinates y_{0,i} of the measured data ȳ_i.


22.2 Orthogonal Projection

Let us rewrite the linear, inconsistent, over-determined system

    β_1 + β_2 x_{0,i} + β_3 x²_{0,i} ≈ ȳ_i ;    i = 1, …, m > 3

in matrix form. Defining

    A = | 1  x_{0,1}  x²_{0,1} |
        | 1  x_{0,2}  x²_{0,2} |
        | ⋯     ⋯       ⋯    |
        | 1  x_{0,m}  x²_{0,m} | ,    β = (β_1, β_2, β_3)^T ,    ȳ = (ȳ_1, ȳ_2, …, ȳ_m)^T    (22.3)

we have A β ≈ ȳ. The orthogonal projection yields

    β̄ = B^T ȳ ,    β̄_k = Σ_{i=1}^m b_{ik} ȳ_i ;    k = 1, 2, 3    (22.4)

where

    B = A (A^T A)^{−1} = (b_{ik}) ;    i = 1, …, m ;    k = 1, 2, 3 .

The components β̄_k of the solution vector β̄ establish the fitted parabola

    y(x) = β̄_1 + β̄_2 x + β̄_3 x² .    (22.5)
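The orthogonal projection above is readily checked numerically. The following is a minimal sketch, with assumed data values, of the construction B = A (AᵀA)⁻¹ and the solution vector β̄ = Bᵀ ȳ; the abscissas and mean ordinates are hypothetical placeholders.

```python
import numpy as np

# Assumed error-free abscissas and sample means of the ordinates.
x0 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
ybar = np.array([2.1, 4.9, 10.2, 16.8, 26.1])

A = np.column_stack([np.ones_like(x0), x0, x0**2])  # m x 3 design matrix
B = A @ np.linalg.inv(A.T @ A)                      # B = A (A^T A)^{-1}
beta = B.T @ ybar                                   # solution vector, beta_k = sum_i b_ik ybar_i

print(beta)
```

Since BᵀA equals the 3 x 3 identity, β̄ agrees with the ordinary least squares solution of A β ≈ ȳ.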

22.3 Uncertainties of the Components of the Solution Vector

Random Errors

We notionally consider an ensemble of least squares parabolas taken from the data sets (y_{1l}, y_{2l}, …, y_{ml}); l = 1, …, n, Sect. 7.3. Obviously, the belonging estimators are

    β̄_{kl} = Σ_{i=1}^m b_{ik} y_{il} ;    l = 1, …, n .    (22.6)

Inserting the means

    ȳ_i = (1/n) Σ_{l=1}^n y_{il} ;    i = 1, …, m

into (22.4) provides the same result, namely

    β̄_k = Σ_{i=1}^m b_{ik} [ (1/n) Σ_{l=1}^n y_{il} ] = (1/n) Σ_{l=1}^n Σ_{i=1}^m b_{ik} y_{il} = (1/n) Σ_{l=1}^n β̄_{kl} ;    k = 1, 2, 3 .

The differences

    β̄_{kl} − β̄_k = Σ_{i=1}^m b_{ik} (y_{il} − ȳ_i)

bring forth the elements

    s_{β̄k β̄k'} = 1/(n−1) Σ_{l=1}^n (β̄_{kl} − β̄_k)(β̄_{k'l} − β̄_{k'})
               = 1/(n−1) Σ_{l=1}^n [ Σ_{i=1}^m b_{ik} (y_{il} − ȳ_i) ][ Σ_{j=1}^m b_{jk'} (y_{jl} − ȳ_j) ]
               = Σ_{i,j=1}^m b_{ik} b_{jk'} s_{ij} ;    k, k' = 1, 2, 3

of the empirical variance–covariance matrix of the solution vector. Here, the

    s_{ij} = 1/(n−1) Σ_{l=1}^n (y_{il} − ȳ_i)(y_{jl} − ȳ_j) ;    i, j = 1, …, m

denote the elements of the empirical variance–covariance matrix

    s = (s_{ij}) ;    i, j = 1, …, m    (22.7)

of the input data, each having degrees of freedom ν = n − 1. Thus, the empirical variance–covariance matrix of the solution vector β̄ reads

    s_β̄ = | s_{β̄1β̄1}  s_{β̄1β̄2}  s_{β̄1β̄3} |
          | s_{β̄2β̄1}  s_{β̄2β̄2}  s_{β̄2β̄3} | = B^T s B ,    s_{β̄k β̄k} ≡ s²_{β̄k} .    (22.8)
          | s_{β̄3β̄1}  s_{β̄3β̄2}  s_{β̄3β̄3} |


Systematic Errors

The propagated systematic errors as issued by the decompositions

    β̄_k = Σ_{i=1}^m b_{ik} [ y_{0,i} + (ȳ_i − μ_{ȳi}) + f_{ȳi} ] ;    k = 1, 2, 3    (22.9)

are

    f_{β̄k} = Σ_{i=1}^m b_{ik} f_{ȳi} ;    k = 1, 2, 3 .    (22.10)

This suggests the worst-case estimations

    f_{s,β̄k} = Σ_{i=1}^m |b_{ik}| f_{s,ȳi} ;    k = 1, 2, 3 .    (22.11)

Should the f_{ȳi} and f_{s,ȳi} be equal,

    f_{ȳi} = f_y ,    f_{s,ȳi} = f_{s,y} ;    i = 1, …, m ;    −f_{s,y} ≤ f_y ≤ f_{s,y}    (22.12)

(21.15) produces

    f_{β̄1} = f_y ,    f_{β̄2} = 0 ,    f_{β̄3} = 0 .    (22.13)

Confidence Intervals and Overall Uncertainties

Confining the expectations

    E{β̄_1} = μ_{β̄1} = β_{0,1} + f_{β̄1} ,    E{β̄_2} = μ_{β̄2} = β_{0,2} + f_{β̄2} ,    E{β̄_3} = μ_{β̄3} = β_{0,3} + f_{β̄3}

with probability P to confidence intervals

    β̄_k − t_P(n−1)/√n · s_{β̄k} ≤ μ_{β̄k} ≤ β̄_k + t_P(n−1)/√n · s_{β̄k} ;    k = 1, 2, 3    (22.14)

suggests to assess the uncertainties of the components of the solution vector as

    u_{β̄k} = t_P(n−1)/√n · s_{β̄k} + f_{s,β̄k} ;    k = 1, 2, 3 .    (22.15)

Given the f_{ȳi} are equal, this gives way to

    u_{β̄1} = t_P(n−1)/√n · s_{β̄1} + f_{s,y}
    u_{β̄2} = t_P(n−1)/√n · s_{β̄2}    (22.16)
    u_{β̄3} = t_P(n−1)/√n · s_{β̄3} .
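The chain from repeated measurements to the overall uncertainties can be sketched numerically. The following is a minimal illustration with simulated, assumed data; the t-quantile is a hard-coded placeholder value, not computed from a distribution.

```python
import numpy as np

# Simulated repeats: n measurements at each of m points (assumed data).
rng = np.random.default_rng(1)
m, n = 6, 8
x0 = np.linspace(1.0, 6.0, m)
y = 1.0 + 0.5 * x0 + 0.2 * x0**2 + rng.normal(0.0, 0.05, size=(n, m))

A = np.column_stack([np.ones(m), x0, x0**2])
B = A @ np.linalg.inv(A.T @ A)
ybar = y.mean(axis=0)
beta = B.T @ ybar                      # solution vector as in (22.4)

s = np.cov(y, rowvar=False)            # s_ij of the input data, nu = n - 1 each
s_beta = B.T @ s @ B                   # variance-covariance of the solution vector

fs_y = np.full(m, 0.02)                # assumed bounds f_{s,ybar_i}
fs_beta = np.abs(B).T @ fs_y           # worst-case estimate, sum_i |b_ik| f_{s,ybar_i}

t_P = 2.36                             # assumed placeholder for t_P(n - 1)
u_beta = t_P / np.sqrt(n) * np.sqrt(np.diag(s_beta)) + fs_beta
print(u_beta)
```

The covariance of the ensemble estimators β̄_kl = Σ_i b_ik y_il reproduces Bᵀ s B exactly, since the sample covariance is bilinear.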

22.4 Uncertainty Band

Random Errors

By means of the estimators β̄_{kl} as given in (22.6) we may visualize an ensemble of n least squares parabolas

    y_l(x) = β̄_{1l} + β̄_{2l} x + β̄_{3l} x² ;    l = 1, …, n .

On the average, the members of the ensemble reissue the least squares parabola

    y(x) = (1/n) Σ_{l=1}^n y_l(x) = β̄_1 + β̄_2 x + β̄_3 x² .

Hence, for any fixed x, the differences

    y_l(x) − y(x) = (β̄_{1l} − β̄_1) + (β̄_{2l} − β̄_2) x + (β̄_{3l} − β̄_3) x² = Σ_{i=1}^m (b_{i1} + b_{i2} x + b_{i3} x²)(y_{il} − ȳ_i) ;    l = 1, …, n

bring forth the empirical variance

    s²_{y(x)} = 1/(n−1) Σ_{l=1}^n (y_l(x) − y(x))² = b^T s b    (22.17)

where

    b = [ (b_{11} + b_{12} x + b_{13} x²)  (b_{21} + b_{22} x + b_{23} x²)  …  (b_{m1} + b_{m2} x + b_{m3} x²) ]^T    (22.18)

designates an auxiliary vector and s the empirical variance–covariance matrix of the input data as quoted in (22.7). Hence, the term

    t_P(n−1)/√n · s_{y(x)}    (22.19)

assesses the influence of random errors.

Systematic Errors

Inserting (22.9),

    β̄_k = β_{0,k} + Σ_{i=1}^m b_{ik} [ (ȳ_i − μ_{ȳi}) + f_{ȳi} ] ;    k = 1, 2, 3

into y(x) = β̄_1 + β̄_2 x + β̄_3 x² produces

    y(x) = β_{0,1} + β_{0,2} x + β_{0,3} x² + Σ_{i=1}^m (b_{i1} + b_{i2} x + b_{i3} x²)(ȳ_i − μ_{ȳi}) + Σ_{i=1}^m (b_{i1} + b_{i2} x + b_{i3} x²) f_{ȳi} .    (22.20)

Hence, we consider

    f_{y(x)} = Σ_{i=1}^m (b_{i1} + b_{i2} x + b_{i3} x²) f_{ȳi}    (22.21)

the propagated systematic error, its worst-case estimation being

    f_{s,y(x)} = Σ_{i=1}^m |b_{i1} + b_{i2} x + b_{i3} x²| f_{s,ȳi} .    (22.22)

Assuming equal systematic errors, (21.15) causes

    f_{s,y(x)} = f_{s,y} .    (22.23)

Overall Uncertainty

Combining (22.19) with (22.22) yields y(x) ± u_{y(x)},

    u_{y(x)} = t_P(n−1)/√n · s_{y(x)} + Σ_{i=1}^m |b_{i1} + b_{i2} x + b_{i3} x²| f_{s,ȳi}    (22.24)

which for equal systematic errors turns into

    u_{y(x)} = t_P(n−1)/√n · s_{y(x)} + f_{s,y} .    (22.25)

At any rate, we expect the uncertainty band to localize the true parabola (20.1).
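The half-width of the band can be evaluated pointwise. A minimal sketch under assumed simulated data and an assumed placeholder t-quantile:

```python
import numpy as np

# Assumed data: n repeats at m points, plus assumed systematic bounds.
rng = np.random.default_rng(2)
m, n = 6, 10
x0 = np.linspace(1.0, 6.0, m)
y = 2.0 - 0.3 * x0 + 0.1 * x0**2 + rng.normal(0.0, 0.04, size=(n, m))
fs = np.full(m, 0.02)                        # assumed bounds f_{s,ybar_i}

A = np.column_stack([np.ones(m), x0, x0**2])
B = A @ np.linalg.inv(A.T @ A)
s = np.cov(y, rowvar=False)

t_P = 2.26                                   # assumed placeholder for t_P(n - 1)

def band_halfwidth(x):
    b = B[:, 0] + B[:, 1] * x + B[:, 2] * x**2      # auxiliary vector b(x)
    s_yx = np.sqrt(b @ s @ b)                       # empirical s_{y(x)} via b^T s b
    return t_P / np.sqrt(n) * s_yx + np.abs(b) @ fs # random part plus worst-case part

print(band_halfwidth(3.5))
```

Evaluating `band_halfwidth` over a grid of x values traces the two boundary lines of the band around the fitted parabola.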

22.5 EP C-Region

The EP C-region localizes the triple of true values (β_{0,1}, β_{0,2}, β_{0,3}) with respect to the triple of estimators (β̄_1, β̄_2, β̄_3).

Confidence Ellipsoid

Recasting (22.9), we find

    β̄_k = μ_{β̄k} + Σ_{i=1}^m b_{ik} (ȳ_i − μ_{ȳi})

with

    μ_{β̄k} = β_{0,k} + Σ_{i=1}^m b_{ik} f_{ȳi} .

The variance–covariance matrix of the solution vector s_β̄ as given in (22.8) produces Hotelling's ellipsoid

    (β̄ − μ_β̄)^T s_β̄^{−1} (β̄ − μ_β̄) = t_P²(3, n−1)/n .

The pertaining confidence ellipsoid

    (β − β̄)^T s_β̄^{−1} (β − β̄) = t_P²(3, n−1)/n    (22.26)

is expected to localize the point

    μ_β̄ = (μ_{β̄1}, μ_{β̄2}, μ_{β̄3})^T    (22.27)

with probability P.

Security Polyhedron

While the f_{ȳ1}, f_{ȳ2}, …, f_{ȳm} "scan" the set of points lying within or on the faces of the m-dimensional hypercuboid

    −f_{s,ȳi} ≤ f_{ȳi} ≤ f_{s,ȳi} ;    i = 1, …, m

the components of the propagated systematic error

    f_{β̄k} = Σ_{i=1}^m b_{ik} f_{ȳi} ;    k = 1, 2, 3

span a polyhedron. However, assuming equal systematic errors, the polyhedron degenerates into an interval

    −f_{s,y} ≤ f_{β̄1} ≤ f_{s,y} ,    f_{β̄2} = 0 ,    f_{β̄3} = 0    (22.28)

which we notionally consider a "stick" of length 2f_{s,y}.

Mergence of Ellipsoid Either with Interval or with Polyhedron

The merging of a confidence ellipsoid either with a non-degenerated or with a degenerated security polyhedron is addressed in Appendix G. Figure 22.1 displays the true parabola (20.1), the fitted parabola (22.5), the uncertainties (22.15) of the estimators β̄_1, β̄_2, β̄_3 and, finally, the uncertainty band (22.24). Figure 22.2 considers equal systematic errors and renders the true parabola (20.1), the fitted parabola (22.5), the uncertainties (22.16) of the estimators β̄_1, β̄_2, β̄_3 and, ultimately, the uncertainty band (22.25). The illustrations are based on simulated data implying known true values and, in particular, extensive graphical scale transformations, Appendix A.


Fig. 22.1. Parabolas, case (ii). Left: adjusted parabola, uncertainty band, and true parabola (dashed line). Right: Uncertainty intervals localizing the parabola's true parameters β_{0,1}, β_{0,2}, and β_{0,3}


Fig. 22.2. Parabolas, case (ii), equal systematic errors. Left: adjusted parabola, uncertainty band, and true parabola (dashed line). Right: Uncertainty intervals localizing the parabola’s true parameters β0,1 , β0,2 , and β0,3 . Equal systematic errors do not affect β¯2 and β¯3

23 Parabolas: Case (iii)

Case (iii), Table 20.1, admits erroneous abscissas and erroneous ordinates, blurred by varying theoretical variances and varying unknown systematic errors. The errors of the design matrix do not affect the orthogonal projection; they do, however, render the assessment of uncertainties more intricate.

23.1 Fitting Conditions

Consider m > 3 data pairs

    (x̄_1, ȳ_1), (x̄_2, ȳ_2), …, (x̄_m, ȳ_m)    (23.1)

with formal decompositions

    x̄_i = (1/n) Σ_{l=1}^n x_{il} = x_{0,i} + (x̄_i − μ_{x̄i}) + f_{x̄i} ;    i = 1, …, m
    E{X̄_i} = μ_{x̄i} ,    −f_{s,x̄i} ≤ f_{x̄i} ≤ f_{s,x̄i}

and

    ȳ_i = (1/n) Σ_{l=1}^n y_{il} = y_{0,i} + (ȳ_i − μ_{ȳi}) + f_{ȳi} ;    i = 1, …, m
    E{Ȳ_i} = μ_{ȳi} ,    −f_{s,ȳi} ≤ f_{ȳi} ≤ f_{s,ȳi} .

The empirical variances

    s²_{x̄i} = 1/(n−1) Σ_{l=1}^n (x_{il} − x̄_i)² ,    s²_{ȳi} = 1/(n−1) Σ_{l=1}^n (y_{il} − ȳ_i)² ;    i = 1, …, m

and the boundaries of systematic errors issue the uncertainties of the input data

    x̄_i ± u_{x̄i} ,    u_{x̄i} = t_P(n−1)/√n · s_{x̄i} + f_{s,x̄i}
    ȳ_i ± u_{ȳi} ,    u_{ȳi} = t_P(n−1)/√n · s_{ȳi} + f_{s,ȳi} ;    i = 1, …, m .    (23.2)

We expect the intervals x̄_i ± u_{x̄i} and ȳ_i ± u_{ȳi} to localize the true values x_{0,i} and y_{0,i}, respectively.

23.2 Orthogonal Projection

Putting

    A = | 1  x̄_1  x̄_1² |
        | 1  x̄_2  x̄_2² |
        | ⋯    ⋯    ⋯  |
        | 1  x̄_m  x̄_m² | ,    β = (β_1, β_2, β_3)^T ,    ȳ = (ȳ_1, ȳ_2, …, ȳ_m)^T

we may transfer the linear, inconsistent, and over-determined system

    β_1 + β_2 x̄_i + β_3 x̄_i² ≈ ȳ_i ;    i = 1, …, m > 3    (23.3)

into A β ≈ ȳ. The orthogonal projection produces the solution vector

    β̄ = B^T ȳ ,    β̄_k = Σ_{i=1}^m b_{ik} ȳ_i ;    k = 1, 2, 3    (23.4)

with

    B = A (A^T A)^{−1} = (b_{ik}) ;    i = 1, …, m ;    k = 1, 2, 3 .

Hence, the components β̄_k; k = 1, 2, 3 of the solution vector β̄ establish the least squares parabola

    y(x) = β̄_1 + β̄_2 x + β̄_3 x² .    (23.5)

As the elements b_{ik} are faulty, in order to assess measurement uncertainties, we are asked to carry out series expansions.


23.3 Series Expansion of the Solution Vector

We submit the β̄_k; k = 1, 2, 3 to series expansions throughout a neighborhood of the point

    (x_{0,1}, …, x_{0,m} ; y_{0,1}, …, y_{0,m}),

primarily, with respect to the n points

    (x_{1l}, x_{2l}, …, x_{ml} ; y_{1l}, y_{2l}, …, y_{ml}) ;    l = 1, …, n

producing

    β̄_{kl}(x_{1l}, …, x_{ml} ; y_{1l}, …, y_{ml}) = β̄_k(x_{0,1}, …, x_{0,m} ; y_{0,1}, …, y_{0,m})
        + Σ_{i=1}^m (∂β̄_k/∂x_{0,i})(x_{il} − μ_{x̄i}) + Σ_{i=1}^m (∂β̄_k/∂y_{0,i})(y_{il} − μ_{ȳi}) + ⋯
        + Σ_{i=1}^m (∂β̄_k/∂x_{0,i}) f_{x̄i} + Σ_{i=1}^m (∂β̄_k/∂y_{0,i}) f_{ȳi} + ⋯    (23.6)

whereat β̄_k(x_{0,1}, …, x_{0,m} ; y_{0,1}, …, y_{0,m}) ≡ β_{0,k} and, secondly, with respect to the sample-dependent means

    (x̄_1, x̄_2, …, x̄_m ; ȳ_1, ȳ_2, …, ȳ_m)

issuing

    β̄_k(x̄_1, …, x̄_m ; ȳ_1, …, ȳ_m) = β̄_k(x_{0,1}, …, x_{0,m} ; y_{0,1}, …, y_{0,m})
        + Σ_{i=1}^m (∂β̄_k/∂x_{0,i})(x̄_i − μ_{x̄i}) + Σ_{i=1}^m (∂β̄_k/∂y_{0,i})(ȳ_i − μ_{ȳi}) + ⋯
        + Σ_{i=1}^m (∂β̄_k/∂x_{0,i}) f_{x̄i} + Σ_{i=1}^m (∂β̄_k/∂y_{0,i}) f_{ȳi} + ⋯ .    (23.7)

As there is no other choice, we approximate the derivatives in x_{0,1}, …, y_{0,m} through derivatives in x̄_1, …, ȳ_m. For convenience, we abbreviate the partial derivatives following

    c_{i1} = ∂β̄_1/∂x̄_i ,    c_{i2} = ∂β̄_2/∂x̄_i ,    c_{i3} = ∂β̄_3/∂x̄_i
    c_{i+m,1} = ∂β̄_1/∂ȳ_i ,    c_{i+m,2} = ∂β̄_2/∂ȳ_i ,    c_{i+m,3} = ∂β̄_3/∂ȳ_i ;    i = 1, …, m    (23.8)

and assign the coefficients c_{ik} to an auxiliary matrix

    C^T = | c_{11}  c_{21}  ⋯  c_{2m,1} |
          | c_{12}  c_{22}  ⋯  c_{2m,2} |
          | c_{13}  c_{23}  ⋯  c_{2m,3} | .

Furthermore, we put

    v_{il} = x_{il} ,    v_{i+m,l} = y_{il}
    v̄_i = x̄_i ,    v̄_{i+m} = ȳ_i
    μ_i = μ_{x̄i} ,    μ_{i+m} = μ_{ȳi}    (23.9)
    f_i = f_{x̄i} ,    f_{i+m} = f_{ȳi}
    f_{s,i} = f_{s,x̄i} ,    f_{s,i+m} = f_{s,ȳi} .

Linearizing the expansions, we find after all

    β̄_{kl} = β_{0,k} + Σ_{i=1}^{2m} c_{ik} (v_{il} − μ_i) + Σ_{i=1}^{2m} c_{ik} f_i ;    l = 1, …, n    (23.10)
    β̄_k  = β_{0,k} + Σ_{i=1}^{2m} c_{ik} (v̄_i − μ_i) + Σ_{i=1}^{2m} c_{ik} f_i ;    k = 1, 2, 3 .

Subtraction produces

    β̄_{kl} − β̄_k = Σ_{i=1}^{2m} c_{ik} (v_{il} − v̄_i) ;    k = 1, 2, 3 ;    l = 1, …, n .    (23.11)

Considering an ensemble of n least squares parabolas based on the respective l-th repeated measurements of the m measuring points i = 1, …, m,

    (x_{1l}, …, x_{ml} ; y_{1l}, …, y_{ml}) ;    l = 1, …, n

we have

    β̄_k = (1/n) Σ_{l=1}^n β̄_{kl} ;    k = 1, 2, 3 .    (23.12)

While (23.11) provides a means to handle the propagation of random errors, the terms

    f_{β̄k} = Σ_{i=1}^{2m} c_{ik} f_i ;    k = 1, 2, 3    (23.13)

devise the propagated systematic errors.


23.4 Uncertainties of the Components of the Solution Vector

Random Errors

Deploying (23.11), we set up the elements

    s_{β̄k β̄k'} = 1/(n−1) Σ_{l=1}^n (β̄_{kl} − β̄_k)(β̄_{k'l} − β̄_{k'})
               = 1/(n−1) Σ_{l=1}^n [ Σ_{i=1}^{2m} c_{ik} (v_{il} − v̄_i) ][ Σ_{j=1}^{2m} c_{jk'} (v_{jl} − v̄_j) ]
               = Σ_{i,j=1}^{2m} c_{ik} c_{jk'} s_{ij} ;    k, k' = 1, 2, 3

of the empirical variance–covariance matrix

    s_β̄ = | s_{β̄1β̄1}  s_{β̄1β̄2}  s_{β̄1β̄3} |
          | s_{β̄2β̄1}  s_{β̄2β̄2}  s_{β̄2β̄3} | ,    s_{β̄k β̄k} ≡ s²_{β̄k}
          | s_{β̄3β̄1}  s_{β̄3β̄2}  s_{β̄3β̄3} |

of the solution vector. Again, the

    s_{ij} = 1/(n−1) Σ_{l=1}^n (v_{il} − v̄_i)(v_{jl} − v̄_j) ;    i, j = 1, …, 2m

denote the elements of the empirical variance–covariance matrix

    s = (s_{ij}) ;    i, j = 1, …, 2m    (23.14)

of the input data, each s_{ij} having degrees of freedom ν = n − 1. As done previously, we condense the matrix s_β̄ to

    s_β̄ = C^T s C .    (23.15)

Systematic Errors

Assuming equal systematic errors and error bounds

    f_{x̄i} = f_x ,    f_{s,x̄i} = f_{s,x} ,    −f_{s,x} ≤ f_x ≤ f_{s,x}
    f_{ȳi} = f_y ,    f_{s,ȳi} = f_{s,y} ,    −f_{s,y} ≤ f_y ≤ f_{s,y} ;    i = 1, …, m    (23.16)

(23.13) yields

    f_{β̄k} = Σ_{i=1}^m c_{ik} f_{x̄i} + Σ_{i=1}^m c_{i+m,k} f_{ȳi} = f_x Σ_{i=1}^m c_{ik} + f_y Σ_{i=1}^m c_{i+m,k} .

But as

    Σ_{i=1}^m c_{i1} = −β̄_2 ,    Σ_{i=1}^m c_{i+m,1} = 1
    Σ_{i=1}^m c_{i2} = −2β̄_3 ,   Σ_{i=1}^m c_{i+m,2} = 0    (23.17)
    Σ_{i=1}^m c_{i3} = 0 ,       Σ_{i=1}^m c_{i+m,3} = 0

we have

    f_{β̄1} = −β̄_2 f_x + f_y ,    f_{β̄2} = −2 β̄_3 f_x ,    f_{β̄3} = 0    (23.18)

from which we take that equal systematic errors shift the parabola parallel to itself.

Confidence Intervals and Overall Uncertainties

Localizing the expectations

    E{β̄_1} = μ_{β̄1} = β_{0,1} + f_{β̄1} ,    E{β̄_2} = μ_{β̄2} = β_{0,2} + f_{β̄2} ,    E{β̄_3} = μ_{β̄3} = β_{0,3} + f_{β̄3}

with probability P by confidence intervals

    β̄_k − t_P(n−1)/√n · s_{β̄k} ≤ μ_{β̄k} ≤ β̄_k + t_P(n−1)/√n · s_{β̄k} ;    k = 1, 2, 3

the overall uncertainties of the components of the solution vector are seen to be

    u_{β̄k} = t_P(n−1)/√n · s_{β̄k} + Σ_{i=1}^{2m} |c_{ik}| f_{s,i} ;    k = 1, 2, 3 .    (23.19)

In case the systematic errors are equal, this turns into

    u_{β̄1} = t_P(n−1)/√n · s_{β̄1} + |β̄_2| f_{s,x} + f_{s,y}
    u_{β̄2} = t_P(n−1)/√n · s_{β̄2} + 2 |β̄_3| f_{s,x}    (23.20)
    u_{β̄3} = t_P(n−1)/√n · s_{β̄3} .
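The column sums (23.17) can be verified numerically. The following is a sketch, under assumed data, that approximates the coefficients c_ik by central finite differences of the least squares estimators and checks the sums against −β̄_2, −2β̄_3, 0 and 1, 0, 0.

```python
import numpy as np

def beta(xbar, ybar):
    """Least squares parabola coefficients for means (xbar_i, ybar_i)."""
    A = np.column_stack([np.ones_like(xbar), xbar, xbar**2])
    return np.linalg.solve(A.T @ A, A.T @ ybar)

m = 6
xbar = np.linspace(1.0, 6.0, m)
ybar = 0.5 + 1.5 * xbar - 0.2 * xbar**2     # assumed mean values

h = 1e-6
C = np.zeros((2 * m, 3))                    # rows i: d/dxbar_i, rows i+m: d/dybar_i
for i in range(m):
    d = np.zeros(m); d[i] = h
    C[i] = (beta(xbar + d, ybar) - beta(xbar - d, ybar)) / (2 * h)
    C[i + m] = (beta(xbar, ybar + d) - beta(xbar, ybar - d)) / (2 * h)

b1, b2, b3 = beta(xbar, ybar)
cx = C[:m].sum(axis=0)                      # sums of the x-derivatives
cy = C[m:].sum(axis=0)                      # sums of the y-derivatives
print(cx, cy)
```

The x-sums follow from shifting all abscissas by a common amount, which reparametrizes the same fit; the y-sums from shifting all ordinates, which only raises β̄_1.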

23.5 Uncertainty Band

Random Errors

Let us visualize an ensemble of n least squares parabolas

    y_l(x) = β̄_{1l} + β̄_{2l} x + β̄_{3l} x² ;    l = 1, …, n

as addressed in (23.12). Obviously, the ensemble reestablishes

    y(x) = (1/n) Σ_{l=1}^n y_l(x) .

For any fixed x, the differences

    y_l(x) − y(x) = (β̄_{1l} − β̄_1) + (β̄_{2l} − β̄_2) x + (β̄_{3l} − β̄_3) x² = Σ_{i=1}^{2m} (c_{i1} + c_{i2} x + c_{i3} x²)(v_{il} − v̄_i) ;    l = 1, …, n

suggest an empirical variance

    s²_{y(x)} = 1/(n−1) Σ_{l=1}^n (y_l(x) − y(x))² = c^T s c    (23.21)

letting

    c = ( c_{11} + c_{12} x + c_{13} x²    c_{21} + c_{22} x + c_{23} x²    …    c_{2m,1} + c_{2m,2} x + c_{2m,3} x² )^T

denote an auxiliary vector.

Systematic Errors

Combining (23.10) with (23.5) produces

    y(x) = β_{0,1} + β_{0,2} x + β_{0,3} x² + Σ_{i=1}^{2m} (c_{i1} + c_{i2} x + c_{i3} x²)(v̄_i − μ_i) + Σ_{i=1}^{2m} (c_{i1} + c_{i2} x + c_{i3} x²) f_i .

From this we read the propagated systematic error

    f_{y(x)} = Σ_{i=1}^{2m} (c_{i1} + c_{i2} x + c_{i3} x²) f_i ,    (23.22)

its worst-case estimation being

    f_{s,y(x)} = Σ_{i=1}^{2m} |c_{i1} + c_{i2} x + c_{i3} x²| f_{s,i} .    (23.23)

For equal systematic errors, (23.17) and (23.18) issue

    f_{y(x)} = −β̄_2 f_x − 2 β̄_3 f_x x + f_y .    (23.24)

Overall Uncertainty

After all, the uncertainty band is given by y(x) ± u_{y(x)},

    u_{y(x)} = t_P(n−1)/√n · √(c^T s c) + Σ_{i=1}^{2m} |c_{i1} + c_{i2} x + c_{i3} x²| f_{s,i} .    (23.25)

In case of equal systematic errors we have

    u_{y(x)} = t_P(n−1)/√n · √(c^T s c) + |β̄_2 + 2β̄_3 x| f_{s,x} + f_{s,y} .    (23.26)

At any rate, we expect the uncertainty band to hold the true parabola (20.1).

23.6 EP C-Region

The EP C-region localizes the triple of true values (β_{0,1}, β_{0,2}, β_{0,3}) with respect to the triple of estimators (β̄_1, β̄_2, β̄_3).

Confidence Ellipsoid

From (23.10) we take

    β̄_k = μ_{β̄k} + Σ_{i=1}^{2m} c_{ik} (v̄_i − μ_i) ;    μ_{β̄k} = β_{0,k} + Σ_{i=1}^{2m} c_{ik} f_i ;    k = 1, 2, 3 .

The variance–covariance matrix of the solution vector s_β̄ as given in (23.15) produces Hotelling's ellipsoid

    (β̄ − μ_β̄)^T s_β̄^{−1} (β̄ − μ_β̄) = t_P²(3, n−1)/n

from which we draw the confidence ellipsoid

    (β − β̄)^T s_β̄^{−1} (β − β̄) = t_P²(3, n−1)/n .    (23.27)

We expect the latter to localize the point

    μ_β̄ = (μ_{β̄1}, μ_{β̄2}, μ_{β̄3})^T    (23.28)

with probability P.

Security Polyhedron

While the errors f_1, f_2, …, f_{2m} "scan" the set of points lying within or on the faces of the 2m-dimensional hypercuboid

    −f_{s,i} ≤ f_i ≤ f_{s,i} ;    i = 1, …, 2m

the components of the propagated systematic error

    f_{β̄k} = Σ_{i=1}^{2m} c_{ik} f_i ;    k = 1, 2, 3

span a polyhedron. For equal systematic errors the polyhedron degenerates into a polygon

    f_{β̄1} = −β̄_2 f_x + f_y ,    f_{β̄2} = −2β̄_3 f_x ,    f_{β̄3} = 0 .    (23.29)

Mergence of Ellipsoid Either with Interval or with Polyhedron

The merging of a confidence ellipsoid either with a non-degenerated or with a degenerated security polyhedron is addressed in Appendix G. Figure 23.1 displays the true parabola (20.1), the fitted parabola (23.5), the uncertainties (23.19) of the estimators β̄_1, β̄_2, β̄_3 and, finally, the uncertainty band (23.25). Figure 23.2 considers equal systematic errors and renders the true parabola (20.1), the fitted parabola (23.5), the uncertainties (23.20) of the estimators β̄_1, β̄_2, β̄_3 and, ultimately, the uncertainty band (23.26). The illustrations are based on simulated data implying known true values and, in particular, extensive graphical scale transformations, Appendix A.


Fig. 23.1. Parabolas, case (iii). Left: least squares parabola, uncertainty band, and true parabola (dashed line). Right: Uncertainty intervals localizing the parabola’s true parameters β0,1 , β0,2 , and β0,3


Fig. 23.2. Parabolas, case (iii), equal systematic errors in x and y. Left: least squares parabola, uncertainty band, and true parabola (dashed line). Right: Uncertainty intervals localizing the parabola’s true parameters β0,1 , β0,2 , and β0,3 . Equal systematic errors do not affect β¯3

Part VIII

Non-linear Fitting

24 Series Truncation

Considering functional relationships that are non-linear in the unknown parameters asks for a deviant strategy [7]. To proceed via series truncation, we firstly suppose the relationships to behave adequately smoothly with respect to the unknown parameters and secondly the magnitudes of the measuring errors to remain sufficiently small. It is, however, to be observed that the truncation errors and the measuring errors jointly enter the least squares estimators and the respective uncertainties.

24.1 Homologous True Function

Let some function φ(x, y; a, b) = 0 be non-linear with respect to the parameters a and b. Given a, b, the relationship involves a well-defined set of data pairs x, y. Vice versa, given the outcomes x, y of some physical experiment are known to follow φ(x, y; a, b) = 0, the measured data, though erroneous, may be deployed to assess the unknown parameters a and b. To linearize φ(x, y; a, b) = 0, we rely on guessed starting values, say, a* and b*,

    φ(x, y; a, b) = φ(x, y; a*, b*) + [∂φ(x, y; a*, b*)/∂a*](a − a*) + [∂φ(x, y; a*, b*)/∂b*](b − b*) + ⋯ = 0    (24.1)

and consider the differences

    β_1 = a − a*    and    β_2 = b − b*    (24.2)

as unknowns. Putting

    ξ = [∂φ(x, y; a*, b*)/∂b*] / [∂φ(x, y; a*, b*)/∂a*] ,    η = − φ(x, y; a*, b*) / [∂φ(x, y; a*, b*)/∂a*]    (24.3)

(24.1) turns into an apparently linear model

    β_1 + β_2 ξ ≈ η ;    i = 1, …, m .    (24.4)

The linearization, being the result of a series truncation, is in no way connected to measurement errors. The closer the starting values a*, b* to the true parameters a, b, the more (24.4) resembles an identity in zero as induced via

    β_1 → 0 ,    β_2 → 0 ,    φ(x, y; a*, b*) → 0 .    (24.5)

Obviously, the identity in zero substitutes the true function φ(x, y; a, b) = 0. For convenience, let us call this identity the homologous true function. Should the starting values a*, b* not be close enough to the true values a, b, the proceeding might break down.

24.2 Fitting Conditions

Let there be m > 2 data pairs

    (x_{0,1}, ȳ_1), (x_{0,2}, ȳ_2), …, (x_{0,m}, ȳ_m)    (24.6)

the abscissas of which being error-free and the ordinates arithmetic means, each covering n repeated measurements,

    ȳ_i = (1/n) Σ_{l=1}^n y_{il} = y_{0,i} + (ȳ_i − μ_{ȳi}) + f_{ȳi} ;    i = 1, …, m
    E{Ȳ_i} = μ_{ȳi} ;    −f_{s,ȳi} ≤ f_{ȳi} ≤ f_{s,ȳi} .

The empirical variances

    s²_{ȳi} = 1/(n−1) Σ_{l=1}^n (y_{il} − ȳ_i)² ;    i = 1, …, m

and the error limits ±f_{s,ȳi} issue the uncertainties of the input data

    ȳ_i ± u_{ȳi} ,    u_{ȳi} = t_P(n−1)/√n · s_{ȳi} + f_{s,ȳi} ;    i = 1, …, m .    (24.7)

The intervals ȳ_i ± u_{ȳi} are seen to localize the true values y_{0,i} of the input data.

24.3 Orthogonal Projection

The data (24.6) produce coefficients

    ξ̄_i = [∂φ(x_{0,i}, ȳ_i; a*, b*)/∂b*] / [∂φ(x_{0,i}, ȳ_i; a*, b*)/∂a*] ,    η̄_i = − φ(x_{0,i}, ȳ_i; a*, b*) / [∂φ(x_{0,i}, ȳ_i; a*, b*)/∂a*]    (24.8)

so that

    β_1 + β_2 ξ̄_i ≈ η̄_i ;    i = 1, …, m .    (24.9)

Cast in matrices this reads A β ≈ η̄ given

    A = | 1  ξ̄_1 |
        | 1  ξ̄_2 |
        | ⋯   ⋯  |
        | 1  ξ̄_m | ,    β = (β_1, β_2)^T ,    η̄ = (η̄_1, η̄_2, …, η̄_m)^T .

The orthogonal projection yields

    β̄ = B^T η̄ ,    β̄_k = Σ_{i=1}^m b_{ik} η̄_i ;    k = 1, 2    (24.10)

where

    B = A (A^T A)^{−1} = (b_{ik}) ;    i = 1, …, m ;    k = 1, 2 .

As the coefficients b_{ik} of the matrix B are charged by measuring errors, in order to assign measurement uncertainties, the components β̄_1 and β̄_2 of the solution vector have to be submitted to series expansions. Submitting (24.9) to least squares has brought forth the straight line

    η(ξ) = β̄_1 + β̄_2 ξ .    (24.11)

Hence, there are estimators

    ā = a* + β̄_1 ,    b̄ = b* + β̄_2    (24.12)

of the unknown parameters a and b. To disburden the discussion, we confine ourselves to a simplifying

Example

Given a function

    φ(x, y; a, b) = y − a exp(−b/x) = 0 ;    a, b > 0 ,    x > 0    (24.13)

with arbitrarily chosen parameters a = 3e and b = 1 and m > 2 data pairs (24.6) holding error-free abscissas and ordinates being arithmetic means. As

    ξ(x, a*) = −a*/x ,    η = y exp(b*/x) − a*    (24.14)

the least squares proceeding (24.9) brings forth

    A = | 1  −a*/x_{0,1} |
        | 1  −a*/x_{0,2} |
        | ⋯       ⋯     |
        | 1  −a*/x_{0,m} | ,    β = (β_1, β_2)^T ,    η̄ = ( ȳ_1 exp(b*/x_{0,1}) − a* , ȳ_2 exp(b*/x_{0,2}) − a* , … , ȳ_m exp(b*/x_{0,m}) − a* )^T

where rank(A) = 2 is assumed. Again, the components of the solution vector are given by

    β̄_k = Σ_{i=1}^m b_{ik} η̄_i ;    k = 1, 2 .    (24.15)

Let us now consider the homologous true system

    A β_0 = η_0 ;    β_0 = B^T η_0    (24.16)

with

    A = | 1  −a/x_{0,1} |
        | ⋯      ⋯     |
        | 1  −a/x_{0,m} | ,    β_0 = (β_{0,1}, β_{0,2})^T ,    η_0 = ( y_{0,1} exp(b/x_{0,1}) − a , … , y_{0,m} exp(b/x_{0,m}) − a )^T .

Since

    η_0 = 0    (24.17)

we have

    β_0 = 0    (24.18)

as it should be.

24.4 Iteration

Given the estimators ā, b̄ are closer to the true values a, b than the guessed starting values a*, b*, it stands to reason to cyclically repeat the adjustment until

    β̄_1 ≈ 0    and    β̄_2 ≈ 0 .    (24.19)

Clearly, if we were to feed in error-free data, progressing cycles would steadily contract the estimators β̄_1, β̄_2 down to the numerical uncertainty of the computer. However, as the input data are erroneous, just after a few cycles the numerical values of the estimators β̄_1, β̄_2 start to oscillate. To recall, the iteration, if convergent, is in no way apt to reduce the effect of the measuring errors; it may, however, reduce the effect of the linearization errors.
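The cyclic adjustment can be sketched for the example (24.13). The following is a minimal illustration with assumed noise-free data and guessed starting values chosen close to the true parameters; with error-free data the updates β̄_1, β̄_2 contract toward zero.

```python
import numpy as np

# Assumed error-free data for phi = y - a exp(-b/x), a = 3e, b = 1.
x = np.linspace(1.0, 5.0, 8)
a_true, b_true = 3.0 * np.e, 1.0
y = a_true * np.exp(-b_true / x)

a, b = 8.0, 0.9                                 # guessed starting values a*, b*
for _ in range(20):
    xi = -a / x                                 # xi(x, a*) of (24.14)
    eta = y * np.exp(b / x) - a                 # eta of (24.14)
    A = np.column_stack([np.ones_like(xi), xi])
    beta = np.linalg.solve(A.T @ A, A.T @ eta)  # orthogonal projection (24.10)
    a, b = a + beta[0], b + beta[1]             # updated estimators (24.12)
    if np.abs(beta).max() < 1e-12:              # stop once beta_1, beta_2 ~ 0 (24.19)
        break

print(a, b)
```

With erroneous ordinates the same loop would, after a few cycles, merely oscillate about the least squares solution, as noted above.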

24.5 Uncertainties of the Components of the Solution Vector

As the respective starting values a*, b* are to be judged error-free, the uncertainties of the estimators β̄_1, β̄_2 pass over to the uncertainties of the estimated parameters ā and b̄,

    u_ā = u_{β̄1} ,    u_b̄ = u_{β̄2} .    (24.20)

Random Errors

As (24.14) provides us with repeated measurements, we are not reliant on the minimized sum of squared residuals. Inserting the η̄_i into (24.15) issues

    β̄_k = Σ_{i=1}^m b_{ik} η̄_i = Σ_{i=1}^m b_{ik} [ȳ_i exp(b*/x_{0,i}) − a*]
        = Σ_{i=1}^m b_{ik} ȳ_i exp(b*/x_{0,i}) − a* Σ_{i=1}^m b_{ik}    (24.21)
        = (1/n) Σ_{l=1}^n Σ_{i=1}^m b_{ik} y_{il} exp(b*/x_{0,i}) − a* Σ_{i=1}^m b_{ik} ;    k = 1, 2 .

Defining

    β̄_{kl} = Σ_{i=1}^m b_{ik} y_{il} exp(b*/x_{0,i}) − a* Σ_{i=1}^m b_{ik} ;    k = 1, 2    (24.22)

we arrive at

    β̄_{kl} − β̄_k = Σ_{i=1}^m b_{ik} exp(b*/x_{0,i})(y_{il} − ȳ_i) ;    k = 1, 2 .    (24.23)

Hence, the empirical variances and covariances of the solution vector are given by

    s_{β̄k β̄k'} = 1/(n−1) Σ_{l=1}^n (β̄_{kl} − β̄_k)(β̄_{k'l} − β̄_{k'})
               = 1/(n−1) Σ_{l=1}^n [ Σ_{i=1}^m b_{ik} exp(b*/x_{0,i})(y_{il} − ȳ_i) ][ Σ_{j=1}^m b_{jk'} exp(b*/x_{0,j})(y_{jl} − ȳ_j) ]
               = Σ_{i,j=1}^m b_{ik} exp(b*/x_{0,i}) b_{jk'} exp(b*/x_{0,j}) s_{ij} ;    k, k' = 1, 2 .

We gather the

    s_{ij} = 1/(n−1) Σ_{l=1}^n (y_{il} − ȳ_i)(y_{jl} − ȳ_j)    (24.24)

within the empirical variance–covariance matrix of the input data

    s = (s_{ij}) ;    i, j = 1, …, m .    (24.25)

For convenience, we further introduce an auxiliary matrix

    H = | b_{11} exp(b*/x_{0,1})  b_{12} exp(b*/x_{0,1}) |
        | b_{21} exp(b*/x_{0,2})  b_{22} exp(b*/x_{0,2}) |
        |          ⋯                       ⋯            |
        | b_{m1} exp(b*/x_{0,m})  b_{m2} exp(b*/x_{0,m}) |

so that

    s_β̄ = | s_{β̄1β̄1}  s_{β̄1β̄2} |
          | s_{β̄2β̄1}  s_{β̄2β̄2} | = H^T s H ,    s_{β̄k β̄k} ≡ s²_{β̄k} .    (24.26)

Systematic Errors

Returning to (24.15), we observe

    β̄_k = Σ_{i=1}^m b_{ik} η̄_i = Σ_{i=1}^m b_{ik} [ȳ_i exp(b*/x_{0,i}) − a*]
        = Σ_{i=1}^m b_{ik} [(y_{0,i} + (ȳ_i − μ_{ȳi}) + f_{ȳi}) exp(b*/x_{0,i}) − a*]    (24.27)
        = Σ_{i=1}^m b_{ik} [y_{0,i} exp(b*/x_{0,i}) − a*]
          + Σ_{i=1}^m b_{ik} exp(b*/x_{0,i})(ȳ_i − μ_{ȳi}) + Σ_{i=1}^m b_{ik} exp(b*/x_{0,i}) f_{ȳi} ;    k = 1, 2 .

Obviously, in practice, the terms

    Σ_{i=1}^m b_{ik} [y_{0,i} exp(b*/x_{0,i}) − a*] ;    k = 1, 2    (24.28)

are inaccessible. However, should they be sufficiently close to zero, thus corresponding to the homologous true system, the propagated systematic error would be

    f_{β̄k} = Σ_{i=1}^m b_{ik} exp(b*/x_{0,i}) f_{ȳi} ;    k = 1, 2    (24.29)

with the worst-case estimation

    f_{s,β̄k} = Σ_{i=1}^m |b_{ik}| exp(b*/x_{0,i}) f_{s,ȳi} ;    k = 1, 2 .    (24.30)

The unknown, non-zero term (24.28) asks us to rate the uncertainties to come cautiously. For equal systematic errors

    f_{ȳi} = f_y ,    f_{s,ȳi} = f_{s,y} ;    i = 1, …, m ;    −f_{s,y} ≤ f_y ≤ f_{s,y}

(24.29) gives way to

    f_{s,β̄k} = f_{s,y} Σ_{i=1}^m |b_{ik}| exp(b*/x_{0,i}) ;    k = 1, 2 .    (24.31)

Overall Uncertainties

From (24.26) and (24.30) we take

    u_{β̄k} = t_P(n−1)/√n · s_{β̄k} + Σ_{i=1}^m |b_{ik}| f_{s,ȳi} exp(b*/x_{0,i}) ;    k = 1, 2 .    (24.32)

In case of equal systematic errors, due to (24.31), this turns into

    u_{β̄k} = t_P(n−1)/√n · s_{β̄k} + f_{s,y} Σ_{i=1}^m |b_{ik}| exp(b*/x_{0,i}) ;    k = 1, 2 .    (24.33)

Hence, the estimators (24.12) and the uncertainties (24.20) should localize the unknown parameters a and b according to

    ā − u_ā ≤ a ≤ ā + u_ā ,    b̄ − u_b̄ ≤ b ≤ b̄ + u_b̄    (24.34)

whereat the fitted function reads

    y(x) = ā exp(−b̄/x) ;    x > 0 .    (24.35)

Remarkably enough, given

    ā > a_0 , b̄ > b_0    or    ā < a_0 , b̄ < b_0

Fig. 24.1. Fitted function y(x) = ā exp(−b̄/x), uncertainties of the input data, true function y(x) = a exp(−b/x) (dashed line), and uncertainty belt according to (24.36); arbitrarily chosen parameters a = 3e and b = 1

the fitted curve is slightly revolved with respect to the true curve, either in the one or in the other sense of rotation. If, however,

    ā > a_0 , b̄ < b_0    or    ā < a_0 , b̄ > b_0

the fitted curve is, at least in essence, shifted parallel to itself. In practice, we are unaware of what happens. In order to establish, say, an uncertainty belt which localizes the true function, we insert the pair of results

    (ā + u_ā , b̄ − u_b̄)    and    (ā − u_ā , b̄ + u_b̄)    (24.36)

as implied in (24.34) into (24.35). This produces two boundary curves in-between which we should find the true non-linear function (24.13). Figure 24.1 displays the true function (24.13), the fitted function (24.35), the uncertainties of the input data (24.7), and the suggested uncertainty belt following (24.36).

25 Transformation

Relationships non-linear with respect to the unknown parameters may be linearized via coordinate transformation. As a matter of course, the least squares estimators do not include linearization errors; however, they now depend on the properties of the transformation. Further, in assigning measurement uncertainties, even a two-step linearization is to be carried out.

25.1 Homologous True Function

We resume the example as discussed in Chap. 24,

    φ = a exp(−b/T) ;    a, b > 0 ;    T > 0 .    (25.1)

To linearize, we put

    y = ln φ ,    x = 1/T ,    β_1 = ln a ,    β_2 = −b    (25.2)

giving way to

    β_1 + β_2 x = y .    (25.3)

Considering m data pairs

    (T_{0,1}, φ_{0,1}), (T_{0,2}, φ_{0,2}), …, (T_{0,m}, φ_{0,m})    (25.4)

satisfying

    φ_{0,i} = a exp(−b/T_{0,i}) ;    i = 1, …, m

the homologous true function, as introduced in the preceding section, reads

    β_1 + β_2 x_{0,i} = y_{0,i} ;    i = 1, …, m .    (25.5)

25.2 Fitting Conditions

Let there be m > 2 data pairs

    (T_{0,1}, φ̄_1), (T_{0,2}, φ̄_2), …, (T_{0,m}, φ̄_m)    (25.6)

expressing n repeated measurements for each of the φ̄_i,

    φ̄_i = (1/n) Σ_{l=1}^n φ_{il} = φ_{0,i} + (φ̄_i − μ_{φ̄i}) + f_{φ̄i} ;    i = 1, …, m
    E{φ̄_i} = μ_{φ̄i} ;    −f_{s,φ̄i} ≤ f_{φ̄i} ≤ f_{s,φ̄i} .

The uncertainties, as given by the empirical variances

    s²_{φ̄i} = 1/(n−1) Σ_{l=1}^n (φ_{il} − φ̄_i)² ;    i = 1, …, m    (25.7)

and the systematic errors f_{φ̄i}, take the form

    φ̄_i ± u_{φ̄i} ,    u_{φ̄i} = t_P(n−1)/√n · s_{φ̄i} + f_{s,φ̄i} ;    i = 1, …, m .    (25.8)

Moreover, we shall need the uncertainties of the ȳ_i. We have

    ȳ_i = y_{0,i} + (1/φ̄_i)(φ̄_i − μ_{φ̄i}) + (1/φ̄_i) f_{φ̄i} ,    i = 1, …, m
    y_{il} = y_{0,i} + (1/φ̄_i)(φ_{il} − μ_{φ̄i}) + (1/φ̄_i) f_{φ̄i} ,    l = 1, …, n .    (25.9)

As φ̄_i > 0, we simply write

    s_{ȳi} = (1/φ̄_i) s_{φ̄i} ;    i = 1, …, m .    (25.10)

Combined with the systematic errors f_{ȳi} = (1/φ̄_i) f_{φ̄i}, the uncertainties are given by

    ȳ_i ± u_{ȳi} ,    u_{ȳi} = t_P(n−1)/√n · s_{ȳi} + f_{s,ȳi} ;    i = 1, …, m .

25.3 Orthogonal Projection

Written in matrices, the inconsistent, over-determined linear system

    β1 + β2 x0,i ≈ ȳi;   i = 1, …, m

reads

    A β ≈ ȳ   (25.11)

where

    A = ( 1  x0,1          β = ( β1         ȳ = ( ȳ1
          1  x0,2                β2 ),            ȳ2
          ⋯  ⋯                                    ⋯
          1  x0,m ),                              ȳm ).

The orthogonal projection produces

    β̄ = Bᵀ ȳ,   β̄k = Σ_{i=1}^{m} bik ȳi;   k = 1, 2   (25.12)

with

    B = A (AᵀA)⁻¹ = (bik);   i = 1, …, m;   k = 1, 2.

The components β̄k; k = 1, 2 of the solution vector β̄ constitute the least squares line

    y(x) = β̄1 + β̄2 x.   (25.13)

Ultimately, there are estimators

    ā = e^{β̄1},   b̄ = −β̄2   (25.14)

for the unknown parameters a, b.
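For a two-parameter design the elements of B = A(AᵀA)⁻¹ reduce to closed-form sums, so (25.12) and (25.14) can be sketched without any linear-algebra library. The data below are simulated error-free under assumed, illustrative values of a and b (not the book's data set); the estimators then reproduce the true parameters.

```python
import math

# Assumed true parameters and abscissas (illustrative values)
a, b = 2.0, 500.0
T0 = [250.0, 300.0, 350.0, 400.0]

x = [1.0 / t for t in T0]                            # x = 1/T, Eq. (25.2)
ybar = [math.log(a * math.exp(-b / t)) for t in T0]  # stands in for the means

m = len(x)
Sx, Sxx = sum(x), sum(v * v for v in x)
Sy, Sxy = sum(ybar), sum(u * v for u, v in zip(x, ybar))
D = m * Sxx - Sx * Sx                                # det(A^T A)

# Orthogonal projection (25.12), written out for the design (1, x_i)
beta1 = (Sxx * Sy - Sx * Sxy) / D
beta2 = (m * Sxy - Sx * Sy) / D

# Back-transformed estimators (25.14)
a_hat, b_hat = math.exp(beta1), -beta2
assert abs(a_hat - a) < 1e-9 and abs(b_hat - b) < 1e-6
```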

25.4 Uncertainties of the Components of the Solution Vector

As has been discussed in Sect. 7.1, the uncertainties of the estimators ā and b̄ turn out to be

    u_ā = e^{β̄1} u_{β̄1},   u_b̄ = u_{β̄2}.   (25.15)

Random Errors

Again we are not reliant on the minimized sum of squared residuals in order to assess the influence of random errors. Rather, the components β̄k of the solution vector (25.12),

    β̄k = Σ_{i=1}^{m} bik [ (1/n) Σ_{l=1}^{n} yil ] = (1/n) Σ_{l=1}^{n} [ Σ_{i=1}^{m} bik yil ] = (1/n) Σ_{l=1}^{n} β̄kl;   k = 1, 2,   (25.16)

produce the differences

    β̄kl − β̄k = Σ_{i=1}^{m} bik (yil − ȳi) = Σ_{i=1}^{m} bik (φil − φ̄i)/φ̄i

which devise the empirical variance–covariance matrix

    s_β̄ = ( s_{β̄1β̄1}  s_{β̄1β̄2}
            s_{β̄2β̄1}  s_{β̄2β̄2} ),   s_{β̄kβ̄k} ≡ s²_{β̄k}.

With

    sij = (1/(n−1)) Σ_{l=1}^{n} (yil − ȳi)(yjl − ȳj) = (1/(n−1)) Σ_{l=1}^{n} (φil − φ̄i)(φjl − φ̄j)/(φ̄i φ̄j);   i, j = 1, …, m

and s = (sij); i, j = 1, …, m designating the empirical variance–covariance matrix of the input data, we have

    s_β̄ = Bᵀ s B.   (25.17)

Systematic Errors

The formal decompositions

    β̄k = Σ_{i=1}^{m} bik [ y0,i + (1/φ̄i)(φ̄i − μ_{φ̄i}) + (1/φ̄i) f_{φ̄i} ];   k = 1, 2   (25.18)

issue the propagated systematic errors

    f_{β̄k} = Σ_{i=1}^{m} bik (1/φ̄i) f_{φ̄i};   k = 1, 2,   (25.19)

their worst-case estimations being

    f_{s,β̄k} = Σ_{i=1}^{m} |bik| (1/φ̄i) f_{s,φ̄i};   k = 1, 2.   (25.20)

In case of equal systematic errors,

    f_{φ̄i} = f_φ,   f_{s,φ̄i} = f_{s,φ};   i = 1, …, m;   −f_{s,φ} ≤ f_φ ≤ f_{s,φ},   (25.21)

we have

    f_{s,β̄k} = f_{s,φ} Σ_{i=1}^{m} |bik| / φ̄i;   k = 1, 2.   (25.22)
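A minimal sketch of the worst-case assessment (25.22), again with illustrative numbers: the coefficients b_ik are written out for the design (1, x_i), and the equal bound f_{s,φ} is propagated through the moduli |b_ik|/φ̄i.

```python
import math

# Worst-case propagation (25.20)/(25.22) of an equal systematic bound
# f_{s,phi} on the means; all numbers are illustrative.
a, b, fs_phi = 2.0, 500.0, 0.005
T0 = [250.0, 300.0, 350.0, 400.0]
x = [1.0 / t for t in T0]
phibar = [a * math.exp(-b / t) for t in T0]   # phibar_i > 0

m = len(x)
Sx, Sxx = sum(x), sum(v * v for v in x)
D = m * Sxx - Sx * Sx
# Elements b_{ik} of B = A(A^T A)^{-1} for the design (1, x_i)
b1 = [(Sxx - xi * Sx) / D for xi in x]
b2 = [(m * xi - Sx) / D for xi in x]

# Eq. (25.22): f_{s,beta_k} = f_{s,phi} * sum_i |b_{ik}| / phibar_i
fs_beta1 = fs_phi * sum(abs(v) / p for v, p in zip(b1, phibar))
fs_beta2 = fs_phi * sum(abs(v) / p for v, p in zip(b2, phibar))
assert fs_beta1 > 0 and fs_beta2 > 0
```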

Overall Uncertainties

Ultimately, combining the square roots of the diagonal elements of (25.17) either with (25.20) or with (25.22) brings forth

    u_{β̄k} = (t_P(n−1)/√n) s_{β̄k} + f_{s,β̄k};   k = 1, 2.   (25.23)

Following (25.14) and (25.15), the results take the form

    ā ± u_ā,   u_ā = e^{β̄1} u_{β̄1}
    b̄ ± u_b̄,   u_b̄ = u_{β̄2},   (25.24)

where the fitted function itself is given by

    φ(T) = ā exp(−b̄/T).   (25.25)

Just to gain an impression of the region which localizes the true function, we may consider a pair of functions deploying the sets of estimators

    (ā − u_ā, b̄ + u_b̄)   and   (ā + u_ā, b̄ − u_b̄).   (25.26)

In a sense, we might see them to constitute something like an uncertainty belt. Figure 25.1 reverts to the data set used in Fig. 24.1 and displays the fitted function (25.25), the true function (25.1), the uncertainties of the input data and, finally, the uncertainty belt according to (25.26).
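The pair of functions (25.26) can be evaluated directly. With assumed, illustrative fit results, the belt encloses the fitted curve (25.25) for every T > 0, since lowering a and raising b can only decrease φ, and vice versa:

```python
import math

# Assumed results of the fit (illustrative numbers, not the book's)
a_hat, u_a = 2.0, 0.1
b_hat, u_b = 500.0, 10.0

fit   = lambda T: a_hat * math.exp(-b_hat / T)                  # (25.25)
lower = lambda T: (a_hat - u_a) * math.exp(-(b_hat + u_b) / T)  # (a-u_a, b+u_b)
upper = lambda T: (a_hat + u_a) * math.exp(-(b_hat - u_b) / T)  # (a+u_a, b-u_b)

for T in (250.0, 400.0):
    # the belt (25.26) encloses the fitted function
    assert lower(T) < fit(T) < upper(T)
```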

Fig. 25.1. Fitted function φ(T) = ā exp(−b̄/T), true function φ(T) = a exp(−b/T) (dashed line), uncertainties of the input data, and uncertainty belt following (25.26)

Part IX

Appendices

A

Graphical Scale Transformations

The graphical visualization of uncertainty statements presents us with a hitch: as we commonly assume relative uncertainties of, say, 10⁻³ and less, our eyes are not in a position to resolve the details. A way out is to introduce graphical expansion factors.

Mean Values

Let us graphically expand the uncertainty u_x̄ of the arithmetic mean x̄, x̄ ± u_x̄. To have a beneficial vista, instead of the metrologically defined result, we display

    x̄ ± V u_x̄;   V ≫ 1.   (A.1)

The factor V expands the primary uncertainty region with reference to the estimator x̄. By means of data simulations, we constantly keep track of the true values of the measurands. Equation (A.1) tells us that any number lying within the expanded uncertainty region may symbolize the true value. Thus, in order to have a meaningful exposure, the graph has, in a formal sense as a matter of course, to shift the true value x0 of the mean x̄ according to

    x0* = x̄ + (x0 − x̄)V;   V ≫ 1.   (A.2)

Putting V = 1, we retrace x0* = x0.

A similar transformation applies to the expectation E{X̄} = μ_x̄ of the random variable X̄,

    μ_x̄* = x̄ + (μ_x̄ − x̄)V;   V ≫ 1.   (A.3)

Again, V = 1 recovers μ_x̄* = μ_x̄.

Straight Lines

Assume the graphical display of the uncertainty band of a least squares line to be too narrow to be visually resolved. Hence, instead of
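A small sketch of the expansion rule, with illustrative numbers: the shifted true value (A.2) lies inside the expanded region x̄ ± V·u exactly when the original true value lies inside the primary region x̄ ± u, because both deviations are scaled by the same factor V.

```python
# Graphical expansion of an uncertainty region, Eqs. (A.1)-(A.2);
# estimator, uncertainty, true value and factor V are illustrative.
xbar, u, x0, V = 10.000, 0.001, 10.0007, 500

x0_star = xbar + (x0 - xbar) * V           # shifted true value, (A.2)
assert xbar + (x0 - xbar) * 1 == x0        # V = 1 retraces x0

# membership in the primary and in the expanded region coincide
inside_primary  = abs(x0 - xbar) <= u
inside_expanded = abs(x0_star - xbar) <= V * u
assert inside_primary == inside_expanded
```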


ȳ(x) ± u_ȳ(x) we depict

    ȳ(x) ± V u_ȳ(x);   V ≫ 1.   (A.4)

The expansion prevents the two branches of the uncertainty region from pretendedly matching the fitted straight line

    ȳ(x) = β̄1 + β̄2 x.   (A.5)

The expansion of the uncertainty band asks us to formally readjust the position of the true straight line

    y0(x) = β0,1 + β0,2 x.   (A.6)

For any fixed x, any number lying in between ȳ(x) − u_ȳ(x) … ȳ(x) + u_ȳ(x) might be the true value y0(x). Hence, the true straight line is to be transformed according to

    y0*(x) = ȳ(x) + (y0(x) − ȳ(x))V.   (A.7)

Obviously, V = 1 yields y0*(x) = y0(x). Inserting (A.5) and (A.6), we have

    y0*(x) = β̄1 + β̄2 x + (β0,1 + β0,2 x − β̄1 − β̄2 x)V
           = [β̄1 + (β0,1 − β̄1)V] + [β̄2 + (β0,2 − β̄2)V] x.   (A.8)

Hence, we display the true straight line following

    y0*(x) = β*0,1 + β*0,2 x   (A.9)

with coefficients

    β*0,1 = β̄1 + (β0,1 − β̄1)V
    β*0,2 = β̄2 + (β0,2 − β̄2)V.   (A.10)

To transform the expectations E{β̄1} = μ_β̄1 and E{β̄2} = μ_β̄2, we start from (A.5),

    μ_ȳ(x) = μ_β̄1 + μ_β̄2 x.   (A.11)

Equation (A.7) tells us

    μ*_ȳ(x) = ȳ(x) + (μ_ȳ(x) − ȳ(x))V.   (A.12)

Thus, inserting (A.5) and (A.11), we find

    μ*_β̄1 = β̄1 + (μ_β̄1 − β̄1)V
    μ*_β̄2 = β̄2 + (μ_β̄2 − β̄2)V.   (A.13)
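The coefficient rule (A.10) is consistent with the pointwise rule (A.7), as the following sketch with assumed, illustrative coefficients confirms at an arbitrary abscissa:

```python
V = 200
beta = (1.2, -0.8)          # fitted coefficients (assumed, illustrative)
beta0 = (1.2005, -0.8002)   # true coefficients (assumed, illustrative)

# (A.10): coefficients of the graphically shifted true line
beta0_star = tuple(bk + (b0k - bk) * V for bk, b0k in zip(beta, beta0))

# consistency with the pointwise rule (A.7) at an arbitrary abscissa
x = 3.7
y_fit = beta[0] + beta[1] * x
y_true = beta0[0] + beta0[1] * x
y_star = beta0_star[0] + beta0_star[1] * x
assert abs(y_star - (y_fit + (y_true - y_fit) * V)) < 1e-9
```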


Planes

Instead of z̄(x, y) ± u_z̄(x,y) the diagram is intended to display

    z̄(x, y) ± V u_z̄(x,y);   V ≫ 1.   (A.14)

By necessity, the expansion has to take reference to the fitted plane

    z̄(x, y) = β̄1 + β̄2 x + β̄3 y.   (A.15)

Initially, we readjust the true plane

    z0(x, y) = β0,1 + β0,2 x + β0,3 y.   (A.16)

For any fixed point x, y, any number lying within z̄(x, y) − u_z̄(x,y) … z̄(x, y) + u_z̄(x,y) might symbolize the true value z0(x, y). Hence, we put

    z0*(x, y) = z̄(x, y) + (z0(x, y) − z̄(x, y))V.   (A.17)

V = 1 reproduces z0*(x, y) = z0(x, y). Inserting (A.15) and (A.16), we find

    z0*(x, y) = β̄1 + β̄2 x + β̄3 y + (β0,1 + β0,2 x + β0,3 y − β̄1 − β̄2 x − β̄3 y)V
              = [β̄1 + (β0,1 − β̄1)V] + [β̄2 + (β0,2 − β̄2)V] x + [β̄3 + (β0,3 − β̄3)V] y.   (A.18)

Thus, for graphical ends the true plane reads

    z0*(x, y) = β*0,1 + β*0,2 x + β*0,3 y   (A.19)

with coefficients

    β*0,1 = β̄1 + (β0,1 − β̄1)V
    β*0,2 = β̄2 + (β0,2 − β̄2)V   (A.20)
    β*0,3 = β̄3 + (β0,3 − β̄3)V.

We also transform the expectations E{β̄1} = μ_β̄1, E{β̄2} = μ_β̄2 and E{β̄3} = μ_β̄3. From (A.15) we take

    μ_z̄(x,y) = μ_β̄1 + μ_β̄2 x + μ_β̄3 y.   (A.21)


But then

    μ*_z̄(x,y) = z̄(x, y) + (μ_z̄(x,y) − z̄(x, y))V   (A.22)

yields

    μ*_β̄1 = β̄1 + (μ_β̄1 − β̄1)V
    μ*_β̄2 = β̄2 + (μ_β̄2 − β̄2)V   (A.23)
    μ*_β̄3 = β̄3 + (μ_β̄3 − β̄3)V.

Parabolas

Instead of ȳ(x) ± u_ȳ(x) we wish to display

    ȳ(x) ± V u_ȳ(x);   V ≫ 1,   (A.24)

where the least squares parabola

    ȳ(x) = β̄1 + β̄2 x + β̄3 x²   (A.25)

serves as a reference. The scale transformation shifts the true parabola

    y0(x) = β0,1 + β0,2 x + β0,3 x²   (A.26)

into

    y0*(x) = ȳ(x) + (y0(x) − ȳ(x))V,   (A.27)

yielding

    y0*(x) = β̄1 + β̄2 x + β̄3 x² + (β0,1 + β0,2 x + β0,3 x² − β̄1 − β̄2 x − β̄3 x²)V
           = [β̄1 + (β0,1 − β̄1)V] + [β̄2 + (β0,2 − β̄2)V] x + [β̄3 + (β0,3 − β̄3)V] x².   (A.28)

Hence, the coefficients of the readjusted true parabola

    y0*(x) = β*0,1 + β*0,2 x + β*0,3 x²   (A.29)

turn out to be

    β*0,1 = β̄1 + (β0,1 − β̄1)V
    β*0,2 = β̄2 + (β0,2 − β̄2)V   (A.30)
    β*0,3 = β̄3 + (β0,3 − β̄3)V.

Finally, we readjust the expectations E{β̄1} = μ_β̄1, E{β̄2} = μ_β̄2 and E{β̄3} = μ_β̄3. Equation (A.25) produces

    μ_ȳ(x) = μ_β̄1 + μ_β̄2 x + μ_β̄3 x².   (A.31)

As

    μ*_ȳ(x) = ȳ(x) + (μ_ȳ(x) − ȳ(x))V,   (A.32)

we find, inserting (A.25) and (A.31),

    μ*_β̄1 = β̄1 + (μ_β̄1 − β̄1)V
    μ*_β̄2 = β̄2 + (μ_β̄2 − β̄2)V   (A.33)
    μ*_β̄3 = β̄3 + (μ_β̄3 − β̄3)V.

B

Expansion of Solution Vectors

Given that abscissas as well as ordinates are considered erroneous, the fitting of geometrical objects implies erroneous design matrices. Error propagation then requires us to carry out series expansions of the solution vectors.

Straight Lines – Section 15.3

The series expansions of the components β̄1, β̄2 produce the coefficients

    c_{i,1} = (1/D)[ 2x̄i Σ_j ȳj − ȳi Σ_j x̄j − Σ_j x̄j ȳj ] − (β̄1/D)[ 2m x̄i − 2 Σ_j x̄j ]

    c_{i+m,1} = (1/D)[ Σ_j x̄j² − x̄i Σ_j x̄j ]

    c_{i,2} = (1/D)[ m ȳi − Σ_j ȳj ] − (β̄2/D)[ 2m x̄i − 2 Σ_j x̄j ]

    c_{i+m,2} = (1/D)[ m x̄i − Σ_j x̄j ];   i = 1, …, m,

where D = m Σ_j x̄j² − (Σ_j x̄j)² denotes the determinant of AᵀA and all sums run over j = 1, …, m.
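Since β̄1 is an explicit rational function of the input data, the expansion coefficients can be checked numerically. The sketch below, with small illustrative data for the straight-line case, compares the closed-form coefficient c_{i+m,1} = (1/D)[Σ x̄j² − x̄i Σ x̄j] with a finite-difference derivative:

```python
# Finite-difference check of c_{i+m,1} = d(beta1)/d(y_i) for the
# straight-line fit; the data are illustrative, not the book's.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 2.9, 4.2, 4.8]
m = len(x)

Sx, Sxx = sum(x), sum(v * v for v in x)
D = m * Sxx - Sx * Sx

def beta1(yv):
    Sy, Sxy = sum(yv), sum(a * b for a, b in zip(x, yv))
    return (Sxx * Sy - Sx * Sxy) / D

i = 1
c = (Sxx - x[i] * Sx) / D        # closed-form coefficient

h = 1e-6
yp = y[:]; yp[i] += h
fd = (beta1(yp) - beta1(y)) / h  # numerical derivative
assert abs(c - fd) < 1e-6
```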

For convenience, the coefficients have been gathered within an auxiliary matrix as stated in (15.9).

Planes – Section 19.3

The design matrix

    A = ( 1  x̄1  ȳ1
          1  x̄2  ȳ2
          …  …   …
          1  x̄m  ȳm )

produces

    AᵀA = ( H11 H12 H13
            H21 H22 H23
            H31 H32 H33 )

with elements

    H11 = m,   H22 = Σ_j x̄j²,   H33 = Σ_j ȳj²
    H12 = H21 = Σ_j x̄j,   H13 = H31 = Σ_j ȳj,   H23 = H32 = Σ_j x̄j ȳj.

Let

    D = H11(H22H33 − H23²) + H12(H31H23 − H33H12) + H13(H21H32 − H13H22)

denote the determinant of AᵀA. With this, the inverse of AᵀA reads

    (AᵀA)⁻¹ = (1/D) ( H22H33 − H23²   | H23H13 − H12H33 | H12H23 − H22H13
                      H23H13 − H12H33 | H11H33 − H13²   | H12H13 − H11H23
                      H12H23 − H22H13 | H12H13 − H11H23 | H11H22 − H12²  ).

Putting B = A(AᵀA)⁻¹, the components of β̄ = Bᵀz̄ take the form

    β̄1 = Σ_j b_{j1} z̄j,   β̄2 = Σ_j b_{j2} z̄j,   β̄3 = Σ_j b_{j3} z̄j,

in more detail

    β̄1 = (1/D) Σ_j { (H22H33 − H23²) + (H23H13 − H12H33) x̄j + (H12H23 − H22H13) ȳj } z̄j
    β̄2 = (1/D) Σ_j { (H23H13 − H12H33) + (H11H33 − H13²) x̄j + (H12H13 − H11H23) ȳj } z̄j
    β̄3 = (1/D) Σ_j { (H12H23 − H22H13) + (H12H13 − H11H23) x̄j + (H11H22 − H12²) ȳj } z̄j.

The partial derivatives of the determinant are issued as

    ∂D/∂x̄i = 2(H11H33 − H13²) x̄i + 2(H12H13 − H11H23) ȳi + 2(H13H23 − H12H33)
    ∂D/∂ȳi = 2(H12H13 − H11H23) x̄i + 2(H11H22 − H12²) ȳi + 2(H12H23 − H13H22)
    ∂D/∂z̄i = 0.

Differentiating each of the components β̄k; k = 1, 2, 3 with respect to x̄i, ȳi and z̄i, we find

    c_{i,1} = ∂β̄1/∂x̄i = −(β̄1/D) ∂D/∂x̄i + (1/D){ 2(H33 x̄i − H23 ȳi) Σ_j z̄j
        + (H13H23 − H12H33) z̄i + (H13 ȳi − H33) Σ_j x̄j z̄j
        + (H23 + H12 ȳi − 2H13 x̄i) Σ_j ȳj z̄j }

    c_{i+m,1} = ∂β̄1/∂ȳi = −(β̄1/D) ∂D/∂ȳi + (1/D){ 2(H22 ȳi − H23 x̄i) Σ_j z̄j
        + (H12H23 − H13H22) z̄i + (H23 + H13 x̄i − 2H12 ȳi) Σ_j x̄j z̄j
        + (H12 x̄i − H22) Σ_j ȳj z̄j }

    c_{i+2m,1} = ∂β̄1/∂z̄i = b_{i1}

    c_{i,2} = ∂β̄2/∂x̄i = −(β̄2/D) ∂D/∂x̄i + (1/D){ (H13 ȳi − H33) Σ_j z̄j
        + (H11H33 − H13²) z̄i + (H13 − H11 ȳi) Σ_j ȳj z̄j }

    c_{i+m,2} = ∂β̄2/∂ȳi = −(β̄2/D) ∂D/∂ȳi + (1/D){ (H23 + H13 x̄i − 2H12 ȳi) Σ_j z̄j
        + (H12H13 − H11H23) z̄i + 2(H11 ȳi − H13) Σ_j x̄j z̄j
        + (H12 − H11 x̄i) Σ_j ȳj z̄j }

    c_{i+2m,2} = ∂β̄2/∂z̄i = b_{i2}

    c_{i,3} = ∂β̄3/∂x̄i = −(β̄3/D) ∂D/∂x̄i + (1/D){ (H23 + H12 ȳi − 2H13 x̄i) Σ_j z̄j
        + (H12H13 − H11H23) z̄i + (H13 − H11 ȳi) Σ_j x̄j z̄j
        + 2(H11 x̄i − H12) Σ_j ȳj z̄j }

    c_{i+m,3} = ∂β̄3/∂ȳi = −(β̄3/D) ∂D/∂ȳi + (1/D){ (H12 x̄i − H22) Σ_j z̄j
        + (H11H22 − H12²) z̄i + (H12 − H11 x̄i) Σ_j x̄j z̄j }

    c_{i+2m,3} = ∂β̄3/∂z̄i = b_{i3}.
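The stated derivative ∂D/∂x̄i can likewise be verified by finite differences; the sketch uses a small illustrative data set for the plane fit:

```python
# Finite-difference check of dD/dx_i for the plane fit (Appendix B),
# D = det(A^T A) with rows (1, x_j, y_j); illustrative data.
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 0.5, 2.0, 1.5]
m = len(x)

def H(xv):
    # elements H11, H22, H33, H12, H13, H23 of A^T A
    return (m, sum(v*v for v in xv), sum(v*v for v in y),
            sum(xv), sum(y), sum(a*b for a, b in zip(xv, y)))

def det(xv):
    H11, H22, H33, H12, H13, H23 = H(xv)
    return (H11*(H22*H33 - H23*H23) + H12*(H23*H13 - H12*H33)
            + H13*(H12*H23 - H22*H13))

H11, H22, H33, H12, H13, H23 = H(x)
i = 2
analytic = (2*(H11*H33 - H13*H13)*x[i] + 2*(H12*H13 - H11*H23)*y[i]
            + 2*(H13*H23 - H12*H33))

h = 1e-6
xp = x[:]; xp[i] += h
fd = (det(xp) - det(x)) / h
assert abs(analytic - fd) < 1e-3
```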

Parabolas – Section 23.4

The design matrix

    A = ( 1  x̄1  x̄1²
          1  x̄2  x̄2²
          …  …   …
          1  x̄m  x̄m² )

produces AᵀA = (Hjk) with elements

    H11 = m,   H22 = Σ_j x̄j²,   H33 = Σ_j x̄j⁴
    H12 = H21 = Σ_j x̄j,   H13 = H31 = Σ_j x̄j²,   H23 = H32 = Σ_j x̄j³.

Denoting the determinant of AᵀA by

    D = H11(H22H33 − H23²) + H12(H31H23 − H33H12) + H13(H21H32 − H13H22),

we have

    (AᵀA)⁻¹ = (1/D) ( H22H33 − H23²   | H23H13 − H12H33 | H12H23 − H22H13
                      H23H13 − H12H33 | H11H33 − H13²   | H12H13 − H11H23
                      H12H23 − H22H13 | H12H13 − H11H23 | H11H22 − H12²  ).

Putting B = A(AᵀA)⁻¹, the components of β̄ = Bᵀȳ take the form

    β̄1 = Σ_j b_{j1} ȳj,   β̄2 = Σ_j b_{j2} ȳj,   β̄3 = Σ_j b_{j3} ȳj,

i.e.

    β̄1 = (1/D) Σ_j { (H22H33 − H23²) + (H23H13 − H12H33) x̄j + (H12H23 − H22H13) x̄j² } ȳj
    β̄2 = (1/D) Σ_j { (H23H13 − H12H33) + (H11H33 − H13²) x̄j + (H12H13 − H11H23) x̄j² } ȳj
    β̄3 = (1/D) Σ_j { (H12H23 − H22H13) + (H12H13 − H11H23) x̄j + (H11H22 − H12²) x̄j² } ȳj.

The partial derivatives of the determinant turn out to be

    ∂D/∂x̄i = 2(H13H23 − H12H33) + 2[ H11H33 + 2(H12H23 − H13H22) − H13² ] x̄i
              + 6(H12H13 − H11H23) x̄i² + 4(H11H22 − H12²) x̄i³
    ∂D/∂ȳi = 0.

Those of the components of the solution vector read

    c_{i,1} = ∂β̄1/∂x̄i = −(β̄1/D) ∂D/∂x̄i + (1/D){ 2(H33 x̄i − 3H23 x̄i² + 2H22 x̄i³) Σ_j ȳj
        + (H23H13 − H12H33) ȳi
        + (−H33 + 2H23 x̄i + 3H13 x̄i² − 4H12 x̄i³) Σ_j x̄j ȳj
        + 2(H12H23 − H22H13) x̄i ȳi
        + [ H23 − 2(H13 + H22) x̄i + 3H12 x̄i² ] Σ_j x̄j² ȳj }

    c_{i+m,1} = ∂β̄1/∂ȳi = b_{i1}

    c_{i,2} = ∂β̄2/∂x̄i = −(β̄2/D) ∂D/∂x̄i + (1/D){ (−H33 + 2H23 x̄i + 3H13 x̄i² − 4H12 x̄i³) Σ_j ȳj
        + 4(H11 x̄i³ − H13 x̄i) Σ_j x̄j ȳj
        + (H11H33 − H13²) ȳi + 2(H12H13 − H11H23) x̄i ȳi
        + (H13 + 2H12 x̄i − 3H11 x̄i²) Σ_j x̄j² ȳj }

    c_{i+m,2} = ∂β̄2/∂ȳi = b_{i2}

    c_{i,3} = ∂β̄3/∂x̄i = −(β̄3/D) ∂D/∂x̄i + (1/D){ [ H23 − 2(H13 + H22) x̄i + 3H12 x̄i² ] Σ_j ȳj
        + (H13 + 2H12 x̄i − 3H11 x̄i²) Σ_j x̄j ȳj + (H12H13 − H11H23) ȳi
        + 2(H11H22 − H12²) x̄i ȳi + 2(H11 x̄i − H12) Σ_j x̄j² ȳj }

    c_{i+m,3} = ∂β̄3/∂ȳi = b_{i3}.

C

Special Confidence Ellipses and Ellipsoids

In case repeated measurements are lacking, confidence ellipses and ellipsoids are not ready at hand. In the following, a heuristic approach is discussed.

Ellipses

Chapter 13 relates the adjustment of a straight line to individual measurements. In order to design a confidence ellipse, we start from Hotelling's ellipse as given, e.g., in (3.71), [28],

    t²(2, n−1) = n (ζ̄ − μ)ᵀ s⁻¹ (ζ̄ − μ)
               = (n/|s|)[ s_yy(x̄ − μ_x)² − 2s_xy(x̄ − μ_x)(ȳ − μ_y) + s_xx(ȳ − μ_y)² ].   (C.1)

Putting s̄⁻¹ = n s⁻¹ where

    s = ( s_xx  s_xy        s̄ = ( s_xx/n  s_xy/n
          s_yx  s_yy ),           s_yx/n  s_yy/n ),   (C.2)

we have

    t²(2, n−1) = (ζ̄ − μ)ᵀ s̄⁻¹ (ζ̄ − μ)
               = (1/|s̄|)[ (s_yy/n)(x̄ − μ_x)² − 2(s_xy/n)(x̄ − μ_x)(ȳ − μ_y) + (s_xx/n)(ȳ − μ_y)² ].   (C.3)

To recall, the ellipse refers to arithmetic means x̄, ȳ with expectations μ_x, μ_y and an empirical variance–covariance matrix s̄, the elements of which have n − 1 degrees of freedom. By contrast, Sect. 13.6 considers estimators β̄1, β̄2 with expectations μ_β̄1 and μ_β̄2 and empirical variance–covariance matrix

    s_β̄ = ( s_β̄1β̄1  s_β̄1β̄2
            s_β̄2β̄1  s_β̄2β̄2 ),   (C.4)

the elements of which have m − 2 degrees of freedom. Let us tentatively cast Hotelling's ellipse into

    t²(2, …) = (β̄ − μ_β̄)ᵀ s_β̄⁻¹ (β̄ − μ_β̄)
             = (1/|s_β̄|)[ s_β̄2β̄2(β̄1 − μ_β̄1)² − 2s_β̄1β̄2(β̄1 − μ_β̄1)(β̄2 − μ_β̄2) + s_β̄1β̄1(β̄2 − μ_β̄2)² ].   (C.5)

While (C.3) includes repeated measurements, (C.5) does not. Nevertheless, in both cases the statistical fluctuations per se stand on an equal footing. To invoke the pertaining density, a formal n is needed. Hotelling's density reads

    p_T(t; m, n−1) = [ 2Γ(n/2) / ( (n−1)^{m/2} Γ((n−m)/2) Γ(m/2) ) ] · t^{m−1} / [1 + t²/(n−1)]^{n/2};   t > 0,   n > m.   (C.6)

Let us firstly substitute the number 2 for m, as we are considering two variables,

    p_T(t; 2, n−1) = [ 2Γ(n/2) / ( (n−1) Γ((n−2)/2) ) ] · t / [1 + t²/(n−1)]^{n/2}.

Next we replace Hotelling's degrees of freedom n − 1 by the degrees of freedom of the elements of the matrix (C.4), i.e. by m − 2. Hence

    p_T(t; 2, m−2) = [ 2Γ((m−1)/2) / ( (m−2) Γ((m−3)/2) ) ] · t / [1 + t²/(m−2)]^{(m−1)/2}.   (C.7)

After all, the proceeding suggests assigning the confidence ellipse

    s_β̄2β̄2(β1 − β̄1)² − 2s_β̄1β̄2(β1 − β̄1)(β2 − β̄2) + s_β̄1β̄1(β2 − β̄2)² = t²_P(2, m−2) |s_β̄|   (C.8)

to the empirical estimators β̄1, β̄2 of the adjustment of a straight line as discussed in Chap. 13. Consulting a table of the quantiles of Hotelling's density, we look for the entry which refers to 2 variables and m − 2 degrees of freedom. After all, we expect (C.8) to localize the point

    μ_β̄ = ( μ_β̄1
            μ_β̄2 )   (C.9)

with probability P. Finally, we hint that the ellipse's angle of rotation,

    tan(2φ) = −2 s_β̄1β̄2 / ( s_β̄2β̄2 − s_β̄1β̄1 ),   (C.10)

measured counterclockwise against the β1-axis of a rectangular β1, β2 coordinate system, proves sample-independent, as the empirical variance s²_y cancels out.
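As a consistency check on the reduced density (C.7), not a statement about the book's tables, a crude numerical integration confirms that it is normalized; the choice m = 8 is arbitrary and purely illustrative:

```python
import math

# Numerical normalization check of p_T(t; 2, m-2), Eq. (C.7), for m = 8.
m = 8
f = m - 2  # degrees of freedom replacing Hotelling's n - 1
c = 2 * math.gamma((m - 1) / 2) / (f * math.gamma((m - 3) / 2))

def pT(t):
    return c * t / (1 + t * t / f) ** ((m - 1) / 2)

# left Riemann sum up to t = 200 (the tail beyond is negligible)
dt = 1e-3
total = sum(pT(k * dt) * dt for k in range(1, 200000))
assert abs(total - 1.0) < 1e-3
```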

Ellipsoids

Chapter 17 relates the adjustment of a plane to individual measurements. Following similar arguments, we arrive at

    (β − β̄)ᵀ s_β̄⁻¹ (β − β̄) = t²_P(3, m−3),   (C.11)

as here the elements of the empirical variance–covariance matrix s_β̄ have m − 3 degrees of freedom. Hence, we have to refer to the quantiles of the density

    p_T(t; 3, m−3) = [ 2Γ((m−2)/2) / ( (m−3)^{3/2} Γ((m−5)/2) Γ(3/2) ) ] · t² / [1 + t²/(m−3)]^{(m−2)/2}   (C.12)

for 3 variables and m − 3 degrees of freedom.

D

Extreme Points of Ellipses and Ellipsoids

To facilitate graphical representations of ellipses and ellipsoids we draw upon their extreme points.

Confidence Ellipses

The extreme points of the confidence ellipse (13.27) with respect to the β1- and β2-directions are given by

    ( β1 )                ( β̄1 )        t
    ( β2 ) β1-direction = ( β̄2 ) ± ────── ( s_xx, s_xy )ᵀ   (D.1)
                                    √s_xx

and

    ( β1 )                ( β̄1 )        t
    ( β2 ) β2-direction = ( β̄2 ) ± ────── ( s_yx, s_yy )ᵀ,   (D.2)
                                    √s_yy

respectively, where s_xx, s_xy, s_yy abbreviate the elements of the ellipse's variance–covariance matrix. Correspondingly, for (14.26) we have

    ( β̄1, β̄2 )ᵀ ± [ t/(√n √s_xx) ] ( s_xx, s_xy )ᵀ   (D.3)

in the β1-direction and

    ( β̄1, β̄2 )ᵀ ± [ t/(√n √s_yy) ] ( s_yx, s_yy )ᵀ   (D.4)

in the β2-direction.

Confidence Ellipsoids

With respect to ellipsoids, let us start out considering

    xᵀ s⁻¹ x = t²/n   (D.5)

where x = (x y z)ᵀ marks any point on the ellipsoid's skin. Furthermore, let

    s = ( s11 s12 s13
          s21 s22 s23
          s31 s32 s33 )   (D.6)

denote the empirical variance–covariance matrix and

    adj s = ( ρ11 ρ21 ρ31
              ρ12 ρ22 ρ32
              ρ13 ρ23 ρ33 )   (D.7)

its adjoint, with ρij the cofactors of sij, both matrices being symmetric. As is well known, we have

    s⁻¹ = adj s / |s|   (D.8)

with |s| the determinant of s. In the following we shall refer to the relations

    |s| s11 = ρ22ρ33 − ρ32ρ23,   |s| s22 = ρ11ρ33 − ρ13ρ31
    |s| s12 = ρ31ρ23 − ρ21ρ33,   |s| s23 = ρ12ρ31 − ρ11ρ32   (D.9)
    |s| s13 = ρ21ρ32 − ρ22ρ13,   |s| s33 = ρ11ρ22 − ρ12ρ21

and

    ρ11 s31 + ρ21 s32 + ρ31 s33 = 0
    ρ12 s31 + ρ22 s32 + ρ32 s33 = 0   (D.10)
    ρ13 s31 + ρ23 s32 + ρ33 s33 = |s|.

From (D.5) and (D.8) we draw

    f(x, y, z) = ρ11 x² + ρ22 y² + ρ33 z² + 2ρ12 xy + 2ρ23 yz + 2ρ31 zx − t²|s|/n = 0.   (D.11)

Let (x1 y1 z1)ᵀ denote one of the two extreme points in the z-direction. The respective normal vector is given by (0 0 ∂f/∂z)ᵀ. Hence

    ρ11 x1 + ρ12 y1 + ρ31 z1 = 0
    ρ12 x1 + ρ22 y1 + ρ23 z1 = 0   (D.12)
    ρ31 x1 + ρ23 y1 + ρ33 z1 = ∂f/∂z.

Solving for x1, y1, z1 yields

    x1 = (1/|s|²)(ρ12ρ23 − ρ31ρ22) ∂f/∂z = (1/|s|) s13 ∂f/∂z
    y1 = (1/|s|²)(ρ12ρ31 − ρ23ρ11) ∂f/∂z = (1/|s|) s23 ∂f/∂z   (D.13)
    z1 = (1/|s|²)(ρ11ρ22 − ρ12ρ12) ∂f/∂z = (1/|s|) s33 ∂f/∂z.

Inserting this into (D.11), we find

    ∂f/∂z = ± (t/√n) |s| / √s33.   (D.14)

Analogous procedures lead to the extreme points in the x- and y-directions, respectively. Returning to the center coordinates β̄1, β̄2 and β̄3, we have in the β1-direction

    ( β̄1, β̄2, β̄3 )ᵀ ± [ t/(√n √s11) ] ( s11, s12, s13 )ᵀ,   (D.15)

in the β2-direction

    ( β̄1, β̄2, β̄3 )ᵀ ± [ t/(√n √s22) ] ( s21, s22, s23 )ᵀ   (D.16)

and, finally, in the β3-direction

    ( β̄1, β̄2, β̄3 )ᵀ ± [ t/(√n √s33) ] ( s31, s32, s33 )ᵀ.   (D.17)
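The two-dimensional extreme point of type (D.1) can be verified directly: it satisfies the ellipse equation, and the tangent there is parallel to the β2-axis. The matrix elements below are illustrative:

```python
import math

# Check of a (D.1)-type extreme point of the ellipse
# s22*u^2 - 2*s12*u*v + s11*v^2 = t^2 * |s|; illustrative elements.
s11, s12, s22 = 2.0, 0.6, 1.0
t = 1.5
dets = s11 * s22 - s12 * s12

# extreme point in the beta1-direction: (u0, v0) = (t/sqrt(s11)) * (s11, s12)
u0 = t / math.sqrt(s11) * s11
v0 = t / math.sqrt(s11) * s12

# it lies on the ellipse ...
q = s22 * u0 * u0 - 2 * s12 * u0 * v0 + s11 * v0 * v0
assert abs(q - t * t * dets) < 1e-9
# ... and the tangent there is vertical (extreme beta1-coordinate)
assert abs(-s12 * u0 + s11 * v0) < 1e-9
```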

E

Drawing Ellipses and Ellipsoids

Just for convenience, we summarize the basic formulas to be used to draw ellipses and ellipsoids.

Ellipses

To have an example, we refer to (13.27),

    s_β̄2β̄2(β1 − β̄1)² − 2s_β̄1β̄2(β1 − β̄1)(β2 − β̄2) + s_β̄1β̄1(β2 − β̄2)² = t²_P(2, m−2) |s_β̄|.

Putting

    β1 − β̄1 = r cos φ,   β2 − β̄2 = r sin φ,

we have

    r = t_P(2, m−2) √|s_β̄| / √( s_β̄2β̄2 cos²φ − 2s_β̄1β̄2 sin φ cos φ + s_β̄1β̄1 sin²φ ).

In case of (14.26),

    s_β̄2β̄2(β1 − β̄1)² − 2s_β̄1β̄2(β1 − β̄1)(β2 − β̄2) + s_β̄1β̄1(β2 − β̄2)² = |s_β̄| t²_P(2, n−1)/n,

we find

    r = [ t_P(2, n−1)/√n ] √|s_β̄| / √( s_β̄2β̄2 cos²φ − 2s_β̄1β̄2 sin φ cos φ + s_β̄1β̄1 sin²φ ).
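Generating the ellipse from the polar representation is then a one-line loop; each generated point satisfies the quadratic form, as the sketch (with illustrative s_β̄ elements) confirms:

```python
import math

# Points of the confidence ellipse (13.27) from the polar representation
# r(phi); the matrix elements and quantile are illustrative.
s11, s12, s22 = 1.0, -0.3, 0.8   # s_b1b1, s_b1b2, s_b2b2
tP = 2.0
dets = s11 * s22 - s12 * s12

for k in range(360):
    phi = 2 * math.pi * k / 360
    den = (s22 * math.cos(phi) ** 2
           - 2 * s12 * math.sin(phi) * math.cos(phi)
           + s11 * math.sin(phi) ** 2)
    r = tP * math.sqrt(dets) / math.sqrt(den)
    db1, db2 = r * math.cos(phi), r * math.sin(phi)
    # the point (db1, db2) satisfies the quadratic form exactly
    q = s22 * db1 * db1 - 2 * s12 * db1 * db2 + s11 * db2 * db2
    assert abs(q - tP * tP * dets) < 1e-9
```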

Ellipsoids

We refer to (17.22),

    (β − β̄)ᵀ s_β̄⁻¹ (β − β̄) = t²_P(3, m−3),   (E.1)

with s_β̄ as given in (17.14),

    s_β̄ = ( s_β̄1β̄1  s_β̄1β̄2  s_β̄1β̄3
            s_β̄2β̄1  s_β̄2β̄2  s_β̄2β̄3
            s_β̄3β̄1  s_β̄3β̄2  s_β̄3β̄3 ).

To shorten the notation we put

    x = β1 − β̄1,   y = β2 − β̄2,   z = β3 − β̄3

and, in a formal sense,

    s_β̄⁻¹ = ( γ11 γ12 γ13
              γ21 γ22 γ23
              γ31 γ32 γ33 )   (E.2)

so that (E.1) turns into

    γ11 x² + γ22 y² + γ33 z² + 2γ12 xy + 2γ23 yz + 2γ31 zx = t²_P(3, m−3).   (E.3)

The contour lines z = const are obviously ellipses. Following (D.17), we have to consider the interval

    −t_P(3, m−3) √s_β̄3β̄3 ≤ z ≤ t_P(3, m−3) √s_β̄3β̄3.   (E.4)

As the centers of the ellipses are shifted with respect to the origin x = y = 0, we put

    x = ξ + x_M;   y = η + y_M,   (E.5)

where

    x_M = [ (γ22γ31 − γ12γ23)/(γ12² − γ11γ22) ] z;   y_M = [ (γ11γ23 − γ12γ31)/(γ12² − γ11γ22) ] z.   (E.6)

For any z = const we have

    γ11 ξ² + 2γ12 ξη + γ22 η² = t²_P(3, m−3) − ( γ33 z² + 2(γ31 x_M + γ23 y_M) z + γ11 x_M² + 2γ12 x_M y_M + γ22 y_M² ).   (E.7)

After all, the problem of drawing a spatially rotated ellipsoid has been reduced to drawing a sufficiently dense set of ellipses perpendicular to the z-axis.

F

Security Polygons and Polyhedra

Unknown systematic errors bring a species of geometrical objects hitherto unknown in error calculus onto the stage [8]: polygons in case of two measurands, polyhedra in case of three, and abstract polytopes for more than three measurands. At any rate, the objects are convex and point-symmetric [28]. The naming security polygon, security polyhedron and security polytope establishes a correspondence to the terms confidence ellipse and confidence ellipsoid. We shall confine ourselves to polygons and polyhedra.

Security Polygons

Edges

Let us reconsider (14.10). For r = 2 the components of the propagated systematic error,

    f_β̄k = Σ_{i=1}^{m} b_ik f_ȳi;   k = 1, 2,

span a polygon. In a way, the polygon comes into being while the f_ȳ1, f_ȳ2, …, f_ȳm "scan" the set of points lying within or on the faces of the m-dimensional hypercuboid

    −f_s,ȳi ≤ f_ȳi ≤ f_s,ȳi;   i = 1, …, m.

To simplify matters, let us put f_ȳi = f_i, f_s,ȳi = f_s,i and m = 5, so that

    f_β̄1 = b11 f1 + b21 f2 + b31 f3 + b41 f4 + b51 f5
    f_β̄2 = b12 f1 + b22 f2 + b32 f3 + b42 f4 + b52 f5   (F.1)

with

    −f_s,i ≤ f_i ≤ f_s,i;   i = 1, …, 5.   (F.2)

With reference to a rectangular f_β̄1, f_β̄2 coordinate system, the polygon is obviously enclosed within a rectangle


    −f_s,β̄k ≤ f_β̄k ≤ f_s,β̄k;   f_s,β̄k = Σ_{i=1}^{5} |b_ik| f_s,i;   k = 1, 2.

Let us assume b11 ≠ 0. Eliminating f1, we find a straight line

    f_β̄2 = (b12/b11) f_β̄1 + c1   (F.3)

where c1 = h12 f2 + h13 f3 + h14 f4 + h15 f5 and

    h12 = (b11 b22 − b12 b21)/b11,   h13 = (b11 b32 − b12 b31)/b11
    h14 = (b11 b42 − b12 b41)/b11,   h15 = (b11 b52 − b12 b51)/b11.

Any variation of c1 shifts the line (F.3) parallel to itself; c1 is maximal if

    f2 = f2* = sign(h12) f_s,2,   f3 = f3* = sign(h13) f_s,3
    f4 = f4* = sign(h14) f_s,4,   f5 = f5* = sign(h15) f_s,5.   (F.4)

We obviously have

    c_s,1 = h12 f2* + h13 f3* + h14 f4* + h15 f5*   and   −c_s,1 ≤ c1 ≤ c_s,1.   (F.5)

Hence, the lines

    f_β̄2 = (b12/b11) f_β̄1 + c_s,1,   f_β̄2 = (b12/b11) f_β̄1 − c_s,1   (F.6)

hold maximum distance, thus pegging two edges of the polygon. Shifting f1 within −f_s,1, …, f_s,1, the point

    f_β̄1 = b11 f1 + b21 f2* + b31 f3* + b41 f4* + b51 f5*
    f_β̄2 = b12 f1 + b22 f2* + b32 f3* + b42 f4* + b52 f5*   (F.7)

slides along the polygon's c_s,1 edge; something similar applies to the −c_s,1 edge.


Vertices

When it comes to specifying the vertices of the two edges obtained so far, the c_s,1 edge is marked off by

    Vertex V1:   f_β̄1,1 =  b11 f_s,1 + b21 f2* + b31 f3* + b41 f4* + b51 f5*
                 f_β̄2,1 =  b12 f_s,1 + b22 f2* + b32 f3* + b42 f4* + b52 f5*

and

    Vertex V2:   f_β̄1,2 = −b11 f_s,1 + b21 f2* + b31 f3* + b41 f4* + b51 f5*
                 f_β̄2,2 = −b12 f_s,1 + b22 f2* + b32 f3* + b42 f4* + b52 f5*.

For the −c_s,1 edge we have

    Vertex V3:   f_β̄1,3 =  b11 f_s,1 − b21 f2* − b31 f3* − b41 f4* − b51 f5*
                 f_β̄2,3 =  b12 f_s,1 − b22 f2* − b32 f3* − b42 f4* − b52 f5*

and

    Vertex V4:   f_β̄1,4 = −b11 f_s,1 − b21 f2* − b31 f3* − b41 f4* − b51 f5*
                 f_β̄2,4 = −b12 f_s,1 − b22 f2* − b32 f3* − b42 f4* − b52 f5*.

This suggests a point symmetry with respect to the vertices V1 and V4 and the vertices V2 and V3, respectively. After all, we have found two of the polygon's edges including their vertices. We conclude that the polygon holds at most m pairs of parallel edges.

Given b11 = 0, we cannot eliminate f1. Instead, (F.1) produces two abscissas

    f_β̄1 = ±[ sign(b21) b21 f_s,2 + sign(b31) b31 f_s,3 + sign(b41) b41 f_s,4 + sign(b51) b51 f_s,5 ],   (F.8)

through each of which a line is to be drawn parallel to the f_β̄2-axis. The coordinate f_β̄2 slides along these verticals following

    f_β̄2 = b12 f1 ± [ sign(b21) b22 f_s,2 + sign(b31) b32 f_s,3 + sign(b41) b42 f_s,4 + sign(b51) b52 f_s,5 ].   (F.9)


Equal systematic errors

Assuming f_ȳi = f_y; f_s,ȳi = f_s,y; i = 1, …, m, due to (14.13) the security polygon degenerates into an interval,

    −f_s,y ≤ f_β̄1 ≤ f_s,y;   f_β̄2 = 0.   (F.10)

Security Polyhedra

Faces

To examine the geometrical properties of the convex, point-symmetric hull spanned by three propagated systematic errors,

    f_β̄k = Σ_{i=1}^{m} b_ik f_i,   −f_s,i ≤ f_i ≤ f_s,i;   k = 1, 2, 3,

we again confine ourselves to m = 5,

    f_β̄1 = b11 f1 + b21 f2 + b31 f3 + b41 f4 + b51 f5
    f_β̄2 = b12 f1 + b22 f2 + b32 f3 + b42 f4 + b52 f5   (F.11)
    f_β̄3 = b13 f1 + b23 f2 + b33 f3 + b43 f4 + b53 f5.

With respect to a rectangular f_β̄1, f_β̄2, f_β̄3 coordinate system, the polyhedron is enclosed within a cuboid given by

    −f_s,β̄k ≤ f_β̄k ≤ f_s,β̄k;   f_s,β̄k = Σ_{i=1}^{5} |b_ik| f_s,i;   k = 1, 2, 3.   (F.12)

Rewriting (F.11) according to

    b11 f1 + b21 f2 + b31 f3 = f_β̄1 − b41 f4 − b51 f5 = ξ
    b12 f1 + b22 f2 + b32 f3 = f_β̄2 − b42 f4 − b52 f5 = η   (F.13)
    b13 f1 + b23 f2 + b33 f3 = f_β̄3 − b43 f4 − b53 f5 = ζ,

we may eliminate f1 and f2, thus solving for f3,

    f3 · | b11 b21 b31 |   | b11 b21 ξ |
         | b12 b22 b32 | = | b12 b22 η |.
         | b13 b23 b33 |   | b13 b23 ζ |

Defining

    λ12 = b12 b23 − b22 b13,   μ12 = b21 b13 − b11 b23,   ν12 = b11 b22 − b21 b12,

we have

    λ12 f_β̄1 + μ12 f_β̄2 + ν12 f_β̄3 = c12   (F.14)

where

    c12 = h12,3 f3 + h12,4 f4 + h12,5 f5   (F.15)

and

    h12,3 = λ12 b31 + μ12 b32 + ν12 b33
    h12,4 = λ12 b41 + μ12 b42 + ν12 b43
    h12,5 = λ12 b51 + μ12 b52 + ν12 b53.

Varying c12, which we can do via f3, f4, f5, the plane (F.14) gets shifted parallel to itself. The choices

    f3 = f3* = sign(h12,3) f_s,3,   f4 = f4* = sign(h12,4) f_s,4,   f5 = f5* = sign(h12,5) f_s,5

assign a maximum value to c12,

    c_s,12 = h12,3 f3* + h12,4 f4* + h12,5 f5*,

whereat −c_s,12 ≤ c12 ≤ c_s,12. After all, we have found two of the solid's faces. Calling them F1 and F2, we observe

    λ12 f_β̄1 + μ12 f_β̄2 + ν12 f_β̄3 = −c_s,12   (F1)
    λ12 f_β̄1 + μ12 f_β̄2 + ν12 f_β̄3 = +c_s,12   (F2).   (F.16)

In f = (f1 f2 f3 f4 f5)ᵀ, the variables f1, f2 are free to vary, while the variables f3, f4, f5 are to be kept fixed. If f1, f2 vary, the vector f_β̄ = Bᵀf slides along the faces F1 and F2. Cyclically swapping the variables, we successively find the polyhedron's remaining faces. As Table F.1 indicates, the maximum number of faces is given by m(m − 1). Table F.2 visualizes the swapping procedure.

Given that in (F.11), say, b11 and b12 vanish, we cannot eliminate f1. Moreover, as ν12 becomes zero, (F.16) represents a pair of planes parallel to the f_β̄3-axis,

    λ12 f_β̄1 + μ12 f_β̄2 = ±c_s,12;   ν12 = 0.

The cases λ12 = 0 and μ12 = 0 are to be treated correspondingly.

Table F.1. Maximum number of faces: each unordered pair of eliminated variables yields one pair of parallel faces. For m = 5 the pairs are

    f1f2, f1f3, f1f4, f1f5, f2f3, f2f4, f2f5, f3f4, f3f5, f4f5,

i.e. m(m − 1)/2 = 10 pairs and hence at most m(m − 1) = 20 faces.

Table F.2. Cyclic elimination, r = 3 and m = 5

    Step | Eliminated variables | Variables remaining within the system
      1  | f1, f2               | f3, f4, f5
      2  | f1, f3               | f2, f4, f5
      3  | f1, f4               | f2, f3, f5
      4  | f1, f5               | f2, f3, f4
      5  | f2, f3               | f1, f4, f5
      6  | f2, f4               | f1, f3, f5
      7  | f2, f5               | f1, f3, f4
      8  | f3, f4               | f1, f2, f5
      9  | f3, f5               | f1, f2, f4
     10  | f4, f5               | f1, f2, f3

Vertices

Given h12,i ≠ 0; i = 3, 4, 5, we may consider two sets of f-vectors with respect to the faces F1 and F2, namely

    (−f_s,1, −f_s,2, −f3*, −f4*, −f5*)ᵀ      ( f_s,1, −f_s,2, −f3*, −f4*, −f5*)ᵀ
    (−f_s,1,  f_s,2, −f3*, −f4*, −f5*)ᵀ      ( f_s,1,  f_s,2, −f3*, −f4*, −f5*)ᵀ      (F1)

and

    ( f_s,1,  f_s,2,  f3*,  f4*,  f5*)ᵀ      (−f_s,1,  f_s,2,  f3*,  f4*,  f5*)ᵀ
    ( f_s,1, −f_s,2,  f3*,  f4*,  f5*)ᵀ      (−f_s,1, −f_s,2,  f3*,  f4*,  f5*)ᵀ      (F2).

On each of the faces, these vectors appoint four vertices via f_β̄ = Bᵀf. For F1 we find, for k = 1, 2, 3,

    Vertex V1:   f_β̄k,1 = −b1k f_s,1 − b2k f_s,2 − b3k f3* − b4k f4* − b5k f5*
    Vertex V2:   f_β̄k,2 =  b1k f_s,1 − b2k f_s,2 − b3k f3* − b4k f4* − b5k f5*
    Vertex V3:   f_β̄k,3 = −b1k f_s,1 + b2k f_s,2 − b3k f3* − b4k f4* − b5k f5*
    Vertex V4:   f_β̄k,4 =  b1k f_s,1 + b2k f_s,2 − b3k f3* − b4k f4* − b5k f5*.

As the polyhedra are point-symmetric, the vertices on F2 are to be obtained from those on F1 by mirroring.

Now assume h12,5 to vanish. Then, as (F.11) and (F.15) suggest, each of the faces carries eight vertices instead of four. Should additionally h12,4 vanish, each face would hold 16 vertices. However, at least one of the coefficients h12,i, i = 3, 4, 5, must be unequal to zero, as otherwise the planes F1 and F2 would pass through the origin.


Example

We shall find the faces and vertices of the polyhedron

    f_β̄ = Bᵀ f,   Bᵀ = (  1  2  3 −1  2
                          −1  3  2  2 −1
                           2 −1 −3  3 −3 ),   −1 ≤ f_i ≤ 1;

i = 1, …, 5.

As suggested by fβ¯1 = f1 + 2f2 + 3f3 − f4 + 2f5 fβ¯2 = −f1 + 3f2 + 2f3 + 2f4 − f5 fβ¯3 = 2f1 − f2 − 3f3 + 3f4 − 3f5 , the polyhedron is enclosed within the cuboid −9 ≤ fβ¯1 ≤ 9 ,

−9 ≤ fβ¯2 ≤ 9 ,

−12 ≤ fβ¯3 ≤ 12 .

Cyclic elimination yields the coefficients λ, μ, ν, the constants h12,3 , . . . and ± cs,12 , . . . and finally the vertices and faces. The polyhedron is depicted in Fig. F.1 and shows 20 faces, each face holding four vertices.

Fig. F.1. Security polyhedron as specified in the example: there are 20 faces, each face having four vertices

G

EP Boundaries and EPC Hulls

The combination of confidence ellipses and ellipsoids on the one hand with security polygons and polyhedra on the other gives rise to EP boundaries and EPC hulls, as termed by the author. An EP boundary is intended to localize a 2-tuple of true values with respect to the related 2-tuple of estimators. An EPC hull does, mutatis mutandis, the same for a 3-tuple. We shall abstain from considering tuples of dimensionality greater than 3. To recall [8, 29, 28]:

– An EP boundary denotes a convex, continuously differentiable closed line composed of the segments of a confidence Ellipse and the edges of a Polygon.

– An EPC hull denotes a convex, continuously differentiable closed surface composed of the segments of a confidence Ellipsoid, the faces of a Polyhedron, and certain segments of elliptical Cylinders providing smooth transitions between the flat and elliptically curved parts of the hull.

While an EP boundary resembles a convex slice of a potato, an EPC hull takes after a convex potato as a whole.

EP Boundary

Ellipse and Interval

Let us consider a rectangular β1, β2 coordinate system and estimators β̄1, β̄2 with expectations μβ̄1, μβ̄2 and true values β0,1, β0,2, respectively. Any point of the ellipse's circumference may coincide with the point
$$
\mu_{\bar\beta} = \begin{pmatrix} \mu_{\bar\beta_1} \\ \mu_{\bar\beta_2} \end{pmatrix}.
$$
Let the interval limiting the propagated systematic error be stretched along the β1-axis and notionally be taken as a "stick." To span the EP region, we move the midpoint of the "stick" along the circumference of the ellipse, keeping the stick's orientation steadily horizontal, i.e. parallel to the β1-axis. The end of the stick pointing away from the ellipse produces a point of the EP boundary. For a practical approach, we observe

278

G

EP Boundaries and EPC Hulls

that there are two points of the ellipse in which the "stick" coincides with the local tangent. Here, we decompose the ellipse, shifting one of the partial arcs to the left and the other to the right, in each case parallel to the β1-axis. Reconnecting the arcs on either side by copies of the horizontally kept "stick" produces the EP boundary as a whole. We expect the tuple (β0,1, β0,2) of true values to be an element of the set of (β1, β2)-tuples lying within or on the borderline of the EP region.

Ellipse and Polygon

Again, we refer to a rectangular β1, β2 coordinate system and estimators β̄1, β̄2 with expectations μβ̄1, μβ̄2 and true values β0,1, β0,2, respectively. Any point of the ellipse's circumference may coincide with the point
$$
\mu_{\bar\beta} = \begin{pmatrix} \mu_{\bar\beta_1} \\ \mu_{\bar\beta_2} \end{pmatrix}.
$$
Let us now move the center of the polygon in discrete, sufficiently narrow steps along the circumference of the ellipse, keeping the polygon's orientation steadily constant. In each of the breakpoints we draw a tangent to the ellipse. Shifting the tangent away from its osculation point, parallel to itself and toward the ellipse's outside, it either strikes a vertex of the polygon or coincides with one of its edges, at any rate marking the largest possible distance from the osculation point. In the first case we produce a point, in the latter a line segment of the EP boundary. After all, the boundary of the EP region consists of the segments of a contiguous decomposition of the ellipse's circumference and the edges of the polygon, where arcs and edges follow in succession [29]. We expect the tuple (β0,1, β0,2) of true values to be an element of the set of (β1, β2)-tuples lying within or on the borderline of the EP region.

EPC Hull

Ellipsoid and Interval

We refer to a rectangular β1, β2, β3 coordinate system and estimators β̄1, β̄2, β̄3 with expectations μβ̄1, μβ̄2, μβ̄3 and true values β0,1, β0,2, β0,3, respectively. Any point of the ellipsoid's skin may coincide with the point
$$
\mu_{\bar\beta} = \begin{pmatrix} \mu_{\bar\beta_1} \\ \mu_{\bar\beta_2} \\ \mu_{\bar\beta_3} \end{pmatrix}.
$$
Let the interval limiting the propagated systematic error be stretched along the β1-axis and notionally taken as a "stick."
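The "stick" construction can be sketched numerically for a single ellipse; it applies equally to each contour ellipse of the ellipsoid. The semi-axes a, b and the half-width f_s below are arbitrary illustration values, and the ellipse is assumed axis-aligned:

```python
import math

# Hypothetical axis-aligned confidence ellipse (semi-axes a, b) and a
# systematic-error "stick" of half-length f_s along the beta_1 axis.
a, b, f_s = 2.0, 1.0, 0.5

def ep_boundary(theta):
    """Point of the EP boundary: ellipse point shifted outward by the
    half-stick, parallel to the beta_1 axis (sign picks the outward side)."""
    x, y = a * math.cos(theta), b * math.sin(theta)
    return x + math.copysign(f_s, math.cos(theta)), y

# The construction widens the ellipse by f_s in the beta_1 direction only.
xs = [ep_boundary(k * 2 * math.pi / 360)[0] for k in range(360)]
ys = [ep_boundary(k * 2 * math.pi / 360)[1] for k in range(360)]
print(max(xs), max(ys))  # 2.5 1.0 : a + f_s and b
```

The β1-extent grows from a to a + f_s while the β2-extent stays at b, reflecting that the stick only acts horizontally.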

For convenience, we cover the ellipsoid by an ensemble of sufficiently dense contour lines β3 = const., being on their part ellipses, obviously. We then move the center of the "stick" along any one of these ellipses, keeping the stick's spatial orientation steadily horizontal and parallel to the β1-axis. Again, there are two points in which the "stick" coincides with the local tangent. Here, we decompose the ellipse, shifting one of the partial arcs to the left and the other to the right, in each case parallel to the β1-axis. Reconnecting the arcs on either side by inserting copies of the "stick" produces a first continuous and differentiable contour line of the EPC hull. Repeating the procedure with the ellipsoid's other contour lines eventually yields the EPC hull as a whole. We expect the triple (β0,1, β0,2, β0,3) of true values to be an element of the set of (β1, β2, β3)-tuples lying within or on the border of the EPC hull.

Ellipsoid and Polygon

We conceive a rectangular β1, β2, β3 coordinate system and estimators β̄1, β̄2, β̄3 with expectations μβ̄1, μβ̄2, μβ̄3 and true values β0,1, β0,2, β0,3, respectively. Any point of the ellipsoid's skin may coincide with the point
$$
\mu_{\bar\beta} = \begin{pmatrix} \mu_{\bar\beta_1} \\ \mu_{\bar\beta_2} \\ \mu_{\bar\beta_3} \end{pmatrix}.
$$
Let fβ̄3 = 0, so that the polygon confining the pair (fβ̄1, fβ̄2) of propagated systematic errors is aligned parallel to the β1, β2-plane. Again, we cover the ellipsoid by an ensemble of contour lines β3 = const. We then move the center of the polygon in discrete, sufficiently narrow steps along any such contour line β3 = const., keeping the polygon's orientation steadily constant and parallel to the β1, β2-plane. In each of the breakpoints we conceive a local tangent. Shifting the tangent away from its osculation point, parallel to itself and toward the ellipse's outside, it either strikes a vertex of the polygon or coincides with one of its edges, at any rate marking the largest possible distance from the osculation point. In the first case we get a single point, in the latter a line segment of a contour line of the EPC hull. Repeating the procedure with the other contour lines of the ellipsoid successively produces the EPC hull in its entirety. We expect the triple (β0,1, β0,2, β0,3) of true values to be an element of the set of (β1, β2, β3)-tuples lying within or on the border of the EPC hull.

Ellipsoid and Polyhedron

We refer to a rectangular β1, β2, β3 coordinate system and estimators β̄1, β̄2, β̄3 with expectations μβ̄1, μβ̄2, μβ̄3 and true values β0,1, β0,2, β0,3, respectively. Any point of the ellipsoid's skin may coincide with the point

$$
\mu_{\bar\beta} = \begin{pmatrix} \mu_{\bar\beta_1} \\ \mu_{\bar\beta_2} \\ \mu_{\bar\beta_3} \end{pmatrix}.
$$
To span the EPC hull, we move the center of the polyhedron along any contour line β3 = const. of the ellipsoid, keeping the spatial orientation of the polyhedron steadily constant. In each of the breakpoints we conceive a local tangent plane. We shift the plane away from its osculation point, parallel to itself and toward the ellipsoid's outside, so that, finally, it either strikes a vertex of the polyhedron or coincides with one of the polyhedron's edges or faces. In the first case we get a point, in the second a line segment, and in the third a plane segment of the EPC hull. Repeating the procedure with the ellipsoid's other contour lines eventually produces the EPC hull as a whole (Fig. G.1). We expect the triple (β0,1, β0,2, β0,3) of true values to be an element of the set of (β1, β2, β3)-tuples lying within or on the border of the EPC hull.
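Shifting the tangent plane outward until it first meets the polyhedron amounts to evaluating the polyhedron's support function, h(n) = max over vertices of n·v. A minimal sketch with hypothetical vertices (not taken from the example above):

```python
# The outward-shifted tangent plane touches the polyhedron where the
# support function h(n) = max_v n.v is attained; hypothetical vertices.
vertices = [(1, 0, 0), (-1, 0, 0), (0, 2, 0), (0, -2, 0), (0, 0, 1), (0, 0, -1)]

def support(n):
    """Support value along normal n and the vertices attaining it."""
    best = max(sum(a * b for a, b in zip(n, v)) for v in vertices)
    touching = [v for v in vertices
                if abs(sum(a * b for a, b in zip(n, v)) - best) < 1e-12]
    return best, touching

# A generic normal strikes exactly one vertex ...
print(support((1.0, 0.2, 0.1)))   # (1.0, [(1, 0, 0)])
# ... whereas a normal equidistant to two vertices meets a whole edge.
print(support((1.0, 0.0, 1.0)))   # (1.0, [(1, 0, 0), (0, 0, 1)])
```

One, two, or more touching vertices correspond to the point, line-segment, and plane-segment cases of the construction.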

Fig. G.1. Exemplary configuration of an ellipsoid with a polyhedron

H

Student’s Density

We consider it beneficial to distinguish two versions of the variable of Student's density.

Variable $T(\nu) = \dfrac{X-\mu}{S}$

Let the random variable X be N(μ, σ²)-distributed,
$$
P(X \le x) = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{x} \exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right] \mathrm{d}x\,. \tag{H.1}
$$

Furthermore, let
$$
x_1, x_2, \dots, x_n \tag{H.2}
$$
be n realizations of the random variable X. The sample variance is given by
$$
s^2 = \frac{1}{n-1} \sum_{l=1}^{n} (x_l - \bar x)^2\,; \qquad \nu = n-1\,. \tag{H.3}
$$
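The divisor n − 1 in (H.3) is what ties ν = n − 1 degrees of freedom to the density (H.7) below; a quick check against Python's statistics module (the sample values are arbitrary):

```python
import statistics

# Arbitrary sample of n = 5 realizations of X.
x = [2.1, 1.9, 2.4, 2.0, 1.6]
n = len(x)
x_bar = sum(x) / n

# Sample variance (H.3) with divisor n - 1, i.e. nu = n - 1 = 4.
s2 = sum((xl - x_bar) ** 2 for xl in x) / (n - 1)

assert abs(s2 - statistics.variance(x)) < 1e-12
print(s2)  # about 0.085
```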

For a fixed t we define
$$
x = \mu + t s \tag{H.4}
$$
and ask for
$$
P(X \le \mu + ts) = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\mu+ts} \exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right] \mathrm{d}x\,. \tag{H.5}
$$
Substituting η = (x − μ)/σ for x produces
$$
P(X \le \mu + ts) = P(t, s, \nu) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{ts/\sigma} e^{-\eta^2/2}\, \mathrm{d}\eta\,. \tag{H.6}
$$

For this to become statistically representative, we average¹ by means of
$$
p_{S^2}(s^2) = \frac{\nu^{\nu/2}}{2^{\nu/2}\,\Gamma(\nu/2)\,\sigma^{\nu}}\, \exp\!\left(-\frac{\nu s^2}{2\sigma^2}\right) s^{\nu-2} \tag{H.7}
$$
so that
$$
P(t,\nu) = E\{P(t,s,\nu)\} = \int_0^{\infty} \left\{ \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{ts/\sigma} e^{-\eta^2/2}\, \mathrm{d}\eta \right\} p_{S^2}(s^2)\, \mathrm{d}s^2\,. \tag{H.8}
$$
But this is a distribution function with variable t. Hence, differentiating with respect to t,
$$
\frac{\mathrm{d}P(t,\nu)}{\mathrm{d}t} = \int_0^{\infty} \left[ \frac{1}{\sigma\sqrt{2\pi}}\, \exp\!\left(-\frac{t^2 s^2}{2\sigma^2}\right) s \right] p_{S^2}(s^2)\, \mathrm{d}s^2 \tag{H.9}
$$
and integrating out the right-hand side issues a density, namely
$$
p_T(t,\nu) = \frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\sqrt{\pi\nu}\;\Gamma\!\left(\frac{\nu}{2}\right)} \left(1 + \frac{t^2}{\nu}\right)^{-(\nu+1)/2}. \tag{H.10}
$$
This, obviously, is Student's or Gosset's density for the random variable
$$
T(\nu) = \frac{X-\mu}{S}\,. \tag{H.11}
$$

Due to (H.3) and (H.7), the number of degrees of freedom is ν = n − 1.
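The density (H.10) can be verified numerically: coded directly from the formula, it should integrate to one. A crude midpoint quadrature suffices (the choice ν = 5 is arbitrary):

```python
import math

def p_t(t, nu):
    """Student's density (H.10)."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(math.pi * nu) * math.gamma(nu / 2))
    return c * (1 + t * t / nu) ** (-(nu + 1) / 2)

# Midpoint quadrature over [-50, 50]; the tails beyond are negligible
# for nu = 5, so the total should be (almost) one.
nu, h = 5, 0.001
total = sum(p_t(-50 + (k + 0.5) * h, nu) * h for k in range(int(100 / h)))
print(round(total, 4))  # 1.0
```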

Variable $T(\nu) = \dfrac{\bar X - \mu}{S/\sqrt{n}}$

Let us replace (H.4) by
$$
\bar x = \mu + t s/\sqrt{n} \tag{H.12}
$$

in which
$$
\bar x = \frac{1}{n} \sum_{l=1}^{n} x_l \tag{H.13}
$$
denotes the sample mean. As

¹ Averaging by $p_S(s)$ leads to the same result, as $p_S(s)\,\mathrm{d}s = p_{S^2}(s^2)\,\mathrm{d}s^2$.

$$
P(\bar X \le \bar x) = \frac{1}{(\sigma/\sqrt{n})\sqrt{2\pi}} \int_{-\infty}^{\bar x} \exp\!\left[-\frac{(\bar x - \mu)^2}{2\sigma^2/n}\right] \mathrm{d}\bar x \tag{H.14}
$$

we have
$$
P\!\left(\bar X \le \mu + ts/\sqrt{n}\right) = \frac{1}{(\sigma/\sqrt{n})\sqrt{2\pi}} \int_{-\infty}^{\mu+ts/\sqrt{n}} \exp\!\left[-\frac{(\bar x - \mu)^2}{2\sigma^2/n}\right] \mathrm{d}\bar x\,. \tag{H.15}
$$

Substituting η = (x̄ − μ)/(σ/√n) leads us back to (H.6). Hence, the random variable
$$
T(\nu) = \frac{\bar X - \mu}{S/\sqrt{n}} \tag{H.16}
$$
is t-distributed with ν = n − 1 degrees of freedom.
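A realization of the variable (H.16) is obtained directly from a sample; the sample values and the true value μ below are arbitrary illustration numbers:

```python
import math

# Arbitrary sample; hypothetical true value mu = 2.0.
x = [1.0, 2.0, 3.0, 4.0]
mu, n = 2.0, len(x)
x_bar = sum(x) / n
s = math.sqrt(sum((xl - x_bar) ** 2 for xl in x) / (n - 1))

# Realization of T(nu) = (X_bar - mu) / (S / sqrt(n)), nu = n - 1 = 3.
t = (x_bar - mu) / (s / math.sqrt(n))
print(round(t, 4))  # 0.7746
```

Here x̄ = 2.5 and s² = 5/3, so t = 0.5/(s/2) = √(3/5) ≈ 0.7746.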

I

Uncertainty Band Versus EP-Region

The fitting of straight lines asks for two-dimensional uncertainty regions. Remarkably enough, there are two different but equivalent approaches [29].

Set of Straight Lines Versus Set of 2-Tuples

Consider a straight line as defined by a 2-tuple of true values
$$
\begin{pmatrix} \beta_{0,1} \\ \beta_{0,2} \end{pmatrix}
$$
pegging its ordinate intercept β0,1 and slope β0,2. The least squares adjustment provides a pair of estimators
$$
\begin{pmatrix} \bar\beta_1 \\ \bar\beta_2 \end{pmatrix}
$$
devising a straight line
$$
\bar y(x) = \bar\beta_1 + \bar\beta_2 x\,.
$$
However, there are still other straight lines compatible with the error model and the input data. The idea is to engird this bunch by an uncertainty band
$$
\bar y(x) \pm u_{\bar y(x)}\,.
$$
We expect the upper and lower borderlines
$$
\bar y(x) + u_{\bar y(x)} \quad\text{and}\quad \bar y(x) - u_{\bar y(x)}
$$
to localize the true straight line. At the same time, the fitted straight line may be endowed with a confidence ellipse and a security polygon. Their piecing together devises an EP-region. The latter marks a set of tuples (β1, β2) being compatible with the input data and the error model. We expect the 2-tuple of true values, (β0,1, β0,2), to be an element of this set.

The approaches, though ostensibly different, should put forth concurrent results: the set of straight lines, as devised by the uncertainty band ȳ(x) ± uȳ(x), should reproduce the set of 2-tuples (β1, β2) as confined by the EP-region, and vice versa. We shall establish this complementarity. To this end, we step back and decompose the uncertainty band and the EP-region into their components due to random and systematic errors.

Transformation Procedures

Initially, we transform the random share of the uncertainty band into the confidence ellipse and, thereafter, the systematic share into the security polygon.

Random Errors

Consider case (i) as quoted in Table 12.1. Abstaining from systematic errors, the uncertainty band is given by (13.26), disregarding the term fs,ȳ. The confidence ellipse is given in (13.27). We envisage the set of tangents to the upper and lower boundaries of the uncertainty band. Each tangent specifies a certain y-intercept and slope, i.e. a tuple
$$
\begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}.
$$
Considering Fig. I.1, we see the parameters β1, β2 of the tangent in P1 to reproduce the point P1* on the ellipse. More generally, a tangent gliding from P1 via P2 to P3 is seen to establish the right arc P1*, P2*, P3* of the confidence ellipse. Similarly, a tangent gliding from P5 via P6 to P7 reproduces the ellipse's left arc P5*, P6*, P7*. Let us specify the differences β1 − β̄1 and β2 − β̄2. For this, we denote the upper boundary of the uncertainty band by
$$
y_2(x) = \bar y(x) + u_{\bar y(x)} = \bar\beta_1 + \bar\beta_2 x + t_P(m-2)\, s_y \sqrt{\sum_{i=1}^{m} (b_{i1} + b_{i2}x)^2}\,. \tag{I.1}
$$
Due to (13.17) we observe
$$
y_2(x) = \bar\beta_1 + \bar\beta_2 x + t_P(m-2) \sqrt{s_{\bar\beta_1\bar\beta_1} + 2 s_{\bar\beta_1\bar\beta_2}\, x + s_{\bar\beta_2\bar\beta_2}\, x^2}\,. \tag{I.2}
$$
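The step from (I.1) to (I.2) is an algebraic identity once (13.17) is taken to state sβ̄jβ̄k = s_y² Σᵢ b_ij b_ik (an assumption about that reference; the b_ij and s_y below are hypothetical numbers):

```python
# Check that the band radicand of (I.1) matches that of (I.2), assuming
# (13.17) reads s_{beta_j beta_k} = s_y^2 * sum_i b_ij * b_ik.
b = [(0.9, -0.4), (0.1, 0.0), (-0.2, 0.5), (0.2, -0.1)]  # hypothetical b_ij
s_y = 1.3                                                # hypothetical s_y

s11 = s_y**2 * sum(b1 * b1 for b1, b2 in b)
s12 = s_y**2 * sum(b1 * b2 for b1, b2 in b)
s22 = s_y**2 * sum(b2 * b2 for b1, b2 in b)

for x in (-2.0, 0.0, 3.5):
    lhs = s_y**2 * sum((b1 + b2 * x) ** 2 for b1, b2 in b)  # radicand (I.1)
    rhs = s11 + 2 * s12 * x + s22 * x * x                   # radicand (I.2)
    assert abs(lhs - rhs) < 1e-12
print("radicands agree")
```

Expanding the square under the sum in (I.1) and collecting powers of x yields exactly the three empirical moments of (I.2).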

The tangent to any point (x, y₂(x)),
$$
y(\xi) = y_2(x) + y_2'(x)(\xi - x) = \beta_1 + \beta_2 \xi\,, \tag{I.3}
$$
implies
$$
\beta_1 = y_2(x) - y_2'(x)\, x \quad\text{and}\quad \beta_2 = y_2'(x)\,.
$$
With
$$
y_2'(x) = \bar\beta_2 + t_P(m-2)\, \frac{s_{\bar\beta_1\bar\beta_2} + s_{\bar\beta_2\bar\beta_2}\, x}{\sqrt{s_{\bar\beta_1\bar\beta_1} + 2 s_{\bar\beta_1\bar\beta_2}\, x + s_{\bar\beta_2\bar\beta_2}\, x^2}} \tag{I.4}
$$
we have
$$
\beta_1 - \bar\beta_1 = t_P(m-2)\, \frac{s_{\bar\beta_1\bar\beta_1} + s_{\bar\beta_1\bar\beta_2}\, x}{\sqrt{s_{\bar\beta_1\bar\beta_1} + 2 s_{\bar\beta_1\bar\beta_2}\, x + s_{\bar\beta_2\bar\beta_2}\, x^2}}\,. \tag{I.5}
$$
Similarly, we find
$$
\beta_2 - \bar\beta_2 = t_P(m-2)\, \frac{s_{\bar\beta_1\bar\beta_2} + s_{\bar\beta_2\bar\beta_2}\, x}{\sqrt{s_{\bar\beta_1\bar\beta_1} + 2 s_{\bar\beta_1\bar\beta_2}\, x + s_{\bar\beta_2\bar\beta_2}\, x^2}}\,. \tag{I.6}
$$
Ultimately, inserting (I.5) and (I.6) into (13.27) and gathering terms in powers of x⁰, x¹, and x² issues an identity, provided that tP(m−2) = tP(2, m−2). This, however, ascertains differing probabilities, as
$$
P(m-2) \ne P(2, m-2)\,. \tag{I.7}
$$
The disparity is to be attributed to the observation that the straight line's uncertainty band is established by Student's density, which is one-dimensional, while the confidence ellipse is issued by Hotelling's density, which is two-dimensional. To have an example, we assume m = 14 and choose P(2, m−2) = 95% for Hotelling's density. This produces tP(2, m−2) = 3.0. On the other hand, for tP(m−2) = 3.0, Student's density issues a probability of P(m−2) = 99%. Letting x tend to infinity, we get the tangent to y₂(x) with y-intercept and slope
$$
\lim_{x\to\infty}\{\beta_1\} = \bar\beta_1 + t_P(m-2)\, \frac{s_{\bar\beta_1\bar\beta_2}}{\sqrt{s_{\bar\beta_2\bar\beta_2}}}\,, \qquad
\lim_{x\to\infty}\{\beta_2\} = \bar\beta_2 + t_P(m-2)\, \frac{s_{\bar\beta_2\bar\beta_2}}{\sqrt{s_{\bar\beta_2\bar\beta_2}}} \tag{I.8}
$$

producing the point P4* of Fig. I.1. Putting x = 0, we find the tangent to y₂(x) with y-intercept and slope
$$
\beta_1 - \bar\beta_1 = t_P(m-2)\, \frac{s_{\bar\beta_1\bar\beta_1}}{\sqrt{s_{\bar\beta_1\bar\beta_1}}}\,, \qquad
\beta_2 - \bar\beta_2 = t_P(m-2)\, \frac{s_{\bar\beta_1\bar\beta_2}}{\sqrt{s_{\bar\beta_1\bar\beta_1}}}\,. \tag{I.9}
$$
We observe (I.8) to coincide with (D.2) and (I.9) with (D.1). Deriving corresponding expressions for the lower boundary of the uncertainty band,
$$
y_1(x) = \bar y(x) - u_{\bar y(x)} = \bar\beta_1 + \bar\beta_2 x - t_P(m-2)\, s_y \sqrt{\sum_{i=1}^{m} (b_{i1} + b_{i2}x)^2}\,, \tag{I.10}
$$
produces
$$
\beta_1 - \bar\beta_1 = -t_P(m-2)\, \frac{s_{\bar\beta_1\bar\beta_1} + s_{\bar\beta_1\bar\beta_2}\, x}{\sqrt{s_{\bar\beta_1\bar\beta_1} + 2 s_{\bar\beta_1\bar\beta_2}\, x + s_{\bar\beta_2\bar\beta_2}\, x^2}}\,, \qquad
\beta_2 - \bar\beta_2 = -t_P(m-2)\, \frac{s_{\bar\beta_1\bar\beta_2} + s_{\bar\beta_2\bar\beta_2}\, x}{\sqrt{s_{\bar\beta_1\bar\beta_1} + 2 s_{\bar\beta_1\bar\beta_2}\, x + s_{\bar\beta_2\bar\beta_2}\, x^2}}\,. \tag{I.11}
$$
For x → ∞, we obtain the tangent to y₁(x) with y-intercept and slope
$$
\lim_{x\to\infty}\{\beta_1\} = \bar\beta_1 - t_P(m-2)\, \frac{s_{\bar\beta_1\bar\beta_2}}{\sqrt{s_{\bar\beta_2\bar\beta_2}}}\,, \qquad
\lim_{x\to\infty}\{\beta_2\} = \bar\beta_2 - t_P(m-2)\, \frac{s_{\bar\beta_2\bar\beta_2}}{\sqrt{s_{\bar\beta_2\bar\beta_2}}}\,, \tag{I.12}
$$
establishing the point P8* of Fig. I.1. Vice versa, we might wish to deploy the coordinates β1 and β2 of the ellipse's circumference directly, as given in (13.27), and draw a bunch of straight lines y(x) = β1 + β2 x in between the upper and lower borders of the straight line's uncertainty band. We in fact find the bunch to fit the uncertainty band, given we boost Student's tP(m−2) up to Hotelling's tP(2, m−2), putting tP(m−2) = tP(2, m−2). Otherwise, the bunch would exceed the borderlines of the uncertainty band. Case (ii), as specified in Table 12.1, leads to a corresponding result; here, however, we have to put tP(n−1) = tP(2, n−1).
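The identity behind (I.7) can be checked numerically: inserting the differences (I.5) and (I.6) into the quadratic form of the confidence ellipse (assuming (13.27) to read Δβᵀ s⁻¹ Δβ = t_P² with s the empirical variance–covariance matrix of the estimators) reproduces t_P² for every x. The matrix entries and t below are arbitrary:

```python
import math

# Arbitrary positive-definite empirical variance-covariance matrix and t.
s11, s12, s22, t = 2.0, 0.6, 1.5, 3.0

for x in (-5.0, 0.0, 1.7, 40.0):
    d = s11 + 2 * s12 * x + s22 * x * x          # radicand of (I.5)/(I.6)
    db1 = t * (s11 + s12 * x) / math.sqrt(d)     # beta_1 - beta_bar_1, (I.5)
    db2 = t * (s12 + s22 * x) / math.sqrt(d)     # beta_2 - beta_bar_2, (I.6)
    det = s11 * s22 - s12 ** 2
    # Quadratic form Delta-beta^T s^{-1} Delta-beta of the ellipse:
    q = (s22 * db1**2 - 2 * s12 * db1 * db2 + s11 * db2**2) / det
    assert abs(q - t * t) < 1e-9
print("identity holds: q = t_P^2 for all x")
```

Gathering powers of x in the numerator indeed yields det·d, so q collapses to t² regardless of x, which is the identity the text refers to.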

Systematic Errors

To illustrate the proceeding, we address case (ii) of Table 12.1, ignoring random errors. To simplify the discussion, the number of measuring points is reduced to just three. On its left, Fig. I.2 displays the least squares line and its uncertainty band; on the right we observe the associated security polygon. The symbols LS1 to LS6 designate the line segments of the upper and lower borderlines of the uncertainty band. Intuitively, we expect the y-intercept and slope of line segment LS1 to reproduce the polygon's vertex V1; furthermore, segment LS2 to reproduce vertex V2 and, finally, segment LS3 to issue vertex V3. Corresponding relations should hold with respect to the segments of the upper borderline and the polygon's remaining three vertices. To keep track of the supposed relationships, we resort to a numerical example,
$$
B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \end{pmatrix}
= \begin{pmatrix} 1.33 & -0.50 \\ 0.33 & 0.00 \\ -0.67 & 0.50 \end{pmatrix}.
$$
Referring to (14.21),
$$
f_{s,\bar y(x)} = |b_{11} + b_{12}x|\, f_{s,\bar y_1} + |b_{21} + b_{22}x|\, f_{s,\bar y_2} + |b_{31} + b_{32}x|\, f_{s,\bar y_3}\,,
$$
we have
$$
\mathrm{LS}_1:\ x < -\frac{b_{31}}{b_{32}}\,; \qquad \mathrm{LS}_2:\ -\frac{b_{31}}{b_{32}} \le x < -\frac{b_{11}}{b_{12}}
$$
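With the example matrix, the kinks of the borderlines occur where one of the absolute-value arguments in (14.21) changes sign; b₂₂ = 0 contributes no breakpoint. A quick check (equal bounds fs,ȳᵢ = 1 are an assumption made only for this illustration):

```python
# Example matrix B of the text; rows (b_i1, b_i2).
B = [(1.33, -0.50), (0.33, 0.00), (-0.67, 0.50)]
f_s = (1.0, 1.0, 1.0)  # assumed equal bounds f_s,y_i, for illustration only

def f_s_band(x):
    """Propagated systematic error (14.21) at abscissa x."""
    return sum(abs(b1 + b2 * x) * f for (b1, b2), f in zip(B, f_s))

# Breakpoints: zeros of b_i1 + b_i2 * x (the row with b_i2 = 0 has none).
breaks = sorted(-b1 / b2 for b1, b2 in B if b2 != 0.0)
print(breaks)            # [1.34, 2.66]
print(f_s_band(0.0))     # about 2.33
```

The two breakpoints split each borderline into three line segments, matching the six segments LS1 to LS6 of Fig. I.2.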