


Truth and Traceability in Physics and Metrology

Michael Grabe

Morgan & Claypool Publishers

Copyright © 2018 Morgan & Claypool Publishers

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the publisher, or as expressly permitted by law or under terms agreed with the appropriate rights organization. Multiple copying is permitted in accordance with the terms of licences issued by the Copyright Licensing Agency, the Copyright Clearance Centre and other reproduction rights organisations.

Rights & Permissions
To obtain permission to re-use copyrighted material from Morgan & Claypool Publishers, please contact [email protected].

ISBN 978-1-64327-096-8 (ebook)
ISBN 978-1-64327-093-7 (print)
ISBN 978-1-64327-094-4 (mobi)

DOI 10.1088/978-1-64327-096-8

Version: 20181001

IOP Concise Physics
ISSN 2053-2571 (online)
ISSN 2054-7307 (print)

A Morgan & Claypool publication as part of IOP Concise Physics

Published by Morgan & Claypool Publishers, 1210 Fifth Avenue, Suite 250, San Rafael, CA, 94901, USA
IOP Publishing, Temple Circus, Temple Way, Bristol BS1 6HG, UK

To Lucy

Contents

Preface
Acknowledgements
Author biography

1 Basics of metrology
1.1 Regular or constant errors
1.2 Where traceability begins
1.3 Judging measurement results
1.4 True values and traceability
1.5 Consistency
1.6 Measuring errors
References

2 Some statistics
2.1 Measurands and random variables
2.2 Fisher's density
2.3 Confidence intervals
2.4 Non-uniqueness of the empirical covariance
2.5 Breakdown of statistical inference
2.6 Curing hypothesis testing
References

3 Measurement uncertainties
3.1 One measurand
3.2 Two and more measurands
3.3 Random errors
3.4 Bias
3.5 Overall uncertainty
3.6 Error propagation at a glance
References

4 Method of least squares
4.1 Geometry of adjustment
4.2 Linear systems
4.3 Quintessence of the method of least squares
References

5 Fitting of straight lines
5.1 True straight line
5.2 Fitting conditions
5.3 Straight line (I)
5.4 Straight line (II)
5.5 Straight line (III)
References

6 Features of least squares estimators
6.1 Uncertainties
6.2 Weighted least squares
6.3 Transfer of true values
6.4 Fundamental constants of physics
References

7 Prospects
7.1 Revising the error calculus
7.2 Redefining the SI base units

8 Epilogue
8.1 Verification by experiment
8.2 Deciding by reasoning
8.3 What is right, what is wrong?

References and suggested reading

Preface

There ain't no rules around here – we're trying to accomplish something!
Thomas Alva Edison

More often than not the discovery of insight begins with intuition. Even so, findings need to be proven by experiment. Paradoxically, the current procedures for verifying the conformity between theory and experiment seem to be somewhat out of order, and this becomes the more serious the tinier, and hence the more delicate to observe, the explored effects are. These days physical theories and experiments are more sophisticated than ever. So what is the background of the dilemma?

Metrological data are known to be blurred by the imperfections of the measuring process. All along, experimenters have attempted to extract as much information as possible about the physical quantities aimed at. Here the father figure of error calculus, Carl Friedrich Gauss, is highly esteemed for having put the essentials of data evaluation, long ago, on a seemingly sound and safe footing. Ironically, Gauss was defeated by a momentous fallacy. The drama has its roots in his treatment of what he termed regular or constant errors: errors being constant in time and unknown with respect to magnitude and sign. As a theoretician, Gauss passed the buck to experimenters, claiming that it would be their job to get rid of them. Unfortunately he erred: those errors turned out to be ineliminable in principle. As Gauss understood the situation, he based his error calculus on irregular or random errors alone, thus creating a concept that was incomplete and, strictly speaking, inapplicable to metrology right from the outset.

In retrospect, for about two centuries regular or constant errors were not the focal point of experimental activities. In line with this, today's notation unknown systematic errors, instead of regular or constant errors as proposed by Gauss himself, suggests that the post-Gaussian era had lost sight of the primordial stimulus given by Gauss. Confusingly, the worldwide practice of belatedly admitting those unknown systematic errors amounts to considering them as being random too. Nevertheless, during the early 1950s and again in the late 1970s, this so-called randomization came under suspicion of causing metrological incompatibilities. Eventually these inquiries suggested a rigorous recast of the Gaussian error calculus. Well knowing that any attempt to methodically restructure a constitutive, internationally long-established proceeding would provoke intense controversies, I realized that this was what had to be done.

In my view the addressed randomization prevents experimenters from localizing the true values of the measurands, as the associated measurement uncertainties turn out unreliably small. Furthermore, due to the presence of unknown systematic errors, the common practice of safeguarding measurement results by probability statements lacks statistical justification: probability statements regarding measured results no longer exist.



After all, I conjecture that the conformity between theory and experiment might have got out of balance. That is why this disquisition discusses an error concept dispensing with the common practice of randomizing unknown systematic errors, so as to end the current habit of mixing up random errors and randomized unknown systematic errors. Instead, unknown systematic errors will be treated as what they physically are—namely as constants being unknown with respect to magnitude and sign.

For the perpetual localization of the true values of the measurands the term traceability has been coined. Obviously, traceability is a necessary condition for achieving physical truth and is hence of paramount importance. As it stands, the ideas considered here issue a proceeding that steadily localizes the true values of the measurands and consequently establishes traceability. From there, they are likely to offer a way out of the disquiet physics appears to be afflicted with these days.

But unknown systematic errors cause other deep cuts in scientific reasoning. The tools of statistical inference, such as tests of hypothesis and analyses of variance, once supposed to analyse measured data, prove inapplicable in the presence of experimentally induced unknown systematic errors—whether we like it or not. These reflections might open up new vistas in the natural sciences.

Braunschweig, June 2018

Michael Grabe


Acknowledgements

I am happy to acknowledge the encouragement and support I received from the Physikalisch-Technische Bundesanstalt during my endeavours to revise and recast the classical error calculus from scratch. The management fostered my efforts and encouraged me to attend national and international conferences. I would also like to thank my colleagues for their valuable suggestions, without which I would not have been in a position to accomplish this essay.


Author biography

Michael Grabe

Dr Michael Grabe studied physics at the Universities of Braunschweig and Stuttgart and took his doctoral degree at the Technical University of Braunschweig, Institute for Physical Chemistry, where he was a research assistant and lecturer for physical chemistry and applied computer science. He then worked at the Physikalisch-Technische Bundesanstalt Braunschweig, focusing on legal metrology, computerized interferometric length measurements, procedures for the assessment of measurement uncertainties, and adjustments of fundamental constants of physics. Lectures and papers concerning the evaluation of measured data are cited on http://www.uncertainty.de.



Chapter 1 Basics of metrology

To measure is to compare.

1.1 Regular or constant errors

As is well known, measurement results are specified by an estimator assessing the measurand, i.e. the quantity to be measured, and a related measurement uncertainty accounting for the blurring of the measuring process. Up to the end of the 1970s uncertainties were judged according to the notion that measuring errors were random throughout and that the measured result, estimator ± uncertainty, would include 'the adequate information the experimenter looked for'. But was this the theoretical scattering center of the repeated measurements, the true value of the measurand, or even something else? Beyond that, there was quite an uneasy feeling as to what those portentous regular or constant errors might affect.

Let us recall: the father of error calculus, Carl Friedrich Gauss, based his error calculus on random errors alone, though he defined yet another type of measuring error which he termed regular or constant errors. These latter he judged to be constant in time and unknown with respect to magnitude and sign. As a theoretician, Gauss expected experimenters to get rid of them, so that he felt free to proceed without them. In retrospect we might wish to speculate as to whether Gauss considered the consequences of this decision should those errors turn out to be ineliminable. We may be sure that Gauss would not knowingly have stuck to an insufficient formalism; still, as regular or constant errors turned out to be ineliminable, we have to hold fast that his formalism was patently incomplete and hence metrologically inapplicable right from the outset. Suffice it to stress that Gauss was convinced that experimenters were in a position to do their job—unfortunately, he erred.

As is obvious, a constant unknown systematic error removes the theoretical scattering center of the repeated measurements from the true value of the measurand. Consequently, due to its being unknown with respect to magnitude and sign, it induces a momentous metrological perturbation. Oddly enough, systematic errors and true values failed to become an issue for about two centuries—though these terms never disappeared completely. When Gauss's regular or constant errors were eventually 'recognized', manifestly because they turned out to be ineliminable and were thus suspected to cause metrological problems, they went under the heading of unknown systematic errors. To emphasize:

The true values of measurands are hidden under unknown systematic errors, rendering any search for physical truth biased.

To highlight the notion that measurement results are to localize the true values of the measurands, the term traceability was coined: measurands should be traceable to the true values of the underlying system of physical units. How this can be accomplished will be discussed in the following.

1.2 Where traceability begins

The diction true value, to be sure, does not and cannot refer to natural philosophy but rather to an earthbound system of physical units. These units, in turn, dispose of true values and of realistic or realized values differing from one another. That the rendering of physical quantities is based on some necessarily arbitrary system of units does not prevent us from sticking to the adjective true, as any unique and exhaustive system of physical units ensures the unambiguity of the identification of physical phenomena in equal measure. This postulate, indeed, marks the bedrock of physics. As much as we expect the laws of physics to be true, we expect the constants of physics to own true values. Traceability starts from here, from the very grasping of true values.

Following the modus operandi of metrology, traceability makes reference to the true values of the primary standards of the system of physical units agreed on. After all, metrology starts from here: from defining a system of physical units, which these days is the SI, and from the assumption that physical laws—be they under investigation or in application—are true within their scope of validity. At this, the implied variables and constants are tacitly expected to hold true values, as flawed arguments would not meet true laws.

Still, experimenters are presented with a dilemma. Realized physical units dispose of built-in uncertainties; further, the accuracy of measuring processes is limited. From there, measurements are blurred by a veil of vagueness. Hence, in order to objectify measurements, metrologists have to bring results into being that ensure traceability.

As 'to measure is to compare', the basic measuring device is the comparator. Its job is to mediate the difference between two quantities of the same quality. With respect to masses, for instance, the comparator is a balance and the primary standard an appointed mass, called the International Prototype of the Kilogram. By definition, the kilogram is equal to the mass of this prototype. Unfortunately, the currently used prototype has proved to be slightly unstable, hence its true mass differs from the appointed nominal value of 1 kg.¹

At the very beginning a realized primary standard and a working standard, suitable for practical everyday use, are to be compared. As the case may be, they are made of the same material or of different materials, say, platinum–iridium versus platinum–iridium or platinum–iridium versus steel. Let N = 1 kg denote the prototype's nominal mass, N0 its physically true mass and fN the related systematic error, then

N = N_0 + f_N, \qquad -f_{s,N} \le f_N \le f_{s,N}.

To emphasize, both N0 and fN are unknown. Let m0 denote the true mass of the working standard to be linked up to the kg-prototype. The balance transfers the true value N0 of the primary standard to the true value m0 of the working standard via the true indication x0,

m_0 = N_0 + x_0,    (1.1)

Though this statement remains fictitious, it marks the starting point of mass metrology. Below, (1.1) will be rewritten so as to become metrologically viable (figure 1.1).

Figure 1.1. The true indication x0 of the comparator transfers the true value N0 to the true value m0.

¹ The prototype is a platinum–iridium cylinder of diameter and height of nearly 4 cm each and an agreed-on mass of 1 kg. This mass has proved to be unstable, at least to about ±50 μg. The SI system intends to revise the definition of the kilogram, supposedly via the mass of a silicon sphere of diameter just under 10 cm with an appraised uncertainty of, perhaps, less than ±10 μg.

1.3 Judging measurement results

These days metrologists are used to backing the relevance of measurement results in terms of probability statements.


While Gauss himself dismissed unknown systematic errors, the post-Gaussian era brought them to bear but treated them—which takes a little getting used to—as if they were random [1–3]. In short, unknown systematic errors were formally 'randomized', though by their very nature they offer no statistical feature whatsoever. Beyond that, needless to say, a quantity being constant in time might at most be taken as the realization of a random variable, but hardly as a random variable per se. Yet metrologists got used to assigning postulated distribution densities to unknown systematic errors, supposedly in order to maintain the classical Gaussian approach. However, postulates randomizing unknown systematic errors do not map physical reality; also, measurement results comprise random as well as non-random components, so that probabilities as to the relevance of measured results do not exist.

Admittedly, to the orthodox metrologist this view seems a sacrilege. The exact sciences consider tests of hypothesis and the various kinds of analyses of variance a keystone of rigor. However, due to the ubiquitousness of unknown systematic errors, these classical, otherwise esteemed proceedings drawn from the toolbox of statistics are plainly inapplicable to metrological proceedings, though they were once thought out just for this purpose—the disparity being obviously due to a communication problem between experimenters and statisticians.

After all, metrological assessments should be confined to the localization of the true values of the targeted measurands, as required by traceability. But for this, a thoroughly revised data evaluation is needed, taking unknown systematic errors as non-random quantities.

1.4 True values and traceability

Let the mean value x̄ of a series of repeated measurements x1, x2, x3, … aimed at some measurand x be considered an empirical estimator of the unknown true value x0,

\bar{x} \approx x_0.    (1.2)

By its very nature metrology asks for an interval of the kind

\bar{x} - u_{\bar{x}} \le x_0 \le \bar{x} + u_{\bar{x}}; \quad u_{\bar{x}} \ge 0    (1.3)

where u_x̄ designates the measurement uncertainty. The given interval attempts to localize the true value x0 of the quantity x—true, as has been stated, in terms of the underlying system of physical units. In this capacity the interval expresses the definition and meaning of metrology's key demand of traceability: every measuring result should localize the true value of the measurand. But this is technically feasible only if biases are strictly treated as what they physically are, namely as quantities constant in time, at worst exhausting the intervals rated on the part of the experimenters. From there, the proceeding to randomize biases appears inappropriate. For traceability to happen, the procedure to assess u_x̄ should be robust and reliable:

Traceability relates measuring results to the true values of the notified measurands, whereat 'true' points at the true values of the physical units of the effective measuring system.


Let

z = \varphi(x, y; a, b, c)

specify some physical law with variables x, y and constants a, b, c. If put into practice, mathematics tells us we are asked to insert appropriate sets of true input data. The physical interpretation, by contrast, is more sensitive inasmuch as the data for x, y are known to be flawed and our knowledge of the constants a, b, c is limited. Letting x0, y0 and a0, b0, c0 denote the unknown true values of the input data, we have

z_0 = \varphi(x_0, y_0; a_0, b_0, c_0)    (1.4)

where z0 designates the true value of the quantity z. A set of estimators x̄, ȳ; ā, b̄, c̄ produces an estimator z̄ of z0,

\bar{z} \approx z_0,    (1.5)

where

\bar{z} = \varphi(\bar{x}, \bar{y}; \bar{a}, \bar{b}, \bar{c}).    (1.6)

Traceability asks for

\bar{z} - u_{\bar{z}} \le z_0 \le \bar{z} + u_{\bar{z}}; \quad u_{\bar{z}} \ge 0.    (1.7)

Compared with (1.3), we expect the assessment of the uncertainty u_z̄ to turn out to be a fair bit more intricate.

1.5 Consistency

A key point of metrology relates to the question of whether or not two measuring results, stemming from different laboratories and aiming at one and the same physical quantity, may be considered consistent. Figure 1.2 sketches two measuring results, say, β̄1 ± u_β̄1 and β̄2 ± u_β̄2, where β0 denotes the unknown and inaccessible true value. An overlap of the two uncertainty forks suggests something like a 'necessary condition' for consistency, while a 'necessary and sufficient condition' would ask the cut set of the forks to localize the true value β0. Unfortunately, this latter condition leads us back to the aforementioned dilemma—which, alas, cannot be solved. From there, physical inference has no option but to navigate through the twilight given by the limited level of compliance of physical models with metrological findings. The declarative statement that a decision pro or con a physical model is limited anyway by the measurement uncertainty offers little help. Though, by their very nature, measurement uncertainties cannot be 'exact', the fundamental action experimenters should take is to shape uncertainties following verifiable criteria.

To make physical statements as far-reaching as possible, experimenters strive for ever smaller uncertainties. On the other hand, small uncertainties involve precarious risks: the smaller the uncertainty, the more labile the localization of the respective true value. Hidden numerical distortions within the system of physical constants and physical quantities at large might well affect metrologically critical conclusions. After all, correct theories might be discarded and faulty ones accepted.


Figure 1.2. Estimators β¯1 and β¯2 , true value β0.

1.6 Measuring errors

Metrology knows two kinds of measuring errors: those which are visible and those which are not. While the first ones are induced via random effects and show up in the recorded data, the latter are brought about by systematic causes, being constant in time and leaving no observable consequences. Such unknown systematic errors are caused by imperfect properties of the measuring device and imperfectly adjusted operating conditions.

As to the measuring device, unknown systematic errors are induced in the course of the assemblage. There are alignments to be made, be they optical, mechanical, electrical or otherwise. It is not possible to do these completely perfectly. Rather, there will be smaller or larger residual deviations from the intended exact settings which the experimenter cannot resolve. Similarly, environmental and boundary conditions should be considered to deviate from preset alignments. Also, there may be mechanical or electrical switching-on or switching-off effects, or even varying theoretical approaches to measure one and the same quantity [4]. As such perturbations cannot be brought to zero, they offset the measured data, at that remaining constant in time and unknown with respect to magnitude and sign. Obviously, their bearing is confined to intervals, these being not always but often symmetric to zero. For lack of insight there is no alternative other than to infer their influence from theoretical appraisements. To emphasize:

Unknown systematic errors let measured findings float on undefined levels.

Let us finally address the adjective unknown. Errors are always unknown, otherwise they would be eliminated on the spot. Thus the more concise naming systematic errors should be sufficient. Random errors and systematic errors have nothing in common and should hence be treated independently. Given the data analysis may be based on linearizations, the error propagation splits up on its own into different branches, referring to random and systematic errors, respectively.²

² There are occasionally what are called known systematic errors. In order to avert misconceptions, such quantities could be termed, due to their being known, deviations.

As to random errors, it is suggested that the common practice of tacitly treating empirical variances as if they were more or less the same as theoretical ones be modified. For one thing, theoretical variances are abstract constructs being experimentally inaccessible; for another, they block the view to treating measured data on a sound statistical basis. Indeed, considering a bunch of related random variables jointly distributed enables experimenters to establish the associated empirical variance–covariance matrix so that, when it comes to the propagation of random errors, that matrix leads to confidence intervals according to Student—which, again, requires that random and systematic errors be kept separate. Notably, the proceeding implies a further important step towards traceability.

Figure 1.3. Non-drifting (above) and drifting (below) sequences of repeated measurements.

Although systematic errors are not directly observable, experimenters are well advised to anticipate a perturbation as indicated in the upper part of figure 1.3. Here, the systematic error f has shifted the bulk of measured data as a whole 'upwards', away from the true value x0. At the same time, the experimenter himself cannot make a judgement about any shift at all. In particular, he cannot know whether f has shifted the bulk of data 'upwards' or 'downwards'. More than that, a naive observer might even contest the existence of a shift or preclude it altogether. This, in fact, is the long-term damage of Gauss's exclusion of regular or constant errors via his demand that experimenters get rid of them.

All that experimenters can do is to attempt to keep the systematic errors of the measuring process constant in time. If this applies, the process does not suffer from a 'drift'. Figure 1.3 (bottom) illustrates a drifting measuring device. In what follows, non-drifting experimental set-ups are presupposed throughout. Repeated measurements, as issued by non-drifting devices, scatter randomly about a horizontal line. Given that the width of the scattering remains unaltered during the period the measurements are taken, the experimenter may consider his statistical process even stationary [5]. Statistical stationarity marks the most favorable experimental situation. For this, Eisenhart [6] once coined the vivid term state of statistical control. In a sense, the experimenter may consider this condition an ideal measuring process. Over longer periods, however, even the best measuring device is observed to exhibit some kind of drift, indicating that systematic errors undergo gradual changes.

Let f denote some systematic error. The only feasible approach is to assess f via an interval, say,

-f_s \le f \le f_s; \quad f = \text{const.}; \quad f_s \ge 0,    (1.8)

given that it appears reasonable to assume error margins symmetric to zero. Should this not be maintainable, i.e. should the margins appear asymmetric, they may be made symmetric belatedly by shifting the bulk of the measured data as a whole via an elementary data transformation [7].

Systematic errors present experimenters with a substantial challenge; their assessment asks for scrutiny and ingenuity. Their being non-observable has occasionally misled experimenters into ignoring them altogether. Systematic errors take their effective, however unknown, values before the measurements commence. In contrast, random errors, being due to uncontrollable, irregular, statistically born processes, enter the stage as the measurements begin. Often enough they may be considered normally distributed, or at least approximately so. In what follows we shall assume normality.

Given this is conceded, figure 1.4 depicts the situation: there is a normal probability density p_X(x) of some random variable X. The repeated measurements as produced by the experiment are identified with appropriate realizations of X. This happens while the systematic error remains constant in time, f = const. The parameter μ marks the theoretical scattering center of the repeated measurements. Statisticians term μ the statistical expectation of the random variable X. In any individual measurement the true value x0 of the measurand is obscured by a random error ε and a systematic error f. Hence the fundamental non-Gaussian error equation reads

x = x_0 + \varepsilon + f; \quad -f_s \le f \le f_s; \quad f_s \ge 0, \quad f = \text{const.}

Regarding repeated measurements we have


Figure 1.4. Non-Gaussian error model. Left: the center μ of the normal probability density px(x) differs from the true value x0 by a systematic error f. Right: localization of the true value x0 by an interval μ − fs … μ + fs.

x_l = x_0 + \varepsilon_l + f; \quad l = 1, 2, 3, \ldots
-f_s \le f \le f_s; \quad f_s \ge 0, \quad f = \text{const.}    (1.9)

Here the measurements scatter with respect to the theoretical scattering center μ and not with respect to the true value x0. The position of μ is only given by the sum of the true value x0 and the actual, however unknown, systematic error f,

\mu = x_0 + f; \quad -f_s \le f \le f_s.    (1.10)

As f is unknown, the scattering center μ turns out to be an artefact. Combining (1.9) with (1.10) results in the classical Gaussian error equation

x_l = \mu + \varepsilon_l; \quad l = 1, \ldots, n    (1.11)

which, by its very nature, precludes any hint to the possible existence of a systematic error. The statement

x_0 \neq \mu    (1.12)

makes all the difference between the classical error calculus and modern endeavors. As a result we note:

Traceability cannot be transferred via artefacts but only via true values.

Obviously, stationarity of the measuring device is the first requirement in order to separate random and systematic errors, via linearization, into two categorically different branches.
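The role of the artefact μ = x0 + f is easy to make tangible numerically. The following sketch is not part of the book; it merely simulates the error equation (1.9) with assumed values x0 = 5.00, f = 0.03 and σ = 0.02 and shows that the arithmetic mean settles at μ, not at the true value x0.

```python
import numpy as np

rng = np.random.default_rng(1)

x0 = 5.00      # true value of the measurand (assumed for the simulation)
f = 0.03       # unknown systematic error, constant in time (assumed)
sigma = 0.02   # standard deviation of the random errors (assumed)
n = 100        # number of repeated measurements

# repeated measurements x_l = x0 + eps_l + f, cf. (1.9)
x = x0 + rng.normal(0.0, sigma, n) + f

print(f"arithmetic mean       : {x.mean():.4f}")
print(f"scattering center mu  : {x0 + f:.4f}   (= x0 + f, an artefact)")
print(f"true value x0         : {x0:.4f}")
# The mean approaches mu = x0 + f; the constant offset f leaves no trace in the scatter.
```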


Let us address the common approach of treating systematic errors on a random basis [2]. According to (1.8), a postulated probability density should be symmetric to zero. By necessity, such a density forces the expectation of the systematic error to zero, thus letting the true value x0 of the measurand and the center of scattering μ of the random errors coincide. Meanwhile, this implication contradicts physical reality and should thus be discarded: a postulated density feigns an experiment which the experimenter cannot carry out, as it does not exist, and which he hence cannot evaluate.

A first seminal impetus was given by Eisenhart in the early 1950s [6]—paradoxically without arousing lasting effects. Nearly 30 years later, in the late 1970s, I revisited and extended Eisenhart's approach and presented the result at a seminar held at the Physikalisch-Technische Bundesanstalt Braunschweig [8]. Noticeably enough, in the wake of this presentation an international survey was started in order to disclose how unknown systematic errors were treated elsewhere. Insofar as the randomization of unknown systematic errors contradicts physical reality, it should be recalled that a faulty physical model is scarcely likely to issue a sensible mathematical formalism. In order not to let Eisenhart's pioneering feat sink into oblivion de novo, I expanded my by then merely tentative approach into an elaborate, self-contained error formalism, being well aware that this might elicit enduring controversies—but, in view of possible physical consequences, fascinating prospects as well. Meanwhile, whether or not my arguments might have caused a schism within the world of metrology remained undiscernible. Officially, the traditional status quo continued to persist.

In what follows, the standard procedures of data evaluation will be reformulated so as to explicitly embed the true values of the involved physical quantities. In particular, the associated procedures to assess measurement uncertainties will be recast. To begin with, I shall address the propagation of random errors alone, in view of the as yet uncommon technique of referring to confidence intervals according to Student with respect to a bunch of measurands in one go. Here, the at times disregarded empirical covariances play an exciting role. Subject to model-related conditions, the proceeding appears to render traceability attainable and might so reset physical quantities deemed already fixed—if, to what extent, and with what implications remains to be seen.

References

[1] The National Institute of Standards and Technology (NIST) Reference on Constants, Units and Uncertainty (United States Department of Commerce)
[2] Wagner S 1969 Zur Behandlung systematischer Fehler bei der Angabe von Messunsicherheiten (On the treatment of systematic errors in assessing measurement uncertainties) PTB-Mitt. 79 343–7
[3] Guide to the Expression of Uncertainty in Measurement (Bureau International des Poids et Mesures)
[4] Grabe M and Cordes H 1986 Messung der Neigungsstreuung an rauhen Oberflächen (Measurement of the variance of slopes on rough surfaces) tm Technisches Messen 1 40–2
[5] Lee Y W 1970 Statistical Theory of Communication (New York: Wiley)
[6] Eisenhart C 1952 The reliability of measured values – Part I: fundamental concepts Photogramm. Eng. 18 543–61
[7] Grabe M 2014 Measurement Uncertainties in Science and Technology 2nd edn (Berlin: Springer) 401pp ISBN 978-3-319-04887-1
[8] Grabe M 1978 Über die Verknüpfung zufälliger und abgeschätzter systematischer Fehler (On the combination of random and estimated systematic errors) Seminar über die Angabe der Meßunsicherheit, 20–21 February 1978, PTB Braunschweig (Seminar on the statement of the measurement uncertainty)



Chapter 2 Some statistics

As to the propagation of random errors the toolbox of statistics offers a beneficial change of paradigm.

2.1 Measurands and random variables

Random variables will be denoted by upper case, their realizations by lower case letters. Metrology identifies the sequence of repeated measurements, say, x1, x2, x3, … of some measurand x with the successive realizations of an appropriate random variable X [1, 2].

2.2 Fisher's density

Given a set of random variables, each being normally distributed, it is suggested to consider the collective as jointly normal, which means nothing other than to evoke the multidimensional normal model. In the case of two measurands the latter issues Fisher's density, in the case of more than two measurands Wishart's density. Both densities stand out due to their being determined by arithmetic means, empirical variances and covariances, and the associated theoretically given expectations. As is evident, the notion implies that each random variable disposes of the same number, say n, of realizations and, correspondingly, that each of the physical quantities be measured n times. Let us call this demand well-defined measuring conditions [3–5]. This surprisingly simple assumption will guide us to an uncommon, however beneficial, propagation of random errors via confidence intervals according to Student.

Let there be just two measurands. The empirical estimators to be addressed are the arithmetic means [6]

\bar{x} = \frac{1}{n}\sum_{l=1}^{n} x_l, \qquad \bar{y} = \frac{1}{n}\sum_{l=1}^{n} y_l,    (2.1)

the empirical variances


s_x^2 = \frac{1}{n-1}\sum_{l=1}^{n}(x_l - \bar{x})^2, \qquad s_y^2 = \frac{1}{n-1}\sum_{l=1}^{n}(y_l - \bar{y})^2    (2.2)

and the empirical covariance

s_{xy} = \frac{1}{n-1}\sum_{l=1}^{n}(x_l - \bar{x})(y_l - \bar{y})    (2.3)

where metrology asks for the unbiased versions of sx2, s y2, sxy as their casually deployed biased counterparts

s_x^2 = \frac{1}{n}\sum_{l=1}^{n}(x_l - \bar{x})^2, \qquad s_y^2 = \frac{1}{n}\sum_{l=1}^{n}(y_l - \bar{y})^2, \qquad s_{xy} = \frac{1}{n}\sum_{l=1}^{n}(x_l - \bar{x})(y_l - \bar{y})

appear less advantageous. Fisher, [7, 8], has specified the joint statistical behavior of the associated random variables X¯ , Y¯ , Sx2, Sxy, S y2 ,

p_{\bar{X},\bar{Y},S_x^2,S_{xy},S_y^2}(\bar{x}, \bar{y}, s_x^2, s_{xy}, s_y^2) = p_{1;\,\bar{X},\bar{Y}}(\bar{x}, \bar{y}) \times p_{2;\,S_x^2,S_{xy},S_y^2}(s_x^2, s_{xy}, s_y^2).    (2.4)

As noted, the density factorizes into one of the arithmetic means and one of the empirical variances and the empirical covariance, the latter being commonly termed empirical moments of second order. With regard to metrology, there is something peculiar about the density

p_2(s_x^2, s_{xy}, s_y^2) = \frac{(n-1)^{n-1}}{4\pi\,\Gamma(n-2)\,|\sigma|^{(n-1)/2}} \left[s_x^2 s_y^2 - s_{xy}^2\right]^{(n-4)/2} \exp\!\left(-\frac{n-1}{2|\sigma|}\, h(s_x^2, s_{xy}, s_y^2)\right);    (2.5)

h(s_x^2, s_{xy}, s_y^2) = \sigma_y^2 s_x^2 - 2\sigma_{xy} s_{xy} + \sigma_x^2 s_y^2.

Given the random variables X and Y are independent, the theoretical covariance σxy vanishes [2]. Notwithstanding that, p_2(s_x^2, s_{xy}, s_y^2) does not factorize. But this means experimenters should bring to bear the empirical variances s_x^2, s_y^2 jointly with the empirical covariance s_xy, even though the theoretical counterpart of the latter, σxy, vanishes. Hence, the two-dimensional normal model asks experimenters not to dismiss the empirical covariance, though this quantity might seem superfluous insofar as its expectation turns out to be zero. As a matter of course, given X and Y are dependent, the empirical covariance is essential at any rate.


Let us address two examples emphasizing the purpose of empirical covariances: a generalized test of hypothesis and the proceeding to transfer Student’s confidence interval to the case of more than one measurand.
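Under well-defined measuring conditions the empirical moments (2.1)–(2.3) follow directly from the two data series. A minimal Python sketch, with simulated numbers standing in for real repeated measurements:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20                                  # the same number of repeats for each measurand

# simulated repeated measurements of two measurands (illustrative values only)
x = 10.0 + rng.normal(0.0, 0.05, n)
y = 4.0 + rng.normal(0.0, 0.03, n)

x_bar, y_bar = x.mean(), y.mean()                       # arithmetic means, (2.1)
s_xx = ((x - x_bar) ** 2).sum() / (n - 1)               # unbiased empirical variances, (2.2)
s_yy = ((y - y_bar) ** 2).sum() / (n - 1)
s_xy = ((x - x_bar) * (y - y_bar)).sum() / (n - 1)      # empirical covariance, (2.3)

# np.cov uses the same 1/(n-1) convention and returns the complete matrix, cf. (2.6)
print(s_xx, s_yy, s_xy)
print(np.cov(x, y))
```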

2.3 Confidence intervals

Let the two measurands x, y be linearly concatenated via some constants a, b,

z = ax + by.

We take the associated random variables X and Y to be jointly normally distributed [2]. Whether or not X and Y are dependent, the random variable Z = aX + bY is normal [3, 9]. Considering the sequence of repeated measurements

z_l = a x_l + b y_l; \quad l = 1, 2, \ldots, n

the arithmetic mean and the empirical variance read

\bar{z} = \frac{1}{n}\sum_{l=1}^{n} z_l, \qquad s_z^2 = \frac{1}{n-1}\sum_{l=1}^{n}(z_l - \bar{z})^2.

Metrology understands that the measuring device relaxes between successive repeated measurements. Hence the sequence zl; l = 1, 2, 3,… is taken to be independent. In view of

z¯ = ax¯ + by¯

and

zl − z¯ = a(xl − x¯ ) + b(yl − y¯ )

we have

s_z^2 = a^2 s_{xx} + 2ab\, s_{xy} + b^2 s_{yy}

where s_xx ≡ s_x² and s_yy ≡ s_y² formally rewrite the empirical variances, while s_xy ≡ s_yx renders the usual notation of the empirical covariance. The complete empirical variance–covariance matrix of the input data reads

\begin{pmatrix} s_{xx} & s_{xy} \\ s_{yx} & s_{yy} \end{pmatrix}.    (2.6)

The theoretically given expectation of the random variable Z̄,

\mu_z = a\mu_x + b\mu_y,

guides us to Student's T,

T(\nu) = \frac{\bar{Z} - \mu_z}{S_z/\sqrt{n}}; \qquad \nu = n - 1,    (2.7)

being valid whether or not X and Y are dependent. Basically, this result appears extendable to arbitrarily many measurands, given the empirical variance S_z² is kept complete with regard to the implied empirical covariances [3, 5].


After all, Student's confidence interval localizing the artefact μz reads

\bar{z} - t_P \frac{s_z}{\sqrt{n}} \;\le\; \mu_z \;\le\; \bar{z} + t_P \frac{s_z}{\sqrt{n}}    (2.8)

where the index P in t_P denotes the degree of confidence [6].
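As a sketch of (2.7) and (2.8): for a linear combination z = ax + by the interval follows from z̄, the complete empirical variance s_z² and the Student quantile t_P, which scipy.stats.t supplies. The data and the choice a = 2, b = −1 below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import t

def student_interval(x, y, a, b, P=0.95):
    """Confidence interval z_bar -/+ t_P s_z / sqrt(n) localizing mu_z, cf. (2.8)."""
    n = len(x)
    assert len(y) == n, "well-defined measuring conditions: equal numbers of repeats"
    z = a * x + b * y                      # z_l = a x_l + b y_l
    s_z = z.std(ddof=1)                    # equals sqrt(a^2 s_xx + 2ab s_xy + b^2 s_yy)
    t_P = t.ppf(0.5 + P / 2, df=n - 1)     # two-sided Student quantile, nu = n - 1
    half = t_P * s_z / np.sqrt(n)
    return z.mean() - half, z.mean() + half

rng = np.random.default_rng(0)
x = 10.0 + rng.normal(0.0, 0.05, 20)
y = 4.0 + rng.normal(0.0, 0.03, 20)
print(student_interval(x, y, a=2.0, b=-1.0))
```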

2.4 Non-uniqueness of the empirical covariance

Given that the random variables X and Y are independent, the series of pairings

(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)

or

(x_1, y_2), (x_2, y_n), \ldots, (x_n, y_1)

are equally well admissible. While permutations do not affect the empirical variances, they alter the empirical covariance. In general, each particular pairing yields a numerically different empirical covariance thus producing a confidence interval of related length. On that note, one could even purposefully manipulate the pairing of data in order to produce an empirical covariance minimizing the length of the actual confidence interval. Nevertheless, the distribution density (2.5) covers likewise any such interval. Incidentally, the lengths of confidence intervals are never unique as they turn out to be sample-dependent. Should, however, the experimenter surmise a dependence between the measurands, the pairing of the data as established on the part of the measuring device is to be considered statistically significant, excluding subsequent permutations.
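The non-uniqueness is easy to demonstrate: permuting one of two independently generated series leaves the empirical variances untouched but changes the empirical covariance and with it the length s_z of the resulting interval. A small sketch with purely illustrative data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 15
x = 1.0 + rng.normal(0.0, 0.02, n)       # independent series aiming at two measurands
y = 2.0 + rng.normal(0.0, 0.04, n)

def s_xy(u, v):
    """Unbiased empirical covariance, cf. (2.3)."""
    return ((u - u.mean()) * (v - v.mean())).sum() / (len(u) - 1)

print("variances, pairing-independent:", x.var(ddof=1), y.var(ddof=1))
for _ in range(3):
    y_perm = rng.permutation(y)           # a different, equally admissible pairing
    s_z2 = x.var(ddof=1) - 2 * s_xy(x, y_perm) + y_perm.var(ddof=1)   # for z = x - y
    print("pairing-dependent s_xy and s_z:", s_xy(x, y_perm), np.sqrt(s_z2))
```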

2.5 Breakdown of statistical inference

Basically, metrological data are always charged with systematic errors. Against this background it is natural to enquire about the mechanisms of statistical inference. To recall, the addressed procedures attempt to discover certain metrological properties concealed within measured data. Hypothesis testing and analysis of variance, for instance, claim to judge whether
• two arithmetic means, aiming at one and the same physical quantity, may be assumed compatible
• a group of arithmetic means, stemming from different laboratories and obtained by means of different measuring devices, may be considered compatible.

As we know, the procedures of statistical inference consider random errors only. Consequently, in view of the properties of measured data as outlined in figure 1.4 and equation (1.9), those methods clearly break down. To make the point as clear as necessary: the tools of statistical inference are in no way faulty; however, due to the ubiquitousness of unknown systematic errors, they are inapplicable to metrological data—though, paradoxically, they were designed just for this purpose.


Figure 2.1. The difference between two arithmetic means, x¯ and y¯ , aiming at a common true value z0.

Hence, the present metrological situation appears somewhat inconsistent insofar as those classical procedures are obviously still in use—a verdict heralding a drama. Nevertheless, simple tests of hypothesis may be cured; the analysis of variance, and much less its sophisticated varieties, certainly cannot.

2.6 Curing hypothesis testing

Let us attempt to cure the classical test of hypothesis as addressed above. To this end we tidy up the traditional approach in two respects: in the first place, there is no need to distinguish between 'large' and 'small' samples, as we ask for equal numbers of repeated measurements in order to abandon the unfortunate misdoing of dismissing the empirical covariance. Furthermore, we have to bring to bear, as a matter of course, the unknown systematic errors.

Given two arithmetic means x̄ and ȳ, stemming from different laboratories and targeting the same true value x0 ≡ y0 of one and the same measuring object, by how much may the means differ so as still to be recognized as consistent? Considering

z(x, y) = x - y,    (2.9)

the error equations

z_l(x_l, y_l) = x_0 - y_0 + (x_l - \mu_x) - (y_l - \mu_y) + f_x - f_y; \quad l = 1, 2, \ldots, n
\bar{z}(\bar{x}, \bar{y}) = x_0 - y_0 + (\bar{x} - \mu_x) - (\bar{y} - \mu_y) + f_x - f_y


produce

z_l - \bar{z} = (x_l - \bar{x}) - (y_l - \bar{y}); \quad l = 1, 2, \ldots, n.    (2.10)

As we have

\bar{z} = \frac{1}{n}\sum_{l=1}^{n} z_l    (2.11)

and

s_z^2 = s_x^2 - 2 s_{xy} + s_y^2,    (2.12)

there is, with respect to

\mu_z = (x_0 + f_x) - (y_0 + f_y) = \mu_x - \mu_y,    (2.13)

a Student's T of degrees of freedom n − 1,

T(n-1) = \frac{(\bar{X} - \bar{Y}) - (\mu_x - \mu_y)}{S_z/\sqrt{n}},

so that

(\mu_x - \mu_y) - t_P\,\frac{s_z}{\sqrt{n}} \;\le\; (\bar{x} - \bar{y}) \;\le\; (\mu_x - \mu_y) + t_P\,\frac{s_z}{\sqrt{n}}.    (2.14)

Inserting (2.13) while assuming x0 = y0,

μx − μy = ( fx − fy ), we find

(f_x - f_y) - t_P\,\frac{s_z}{\sqrt{n}} \;\le\; (\bar{x} - \bar{y}) \;\le\; (f_x - f_y) + t_P\,\frac{s_z}{\sqrt{n}}    (2.15)
-f_{s,x} \le f_x \le f_{s,x}; \quad -f_{s,y} \le f_y \le f_{s,y}.

A necessary condition for this to happen is

|\bar{x} - \bar{y}| \;\le\; \frac{t_P(n-1)}{\sqrt{n}}\sqrt{s_x^2 - 2 s_{xy} + s_y^2} + (f_{s,x} + f_{s,y}).    (2.16)

Compare this with classical proceedings, e.g. [6]. Here large samples are distinguished from small ones, corresponding to the cases of postulated theoretical variances and realistically given empirical variances, respectively. In the latter case, under the pretext of generality, different numbers of repeated measurements are notoriously imputed so that, due to formal necessity, the empirical covariance remains undefined. In particular, unknown systematic errors do not enter, though they are known to burden the measurements.


Finally, in view of

-s_x s_y < s_{xy} < s_x s_y

we observe

\sqrt{s_x^2 - 2 s_{xy} + s_y^2} \;\le\; \sqrt{s_x^2 + 2 s_x s_y + s_y^2} = s_x + s_y.    (2.17)

Hence, a more robust proceeding reads

|\bar{x} - \bar{y}| \;\le\; \left(\frac{t_P(n-1)}{\sqrt{n}}\, s_x + f_{s,x}\right) + \left(\frac{t_P(n-1)}{\sqrt{n}}\, s_y + f_{s,y}\right) = u_{\bar{x}} + u_{\bar{y}}.    (2.18)

The analysis of variance, by contrast, cannot be cured. This has been discussed in [3].
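A sketch of the cured consistency check. The function below evaluates the necessary condition (2.16) and the more robust bound (2.18); the error bounds fs_x, fs_y of the unknown systematic errors have to be appraised by the experimenters, and all numbers used here are illustrative assumptions.

```python
import numpy as np
from scipy.stats import t

def consistent(x, y, fs_x, fs_y, P=0.95):
    """Return the checks (2.16) and (2.18) for two series of equal length."""
    n = len(x)
    assert len(y) == n
    t_P = t.ppf(0.5 + P / 2, df=n - 1)
    s_z = np.sqrt(x.var(ddof=1) - 2 * np.cov(x, y)[0, 1] + y.var(ddof=1))
    bound = t_P / np.sqrt(n) * s_z + (fs_x + fs_y)                  # (2.16)
    robust = (t_P / np.sqrt(n) * x.std(ddof=1) + fs_x
              + t_P / np.sqrt(n) * y.std(ddof=1) + fs_y)            # (2.18)
    diff = abs(x.mean() - y.mean())
    return diff <= bound, diff <= robust

rng = np.random.default_rng(5)
x = 100.002 + rng.normal(0.0, 0.004, 12)    # laboratory 1, shifted by its systematic error
y = 99.998 + rng.normal(0.0, 0.003, 12)     # laboratory 2
print(consistent(x, y, fs_x=0.005, fs_y=0.005))
```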

References

[1] Papoulis A 1965 Probability, Random Variables and Stochastic Processes (Tokyo: McGraw-Hill Kogakusha Ltd)
[2] Beckmann P 1968 Elements of Applied Probability Theory (New York: Harcourt, Brace & World Inc.)
[3] Grabe M 2014 Measurement Uncertainties in Science and Technology 2nd edn (Berlin: Springer) 401pp ISBN 978-3-319-04887-1
[4] Grabe M 2010 Generalized Gaussian Error Calculus (Berlin: Springer) 301pp ISBN 978-3-642-03304-9
[5] Grabe M 2011 Grundriss der Generalisierten Gauß'schen Fehlerrechnung (Plot of the Generalized Gaussian Error Calculus) (Berlin: Springer) 191pp ISBN 978-3-642-17821-4
[6] Chao L L 1974 Statistics: Methods and Analyses (Tokyo: McGraw-Hill Kogakusha Ltd)
[7] Cramér H 1954 Mathematical Methods of Statistics (Princeton, NJ: Princeton University Press)
[8] Fisz M 1978 Wahrscheinlichkeitsrechnung und mathematische Statistik (Probability Theory and Mathematical Statistics) (Berlin: VEB Deutscher Verlag der Wissenschaften)
[9] Graybill F A 1961 An Introduction to Linear Statistical Models (New York: McGraw-Hill)



Chapter 3 Measurement uncertainties

Measurement uncertainties per se are not helpful. If assessed inadequately, they may be misleading and thus cause damage.

3.1 One measurand

The arithmetic mean, the experimenter's most commonly deployed estimator, may be derived from considerations rooted in statistics. On the other hand, it may also be taken as issued by a geometrical interpretation of the method of least squares, thence asking for no preconditions at all; see chapter 4. Regarding some measurand x, the arithmetic mean of n repeated measurements x_l; l = 1, 2, …, n reads

\bar{x} = \frac{1}{n}\sum_{l=1}^{n} x_l.    (3.1)

According to (1.9), the decomposition of the x_l issues

x_l = x_0 + \varepsilon_l + f_x; \quad -f_{s,x} \le f_x \le f_{s,x}; \quad f_{s,x} \ge 0, \quad f_x = \text{const.}    (3.2)

Inserting (1.10),

\mu_x = x_0 + f_x; \quad -f_{s,x} \le f_x \le f_{s,x},    (3.3)

we arrive at

x_l = x_0 + (x_l - \mu_x) + f_x; \quad l = 1, \ldots, n.    (3.4)

Summing up and dividing by n yields

\bar{x} = x_0 + (\bar{x} - \mu_x) + f_x.    (3.5)

The two identities (3.4) and (3.5) will prove beneficial to the recast of the error calculus. The uncertainty u_x̄ of x̄ aims at the true value x0, as has been stated in (1.3),

\bar{x} - u_{\bar{x}} \le x_0 \le \bar{x} + u_{\bar{x}}; \quad u_{\bar{x}} \ge 0.

We intend to treat the random errors and the systematic error separately. For this to happen we hide fx for the moment via (3.3) so that the identities (3.4) and (3.5) change into

x_l = \mu_x + (x_l - \mu_x)    (3.6)

and

\bar{x} = \mu_x + (\bar{x} - \mu_x),    (3.7)

respectively. Obviously, (3.7) suggests a Student's T of degrees of freedom n − 1 [1],

T(n-1) = \frac{\bar{x} - \mu_x}{S_x/\sqrt{n}}.

Here the difference between (3.6) and (3.7),

x_l - \bar{x},

issues a suitable realization s_x of the random variable S_x via

s_x^2 = \frac{1}{n-1}\sum_{l=1}^{n}(x_l - \bar{x})^2.    (3.8)

After all, Student's confidence interval

\bar{x} - t_P\frac{s_x}{\sqrt{n}} \;\le\; \mu_x \;\le\; \bar{x} + t_P\frac{s_x}{\sqrt{n}},    (3.9)

localizes the artefact μx with probability P. Figure 3.1 (right) depicts the metrological situation.

Figure 3.1. Left: unknown difference between the artefact μx and the true value x0. Right: confidence interval localizing μx with respect to the arithmetic mean x¯ .


Though the artefact μx per se is still not known, it has at least been localized. And just this gives us the means to target the true value x0. With a view to (3.3) and figure 3.1 (left) we find

\bar{x} - t_P\frac{s_x}{\sqrt{n}} \;\le\; x_0 + f_x \;\le\; \bar{x} + t_P\frac{s_x}{\sqrt{n}}; \qquad -f_{s,x} \le f_x \le f_{s,x}.

As nothing other than the assumption of a worst-case scenario ensures traceability, we observe

\bar{x} - t_P\frac{s_x}{\sqrt{n}} - f_{s,x} \;\le\; x_0 \;\le\; \bar{x} + t_P\frac{s_x}{\sqrt{n}} + f_{s,x},

say,

u_{\bar{x}} = t_P\frac{s_x}{\sqrt{n}} + f_{s,x}    (3.10)

\bar{x} - u_{\bar{x}} \le x_0 \le \bar{x} + u_{\bar{x}}; \quad u_{\bar{x}} \ge 0.

Figure 3.2 visualizes the result. Random errors may more often than not be considered at least approximately normally distributed, whereat the extreme outer parts of the theoretical distribution density N(μx, σx²) seem to be lacking. Hence, relating tP to, say, P = 95% and submitting fx to a worst-case estimation implies that experimenters are entitled to consider the interval (3.10) virtually or quasi-safe. While (3.9) is a probabilistic statement, (3.10) is not. From there, it is out of the question to assign a probability to the final result. Regarding (3.3), the actual position of the artefact μx, as fixed by the unknown systematic error, remains of secondary importance.

Figure 3.2. Uncertainty of the arithmetic mean with respect to the true value x0.
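In code, the worst-case combination (3.10) is a one-liner once the Student quantile is at hand. A minimal sketch; the bound fs_x of the unknown systematic error is an input the experimenter has to appraise, and the simulated data are illustrative only.

```python
import numpy as np
from scipy.stats import t

def overall_uncertainty(x, fs_x, P=0.95):
    """u_x = t_P s_x / sqrt(n) + f_s,x, cf. (3.10)."""
    n = len(x)
    t_P = t.ppf(0.5 + P / 2, df=n - 1)
    return t_P * x.std(ddof=1) / np.sqrt(n) + fs_x

rng = np.random.default_rng(2)
x = 5.03 + rng.normal(0.0, 0.02, 25)        # repeated measurements (simulated)
u = overall_uncertainty(x, fs_x=0.03)
print(f"x_bar = {x.mean():.4f},  u = {u:.4f}")
print(f"localization: {x.mean() - u:.4f} <= x0 <= {x.mean() + u:.4f}")
```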


To emphasize, the proceeding supposes that the true value x0 exists. Otherwise there would be neither a metrological target value nor a reference point for the measurement uncertainty. What so far may look like a somewhat redundant proceeding will soon turn into strict methodology, proving extendable all over the new error calculus.

Remark: consider a group of arithmetic means, stemming from different laboratories and notionally supposed to target one and the same measurand. Assume the means are to be averaged in order to produce what is called a grand mean. Obviously, the averaging makes sense only if the incoming individual means actually feature one and the same true value. Otherwise the uncertainty of the grand mean would be misleading. We are sure the uncertainty of the grand mean should be due to measuring errors and not to deviant true values [2].

Example: mass metrology

Let us return to (1.1), to metrology's starting point in the case of mass metrology,

m_0 = N_0 + x_0,

and assume n repeated measurements x_l; l = 1, …, n in order to link the working standard of unknown true mass m0 to the prototype of unknown true mass N0,

m_l = N_0 + x_l; \quad l = 1, \ldots, n.    (3.11)

In view of

x_l = x_0 + (x_l - \mu_x) + f_x; \quad -f_{s,x} \le f_x \le f_{s,x}; \quad f_{s,x} \ge 0, \quad f_x = \text{const.},

where f_x denotes the systematic error of the weighing, should there be any,¹ we have

m_l = N_0 + x_0 + (x_l - \mu_x) + f_x; \quad l = 1, \ldots, n.

Further, averaging over l yields

\bar{x} = \frac{1}{n}\sum_{l=1}^{n} x_l = x_0 + (\bar{x} - \mu_x) + f_x.

With this the mean of (3.11),

\bar{m} = N_0 + \bar{x},    (3.12)

changes into

\bar{m} = N_0 + x_0 + (\bar{x} - \mu_x) + f_x.

Traceability asks us to let the sum of N0 and x0 define the true mass m0. Hence there are the two basic relationships

¹ For a prototype of silicon and a working standard of steel the buoyancy of air needs to be considered. As the density of air is known only with limited accuracy, the buoyancy of air causes an additional unknown systematic error. Alternatively, the link-up may take place in high vacuum.


m_l = m_0 + (x_l - \mu_x) + f_x; \quad l = 1, \ldots, n
\bar{m} = m_0 + (\bar{x} - \mu_x) + f_x.    (3.13)

Though (3.12) looks fine, we cannot state m̄, as N0 is inaccessible. Also, we have to assess the uncertainty u_m̄ in order to provide traceability,

\bar{m} - u_{\bar{m}} \le m_0 \le \bar{m} + u_{\bar{m}}.    (3.14)

To this end we rewrite (3.13) so that the scattering center

\mu_{\bar{m}} = m_0 + f_x; \quad -f_{s,x} \le f_x \le f_{s,x}    (3.15)

of the ml formally hides the systematic error fx. Hence,

m_l = \mu_{\bar{m}} + (x_l - \mu_x); \quad l = 1, \ldots, n
\bar{m} = \mu_{\bar{m}} + (\bar{x} - \mu_x).

Obviously, the second equation reminds us of a Student's T,

T(n-1) = \frac{\bar{m} - \mu_{\bar{m}}}{S_x/\sqrt{n}},    (3.16)

whereat we take a realization of the random variable S_x from the difference

m_l - \bar{m} = x_l - \bar{x}; \quad l = 1, \ldots, n

via

s_{\bar{m}}^2 = s_x^2 = \frac{1}{n-1}\sum_{l=1}^{n}(x_l - \bar{x})^2.

As of now Student’s T assists us in localizing the scattering center μ m¯ through a confidence interval with probability P,

\bar{m} - \frac{t_P(n-1)}{\sqrt{n}}\, s_x \;\le\; \mu_{\bar{m}} \;\le\; \bar{m} + \frac{t_P(n-1)}{\sqrt{n}}\, s_x.    (3.17)

Inserting (3.15) we observe

\bar{m} - \frac{t_P(n-1)}{\sqrt{n}}\, s_x \;\le\; m_0 + f_x \;\le\; \bar{m} + \frac{t_P(n-1)}{\sqrt{n}}\, s_x; \qquad -f_{s,x} \le f_x \le f_{s,x}.

Hence, the provisional overall uncertainty reads

u_{\bar{x}} = t_P(n-1)\,\frac{s_x}{\sqrt{n}} + f_{s,x}.    (3.18)

Still we cannot quote the mass m¯ as given in (3.12), as the true mass N0 is unknown. All we dispose of is


N = N_0 + f_N; \qquad -f_{s,N} \le f_N \le f_{s,N}.

The only way out is to replace the true mass N0 with the appointed mass N = 1 kg, putting instead of (3.12)

\bar{m} = N + \bar{x}    (3.19)

and to include the intrinsic systematic error f_N within the uncertainty u_m̄ of m̄. Hence traceability is achieved via

u_{\bar{m}} = t_P(n-1)\,\frac{s_x}{\sqrt{n}} + f_{s,x} + f_{s,N}    (3.20)

\bar{m} - u_{\bar{m}} \le m_0 \le \bar{m} + u_{\bar{m}}.
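The link-up of the working standard condenses into a few lines. The sketch below implements (3.19) and (3.20); the comparator readings as well as the bounds fs_x (weighing) and fs_N (primary standard) are illustrative assumptions, not recommended values.

```python
import numpy as np
from scipy.stats import t

N = 1.0                     # appointed nominal mass of the prototype, in kg
fs_N = 50e-9                # bound of the prototype's systematic error, in kg (assumed)
fs_x = 20e-9                # bound of the weighing's systematic error, in kg (assumed)

rng = np.random.default_rng(4)
x = rng.normal(12.0e-6, 0.5e-6, 10)   # comparator indications x_l, in kg (simulated)

n = len(x)
t_P = t.ppf(0.975, df=n - 1)

m_bar = N + x.mean()                                   # (3.19)
u_m = t_P * x.std(ddof=1) / np.sqrt(n) + fs_x + fs_N   # (3.20)
print(f"m_bar = {m_bar:.9f} kg,  u_m = {u_m:.9f} kg")
```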

3.2 Two and more measurands

High-level metrology deals with relative measurement uncertainties of the order of 10⁻⁶ and less. From there it is sensible to linearize functional relationships when propagating measurement errors. This measure facilitates the formalism notably, as it endorses the option to keep random and systematic errors apart. Consider a relationship ϕ(x, y) and measured results

\bar{x} - u_{\bar{x}} \le x_0 \le \bar{x} + u_{\bar{x}}, \qquad \bar{y} - u_{\bar{y}} \le y_0 \le \bar{y} + u_{\bar{y}}    (3.21)

localizing the true values x0 and y0. In order to dispose of well-defined measuring conditions, the arithmetic means

\bar{x} = \frac{1}{n}\sum_{l=1}^{n} x_l, \qquad \bar{y} = \frac{1}{n}\sum_{l=1}^{n} y_l

are supposed to comprise the same number n of repeated measurements. Denoting by ϕ0 ≡ ϕ(x0, y0) the true value of ϕ(x, y) at x0, y0, we are looking for an uncertainty u_ϕ̄ satisfying

\phi(\bar{x}, \bar{y}) - u_{\bar{\phi}} \;\le\; \phi(x_0, y_0) \;\le\; \phi(\bar{x}, \bar{y}) + u_{\bar{\phi}}.    (3.22)

Provided the uncertainties u_x̄, u_ȳ are sufficiently small and the function ϕ(x, y) behaves well enough, we may expand the ϕ(x_l, y_l); l = 1, 2, …, n throughout a neighborhood of x0, y0, considering linear terms only,

\phi(x_l, y_l) = \phi(x_0, y_0) + \left.\frac{\partial\phi}{\partial x}\right|_{x_0, y_0}(x_l - x_0) + \left.\frac{\partial\phi}{\partial y}\right|_{x_0, y_0}(y_l - y_0) + \cdots

Approximating the derivatives at x0, y0 by derivatives at x̄, ȳ, we have, abbreviating the notation,

\phi(x_l, y_l) = \phi(x_0, y_0) + \frac{\partial\phi}{\partial\bar{x}}(x_l - x_0) + \frac{\partial\phi}{\partial\bar{y}}(y_l - y_0) + \cdots


The error equations

x_l = x_0 + (x_l - \mu_x) + f_x, \qquad y_l = y_0 + (y_l - \mu_y) + f_y

lead us to

\phi(x_l, y_l) = \phi(x_0, y_0) + \left[\frac{\partial\phi}{\partial\bar{x}}(x_l - \mu_x) + \frac{\partial\phi}{\partial\bar{y}}(y_l - \mu_y)\right] + \left[\frac{\partial\phi}{\partial\bar{x}} f_x + \frac{\partial\phi}{\partial\bar{y}} f_y\right]    (3.23)

where, for the sake of lucidity, the truncated series expansion has been cast into an equality which, as a matter of course, is formally incorrect. Repeating the procedure with respect to

\bar{x} = x_0 + (\bar{x} - \mu_x) + f_x, \qquad \bar{y} = y_0 + (\bar{y} - \mu_y) + f_y

we find analogously

\phi(\bar{x}, \bar{y}) = \phi(x_0, y_0) + \left[\frac{\partial\phi}{\partial\bar{x}}(\bar{x} - \mu_x) + \frac{\partial\phi}{\partial\bar{y}}(\bar{y} - \mu_y)\right] + \left[\frac{\partial\phi}{\partial\bar{x}} f_x + \frac{\partial\phi}{\partial\bar{y}} f_y\right].    (3.24)

Again, the center of scattering

\mu_\phi = \phi(x_0, y_0) + \left[\frac{\partial\phi}{\partial\bar{x}} f_x + \frac{\partial\phi}{\partial\bar{y}} f_y\right]    (3.25)

formally hides the systematic errors f_x, f_y. Also, due to their being unknown, they render μϕ an artefact. Notably, we retrieve (3.24) from (3.23) via

\frac{1}{n}\sum_{l=1}^{n} \phi(x_l, y_l) = \phi(\bar{x}, \bar{y}).    (3.26)

In view of the assumptions made, we are entitled to consider the ϕ(x_l, y_l) as given in (3.23) the realizations of a random variable ϕ(X, Y). Reading the input data as normally distributed, ϕ(X, Y) is also normal. Moreover, as metrology considers succeeding realizations

x_1, y_1; \quad x_2, y_2; \quad \ldots; \quad x_n, y_n

independent, irrespective of whether X and Y are among themselves dependent, sequential realizations ϕ(x_l, y_l) of ϕ(X, Y) may be taken to be normally distributed and independent.

3-7

Truth and Traceability in Physics and Metrology

3.3 Random errors In respect of (3.24) and (3.25) we observe

⎡ ∂ϕ ⎤ ∂ϕ ϕ(x¯ , y¯ ) = μϕ + ⎢ (x¯ − μx ) + (y¯ − μy )⎥ . ∂y¯ ⎣ ∂x¯ ⎦

(3.27)

Though it may seem obvious to produce the theoretical variance

σϕ2

2 2 ⎛ ∂ϕ ⎞2 σx2 ⎛ ∂ϕ ⎞⎛ ∂ϕ ⎞ σxy ⎛ ∂ϕ ⎞ σ y ⎜ ⎟ ⎜ ⎟ , = +2 +⎜ ⎟ ⎜ ⎟ ⎝ ∂x¯ ⎠ n ⎝ ∂x¯ ⎠⎝ ∂y¯ ⎠ n ⎝ ∂y¯ ⎠ n

the proceeding would be soon deadlocked as we know neither the theoretical variances σx2 and σy2 nor the theoretical covariance σxy. In a sense, we might replace the theoretical variances by their empirical counterparts sx2 and s y2 . A similar proceeding, meanwhile, appears prohibitive in view of the theoretical covariance. While σxy has a fixed sign, its empirical counterpart sxy can, in principle, be positive or negative. Hence, to substitute an empirical covariance for a theoretical one might lead us astray. For the sake of completeness it is noted that the theoretical variances and the theoretical covariance define what is called a theoretical variance–covariance matrix,

⎛ σxx σxy ⎞ σ = ⎜ σ σ ⎟; ⎝ yx yy ⎠

σxx ≡ σx2,

σxy = σyx,

σyy ≡ σ y2.

In so far as error calculus abstains from well-defined measuring conditions, say, if there are nx ≠ ny repeated measurements each, the empirical covariance cannot be formalized. Meanwhile, given the measurands are independent, the theoretical covariance σxy vanishes anyway and this sometimes tempts experimenters to consider

sϕ2 =

2 2 ⎛ ∂ϕ ⎞2 sx2 ⎛ ∂ϕ ⎞ s y ⎜ ⎟ , +⎜ ⎟ ⎝ ∂x¯ ⎠ nx ⎝ ∂y¯ ⎠ n y

a useful assessment of the theoretical variance σϕ2 . But let us look at this from the perspective of well-defined measuring conditions. Here we are in a position to sidestep theoretical variances and covariances [3, 4] and hence to find a way out of the dilemma. Indeed, (3.23), (3.24) and (3.25) lead to

⎡ ∂ϕ ⎤ ∂ϕ ϕ(xl , yl ) = μϕ + ⎢ (xl − μx ) + (yl − μy )⎥ ∂y¯ ⎣ ∂x¯ ⎦ ⎡ ∂ϕ ⎤ ∂ϕ ϕ(x¯ , y¯ ) = μϕ + ⎢ (x¯ − μx ) + (y¯ − μy )⎥ . ∂y¯ ⎣ ∂x¯ ⎦

3-8

(3.28)

Truth and Traceability in Physics and Metrology

The difference

ϕ(xl , yl ) − ϕ(x¯ , y¯ ) =

∂ϕ ∂ϕ (xl − x¯ ) + (y − y¯ ); ∂x¯ ∂y¯ l

l = 1, 2, … , n

(3.29)

yields the empirical variance n

sϕ2 =

1 ⎡⎣ϕ(xl , y ) − ϕ(x¯ , y¯ )⎤⎦2 ∑ l n − 1 l=1

(3.30)

issuing

sϕ2 =

n ⎡ ⎛ ∂ϕ ⎞2 1 ⎢⎜ ⎟ (xl − x¯ )2 ∑ n − 1 l = 1⎣⎝ ∂x¯ ⎠

⎤ ⎛ ∂ϕ ⎞2 ⎛ ∂ϕ ⎞⎛ ∂ϕ ⎞ + 2 ⎜ ⎟⎜ ⎟(xl − x¯ )(yl − y¯ ) + ⎜ ⎟ (yl − y¯ )2 ⎥ ⎝ ∂x¯ ⎠⎝ ∂y¯ ⎠ ⎥⎦ ⎝ ∂y¯ ⎠

(3.31)

⎛ ∂ϕ ⎞2 ⎛ ∂ϕ ⎞2 ⎛ ∂ϕ ⎞⎛ ∂ϕ ⎞ = ⎜ ⎟ sx2 + 2 ⎜ ⎟⎜ ⎟ sxy + ⎜ ⎟ s y2 ⎝ ∂x¯ ⎠ ⎝ ∂x¯ ⎠⎝ ∂y¯ ⎠ ⎝ ∂y¯ ⎠ where sx2 , s y2 and sxy denote the empirical variances and the empirical covariance, respectively, as given in (2.2) and (2.3). The associated empirical variance–covariance reads

⎛ sxx sxy ⎞ s = ⎜ s s ⎟; ⎝ yx yy ⎠

sxx ≡ sx2 ,

sxy = syx ,

syy ≡ s y2 .

Due to the inclusion of the empirical covariance sxy, the empirical variance sϕ2 of the ϕ(xl, yl) with respect to the arithmetic mean ϕ(x¯ , y¯ ) turns out to be statistically complete. Given that the random variables X and Y are independent, the expectation of Sxy, as has been stated, is zero. From this point of view it is sometimes argued that one should dismiss the empirical covariance altogether as a zero would be the best possible estimate. This happens to apply to σxy but in no way to sxy. Statement (3.27) suggests the introduction of a Student’s T of degrees of freedom n − 1,

T (n − 1) =

ϕ(X¯ ,Y¯ ) − μϕ Sϕ

n

(3.32)

so that there is a confidence interval

ϕ(x¯ , y¯ ) −

t (n − 1) tP(n − 1) sϕ sϕ ⩽ μϕ ⩽ ϕ(x¯ , y¯ ) + P n n

(3.33)

localizing the artefact μϕ with probability P. Again, μϕ points to the center of scattering of the random variable ϕ(X¯ ,Y¯ ). Evidently, there is now a confidence

3-9

Truth and Traceability in Physics and Metrology

interval in the event of error propagation, whether the incoming measurands are dependent or not. If they turn out to be dependent, the experimental device itself decides as to which of the realizations of X and Y are to be paired. If they are independent, the experimenter may preset any ordering.

3.4 Bias From (3.25), we gather that the systematic errors fx and fy cause the center of scattering μϕ to deviate from the true value ϕ(x0,y0). Regarding

−fs,x ⩽ fx ⩽ fs,x ,

fs,x ⩾ 0,

−fs,y ⩽ fy ⩽ fs,y ,

fs,y ⩾ 0

the worst-case estimation of the deviation is given by

fs,ϕ =

∂ϕ ∂x¯

fs,x +

∂ϕ ∂y¯

fs,y .

(3.34)

3.5 Overall uncertainty Inserting (3.25) into (3.33) produces

ϕ(x¯ , y¯ ) −

⎡ ∂ϕ t tP ∂ϕ ⎤ sϕ ⩽ ϕ(x0,y0) + ⎢ fx + fy ⎥ ⩽ ϕ(x¯ , y¯ ) + P sϕ n n ∂y¯ ⎦ ⎣ ∂x¯

− fs,x ⩽ fx ⩽ fs,x ,

fs,x ⩾ 0;

−fs,y ⩽ fy ⩽ fs,y , fs,y ⩾ 0.

Hence, the final results take the form

u ϕ¯ =

⎛ ∂ϕ ⎞2 2 ⎛ ∂ϕ ⎞⎛ ∂ϕ ⎞ tP(n − 1) ⎛ ∂ϕ ⎞2 2 ⎜ ⎟ s + 2⎜ ⎟⎜ s + ⎟ ⎜ ⎟s ⎝ ∂x¯ ⎠ x ⎝ ∂x¯ ⎠⎝ ∂y¯ ⎠ xy ⎝ ∂y¯ ⎠ y n +

∂ϕ ∂x¯

fs,x +

∂ϕ ∂y¯

(3.35) fs,y

ϕ(x¯ , y¯ ) − u ϕ¯ ⩽ ϕ(x0,y0) ⩽ ϕ(x¯ , y¯ ) + u ϕ¯ . Once again, there is no probability statement, rather the experimenter may consider the quotation quasi safe: the confidence interval (3.33) localizes μϕ at least with probability P and the worst-case estimation (3.34) ensures the localization of ϕ(x0,y0) with respect to μϕ according to (3.25). Incidentally it should be added that the confidence interval (3.33) is tied to what is called precision. In contrast, the overall uncertainty (3.35) expresses the accuracy of the measurement.

3-10

Truth and Traceability in Physics and Metrology

In analogy to (2.18) the experimenter may wish to add a more robust assessment of u ϕ¯ putting

⎛ ∂ϕ ⎞2 2 ⎛ ∂ϕ ⎞2 2 ⎛ ∂ϕ ⎞⎛ ∂ϕ ⎞ ⎜ ⎟ s + 2⎜ ⎟⎜ s + ⎟ ⎜ ⎟ sy xy ⎝ ∂x¯ ⎠ x ⎝ ∂x¯ ⎠⎝ ∂y¯ ⎠ ⎝ ∂y¯ ⎠ ⎛ ∂ϕ ⎞2 ⩽ ⎜ ⎟ sx2 + 2 ⎝ ∂x¯ ⎠

∂ϕ ∂x¯

∂ϕ ∂y¯

⎛ ∂ϕ =⎜ ⎝ ∂x¯

∂ϕ ∂y¯

⎞2 sy⎟ . ⎠

sx +

⎛ ∂ϕ ⎞2 sxsy + ⎜ ⎟ s y2 ⎝ ∂y¯ ⎠

With this, (3.35) passes into

u ϕ¯ ⩽

tP(n − 1) ⎛ ∂ϕ ⎜ n ⎝ ∂x¯

sx +

⎞ sy⎟ + ⎠

∂ϕ ∂y¯



∂ϕ ∂x¯

⎛ tP(n − 1) ⎞ sx + fs,x ⎟ + ⎜ ⎝ ⎠ n

=

∂ϕ ∂x¯

u x¯ +

∂ϕ ∂y¯

∂ϕ ∂y¯

∂ϕ ∂x¯

fs,x +

∂ϕ ∂y¯

fs,y

⎛ tP(n − 1) ⎞ sy + fs,y ⎟ ⎜ ⎝ ⎠ n

u y¯

so that

u ϕ¯ ⩽

∂ϕ ∂x¯

∂ϕ ∂y¯

u x¯ +

u y¯

(3.36)

reflects the option for an indeed basic proceeding. Example: treating a measured constant Let ϕ(x , y, c ) = x 2 − 2 c y refer to some measurement process where c denotes a given, previously measured constant with true value c0,

c¯ − u c¯ ⩽ c0 ⩽ c¯ + u c¯. Whether or not the uncertainty uc¯ is made up of a random and a systematic component or expresses but an assessed systematic error remains immaterial. The constant c enters passively, say, the experimenter has no idea as to which particular value of c might come into effect. Consequently, there is no other choice but to formally attribute a systematic error to c¯ putting

−u c¯ ⩽ fc¯ ⩽ u c¯ ;

fc¯ = const.

3-11

(3.37)

Truth and Traceability in Physics and Metrology

Hence, traceability should be established via

u ϕ¯ =

⎡ t (n − 1) 2 2 x¯ sx − 2 c¯ x¯ sxy + c¯ 2s y2 × ⎢P 2 ⎣ n x¯ − 2 c¯ y¯ ⎤ + ∣x¯∣ fs,x + ∣c¯∣ fs,y + ∣y¯∣u c¯ ⎥ ⎦ 1

(

)

(3.38)

ϕ(x¯ , y¯ , c¯ ) − u ϕ¯ ⩽ ϕ(x0, y0 , c0) ⩽ ϕ(x¯ , y¯ , c¯ ) + u ϕ¯ . A more robust assessment of u ϕ¯ yields

u ϕ¯ ⩽

∂ϕ ∂x¯

u x¯ +

∂ϕ ∂y¯

u y¯ +

∂ϕ ∂c¯

u c¯.

(3.39)

3.6 Error propagation at a glance The formalism discussed so far rests on assumptions and approximations as follows: • stationarily operating, non-drifting measuring devices, • decoupled random and systematic errors, • linearized series expansions, • well-defined measuring conditions, • assessment of random errors via confidence intervals according to Student, rating of systematic errors by worst-case estimations and, finally, • linear combination of uncertainty components due to random and systematic errors. After all, robust uncertainties appear more promising than scant ones [2]. Eisenhart, in [5], lucidly defined the bias as the difference between the center of scattering and the true value of the measurand, explaining: The systematic error, or bias, of a measurement process refers to its tendency to measure something other than what was intended.

References [1] Chao L L 1974 Statistics Methods and Analyses (Tokyo: McGraw-Hill Kogakusha Ltd) [2] Grabe M 2014 Measurement Uncertainties in Science and Technology 2nd edn (Berlin: Springer) 401pp ISBN 978-3-319-04887-1 [3] Grabe M 1990 A new formalism for the combination and propagation of random and systematic errors Measurement Science Conf. (Los Angeles, 8-9 February 1990)

3-12

Truth and Traceability in Physics and Metrology

[4] Grabe M 1990 Über die Interpretation empirischer Kovarianzen bei der Abschätzung von Meßunsicherheiten (On the interpretation of empirical covariances assessing measurement uncertainties) PTB-Mitt. 100 181–6 [5] Ku H H 1969 Precision Measurement and Calibration, NBS Special Publication 300 vol 1 Washington DC: United States Department of Commerce). Includes Realistic Evaluation of the Precision and Accuracy of Instrument Calibration Systems by C Eisenhart pp 21–47

3-13

IOP Concise Physics

Truth and Traceability in Physics and Metrology Michael Grabe

Chapter 4 Method of least squares

The method of least squares if considered in terms of geometry dispenses with concepts rooting in statistics.

4.1 Geometry of adjustment Let us base the least squares adjustment of a linear inconsistent system on geometrical considerations. To this end we refer to an orthogonal projection. For a vivid illustration, we may visualize a light source throwing parallel rays perpendicularly onto a screen, figure 4.1. The shadow being cast by a rod held obliquely above the screen stands for the idea of the method of least squares. As a first step we turn to the construction of the conventional arithmetic mean, arguably metrology’s most basic estimator. In this, the true value appears at the very beginning. Given n repeated measurements

x1, x2 , … , xn

Figure 4.1. The method of least squares as an orthogonal projection.

doi:10.1088/978-1-64327-096-8ch4

4-1

ª Morgan & Claypool Publishers 2018

Truth and Traceability in Physics and Metrology

aiming at the true value x0 of some measurand x. How can we find an appropriate estimator? Referring n times to the error equation (1.9), we have

x1 = x0 + ε1 + f x2 = x0 + ε2 + f ⋯ ⋯ ⋯ ⋯⋯⋯ ⋯ xn = x0 + εn + f .

(4.1)

Introducing the vectors

⎛ x1 ⎞ ⎜x ⎟ x = ⎜ 2 ⎟, ⎜⋯ ⎟ ⎝ xn ⎠

⎛1⎞ ⎜ ⎟ x0 = x0 ⎜ 1 ⎟ , ⎜⎜⋯⎟⎟ ⎝1⎠

⎛ ε1 ⎞ ⎜ε ⎟ ε = ⎜ 2 ⎟, ⎜⋯⎟ ⎝ εn ⎠

⎛1⎞ ⎜ ⎟ f = f ⎜1⎟ ⎜⎜⋯⎟⎟ ⎝1⎠

(4.2)

(4.1) takes the form

x = x0 + ε + f . While x0 is in the subspace

⎛1⎞ ⎜ ⎟ a = ⎜1⎟ ⎜⎜⋯⎟⎟ ⎝1⎠ x is not. We are looking for an estimator β according to

⎛ 1 ⎞ ⎛ x1 ⎞ ⎜ ⎟ ⎜x ⎟ β ⎜ 1 ⎟ ≈ ⎜ 2 ⎟. ⎜⎜⋯⎟⎟ ⎜⋯ ⎟ ⎝ 1 ⎠ ⎝ xn ⎠ Let β0 be the true value of the formally introduced estimator β. Here we have to confront the two statements aβ ≈ x and aβ0 = x0 , the right-hand equation implies β0 = x0 as a matter of course. The left-hand approximation obviously requests us to do something. As equality does not exist and cannot be established, we project x orthogonally onto a, thus producing a conditional equation for β by brute force,

⎛1⎞ ⎛ x1 ⎞ ⎜ ⎟ ⎜x ⎟ β¯ ⎜ 1 ⎟ = P ⎜ 2 ⎟ , ⎜⎜⋯⎟⎟ ⎜⋯ ⎟ ⎝ xn ⎠ ⎝1⎠ more concisely,

β¯a = Px. 4-2

(4.3)

Truth and Traceability in Physics and Metrology

Figure 4.2. Left: vector x of input data, vector r of residuals and true solution vector x0. Right: orthogonal projection Px of x onto a, r¯ vector of residuals perpendicular to a, β¯ stretch factor.

Here, P designates the geometrical operator projecting x orthogonally onto a. As the projection Px is in the subspace a, the equality is solvable and fixes the floating factor β to β¯ , figure 4.2, right [1]. The method of least squares is controlled via the difference vector

r = x − x0 the components of which being traditionally called residuals, figure 4.2 (left). In order to identify the orthogonality of the projection with the method of least squares, we compare an arbitrary vector of residuals,

r = x − βa, with the vector of residuals being due to the orthogonal projection,

r¯ = x − β¯a. It is easy to prove that the Euclidean norm of r exceeds the Euclidean norm of r¯ ,

r Tr ⩾ r¯ Tr, ¯ letting T symbolize the transpose of a vector. Written in terms of sums of squared residuals we have

(x1 − β )2 + ⋯ + (xn − β )2 ⩾ (x1 − β¯ )2 + ⋯ + (xn − β¯ )2 . As

a Tr¯ = 0,

i.e. a Ta β¯ = a Tx

(4.4)

we observe

β¯ = (a Ta )−1a Tx , hence

1 β¯ = n

n

∑ xl . l=1

4-3

(4.5)

Truth and Traceability in Physics and Metrology

The least squares estimator β¯ turns out to be the arithmetic mean of the observations xl ; l = 1, …, n. Its rendering is obviously clear of conceptions rooting in statistics. Multiplying aTa β¯ = aTx on the left by a(aTa )−1 issues

aβ¯ = a(a Ta )−1a Tx . Hence the projection operator reads

P = a(a Ta )−1a T.

(4.6)

In this case the elements of P hold the common value 1/n. The method of least squares cannot do wonders. All it does is to transfer the approximation aβ ≈ x by, say, a gimmick into the formally self-consistent statement (4.3): x is decomposed into the mutually orthogonal vectors Px and r¯ ,

x = Px + r, ¯ while Px enters (4.3), r¯ is thrown away. The input data x1, x2, …, xn are not subjected to requirements of any kind. Rather, the orthogonal projection ‘adjusts’ the given metrologically induced inconsistencies in a like manner, treating fatal errors of reasoning and the finest, metrologically unavoidable errors of measurement side by side, on the same level. Hence, the method of least squares is independent of the properties of the input data. However, these properties become relevant when it comes to assessing measurement uncertainties. Indeed, from a metrological point of view the construction of the least squares estimator does not finish the job. As x0 and β¯ differ, the experimenter rather wants to know by how much they might differ—otherwise his endeavors turned out to be futile. Meanwhile let us replace β¯ by the more familiar symbol x¯ = β¯ . Fortunately, the uncertainty of x¯ has already been quoted in (3.10), s x¯ − u x¯ ⩽ x0 ⩽ x¯ + u x¯, u x¯ ⩾ 0; u x¯ = tP x + fs,x . n It should be noted that the components of the vector of residuals, r¯ = x − aβ¯ , are not related to physical errors. As r¯ is due to an orthogonal projection, its components are mathematically induced auxiliary quantities. By contrast, the components of the vector r = x − x0 do express physical errors.

4.2 Linear systems Let there be some measuring assembly linking a set of linear relationships to measured data,

a11 β1 + a12 β2 + ⋯ + a1r βr = x1 a21 β1 + a22 β2 + ⋯ + a2r βr = x2 . ⋯ ⋯ ⋯ ⋯ ⋯ am1 β1 + am2 β2 + ⋯ + amr βr = xm

4-4

(4.7)

Truth and Traceability in Physics and Metrology

To rewrite the system in matrices, we let A denote an (m × r) matrix of coefficients aik, β an (r × 1) column vector of unknowns βk and, finally, x an (m × 1) column vector gathering the measured data xi,

⎛ a11 ⎜ a21 A=⎜ ⎜⋯ ⎜ ⎝ am1

a12 ⋯ a1r ⎞ ⎟ a22 ⋯ a2r ⎟ , ⋯ ⋯ ⋯⎟ ⎟ am2 ⋯ amr ⎠

⎛ β1 ⎞ ⎜ ⎟ ⎜β ⎟ β = ⎜ 2 ⎟, ⋯ ⎜ ⎟ ⎝ βr ⎠

⎛ x1 ⎞ ⎜ ⎟ ⎜ x2 ⎟ x = ⎜ ⎟, ⋯ ⎜⎜ ⎟⎟ ⎝ xm ⎠

thus

Aβ = x. Taking the input data to be flawed,

xi = x0,i + (xi − μi ) + fi , −fs,i ⩽ fi ⩽ fs,i ;

i = 1, … , m ,

the notified equality no longer holds. Rather we have to concede

(4.8)

Aβ ≈ x.

Notwithstanding that, we should assume a true, flawless linear system does exist, as otherwise the underlying metrological problems were ill-posed. Letting

x0 = (x0,1

x0,2



x0,m )T

denote the unknown and inaccessible vector of flawless input data, the corresponding true system reads

Aβ0 = x0

(4.9)

where β0 stands for the true solution vector. In view of the pending least squares approach we ask for m > r. We shall come back to this later. In respect of the true system (4.9), m = r appears appropriate. Yet, as long as the linear system remains physically and mathematically meaningful, we may deduce β0 even from m > r equations. On this understanding, there will be a true solution vector

β0 = (β0,1

β0,2



β0,r )T

be m = r or m > r. Multiplying (4.9) on the left by AT yields

AT Aβ0 = AT x0 so that

β0 = B Tx0,

B = A(AT A)−1,

given rank (A) = r. In this case the r column vectors

4-5

(4.10)

Truth and Traceability in Physics and Metrology

⎛ a1k ⎞ ⎜a ⎟ 2k ak = ⎜ ⎟ ; ⎜⋯⎟ ⎜ ⎟ ⎝ amk ⎠

k = 1, … , r

are linearly independent and may be taken to span an r-dimensional vector space, [1–2], so that the left-hand side of (4.9),

⎛ a11 ⎞ ⎜a ⎟ 21 β0,1 ⎜ ⎟ + β0,2 ⋯ ⎜ ⎟ ⎜ ⎟ ⎝ am1⎠

⎛ a12 ⎞ ⎜a ⎟ ⎜ 22 ⎟ + ⋯ + β0,r ⎜⋯⎟ ⎜ ⎟ ⎝ am 2 ⎠

⎛ a1r ⎞ ⎜a ⎟ ⎜ 2r ⎟ , ⎜⋯⎟ ⎜ ⎟ ⎝ amr ⎠

reproduces the vector x0. To recall: the linear system is solvable for β0 if the vector x0 is inside the column space of A. However, due to measuring errors the empirical input vector x is outside. From there, we look for a least squares solution which will be by its very nature an approximation by brute force. The idea is illustrated in figure 4.3. Indeed, we decompose the m-dimensional space, holding the erroneous vector of observations, x, into the r-dimensional column space of the matrix A and the (m − r)-dimensional null space of its transpose AT. As every vector of A is orthogonal to every vector of the null space of AT and vice versa, the two spaces are orthogonal. Hence, the same holds for the decomposition of x. Thus, all we have to do is to project x orthogonally onto the column space of A, the projection rendering the inconsistent linear system solvable,

Figure 4.3. The vector r¯ of residuals is perpendicular to the r-dimensional subspace, spanned by the column vectors ak; k = 1, …, r.

4-6

Truth and Traceability in Physics and Metrology

Aβ¯ = Px,

(4.11)

P = A(ATA)−1AT

(4.12)

denoting by

the projection operator. The remaining vector of residuals,

r¯ = x − Aβ¯ ,

(4.13)

is in the null space of AT, hence orthogonal to the column vectors of A,

AT r¯ = 0, 1

(4.14)

and may be shown to minimize the Euclidean norm of

r = x − Aβ , this again reflecting the idea of the method of least squares. Inserting (4.13) into (4.14) we observe

AT (x − Aβ¯ ) = 0 so that

β¯ = (β¯1 β¯2



β¯r )T = B Tx ,

B = A(AT A)−1.

(4.15)

In the case of m = r, the system (4.8) would be intractable. The projection has to start from a higher-dimensional space so as to direct itself ‘down’ to a lower dimensional one. Thus, it is required to have at least m = r + 1 observational equations which is why we asked for m > r. Once again let us address the necessity to assign a true linear system to the erroneous linear system: uncertainty assignments ask for physically meaningful reference values. This, in the end, is the reason for invoking the vector x0 of true observations and the true solution vector β0. Hence we have to put the two systems

β¯ = B Tx

and

β0 = B Tx0

(4.16)

side by side. Again, the crucial point is to relate the estimator β¯ to the unknown and inaccessible true vector β0. To be sure, the searched for uncertainties of the β¯1, … , β¯r are to localize the true values β0,1, …, β0,r of the physical quantities considered so as to render traceability a realistic metrological ambition. This will be addressed in chapter 6.

1 Alternatively, we may conceive a three-dimensional space holding a rectangular coordinate system. Any vector (x, y, z)T may be decomposed into a vector (x, y, 0)T and an orthogonal vector (0, 0, z)T.

4-7

Truth and Traceability in Physics and Metrology

For convenience let us explicitly jot down the elements of the matrix B,

⎛ b11 ⎜ ⎜b B = ⎜ 21 ⎜⋯ ⎜ ⎝ bm1

b12 ... b1r ⎞ ⎟ b22 ... b2r ⎟ ⎟. ⋯ ⎟ ⎟ bm2 ... bmr ⎠

(4.17)

4.3 Quintessence of the method of least squares The key idea of the method of least squares is the orthogonal projection [1, 3, 4]. In that, the errors of the input data are distinguished neither with respect to their properties nor with respect to their causes. Given the design matrix A has full rank, any vector x of incoming data gives rise to a solution vector β¯ , whether the approach is meaningful or not. A meaningful adjustment asks for the existence of a physically true system,

Aβ0 = x0,

(4.18)

otherwise the underlying problem was ill-posed. The construction of the solution vector via orthogonal projection does not ask for assumptions rooting in statistics. However, in order to specify the uncertainties of the least squares estimators, sufficient information as to the properties of the measuring errors of the incoming data is needed.

References [1] Strang G 1988 Linear Algebra and its Applications (New York: Harcourt Brace Jovanovich College Publishers) [2] Ayres F 1974 Schaum’s Outline Series Matrices (New York: McGraw-Hill) [3] Seber G A F 1977 Linear Regression Analysis (New York: Wiley) [4] Draper N R and Smith H 1981 Applied Regression Analysis (New York: Wiley)

4-8

IOP Concise Physics

Truth and Traceability in Physics and Metrology Michael Grabe

Chapter 5 Fitting of straight lines

The fitting of a straight line to a measured set of data points is a basic yet eminently important application of the method of least squares.

5.1 True straight line As far as the underlying physical model is linear, the demand for traceability suggests the existence of a true straight line, formally constituted via flawless, say, true data pairs. Flawed data pairs, in contrast, ask for a least squares fit of a straight line where the measurement uncertainties of the ordinate-intercept and the slope are meant to relate these same estimators to the true parameters of the true straight line. For formal reasons, we initially pursue the construction of the true straight line. Let there be m > 2 pairs of true coordinates

(x0,1, y0,1),

(x0,2 , y0,2 ), … , (x0,m , y0,m );

m>2

(5.1)

establishing the true straight line

y0(x ) = β0,1 + β0,2 x .

(5.2)

In this, β0,1 and β0,2 denote the y-intercept and the slope, respectively. In terms of the m data points we have

y0,1 = β0,1 + β0,2x0,1 y0,2 = β0,1 + β0,2x0,2 ..... .......................

(5.3)

y0,m = β0,1 + β0,2x0,m .

doi:10.1088/978-1-64327-096-8ch5

5-1

ª Morgan & Claypool Publishers 2018

Truth and Traceability in Physics and Metrology

To dispose of a compact notation, we introduce a matrix A and vectors β0 and y0,

⎛ 1 x0,1 ⎞ ⎜ ⎟ ⎜ 1 x0,2 ⎟ A=⎜ , ⋯ ⋯⎟ ⎜⎜ ⎟⎟ ⎝ 1 x0,m ⎠

⎛ y0,1 ⎜y 0,2 y0 = ⎜ ⎜⋯ ⎜y ⎝ 0,m

⎛ β0,1⎞ β0 = ⎜⎜ β ⎟⎟ , ⎝ 0,2 ⎠

⎞ ⎟ ⎟, ⎟ ⎟ ⎠

turning (5.3) into

Aβ0 = y0 .

(5.4)

Assuming rank (A) = 2, the solution vector reads

β0 = B Ty0 ,

B = A(AT A)−1.

(5.5)

When it comes to measurement uncertainties, the true straight line will serve as a reference. This is its meaning.

5.2 Fitting conditions Let us consider three situations: Straight line (I): Abscissas error-free. Ordinates erroneous measurements charged with a common systematic error and individual random errors stemming from one and the same normal distribution,

(x0,i , yi );

i = 1, ⋯ , m > 2 (5.6) yi = y0,i + (yi − μ yi ) + fy ;

−fs,y ⩽ fy ⩽ fs,y .

Straight line (II): Abscissas error-free. Ordinates arithmetic means charged with various systematic errors and random errors coming from varying normal distributions,

(x0,i , y¯i );

i = 1, ⋯ , m > 2 y¯i =

1 n

n

∑ yil

(5.7) = y0,i + (y¯i − μ y¯i ) + fy¯i ;

−fs,y¯i ⩽ fy¯i ⩽ fs,y¯i .

l=1

Straight line (III): Abscissas and ordinates arithmetic means charged with various systematic errors and random errors stemming from varying normal distributions,

(x¯i , y¯i );

i = 1, ⋯ , m > 2 x¯i =

1 n

1 y¯i = n

n

∑ xil = x0,i + (x¯i − μx¯ ) + fx¯ ; i

i

−fs,x¯i ⩽ fx¯i ⩽ fs,x¯i

l=1 n

∑ yil

= y0,i + (y¯i − μ y¯i ) + fy¯i ;

l=1

5-2

−fs,y¯i ⩽ fy¯i ⩽ fs,y¯i .

(5.8)

Truth and Traceability in Physics and Metrology

5.3 Straight line (I) Figure 5.1 depicts the data set (5.6). The abscissas x0,i are assumed to be error-free, the ordinates erroneous and to have been measured independently

yi = y0,i + (yi − μ yi ) + fy ,

i = 1, … , m (5.9)

μ yi = y0,i + fy ;

−fs,y ⩽ fy ⩽ fs,y .

As usual, we formally associate random variables Yi with the measured ordinates yi, where each Yi has been triggered only once. Obviously, the available information is as scarce and simple as possible which, however, corresponds to typical metrological situations. Hence it will be somewhat intricate to extract the information needed in order to quantify the properties of the underlying physical problem. The Yi are assumed to have theoretical expectations E{Yi } = μyi ; i = 1, … , m and the random errors (yi − μyi ); i = 1, … , m to refer to normal distributions N( μyi , σ 2 ). As has just been hinted at, μyi marks the notional scattering center at the ith measuring point (though there is but one measurement (x0,i, yi)) and σ2 the theoretical variance as given by the assumed normal distributions. Both parameters remain fictional, there is however, no other choice to proceed. Further, the ordinates are offset due to one and the same unknown systematic error fy = const. This, alas, is all we have at our disposal. So far, we have no quantitative knowledge as to the uncertainties of the input data. At best, the scattering of the individual measurements yi ; i = 1, …, m provides a qualitative impression. Meanwhile, with the benefit of hindsight, we shall nonetheless be in a position to localize the true values y0,i of the input data via appropriately tailored inequalities

Figure 5.1. Input data (x0,i , yi ); i = 1, … , m , least squares line y(x ) = β¯1 + β¯1 x .

5-3

Truth and Traceability in Physics and Metrology

yi − u y ⩽ y0,i ⩽ yi + u y;

i = 1, ⋯ , m .

But above all, we are looking for the least squares estimators β¯1, β¯2 and their uncertainties

β¯1 − u β¯1 ⩽ β0,1 ⩽ β¯1 + u β¯1,

β¯2 − u β¯2 ⩽ β0,2 ⩽ β¯2 + u β¯2

(5.10)

meant to localize the ordinate-intercept and the slope of the unknown true straight line. As the case may be, we might also wish to formalize the mutual dependence of the estimators β¯1 and β¯2 as confined via their uncertainties u β¯1 and u β¯2 . There are varying approaches to fit straight lines to given sets of data points. We may consider distances parallel to the y-axis, parallel to the x-axis or even perpendicular to the straight line itself. Whatever path is taken, the pertaining uncertainties according to (5.10) have to be ascertained. The approach discussed here refers to distances parallel to the y-axis. Least squares estimators As (5.2) has to be modified according to (5.9), the inconsistent over-determined linear system to be submitted to least squares reads

β1 + β2x0,i ≈ yi ;

i = 1, ⋯ , m > 2.

(5.11)

Putting

⎛ 1 x0,1 ⎞ ⎜ ⎟ ⎜ 1 x0,2 ⎟ A=⎜ , ⋯ ⋯⎟ ⎜⎜ ⎟⎟ ⎝ 1 x0,m ⎠

⎛ β1 ⎞ ⎜ ⎟ β = ⎜ ⎟, ⎜ ⎟ ⎜ β2 ⎟ ⎝ ⎠

⎛ y1 ⎞ ⎜ y2 ⎟ y=⎜ ⎟ ⎜⋯⎟ ⎜ ⎟ ⎝ ym ⎠

the matrix notation of (5.11) takes the form

Aβ ≈ y.

(5.12)

Albeit there are no repeated measurements, we nevertheless denote the least squares estimators with a bar on top. The orthogonal projection of the vector y of observations onto the column space of the matrix A produces the least squares solution vector

β¯ = B Ty ,

B = A(AT A)−1 = (bik )

m

β¯k = ∑ bik yi ;

(5.13) k = 1, 2.

i=1

The estimators β¯1 and β¯2 establish the least squares line

y(x ) = β¯1 + β¯2 x . 5-4

(5.14)

Truth and Traceability in Physics and Metrology

There are good reasons to go over the elements of the matrix B, m ⎡ m ⎢ ∑ x0,2j − x0,1 ∑ x0,j ⎢ j=1 j=1 ⎢ m m 1 ⎢ ∑ x0,2j − x0,2 ∑ x0,j B= ⎢ j=1 D ⎢ j=1 ⋯ ⎢m m ⎢ 2 x − x x0,j ∑ ∑ 0, m 0,j ⎢ j=1 ⎣ j=1

⎤ − ∑ x0,j + mx0,1 ⎥ ⎥ j=1 ⎥ m − ∑ x0,j + mx0,2 ⎥⎥ j=1 ⎥ ⋯ ⎥ m ⎥ − ∑ x0,j + mx0,m ⎥ j=1 ⎦ m

(5.15)

where m

D = ∣A A∣ = m ∑ T

x0,2j

j=1

⎤2 ⎡m − ⎢∑ x0,j ⎥ . ⎥⎦ ⎢⎣ j=1

Summing the elements

bi1 =

⎤ ⎡m m 1⎢ x0,2j − x0,i ∑ x0,j ⎥ , ∑ ⎥⎦ D ⎢⎣ j = 1 j=1

i = 1, … , m (5.16)

⎤ ⎡ m 1⎢ bi 2 = −∑ x0,j + mx0,i ⎥ ⎥⎦ D ⎢⎣ j = 1 we observe m

m

∑ bi1 = 1

and

i=1

∑ bi 2 = 0.

(5.17)

i=1

Uncertainties of the input data The traditional handling of the underlying least squares approach starts from the minimized sum of squared residuals

Q¯ = (y − Aβ¯ )T (y − Aβ¯ ) ¯ σ 2 to follow a χ2 distribution with degrees of freedom (m − 2), considering Q/

χ 2 (m − 2) =

(m − 2)s 2 , σ2

so that

s2 =

Q¯ m−2

5-5

(5.18)

Truth and Traceability in Physics and Metrology

estimates the unknown theoretical variance σ2 of the random errors of the input data. Meanwhile, due to systematic errors we should be cautious as to the validity of (5.18). In this situation however, all that the common systematic error does is to shift the least squares line either up or down the y-axis at it in no way affecting the scattering of the random errors via the N(μyi , σ 2 ) distributions. Thus, (5.18) remains valid. Hence, there is a Student’s T′,1

T ′(m − 2) =

(Yi − μ yi ) S

;

(5.19)

i = 1, … , m ,

spawning confidence intervals

yi − tP′ (m − 2) s ⩽ μ yi ⩽ yi + tP′ (m − 2) s ;

i = 1, … , m .

(5.20)

for the expectations μyi = E{Yi }. Though there are no proper repeated measurements, according to (5.18) the least squares mechanism and the assumption of a common systematic error f = const. at each of the measuring points yield the empirical variance s2 of the scattering of the measured data. Again, s denotes the pertaining empirical standard deviation and S the associated random variable, the implied degree of freedom being m − 2. Drawing

μ yi = y0,i + fy ;

−fs,y ⩽ fy ⩽ fs,y

from (5.9), we observe via (5.20)

yi − tP′ (m − 2) s ⩽ y0,i + fy ⩽ yi + tP′ (m − 2) s ;

i = 1, … , m

− fs,y ⩽ fy ⩽ fs,y . Hence, the true values y0,i of the ordinates yi are localized by

u y = tP′ (m − 2) s + fs,y yi − u y ⩽ y0,i ⩽ yi + u y;

(5.21)

i = 1, … , m .

Here uy denotes the uncertainty of any of the measured ordinates yi. 1

As discussed in [1], there are in fact two differing Student variables,

T (n − 1) =

(X¯ − μ y ) x

Sx

n

and

T ′(n − 1) =

(X − μ y ) x

Sx

;

referring to the same Student distribution of degrees of freedom ν = n − 1.

5-6

i = 1, … , m,

Truth and Traceability in Physics and Metrology

Uncertainties of the least squares estimators Inserting (5.9) into (5.13) yields m

β¯k = ∑ bik [y0,i + (yi − μ yi ) + fy ] i=1 m

= β0,k +

(5.22)

m

∑ bik (yi − μ y ) + fy ∑ bik ; i

i=1

k = 1, 2.

i=1

Systematic errors The statistical expectations m

μ β¯k = β0,k + fy

∑ bik ;

k = 1, 2

(5.23)

i=1

are to be compared against the true values β0,k with respect to the propagated systematic errors. Hence m

fβ¯k = fy

∑ bik ;

k = 1, 2.

i=1

Due to (5.17), m

m

∑ bi1 = 1, ∑ bi 2 = 0, i=1

i=1

we have, as was to be expected,

fβ¯1 = fy ,

fβ¯2 = 0.

Random errors The elements of the empirical variance–covariance matrix of the least squares solution vector

⎛ s β¯1 β¯1 s β¯ = ⎜⎜ ⎝ s β¯2 β¯1

s β¯1 β¯2 ⎞ ⎟ s β¯2 β¯2 ⎟⎠

are attainable as follows: the theoretical variances and the theoretical covariance come from m

β¯k = μ β¯k +

∑ bik (yi − μ y ); i

k = 1, 2

(5.24)

i=1

as m

σ β¯1 β¯1 = σ 2 ∑ bi21 , i=1

m

σ β¯1 β¯2 = σ 2 ∑ bi1bi 2 , i=1

5-7

m

σ β¯2 β¯2 = σ 2 ∑ bi22 i=1

Truth and Traceability in Physics and Metrology

where σ β¯1 β¯1 ≡ σ β2¯1, σ β¯1 β¯2 = σ β¯2 β¯1 and σ β¯2 β¯2 ≡ σ β2¯2 . The empirical counterparts of the theoretical moments of second order obviously read m

m

s β¯1 β¯1 = s 2 ∑ bi21 ,

m

s β¯2 β¯2 = s 2 ∑ bi22

s β¯1 β¯2 = s 2 ∑ bi1bi 2

i=1

i=1

(5.25)

i=1

wherein s β¯1 β¯1 ≡ s β2¯1, s β¯1 β¯2 = s β¯2 β¯1 and s β¯2 β¯2 ≡ s β2¯2 . Student’s T′ pertaining to β¯1 takes the form2

T ′(m − 2) =

β¯1 − μ β¯1 m

S

.

bi21

∑ i=1

so that m

β¯1 − tP′ (m − 2) s

m

∑ bi21

∑ bi21

⩽ μ β¯1 ⩽ β¯1 + tP′ (m − 2) s

i=1

(5.26)

i=1

spans a confidence interval for μ β¯1. Supplementary, we let the least squares line

y(x ) = β¯1 + β¯2 x spawn a line connecting the expectations μy(x ) = μ β¯1 + μ β¯2 x where m

μ β¯1 =

m

∑ bi1 μ yi = β0,1 + fy ;

μ β¯2 =

i=1

∑ bi 2 μ y

i

= β0,2 .

i=1

Overall uncertainties Inserting μ β¯1 = β0,1 + fy into (5.26) results in m

β¯1 − tP′ (m − 2) s



m

bi21

⩽ β0,1 + fy ⩽ β¯1 + tP′ (m − 2) s

i=1

∑ bi21 i=1

− fs,y ⩽ fy ⩽ fs,y . Hence, traceability asks for m

u β¯1 = tP′ (m − 2) s

∑ bi21

β¯1 − u β¯1 ⩽ β0,1 ⩽ β¯1 + u β¯1

+ fs,y ;

i=1

(5.27)

m

u β¯2 = tP′ (m − 2) s

∑ bi22 ;

β¯2 − u β¯2 ⩽ β0,2 ⩽ β¯2 + u β¯2.

i=1

2

For the sake of clarity β¯1 has not been replaced with a capital letter.

5-8

Truth and Traceability in Physics and Metrology

As fy shifts the straight line parallel to itself, fs,y does not enter the uncertainty u β¯2 of the slope β¯2 . Uncertainty band It may be of interest to localize the true ordinate y0(x) for any value of x. To this effect a so-called uncertainty band is helpful. Inserting the least squares estimators (5.22) into the least squares line (5.14) yields

y(x ) = β¯1 + β¯2x m

= (β0,1 + β0,2x ) +

∑(bi1 + bi 2x)(yi − μ y ) + fy .

(5.28)

i

i=1

Subtracting the expectation

μy(x ) = (β0,1 + β0,2x ) + fy ;

−fs,y ⩽ fy ⩽ fs,y

(5.29)

we have m

y(x ) − μy(x ) =

∑(bi1 + bi 2x)(yi − μ y ). i

i=1

Obviously we are in need of an empirical variance s y2¯(x ) of y(x) with respect to μy(x). To this end we observe the theoretical variance m

σ y2(x )



2

∑(bi1 + bi 2x)2 i=1

and substitute the empirical variance s2 for the inaccessible theoretical variance σ2. This yields m

s y2¯ (x )

=s

2

∑(bi1 + bi 2x)2 i=1

which we consider to be an empirical estimator of σ 2y(x). With this, Student’s T reads

Y (x ) − μy(x )

T ′(m − 2) =

m

S

.

∑(bi1 + bi 2x)2 i=1

Hence, there is a confidence interval m

y(x ) − tP′ s

∑(bi1 + bi 2x)2

m

⩽ μy(x ) ⩽ y(x ) + tP′ s

i=1

∑(bi1 + bi 2x)2 i=1

5-9

Truth and Traceability in Physics and Metrology

localizing μy(x) with probability P. With a view to (5.29), we deduce m

y(x ) − tP′ s

m

∑(bi1 + bi 2x) ⩽ y0(x) + fy ⩽ y(x) + tP′ s

∑(bi1 + bi 2x)2

i=1

i=1

2

− fs,y ⩽ fy ⩽ fs,y . Finally, the searched for uncertainty band turns out to be m

u y(x ) = tP′ (m − 2) s

∑(bi1 + bi 2x)2

+ fs,y

i=1

(5.30)

y(x ) − u y(x ) ⩽ y0(x ) ⩽ y(x ) + u y(x ). The true straight line (5.2) should lie within this band. EP-region Betimes the experimenter might ask for a two-dimensional region localizing the pair of true values β0,1 and β0,2 with respect to the pair of estimators β¯1 and β¯2 . Such areas are confined by what I have called EP-regions as their boundaries are given by the combination of Ellipses and Polygons. Incidentally, in the case of three dimensions, say, when there are three variables to be adjusted, e.g. in fitting a circle, ellipsoids and polyhedra have to be assembled [1–4]. The addressed ellipse is, as a matter of course, a confidence ellipse which in this case makes itself out to be [5]

s β¯2 β¯2(β1 − β¯1)2 − 2s β¯1 β¯2(β1 − β¯1)(β2 − β¯2 ) + s β¯1 β¯1(β2 − β¯2 )2 (5.31) = tP′ 2(2, m − 2)∣s β¯∣ , being centered in (β¯1, β¯2 ). The confidence ellipse is expected to localize the point

⎛ μ β¯1 ⎞ μ β¯ = ⎜⎜ ⎟⎟ ⎝ μ β¯2 ⎠

(5.32)

with probability P. Security polygon To underpin the exceptional role of the polygon referred to and to have a linguistic correspondence to confidence ellipse I have termed the new figure security polygon. In the case at hand, due to the measuring conditions of error-free abscissas and erroneous ordinates charged by one and the same unknown systematic error, the security polygon degenerates into an interval

5-10

Truth and Traceability in Physics and Metrology

− fs,y ⩽ fβ¯1 ⩽ fs,y (5.33) fβ¯2 = 0. For convenience, one might wish to visualize this interval as a ‘stick’ of length 2fs,y. The merging of a confidence ellipse and a degenerated security polygon has been discussed in [5]. Data simulation As the true values of the measurands are inaccessible, the formalism under discussion cannot be verified on the basis of actually measured data. Meanwhile, for a check it stands to reason to simulate data having the property of measured data. In doing so, figure 5.2 displays the true straight line (5.2), the fitted straight line (5.14), the uncertainties (5.27) of the estimators β¯1, β¯2 , the uncertainty band (5.30) and, finally, the EP-region, as deduced by merging (5.31) with (5.33), localizing the tuple of true values β0,1 and β0,2 with respect to the estimators β¯1, β¯2 . Concerning realistic data, the evaluation procedures remain the same, apart from the impossibility of drawing the true straight line and marking the pair of true values β0,1, β0,2.

Figure 5.2. Straight line (I), simulated data. Left: true straight line y0(x), least squares line y(x), error bars of the input data yi ± uy and uncertainty band y(x) ± uy(x). Top right: uncertainties of the estimators u β¯1, u β¯2 . Bottom right: EP-region, localizing the pair of true values β0,1, β0,2 with respect to the estimators β¯1, β¯2 .

5-11

Truth and Traceability in Physics and Metrology

Figure 5.3. Straight line (II). Left: least squares line y(x) with uncertainty band y(x) ± uy(x), error bars of the input data y¯i ± u y¯i , true straight line y0(x). Top right: uncertainties of the estimators u β¯1, u β¯2 . Bottom right: EPregion, localizing the pair of true values β0,1, β0,2 with respect to the estimators β¯1, β¯2 .

5.4 Straight line (II) Given the ordinates are charged by varying systematic errors, repeated measurements in each measuring point prove indispensable as the minimized sum of squared residuals does not issue viable information regarding the scattering of random errors. Moreover, the ordinates were anyway admitted to scatter differently. In regard to repeated measurements, well-defined measuring conditions are advisable as annotated in (5.7). Figure 5.3 rests on simulated data. Again, we observe the true straight line (5.2), the fitted straight line (5.14), the uncertainties of the estimators β¯1, β¯2 , the uncertainty band and, finally, the EP-region, as established by merging a confidence ellipse with a security polygon, localizing the tuple of true values β0,1 and β0,2 with respect to the estimators β¯1, β¯2 .

5.5 Straight line (III) Strictly speaking, the abscissas of any straight line fit are flawed, that however is rarely brought to bear. Naturally, the minimized sum of squared residuals does not issue any information in respect of the scattering of random errors. The flawed abscissas ask for series expansions of the least squares estimators. 5-12

Truth and Traceability in Physics and Metrology

Figure 5.4. Straight line (III). Left: least squares line y(x) with uncertainty band y(x) ± uy(x), error bars of the input data x¯i ± u x¯i , y¯i ± u y¯i and true straight line y0(x). Top right: uncertainties of the estimators u β¯1, u β¯2 . Bottom right: EP-region, localizing the pair of true values β0,1, β0,2 with respect to the estimators β¯1, β¯2 .

Figure 5.4 is due to data simulation. Depicted are the true straight line (5.2), the fitted straight line (5.14), the uncertainties of the estimators β¯1, β¯2 , the uncertainty band and, eventually, the EP-region, as produced by merging a confidence ellipse with a security polygon, localizing the tuple of true values β0,1 and β0,2 with respect to the estimators β¯1, β¯2 .

References [1] Grabe M 2014 Measurement Uncertainties in Science and Technology 2nd edn (Berlin: Springer) 401pp ISBN 978-3-319-04887-1 [2] Grabe M 1989 Anpassung eines Kreises nach kleinsten Quadraten (Least Squares Fit of a Circle) Jahresbericht der Physikalisch-Technischen Bundesanstalt pp 210–1 [3] Grabe M 1992 Uncertainties, confidence ellipsoids and security polytopes in LSA Phys. Lett. A165 124–32 1995 Erratum A205 425 [4] Grabe M 1993 On the estimation of one- and multidimensional uncertainties Proc. of National Conf. of Standards Laboratories (25-29 July 1993, Albuquerque, USA) pp 569–76 [5] Grabe M 2010 Generalized Gaussian Error Calculus (Berlin: Springer) 301pp ISBN 978-3-64203304-9

5-13

IOP Concise Physics

Truth and Traceability in Physics and Metrology Michael Grabe

Chapter 6 Features of least squares estimators

Least squares estimators come from a brute force approximation. Hence, uncertainties are required to cover the differences between estimators and respective true values.

6.1 Uncertainties Let us return to linear systems as addressed in (4.8). In view of input data being charged by varying systematic errors, the minimized sum of squared residuals does not result in usable information with respect to the properties of random errors [1, 2]. Thus, experimenters may not rely on input data being individual measurements xi ; i = 1, …, m. Rather, they have to resort to arithmetic means

x¯i =

1 n

n

∑ xil ;

i = 1, … , m

l=1

which should, however, be based on well-defined measuring conditions, say on equal numbers of repeated measurements. The error equations read

x¯i = x0,i + (x¯i − μi ) + fi ;

−fs,i ⩽ fi ⩽ fs,i ;

i = 1, … , m .

(6.1)

Again, the inconsistent linear system is formally backed by a true system,

Aβ ≈ x¯

and

Aβ0 = x0.

The least squares approach reads

β¯ = B Tx¯ ;

B = A(AT A)−1 = (bik ).

Inserting the error equations (6.1) into the components m

β¯k =

∑ bik x¯i ;

k = 1, … , r

i=1

doi:10.1088/978-1-64327-096-8ch6

6-1

ª Morgan & Claypool Publishers 2018

Truth and Traceability in Physics and Metrology

of the least squares solution vector issues m

β¯k = ∑ bik ⎡⎣x0,i + (x¯i − μi ) + fi ⎤⎦ ;

k = 1, … , r

i=1 m

= β0,k +

m

∑ bik (x¯i − μi ) + ∑ bik fi . i=1

i=1

Putting m

μ β¯k = β0,k +

∑ bik fi ;

k = 1, … , r

i=1

we have m

β¯k = μ β¯k +

∑ bik (x¯i − μi ).

(6.2)

i=1

In view of well-defined measuring conditions the experimenter has the complete empirical variance–covariance matrix of the input data at his command. In fact, the elements n

sij =

1 ∑(xil − x¯i )(xjl − x¯j ); n − 1 l=1

i , j = 1, … , m

establish

⎛ s11 ⎜ s21 s=⎜ ⎜⋯ ⎜ ⎝ sm1

s12 s22 ⋯ sm2

⋯ s1m ⎞ ⎟ ⋯ s2m ⎟ . ⋯ ⋯ ⎟ ⎟ ⋯ smm ⎠

As to the empirical variance–covariance matrix of the estimators β¯k ; k = 1, … , r we observe, due to a common n, m

β¯k = ∑ bik x¯i i=1 m

⎡1 = ∑ bik ⎢ ⎢⎣ n i=1

⎤ 1 ∑ xil ⎥ = ⎥⎦ n l=1 n

⎡m ⎤ ∑⎢∑ bik xil ⎥; ⎢ ⎥⎦ l = 1⎣ i = 1 n

k = 1, … , r .

In this, the quantities m

β¯kl =

∑ bik xil ;

l = 1, … , n

i=1

may be taken to implement least squares adjustments based on input data x1l, x2l, …, xml where each of the m means x¯i throws in the respective lth measured datum,

6-2

Truth and Traceability in Physics and Metrology

x11 x12 ⋯ x1l ⋯ x1n ⇒ x¯1 x21 x22 ⋯ x2l ⋯ x2n ⇒ x¯2 ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯⋯ xm1 xm2 ⋯ xml ⋯ xmn ⇒ x¯m

β¯k1 β¯k 2

β¯kl

β¯kn ⇒ β¯k .

Subtracting β¯k from β¯kl yields m

β¯kl − β¯k =

∑ bik (xil − x¯i );

k = 1, … , r ;

l = 1, … , n .

i=1

Ultimately, these differences lead us to the empirical variances and covariances n

s β¯k β¯k′ =

1 ∑(β¯ − β¯k )(β¯k ′ l − β¯k ′); n − 1 l = 1 kl

k , k′ = 1, … , r

and thus to the complete matrix

⎛ s β¯1 β¯1 ⎜ ⎜ s β¯2 β¯1 s β¯ = ⎜ ⎜⋯ ⎜s ¯ ¯ ⎝ βr β1

s β¯1 β¯2 ⋯ s β¯1 β¯r ⎞ ⎟ s β¯2 β¯2 ⋯ s β¯2 β¯r ⎟ 2 ⎟ , s β¯ β¯ ≡ s β¯k , ⋯ ⋯ ⋯ ⎟ k k s β¯r β¯2 ⋯ s β¯r β¯r ⎟⎠

(6.3)

the elements of which are

s β¯k β¯k′ =

⎤ n ⎡ m ⎤⎡ m 1 ∑⎢⎢∑ bik (xil − x¯i )⎥⎥⎢⎢∑ bjk ′(xjl − x¯j )⎥⎥ n − 1 l = 1⎣ i = 1 ⎦⎣ j = 1 ⎦ m

= ∑ bik bjk ′sij ;

s β¯k β¯k ≡ s β2¯k .

i, j

Regarding (6.2), there is a Student’s T of degrees of freedom (n-1), β¯k − μ β¯k T (n − 1) = , S β¯k n engendering confidence intervals of probability P

t t β¯k − P s β¯k ⩽ μ β¯k ⩽ β¯k + P s β¯k ; n n

6-3

k = 1, … , r

(6.4)

Truth and Traceability in Physics and Metrology

with respect to the artefacts m

μ β¯k = β0,k +

∑ bik fi ;

k = 1, … , r .

i=1

Hence we observe

t β¯k − P s β¯k ⩽ β0,k + n

m

∑ bik fi i=1

t ⩽ β¯k + P s β¯k ; n

k = 1, … , r

fs,i ⩽ fi ⩽ fs,i . Ultimately, the overall uncertainties read

u β¯k

t (n − 1) = P n

m

m

∑ bik bjk sij i, j

+

∑∣bik ∣fs,i i=1

β¯k − u β¯k ⩽ β0,k ⩽ β¯k + u β¯k ;

(6.5)

k = 1, … , r .

In their capacity to localize the true values β0,k of the estimators β¯k , the inequalities ensure traceability [3–5].

6.2 Weighted least squares A point of interest refers to the introduction of weight factors, their purpose being to boost the influence of the more accurate input data and to reduce the influence of the less accurate ones. There are, however, serious problems. For one, in view of the unknown systematic errors there is no longer a recipe of how to choose weight factors. For another, weight factors are known to shift the least squares estimators at that reducing their uncertainties. Unfortunately, the mechanism of the prevailing formalism does not and cannot localize the true values of the least squares estimators via appropriately constructed uncertainty forks. To date, error calculus draws on a recipe, termed the Gauss–Markoff theorem in order to choose ‘optimal weights’, say an ‘optimal weight matrix’, so that experimenters may hope for minimal diagonal elements within the variance– covariance matrix of the least squares estimators. Just these diagonal elements are commonly taken as the measurement uncertainties of least squares adjustments. As long as there are no unknown systematic errors, this proceedure may just work so-so. But strictly speaking, metrology is not and was never in a position to apply the Gauss–Markoff theorem [6, 7] as it is tied to theoretical variances and covariances which are unknown and inaccessible. This very fact abolishes the widely propagated idea of what is called a Best Linear Unbiased Estimator (BLUE). What is more, due to unknown systematic errors, least squares estimators turn out to be biased, a property causing the Gauss–Markoff theorem to break down anyway.

6-4

Truth and Traceability in Physics and Metrology

Notwithstanding that, least squares adjustments are still in need for weight factors. Meanwhile, there is no rule controlling their choice. Rather, it is up to experimenters to make their selections under whatsoever points of view, the decisive aspect being that the inequality (6.5) continues to hold for any choice of weights. Convenient starting values for the selection of weight factors may be drawn from the uncertainties uxi of the input data x¯i ,

gi = 1 u xi ;

1 = 1, … , m ,

(6.6)

the weights gi being advantageously stored in a diagonal matrix

G = diag{g1, g2, … , gm}. Multiplying the left-hand side of (4.8) by G ,

GAβ ≈ Gx, ¯

(6.7)

β¯ = [(GA)T (GA)]−1(GA)T Gx¯ .

(6.8)

results in

Analogously to (4.15) we set

B˜ = (GA)[(GA)T (GA)]−1 = (b˜ik ) ; i = 1, … , m ; k = 1, … , r . We are sure, a weight matrix does not alter the true solution vector of the flawless linear system. Indeed, (4.9) issues T β0 = B˜ Gx0.

GAβ0 = Gx0;

The results of the modified least squares adjustment read

u β¯k =

tP(n − 1) n

m

m

∑ b˜ik b˜jk gigjsij i, j

+

∑∣b˜ik ∣gifs,i i=1

β¯k − u β¯k ⩽ β0,k ⩽ β¯k + u β¯k ;

(6.9)

k = 1, … , r .

Weight factors are known to shift the numerical values of the estimators and to affect their uncertainties. At that, the uncertainty intervals of the estimators should not lose track of the respective true values. Fortunately, (6.9) discloses the good news that this applies to any set of weights may they act more or less smart. Figure 6.1 summarizes the various influences giving rise to a set of least squares estimators and their uncertainties [1, 2, 8]. Weight matrices may even be chosen by trial and error so as to minimize either the uncertainties of some selected estimators or of all estimators.

6.3 Transfer of true values We might wish to conceive the measuring result (6.5) as an arbitrary ‘metrological intersection’ to inquire: is it possible to preserve the flow of true values from the 6-5

Truth and Traceability in Physics and Metrology

Figure 6.1. Least squares adjustments depend on the input data, the structure of the design matrix A, the chosen weight matrix G and, in particular, the underlying error model.

current stage of concatenation to some following one? This, indeed, would be a prerequisite for traceability. Let us consider some concatenation of a subset of least squares estimators. For instance, the quantities β1 and β2 might implement a third quantity ϕ(β1, β2)—not being involved in the adjustment. We observe

β¯1 ± u β¯1, β¯2 ± u β¯2



ϕ(β¯1, β¯2 ) ± u ϕ¯ .

(6.10)

The uncertainty u ϕ¯ asks for the empirical variance–covariance matrix of the least squares estimators β¯1, β¯2 which we take from (6.3),

⎛ s β¯1 β¯1 ⎜⎜ ⎝ s β¯2 β¯1

s β¯1 β¯2 ⎞ ⎟. s β¯2 β¯2 ⎟⎠

Expanding and linearizing ϕ(β¯1, β¯2 ) and ϕ(β¯1l , β¯2l ) throughout a neighborhood of β0,1, β0,2 produces

∂ϕ ¯ ∂ϕ ¯ (β1 − β0,1) + (β − β0,2 ) + ⋯ ϕ(β¯1, β¯2 ) = ϕ(β0,1, β0,2 ) + ∂β¯2 2 ∂β¯1 ∂ϕ ¯ ∂ϕ ¯ (β1l − β0,1) + (β − β0,2 ) + ⋯. ϕ(β¯1l , β¯2l ) = ϕ(β0,1, β0,2 ) + ¯ ∂β¯2 2l ∂β1 Subtraction yields

∂ϕ ¯ ∂ϕ ¯ (β − β¯1) + (β − β¯2 ) ϕ(β¯1l , β¯2l ) − ϕ(β¯1, β¯2 ) = ∂β¯2 2l ∂β¯1 1l

6-6

Truth and Traceability in Physics and Metrology

so that n

sϕ2 =

1 ∑(ϕ(β¯1l , β¯2l ) − ϕ(β¯1, β¯2 ))2 n − 1 l=1

⎛ ∂ϕ ⎞2 ⎛ ∂ϕ ⎞⎛ ∂ϕ ⎞ ⎛ ∂ϕ ⎞2 = ⎜ ⎟ s β¯1 β¯1 + 2 ⎜ ⎟⎜ ⎟ s β¯1 β¯2 + ⎜ ⎟ s β¯2 β¯2. ⎝ ∂β¯1 ⎠ ⎝ ∂β¯1 ⎠⎝ ∂β¯2 ⎠ ⎝ ∂β¯2 ⎠

(6.11)

Inserting the least squares estimators β¯1, β¯2 into the truncated series expansion of ϕ(β¯1, β¯2 ), we have



m

∂ϕ ) ∑⎜⎝ ∂β¯

(

ϕ(β¯1, β¯2 ) = ϕ β0,1, β0,2 +

bi1 +

1

i=1

⎞ ∂ϕ bi 2⎟(x¯i − μi ) ∂β¯2 ⎠

⎛ ∂ϕ ⎞ ∂ϕ +∑⎜ bi1 + bi 2⎟ fi . ∂β¯1 ∂β¯2 ⎠ i = 1⎝ m

(6.12)

The artefact



m

∂ϕ ) ∑⎜⎝ ∂β¯

(

μϕ = ϕ β0,1, β0,2 +

bi1 +

1

i=1

⎞ ∂ϕ bi 2⎟ fi ∂β¯2 ⎠

(6.13)

guides us to m

ϕ(β¯1, β¯2 ) = μϕ +

⎛ ∂ϕ

∑⎜ ∂β¯ i = 1⎝

bi1 +

1

⎞ ∂ϕ bi 2⎟(x¯i − μi ) ∂β¯2 ⎠

(6.14)

so that there again is a Student’s T of degrees of freedom n − 1,

T (n − 1) =

ϕ(β¯1, β¯2 ) − μϕ Sϕ

n

,

and a corresponding confidence interval of probability P,

t(n − 1) t(n − 1) Sϕ, Sϕ ⩽ μϕ ⩽ ϕ(β¯1, β¯2 ) + ϕ(β¯1, β¯2 ) − n n

(6.15)

localizing the artifact (6.13) in which m

fϕ =

⎛ ∂ϕ

∑⎜ ∂β¯ i = 1⎝

bi1 +

1

6-7

⎞ ∂ϕ bi 2⎟ fi , ∂β¯2 ⎠

(6.16)

Truth and Traceability in Physics and Metrology

notifies the propagated systematic error. Inserting (6.13) in (6.15) produces

t(n − 1) t(n − 1) Sϕ, Sϕ ⩽ ϕ β0,1, β0,2 + fϕ ⩽ ϕ(β¯1, β¯2 ) + ϕ(β¯1, β¯2 ) − n n

(

)

so that

t (n − 1) sϕ + uϕ = P n

m

∑ i=1

∂ϕ ∂ϕ bi1 + bi 2 fs,i . ∂β2 ∂β1

(6.17)

ϕ(β1, β2 ) − u ϕ ⩽ ϕ(β0,1, β0,2 ) ⩽ ϕ(β1, β2 ) + u ϕ ensures the preservation of traceability.

6.4 Fundamental constants of physics Physical constants of fundamental importance such as Planck’s constant, the elementary charge, Avogadro’s constant, the proton mass etc, may be measured directly or drawn from known concatenations with other constants. If there are distinct paths for their calculation, the obtained numerical values may more or less differ. However, the associated uncertainties should display an overlap as a cut set indicates at least something like a necessary condition for compatibility. Another rendering of fundamental constants refers to the least squares adjustment of a selected subset and to deduce those being not involved from the few ones which are [1, 5, 9–12]. Let there be m > r non-linear relationships between fundamental constants K1, K2… Kr and right hand sides aiy¯i ,

ϕi (K1, K2, … , K r ) ≈ ai y¯i ;

i = 1, … , m ,

(6.18)

the ai denoting known constants and the y¯i measured quantities. Formally the true values of the constants K0,1, K0,2, …, K0,r enter via the true values y0,i ; i = 1, … , m of the input data,

ϕi (K 0,1, K 0,2, … , K 0,r ) = ai y0,i ;

i = 1, … , m .

(6.19)

The m relationships (6.18) may be seen to express a non-linear least squares problem. The usual solution strategy linearizes the system with respect to suitable starting values, Kv,i ; i = 1, … , m,

ϕi (K1, … , K r ) = ϕi ,v (Kv,1, … , Kv,r ) +

∂ϕi ∂ϕi (K1 − Kv,1) + ⋯ + (K r − Kv,r ) + ⋯≈ai y¯i , ∂Kv,r ∂Kv,1

(6.20)

and solves it via least squares. The issued estimators are taken to improve the initial starting values and the procedure is repeated via successive iteration.

6-8

Truth and Traceability in Physics and Metrology

The concept of true values becomes slightly blurred as the disparity cropping out in (6.20) is due to measuring errors and also to linearization errors [5].

References [1] Grabe M 1981 On the assignment of uncertainties within the method of least squares, poster paper Second Int. Conf. on Precision Measurement and Fundamental Constants (Washington, DC) 8–12 [2] Grabe M 1986/87 Principles of “Metrological Statistics” Metrologia 23 213–9 [3] Grabe M 2001 Estimation of measurement uncertainties–an alternative to the ISO Guide Metrologia 38 97–106 [4] Grabe M 2001 On measurement uncertainties derived from ‘Metrological Statistics’ Proc. of the Fourth Int. Symp. on Algorithms for Approximations (University of Huddersfield, July 2001) ed I. Anderson Levesley and J C Mason, pp 154–61 [5] Grabe M 2014 Measurement Uncertainties in Science and Technology 2nd edn (Berlin: Springer) 401pp ISBN 978-3-319-04887-1 [6] Seber G A F 1977 Linear Regression Analysis (New York: Wiley) [7] Draper N R and Smith H 1981 Applied Regression Analysis (New York: Wiley) [8] Grabe M 1978 Note on the application of the method of least squares Metrologia 14 143–6 [9] Birge R T 1929 Probable values of the general physical constants Rev. Mod. Phys. 1 1–73 [10] The National Institute of Standards and Technology (NIST) Reference on Constants, Units and Uncertainty (United States Department of Commerce) [11] Guide to the Expression of Uncertainty in Measurement (Bureau International des Poids et Mesures) [12] Grabe M 1996 An alternative algorithm for adjusting the fundamental physical constants Phys. Lett. A213 125–37

6-9

IOP Concise Physics

Truth and Traceability in Physics and Metrology Michael Grabe

Chapter 7 Prospects

Recasting the procedures of error calculus might launch a reshuffle of physical findings.

7.1 Revising the error calculus Measurement uncertainties should be robust and reliable so as to localize the true values of the measurands in each and every case and thus maintain traceability. In essence the procedures are based on the following guidelines. • traceability is established via measurement results localizing the true values of the measurands, • traceability and randomization of systematic errors prove mutually exclusive, • systematic errors should be submitted to worst case estimations, • combining a bunch of measurands, experimenters should maintain equal numbers of repeated measurements in order to express the influence of random errors via generalized confidence intervals according to Student, • unknown systematic errors suspend the common practice to back measurement results by statements of the kind, the physical effect under investigation is expected to happen with a probability of, say, 99.5%, • instead experimenters should expound: it appears quasi safe to assume the uncertainty to localize the true value of the measurand, • as weights in least squares adjustments shift the issued estimators numerically, the measurement uncertainties should nevertheless persistently localize the true values,

doi:10.1088/978-1-64327-096-8ch7

7-1

ª Morgan & Claypool Publishers 2018

Truth and Traceability in Physics and Metrology

• tests on the correspondence between theory and experiment, being at the heart of physics and tied down to a net of physical constants satisfying the demand for traceability, are being judged on the true values of the measurands, • due to the ubiquitousness of unknown systematic errors the procedures of statistical inference, as yet considered sound and well established, break down in general—an observation striking those sciences which consider tests of hypothesis and analyses of variance basic decision-making methods, • the addressed procedures might give rise to new vistas in physics.

7.2 Redefining the SI base units The rendering of physical quantities and physical relationships in general rests on the International System of Units, (SI). This system is founded on seven base units for seven base quantities assumed to be mutually independent. In order to improve the accuracy and the long term stability of the currently used base units, the legally responsible National Metrology Institutes tend to link the future definitions and realizations of the SI to fixed values of fundamental physical constants. The proceedings are fascinating, however stunningly complex as compared to the artifact standards from the past. There is meanwhile a vital point to be addressed: the coming base units result from concatenations of various physical laws and fundamental physical constants and thus ask for robust uncertainty assessments. If assessed according to the presently worldwide used formalism, their resulting uncertainties might come out too small and hence induce misinterpretations.

7-2

IOP Concise Physics

Truth and Traceability in Physics and Metrology Michael Grabe

Chapter 8 Epilogue

Thirty spokes unite in the hub, but it is the void that makes the wheel. Tao Te Ching – Lao Tzu

8.1 Verification by experiment It has been proposed that the error calculus at issue should be elucidated via metrologically realistic examples. This however would require co-operation with experimenters. So far we can state that the given procedures formally localize the true values of the measurands by properly guided assessments, say, consecutively combined inequalities which ensure the respective localizations on their own accord —meaning that renewed, belated ‘verifications’ of these concatenations, localizing the true values searched for by numerical data would not add much. Nevertheless, a wide scope of applications of the error calculus at issue would be desirable, the precarious point meanwhile refers to the thorny procedures to assess the unknown systematic errors.

8.2 Deciding by reasoning

In my view, the stumbling block of the present-day controversy over error calculus is the question of whether unknown systematic errors should be interpreted as random variables or, alternatively, as quantities that are constant in time. Here, following basic reasoning, experimenters have hardly any option: physics itself sets the road map.

8.3 What is right, what is wrong?

Traditional experimental reasoning rests on alleged assurances. As it happens, repeated measurements seem to substantiate statements of the kind ‘theory and experiment agree within, say, 99.5%’. De facto, however, such probabilities are unjustified and, more than that, might stifle justified objections. For physical reasons, unknown systematic errors hardly relate to statistics. To make the point as clear as necessary: inasmuch as uncertainties include random and systematic error components side by side, confidence levels for overall uncertainties do not seem to be available; the small simulation at the end of this section illustrates the point.

Indeed, treating unknown systematic errors according to basic physical principles alters the interpretation of measurement results. This does not imply that metrologists have lost their orientation. Rather, it implies that they should shift the interpretation of measurement results from statistics to plain uncertainty statements, and hence to the simple question of whether or not it appears sensible to assume that a given uncertainty localizes the true value of the measurand aimed at. The error margins ±f_s of an unknown systematic error f may not and cannot be based on subjective judgement; rather, they should be assessed according to realistic metrological conditions, however these happen to occur. In each and every case, those margins should be open to scrutiny. Though unknown systematic errors are apt to cause a bothersome vagueness, probability declarations offer no way out. If anything, the only practicable way to interpret measurement results appears to be to quote uncertainties that are quasi-safe and to state that, in consequence of all accessible considerations, the true value of the physical quantity in question may be expected to lie within the specified uncertainty margins. This, it seems, is all we can say.

Beyond this, further burdens loom: with respect to physical truth, metrology is not in a position to rely on necessary and sufficient conditions. Though a tentatively issued physical concept can be proved wrong, it cannot be proved right. There is but one line of reasoning by which to judge the agreement between theory and experiment: if theory and experiment agree within the limits of the measurement uncertainty, no contradiction has been disclosed; this is a necessary condition for the queried theory to be right. A sufficient condition, unfortunately, does not exist: other theories might subsist and agree within the given uncertainty margins. This is of critical interest in the case of ‘tiny’ physical effects with far-reaching consequences of a fundamental character.

Though concealed by the providence of nature, the true values of the measurands constitute the backbone of physics. Hence, it is deemed appropriate to design measurement uncertainties from the start to be robust and reliable, so as to localize the true values of the measurands. Only this will put metrology in a position to appraise new ideas.
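That confidence levels lose their footing once a constant systematic error enters can be made plausible by a small simulation: a fixed offset f is added to normally scattered readings, and the empirical coverage of a nominal 95% Student interval is counted. The parameters true_value, sigma, f, n and trials are hypothetical and serve illustration only.

    # Simulation sketch: a constant systematic offset spoils the nominal coverage
    # of a Student confidence interval (illustrative parameters only).
    import math
    import random
    import statistics
    from scipy.stats import t

    random.seed(1)
    true_value, sigma, f, n, trials = 10.0, 0.05, 0.08, 10, 20000
    t_factor = t.ppf(0.975, n - 1)   # two-sided 95% Student factor

    covered = 0
    for _ in range(trials):
        # every reading carries the same constant offset f on top of random noise
        data = [true_value + f + random.gauss(0.0, sigma) for _ in range(n)]
        x_bar = statistics.mean(data)
        half_width = t_factor * statistics.stdev(data) / math.sqrt(n)
        if abs(x_bar - true_value) <= half_width:
            covered += 1

    print(f"nominal coverage 95%, empirical coverage {covered / trials:.1%}")
    # With f = 0.08 and sigma/sqrt(n) ≈ 0.016 the interval almost never contains
    # the true value; widening it by a worst-case bound f_s >= |f| restores
    # localization, but no probability statement attaches to the widened interval.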

