Wei Zhou · Zhiqi Li · Lina Bai · Xiaoning Fu · Bayi Qu · Miao Miao
The Border Effect in High-Precision Measurement
Wei Zhou, School of Mechano-Electronic Engineering, Xidian University, Xi'an, Shaanxi, China
Zhiqi Li, School of Mechano-Electronic Engineering, Xidian University, Xi'an, Shaanxi, China
Lina Bai, School of Mechano-Electronic Engineering, Xidian University, Xi'an, Shaanxi, China
Xiaoning Fu, School of Mechano-Electronic Engineering, Xidian University, Xi'an, Shaanxi, China
Bayi Qu, School of Information Engineering, Chang'an University, Xi'an, Shaanxi, China
Miao Miao, School of Mechano-Electronic Engineering, Xidian University, Xi'an, Shaanxi, China
This book was supported by grants from National Natural Science Foundation of China projects 11773022 and 11873039. We hereby express our gratitude.

ISBN 978-981-10-3592-0    ISBN 978-981-10-3593-7 (eBook)
https://doi.org/10.1007/978-981-10-3593-7

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Foreword
In measurement technology, the resolution of the detection system or device is critical to the measurement precision of equipment and instruments. The traditional understanding often takes this resolution as the measure of measurement precision. Because resolution is limited, a fuzzy area of the measured value usually arises in the detection of various quantities, including the measurement of time and space, digital processing, the border of changes in a sensed signal, and other physical quantities. Through a large number of systematic experiments and theoretical analyses, this book shows that measurement precision can generally be improved by two or three orders of magnitude by exploiting the stability of the border of the fuzzy area associated with the measurement resolution. This has a universal effect on the progress of measurement and instrument technology in general.

Compared with traditional methods, it is important to explore the fuzzy area by making full use of the stability of the resolution and detecting the boundary of the fuzzy area more accurately; however, the most obvious improvement in precision appears in derived quantities, whose measurement can be greatly improved by exploiting the border effect. For example, frequency, frequency stability and phase noise are derived from the phase difference; distance, distance change, velocity and acceleration are derived from the position quantity of a length; and the change, rate of change and many dependent variables of a measured signal are derived from the jump border of analog-to-digital conversion and its time of occurrence, as well as from the correspondence between multiple jump borders and their times of occurrence, more widely wherever "quantization error" arises in digital processing. In such derived quantities the "quantization error" disappears, and the measurement precision is instead governed by the stability of the quantization. Sensors face a wide range of measured objects. Through continuous digital processing of the sensed signal, not only do the characteristics of the measurement fuzzy area become clearer, but the dynamic characteristics of the signal can also be measured with higher precision in combination with digitization, while the sensing stability of the sensor is also brought into play.

The implementation examples in this book refer to and select the role of phase information in time-frequency parameter measurement, as well as its reference value for wider measurement. This is the natural line of thinking and the easiest way to apply the border effect. Frequency and frequency stability are derived from the accumulation of phase and its changes. In particular, we have developed a phase processing method that works over a wide frequency range, so that its application breaks the restriction that the nominal frequency values must be the same. These methods are adaptable to a wider range of measurement purposes. Through detection of the phase state, some special phase states, such as the fuzzy area of phase coincidence resolution, can be detected intuitively, and it is easy to capture the border information of the fuzzy area. The same or similar methods can easily be carried over to a wide range of digital measurement and processing.

The authors of this book have been engaged in the research and teaching of instrument and measurement technology for between just over ten and fifty years and have maintained in-depth cooperation and exchanges with the international community. We are well aware that progress in basic measurement principles and methods plays an important role in the development of instruments and measurement technology. Although progress in measurement technology involves the physical principles and structures of a large number of different instruments, as well as progress in related science and technology as a whole, no measurement of any quantity can be separated from the progress of new measurement methods. This is the focus of this book. This book is the result of a close combination of theoretical analysis and experimental practice. The workload was very large, owing to the large amount of experimental work and principle analysis involved. More than 30 of our teachers and graduate students participated in this work, which took eight years. I hope our work can be beneficial to different readers.

Xi'an, China
Wei Zhou Zhiqi Li Lina Bai Xiaoning Fu Bayi Qu Miao Miao
Contents
1 Preamble
   1.1 Introduction
      1.1.1 Background and Significance of Precision Measurement
      1.1.2 Overview of Precision Measurement
   References

2 Basics of Measurement and Test Precision and the Effect of Measurement Methods on Measurement Precision
   2.1 Measurement Errors
      2.1.1 Definition of Measurement Error
      2.1.2 Representation of Measurement Errors
      2.1.3 Classification of Measurement Errors
      2.1.4 Errors in Indirect Measurements
      2.1.5 Synthesis of Measurement Errors
      2.1.6 Small Error Criterion
   2.2 Data Processing
      2.2.1 Valid Figures
      2.2.2 Coarse Error Handling
      2.2.3 Simple Processing of the Results Obtained from the Measurement
      2.2.4 Least Squares Method
   2.3 Various Test Measures and Their Classification
      2.3.1 Test Measurement Methods
   2.4 Traditional Digital Measurement and Processing
   2.5 Effect of Measurement Resolution on Measurement Accuracy in Conventional Methods
   References
3 Border Effects and Its Principle Analysis
   3.1 Measurement Resolution and Stability of Different Quantities
      3.1.1 Measurement Resolution
      3.1.2 Stability of the Resolution
   3.2 Fuzzy Areas of Measurement Resolution
      3.2.1 Measuring Fuzzy Areas and Their Borders
      3.2.2 Concentrated and Discrete Fuzzy Regions
   3.3 Quantization Resolution and Quantization Stability in Digital Conversion
      3.3.1 Preliminary Analysis of Quantization Resolution and Quantization Error in Analog-to-Digital Conversion
      3.3.2 Stability of Quantization Resolution in Analog-to-Digital Conversion
   3.4 Border Effects and Manifestations
      3.4.1 Border Effects
      3.4.2 Manifestations of Border Effects
   3.5 Summary
   References

4 Precision Frequency/Phase Difference Measurements Based on Border Effects
   4.1 Precision Frequency Measurements Based on Border Effects
      4.1.1 Common Frequency Measurement Methods Introduction
      4.1.2 Application of Border Effects in High-Precision Frequency Measurements
   4.2 Precision Phase Difference Measurement Based on Border Effects
      4.2.1 Phase and Conventional Phase Differences Measurements
      4.2.2 Study of Phase Relationship Patterns Between Periodic Signals
      4.2.3 Phase Difference Measurement Based on Border Effects
   4.3 Measurement of Transient Frequency Stability Based on Border Effects
      4.3.1 Transient Frequency Stability Characterization
      4.3.2 Transient Stability Measurements Based on Border Effects
   4.4 Summary
   References
5 Digital Measurement Techniques and Digital Dynamic Measurements Based on Border Effects
   5.1 Border Effects in Direct Digital Dynamic Measurements
      5.1.1 Analog and Digital Measurement Methods
      5.1.2 Fundamentals of Border Effects in ADCs
      5.1.3 Clock Cursor Relationships in ADC Sampling
      5.1.4 ADC Quantization Error
      5.1.5 Quantifying Border Stability as Well as Uniformity
      5.1.6 Principles of Digital Measurement of Border Effects
      5.1.7 Quantifying Ordinality and the Data Processing Process
      5.1.8 Suppression of Quantization Errors and Precision Improvement
   5.2 AC Voltage RMS Measurement Based on Border Effects
      5.2.1 Method of Measuring the RMS Value of AC Voltage
      5.2.2 Data Acquisition and System Hardware Components
      5.2.3 Curve-Fitting Principles in Border Effects
      5.2.4 Amplitude-Frequency Characteristics and Analysis of Border Effects
   5.3 Principle Advantages of Precision Measurements Using Border Effects
   5.4 Summary of this Chapter
   References

6 Developments in Digital Linear Phase Comparison Techniques
   6.1 Linear Phase Comparison Technique and Its Important Role
   6.2 Conceptual Advances in the Principle of Phase Comparison and High Linearity Phase Comparison
      6.2.1 Definition of Conventional Phase Comparison and Comparison Method
      6.2.2 Phase Difference Variation Pattern Between Different Frequency Signals
      6.2.3 High Resolution of Linear Phase Comparison and Its Advantages Compared to Other Methods
   6.3 Development of Linear Phase Comparison Techniques
      6.3.1 Technical Progress of Linear Phase Comparison from Analog to Digital, Multiple Processing Methods of Digital Linear Phase Comparison
      6.3.2 Experimental Verification and Error Analysis When Complex Clock Frequency Signals Are Used
   6.4 Comparison of Frequency Counting Processing Methods and Linear Phase Direct Processing Methods Based on Precision Digital Phase Information Processing
      6.4.1 Frequency Counting Processing Method Based on Precision Phase Processing
      6.4.2 Method of Direct Phase Processing, Measurement
   6.5 Frequency Control on the Basis of Digital Linear Phase-Ratio Technique
      6.5.1 Basic Work and Experiments on the Use of Digital Linear Phase Comparison for Frequency Control
   6.6 Substantial Improvement in the Precision of Digital Linear Phase Comparison Using Border Effects
      6.6.1 The Combination of Digital Linear Phase Comparison and Digital Border Effects
   6.7 Comparison of Digital Linear Phase Comparison Techniques with the State-of-the-Art International DMTD Method
   References
7 Border Processing Method-Based Sensor Technology
   7.1 Introduction of Sensors
      7.1.1 Sensors and Their Measurement Applications
      7.1.2 Sensor Performance Specifications
      7.1.3 Causes and Effects of Dynamic Errors
   7.2 Sensor Signal Border Processing-Related Techniques
      7.2.1 Dynamic Fuzzy Zones Problems and Overcoming
      7.2.2 Border Sensing Processing Technology Implementation Method
      7.2.3 Clock Synchronization in Dynamic Sensing Techniques
      7.2.4 Border Sensing Measurement Systems
   7.3 Summary
   References
8 Border Effect in Virtual Reconstruction
   8.1 The Causality of Traditional Time–Frequency Measurements and the Significance of Their Change
   8.2 High-Resolution, No-Time Interval, Phase-Continuous Frequency Measurements
   8.3 Virtual Reconstruction and Its Role
   8.4 Phase Processing of the Measured Signal Using Virtual Reconstruction
      8.4.1 Acquisition of Continuous Phase Information
      8.4.2 Cumulative Phase Difference Processing
   8.5 Implementation of Virtual Reconstruction Multiple Functions in Frequency Control
   References
9 Research Developments in the Phase Question and Its Role in Advances in Time–Frequency and the Broader Field
   9.1 Overview
   9.2 Conceptual Expansion and Deepening
   9.3 Time–Frequency Fingerprinting Phenomena and Their Applications
      9.3.1 High-Resolution Measurements as a Basis for Time–Frequency Fingerprinting Phenomena
      9.3.2 Time–Frequency Fingerprinting of Precision Crystal Oscillators
      9.3.3 Time–Frequency Fingerprinting of Passive Atomic Clocks
      9.3.4 Exploring the Application of the Time–Frequency Fingerprinting Phenomenon
   9.4 Phase Noise Measurement Technique with Self-Preserving Phase Quadrature
      9.4.1 Basics About Phase Noise Measurement Techniques
      9.4.2 Phase Noise Measurement Techniques Based on Digital Linear Phase Comparison Techniques
   9.5 Application of Phase Processing and Measurement Techniques in Atomic Clocks
   9.6 Application of Digital Time–Frequency and Phase Processing Methods to Precision AC Voltage Measurements and Waveform Sampling
   9.7 Prospects for Further Development of the Phase Question
   References
About the Authors
Wei Zhou graduated in automatic control from Xi'an Jiaotong University in 1970 and earned his Doctor of Engineering degree from Shizuoka University, Japan, in 2000. From 1970 to 1988, he was engaged in scientific research and measurement at the Shaanxi Provincial Institute of Metrology and Measurement, serving as engineer, senior engineer and chief engineer. Since 1988, he has been engaged in teaching and research at Xidian University, serving as senior engineer, professor and advisor of Ph.D. candidates. He has more than 10 years of joint research experience with institutions of higher learning, research institutes and enterprises in the United States, Japan, Canada and Europe. He is a member of the FCS Technical Steering Committee of the IEEE, a special reviewer of Chinese Physics Letters, a member of the Executive Council of the China Instrument and Control Society, a member of the editorial board of the Journal of Scientific Instrument, a member of the Academic Committee of the Key National Defense Science and Technology Laboratory of Measurement and Calibration Technology, and a part-time professor at Shenyang University of Technology. He has obtained 2 National Invention Awards and 1 Chinese Instrumentation Invention Award. His areas for training master's students are measuring techniques and instruments, and detection techniques and automatic devices.

Zhiqi Li received her doctorate from Xidian University in 2012. She is a Doctor of Engineering, associate professor and doctoral supervisor. She has been engaged in research on time and frequency measurement and control technology and on automatic detection technology for precision electronic measuring instruments. She has won a second prize of the Science and Technology Progress Award of the Instrument and Meter Society, second prizes of the Science and Technology Progress Award of the Ministry of Education for Universities, and a second prize of the Science and Technology Progress Award of Shaanxi Province.

Lina Bai received her doctorate from Xidian University in 2015 and is now a professor and doctoral supervisor at Xidian University. Her main research interests are time-frequency measurement and control technology, satellite navigation, and measurement and instruments. She has presided over 21 vertical projects, including international cooperation projects of the Ministry of Science and Technology and other categories of projects, won 10 awards for scientific research achievements, published 2 academic works and published 27 academic papers, including 9 indexed in SCI and 17 in EI.

Dr. Xiaoning Fu received his master's and doctoral degrees from Xidian University, Xi'an, China, in 1994 and 2005, respectively. Since August 2005, he has worked at Xidian University as an associate professor. He has worked in the development of petroleum instruments, participated in major scientific and technological research projects of the Oil and Gas Group Corporation, and won the second prize for scientific and technological innovation of the Corporation; his research on the measurement method for testing overdamped geophones has been incorporated into the standards of China's oil and gas industry. After 2000, he joined Xidian University and has long been engaged in photoelectric positioning, ground-penetrating missile/missile tracking and related topics. He has written four books, including Photoelectric Positioning and Photoelectric Countermeasures and Photoelectric Detection Technology and System, and participated in the preparation of Basic Course of Electronic Packaging Technology and Equipment Technology, C51 Foundation and Application Examples, and others.

Bayi Qu is currently an associate professor at the School of Information Engineering, Chang'an University. He graduated from Xidian University, majoring in measurement and control technology and instruments. His current research directions are time-frequency measurement and control technology, navigation equipment and related signal processing technology, multi-sensor fusion technology, and information collection and processing technology in the field of intelligent transportation. His main scientific research projects include research on miniature atomic clock circuits based on digital compensation, GPS-locked secondary frequency standards, research on precise time interval measurement based on the length vernier method, and research on time-frequency processing and measurement based on time-phase and time-space relationships. In terms of academic and scientific research achievements, he has published more than 10 academic papers in recent years, of which 7 are indexed in EI. His representative work includes a new type of precise time difference measurement technology, the development of a high-precision time-domain frequency stability measuring instrument, and frequency conversion compensation technology to improve the frequency-temperature characteristics of rubidium clocks and to develop high-precision time interval measuring instruments.

Miao Miao received her B.E. in Automation Engineering in 2003 and her M.E. and Ph.D. degrees in Instrumentation Science and Engineering from Xidian University, Xi'an, Shaanxi, China, in 2007 and 2012, respectively. Since April 2007, she has been with the School of Mechano-Electronic Engineering, Xidian University, as a lecturer.
Dr. Miao focuses on research in time and frequency technology. Her recent research interests include time and frequency signal transmission and synchronization, and intelligent aging compensation for OCXOs in batch production.
Chapter 1
Preamble
Abstract This chapter introduces the background and significance of precision measurement, the current research status and development trends of test and measurement technology at home and abroad, the key problems in the current measurement field, and the outlook for new measurement paths. In particular, it shows that only fundamentally innovative measures can bring about a leap in measurement precision across the whole field of metrology and testing. Users require stronger functions, higher precision, wider measurement ranges, faster response times, more extensive software coordination and compensation, and targeted intelligent processing and application. This book aims to solve these problems by means of the border effect.

Keywords Precision measurement · Measurement and testing methods · Innovation

Metrology is the science that studies the principles and methods of measuring physical quantities, and it is the source of information technology; it involves fields such as measurement theory, measurement technology, measurement standards and measurement practice. Measurement standards, high-resolution measurement devices and precision monitoring devices are constantly evolving toward higher accuracy, precision and stability. However, a large number of measurement techniques based on natural principles are limited in resolution, sensitivity and precision under different measurement conditions. For example, in time or phase measurements, the response time of switching circuits is often limited to the nanosecond (ns) level, while high-precision phase or time measurement and processing require the picosecond (ps) level or better; in digital processing, the number of quantization bits generally ranges from 8 to 24 bits, but the measurement specifications expressed digitally are sometimes far beyond what the quantization error of the ADC allows. Since limited device resolution leads to poor precision in, for example, voltage measurements, how to use the limited resolution of devices to improve measurement precision is an important research direction. How to take advantage of the high stability of this resolution, and thus significantly improve measurement precision within the resolution limits of existing and future measurement devices, is a challenging task and a research direction for improving the precision of all kinds of instruments and measurements.
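As a rough numerical illustration of the gap mentioned above between quantization resolution and the precision actually demanded, the short sketch below (an illustrative example with assumed numbers, not data from the book's experiments) computes the least-significant-bit (LSB) step of an ideal N-bit ADC over a hypothetical 10 V full-scale range and compares it with an assumed microvolt-level target resolution.

```python
# Minimal sketch: LSB size of an ideal N-bit ADC over a hypothetical
# 10 V full-scale range (assumed values, for illustration only).
full_scale_volts = 10.0          # assumed +/-5 V input range
target_resolution = 1e-6         # assumed 1 uV target resolution

for bits in (8, 12, 16, 24):
    lsb = full_scale_volts / (2 ** bits)   # quantization step in volts
    print(f"{bits:2d}-bit ADC: LSB = {lsb:.3e} V, "
          f"ratio to 1 uV target = {lsb / target_resolution:.1f}x")
```

Even at 24 bits the LSB of this hypothetical converter is still well above the 1 uV target, which is the gap that the border effect described in this book is meant to bridge without simply adding conversion bits.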
Analog instruments have the disadvantages of single function, large size and low precision. Measuring instruments are therefore moving toward digital and intelligent implementations, which resolve these shortcomings of analog instruments to some extent. Digital instruments also avoid the pointer and dial reading errors caused by human factors in traditional analog instruments, and they process data and operate faster. With the development of intelligent and integrated digital circuits, various types of digital instruments offer high precision, speed, small size and strong immunity to interference, and they are widely used in industrial automation, measurement and control. However, a contradiction between high quantization precision and high sampling rate exists in all digital measurements, and it restricts the approach of increasing the number of conversion bits to reduce the quantization error. Improving the precision of digital measurement by eliminating or reducing the quantization error also means improving the measurement precision of various physical quantities, which brings great economic benefit and scientific value and promotes the development of other science and technology. To address this problem, we can apply the idea of the border effect to digital measurement and use the stability of the quantization borders to improve the precision of measurement. For example, for a typical A/D converter with mV-level measurement resolution, the sensitivity at the conversion border can be as small as the µV level or even lower. With the help of this property of the A/D converter, it is possible to determine where the measured value lies between two adjacent jump borders as a fraction of the full step, and thus obtain a higher-precision measurement. Over the years, through careful study of measuring devices for arbitrary measured quantities, we have found a regular phenomenon: the resolution of a measuring device is often much poorer than the stability of that resolution. It is impossible, with conventional methods, to measure with higher precision than the measurement resolution of the system or device being used. However, measurement in many different fields constantly demands new and higher specifications, which makes it necessary to develop a measurement-technology pathway of universal applicability and to raise the whole technical field. This makes the use of the high stability of the device resolution to improve the precision of different devices or techniques an attractive route to higher measurement precision. The voltage value is uncertain within a quantization interval, but the relation between the measured value and time at the border of the interval is highly precise, depending only on the stability of the digital quantization value, the uniformity of quantization, and so on. Moreover, the stability of the border of the quantization fuzzy region is related only to the physical characteristics of the data converter and the processing process.
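The following sketch illustrates the basic idea just described. It is not code from the book; the signal, sampling rate and ADC parameters are all assumed for illustration. It quantizes a slow ramp with a coarse ADC, locates the sample instants at which the output code jumps (the borders of the quantization fuzzy regions), and then uses the fraction of time elapsed between two adjacent borders to estimate the input at an intermediate instant far more finely than one LSB.

```python
import numpy as np

# Assumed parameters (illustrative only): 8-bit ADC, 1 V full scale,
# a slow linear ramp sampled at 1 MHz.
fs = 1.0e6                      # sampling rate (Hz)
bits, vref = 8, 1.0             # ADC resolution and full-scale range
lsb = vref / 2**bits

t = np.arange(0, 0.01, 1 / fs)            # 10 ms record
ramp = 0.05 + 20.0 * lsb * t / t[-1]      # ramp spanning about 20 LSB
codes = np.floor(ramp / lsb).astype(int)  # ideal quantizer output

# Border samples: indices where the output code jumps to a new value.
jumps = np.flatnonzero(np.diff(codes)) + 1
border_times = t[jumps]                   # times of the code transitions
border_values = codes[jumps] * lsb        # voltages of the crossed thresholds

# Estimate the input at some instant t_x by interpolating between the
# two adjacent borders instead of reading the coarse code directly.
t_x = 0.0042
k = np.searchsorted(border_times, t_x) - 1
frac = (t_x - border_times[k]) / (border_times[k + 1] - border_times[k])
v_est = border_values[k] + frac * lsb

print(f"true value      : {np.interp(t_x, t, ramp):.6f} V")
print(f"plain ADC read  : {codes[np.searchsorted(t, t_x)] * lsb:.6f} V")
print(f"border estimate : {v_est:.6f} V (LSB = {lsb * 1e3:.2f} mV)")
```

With these assumed numbers the border estimate tracks the ramp to a small fraction of an LSB, while the direct code readout is limited to the roughly 3.9 mV step; in a real converter the achievable gain is set by the stability and uniformity of the transition thresholds, exactly as the text emphasizes.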
Once the structure of the measurement system is fixed, exploiting the stability of the quantization fuzzy region frees us from the traditional approach of relying on ever more ADC quantization bits to achieve high-precision measurement. An analog-to-digital converter contains adjacent, step-like quantization fuzzy regions, and if the moment of the jump between quantization fuzzy regions can be determined with high precision and sensitivity, then the measurement points in that region carry correspondingly high accuracy and measurement precision; this is the border effect idea based on the stability of the borders of the quantization fuzzy regions in the analog-to-digital converter. Using these high-precision sampling points, information much closer to the original signal can be obtained and used in subsequent data processing, achieving higher precision and suppressing quantization errors. Applying the border effect principle to a low-resolution, high-sampling-rate ADC can reconcile accuracy and rate, providing a feasible approach for modern high-precision, high-speed digital measurement. This is an important innovation compared with traditional measurement methods that rely only on the resolution of the measurement device.

In the field of measurement technology, progress in specific physical or specialized technical principles may improve only one aspect, but the problems of resolution, stability of resolution and precision of measurement cover all areas of measurement technology. The corresponding studies include: measurement resolution, stability of resolution, measurement fuzzy zones caused by limited measurement resolution, the structure of fuzzy zones, their classification into concentrated and discrete fuzzy zones and into periodic and non-periodic fuzzy zones, the borders of fuzzy zones and their border effects, precision improvement factors, and so on. It is on the basis of our extensive research on this subject that this book builds up the measurement precision of different quantities substantially by exploiting the border effect of measurement resolution, which is present throughout the field of detection.

Industrial production and scientific experiments are processes of acquiring, developing, transmitting, processing and applying information. In scientific experiments and industrial production, especially when automatic detection and control systems acquire raw information, a variety of sensors are used, and further processing converts the raw signal into an electrical signal, most often a voltage, that can easily be transmitted and processed. Therefore, improving the precision of voltage measurement means improving the precision of measurement of physical quantities in many fields, generating greater economic benefits and promoting the development of other science and technology. The quantization error in digital processing is completely different in nature from the stability of the quantization value reflected at the signal-to-digital change border. The latter exists alongside digitization but yields a series of high-precision correspondences between the measured value, the time of occurrence and the digital change point, and this precision is far better than the quantization error. The difference between using this state that accompanies digitization and relying on the usual quantization principles is that the dynamic application must be coupled with fitting algorithms, the measured objects often differ, and there are new requirements on the device, with more emphasis on its stability and speed rather than only on the number of conversion bits.
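As a concrete picture of the "fitting algorithm coupled with border information" just mentioned, the sketch below (an illustrative assumption, not the book's implementation) collects the code-transition samples of a coarsely quantized sine wave and least-squares fits a sinusoid of known frequency to those (time, threshold-voltage) pairs; the fitted amplitude, and hence the RMS value, is recovered far more finely than the quantization step.

```python
import numpy as np

# Assumed test conditions (illustrative only): 50 Hz sine of amplitude
# 0.8 V, sampled at 1 MHz by an ideal 8-bit ADC with 2 V full scale.
fs, f0 = 1.0e6, 50.0
amp_true, bits, vref = 0.8, 8, 2.0
lsb = vref / 2**bits

t = np.arange(int(fs / f0)) / fs                     # one full period
x = amp_true * np.sin(2 * np.pi * f0 * t)
codes = np.floor((x + vref / 2) / lsb).astype(int)   # offset-binary quantizer

# Keep only the border samples: instants where the output code changes.
j = np.flatnonzero(np.diff(codes)) + 1
t_b = t[j]
# The input crossed the threshold between the old and the new code.
v_b = np.maximum(codes[j], codes[j - 1]) * lsb - vref / 2

# Least-squares fit x(t) = a*sin(w t) + b*cos(w t) + c to the border pairs.
w = 2 * np.pi * f0
A = np.column_stack([np.sin(w * t_b), np.cos(w * t_b), np.ones_like(t_b)])
(a, b, c), *_ = np.linalg.lstsq(A, v_b, rcond=None)
amp_fit = np.hypot(a, b)

print(f"LSB             : {lsb * 1e3:.2f} mV")
print(f"true amplitude  : {amp_true:.6f} V, RMS {amp_true / np.sqrt(2):.6f} V")
print(f"fitted amplitude: {amp_fit:.6f} V, RMS {amp_fit / np.sqrt(2):.6f} V")
```

Note that the fit uses only the transition thresholds and their time stamps, never the coarse code values between transitions; this is the sense in which the quantization error is suppressed while the stability of the quantization borders sets the attainable precision.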
A large number of experiments have demonstrated that border processing techniques are significantly better than traditional measurement techniques, especially in measuring the RMS value of AC voltages, measuring precision time intervals, improving the precision of digital linear phase comparison, and in the digital dynamic application of sensors. Precisely because these are dynamic applications aimed at a changing quantity, a large amount of discrete yet accurate information can be obtained from the digitization and applied, thereby suppressing the quantization error and improving the precision of the measurement.

In view of the current situation and development trends in the precision measurement of physical quantities at home and abroad, and in response to the problems of limited device resolution and the need to improve measurement precision, this book proposes a series of new concepts and their applications, including fuzzy regions of detection (concentrated and discrete fuzzy regions), stepwise multi-fuzzy regions in digital measurement, fuzzy region borders and border effects. The border effect is an innovative method for improving the precision of measurement and control of physical quantities. All measurement and control has limited measurement resolution and control precision, which leads to uncertainty and fuzzy zones. Because the measured value changes sharply between the inside and the outside of the fuzzy zone, the instrument can identify the crossing of the zone much more easily, so the detection precision is substantially better than the measurement resolution itself and is instead expressed by the higher stability of the measurement resolution, which significantly improves the final precision of the physical-quantity measurement. This phenomenon is universal in the measurement of physical quantities, and we call it the border effect. The border effect means applying the high stability of the border of the fuzzy area of measurement resolution and control precision to achieve high-precision measurement and control, and at the same time finding new measurement methods by analyzing small changes in the quantized border of physical quantities. The border effect offers significant advantages in frequency signal measurement, voltage measurement, resonator spectral line compression, digital measurement, length measurement and so on, and meets the needs of basic scientific research in precision measurement and control from many directions. Intensive research and technological development in this area can therefore lead to a more advanced level of precision measurement and control.

The use of the border effect enables, on the one hand, precision measurement of a wide range of objects with the exceptionally high resolution indicated by the stability of the resolution of the detection device; for some measured quantities the final measurement precision can be improved by two to three orders of magnitude. On the other hand, measurements of the transient characteristics of the measured signal, and of extremely weak signals, which are difficult to achieve with conventional methods, can also be realized. For measurement targets and accuracies that are normally out of reach, the border effect can provide corresponding solutions. Especially in dynamic applications, where the fuzzy region of the sensor falls in the range that limits the resolution of the sensed information, the digitally characterized measurement appears as a large number of discrete border detection values representing a value (voltage)-time relationship whose detection precision is several orders of magnitude better than the quantized value itself. Capturing this border information and measuring the measured quantity indirectly can substantially suppress the quantization error and improve the measurement precision of the various quantities. In terms of response time, the border effect can also significantly improve the ability to characterize short-time and transient changes of the measured signal, which matters for capturing the characteristics of high-speed moving targets; it can make fast-moving phenomena appear quasi-static in the measurement, which is important for the development of future measurement technology. By measuring the RMS value of an AC voltage, it has been verified that the precision can be tens to hundreds of times better than the quantization resolution. This application has a wide market in the whole field of detection technology, especially in combination with sensor technology for border sensing, which facilitates improved measurement precision in dynamic applications.

Another outstanding aspect of the border effect is its combination with many new ideas for the development of measurement technology. For example, the high-resolution, interval-free, phase-continuous frequency measurement achieved with the help of the border effect, combined further with virtual reconstruction of the signal to measure the change in phase difference of signals of arbitrary frequency, has led to definite advances in time-frequency measurement and control technology. The border effect thus provides a directional means for the development of future measurement techniques. The border effect has been demonstrated in the time-frequency field, where relatively high-precision measurements have been obtained with devices of low resolution; this relies on the high stability of the device resolution and the reasonable use of the border effect in the measurements. In the precision measurement of a wide range of physical quantities, the theory of the border effect can be used to improve existing measurement methods and achieve higher precision on the basis of the existing measurement and control precision. This book also presents preliminary analysis and software simulation of the use of the border effect in length measurement, voltage measurement and sensor measurement; the actual experimental situation is still being explored. The border effect has considerable practical application in a wide range of measurement areas. In some of today's most advanced measurement techniques, certain methods have in fact used the idea of the border effect unintentionally to achieve high measurement precision, but the theory has not previously been presented systematically and comprehensively. This book provides a comprehensive explanation of the theory of the border effect, which offers guidance for the development of all kinds of measurement techniques in the future. On the basis of existing measurement precision, using the border effect to improve measurement precision is a cost-effective method, and it also provides new ideas for measurement and control techniques that are otherwise unattainable, for example the frequency linking of microwave quantum frequency scales and optical frequency scales, and the compression of spectral line width and increase of equivalent Q value for various precision frequency sources.
The implementation of these new technologies can help us keep pace with advanced international technology development better and faster, promote economic development and enhance social benefits.

The book is presented in nine chapters, moving from the simple to the deep and from the basic to the advanced, with the idea of border effects running throughout. The chapters are arranged as follows.

Chapter 1: Preamble. It introduces the background and significance of precision measurement, the current research status and development trends of test and measurement technology at home and abroad, the key problems in the current measurement field and the outlook for new measurement paths. In particular, it shows that only fundamentally innovative measures can bring about a leap in measurement precision across the whole field of metrology and testing.

Chapter 2: Basics of Metrology Test Precision and the Influence of Measurement Methods on Measurement Precision. It introduces measurement error, data processing, the various test measurement methods and their classification, traditional digital measurement and processing, and the influence of measurement resolution on measurement precision in traditional methods, and it also injects the idea of fuzzy-area border effects into the error analysis. Measurement methods have a significant impact on measurement precision and resolution and are one of the most critical factors determining the level of measurement technology. This chapter gives a detailed introduction to test measurement methods, mainly including direct measurement, indirect measurement, differential measurement methods, coincidence measurement methods and intermediate source methods. The measurement precision of most measurement methods is limited by the resolution of the testing device, but certain indirect measurement methods also have the possibility of breaking the resolution limit.

Chapter 3: Border Effect and Its Principle Analysis. It introduces the resolution of measurement, the stability of resolution, the fuzzy area of measurement, the border characteristics of the fuzzy area, concentrated and discrete fuzzy areas, the stepped multi-fuzzy area in digital measurement and its border characteristics, and the factors affecting the border effect of the stepped multi-fuzzy area. It points out that the stability of the border of the fuzzy region is the decisive factor affecting the actual measurement precision, which we call the border effect. The law of phase change is analyzed in detail, the way measurement precision can be improved by using the principle of the border effect is shown, and the applications of the border effect are outlined. These analyses are not limited to a specific physical quantity but apply generally to the measurement of different physical quantities, and thus contribute to overall technological progress in the field of measurement technology.

Chapter 4: Precision Frequency/Phase Difference Measurements Based on Border Effects. Measurements of periodic signal parameters are treated from four application perspectives: precision frequency measurement based on border-effect concentrated fuzzy regions, precision phase difference measurement, high-precision measurement of complex frequency signals and precision measurement of transient frequency stability. In addition, the principles of transient frequency stability measurement and phase noise measurement implemented using the border effect are analyzed in detail.
Chapter 5: Digital Measurement Techniques and Digital Dynamic Measurement Based on the Border Effect. The whole process of measuring the RMS value of an AC voltage using the border effect is introduced in detail, including the experimental principle, block diagram and hardware design. The theoretical basis and influencing factors involved in applying the border effect with ADC converters are analyzed in detail, verification experiments are designed, and the experimental results are analyzed. It is verified that the measurement precision is tens to hundreds of times better than the quantization resolution, that a high sampling rate is retained while obtaining high-precision results, and that sampling rate and measurement precision are thus unified. At the same time, the theoretical method of the border effect shows its unique advantages in some special applications, such as measuring the transient characteristics of rapidly changing signals, measuring tiny time intervals and measurement with sensor technology. The border effect meets the needs of digital measurement, can play a role in special applications that other methods cannot replace, and has a wide application market in the whole field of detection technology.

Chapter 6: Developments in Digital Linear Phase Comparison Techniques. The chapter presents the manifestation of the border effect in the digital linear phase comparison technique and the advantages gained by combining the two. Quantitative, highly accurate acquisition of parameters such as frequency using phase information is one of the routes to the highest-precision measurements. Phase processing includes both the frequency counting processing method based on precision phase processing and the linear phase direct processing method; the latter does not require gates and counting circuits, its resolution is easy to improve, and the equipment structure is simpler. Direct phase processing and measurement is usually suitable for situations where the frequency deviation between signals is very small (especially in the analog case, with the chain of phase difference, waveform conversion, voltage integration and voltage measurement). However, owing to advances in digital high-speed, high-resolution measurement techniques, it is also becoming possible to measure cases with relatively large frequency deviations. The improved high-linearity technique reflects progress in the principle at the basic level: the linear phase comparison method with a proportional relationship between frequency and sampling has the best linearity. This reverses the traditional belief that phase comparison can only be performed between signals with the same nominal frequency value. The digital implementation overcomes the drawbacks of the traditional analog method of linear phase comparison; not only is the resolution substantially improved, but the response time also far surpasses that of other measurement methods. The continuous frequency stability measurements from nanoseconds to days or more achieved by digital direct linear phase comparison are likewise unavailable with conventional measurement techniques.
The central part of this chapter is the application of border effects in digital linear phase comparison, which is a good example of digital techniques where border effects come into play and break through the resolution limits of A/D converters. The measurement precision achieved by digital linear
8
1 Preamble
phase comparison through the intensive application of the border effect is already at a level comparable to that of the DMTD method, which has the highest international precision index. Chapter 7: Border Processing Methods Combined with Sensor Technology. Sensors are the most widely used devices in measurement technology. The sensitivity and resolution of a sensor and its stability are often in conflict with each other. In applications of sensors including static and dynamic applications, it is expected that the measurement precision of sensors is continuously improved. Especially in dynamic applications, where the sensor’s fuzzy area occurs in the region that affects the resolution of the sensed information, the digitally characterized values measured by the sensed information appear as a large number of discrete borderline detected values. The value (voltage)–time relationship represented by these values has a detection precision that is several orders of magnitude higher than the quantized values themselves. The capture of this border information and the indirect measurement of the measured value can substantially improve the measurement precision of the different quantities. Chapter 8: Other Measurement Techniques in Conjunction with Border Effects—Virtual Reconstruction Measurements. Measurement techniques often deal with some special signals, constrained by line devices as well as device structures, etc., which can be broken through by digitized measurements and aided by digital virtual reconstructions based on them. This chapter covers virtual reconstruction and its role, the measurement of periodic parameters such as high resolution, no time interval, phase-continuous frequencies, the use of virtual reconstruction for phase processing of measured signals, the use of virtual reconstruction for a variety of functions in frequency control and the use of the principle of phase measurement at arbitrary frequencies for the locking of excitation signals in atomic clocks. In these measurement processes, analog-to-digital variations of border effects are often used at the front end of the measurement system for overall high precision. Chapter 9: Research Developments in the Phase Question and Its Role in Advances in Time Frequency and the Broader Field. The traditional approach to phase, both conceptually and experimentally, is only for the case of the same frequency nominal values. This is constrained above all by the definition of phase. In fact the concept of the phase problem can be deepened, the methods of processing can be further innovated, the object and scope of application can be considerably broadened, and the correspondence with the frequency quantities can be developed from a one-way to a two-way direction. Phase processing is actually arbitrary in frequency. This phase comparison between signals with different nominal values of frequency not only simplifies the system, but in combination with the digital processing path it is possible to obtain excellent linearization by sampling the linear segment of the sinusoidal waveform. The accumulation of innovative research on phase problems has extended phase processing to the measurement and control of a wide range of frequencies, measurements such as frequency stability based on the rate of change of phase have enabled the detection of fast responses of stability at low frequencies and biological clocks, the detection and application of timefrequency fingerprinting of frequency sources, new methods for phase quadrature,
1 Preamble
9
Fig. 1.1 Professor Kenzo Watanabe lectured at Xidian University of China
phase stepping control of frequency signals, etc. have been proposed and have been used in phase noise measurements and precision frequency correction as well as synthetic change instruments have gained applications. Through the generalization of the phase processing problem and the deepening of the underlying concepts, we see that the periodicity and continuity of the phase concept is still preserved, but the phase processing implications and applications are greatly expanded. The deepening of phase research also simultaneously provides a broader venue for the application of border effects. The combination of digital phase processing and border effect pathways can reach into the development of a broader field of digital technology, reflecting the vitality of the border effect application field. I would like to dedicate this book to my teacher Prof. Kenzo Watanabe. Watanabe is one of the most famous measuring instrument experts in the world. He was also my instructor when I was a doctoral student at Shizuoka University in Japan. He has served as the president and vice president of the IEEE Institute of instrumentation and measurement for many years, and has presided over the IEEE International Conference on instrumentation and measurement technology (IMTC) for many times. He has always insisted and asked us to do experiments in person, track the research objectives of international hot spots and require the research work to face the solution of common problems in the industry. Many parts of this book have been influenced by Prof. Watanabe’s thoughts. This year is the tenth year since the Prof. Kenzo Watanabe died (Figs. 1.1 and 1.2). In addition to the years of accumulation and book editing by the undersigned authors, many of our students have also put in hard work. They are: Yang Wang,
Fig. 1.2 A group photo of Prof. Watanabe and his wife with Wei Zhou and his wife
Chen Faxi, Ge Xiaoxia, Liu Haidong, Guo Shengfei, Xu Longfei, Jia Zhaomin, Luo Dan, Ye Bo, Ren Junqi, Zhao Yan, Qiao Wenbo, Yang Ning, Liu Huifang, Guo Qianqian, Jia Yucong, Zhai Hongqi, Liu Suyan, Xue Xinyu, Zhao Qingwen and Jia Yangjie. There are more than 20 students.
1.1 Introduction

Measurement technology plays a vital role in many fields. Precision measurement is an important indicator of a country's level of science, technology and industry [1–6]. Without advanced measurement technology and means, it is difficult to design and manufacture products with excellent performance, let alone to develop cutting-edge modern high technology. To make measurement results accurate and reliable, minimize errors and improve measurement precision, it is of great significance to use new principles and methods for accurate measurement and precise control [3–6]. In the measurement and control of all kinds of quantities there are problems of limited resolution and precision; since the stability of the resolution of various measurement and control devices is significantly better than the resolution itself, reasonable use of this stability can improve the precision of measurement and control as a whole [7].
1.1.1 Background and Significance of Precision Measurement

Precision measurement technology is a frontier science with its own professional system. It covers a variety of disciplines, both theoretical and practical, and is a comprehensive interdisciplinary field integrating optics, electronics, sensors, imaging, precision mechanics and computer science [7–15]. In recent years, precision measurement technology has developed toward high precision, high sensitivity, high resolution, integration, miniaturization and standardization [16]. It also shows characteristics such as measurement precision advancing from the micron to the nanometer level, measurement means changing from analog to digital processing, discrete devices evolving into highly integrated multifunctional chips that combine measurement and processing, and increasingly diverse data transmission technology [16]. However, limited by current manufacturing processes and theoretical foundations, the research and development of new technologies and methods in the field of measurement is very difficult. There is therefore an urgent need to conduct a large number of experiments, grounded in research on new theories, to advance measurement technology; measurement methods and measurement precision must be improved through new measurement theories and methods, which in turn promotes the further development of various disciplines [17–19]. Against the background of the rapid development of modern science and technology, various precision measurement methods and theories have emerged to meet the new needs of modern test and measurement. Within precision measurement, time–frequency measurement and control technology offers the highest measurement precision among all physical quantities, so it plays a leading role in the measurement and control of all other physical quantities [3, 20, 21]. Time frequency is also the most fundamental physical quantity in the time or frequency domain and plays an important role in precision measurement. The improvement in the precision of time–frequency measurement far exceeds that of other physical quantities. The realization of high-precision time–frequency measurement has promoted the development of measurement and control technology and triggered a new trend in modern test and measurement, namely the use of various conversion means to turn other physical quantities into time or frequency quantities for measurement, so that the measurement precision is significantly improved [22]. For example, the international definition of the meter, the distance traveled by light in vacuum within a specified time interval, converts the measurement of a spatial quantity into the measurement of a temporal quantity. The precision measurement of time and frequency provides new means and high standards for research in other disciplines, promoting their development as well as the continuous improvement of time and frequency measurement itself [22, 23].
Digital measurement technology has been widely developed in many fields thanks to its advantages in acquisition, processing, storage, transmission, calculation and display, and the measurement of various physical quantities is gradually changing from traditional analog methods to digital methods [21, 22, 24]. In general, to obtain high measurement precision and stability, analog schemes place high demands on circuit noise, circuit layout and device selection, which makes it difficult to meet today's requirements for high performance in various fields. In contrast, a digital scheme can be flexibly adapted and modified according to the actual requirements of the system, which reduces the dependence on hardware circuits to a certain extent, while digital devices offer good operating stability, high resolution and good product consistency; however, the digital conversion process inevitably introduces quantization errors. This book systematically analyzes the principle of the border processing technique in the frequency measurement process with the help of the border measurement technique of time frequency, and applies it to digital measurement to suppress the quantization error generated in digital processing. The book describes in detail the application of the border metrology technique to the measurement of digital dynamic signal parameters and to the control of constant-temperature (oven-controlled) crystal oscillators. Numerous studies show that, for a fixed device resolution, border processing that exploits the high stability of the system can break through the limits of the device resolution and substantially improve the measurement precision while maintaining a high sampling rate [25]. This book illustrates the wide application of border processing technology in digital measurement, control and other fields; it has great value for dissemination and can drive the development of the overall level of measurement and control technology. The new digital measurement technology is not simply a conversion of the original analog processing and measurement techniques into a digital form; it should be combined, through in-depth study, with the digital characteristics and with the properties of the measured and processed signal itself in order to obtain better results. For example, for digital linear phase comparison, traditional thinking would apply a nonlinearity correction to the commonly used sine wave signals to obtain a linearization effect. However, as presented in this book, it is possible to obtain a linear phase comparison, without losing a faithful representation of the change in phase difference, by repeatedly sampling the linear segment of the signal, based on the periodicity and continuity of the phase change and on the waveform characteristics of the signal itself [26]. In addition, the combination with the border effect, the core of this book, can break through the constraints of the resolution of digital AD devices and greatly improve the precision of phase comparison.
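To make the last point concrete, the following is a minimal numerical sketch (with illustrative parameters of my own, not the book's implementation) of why samples taken only from the quasi-linear segment of a sine wave near its zero crossing already behave as an almost linear phase scale:

```python
# Toy sketch: near a zero crossing, sin(theta) ~ theta, so the raw sampled
# voltage (scaled by the amplitude) maps almost linearly onto the phase.
import numpy as np

A = 1.0                                   # assumed signal amplitude, volts
phase = np.linspace(-0.1, 0.1, 2001)      # phase window around a zero crossing, rad
v = A * np.sin(phase)                     # sampled voltages within the window

# Treat the scaled voltage itself as the phase estimate: v/A ~ phase
phase_est = v / A
nonlinearity = np.max(np.abs(phase_est - phase)) / 0.1

print(f"worst-case deviation over +/-0.1 rad: {nonlinearity:.3%} of the window")
# ~0.17% here: within this narrow segment the samples already act as a nearly
# linear phase scale; wider windows would need the usual nonlinearity correction.
```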
1.1.2 Overview of Precision Measurement

With the continuous development of modern technology, achieving higher-precision measurement of various physical quantities has been the goal of continuous exploration in the field of measurement.
However, constrained by electronic technology and process levels, the development of measurement technology has been limited to a certain extent. Take the comparison between domestic and foreign work in the field of time–frequency measurement as an example: at present, instruments for frequency signal measurement, phase comparison and phase noise measurement are dominated by foreign products [21, 22]; research institutions and companies such as NIST, Agilent and AD-YU in the United States and TimeTech in Germany have long held the world's leading position. In particular, the internationally recognized high-performance phase noise and Allan variance analyzers are Symmetricom's test systems 5120A, 5125A and 3120A [27, 28]. Among them, the 5120A is a product with high measurement precision and a wide measurement range, measuring the Allan variance of a 10 MHz signal source to better than 5 × 10^{-14}/s. The 3120A is a typical digital measurement device; its success lies in implementing a traditional analog measurement scheme with an advanced, high-speed, low-noise analog-to-digital converter in an all-digital architecture, combining advanced digital signal processing with high-performance analog-to-digital conversion. The practical value of the 3120A is that it offers a solution that the development of modern high-precision measurement techniques can follow. The 5125A, by combining a fully digital measurement scheme with the dual-mixer time difference technique, obtains a resolution on the order of 0.1 ps [27]. Therefore, building on existing measurement technology, high-precision instruments combined with fully digital processing are the mainstream direction of instrumentation development. In practical measurement applications, if circuitry alone is used to solve problems such as noise interference, the equipment structure becomes complex, the power consumption of the system increases, the operating frequency decreases and the cost rises. For example, in traditional phase processing, a frequency conversion circuit is used to normalize the frequencies of the compared signals [21]. This complex frequency conversion circuit expands the application range of phase processing and enhances its flexibility, but it lengthens the signal processing path, and every link the signal passes through inevitably adds noise, which limits the improvement of comparison precision. In response to these drawbacks of analog measurement, digital measurement techniques have been developed. Digital measurement converts a continuous input signal into a series of discrete digital quantities; how finely the continuous signal is discretized depends on the effective number of quantization bits of the digital measurement system. The more quantization bits, the finer the discretization and the more complete the description of the input signal. The discrete nature of the digital quantity, in turn, determines the difference between the output digital quantity and the actual input quantity, and the magnitude of this difference is related to the number of quantization bits [18]. This is a common problem in any digital measurement system; the error is the quantization error inherent in conventional digital dynamic measurements.
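As a rough numerical illustration (with assumed, purely illustrative converter parameters): for an ideal N-bit converter with full-scale range FS, the quantization step is q = FS/2^N. A 12-bit ADC spanning 10 V therefore has q ≈ 2.44 mV, a maximum quantization error of ±q/2 ≈ ±1.22 mV, and, for a uniformly distributed error, an RMS quantization error of q/√12 ≈ 0.70 mV.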
The quantization error is the biggest problem in digital applications, and a common treatment is to reduce the resulting measurement error by increasing the number of quantization bits in the system.
For example, both domestic and international approaches substantially increase the number of bits of the analog-to-digital converter, or combine it with measurement methods such as the micro-difference method [29, 30]; in some areas (e.g., audio decoding) 32-bit quantization can be achieved. This approach, while satisfying engineering and application needs, comes at the cost of sampling rate. A high number of quantization bits inevitably implies a low sampling rate: a digital system with a 24-bit analog-to-digital converter (ADC) can only reach sampling rates on the order of MHz, whereas a system with a low number of quantization bits can reach the order of GHz. Most of today's measurements are digital, and their precision often depends on the precision of the voltage measurement [22, 31, 32]. With the continuous development of modern science and technology, improving the precision of the measurement of various physical quantities has been the goal of continuous exploration in the field of measurement. High-precision digital measurement is the current direction of development in test and measurement [22, 26]; the gap between China and other countries in precision measurement remains large, and China's level of development still lags behind that of the developed countries. Digital measurement is the mainstream method in measurement and control at this stage. Its precision depends on the precision of the analog-to-digital converter, and measurement and control precision is improved by increasing the resolution of the device, but this still has limitations when higher-precision testing is required [7, 17]. Limitations of semiconductor materials and process levels lead to differences in electrical parameters, response bandwidth, noise and other indicators, so that the actual performance of the measurement system falls short of the intended design requirements, let alone breaking through the limits of the existing precision level. Methods that increase the number of effective quantization bits of the data converter inevitably face a lower sampling rate, so neither the measurement precision nor the sampling rate can be fundamentally improved [33]. On January 22, 2013, Intersil, a global leader in the design and manufacture of high-performance analog mixed-signal semiconductors, announced an ultra-low-noise 24-bit analog-to-digital converter (ADC) with an integrated programmable gain amplifier, providing very high measurement precision with minimal external components over a wide conversion rate range [16]; even so, it does not meet the current precision requirements of digital measurement. At this stage, the field of high-precision measurement is no longer limited to traditional digital measurement, but pursues better test methods for high performance, high precision and special applications [34]. This book uses the border effect to address the suppression of quantization error in digital dynamic measurement. Taking the RMS measurement of AC voltage as a breakthrough point, it proposes an innovative method to improve the measurement precision otherwise limited by quantization error, by extracting the effective border information in the dynamic fuzzy region.
Its actual effective precision is independent of the number of quantization bits and depends only on the physical characteristics of the quantization system; experiments verify that the measurement precision is tens to hundreds of times higher than the quantization resolution [26, 34]. The unification of the sampling
rate and measurement precision is achieved by meeting the need for a high sampling rate while obtaining high-precision measurements. This is an innovative idea for eliminating quantization errors in today's digital fields, such as digital sensor devices and digital measurement devices, and it opens a new path for high-precision digital measurement. At the same time, the theoretical approach of the border effect offers unique advantages in some special applications, such as the measurement of the transient characteristics of rapidly changing signals [35], the measurement of tiny time intervals and measurements in sensor technology. The border effect meets the needs of the era of digital measurement and has a wide application market in the field of detection technology [34]. Frequency and time have the highest accuracy and stability of all measured physical quantities, and one of the outstanding advantages of phase measurement and processing is its high resolution, so measurement and processing techniques in this area are ahead of those for other physical quantities. To improve the precision of the measurement of physical quantities in general, the trend is to convert other physical quantities into frequency or time quantities for measurement whenever possible [22]. In recent years, the precision and stability of international frequency standards have been improving very quickly, by almost an order of magnitude every 8 to 10 years, and the stability of frequency standards has now reached 10^{-14}/s [21, 36]. The constant-temperature (oven-controlled) crystal oscillator is an essential device for modern science and technology, navigation and positioning, and test and measurement. The resonant Q value of the crystal in the oscillator is an important factor affecting its technical specifications; in traditional oscillator technology the loaded Q value is significantly smaller than the unloaded one, so the core of current technology is to try to preserve the unloaded Q value [37, 38]. The border effect of the crystal resonance curve can be used to good effect here. In general, precision measurement technology will develop toward more powerful functions, higher precision, a greater share of non-contact measurement, more extensive software compensation, online measurement and a higher degree of integration [7]. In the field of measurement and control at home and abroad, the main way to improve measurement and control precision is to improve the resolution and precision of the device; for example, in time–frequency measurement the resolution of the device is improved and its performance enhanced [39]. In digital measurement, quantization errors are reduced by increasing the number of conversion bits of ADCs and by multiplication methods; in oscillators, better materials and advanced physical principles are pursued. However, owing to factors such as manufacturing processes, limited response time, quantization error in digital processing and noise, such individual improvements can raise the resolution and precision of an instrument only to a limited extent [7].
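As a purely numerical toy sketch of the border idea (my own assumed parameters and simplifications, not the authors' implementation): a slowly varying input is sampled at a high rate by a coarse quantizer, and the instants at which the output code jumps pin the input to a known threshold, so a fit through these (time, threshold) pairs recovers the input far more finely than one quantization step.

```python
import numpy as np

fs = 100_000                             # assumed sampling rate, 100 kHz
t = np.arange(200_000) / fs              # 2 s of samples
true_slope, true_offset = 0.01237, 0.1   # assumed slow ramp, ~12.37 mV/s
signal = true_offset + true_slope * t

lsb = 0.001                              # coarse quantizer: 1 mV step
codes = np.floor(signal / lsb)           # ideal quantizer output codes

# Locate code transitions ("borders"): at each jump the input crosses a known
# code threshold, so the (time, threshold) pair is an exact sample of the ramp.
jumps = np.flatnonzero(np.diff(codes) != 0) + 1
border_times = t[jumps]
border_levels = codes[jumps] * lsb

# A least-squares line through the border points recovers slope and offset
# with errors far below one quantization step.
slope_est, offset_est = np.polyfit(border_times, border_levels, 1)
print(f"slope error:  {abs(slope_est - true_slope):.2e} V/s")
print(f"offset error: {abs(offset_est - true_offset):.2e} V (LSB = {lsb} V)")
```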
In the field of precision measurement and control, China's technology still lags behind the international advanced level. To narrow this gap, the country should pay greater attention to the development of high-tech fields, strengthen the understanding of basic principles and encourage innovation, especially research on new measurement and control technology [36, 40, 41], break through the technical bottlenecks in aerospace and defense, and develop the core strength of proprietary high-tech fields.
References

1. Finkelstein L (1983) Measurement. Measurement 1(1):2–4
2. Rossi GB (2007) Measurability. Measurement 40(6):545–549
3. Zhou W, Li Z, Bai L et al (2013) Generalized phase measurement and processing with application in the time-frequency measurement control and link. In: 2013 Joint UFFC, EFTF and PFM symposium, pp 429–433
4. Ennis JM, Christensen RHB (2014) Precision of measurement in Tetrad testing. Food Q Preference 32:98–101
5. Chapman-Novakofski K (2014) Measuring up. J Nutr Educ Behav 46(1):1–4
6. Ali MH, Kurokawa S, Uesugi K (2014) Camera based precision measurement in improving measurement accuracy. Measurement 49:138–141
7. Zhou W, Li ZQ, Bai LN et al (2014) Verification and application of the border effect in precision measurement. Chinese Phys Lett 31(10):100602
8. Sturm S et al (2014) High-precision measurement of the atomic mass of the electron. Nature 506:467–470
9. Budovsky I, Hammond G (2005) Precision measurement of power harmonics and flicker. IEEE Trans Instrum Meas 54(2):483–487
10. Giorgetta FR, Coddington I, Baumann E et al (2010) Fast high-resolution spectroscopy of dynamic continuous-wave laser sources. Nat Photonics 4:853–857
11. Liu YT, Liu XW, Wang Y et al (2012) A sigma-delta interface ASIC for force-feedback micromachined capacitive accelerometer. Analog Integr Circ Sig Process 72:27–35
12. Sipos M, Paces P, Rohac J et al (2012) Analyses of triaxial accelerometer calibration algorithms. IEEE Sens J 12(5):1157–1165
13. Du C, He C, Yu J et al (2012) Design and measurement of a piezoresistive triaxial accelerometer based on MEMS technology. J Semiconductors 33(10):104005
14. Kurten Ihlenfeld WG et al (2005) Characterization of a high-resolution analog-to-digital converter with a Josephson AC voltage source. IEEE Trans Instrum Meas 54(2):649–652
15. Jiang YY, Ludlow AD, Lemke ND et al (2011) Making optical atomic clocks more stable with 10^{-16}-level laser stabilization. Nat Photonics 5(3):158–161
16. Taraldsen G (2006) Instrument resolution and measurement accuracy. Metrologia 43:539
17. Zhou W (2000) Systematic research on high accuracy frequency measurements and control. Shizuoka University, Japan
18. Zhou W et al (2019) Two quantization phenomena in digital processing and their effects. J Univ Electron Sci Technol 38(5):9
19. Eramo R, Cavalieri S, Corsi C et al (2011) Method for high-resolution frequency measurements in the extreme ultraviolet regime: random-sampling Ramsey spectroscopy. Phys Rev Lett 106:213003
20. Zhou H, Zhou W (2004) A high-resolution frequency standard comparator based on a special phase comparison approach. In: Proceedings of the 2004 IEEE international frequency control symposium, pp 689–692
21. Zhou W, Ou X, Zhou H et al (2006) Time-frequency measurement and control technology, 1st edn. Xidian University Press, Xi'an, pp 63–92
22. Zhou W (2004) Fundamentals of testing and metrology technology. Xidian University Press, Xi'an
23. Zhou H, Zhou W (2006) A time and frequency measurement technique based on length vernier. In: Proceedings of the 2006 IEEE international frequency control symposium, pp 267–272
24. Petrovic P, Marjanovic S, Stevanovic M (1998) Measuring active power, voltage and current using slow A/D converters. In: 15th annual IEEE instrumentation and measurement technology conference on where instrumentation is going, vol 2, pp 732–737
25. Bai L, Su X, Zhou W et al (2015) On precise phase difference measurement approach using border stability of detection resolution. Rev Sci Instrum 86:015106
26. Zhou W, Li Z, Qiao W (2019) The most promising technology in time and frequency—digital linear phase matching. In: 2019 National conference on time and frequency sciences (CFTS2019)
27. 5125A high-performance, extended-range phase noise and Allan deviation test set. Symmetricom, 2009
28. 3120A high-performance phase noise test probe. Symmetricom, 2012
29. Baccigalupi A (1999) ADC testing methods. Measurement 26:199–205
30. Michael R (2002) Measuring digital systems performance. Broadcast Eng 44:14–19
31. Zlobin GG, Kremin' VT (1998) Universal digital measuring system. Meas Tech 31:572–576
32. Dinnis AR (1996) A high-resolution time-dispersive electron spectrometer for fast voltage-contrast measurements. Microelectron Eng 31:101–108
33. Zhou W, Miao M, Liu H, Yang C, Bayi Q (2021) A measurement method considering both phase noise and full frequency stability. Sens Transducers 254(7):22–30
34. Liu HD (2017) Clock cursor principle and its application in direct digital measurement. Xi'an University of Electronic Science and Technology, Xi'an
35. Bai LN, Zhou W, Hui XM et al (2014) Precision measurement of frequency standard transient stability. J Xidian Univ 41(2):102–106
36. Zhou W, Watanabe K (2005) Trends in foreign instrumentation and measurement technology in recent years. J Instrument 2005(7):764–770
37. Killingbeck JP (2007) Resonances for coupled oscillators. J Phys A: Math Theor 40:5149–5154
38. Bai L, Liu H, Ge X, Zhai H, Zhou W (2019) Precise frequency correction technique for crystal oscillator output. J Univ Electron Sci Technol China 48(1):58–61
39. Huang BY, Zhou W, Zhang YB (1996) Handbook of test and measurement techniques, vol 11. China Metrology Press, Beijing, pp 73–82
40. Zhou W, Zhou H, Fan W (2008) Equivalent phase comparison frequency and its characteristics. In: Proceedings of the 2008 IEEE frequency control symposium, pp 468–470
41. Zhiqi L, Wei Z, Hui Z et al (2013) The optimization of super-high resolution frequency measurement techniques based on phase quantization regularities between any frequencies. Rev Sci Instrum 84:025106
Chapter 2
Basics of Measurement and Test Precision and the Effect of Measurement Methods on Measurement Precision
Abstract This chapter mainly introduces measurement error, data processing, various test and measurement methods and their classification, traditional digital measurement and processing, and the influence of measurement resolution on measurement precision in traditional methods; it also introduces the idea of error analysis based on the border effect of the fuzzy area. Measurement methods have a significant impact on measurement precision and resolution and are one of the most critical factors determining the level of measurement technology. This chapter provides a detailed introduction to test and measurement methods, mainly including direct measurement, indirect measurement, differential measurement methods, coincidence measurement methods and intermediate source methods. The measurement precision of most measurement methods is limited by the resolution of the testing device, but certain indirect measurement methods also make it possible to break the resolution limit.

Keywords Measurement error · test method · fuzzy area · border effect · detection precision

Test and metrology methods are among the most important tools needed to determine test protocols and to complete instrument design work. The study of testing and metrology methods belongs primarily to experimental physics, and errors are inevitable in any metrological test. In order to fully recognize and then reduce or eliminate errors, it is necessary to study the errors that are always present in measurement processes and scientific experiments. Therefore, excellent technicians should not only be familiar with the object being measured and its characteristics, but should also have a broad understanding of the basic methods for various measurement problems and their connections, so as to choose a good measurement method. One should have a wide range of ideas for dealing with measurement problems and not be limited by the scope of one's existing knowledge. The literature [1–3] suggests that measurement problems can be solved with both analog and digital methods; they can be solved within the range of the physical quantity to be measured, or converted into other, more easily handled quantities (e.g., measurement methods combined with sensors); electrical, acoustic and optical methods can be used to deal with the measurement of other signals, and thermal, length and mechanical methods can be used to deal with power measurement problems.
The choice of method is limited not only by our reserve of knowledge, but often also by the means and material conditions available to us. Any complex measurement problem admits multiple methods of measurement, both complex and simple. The use of weighing methods to solve relatively complex area calculation problems already occurred in ancient China; knowledge of measurement methods was used centuries ago to carry out measurements and calculations that today are handled only with calculus and complex modeling. The literature [4] analyzes the roots of achievements in domestic and foreign instrumentation and measurement technology, in which technological innovation is particularly important for the development of the field. Because various fields continually impose new requirements on measurement accuracy, measurement range and the extension of measurement objects, it is often difficult for practical instrumentation and measurement techniques to meet these requirements merely by following the conventional path of development. A flexible and systematic approach to test and measurement methods must be linked to the source of instrument design. Depending on the measurement object and the purpose of the measurement, measuring instruments are built not only from circuits and systems, computer hardware and software, the structure and even the physical background of the equipment, but also from measurement methods and from the data processing of measurement errors as a resource. A precision measuring instrument should be supported first of all by basic physical phenomena and the regular features of nature. It should be recognized that the highest accuracy exists in the laws of nature and is not something that can be created by man; the main reason we achieve high accuracy in specific equipment is that we are able to recognize and utilize the regularities inherent in nature, matter and materials. Secondly, the measurement method is another important pillar for achieving high accuracy in equipment. The literature [5–7] proposes that the selection of test and metrology methods must consider measurement accuracy, measurement range, measurement response time and the cost of the solution or instrument to be designed. With the development of modern science and technology, the requirements for measurement accuracy in various fields are becoming ever higher. Where particularly high accuracy is required, it is difficult to achieve obvious results simply by improving the performance of the device or the device itself; a breakthrough can only be sought by improving the method and the scheme, in the hope of obtaining higher measurement accuracy of the measured quantity from experimental results whose direct experimental accuracy is not necessarily high.
2.1 Measurement Errors

Owing to the imperfection of test methods and test equipment, the influence of the surrounding environment, and the limits of human understanding, there is inevitably a difference between the data obtained from measurement and experiment and the true value of the measured quantity; this difference is expressed as the error. With the continuing development of science and technology and the continuous improvement of understanding, the error can be made smaller and smaller, but in the end it cannot be completely eliminated. The literature [8] points out that the inevitability and universality of error have been proven by a large body of practice and are recognized by everyone engaged in scientific experiments. Hence the axiom of error: all experimental results contain error, and error is present throughout the process of every scientific experiment from beginning to end. In any measurement test, errors are inevitable. In order to fully recognize and then reduce or eliminate errors, it is necessary to study the errors that are always present in the measurement process and in scientific experiments; that is, the reliability of measurement results must be studied, estimated and judged. The analysis, study and judgment of measurement results rely on error theory, which has been developed through long practice and is a powerful tool for understanding the objective world. In science, some significant discoveries have been made by applying error theory directly to the analysis and study of measurement results. Error theory is an important component of measurement science and addresses three main tasks in the study of measurement error.

(1) A reasonable evaluation of the error in the measurement results.
(2) Correct processing of the measurement data in order to obtain the best result, close to the true value.
(3) Guidance for the design of experiments and for the rational selection of measuring instruments, measurement methods and prescribed measurement conditions in order to obtain optimal results.
2.1.1 Definition of Measurement Error

Measurement error: the difference between the result of a measurement and the true value of the quantity being measured.

True value of a quantity: the value of a quantity that is perfectly determined or strictly defined under certain conditions. It is an ideal concept and is generally unknown. The value measured by a higher-level measurement standard instrument, or the average of a series of measurement results, is usually used in place of the true value. When the measurement results contain only random errors, the arithmetic mean (mathematical expectation) of the measurement results is the best estimate of the true value being measured.
2.1.2 Representation of Measurement Errors

1. Absolute error

After a quantity has been measured, the difference obtained by subtracting the true value x_0 from the measured value x is called the absolute error (also referred to simply as the error) Δx. That is

\Delta x = x - x_0    (2.1)
2. Relative error

The relative error is the ratio of the absolute error to the true value being measured, usually expressed as a percentage. That is

\delta x = \frac{\Delta x}{x_0} \times 100\% = \frac{x - x_0}{x_0} \times 100\%    (2.2)
The smaller the relative error, the higher the accuracy of the measurement.

3. Decibel error

The decibel error is another manifestation of the relative error and is commonly used in electrical and acoustic metrology. Definition of the decibel: for voltage and current type parameters

D = 20 \lg x \ \mathrm{dB}    (2.3)
where x = U_1/U_2 or x = I_1/I_2, with U_1, U_2 voltages and I_1, I_2 currents. For power-type parameters

D = 10 \lg x \ \mathrm{dB}    (2.4)
where x = P_1/P_2, with P_1, P_2 powers. If x has an error Δx, the decibel value also has a corresponding error ΔD, i.e., for voltage and current type parameters

\Delta D = 20 \lg(1 + \delta x) \ \mathrm{dB}    (2.5)
and for power-type parameters

\Delta D = 10 \lg(1 + \delta x) \ \mathrm{dB}    (2.6)
The formula for calculating the relative error from the decibel error is

\delta x = \frac{\Delta x}{x_0} \times 100\% = \left(10^{\Delta D/20} - 1\right) \times 100\%    (2.7)
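As a quick numerical check (illustrative values): a relative error δx = 1% in a voltage ratio corresponds, by Eq. (2.5), to ΔD = 20 lg(1.01) ≈ 0.086 dB, and conversely Eq. (2.7) turns a decibel error of 0.086 dB back into δx = 10^{0.086/20} − 1 ≈ 1%, consistent with the approximations given below.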
When the error itself is not large, the decibel error and the relative error are related approximately as follows.

For voltage and current type parameters: ΔD ≈ 8.69 δx, δx ≈ 0.115 ΔD.
For power-type parameters: ΔD ≈ 4.34 δx, δx ≈ 0.230 ΔD.

4. Citation error

The citation (fiducial) error is the ratio of the absolute error of the indicated value of a measuring instrument to a specified reference value of the instrument, usually also expressed as a percentage. That is

\delta x_{\lim} = \frac{\Delta x}{x_{\lim}} \times 100\%    (2.8)
Here, x_lim is called the specific or reference value; it is usually the full-scale value (maximum scale value) or the upper limit of the nominal range of the measuring instrument. The citation error is widely used for instruments with multiple and continuous scales. The measurable range of such an instrument is not a single point but a range; the indicated values at the various scale points and their corresponding true values differ, so the denominators used in calculating the relative errors also differ. In practice, the citation error is therefore used instead of the general relative error. The accuracy classes of electrical instruments are 0.1, 0.2, 0.5, 1.0, 1.5, 2.5 and 5.0. Note: let the full-scale value of the meter be x_n and the measurement point be x; then the indication error of the meter in the vicinity of the point x satisfies

absolute error ≤ x_n × S%
relative error ≤ (x_n × S%)/x

In general x ≠ x_n; therefore, the closer x is to x_n (since x is in the denominator), the higher the measurement accuracy, and the further x is from x_n, the lower the measurement accuracy. When using this kind of meter, the reading should therefore be kept above about 2/3 of the full-scale value whenever possible. Moreover, the instrument should not be selected simply on the basis of its citation error; rather, the instrument's class and its upper measurement limit should be chosen reasonably according to the size of the quantity to be measured.
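A quick numerical check of this rule (hypothetical values): a class-1.5 voltmeter with full-scale value x_n = 100 V has a maximum absolute error of 100 V × 1.5% = 1.5 V anywhere on the scale. At a reading of x = 80 V the relative error bound is 1.5/80 ≈ 1.9%, whereas at x = 20 V it grows to 1.5/20 = 7.5%, so the same meter is considerably more accurate when used in the upper part of its range.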
2.1.3 Classification of Measurement Errors

According to their nature, measurement errors can be divided into three categories: systematic errors, random errors and gross errors.

1. Systematic errors

When analyzing measurement errors, systematic errors must be excluded before the measurement errors can be treated according to random error theory. In fact, systematic errors often exist in the measurement process. In some cases the systematic error is even large, so that the accuracy of the measurement results depends not only on the random error but also on the influence of the systematic error.

(1) Definition of systematic error

The error that remains fixed when the same quantity is measured repeatedly under the same conditions, or the component of the measurement error that varies according to a definite law when the conditions change. It determines the degree of "correctness" of the measurement result.

(2) Characteristics of systematic errors
(a) It is present in any measurement process.
(b) Its effect is independent of the number of measurements and is more harmful than random error.

(3) Classification of systematic errors

Systematic errors can be divided, according to the characteristics they present, into constant and variable systematic errors; variable systematic errors can be divided into cumulative, periodic and systematic errors that vary according to complex laws.

(4) Sources of systematic errors
(a) Device error: instrument structure, manufacturing process, wear, aging, failure.
(b) Environmental error: conditions inconsistent with the required standard state; changes of the device and the measured object caused by space and time.
(c) Method or theory error: error caused by imperfection of the measurement method or theory.
(d) Personnel error: physiological differences, lack of skill.

(5) Elimination of systematic errors
(a) Remove eliminable sources of error prior to measurement. Starting from the four links involved in measurement, the operator, the equipment used, the measurement method and the measurement conditions, study and analyze them carefully and in depth, so as to find the causes of systematic errors and try to eliminate them.
(b) Use appropriate experimental methods, such as the substitution method, the reverse compensation method and the symmetry method, to eliminate systematic errors.
(c) Eliminate by correction. When the law of the systematic error is known, correct the direct measurement results by calculation or software processing, thereby improving the measurement accuracy by effectively removing the systematic error.
(d) Repeat the measurement with different personnel or with other processing means, or use automated testing and intelligent processing, to eliminate personnel error.

2. Random errors

Random error refers to the error caused by the many random factors present in a measurement. It is hard to predict in both magnitude and sign. Random error is the main object of study of error theory.

(1) Definition of random error

The component of measurement error that varies in an unpredictable manner even when the same quantity is measured repeatedly under the same conditions. It reflects the degree of "precision" of the measurement result. It reflects the randomness of the influencing factors, and it cannot be corrected or completely eliminated from the measurement results.

(2) Basic properties of random errors (which mostly obey a normal distribution)
(a) Boundedness. Under given conditions, the probability of an error whose absolute value exceeds a certain limit is essentially zero; the absolute value of the random error will not exceed a certain bound.
(b) Symmetry. When the number of measurements is large enough, positive and negative errors of equal absolute value appear with the same probability, that is

P(+\Delta) = P(-\Delta)    (2.9)
(c) Offsetting (compensation). As the number of measurements increases without limit, the arithmetic mean of the errors tends to zero, i.e.,

\lim_{n \to \infty}\left(\frac{1}{n}\sum_{i=1}^{n}\Delta_i\right) = 0    (2.10)
(d) Single-peakedness. In a series of equal-precision measurements, errors with small absolute values occur with greater probability than errors with large absolute values.

(3) Representation of random errors

(a) Residual error (ν_i). The absolute error obtained by taking the arithmetic mean of a finite number of measured values as the true value:

v_i = x_i - \bar{x}    (2.11)
(b) Maximum absolute error (U). The absolute value of the error of the measured value does not exceed U:

U \geq |x - x_0|, \quad \text{i.e.,} \quad U = \sup|\Delta x|    (2.12)
(c) Standard deviation (σ). The value obtained from n measurements of a fixed quantity by taking the square root of the arithmetic mean of the squared errors, also known as the root-mean-square deviation:

\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i - x_0)^2} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(\Delta x)^2}, \quad n \to \infty    (2.13)
The standard deviation is a function of every measured value in the series; it reflects both large and small errors sensitively and is therefore a good way to indicate the precision of a measurement. The standard deviation characterizes the dispersion of the results of n measurements of a measured quantity. Its geometric meaning is the horizontal coordinate of the inflection point of the normal distribution curve, and the probability that the error does not exceed ±σ is about 68%.

(d) Arithmetic mean error (θ). The arithmetic mean of the absolute values of all random errors from multiple measurements:

\theta = \frac{|\delta_1| + |\delta_2| + \cdots + |\delta_n|}{n} = \frac{1}{n}\sum_{i=1}^{n}|\delta_i|    (2.14)

where \delta_i = x_i - x_0.
(e) Contingency (probable) error (ρ). The contingency error is defined in terms of the probability of occurrence of the error:

p(|\delta| \leq \rho) = \frac{1}{2}, \quad \text{or} \quad \int_{-\rho}^{+\rho} f(\delta)\,\mathrm{d}\delta = \frac{1}{2}    (2.15)
The relationship to the standard deviation is ρ = 0.6745σ ≈ (2/3)σ. Its geometric significance: between −ρ and +ρ, the area enclosed by the normal distribution curve and the horizontal axis is half of the total area. Therefore, the confidence probability that the error of a measured value does not exceed ±ρ is 50%.

(f) Limiting error (δ_lim). The limiting error is taken as three times the standard deviation: δ_lim = 3σ. Its meaning: theoretically, when the number of measurements is infinite and the measured values obey a normal distribution, the probability that the error is smaller than the limiting error is 99.73%, i.e., only about 3 in 1000 measurements exceed it. Difference from the maximum absolute error (U): by definition U cannot be exceeded, whereas δ_lim can occasionally be exceeded.

(g) Range (polar difference) (R). The absolute value of the difference between the maximum and minimum values of a measurement series:

R = |x_{\max} - x_{\min}|    (2.16)
Disadvantage: only two data points of the measurement series are used, so the range does not reflect the randomness of the errors in the measurement process or their probability.

Summary: the parameters that reflect the precision of a measurement series are the standard deviation, the arithmetic mean error, the contingency error and the limiting error; they are known as the precision parameters of the measurement series. For the same measurement series, arranged by magnitude,

\delta_{\lim} > \sigma > \theta > \rho    (2.17)

The corresponding confidence probabilities are 99.73% > 68% > 57.62% > 50%. For different measurement series, comparisons should be made using the precision parameters corresponding to the same confidence probability, with larger values indicating lower precision.

3. Gross errors

(1) Meaning

Errors that exceed what is expected under the specified measurement conditions are called gross errors.
(2) Sources

Sources of gross errors include mistakes by staff, malfunctioning of measuring instruments and equipment, and influence quantities exceeding the specified limits. The main causes are:
(a) Subjective causes on the part of the measurement personnel: lack of experience, improper operation, fatigue, carelessness or impatience leading to wrong readings, wrong records or wrong calculation of results, and thus to gross errors.
(b) Objective causes from external conditions: unexpected changes in the measurement conditions (such as mechanical shocks or external vibration) causing changes in the instrument indication, changes in the measurement conditions, etc.

If a data point is found to contain a gross error, the value may be crossed out of the record, provided the original record is retained and the reason is given. It is not correct to cross out a figure arbitrarily without stating the reason. After the measurement has been performed, special care should be taken in judging whether a value contains a gross error, and adequate analysis and study should be conducted, with the rejection criteria for gross errors used as the basis for rejecting outliers. The specific rejection methods are described in detail in the data processing section.
2.1.4 Errors in Indirect Measurements

In many cases, depending on the characteristics of the object to be measured, it can be difficult to carry out direct measurements or to ensure their accuracy, so indirect measurement methods need to be used. For example, when measuring the resistivity ρ of a wire, the resistance R, the length l and the diameter d of the wire are usually measured first, and the resistivity is then calculated according to the formula

\rho = \frac{\pi R d^2}{4l}    (2.18)
It follows that an indirect measurement finds the quantity of interest from some direct measurements through a relational equation. The literature [9] therefore expresses an indirect measurement as a function of the directly measured parameters:

y = f(x_1, x_2, \ldots, x_n) = f(x_i)    (2.19)
The error of an indirect measurement is therefore closely related to the errors introduced by the direct measurements.

1. Absolute error of indirect measurements

\Delta y = \frac{\partial y}{\partial x_1}\Delta x_1 + \frac{\partial y}{\partial x_2}\Delta x_2 + \cdots + \frac{\partial y}{\partial x_n}\Delta x_n = \sum_{i=1}^{n} \frac{\partial y}{\partial x_i}\Delta x_i    (2.20)
2. Relative error of indirect measurements

\frac{\Delta y}{y} = \sum_{i=1}^{n} \frac{\partial y}{\partial x_i} \cdot \frac{\Delta x_i}{y}    (2.21)
3. Standard deviation of indirect measurements

Let each x_i in y = f(x_i) contain only random errors, and make m separate measurements of each directly measured quantity x_i:

y_1 = f(x_{11}, x_{21}, \ldots, x_{n1})
y_2 = f(x_{12}, x_{22}, \ldots, x_{n2})
\vdots
y_m = f(x_{1m}, x_{2m}, \ldots, x_{nm})    (2.22)

Then the variance of the indirectly measured quantity is

\sigma_y^2 = \sum_{i=1}^{n}\left(\frac{\partial f}{\partial x_i}\right)^2 \sigma_{x_i}^2 + 2\sum_{1 \le i < j}^{n} \frac{\partial f}{\partial x_i}\cdot\frac{\partial f}{\partial x_j}\,\rho_{ij}\,\sigma_{x_i}\sigma_{x_j}    (2.23)

If the random errors of the direct measurements are independent of each other, the correlation coefficients tend to zero when m is sufficiently large, and the formula for the standard deviation of the indirect measurement becomes

\sigma_y = \sqrt{\sum_{i=1}^{n}\left(\frac{\partial f}{\partial x_i}\right)^2 \sigma_{x_i}^2}    (2.24)
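The following small numerical sketch (hypothetical values of my own) applies Eq. (2.24) to the wire resistivity example of Eq. (2.18):

```python
# Error propagation for rho = pi * R * d^2 / (4 * l), Eq. (2.18)/(2.24).
import math

# Assumed direct measurements and their standard deviations (illustrative only)
R, sigma_R = 2.50, 0.01      # ohm
d, sigma_d = 0.50e-3, 2e-6   # m
l, sigma_l = 1.000, 0.001    # m

rho = math.pi * R * d**2 / (4 * l)

# Partial derivatives of rho with respect to each directly measured quantity
d_rho_dR = math.pi * d**2 / (4 * l)
d_rho_dd = math.pi * R * d / (2 * l)
d_rho_dl = -math.pi * R * d**2 / (4 * l**2)

# Eq. (2.24): root-sum-square combination of independent error contributions
sigma_rho = math.sqrt((d_rho_dR * sigma_R)**2 +
                      (d_rho_dd * sigma_d)**2 +
                      (d_rho_dl * sigma_l)**2)

# Equivalent relative form for a product/quotient: add relative variances
rel = math.sqrt((sigma_R / R)**2 + (2 * sigma_d / d)**2 + (sigma_l / l)**2)

print(f"rho = {rho:.4e} ohm*m, sigma_rho = {sigma_rho:.2e} ohm*m")
print(f"relative error: {sigma_rho / rho:.3%} (check: {rel:.3%})")
```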
2.1.5 Synthesis of Measurement Errors

In practical measurement tests, a measured quantity is often affected by several errors introduced by many factors. How to synthesize all these errors in a reasonable way has long been a matter of concern. This section presents several of the more common synthesis methods. To simplify the problem, the errors are assumed to be independent of each other.

1. Algebraic sum method
Taking the algebraic sum of all errors with their signs:

e = \sum_{i=1}^{n} e_i    (2.25)
where e is the synthesized (total) error, e_i are the component errors, and n is the number of error terms. This method is applicable to the synthesis of determined systematic errors, i.e., systematic errors whose magnitude and sign are already known exactly.

2. Absolute value sum method

Summing the absolute values of all errors:

e = \sum_{i=1}^{n} |e_i|    (2.26)
This method overestimates the error because it does not take into account any cancellation between errors; it is the most conservative and the most robust. When the number of component error terms n is large, the probability of all errors adding in the same direction is extremely small, and the mutual cancellation of errors should be taken into account. Therefore, this method is used only when the number of terms is small (n < 10).

3. Square-and-root (root-sum-square) method

Taking the square root of the sum of squares of all errors:

e = \sqrt{\sum_{i=1}^{n} e_i^2}    (2.27)
This method takes full account of the cancellation between errors and is reasonable and simple to use for random errors. However, when the number of error terms is small, the deviation from reality may be large and the synthesized error tends to be underestimated.

4. Generalized square-and-root method

Divide each error by its corresponding confidence coefficient k_i, take the square root of the sum of squares, and then multiply by the total confidence coefficient k:

e = k\sqrt{\sum_{i=1}^{n} (e_i/k_i)^2}    (2.28)
This method considers the specific distribution of each random error, is general and reasonable, but requires prior determination of k and ki , which is cumbersome.
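The following brief sketch (illustrative numbers) compares the results of the first three synthesis methods, Eqs. (2.25)–(2.27), for one assumed set of component errors:

```python
import math

errors = [0.10, -0.05, 0.02, 0.08]   # hypothetical component errors, same units

algebraic_sum = sum(errors)                              # Eq. (2.25): known systematic errors
absolute_sum = sum(abs(e) for e in errors)               # Eq. (2.26): most conservative
root_sum_square = math.sqrt(sum(e**2 for e in errors))   # Eq. (2.27): random errors

print(f"algebraic sum:   {algebraic_sum:+.3f}")
print(f"absolute sum:    {absolute_sum:.3f}")
print(f"root-sum-square: {root_sum_square:.3f}")
```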
2.1.6 Small Error Criterion

Sometimes there are many error terms in an error synthesis, their nature and classification differ, and their estimation is quite tedious. If the total error is essentially unchanged when a particular error term is ignored, that term can be considered a small (negligible) error. If the sizes of the individual errors differ considerably and the number of small error terms is not large, the small errors can be ignored under certain conditions.

1. Small error criterion for systematic errors

The first basis: the law of synthesis of systematic errors. For systematic errors, synthesis by the algebraic sum method gives

e = \sum_{i=1}^{n} e_i = e_1 + e_2 + \cdots + e_k + \cdots + e_n    (2.29)
Let the kth error e_k be a small error, i.e., e_k is small compared with the other errors e_i and negligible compared with the total error e. The total error e' after neglecting e_k is

e' = e_1 + e_2 + \cdots + e_{k-1} + e_{k+1} + \cdots + e_n    (2.30)

and e − e' = e_k. According to the definition of a small error, if e_k is small, then

e \approx e'    (2.31)

The second basis: the number of significant digits used to represent the total error value. According to the rules for valid numbers:

(1) When the total error is given to one valid digit, e_k is negligible if

e_k < (0.1 \sim 0.05)e    (2.32)

(2) When the total error is given to two valid digits, e_k is negligible if
e_k < (0.01 \sim 0.005)e    (2.33)

2. Small error criterion for random errors

The first basis: the law of synthesis of random errors. For random errors, synthesis by the square-and-root method gives

e = \sqrt{\sum_{i=1}^{n} e_i^2} = \sqrt{e_1^2 + e_2^2 + \cdots + e_k^2 + \cdots + e_n^2}    (2.34)

Let the kth error e_k be small, i.e., e_k is small compared with the other errors e_i and negligible compared with the total error e. After neglecting e_k, the total error e' is

e' = \sqrt{e_1^2 + e_2^2 + \cdots + e_{k-1}^2 + e_{k+1}^2 + \cdots + e_n^2}    (2.35)

and e^2 − e'^2 = e_k^2.

The second basis: the number of significant digits of the total error value.

(1) When the total error is given to one valid digit, if e − e' < (0.1 ∼ 0.05)e, then e' > (0.9 ∼ 0.95)e and e'^2 > (0.81 ∼ 0.9025)e^2, so e_k^2 = e^2 − e'^2 < (0.19 ∼ 0.0975)e^2, i.e., e_k < (0.436 ∼ 0.312)e. That is, when a component error e_k is less than roughly 1/3 of the total error e, it is negligible.

(2) When the total error is given to two valid digits, if e − e' < (0.01 ∼ 0.005)e, then e' > (0.99 ∼ 0.995)e and e'^2 > (0.9801 ∼ 0.990025)e^2, so e_k^2 = e^2 − e'^2 < (0.0199 ∼ 0.009975)e^2, and thus e_k < (0.14 ∼ 0.1)e
That is, e_k is negligible when a component error e_k is roughly one order of magnitude smaller than the total error e. The small error criterion is of practical importance in calculating the total error and in selecting higher-level standards. When calculating the total error or the error distribution, the effect of an identified small error on the total error may be ignored. In addition, when calibrating an instrument, the error of the standard instrument can be neglected and its value treated as the "true value". In calculating the error of an indirect measurement, if the part of the error that constitutes a small error can be identified according to the small error criterion, the error calculation can be simplified.
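As a quick numerical illustration (hypothetical values): with two random error components e_1 = 0.10 and e_2 = 0.03, Eq. (2.34) gives e = √(0.10² + 0.03²) ≈ 0.104; since e_2 < e/3 ≈ 0.035, neglecting it leaves e' = 0.10, which is unchanged when the total error is quoted to one significant digit.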
2.2 Data Processing

An indispensable part of metrological work is the recording and collation of observed data, leading to a comprehensive analysis and discussion of the measurement process from which the patterns and conclusions of the problem under study are drawn; this is the purpose of data processing. Data processing is thus an important part of metrological testing. Data processing should neither exaggerate nor degrade the test accuracy of the measured quantity, but should truly reflect the objective test results. Only with reasonable data processing can reasonable results be obtained.
2.2.1 Valid Figures

1. Concepts related to valid (significant) figures

Valid digits: if a number consists of digits that are all reliable or exact except for the last one, which is inexact or doubtful, then all of the digits making up that number, including the last one, are called valid digits.

Superfluous digits: digits other than the valid digits.

Number of valid digits: the number of digit positions occupied by the valid digits of a number.

There are two types of numbers:
(1) Numbers all of whose digits are valid, each digit being exactly determined.
(2) Numbers with a finite number of valid digits, which appropriately represent the quantity to be expressed or the precision actually attainable. The number of digits is influenced by several factors, namely the number of significant digits adopted for the values and the precision that can actually be achieved.

A few notes on the digit "0":
(a) Zeros in front of an integer carry no information and are superfluous digits.
(b) For a pure decimal, the zeros after the decimal point that precede the first nonzero digit only serve to locate the decimal point and determine the unit.
(c) A "0" in the middle of a number is a valid digit.
(d) For a "0" at the end of a number, the general convention is that trailing zeros are valid digits.

To indicate clearly the number of valid digits of measurement and test data, the form k × 10^m can be used, where m is any integer and k is a number greater than 1 and less than 10; the number of digits of k is then the number of valid digits.

2. Rounding rules for valid numbers

(1) When the superfluous digits follow the integer part that is to be kept, the superfluous digits are discarded and replaced by zeros, and the number is expressed as a power of 10. For a decimal or fractional number, the extra digits are simply discarded.

(2) The "even rounding" (round-half-to-even) rule. The principle is that, since a discarded digit of 0 to 9 should lead to rounding up or not with equal probability, the half-way case must be split evenly (a brief numerical sketch of this rule is given at the end of this subsection):
(a) If the first discarded digit is less than 5, all discarded digits are simply dropped.
(b) If the first discarded digit is 5 and all subsequent digits are 0 or absent, drop the discarded digits when the last retained digit is even or 0; when the last retained digit is odd, add 1 to it.
(c) If the first discarded digit is greater than 5, or is 5 followed by digits that are not all 0, add 1 to the last retained digit.

(3) In some special cases where many values have "5" as the first discarded digit, they need not be treated according to the even rounding principle (which, applied mechanically, would produce cumulative errors in the same direction): half of the values are rounded up and the other half are not rounded at all. This avoids excessive rounding errors.

(4) The ordinary "rounding" principle.

3. Rules for operations with valid numbers

(1) Addition and subtraction: the number of digits retained after the decimal point in the result should be the same as in the datum with the fewest digits after the decimal point among those involved in the operation.
(2) Multiplication and division: the datum with the fewest valid digits has the largest relative error, and the number of valid digits of the result should be the same as that of this datum.
(3) Powers and roots: the result should retain the same number of valid digits as the original number.
(4) Logarithms: the number of digits taken should be the same as the number of valid digits of the argument.
(5) Trigonometric functions: the number of digits of the function value should be increased as the error of the angle decreases.
(6) Other cases.
(a) In intermediate results of a calculation, 1 to 2 more digits than the number of valid digits may be retained, in order to reduce the influence of rounding errors.
(b) When calculating an average of four or more numbers, the number of valid digits of the average may be increased by one.
(c) In computational formulas, error-free constants can be regarded as having an unlimited number of valid digits; that is, in the calculation they may be written with as many digits as needed.
(d) In a calculation, if the first digit of a valid number is 8 or 9, the number of valid digits may be counted as one more.
4. Valid figures for test results
When expressing precision, in most cases only one valid digit is taken, and at most two. According to the definition of a valid digit, the last valid digit of a test result should generally be of the same order of magnitude as the last digit of the accuracy parameter. The number of valid digits therefore basically reflects the accuracy of the test. If the test results are highly reproducible, one digit more than the number of valid digits may be kept, if necessary, for reference purposes.
5. Valid figures for standard deviation
The number of valid digits of the standard deviation can be determined from
σ_σ = σ/√(2(n − 1))
(2.36)
Let n = 9; then we have
σ_σ = σ/√(2(n − 1)) = σ/4
(2.37)
According to the operation rules for valid digits, σ can have at most two valid digits; if the first valid digit of σ is 8 or 9, then only one valid digit should be kept.
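As an illustrative sketch (not from the book), the "even rounding" rule of item 2 above can be expressed with Python's Decimal module, which implements round-half-to-even; the helper name round_significant and the sample values are invented for the example:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_significant(value, sig_digits):
    """Round `value` to `sig_digits` valid digits using the
    round-half-to-even ("even rounding") rule described above."""
    if value == 0:
        return Decimal(0)
    d = Decimal(str(value))
    # Exponent of the most significant digit.
    msd_exp = d.adjusted()
    # Quantize so that exactly `sig_digits` digits are kept.
    quantum = Decimal(1).scaleb(msd_exp - sig_digits + 1)
    return d.quantize(quantum, rounding=ROUND_HALF_EVEN)

# The retained last digit stays unchanged when it is even and the discarded part is exactly 5:
print(round_significant(2.45, 2))   # 2.4  (last kept digit 4 is even)
print(round_significant(2.35, 2))   # 2.4  (3 is odd, so add 1)
print(round_significant(2.451, 2))  # 2.5  (digits after the 5 are non-zero)
```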
2.2.2 Coarse Error Handling 1. Definition
Errors that exceed what is expected under the specified measurement conditions are called gross errors.
2. Sources
Sources beyond the specified limits include errors by the personnel, malfunctioning of measuring instruments and equipment, and abnormal influencing quantities. The main causes are as follows.
(1) Subjective causes related to the measurement personnel. Lack of experience, improper operation, fatigue, impatience or carelessness can lead to wrong readings, wrong records or wrong calculations of the results and thus cause gross errors.
(2) Objective causes related to external conditions. Unexpected changes in the measurement conditions (such as mechanical shock or external vibration) can cause abnormal changes in the indicated value of the instrument.
If a data value is found to contain a gross error, it may be crossed out of the record, provided the original record is retained and the reason is given. It is not correct to cross out a figure at random without stating the reason. After the measurement has been carried out, special care should be taken in deciding whether a value contains a gross error; adequate analysis and research should be carried out, using gross error rejection criteria as the basis for rejecting outliers.
3. Gross error rejection criteria
Commonly used gross error rejection criteria include the 3σ criterion, the Grubbs criterion, the Dixon criterion, the Romanovsky criterion and the Chauvenet criterion.
(1) 3σ criterion (Leiter criterion)
For a measurement column x1, x2, …, xn, if each measured value contains only random errors that follow a normal distribution, then according to the normal distribution of random errors, the probability that a residual falls outside ±3σ is about 0.27%, i.e.,
P{|xi − x̄| > 3σ} = 0.27%
(2.38)
That is, in a finite number of repeated measurements it is highly unlikely that a residual falls outside ±3σ; it is therefore treated as a practically impossible event. Thus, if the residual υi = xi − x̄ of a measured value xi satisfies
|υi| > 3σ
(2.39)
for i = 1, 2, …, n, then xi is considered to contain a gross error and should be excluded. Note: this criterion should not be applied when the number of measurements is less than 10.
(2) Grubbs criterion
This is a criterion that uses the Grubbs statistic to discriminate whether an outlier is suspicious; it presupposes that the random errors obey a normal distribution. Let multiple independent measurements of equal precision be made of a fixed quantity, giving a column of measurements x1, x2, …, xn. With xi obeying the normal distribution, find σ. The measurement column is then rearranged in order of magnitude into the order statistics
x(1) ≤ x(2) ≤ ⋯ ≤ x(n)
(2.40)
Then the measured values at the left and right ends are the most likely to contain gross errors. The Grubbs statistic is defined as
g(i) = |x(i) − x̄|/σ = |υ(i)|/σ
(2.41)
and its threshold is g0(n, a), where n is the number of measurements and a is the significance level, i.e., the probability of making a "discard the true value" error; g0(n, a) can be obtained by looking up a table. After computing the Grubbs statistic g(i) for the end values x(i) (i = 1 or n) of the ordered measurement column, if it satisfies
g(i) = |x(i) − x̄|/σ = |υ(i)|/σ ≥ g0(n, a)
(2.42)
then the statistic g(i) is considered significantly different from the distribution that the statistic g should obey, the corresponding x(i) contains a gross error, and x(i) is an outlier that should be discarded. If
g(i) < g0(n, a)
(2.43)
then the corresponding x(i) does not contain a gross error. Using the Grubbs criterion, only one suspicious value can be rejected at a time, and the discrimination must be repeated until no gross error remains in the measured values. The Grubbs criterion overcomes the shortcomings of the 3σ criterion, gives more rigorous results in a probabilistic sense, and is considered one of the better criteria.
(3) Dixon criterion
The literature [10] presents Dixon's criterion, which detects doubtful values by hypothesis testing with statistics formed from ratios of ranges. Let a fixed quantity be measured n times independently with equal precision, giving a measurement column xi, i = 1, 2, …, n, and let this measurement column obey a normal distribution. Rearrange the measurement series according to the magnitude of the measured values:
x(1) ≤ x(2) ≤ · · · ≤ x(n)
(2.44)
Among these measured values, the most questionable are x (1) and x (n) , since they deviate the farthest from the arithmetic mean. Find the corresponding critical value r 0 (n,a) based on the corresponding n and a. If the statistic rjk satisfies rjk ≥ r0 (n, a)
(2.45)
then the statistic rjk is considered significantly different from the distribution it should obey, the corresponding x(1) or x(n) is judged to contain a gross error, and it should be excluded. The characteristic of this criterion is that it does not require the calculation of the standard deviation and is relatively easy to use. When a data value has been rejected using Dixon's criterion, the statistic should be recalculated from the remaining ordered values and the next suspect value tested, until no gross error remains.
(4) Romanovsky criterion
It is more reasonable to discriminate gross errors by the actual error distribution range given by the t-distribution, so this is also called the t-test criterion. A suspicious measured value is first rejected, and then whether the rejected measurement contains a gross error is tested according to the t-distribution. Let multiple independent measurements of equal precision be made of a fixed quantity, giving a column of measurements x1, x2, …, xn. If the measurement xk is considered suspect, it is rejected and the mean of the remaining values is calculated:
x̄ = (1/(n − 1)) Σ_{i=1, i≠k}^{n} xi
(2.46)
The standard deviation of the measurement column excluding xk is
σ = √( (1/(n − 2)) Σ_{i=1, i≠k}^{n} υi² )
(2.47)
Based on the number of measurements n and the chosen significance level a (taken as 0.05 or 0.01), the t-test coefficient K(n, a) can be obtained by looking up a table. If the residual of xk satisfies
|υk| = |xk − x̄| > K(n, a)·σ
(2.48)
then the measured value xk is considered to contain a gross error and should be excluded.
(5) Chauvenet criterion
Fig. 2.1 Probability distribution of |υi| > Zc·σ
The Chauvenet criterion is also predicated on a normal distribution. Suppose that, among the n measured values obtained from repeated measurements, the residual of one data value satisfies
|υi| > Zc·σ
(2.49)
then this data value is excluded, as shown in Fig. 2.1. In practice Zc < 3 when n ≤ 185, which compensates to some extent for the weakness of the 3σ criterion. Zc is a function of n: the smaller n is, the smaller Zc is, and the larger n is, the larger Zc is, although the relationship is not a simple proportional one. The value of Zc can be obtained from a table on the basis of the number of measurements n. When n > 185, Zc becomes greater than 3, at which point the Chauvenet criterion is less stringent than the 3σ criterion; when n → ∞, the Chauvenet criterion cannot be applied because Zc → ∞.
(6) Comparison of several gross error rejection criteria
(a) The 3σ criterion is simple, does not require a table, and is easy to use; it can be used when the number of measurements is large or when the requirements are not too demanding.
(b) The Chauvenet criterion is the classical method and was applied frequently in the past, but it has no fixed probabilistic meaning; in particular the criterion fails as n → ∞, i.e., it does not work well when the number of measurements is large.
(c) The Grubbs criterion, the Dixon criterion and the t-test criterion give more rigorous results. These three criteria should be used for measurement columns with a small number of measurements and high requirements. Among them, the Grubbs criterion is the most reliable and usually discriminates best when the number of measurements is n = 20 ∼ 100; the t-test criterion can be used when the number of measurements is small; and the Dixon criterion can be used if the measured values containing gross errors are to be identified quickly from the measurement column.
When applying the above criteria to discriminate gross errors, it is important to note that if two data values are found to contain gross errors at the same time,
only the one containing the largest error can be rejected; if the two values are equal, only one of them can be rejected. In other words, only one data value can be eliminated at a time. After that, the judgment is repeated for the remaining (n − 1) data until no suspicious data remain. Data that exceeded the bounds at the same time as the rejected value in the previous round may no longer exceed the bounds after recalculation, which is why only one out-of-bounds value may be eliminated at a time. The literature [11] points out that each criterion presupposes that the data follow a normal distribution, and that the reliability of the judgments suffers when the data deviate from the normal distribution, especially when the number of measurements is small. Therefore, to deal with gross errors, in addition to detecting them promptly from the measurement results and identifying them with the rejection criteria, it is more important to improve the skill and sense of responsibility of the staff. In addition, it is important to ensure stable measurement conditions and to guard against the effects of sudden, drastic changes in the environmental conditions.
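The one-value-per-pass rejection procedure described above can be sketched for the 3σ criterion as follows; this is an illustrative outline only, assuming roughly normally distributed data and n ≥ 10 (the helper name reject_3sigma and the sample values are invented for the example):

```python
import math

def reject_3sigma(data):
    """Iteratively apply the 3-sigma criterion described above: per pass,
    recompute the mean and the Bessel-corrected standard deviation, then
    remove at most one value whose residual exceeds 3*sigma
    (intended for n >= 10)."""
    data = list(data)
    rejected = []
    while len(data) > 2:
        n = len(data)
        mean = sum(data) / n
        sigma = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
        residuals = [abs(x - mean) for x in data]
        worst = max(range(n), key=residuals.__getitem__)
        if residuals[worst] > 3 * sigma:
            rejected.append(data.pop(worst))   # only one value per pass
        else:
            break
    return data, rejected

measurements = [20.42, 20.43, 20.40, 20.43, 20.42, 20.43, 20.39, 20.41,
                20.40, 20.43, 20.42, 20.41, 20.42, 20.10, 20.43]
kept, dropped = reject_3sigma(measurements)
print("rejected:", dropped)    # the value 20.10 is rejected; the rest are kept
```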
2.2.3 Simple Processing of the Results Obtained from the Measurement
1. Treatment of values obtained from equal precision measurements
(1) Definition of equal precision measurement
An equal precision measurement is one carried out under unchanged test conditions, so that every measurement has the same precision; in general, the measurements are regarded as being of equal precision when their standard deviations are the same.
(2) The processing steps of equal precision measurement results
After a quantity has been measured independently n times with equal precision, and assuming that the systematic errors have been eliminated by appropriate measures and the gross errors have been removed, the processing steps are:
(a) Find the average value.
x̄ = (1/n) Σ_{i=1}^{n} xi
(2.50)
(b) Find the residuals.
υi = xi − x̄
(2.51)
(c) Find the standard deviation of a single measurement.
σ = √( (1/(n − 1)) Σ_{i=1}^{n} υi² )
(2.52)
(d) Find the standard deviation of the mean.
σx̄ = σ/√n = √( (1/(n(n − 1))) Σ_{i=1}^{n} υi² )
(2.53)
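As a small illustrative sketch (not from the book), the processing steps of Eqs. (2.50) to (2.53) can be written directly in Python; the function name and sample data are invented for the example:

```python
import math

def equal_precision_summary(x):
    """Processing steps of Eqs. (2.50)-(2.53) for equal-precision data:
    mean, residuals, standard deviation of a single measurement, and
    standard deviation of the mean."""
    n = len(x)
    mean = sum(x) / n                                            # Eq. (2.50)
    residuals = [xi - mean for xi in x]                          # Eq. (2.51)
    sigma = math.sqrt(sum(v * v for v in residuals) / (n - 1))   # Eq. (2.52)
    sigma_mean = sigma / math.sqrt(n)                            # Eq. (2.53)
    return mean, residuals, sigma, sigma_mean

data = [10.002, 10.001, 9.999, 10.003, 10.000, 9.998]
mean, _, sigma, sigma_mean = equal_precision_summary(data)
print(f"mean = {mean:.4f}, sigma = {sigma:.4f}, sigma_mean = {sigma_mean:.4f}")
```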
(3) Simple algorithm for the arithmetic mean
When the measured data have many valid digits, calculating the arithmetic mean directly from the formula for the mean is tedious and error-prone. Suppose the measurement of x is repeated n times with equal precision and the measured data x1, x2, …, xn are obtained. To simplify the calculation of the arithmetic mean, choose a value x0 close to the measured data and subtract it from each measurement: xi' = xi − x0, i = 1, 2, …, n. Then we have
Σ_{i=1}^{n} xi = Σ_{i=1}^{n} (x0 + xi') = n·x0 + Σ_{i=1}^{n} xi'
(2.54)
consequently
x̄ = x0 + (1/n) Σ_{i=1}^{n} xi'
(2.55)
That is, the arithmetic mean can be expressed as the sum of x0 and the arithmetic mean of the xi'. The value of x0 should be chosen so that the values of xi' are as small as possible and easy to calculate.
2. Treatment of values obtained from unequal precision measurements
(1) Definition of unequal precision measurement
Measurements made under different conditions (e.g., environment, method, instrument and personnel) or with different numbers of measurements generally do not have equal accuracy and are called unequal precision measurements. In unequal precision measurement, each measured value has a different degree of confidence, so the data processing method differs from that of equal precision measurement.
(2) The weight of the measurement data
To facilitate data processing, the difference in the level of accuracy of the data should be expressed as a numerical value, which is the weight of the measured data. The weight of a measured value indicates its degree of confidence relative to the other data: the higher the accuracy of a value, the higher its degree of confidence and the greater its weight; conversely, the lower the accuracy, the smaller the weight. The level of precision of the measurement data is the basic starting point for determining the size of the weight. Since the accuracy of measurement data is characterized by the standard deviation, the weight of the measurement data can be determined from its standard deviation. Let the standard deviations of the unequal precision measurement data x1, x2, …, xn be σ1, σ2, …, σn respectively. The corresponding weights should satisfy
P1 : P2 : ⋯ : Pn = 1/σ1² : 1/σ2² : ⋯ : 1/σn²
(2.56)
or, equivalently,
P1·σ1² = P2·σ2² = ⋯ = Pn·σn²
(2.57)
The two equations above give the general method for determining the weights: the weight of a measurement is inversely proportional to the square of its standard deviation. The weights themselves are dimensionless; they reflect only the relative degree of confidence between the individual measurements, and their absolute magnitude is irrelevant as long as the two equations above are satisfied. This is the relative nature of the weights. It should be noted, however, that once the values of the weights have been determined, they must not be changed arbitrarily during data processing. In general, for simplicity of processing, the values of the weights should be made as simple as possible. If the standard deviation of the individual measurements is σ, and m groups of measurements are taken, with the number of measurements in each group being n1, n2, …, nm, then the weights of the arithmetic means of the groups should satisfy
P1 : P2 : ⋯ : Pm = 1/σ_x̄1² : 1/σ_x̄2² : ⋯ : 1/σ_x̄m² = n1/σ² : n2/σ² : ⋯ : nm/σ²
(2.58)
It follows that the ratio of the weights of the arithmetic means of the groups is equal to the ratio of the numbers of measurements in the groups. Thus, weights can be based on the precision of the measurements or, sometimes, on the number of measurements. If they are based on the precision, the weight of a measurement should be inversely proportional to its variance, i.e.,
P = C/σ²
(2.59)
If the number of measurements is used to determine this, the weight should be proportional to the number of measurements, i.e., P =C ·n
(2.60)
In the above two equations, C is a scale factor that can be chosen arbitrarily, the guiding principle being to make the calculation convenient.
(3) Weighted arithmetic mean
Let a quantity X be measured n times with unequal precision, giving the measurements x1, x2, …, xn, and let the weights of the measurements be P1, P2, …, Pn. Then the best estimate of the measured X is the weighted arithmetic mean of all the measurements:
x̄P = (P1·x1 + P2·x2 + ⋯ + Pn·xn)/(P1 + P2 + ⋯ + Pn) = Σ_{i=1}^{n} Pi·xi / Σ_{i=1}^{n} Pi
(2.61)
This is the weighted arithmetic mean principle, and x̄P is an unbiased estimate of the measured X. Similar to the arithmetic mean, the weighted arithmetic mean relies on the mutual cancellation of random errors; treating unequal precision measurements according to the weighted arithmetic mean principle therefore minimizes the effect of random errors. No such cancellation occurs for a systematic error that is the same in every measurement. However, unequal precision measurements are often obtained with different measurement methods, so the individual measurements often contain different systematic errors. Since these systematic errors are not caused by the same factor, they differ from one another and partly offset each other in the measurement result.
The weighted arithmetic mean can also be calculated with the simplified algorithm. Let xi = x0 + xi'; then we have
x̄P = (P1·x1 + P2·x2 + ⋯ + Pn·xn)/(P1 + P2 + ⋯ + Pn) = x0 + Σ_{i=1}^{n} Pi·xi' / Σ_{i=1}^{n} Pi
(2.62)
(4) Unit weight and standard deviation of unit weight
If the weight of a particular data value xk is Pk = 1, then Pk is called the unit weight, and the standard deviation σk of xk is called the standard deviation of unit weight, denoted σ0:
σ0 = √( (1/(n − 1)) Σ_{i=1}^{n} υi'² ) = √( (1/(n − 1)) Σ_{i=1}^{n} Pi·υi² )
(2.63)
Obviously, from
P1·σ1² = P2·σ2² = ⋯ = Pn·σn²
(2.64)
we obtain
P1·σ1² = P2·σ2² = ⋯ = Pn·σn² = σ0²
(2.65)
and then
σ0 = σi·√Pi ,   i = 1, 2, …, n
(2.66)
Let a quantity be measured with unequal precision, giving the measurements x1, x2, …, xn with weights P1, P2, …, Pn respectively; the residual of each measurement is then
υi = xi − x̄P
(2.67)
The weighted residuals are obtained by multiplying each residual by the square root of its respective weight:
υi' = υi·√Pi
(2.68)
The weighted residual of any data value has a weight of 1. The formula for the unit weight standard deviation is then obtained by substituting the weighted residuals into the Bessel formula:
σ0 = √( (1/(n − 1)) Σ_{i=1}^{n} υi'² ) = √( (1/(n − 1)) Σ_{i=1}^{n} Pi·υi² )
(2.69)
(5) Standard deviation of the weighted arithmetic mean
Since the weighted arithmetic mean itself also contains random errors, its accuracy should also be assessed by its standard deviation:
σ_x̄P = √( (1/(n − 1)) · Σ_{i=1}^{n} Pi·υi² / Σ_{i=1}^{n} Pi )
(2.70)
(6) Processing steps for unequal precision measurement results
Assuming that each measurement contains no systematic or gross errors, the steps for processing the results of unequal precision measurements are as follows.
(1) Find the average.
x̄P = Σ_{i=1}^{n} Pi·xi / Σ_{i=1}^{n} Pi
(2.71)
(2) Find the residuals.
υi = xi − x̄P
(2.72)
(3) Find the standard deviation of a single measurement.
σ0 = √( (1/(n − 1)) Σ_{i=1}^{n} Pi·υi² )
(2.73)
(4) Find the standard deviation of the mean.
σ_x̄P = σ0 / √( Σ_{i=1}^{n} Pi ) = √( (1/(n − 1)) · Σ_{i=1}^{n} Pi·υi² / Σ_{i=1}^{n} Pi )
(2.74)
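A corresponding sketch for unequal precision data, implementing Eqs. (2.71) to (2.74) with weights chosen as in Eq. (2.59) or Eq. (2.60) with C = 1, may look as follows (the function name and the numbers are invented for illustration):

```python
import math

def unequal_precision_summary(x, sigma=None, n_meas=None):
    """Processing steps of Eqs. (2.71)-(2.74). Weights are taken either as
    P_i = 1 / sigma_i**2 (Eq. 2.59 with C = 1) or as P_i = n_i (Eq. 2.60 with C = 1)."""
    if sigma is not None:
        weights = [1.0 / s ** 2 for s in sigma]
    else:
        weights = [float(k) for k in n_meas]
    n = len(x)
    x_p = sum(p * xi for p, xi in zip(weights, x)) / sum(weights)        # Eq. (2.71)
    residuals = [xi - x_p for xi in x]                                   # Eq. (2.72)
    sigma0 = math.sqrt(sum(p * v * v for p, v in zip(weights, residuals))
                       / (n - 1))                                        # Eq. (2.73)
    sigma_xp = sigma0 / math.sqrt(sum(weights))                          # Eq. (2.74)
    return x_p, sigma0, sigma_xp

# Three values with different standard deviations:
x_p, s0, s_xp = unequal_precision_summary(
    x=[5.0010, 5.0018, 5.0005], sigma=[0.0010, 0.0020, 0.0015])
print(f"weighted mean = {x_p:.5f}, sigma_xP = {s_xp:.5f}")
```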
2.2.4 Least Squares Method
Among the methods proposed in the literature [12] for determining unique solutions for certain physical parameters through a series of observations of parameters of a known mathematical model, the least squares method is a universal standard method. As a basic method of experimental data processing, the least squares method gives a guideline for data processing: the best result (or most trustworthy value) in the least squares sense is the one that minimizes the sum of squares of the residuals. The whole set of theories and methods established on this criterion provides a powerful tool for the processing of experimental data and has become one of the basic elements of experimental data processing, with very wide application. Least squares means that the best value of the measured quantity (denoted by x0) should be such that the sum of squares of its differences from the measured values is minimal, i.e.,
Σ_{i=1}^{n} Pi·(xi − x0)² = Σ_{i=1}^{n} Pi·υi² = min
(2.75)
This is the basic principle of the least squares method. For a measure of equal precision measurement, the best value is the value that minimizes the sum of the squares of the errors of all the measures. Thus, for a series of measures of equal precision, their arithmetic mean is the best or most trustworthy value, with the sum of the squares of the deviations of each measure
from the arithmetic mean being the smallest. The best value of a measure when measured independently with unequal precision is the weighted arithmetic mean, which is consistent with the principle of least squares.
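As a quick, purely illustrative numerical check of this principle (all values below are invented), one can verify that the weighted arithmetic mean makes the weighted sum of squared residuals of Eq. (2.75) no larger than nearby trial values:

```python
# Illustrative check that the weighted arithmetic mean minimizes
# the weighted sum of squared residuals of Eq. (2.75).
x = [5.0010, 5.0018, 5.0005]
P = [1.0, 0.25, 0.44]          # arbitrary weights for the example

def wssr(x0):
    return sum(p * (xi - x0) ** 2 for p, xi in zip(P, x))

x_p = sum(p * xi for p, xi in zip(P, x)) / sum(P)
# The weighted mean gives a smaller (or equal) sum than any nearby trial value:
assert all(wssr(x_p) <= wssr(x_p + d) for d in (-1e-4, -1e-5, 1e-5, 1e-4))
print(f"x_P = {x_p:.5f}, min weighted SSR = {wssr(x_p):.3e}")
```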
2.3 Various Test Measures and Their Classification
Test and measurement methods are among the most important tools needed to determine test protocols and to complete instrument design work. To produce good test and measurement methods, one should not only be familiar with the object to be measured and its characteristics, but should also have a broad understanding of the basic methods for dealing with various measurement problems and of the connections between them. Any measurement problem, complex or simple, admits a variety of measurement methods; mastering scientific methods of testing and measurement therefore yields significant gains in efficiency and effectiveness. The measurement method is an important pillar supporting the high accuracy of equipment. In the process of testing and measurement, different quantities, or different values of the same quantity, call for the corresponding measurement principles, and different measurement methods should be chosen according to their characteristics and accuracy requirements. Precise measuring instruments are often based on basic physical phenomena and the regular features of nature. In this section, a brief introduction to the most basic measurement methods in common use is given, in the hope that some common problems will be identified and that ideas for new measurement solutions will be opened up.
2.3.1 Test Measurement Methods
In the process of testing and measurement, different quantities or different values of the same quantity call for the corresponding measurement principles, and different measurement methods should be chosen according to their characteristics and accuracy requirements. "Metrological principles are the scientific basis of metrological methods." For example, the thermoelectric effect is applied in temperature metrology, the Josephson effect in voltage metrology, the Doppler effect in velocity metrology and the principle of optical interference in length metrology. These are all based on the physical and chemical properties corresponding to the quantity being measured. The measurement of different physical quantities has to be combined with the regular characteristics of the different physical quantities themselves on the one
hand, while on the other hand there exist similar ways to improve the accuracy of measurements and to detect error factors. The combination of these two aspects generates general ideas for experimental schemes and for instrument design for the measurement of different quantities. Therefore, mastery of the common test and measurement methods is one of the foundations for accomplishing the measurement of different physical quantities and instrument design in a broad sense, and mastering these methods through examples can play a key role in the design of a wide range of instruments. What is presented in this section is thus one of the common elements in the measurement of different quantities and in the design of instruments. These test and measurement methods are the most fundamental in the field and are essential for anyone working in test and measurement and instrument design. The choice of a test measurement method requires consideration of various aspects such as the measurement accuracy, the range of measurement, the response time of the measurement and the cost of the solution or instrument to be designed. With different measurement methods, the relationship between the error components contained in the experimental results and the measurement error of the measured quantity will not be exactly the same, and therefore the attainable accuracy is often different; with high-accuracy measurement in mind, effective processing of the experimental data can yield comparatively higher accuracy. In the development of modern science and technology there are a large number of measurement problems that demand very high precision. For example, navigation and positioning, time synchronization, precision voltage detection, etc. often require precisions of the order of meters to centimeters, nanoseconds to picoseconds, and microvolts to nanovolts, while the quantities involved span millions of kilometers, seconds, minutes and days, and kilovolts. That is, relative measurement errors are often required to be as small as 10⁻⁸, 10⁻⁹ and even 10⁻¹⁴ to 10⁻¹⁶. However, there is a huge gap between these requirements and the accuracy of the A/D and D/A converters usually used in direct measurement; the accuracy achievable by various counters, etc. is also very limited. For particularly high accuracy it is difficult to achieve significant results simply by improving the performance of devices or equipment; instead, it is the improvement of methods and solutions that achieves the goal.
1. Direct and indirect measurement methods
(1) Direct measurement method
"A method of measurement in which the value of the quantity to be measured is obtained directly, without the need to measure other quantities that are functions of the quantity to be measured" is called "direct measurement". That is, the measurement result is obtained directly from the experimental operation. This can be expressed as
A = X
(2.76)
where A is the measured quantity and X is the result obtained directly from the experiment.
The method of linearly converting the measured quantity into an electrical or digital display using a sensor can be understood as a direct metering method on the basis of Eq. (2.76). The measurement error of this measurement method is ΔA = ΔX
(2.77)
From this equation it can be seen that the errors of the experimental results are converted 100% into errors of the measured quantity. This is the reason why the measurement accuracy of this method is often not very high. In direct comparative measurement, the measuring instrument gives the value of the quantity being measured directly. In high-precision measurement or testing, supplementary measurements are required to determine the values of the influencing quantities in order to eliminate systematic errors contained in the measurement results. Even so, this type of measurement is still a direct measurement method. It is the most characteristic and most widely used test measurement method. The measurements obtained by this method are straightforward and convenient, the equipment used is not necessarily complex, and in most cases the range of measurement can be very wide, without problems of time response. A typical example of direct metrology is the measurement of frequency using the pulse filling method proposed in the literature [13, 14]. The block diagram of the implementation is shown in Fig. 2.2. In this frequency meter, the accurate and stable frequency signal generated by the standard-frequency constant-temperature quartz crystal oscillator is divided to produce an accurate time base signal (e.g., a one-second signal), which is used to control a gate. After the measured signal passes through this gate, it is counted by a counter.
Fig. 2.2 Digital frequency meter with direct counting method
When the time base signal is 1 s, the count result is numerically equal to the frequency of the signal under test. Depending on whether the measured frequency is high or low, the measurement is carried out differently.
(1) fx is higher. If the measured signal frequency is high, a lower-frequency gate signal should be used, which ensures a sufficiently large count value over the gate time; the frequency is then obtained from
fx = N/T
(2.78)
During the measurement, the pulses inside the gate are counted; N is the number of pulses counted by the counter within the gate time T. From the error synthesis method, we get
Δfx/fx = ΔN/N − ΔT/T
(2.79)
where ΔN/N is the quantization error, one of the most common errors in digital instruments. From the above equation, the error is mainly determined by T and N. Since the most important component in the measurement circuit is the counter, the counter has the greatest influence on the accuracy of the whole measurement; selecting a counter with good performance improves the measurement accuracy of the measured signal to some extent and effectively limits the error range. The quantization error arises because the gate signal and the measured signal cannot be kept synchronized in phase: a phase deviation occurs at the opening and closing of the gate, so the pulses are not guaranteed to fill the whole gate time exactly, which easily produces a ±1 count error. This error is random, and its maximum value can be expressed as
ΔN/N = ±1/N = ±1/(T·fx)
(2.80)
ΔT/T denotes the relative error of the gate time. This error is mainly due to the accuracy of the standard signal provided by the quartz crystal, and the following equation can be obtained:
ΔT/T = −Δfc/fc
(2.81)
The required accuracy of the frequency scale is based on the required accuracy of the measured signal; usually the accuracy of the frequency scale signal is one order of magnitude higher than that required of the measured signal, so that the relative
error generated by the frequency scale signal will not affect the final measurement results. For the overall measurement error, we use a summation of absolute values by division, i.e., ) (| | | 1 | |Δfc | Δfx (2.82) = ± || || + |fc | fx Tfx Therefore, when fx is unchanged, the analysis of the above equation shows that the larger the value of T, the smaller the corresponding error; if the value of T is unchanged, the higher the value of fx , the corresponding total value of the error will be reduced. (2) fx lower. When the measured signal frequency is lower, the measured signal is used as the gate signal and the reference signal is used as the pulse signal for the filling time inside the gate, and the measured frequency value is expressed by the following equation fx =
1 NT0
(2.83)
where N is the count value and T0 is the period of the filling pulses. Neglecting the accuracy and error of the pulse signal itself, the measurement resolution is
Δfx/fx = ±fx/f0
(2.84)
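A small sketch of this error budget, combining the quantization term of Eq. (2.80) with the time-base term of Eq. (2.81) as in Eq. (2.82), may help; the function name and the numerical values are chosen only for illustration:

```python
def direct_count_error(f_x, gate_time, ref_instability):
    """Relative error bound of direct frequency counting, Eq. (2.82):
    |df/f| <= 1/(T*fx) + |dfc/fc| (quantization term plus time-base term)."""
    return 1.0 / (gate_time * f_x) + abs(ref_instability)

# A 1 MHz signal counted over a 1 s gate with a 1e-8 time base:
print(f"{direct_count_error(1e6, 1.0, 1e-8):.2e}")   # ~1.01e-06
# Lengthening the gate to 10 s shrinks the quantization term tenfold:
print(f"{direct_count_error(1e6, 10.0, 1e-8):.2e}")  # ~1.10e-07
```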
The main advantage of the direct counting method of frequency measurement is that the measured count value is obtained directly; the disadvantage is that it is prone to a ±1 error in the last count digit. It follows that the direct frequency measurement method can be used when the measured frequency is high and the gate time is long, but for lower-frequency signals this method is prone to large errors.
(2) Indirect measurement method
"A method of measurement in which the value of the quantity to be measured is obtained by measuring other quantities that are functions of the quantity to be measured" is called "indirect measurement". The value of the quantity to be measured can be found from
A = F(X1, X2, X3, …)
(2.85)
where A is the quantity being measured and X1, X2, X3 are quantities that can be measured directly. In some measuring instruments the measured value is often obtained by measuring an intermediate quantity and then calculating the value to be measured. The accuracy of
the measured value obtained in this way may be somewhat higher. Therefore, indirect measurement methods are often chosen in high-precision testing and metrology. The improvement in accuracy can be assessed by analyzing the corresponding functional equation: the mathematical model between the measured quantity and each intermediate quantity is first developed, and the contribution of each intermediate quantity to the accuracy of the measured quantity is found. In addition, there are some measurements that are almost impossible to carry out with direct measurement methods; in such cases indirect measurement methods provide a better way. The specific design of an indirect measurement should take into account the relationship between the measured quantity A and the individual directly measurable quantities X1, X2, X3, for example simple sum or difference relations, multiplication and division relations, etc. To state the problem more clearly, it is useful to introduce specific examples. For instance, in frequency scale comparison, the direct frequency measurement method is not very accurate. However, since a frequency scale comparison is performed with the frequency values of the two compared signals close to each other, the literature [15, 16] proposes measuring the change of the phase difference between the two frequency scale signals and using the relation
Δf/f = ΔT/τ
(2.86)
The relative frequency difference of the measured signal is thus obtained, and the measurement accuracy increases as the comparison time is extended, where ΔT is the change in phase difference between the two signals and τ is the time taken for that change to occur. The error equation is
|δ(Δf/f0)| ≤ |δ(ΔT)/τ| + |ΔT/τ²|·|δτ| = |δ(ΔT)/τ| + |Δf/f0|·|δτ/τ|
(2.87)
The second term on the right-hand side of the equation is a minor component of the error; it is negligible compared with the first term as long as the frequency values of the two compared frequency sources are relatively close and due care is taken over the accuracy of the measurement or control of the sampling period τ. The first term on the right-hand side is the main component of the error. It can be seen from Eq. (2.87) that the accuracy of the frequency measurement increases with the phase-comparison time and with the accuracy of the phase difference measurement. The intermediate quantities directly measured here are ΔT = T2 − T1 and τ, where T is the phase difference (time interval) and τ is the time taken for the change in ΔT to occur.
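The following short sketch evaluates Eq. (2.86) and the first-order error bound of Eq. (2.87) for an assumed phase-comparison scenario (all names and numbers are illustrative, not from the book):

```python
def freq_offset_from_phase(delta_T, tau):
    """Relative frequency difference from the accumulated phase (time)
    difference, Eq. (2.86): df/f = dT / tau."""
    return delta_T / tau

def freq_offset_error(delta_T, tau, res_dT, res_tau):
    """First-order error bound of Eq. (2.87):
    |d(df/f)| <= |res_dT/tau| + |dT/tau| * |res_tau/tau|."""
    return abs(res_dT / tau) + abs(delta_T / tau) * abs(res_tau / tau)

# 10 ns of phase drift accumulated over one day, read with 1 ns resolution:
tau = 86400.0                      # comparison time, s
dT = 10e-9                         # accumulated phase difference change, s
print(f"df/f  = {freq_offset_from_phase(dT, tau):.2e}")        # ~1.2e-13
print(f"error = {freq_offset_error(dT, tau, 1e-9, 1e-3):.2e}")  # dominated by 1 ns / 1 day
```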
Fig. 2.3 Block diagram of the instrument for frequency measurement by the phase-ratio method
τ can range from seconds, minutes and hours up to days. For high-precision frequency sources, ΔT typically varies in the range of microseconds or nanoseconds; depending on the frequency difference between the two signals, the range of variation differs. The resolution of the measurement also differs depending on the frequency value of the particular signal. For example, when comparing phases at a frequency of 10 MHz, the phase difference between the two signals varies between 0 and 100 ns. If a resolution of 1% of 100 ns can be obtained for δ(ΔT), a fairly high accuracy can be obtained for Δf/f0 (e.g., 1 × 10⁻⁹ per second, 3 × 10⁻¹³ per hour, and about 1 × 10⁻¹⁴ per day). This is an accuracy that is difficult to achieve using only the usual direct frequency measurement methods. The instrumentation in this case is not necessarily complex, but it must meet certain requirements. The analysis of the phase-ratio method above also shows the importance of establishing and analyzing the corresponding mathematical models of the measurement methods for different measurement purposes. Such an analysis makes it possible to determine the measurement accuracy that can actually be obtained, the direct object of measurement, ways of simplifying the instrument and its design, and so on. The block diagram of the instrument for frequency measurement by the phase-ratio method is shown in Fig. 2.3. As can be seen, this equipment is no more complex than an instrument that performs direct counting frequency measurement. The waveform diagram for phase difference measurement by the phase comparison method is shown in Fig. 2.4, and a schematic diagram of the phase difference comparison results obtained using the phase-ratio method is shown in Fig. 2.5.
Fig. 2.4 Waveforms of the linear phase comparison method
Fig. 2.5 Schematic representation of the phase difference comparison results
A comprehensive analysis of the characteristics of certain indirect and direct measurement methods also shows that high accuracy is sometimes obtained at the expense of the measurement range. For example, with direct frequency measurement by a counter, although the measurement accuracy is relatively low, the measured frequency range is limited only by the counting speed and is therefore wide. When the indirect phase-ratio measurement method is used, the accuracy is greatly improved, but the comparison can only be performed at the same frequency or with frequencies in a multiplicative relationship. A number of solutions to the problems that limit the measurement range of indirect methods have gradually emerged, so it is perfectly feasible to widen the measurement range further while improving the measurement accuracy.
2. Basic and definitional measures
The choice of the method of measurement can also be determined by the definition of the value of the quantity to be measured and by its connection to some relevant basic quantities. "A method of measurement in which the value to be measured is determined by the measurement of some relevant basic quantities" is called a "basic measurement"; it is also referred to in some books as "absolute measurement". It is clear from the definition that the basic measurement is in fact a type of indirect measurement. "A method of measuring a quantity according to the definition of the unit of that quantity" is called a "definitional measurement". This is a class of methods that reproduce the value of a unit of measurement by its definition, applicable to both base and derived units. It should be noted that the reproduction of a unit by definition is not restricted to a single fixed way, but may be done in a variety of ways, with different degrees of accuracy. For example, according to the new definition of the meter, it can be reproduced by three methods and by a number of laser radiations; another example is the volt reference, which can be realized with a saturated Weston standard cell or by means of the Josephson effect. In practice, a representative example is the measurement of resistance based on the relationship between resistance, voltage and current in Ohm's law, R = U/I: the corresponding resistance value is obtained by calculation from the measured voltage and current.
3. Direct comparison measures and alternative measures
"A method of measurement in which the quantity to be measured is compared directly with a known quantity of the same kind" is known as "direct comparative measurement". This method is commonly used in metrology and engineering testing. It has two characteristics: first, the quantities compared must be of the same kind; second, comparative measuring instruments must be used. As a result, many error components cancel each other out, because they increase or decrease in the same direction for the measured quantity and for the standard, so that a high degree of metrological accuracy can be obtained. To create conditions under which the two quantities can be compared, it is often necessary to limit the range of their values (e.g., the quantities must be close, or in a certain proportional relationship, etc.), which limits the generality and range of this method. Direct comparative measurement also overlaps with some of the indirect measures described later. "A method of measurement in which the quantity to be measured is replaced by a selected known quantity of the same kind, chosen so that the effect on the indicating device is the same" is called "substitution measurement". For example, the Borda method of replacing the object to be measured on a balance with weights of known mass in order to obtain its mass is a typical substitution method. The "effect acting on the indicating device" can be understood here as the indicated value of the instrument. The mass of the weights is then the mass of the object to be measured, and this eliminates errors that are normally not easy to evaluate, such as those caused by the unequal arms of the balance. The substitution method is also widely used in electronic measurements. A known standard quantity is substituted for the measured quantity while the measurement conditions remain unchanged, and the standard quantity is adjusted so that the instrument gives the same indicated value. The measured quantity is then equal to the corresponding nominal value of the standard quantity. A constant systematic error has no effect on the measurement result, because the operating state and the indicated value of the measurement circuit and instrument remain unchanged during the substitution. The accuracy of the measurement depends mainly on the correctness of the known standard quantity and on the sensitivity of the indicating instrument. The most typical example is the AC-DC substitution method used when thermocouples serve as conversion devices in AC voltage measurement; it eliminates the errors of the indicator of the conversion signal. Since the output potential of the thermocouple is related only to the power absorbed by the heater wire and not to the (low) frequency, when the output potentials of the same thermocouple for a low-frequency voltage and for a DC voltage are the same, the DC voltage value is exactly equal to the RMS value of the AC voltage being measured, and the DC voltage can be measured accurately by a DC digital voltmeter. The substitution method can thus often be used to achieve high-precision measurement of quantities that are difficult to measure directly. In the measurement system, a common terminal detection and display channel is mainly used for a quantity that
can be measured with high accuracy and for the measured quantity. There are no high demands on this channel in terms of accuracy; its function is to display, in a strictly equivalent manner, the values of two quantities that produce the same terminal effect. For this purpose, the two signals in the terminal channel should be of the same nature, and the sensitivity of the terminal display should be as high as possible, so that the substitution of the two quantities can be compared at high resolution. In order to share the same terminal channel, the two signals must pass through devices that convert them into signals of the same nature for that channel; such a device may act on both signals or only on the signal under test. To achieve such a conversion, special devices, in particular conversion devices appropriate to the different measured quantities, must be available. The frequency band of low-frequency voltages in electronic metrology generally covers a range from a few Hz to about 1 MHz. AC-DC conversion standards with vacuum thermocouples as conversion elements are the most accurate in this frequency range. The output potential of such a thermocouple is related only to the power absorbed by the heater wire and not to the (low) frequency. Therefore, if a low-frequency voltage and a DC voltage are applied to the same vacuum thermocouple in succession and the output potentials are equal, the two applied voltages are also equal. The principle is shown in Fig. 2.6. In the low-frequency voltage standard, the main factor affecting the accuracy is the imperfect manufacturing process of the thermocouple, which causes DC forward and reverse errors, AC/DC conversion errors and frequency response errors of the thermocouple; corresponding measures must be taken against these. In this substitution method for the measurement of AC voltage and current, the output potentials of the vacuum thermocouple for the AC and DC inputs need only be strictly equal; their measurement does not require high accuracy, so the potential-measuring instrument used only needs high stability and a certain sensitivity.
Fig. 2.6 Schematic diagram of the low-frequency voltage standard
A DC digital voltmeter or ammeter is then used for the DC measurement, and the RMS value of the measured AC quantity is equated to the DC value; it is the accuracy of this DC meter that really determines the measurement accuracy. Much higher accuracy can be obtained for DC measurements than for direct AC measurements, and this is why the substitution method can achieve high accuracy here. Direct measurement of attenuation is difficult in the RF and microwave bands; in this case the substitution method plays a very important role. A commonly used method of measuring attenuation is the substitution method, in which the attenuation of a standard attenuator is changed between the cases with and without the attenuator to be measured in the measurement system, so that the indication of the monitoring output is the same in both cases; the change in the attenuation of the standard attenuator is then equal to the measured attenuation. The high-frequency substitution method, also known as direct substitution or same-frequency substitution, is a commonly used method in attenuation metrology and is one of the simplest substitution methods. It is divided into series and parallel substitution, and Fig. 2.7 shows its schematic diagram. The attenuator to be measured and the standard attenuator operate at the same microwave frequency, and the standard attenuator is usually a highly accurate rotary-vane waveguide attenuator or a cutoff attenuator. Its outstanding advantages are a large range, high accuracy, simple equipment and easy operation; the main limitation is that the measurement can only be made at the same frequency, so the working frequency range is very limited. Among the various substitution methods, the IF substitution method is the most important; it has the advantages of a large range and high accuracy, so although the system is relatively large and the operation complicated, it is still the most widely used attenuation measurement method. The basic working principle of the IF substitution method is that the RF signal (at the operating frequency of the attenuator under test) is linearly transformed into a fixed IF signal by heterodyne mixing.
Fig. 2.7 Block diagram of the series (a) and parallel (b) high-frequency substitution method
Fig. 2.8 Block diagram of the operating principle of the IF substitution method: (a) series IF substitution method for measuring attenuation; (b) parallel IF substitution method for measuring attenuation
The attenuator under test is then replaced by a standard attenuator operating at that IF frequency to derive the measured attenuation value. The IF substitution method comes in series and parallel forms according to the mode of operation (Fig. 2.8). The series IF substitution method has simpler equipment and is easier to operate than the parallel IF substitution method; however, any instability of the signal source and receiver results in large metering errors. If a cutoff attenuator is used as the standard attenuator, the range of the metering system is necessarily reduced because of the considerable nonlinearity of its initial section (20 to 30). The parallel substitution method is a more complex system, and the additional IF signal source it uses, compared with the series substitution method, also introduces new errors; but it has a wider range than the former and is therefore more widely used. The application of substitution methods also depends to some extent on our imagination. The voltage and attenuation measurements described above exemplify the large differences in implementation as well as the obvious similarities.
4. Differential and compliance measures
(1) Differential measurement method
"A method of measurement in which the quantity to be measured is compared with a known quantity that differs only slightly from its value, and the difference between the two values is measured" is called "differential measurement". It is often used in metrology and engineering tests. As the two quantities are compared under the same conditions, the error components caused by the various influences can be partially or almost entirely offset, thereby improving the accuracy of measurement. There
are two main sources of error in differential metrology: the error of the standard itself, and the error of the indicated value of the comparator. Differential metrology directly measures the small difference between the two compared quantities, so it is often possible to obtain much higher measurement accuracy with measurement equipment of relatively low accuracy. One of the most typical examples is the differential-beat period measurement method in frequency measurement. When two frequency standard signals with similar frequency values are mixed and the beat is measured, the measurement accuracy of the differential beat period is transferred to the measurement accuracy of the measured signal through a multiplication factor equal to the ratio of the measured signal frequency to the differential beat frequency. The measurement accuracy or resolution can thus often be increased by tens of thousands to millions of times. Let the measured quantity be x, the similar standard quantity be B, and the difference between the measured quantity and the standard quantity be A; the value of A can be read from the indicating instrument. Then
x = B + A
Δx/x = (ΔB + ΔA)/(A + B) = ΔB/x + (A/x)·(ΔA/A)
(2.88)
Since A is much smaller than B, A + B ≈ B (which is also a condition for the differential method), and the measurement error becomes
Δx/x = ΔB/B + (A/x)·(ΔA/A)
(2.89)
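A minimal sketch of Eq. (2.89) for an assumed numerical case (all values chosen only for illustration) shows how the small relative differential A/x suppresses the indicator error:

```python
def differential_error(B, rel_err_B, A, rel_err_A):
    """Relative error of a differential measurement, Eq. (2.89):
    dx/x = dB/B + (A/x) * (dA/A), with x = B + A."""
    x = B + A
    return rel_err_B + (A / x) * rel_err_A

# Standard known to 1e-6, indicator only good to 1 %, relative differential ~1e-4:
print(f"{differential_error(B=10e6, rel_err_B=1e-6, A=1e3, rel_err_A=1e-2):.2e}")
# ~2.0e-06: the 1 % indicator contributes only ~1e-6 because A/x is small.
```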
As seen in Eq. (2.89), the error of the differential method of measurement consists of two parts. The first part is the relative error of the standard quantity, which is generally small. The second part is the product of the relative error ΔA/A of the indicating instrument and the coefficient A/x associated with the measurement of the differential part, where the coefficient A/x, the ratio of the differential to the measured quantity, is called the relative differential. Since the relative differential is much less than 1, the effect of the error of the indicating instrument on the measurement is greatly reduced. In principle, although the differential method greatly improves the measurement accuracy, the range that can be measured is generally very narrow. It is mainly used for high-precision measurement when the measured and standard quantities are very close to each other, and such a requirement is exactly what is met in comparisons between the standards of various quantities. The form taken by ΔA/A in specific applications varies, and it is also the multiplier for the accuracy improvement in the corresponding instrument. In recent years some progress has been made in research on how the differential method can be used for wider-range measurements, providing the conditions for a broader application of this method; a further subject of study is how it can be adapted to a wider range while maintaining a high level of accuracy. From Eq. (2.89) it is also clear that, in order to use the differential method more widely, the key is still to create the conditions for forming the differential. This is central to the
study and application of this method. Especially in the measurement of some quantities, it is also necessary that the differential data be predictable and controllable, and on this basis some auxiliary techniques have been developed. To achieve a micro-difference measurement, the first requirement is to be able to obtain the value of the micro-difference between the measured quantity and the corresponding reference (or standard) quantity. There are many implementation examples that can serve as inspiration for learning this method. Although they take different forms, there are common regularities in how the differential is obtained and measured accurately, and in how the error of the differential measurement affects the accuracy of the measured quantity. Example 1, length measurement: the measurement of an ordinary length quantity is often limited by factors such as the graduation resolution of the scale. Where the scale is accurate and stable, the measured object and the measuring tape used as a standard extend in the direction of the length from the same starting point. At the end point of the measured object, it is usually impossible to coincide exactly with a graduation of the standard measuring tape. Thus, if the remaining difference is not measured further, there is an error of no more than one minimum graduation of the measuring tape. Such a micro-difference is frequently encountered and automatically generated in length measurement, and measuring it helps to obtain higher accuracy. The most common method for measuring the length differential is the vernier method, described below. The vernier method is one of the most commonly used methods for the measurement of micro-differences in a variety of physical quantities. It uses two length or time scales whose graduation intervals differ slightly; along the direction of extension in space or time the differences accumulate until two graduations coincide. Typically, the unit interval of the two length or time scales used in the vernier method is much larger than the resolution of the measurement; it is the difference between them, and the accumulation of that difference, that provides the resolution of the measurement and the measured value. The vernier method is particularly suitable for linearly scaled measurements, such as length and time quantities. Example 2, time interval metering: the usual method of time interval measurement is to fill the time interval being measured with standard clock pulses. It is not possible to achieve strict synchronization between the filling clock pulses and the time interval being measured, as shown in the figure. The time interval being measured is Tx. The fact that the measured time interval and the fill clock pulses cannot be strictly synchronized at the start and end of the measurement naturally generates a slight difference between the result of the count measurement and the measured time interval. As shown in the figure, the period of the filling clock pulse is T0, the unsynchronized time difference at the beginning of the measurement is T1, and the unsynchronized time difference at the end of the measurement is T2.
The known time obtained by counting the fill clock pulses is N·T0, where N is the number of fill clock pulses counted during the time interval being measured. For relatively long time intervals, N·T0 will be much larger than T1
and T2. At this point, the measurements of T1 and T2 fit the differential measurement method. This error is present in the usual devices that measure time intervals by counting. To obtain higher accuracy, T1 and T2 need to be measured exactly. Here, the measurement error of the simple clock-pulse-filling method is ±T0 (a relative error of ±T0/Tx); the error in the measurement of T1 and T2 is ±Δt, which is typically one hundredth, one thousandth or less of T0. In this way, the measurement error for Tx can be reduced to ±Δt (a relative error of ±Δt/Tx). To obtain higher measurement accuracy, on the one hand the differential time can be made smaller by increasing the frequency of the fill clock pulses; on the other hand, the accuracy of the measurement of the short intervals T1 and T2 can be improved as far as possible. The measurement schematic of the analog interpolation method is shown in Fig. 2.9. After the differential method is applied to the short-interval measurements, the final measurement error is reduced to roughly one thousandth of the ±1-count error of direct counting. The time interval of the input signal is Tx, and the count actually measures TN instead of Tx, thus generating an error of ±1 clock cycle. Tx = TN + T1 − T2
(2.90)
The essence of the interpolation method is to expand T1 and T2, e.g., by 1000 times each, and then count the expanded intervals with the same clock pulses, obtaining count values N1 and N2, respectively. Substituting T1 = (N1/1000)T0 and T2 = (N2/1000)T0 into Eq. (2.90), we get

Tx = (N0 + N1/1000 − N2/1000)T0 = (1000N0 + N1 − N2) × 0.01 ns
(2.91)

Fig. 2.9 Representation of microdifferences in time interval measurement (the measured interval Tx of the signal fx is filled with 100 MHz clock pulses of period T0, counted as N0 so that TN = N0T0; the start and end fractions T1 and T2 are expanded to T1′ = 1000T1 and T2′ = 1000T2 and then counted)
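As a minimal numeric sketch of Eqs. (2.90)–(2.91), with hypothetical counter readings and T0 = 10 ns taken from the 100 MHz fill clock:

# Sketch of the analog interpolation computation of Eqs. (2.90)-(2.91).
# N0, N1 and N2 are hypothetical counter readings.

T0 = 10e-9            # period of the 100 MHz fill clock, seconds

N0 = 12_345           # whole fill-clock periods inside the interval (TN = N0*T0)
N1 = 713              # count of the 1000x-expanded head fraction T1' = 1000*T1
N2 = 268              # count of the 1000x-expanded tail fraction T2' = 1000*T2

T1 = N1 / 1000 * T0   # recovered head fraction
T2 = N2 / 1000 * T0   # recovered tail fraction

Tx = N0 * T0 + T1 - T2                      # Eq. (2.90)
Tx_alt = (1000 * N0 + N1 - N2) * 0.01e-9    # Eq. (2.91), 0.01 ns resolution

print(f"Tx = {Tx*1e6:.5f} us  (same via Eq. 2.91: {Tx_alt*1e6:.5f} us)")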
From this equation, it can be seen that after applying the interpolation method, the resolution of the device is increased by a factor of 1000, so that the ±1-count error is reduced to 0.1% of its original value. Considering that both T1 and T2 carry such an error, the total error is reduced to 0.2% of the original. Examples 1 and 2 share common features: the measured quantities can be linearly scaled and divided, so the vast majority of the measurement can be given accurately in terms of length or time extension and separated from the remainder by simultaneous calibration; to achieve higher precision, only the remaining small difference needs to be measured accurately. The main difference between them is that length quantities can be aligned by moving the measuring device or the measured object in space so that the measurement starting points coincide, and the slight difference then appears only at the other end. For time-interval quantities, especially non-periodic ones, it is difficult either to move the measured interval in time or to shift the fill clock signal, so the differential is likely to appear at both the beginning and the end of the measurement. Example 4, precision frequency metrology: in intercomparisons between frequency standards, the standard often has a nominal value of 5 or 10 MHz, and the resolution desired for the difference Δf between them is often of the order of 10−10 ∼ 10−14. A simple way to obtain the difference between the 5 MHz frequency-standard signal and the 5 MHz + Δf signal under test is to use a mixing or phase-discrimination method. In this way a small difference signal Δf is obtained, and an accurate period measurement of Δf then allows the measured signal to be determined with high accuracy, as shown in Fig. 2.10. Since the difference Δf between the frequency standards in an accurate comparison is generally very small, its period is correspondingly long, and the accuracy of the measurement of the difference can be very high. With such a measurement method, the accuracy obtained is among the highest of all frequency-standard comparison techniques. If Δf is about 1 Hz, then the ratio of the differential to the measured value for a 5 MHz comparison frequency signal reaches 2 × 10−7.

Fig. 2.10 Precision frequency metering by differential frequency period measurement (the measured oscillator fx and the reference oscillator f0 feed a phase detector; a counter measures the period of the difference signal)

Using the period measurement function of an ordinary frequency meter for this differential
signal, the measurement error for the differential value can also reach 2 × 10−7 if the clock signal used for the measurement is likewise a 5 MHz standard frequency signal. The total measurement error would then theoretically reach 4 × 10−14. In an actual measurement device, however, there is still noise in the circuit, error in the mixer, and so on, so the actual measurement accuracy must take these additional noise factors into account. Such a measurement scheme also exposes a problem in the application of the microdifference method: the differential beat signal obtained is the absolute value of the difference between the two comparison signals rather than the signed difference, Δf = |fx − f0|
(2.92)
Therefore, the sign of Δf must also be determined. Where permitted, the sign is determined by making a small change in the frequency of one of the signals; if Δf is very small, which of the two signals has the higher frequency can be observed by phase comparison. For the differential-frequency period method of precision frequency measurement to adapt to a wider range of measured frequency signals, and for the microdifference to be controllable so that the measurement time and the sampling period for frequency stability can be set as required, certain supporting measures are needed. For example, we often use standard frequency signals of 5 or 10 MHz, but to meet the needs of communication we need to measure signals of 16.384, 32.768 and 38.88 MHz. For this purpose, the method shown in Fig. 2.11 can be used. Compared with the scheme in Fig. 2.10, a frequency synthesizer is added. From a single standard frequency input, the frequency synthesizer can generate signals with different frequency values as required, while preserving the frequency accuracy and stability of the original standard frequency signal. The synthesizer sets the corresponding difference value according to the actual frequency of the measured signal and then performs a differential beat measurement with it. Because of the flexibility of the synthesizer's frequency setting, small changes of its frequency can be used to determine the sign of the difference frequency of the measured signal. As can be seen from the original definition of the differential method and the previous examples, the differential method greatly limits the range of measurement while increasing its accuracy. For this reason, a series of fractional and multiplicative differential methods, or combined differential methods, can be developed to broaden the range of measurement. The use of these methods must be matched to the characteristics of both comparison signals. Let the measured quantity be x, the standard quantity whose fractions and multiples are used be B, and the small difference between the measured quantity and a fraction or multiple of the standard quantity be A. Then we have A = mB − x
(2.93)
Fig. 2.11 Method suitable for wider frequency ranges with controlled differential-frequency measurement periods (the measured oscillator fx and a signal fa from a frequency synthesizer driven by the reference oscillator f0 feed the phase detector; a counter measures the period of the difference signal)
Or A = B − mx
(2.94)
This is the fractional and multiplicative differential method. The measured and standard quantities may also not be related by a simple fraction or multiple, but a small difference can still be formed between different integer multiples of the two. For example, A = nB − mx
(2.95)
This is the combined differential method. The fractional and multiplicative differential methods and the combined differential method described above clearly extend the range of measurement and application of the differential method, but they still have a discrete character, and such measurements are clearly targeted: the quantity being compared must be obtainable as a multiple or a fraction of the other. The most typical application of the fractional and multiplicative differential method is the harmonic mixing measurement method.

(2) Coincidence (conformity) measurement method

"A method of measurement in which a small difference between the measured value and a known value of the same kind used as a standard of comparison is measured by observing the coincidence of certain marks or signals" is called "coincidence measurement". Measuring the size of a part with a vernier caliper is an application of this principle: the size is determined by identifying which engraved line on the vernier coincides with an engraved line on the main scale. A measurement becomes a coincidence measurement when the differential of the differential method is 0; that is, coincidence measurement can be regarded as the special case of the differential method in which the differential is 0. For this reason, most of the analysis of the differential method also applies to the coincidence method. Since the differential is 0, no calculation of it is needed.
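A minimal numeric sketch of the vernier-coincidence reading described above; the main-scale reading and the index of the coinciding line are hypothetical:

# Vernier principle: a main scale with 1 mm divisions and a vernier whose
# divisions are 0.9 mm (10 vernier divisions span 9 mm). The index of the
# coinciding line gives the fractional part of the reading directly.
main_div = 1.0          # mm, main-scale division
vernier_div = 0.9       # mm, vernier division

main_reading = 23       # hypothetical: vernier zero sits just past the 23 mm line
k_coincide = 7          # hypothetical: the 7th vernier line coincides with a main line

# Each vernier step recovers main_div - vernier_div = 0.1 mm of the residue
reading = main_reading * main_div + k_coincide * (main_div - vernier_div)
print(f"measured length = {reading:.1f} mm")   # 23.7 mm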
Since ideal coincidence is difficult to obtain, measurement errors often arise from judging as coincident conditions that do not strictly coincide. The zero-indication method is a special case of the coincidence measurement method. It is a comparative measurement method in which the action of the measured quantity on the indicator and the action of the standard quantity on the indicator are balanced against each other so that the indicator reads zero. Its advantage is that it eliminates the systematic error caused by inaccuracy of the indicator. For example, the various bridge methods in electronic measurement are based on this principle: the value of the measured device is calculated from the values of the reference devices in the bridge circuit once the bridge is balanced. Figure 2.12 shows a circuit for measuring an unknown voltage by the zero-indication method, where E is a standard DC voltage, R1 and R2 form a standard adjustable voltage divider, and G is a current detector. By adjusting the divider ratio so that U = ER2/(R1 + R2) equals the measured voltage Ux, the current detector indicates zero, and the value of the measured voltage is then equal to the divider output given above. The accuracy of the measurement depends on the accuracy of the standard DC voltage, the sensitivity of the detector, and the accuracy of the resistance divider ratio. In the balanced state, the detector branch does not load R2 and does not affect the division ratio, so the measurement result is also independent of the accuracy of the detector itself.

Fig. 2.12 Zero indication method of voltage measurement

The coincidence method can achieve a high degree of measurement accuracy. This is because the minimum interval between the uniform scales of the two quantities involved in the comparison is much greater than the error in judging the coincidence, which means that the accuracy of a coincidence measurement will be much better than the accuracy obtained when measuring directly with the scale interval as the resolution. There are, however, certain requirements on the measurement conditions. The measurement error here often depends on the error in judging the state of coincidence between the two comparison signals; if the comparison device can guarantee complete coincidence, only the error of the standard quantity itself affects the measurement accuracy. This measurement method is promising for high-precision measurements, but some auxiliary techniques are needed to perfect the conditions for coincidence measurement.
The coincidence method mostly involves comparing a series of uniformly alternating marks or signals of the measured quantity (e.g., successive periods of a frequency signal, uniform scale lines of a length scale, etc.) with a series of uniformly alternating marks or signals of a known quantity and detecting their coincidence. On this basis, the value to be measured is derived from the numbers of complete alternations completed by the two compared signals, respectively, between coincidence points. In a broad sense, the coincidence method can be applied for comparison and measurement as long as the measured and known quantities can be represented in a uniformly alternating manner. Since many physical quantities can be linearly scaled and calibrated, high-precision measurements can be made with the coincidence method provided the specific technical measures for coincidence detection are addressed. Among physical quantities there are many signals that alternate uniformly; they have different independent variables and forms of expression, and when they are represented as digital quantities, the digital quantity is uniform per unit increment. A uniformly alternating signal can be expressed as A(t) = nt
(2.96)
Here, t is the independent variable and n is the unit scale value. By exchanging the independent and dependent variables and addressing the specific technical means of detection, it is easy to transfer, for example, the error analysis of the detection of one physical quantity to the measurement of another. The detection accuracy of the coincidence method depends only on the accuracy of the scale, the fineness of the "scale line" itself and the accuracy of the judgment of scale coincidence. The measurement result is given on the basis of the coincidence of two or more scales of uniformly alternating signals. As a result, the usual quantization error of ±1 count can be eliminated in high-precision digital measurement, and high measurement accuracy can be obtained. It can also measure small dimensions with coarse scales, solving measurement problems that are difficult to solve with other techniques. The literature [17, 18] points out that the least common multiple is a very important parameter in coincidence measurements. Between a known quantity and a measured quantity of the same physical kind with different unit scales there exists a least common multiple Acmin(t). The highest measurement resolution that can be obtained with the coincidence method is

ΔT = A0(t)Ax(t)/Acmin(t)
(2.97)
where A0(t) and Ax(t) are the known and unknown quantities per unit of the independent variable, respectively; they can also be interpreted as the two unit values (the magnitude of the quantity between two adjacent scale marks). The accuracy obtained with the coincidence measurement method is related not only to Eq. (2.97) but also to the stability of the coincidence detection equipment, which must be adequate if high resolution is to be obtained. It is also clear from
Fig. 2.13 Schematic diagram of the measurement of the constant value of a uniformly scaled unknown quantity (the known quantity with unit T0 and the measured quantity with unit Tx run together until a coincidence point)
Eq. (2.97) that a suitable difference between A0(t) and Ax(t) is a condition for the high accuracy of the coincidence measurement method. When the two are exactly equal or in a strict multiple relationship, not only is the coincidence difficult to capture, but the accuracy of the measurement is also compromised, and other methods must be used to assist in obtaining satisfactory results. The object of a coincidence measurement is of two kinds. One is the constant value of a uniformly scaled unknown quantity, or a quantity that can be reflected by that constant value, for example, the scale division in length measurement, the period and frequency in frequency measurement, and some cases commonly encountered in power measurement. The other is the use of two different uniformly scaled known quantities to accurately measure a certain unknown quantity, such as vernier measurements of length, phase difference measurements, etc. The measurement schematic for the first case is shown in Fig. 2.13. The measured scale value of the uniform scale is

Tx = T0N0/Nx
(2.98)
where T0 is the scale unit of the known quantity, and N0 and Nx are the whole numbers of uniform alternations of the known and unknown quantities, respectively, counted between coincidence points. Since the measured quantity may vary, the ΔT determined by Eq. (2.97) is also variable. To ensure both the measurement accuracy and the measurement range, a suitable scale unit must be chosen for the known quantity: it should be relatively small and should avoid being nearly equal to, or in a multiple relationship with, the unit value of the unknown quantity. If necessary, an auxiliary known quantity should also be used to assist the measurement. Only in this way can reliable detection of the coincidence state be guaranteed.

Fig. 2.14 Schematic diagram of the comparison measurement of known quantities with different uniform scales (the measured quantity Ax is bracketed by known quantity 0 with unit T0, counted as K and N0, and known quantity 1 with unit T1, counted as N1)

The measurement schematic for the second case is shown in Fig. 2.14. The measured quantity is
Ax = KT0 − N1 T1 + N0 T0
(2.99)
where T0 and T1 are the scale values of the two uniformly scaled known quantities, which are aligned with the beginning and the end of the measured quantity, respectively; that is, their uniformly alternating characteristic marks (e.g., scale lines, zero-phase points, etc.) coincide with the beginning and the end of the measured range. K is the number of complete alternations of T0 contained within the measured range; N1 is the number of complete alternations of T1 counted from the end of the measured range up to the point of coincidence between T0 and T1; and N0 is the number of complete alternations of T0 counted after the length KT0 up to that same coincidence point. That the ends of the measured quantity must coincide with the initial scale mark, or with some scale mark, of the two known quantities is a basic requirement of a measurement based on Eq. (2.99). In such measurements both known quantities can be chosen artificially, and the reasonableness of the choice can be checked against the accuracy of the coincidence detection equipment and the measurement resolution determined by Eq. (2.97). In the measurement of the scale value of a uniformly alternating quantity based on Eq. (2.98), one can either require that the initial scale marks of the known quantity T0 and the unknown quantity Tx coincide, or drop this requirement and calculate the measured value from several regular coincidences of the two quantities as they extend in the same direction. The latter is commonly used when the quantity being measured is a uniformly alternating quantity with a variable scale. Deliberately making the initial scale marks of two uniformly alternating quantities coincide is relatively easy for some quantities (e.g., electrical or length quantities) but rather difficult for others (e.g., frequency quantities); the initial-point coincidence requirement is therefore dropped whenever the measurement purpose can be achieved without it. Also, when using the coincidence method it is easier to create the coincidence condition by
moving for spatial quantities, whereas it is difficult for temporal quantities, mainly non-periodic time signals. A third uniformly alternating signal can also be introduced as an intermediary signal in coincidence comparisons. This is particularly useful when the nominal values of the two comparison signals are equal or are multiples of each other; the difficulty of capturing the coincidence between the known and unknown quantities can then be overcome, and the unknown quantity under test can still be calibrated accurately. According to Eq. (2.97), the inclusion of an intermediary signal can in many cases improve the resolution and accuracy of the measurement. The intermediary signal only needs high stability and a moderate alternation rate; it is not required to have an accurately known value. By counting the complete alternations Nc1 and N0 between the intermediary signal and the known quantity over several coincidence points, and the complete alternations Nc2 and Nx between the intermediary signal and the unknown quantity over several coincidence points, the unknown quantity is found from the known quantity T0 as

Tx = (N0 Nc2 / (Nx Nc1)) T0
(2.100)
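A small simulation sketch of the coincidence principle of Eq. (2.98) described above; the scale units and the coincidence-detection window are hypothetical, and the unknown unit is recovered purely from the integer counts between two coincidence points:

# Simulated coincidence (vernier-type) measurement per Eq. (2.98):
# between two coincidence points, N0*T0 = Nx*Tx, so Tx = T0*N0/Nx.
T0 = 10.0e-9                  # known scale unit (10 ns clock)
Tx_true = 10.7e-9             # unknown scale unit to be recovered
tol = 0.05e-9                 # coincidence-detection window (50 ps)

# Edge times of the two uniformly alternating signals
edges0 = [k * T0 for k in range(1, 5000)]
edgesx = [k * Tx_true for k in range(1, 5000)]

# Find the first two (approximate) coincidence points
coincidences = []
i = j = 0
while i < len(edges0) and j < len(edgesx) and len(coincidences) < 2:
    d = edges0[i] - edgesx[j]
    if abs(d) < tol:
        coincidences.append((i + 1, j + 1))   # counts up to this coincidence
        i += 1
        j += 1
    elif d < 0:
        i += 1
    else:
        j += 1

(N0a, Nxa), (N0b, Nxb) = coincidences
N0, Nx = N0b - N0a, Nxb - Nxa                 # counts between coincidences
Tx = T0 * N0 / Nx                             # Eq. (2.98)
print(f"N0 = {N0}, Nx = {Nx}, recovered Tx = {Tx*1e9:.4f} ns")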
This method has been used in the measurement of quantities such as frequency, period, phase difference and voltage, which adds to the ubiquity of the coincidence method. Its application in length measurement is well established, the representative measuring device being the common vernier scale. The method has now also found application, in a different way, in time–frequency measurement, allowing a more than 1000-fold increase in measurement accuracy with a measuring device whose complexity does not differ much from that of commonly used instruments, and with the measurement range largely unaffected. Since the method eliminates the ±1 counting error of counting-type measuring equipment, it can deliver very significant accuracy gains. In the development of measuring instruments toward digitization, intelligence and higher accuracy, the coincidence method is bound to be an indispensable key method and deserves sufficient attention and vigorous promotion.

5. Compensatory and switched metering methods

The measurement process is arranged so that one measurement contains a positive error and another contains a negative error, so that most of the error in the combined result is compensated and eliminated. This is called the "compensation measurement method", or the "positive and negative measurement method". For example, in electrical measurement, to eliminate the systematic error caused by thermal EMFs, the current direction through the measuring instrument is often reversed, two readings are taken, and half of their sum is used as the result. To eliminate the error introduced by thermal hysteresis in the calibration of certain temperature measuring devices, one also often uses the standard
thermostat to raise and lower the temperature, reads the thermometer at the same temperature value in the two different directions of temperature change, and takes the intermediate value as a correction to the reading scale. Temperature-compensated crystal oscillators, which are quite widely used, are also commonly affected by temperature hysteresis: there is always a difference in the oscillator frequency when ramping up and ramping down through the same temperature value. To reduce this effect, the frequency of the uncompensated oscillator is taken as the average of the ramp-up and ramp-down frequency values in the hardware and software processing that implements the compensation. The compensation of the crystal oscillator output frequency can also be done dynamically according to the history of the change, i.e., with different compensation values depending on whether the temperature is rising or falling. Such processing helps eliminate the influence of temperature hysteresis, although it is somewhat more complicated to implement. In measurements where the measured quantity depends on the difference between two readings, both of which contain the same systematic error, that systematic error is automatically eliminated from the result. There is also a comparative measurement method for eliminating systematic errors called "exchange (swap) measurement". The principle is as follows: the object to be measured is first balanced against a known value A; it is then exchanged to the other position and balanced again against another known value B. If the indicated reading is the same in both measurements, the measured value can be derived as √(AB). This method is typically used for measuring masses on a balance and eliminates the systematic error due to unequal lengths of the two arms.

6. Intermediary source test measurement methods

(1) Overview of the intermediary source method

In many test comparisons it is sometimes difficult to measure the measured quantity directly against a standard quantity, or it is desired to improve the accuracy of the measurement and standardize it in some respects. In this case an intermediary quantity with the same characteristics as the two comparison quantities, but differing somewhat in value, can be used. By using the intermediary quantity as a bridge to each of the two comparison quantities separately and then calculating, the value of the measured quantity can be obtained. In this method the intermediary source acts symmetrically in both measurement channels, so its own errors, such as noise, are often canceled in whole or in part in the final result and have little effect on it. The systematic error of the intermediary source usually does not affect the measurement result, and the effect of its random error depends mainly on the relationship between the characteristics of that random error and the variation of the independent
variable and the error accumulated over the whole measurement process (in time–frequency metrology, for example, it is the relationship between the error–time characteristics of the intermediary common oscillator and the measurement comparison time). This is also a characteristic of this method. A typical example is the dual mixer time difference measurement method for frequency standard comparison proposed in the literature [19, 20]. In this method, not only is the regularity of the sampling times of the measurement ensured, but the measurement accuracy is also greatly improved because of the error-multiplication effect. Two frequency standard signals with the same nominal value can be measured by the differential beat period method (microdifference method) using frequency mixing in order to obtain highly accurate results for the signal under test. However, the irregularity of the differential beat period does not guarantee a regular sampling period during the measurement (e.g., exactly 1 ms, 10 ms, 1 s, 10 s, etc.). Therefore, a common oscillator with a certain frequency difference from the two comparison signals is mixed simultaneously with each of them, and the two difference signals are then compared by the time-interval comparison method; this yields both high measurement accuracy and a measurement gate time under artificial control. To explain more clearly the role of the intermediary source method and of the intermediary common oscillator, its error and its effect on the measurement, as well as the advantages of this method over the plain differential method and its ease of application, a block diagram of the method is given in Fig. 2.15. The method can also be understood as a secondary application of the differential method. Direct acquisition of the differential beat period of two frequency standard signals is difficult and hard to control because their frequencies are very close. This is partly a matter of circuit implementation, and partly because frequency stability measurement requires the sampling period to be strictly 1 ms, 10 ms, 1 s, 10 s, etc. The frequencies of the two comparison frequency standard signals must not be changed, whereas the frequency of the intermediary common oscillator can be set or changed as required.

Fig. 2.15 Block diagram of the dual mixer time difference measurement method (the measured frequency fx and the reference frequency f0 are each mixed in a double-balanced mixer (DBM) with the public (common) oscillator, low-pass filtered (LPF) and amplified; a time interval counter compares the two beat signals, and a phase shifter is placed in one channel)

The difference frequencies between the common oscillator and the two comparison
signals can be selected as 1 kHz, 100 Hz, 10 Hz, 1 Hz, 0.1 Hz, etc., according to the required differential beat period. This achieves the purpose of controlling the microdifference (the differential beat period). The common oscillator acts symmetrically in the two mixing measurement channels, so the noise it contains and the effects of its long-term frequency changes act equally on both channels; these effects are symmetrical and cancel. This is the characteristic and advantage of the intermediary source method. When the phase difference between the two differential-frequency signals is small, the long-term drift of the common oscillator and the noise components whose periods are longer than that phase difference cancel out because of the symmetrical action in the two channels. The phase shifter is used to adjust the phase difference between the source under test and the intermediary common oscillator so that it is as close as possible to the phase difference between the reference oscillator and the intermediary common oscillator. The intermediary source method can be used in several contexts. (1) When a high-precision instrument is to be used as a standard to test a low-precision instrument but high-precision standard sources are lacking, the error of the low-precision instrument can be obtained from the readings of the two instruments by using an ordinary-precision source as the intermediary source and measuring it with both instruments simultaneously. Adjusting the intermediary source then allows the measurement accuracy, measurement range and other indicators of the low-precision instrument to be checked. Figure 2.16 shows the block diagram. The reading of the high-precision instrument is A1 = A0 + Xc1; the reading of the low-precision instrument is A2 = A0 + Xc2 + X, where X is the error of the low-precision instrument, Xc1 and Xc2 are the errors caused by the intermediary source in the two instruments, respectively, and A0 is the value of the intermediary source as read by the high-precision instrument, which can be regarded as true in this measurement. When Xc1 and Xc2 cancel each other out, X = A2 − A1
(2.101)
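As a rough numeric sketch of the error-multiplication idea behind the dual-mixer time-difference arrangement of Fig. 2.15 discussed above; all frequencies and the counter resolution are hypothetical:

# Both comparison signals are nominally f0; the common oscillator is offset
# by the beat frequency fb, so each mixer output is a slow beat signal.
# The phase difference of the original signals is carried by the beats,
# stretched in time by the factor f0/fb.
f0 = 5e6               # nominal frequency of both comparison signals, Hz
fb = 10.0              # beat frequency after mixing with the common oscillator, Hz
counter_res = 100e-9   # single-shot resolution of the time-interval counter, s

magnification = f0 / fb            # time-interval stretch factor, here 5e5

t_beat = 1.234e-3                  # hypothetical counter reading between the beats, s
x = t_beat / magnification         # equivalent time difference of fx versus f0
resolution = counter_res / magnification

print(f"time difference x = {x*1e9:.3f} ns, "
      f"effective resolution = {resolution*1e15:.1f} fs")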
Data points satisfying (Δf/f > −0.25 × 10−12) ∪ (Δf/f < −1.25 × 10−12) are initially identified as gross-error values, probably caused by human or environmental abrupt changes, and should be corrected. Therefore, a histogram of the output instantaneous relative frequency difference data was plotted, as shown in Fig. 9.38. Concerning the fluctuation of the physical output of the excitation signal: when a good active hydrogen maser is used as the excitation source of the fountain clock, the large, irregular fluctuations seen here, on the one hand, do not correlate with the stability and frequency-difference variation characteristics of the hydrogen maser itself; on the other hand, the natural reference characteristics expressed by the fountain transition do not themselves show irregular fluctuations. The signal shown above is intended as a frequency correction for the fountain clock excitation signal. A visual look at this waveform shows that the noise of the clock output signal may be increased if such a signal is used directly for control or correction without processing. Reasonable and correct processing of the signal can guarantee the accuracy of the fountain clock output and better frequency stability characteristics.
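The screening described above can be sketched in a few lines; the data here are synthetic, and computing the central intervals as empirical quantiles is only one plausible reading of how the confidence intervals of Table 9.5 were obtained:

import numpy as np

rng = np.random.default_rng(0)
df = rng.normal(loc=-0.0079, scale=0.0004, size=20_000)   # synthetic frequency-difference samples, Hz
df[::997] += 0.01                                          # sprinkle in a few gross errors

# Histogram of the raw frequency-difference data (cf. Fig. 9.38)
counts, edges = np.histogram(df, bins=50)

# Central intervals of the empirical distribution for several significance levels
for alpha in (0.45, 0.35, 0.25, 0.15, 0.10):
    lo, hi = np.quantile(df, [alpha / 2, 1 - alpha / 2])
    print(f"alpha = {alpha:.2f}: [{lo:.6f}, {hi:.6f}] Hz")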
Fig. 9.38 Histogram of output frequency of cesium atomic fountain clock
Solving with Matlab, the confidence levels and confidence intervals for the instantaneous relative frequency data were obtained as follows (Fig. 9.39 and Table 9.5). A limit is set on the deviation of the fountain clock frequency-difference samples, and data that exceed the limit are corrected to the center frequency difference of the fountain clock. For example, if the center of the sampled data is 0.0079 Hz, then data greater than 0.0095 Hz or less than 0.0065 Hz have a confidence level below 5%, can be considered gross errors, and can be corrected to the center frequency difference of 0.0079 Hz. The frequency-difference data after this simple gross-error processing and the resulting frequency stability are shown in Fig. 9.40. It can be seen that correcting the gross-error data improves the frequency stability of the fountain clock and also eliminates the frequency-stability bump that occurs at sampling times of 10–30 s. It is also possible, without affecting the frequency accuracy, to process the data in the stability calculation according to the deviation of the collected values from segmented averages; thus processing methods that assign different weights to the collected data can be applied (see the sketch after Eq. (9.19)). The sequence of frequency differences is rearranged from smallest to largest, and the span between the largest frequency difference B and the smallest value A is divided into N equal parts. The length W of each interval and the intervals themselves are then expressed as (Table 9.6)
Fig. 9.39 Frequency stability graph of a cesium atomic fountain clock (Data provided by the National Timing Center, for physical feedback signals without any processing)
Table 9.5 Summary of confidence levels and confidence intervals for instantaneous relative frequency data for cesium fountain clocks

Degree of confidence    Confidence interval
α = 0.45                [−0.008475847533772, −0.007401946549913]
α = 0.35                [−0.008616399993164, −0.007254230148582]
α = 0.25                [−0.008814636190260, −0.007068067702972]
α = 0.15                [−0.009110288108346, −0.006767683982043]
α = 0.1                 [−0.009782801955989, −0.006092805713287]
W = (B − A)/N , N = 9, Δ f ∈ [A, B]
(9.17)
[A + kW, A + (k + 1)W ], k ∈ [0, N − 2]
(9.18)
Local smoothing is then performed based on the probability value of each interval. Δ f ' = Δ f × (P ± 1)
(9.19)
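A sketch, under stated assumptions, of the two-stage processing just described: gross-error clipping of samples outside the confidence limits, followed by binning of the frequency differences into N equal intervals per Eqs. (9.17)–(9.18). The helper names are hypothetical, and the per-bin smoothing of Eq. (9.19) with the P values of Table 9.6 is not reproduced here:

import numpy as np

def gross_error_correct(df, lower=0.0065, upper=0.0095, center=0.0079):
    """Replace samples outside [lower, upper] by the center frequency difference."""
    df = np.asarray(df, dtype=float).copy()
    df[(df < lower) | (df > upper)] = center
    return df

def bin_indices(df, n_bins=9):
    """Split [min, max] into N equal intervals (Eq. 9.17) and return the
    index of the interval each sample falls in (Eq. 9.18)."""
    a, b = df.min(), df.max()
    w = (b - a) / n_bins                      # Eq. (9.17)
    idx = np.minimum(((df - a) / w).astype(int), n_bins - 1)
    return idx, a, w

# Example with synthetic data
rng = np.random.default_rng(1)
raw = rng.normal(0.0079, 0.0005, 10_000)
clean = gross_error_correct(raw)
bins, a, w = bin_indices(clean)
print(f"bin width W = {w:.5f} Hz, first interval [{a:.5f}, {a + w:.5f}] Hz")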
Fig. 9.40 Frequency stability plot after coarse difference correction
Table 9.6 First data binning

Correction interval, Δf′/Hz                      Correction factor, P
(0.0065, 0.0068] ∪ (0.0092, 0.0095]              0.8
(0.0068, 0.0070] ∪ (0.0089, 0.0092]              0.6
(0.0070, 0.0073] ∪ (0.0085, 0.0089]              0.4
(0.0073, 0.0076] ∪ (0.0081, 0.0085]              0.2
(0.0076, 0.0081]                                 0
From the probability density function of the frequency difference, it can be seen that its statistical distribution is basically consistent with a normal distribution. A new frequency-difference series is obtained by applying the local smoothing correction, with the P value of each interval set according to the proportion of the total frequency differences falling in it; the frequency stability calculated from the corrected frequency-difference data is shown in Fig. 9.41. In a second pass, every 0.0002 Hz is taken as one interval and the above process is performed again; the final processing and results are shown in Fig. 9.42 and Table 9.7. The frequency stability is improved by a factor of 3–5 compared with the unsmoothed case. With the existing hardware of this sample system, a frequency stability of the order of 10−16 has been achieved at 10,000 s. The following table summarizes the values of the Allan variance of the output frequency for different sampling times before and after data smoothing (Table 9.8).
Fig. 9.41 Frequency stability of the clock output frequency after the first data smoothing
Fig. 9.42 Frequency stability of the clock output frequency after the second data smoothing
Table 9.7 Second data binning

Correction interval, Δf′/Hz                                  Correction factor, P
(−0.0072, −0.0070] ∪ (−0.0090, −0.0088]                      0.8
(−0.0074, −0.0072] ∪ (−0.0088, −0.0086]                      0.6
(−0.0076, −0.0074] ∪ (−0.0086, −0.0084]                      0.4
(−0.0078, −0.0076] ∪ (−0.0084, −0.0082]                      0.2
(−0.0082, −0.0078]                                           0

Table 9.8 Comparison of Allan variance before and after correction

Period      Original    First time    Second time
2 s         3.28E-14    2.05E-14      1.46E-14
32 s        5.98E-14    1.88E-14      1.05E-14
512 s       1.88E-14    6.01E-15      3.39E-15
4100 s      6.85E-15    2.18E-15      1.26E-15
32,800 s    3.16E-15    1.36E-15      7.30E-16
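A minimal sketch of an overlapping Allan deviation computation of the kind used to produce stability figures like those in Table 9.8; the data are synthetic, the helper name is hypothetical, and the basic sampling interval is assumed to be 1 s:

import numpy as np

def overlapping_adev(y, m):
    """Overlapping Allan deviation of fractional-frequency data y at averaging factor m."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    if n < 2 * m:
        raise ValueError("not enough samples for this averaging factor")
    csum = np.concatenate(([0.0], np.cumsum(y)))
    avg = (csum[m:] - csum[:-m]) / m          # overlapping m-sample averages
    d = avg[m:] - avg[:-m]
    return np.sqrt(0.5 * np.mean(d ** 2))

tau0 = 1.0                                    # assumed basic sampling interval, s
rng = np.random.default_rng(2)
y = rng.normal(0.0, 1e-13, 100_000)           # synthetic fractional-frequency data
for m in (2, 32, 512):
    print(f"tau = {m * tau0:6.0f} s : sigma_y = {overlapping_adev(y, m):.2e}")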
Time-domain frequency-difference data contain errors caused by various noises and disturbances. If these data are used directly as feedback frequency differences without processing, they can lead to output frequency instability. Such processing is reasonable and valid as long as it does not degrade the frequency accuracy. On this basis, the valid data and the erroneous data of the atomic energy-level transitions are distinguished here and the erroneous data are processed, which significantly improves the frequency stability of the clock output signal. Processing the cesium atomic transition frequency relative to the excitation frequency improves the original frequency stability index and its variation pattern. These data processing methods should also give some inspiration for the improvement of other atomic clocks.
9.6 Application of Digital Time–Frequency and Phase Processing Methods to Precision AC Voltage Measurements and Waveform Sampling

In short, digitization is the inevitable path of scientific and technological development, and measurement technology is its most typical embodiment. Digital processing is manifested by the conversion of continuously varying analog quantities into discrete digital quantities through clock-driven acquisition, which is also the process of quantization. The quantization phenomenon gives rise to what is commonly called quantization error, the maximum measurement error that can occur when expressing
an analog quantity in the form of a certain number of digits. According to the conventional understanding, quantization error is one of the biggest obstacles brought by digitization. Our extensive research results reveal that the quantization phenomenon does not necessarily mean that quantization errors must arise; their occurrence is the result of improper use of devices such as ADCs. The analog–digital deviation resulting from quantizing an analog quantity is a variable: its maximum may be one quantization step, but its minimum, at the jump edge of the quantization change, is close to zero, limited only by the noise and circuit error associated with the AD device. Seen this way, the digitization error is no longer determined simply by the number of quantization bits; the noise, conversion speed, etc., of the AD device must also be taken into account. What we are most concerned with is the "jump edge" phenomenon in dynamic applications of AD devices. This phenomenon occurs when the digitized signal and the analog signal being converted "fit" each other; the moment of the digitized jump edge is therefore a high-precision representation of the analog signal, beyond the "quantization error" of the AD device. From this point of view, it is desirable to have a higher-frequency clock signal so that the ideal fit point is not missed [25, 26]. In accordance with the previous analysis, two kinds of experiments illustrate results that are not subject to "quantization error" and that also follow the theme of this book, the border effect, whereby the accuracy of measurements of different quantities is substantially improved. One is the measurement of the rms value of a high-frequency AC voltage; the other is the measurement of phase-difference variation and, from it, frequency and frequency stability, which offers nearly the highest resolution of any physical quantity. The former is characterized by the fact that the rms AC voltage is related to the waveform, so the rms sampling must faithfully follow the waveform. The phase-difference change converts into the relative frequency difference of the measured frequency signal, Δf/f = ΔT/τ. A key issue in obtaining higher precision is the jump-edge phenomenon of AD conversion during digital quantization: Fig. 9.43 is an idealized illustration of the numerical fit of the jump edge of an AD conversion to the analog quantity. In practical implementation, however, it matters whether a clock edge arrives synchronously at this fit point, what effect noise has on the jump edge, and so on. For simplicity of implementation, it is desirable that the sampling rate and clock frequency of the AD converter be as high as possible. In addition, noise may make the "jump edge" arrive early or late. From the point of view of device resource utilization, it is not possible to maintain the highest acquisition and processing rate of the AD converter all the time; that is, most of the time the AD converter operates at a low sampling speed and only resumes high-speed operation close to a "jump edge" [25].
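A minimal sketch of locating the "jump edges" in a quantized sample stream; the sampling rate, test signal and ideal 8-bit quantizer below are illustrative assumptions:

import numpy as np

fs = 100e6                     # assumed sampling rate, Hz
t = np.arange(0, 2e-6, 1 / fs)
analog = 0.5 * np.sin(2 * np.pi * 1e6 * t)                  # 1 MHz test signal
codes = np.floor((analog + 1.0) / 2.0 * 256).astype(int)    # ideal 8-bit ADC codes

# Indices where the digital output changes: the "jump edges"
edges = np.flatnonzero(np.diff(codes) != 0) + 1
edge_times = t[edges]          # moments at which quantization thresholds are crossed
edge_levels = codes[edges]     # code reached at each crossing

print(f"{len(edges)} jump edges found in {t[-1]*1e6:.1f} us of data")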
Where actual AD converters conflict with the use of the border effect at the "jump edge", and with direct digital measurement of high-frequency voltages, is that high-bit-count AD converters tend to have very low sampling rates, while high-speed AD converters are very limited in the number
Fig. 9.43 Ideal fit and deviation of the digitally quantized jump edge from the analog
of bits converted. The result is that direct digital voltage measurement can often be used only for DC and low-frequency signals. In time–frequency measurement, an AD converter performing linear phase-comparison sampling may show quantization error in the directly measured phase difference itself. The quantization error can nevertheless be eliminated in the two-dimensional relationship leading from the rate of phase change to frequency and frequency stability, because the rate of change is obtained from the slope of the line between the two-dimensional points at successive "jump edges". By the same token, in digital high-frequency voltage measurement, the rate of change of the voltage over time is the basis for obtaining the AC waveform accurately. As seen in Fig. 9.43, the quantization error can be eliminated by combining more rate-of-change line segments with suitable algorithms. In order to obtain and use the waveform information of the measured signal at high resolution, as can be seen from the figure, it is desirable to have more voltage sampling points within one cycle of the measured signal, so that the ADC quantization produces a clear step waveform whose step transitions are at, or very close to, the AD converter's "jump edges". With the traditional method, enough sampling points are needed in each cycle of the measured signal, so the clock frequency must be much higher than the frequency of the signal under test. For example, when sampling a 10 MHz AC signal with an 8-bit AD converter, if all 256 quantization steps are to be represented, the clock frequency must be at least 2560 MHz. With current technology this is obviously very difficult, and in order to identify the "jump edges" of the transitions the sampling rate would need to be increased still further. This is why current commercial digital voltmeters cannot measure the RMS value of high-frequency AC voltages with high precision. The digital quantity obtained after ADC quantization should be a waveform with a distinct step shape; the moment of a digital change indicates that the input signal is at an ADC quantization threshold. Here the level of the input signal is compared with the quantization threshold: for increasing values, a level greater than the threshold
Fig. 9.44 Frequency characteristic curve of measured values
will output the next digital quantity, while a level below the threshold keeps the current digital quantity unchanged. For an actual input signal, a small random noise is superimposed on the waveform, and when the noisy input is quantized by the ADC, a violent random disturbance appears around the quantization-threshold moment, causing the digital conversion value to shift back and forth (Fig. 9.44). Figure 9.44 shows the relationship between the precision of the voltage RMS value and the frequency response obtained when a fixed AD-converter clock is used while the frequency of the signal under test is varied. As can be seen, because the clock frequency is not high enough, at some particular frequencies of the signal under test only very few voltage values are collected within one cycle, so the error of the calculated RMS value increases. The principle of the border effect cannot be brought into play within the framework of conventional techniques. In contrast, the experiment shown in Fig. 9.45 uses a 100 MHz clock for the 8-bit AD converter; the frequency response and measurement precision at higher frequencies, from 200 kHz to 10 MHz, are significantly better than those of a 6½-digit digital multimeter. It can also be seen that at higher frequencies the 8-bit AD with a high-frequency clock still shows significant errors [25]. When an AD converter is used to sample the voltage of the signal under test, either a higher clock frequency should be available in order to obtain precise border information over a wide range of frequencies, or coverage of the phase relationship between the clock and the signal under test should be guaranteed. To this end, a frequency synthesizer is added to the system, as shown in Fig. 9.47. At high frequencies the measurement no longer completes within one period of the signal under test, as it does at low frequencies; instead, multiple cycles are used to collect all the finer quantized voltage values. With the combination of the AD converter and the border effect, the number of samples obtained of the measured signal is significantly higher than the number acquired directly by the AD converter within one cycle, making it easier to locate the jump
Fig. 9.45 Comparison of border-effect frequency response and measurement precision using high-frequency clocks
Fig. 9.46 Data processing process for edge effects in broadband applications (blocks: digital quantization value, extract transform edge, curve fitting, simulation waveform, extraction of waveform parameters, calibration and compensation, actual measurement results)
edge accurately. This is also the reason why the measurement precision at a particular frequency is not as good as when a simple clock signal is used in combination with the border effect. The block diagram of the processing flow that implements this sampling is given in Fig. 9.46. The diagram is very similar to those of the counting method with phase-coincidence detection for frequency measurement, digital linear phase comparison, and so on; the key is awareness of the frequency–phase relationship between the signals, while the applications and the software processing differ markedly. (1) When there is a frequency difference between the clock and the signal under test, or a fractional or multiple frequency relationship with an additional frequency difference, or even a more complex frequency relationship, the "phase synchronization gate" for frequency measurement is formed by capturing the phase coincidence of the two signals at the point closest to 0°. This enables precise frequency measurement; here the measurement mainly obtains information about the moment of phase coincidence. (2) When the clock is a multiple of the frequency of the measured signal and the voltage–phase difference in the linear region near 0° of the measured signal is selectively sampled, digital linear phase precision can be achieved with the same circuit. The measurement in this case obtains the value
Fig. 9.47 High-frequency AC voltage RMS measurement schematic (blocks: reference crystal oscillator, frequency measurement, amplification and shaping, DDS or synthesized signal generator providing the clock, A/D converter, amplification/filtering/conditioning of the tested signal, FPGA, MCU, display, personal computer)
of the voltage–phase difference in the linear section. (3) Also, when there is a frequency difference between the clock and the measured signal, or a frequency division or multiplication relationship plus an additional frequency difference, the waveform of the measured signal can be recovered by connecting all the acquired voltage values together in time order; the focus here is on the integrity of the continuous two-dimensional record of the acquired signal (Fig. 9.47). The AD converters chosen were 8-bit and 16-bit, respectively. This experiment examined the frequency response of the border effect, and the upper frequency limit of the measured signal was deliberately set higher. Regarding the frequency relationship between the clock and the measured signal, an appropriate frequency difference is arranged between them, either directly or on top of a multiple relationship. This produces phase stepping at intervals set by the measurement period, and, in terms of the periodic waveform of the measured signal, the waveform obtained by synthesis truly reflects the condition of the measured waveform and yields a more accurate rms value of the measured voltage. Such a method not only overcomes the unsuitability of the original digital acquisition approach for high-frequency AC voltage measurement, it also solves the problem of the upper frequency limit of the response in high-frequency voltage measurement. This circuit uses roughly the same system hardware as the differential beat period measurement method in time–frequency measurement. But the earlier measurement was concerned with the change of the time interval between two voltages that are strictly, or nearly strictly, identical at the acquisition output, and is not
Fig. 9.48 Signal sampling when the clock and the measured signal have the same nominal frequency with a small deviation
concerned with the rest of the acquired signal. Here, however, for measurements such as the high-frequency voltage RMS, all the collected voltage values need to be retained and processed. Because of the frequency difference between the measured signal and the clock, the phase difference between them changes continuously. As long as the phase change is linear, all of the digitally represented voltage–phase information of the measured waveform can be collected without distortion; only its independent variable, time, is greatly magnified. Regarding the choice of the frequency difference, one must weigh the multiplication of the number of sampled data points, the frequency stability of the signal under test itself, and the response time required for the measurement (Fig. 9.48). When the frequency of the clock signal is close to the frequency of the measured signal, the voltage of the measured signal can still be measured with high precision by virtue of the border effect. The working principle is shown in Fig. 9.48 [27]. The top waveform represents the signal under test, with the dark line indicating the voltage waveform that can be recovered from the acquired signal voltages; the bottom waveform represents the clock signal (which can be generated and adjusted by a frequency synthesizer). In this way, the amount of digital voltage information collected over one period of change of the recovered waveform is greatly increased, and the step-like change becomes obvious. The period of change of the recovered waveform of the measured signal is actually equal to ΔT = 1/Δf = 1/(fc − fx)
(9.20)
And the number of voltage values collected is D = fx/Δf
(9.21)
The large number of acquisitions is evident: for a 1 MHz signal under test, the number of effective voltage values acquired will be 100,000 when Δf is 10 Hz. For an 8-bit AD converter with 256 digital steps, this is more than sufficient to analyze the code value and moment of each jump edge on the waveform and to apply the processing algorithm.
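A simulation sketch of this phase-stepped acquisition and of the RMS computed from the reconstructed cycle; the frequencies, the frequency difference and the crude quantizer are hypothetical:

import numpy as np

fx = 1e6                     # measured signal frequency, Hz
df = 10.0                    # deliberate frequency difference, Hz
fc = fx + df                 # sampling (clock) frequency, Hz

D = int(fx / df)             # Eq. (9.21): samples collected per beat period
n = np.arange(D)
t = n / fc                   # sampling instants spanning one beat period 1/Δf

v = 0.5 * np.sin(2 * np.pi * fx * t)          # ideal measured waveform
codes = np.round(v / 0.004) * 0.004           # crude ~8-bit quantization over ±0.5 V

phase = (fx * t) % 1.0                        # phase of each sample in [0, 1)
order = np.argsort(phase)
reconstructed = codes[order]                  # one densely sampled waveform cycle

vrms = np.sqrt(np.mean(reconstructed ** 2))
print(f"D = {D} samples per beat, reconstructed RMS = {vrms:.4f} V "
      f"(ideal {0.5/np.sqrt(2):.4f} V)")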
Fig. 9.49 Measured signal waveform recovered from the results of a large number of AD timesharing acquisitions—voltage
In order to implement the above method of measurement and processing, auxiliary circuits are sometimes needed to measure the differential beat period between the signals, etc., as shown in Fig. 9.49 [27]. The figure illustrates the operating principle of a system that measures the voltage of the measured signal with high precision by adjusting the synthesizer frequency and exploiting the border effect. Waveform 1 represents the measured signal, waveform 2 the clock signal, and the last line the recovered waveform. That is, the measured signal is Ff, the shaped measured signal is F′f, the clock signal is Fc, the shaped clock signal is F′c, and the signal recovered by combining the sampling points of the measured signal is Fs; it is in effect a signal that retains the waveform of the signal under test while its frequency is greatly reduced. The conventional method requires many clock samples within one cycle of the signal under test to obtain the measurement data, as in Fig. 9.43; obviously this is not suitable for high-frequency signals. By adjusting the frequency synthesizer, the frequencies of the clock and the signal under test are brought close together with a small frequency difference, and the phase difference between them shifts at every acquisition, so that over the many clock cycles within one "differential beat cycle" the acquired voltage values, and their converted digital codes, all differ from one another; these are the points marked on waveform 1. Shaping and comparing the signals helps us keep track of the period and phase-difference variation of the combined signal Fs. Because the number of voltage acquisition points within one waveform cycle of the measured high-frequency signal is greatly increased, suitable processing yields more accurate waveform parameters of the signal itself; for this processing it is better to work with relative values. The precision of AC voltage measurement at high frequencies obtained with this new method is now essentially the same as at low frequencies, and it no longer happens, as in Fig. 9.45, that measurements combined with the border effect are instead
inferior to direct measurements with a limited number of collected data points. By accurately recovering the waveform of the high-frequency signal at a low equivalent frequency, the number of acquired data points is greatly increased, the border-effect approach becomes easier to implement, and enough time is left for processing. Figure 9.50 compares the measurement results of the method described above, implemented with a 16-bit AD converter, with those of other methods. Conventional high-resolution converters, although relatively accurate, can only operate at DC or low frequency (the solid line in the figure shows a measurement with a 6½-digit digital voltmeter). Directly using a high-speed, low-bit-count AD converter is still not sufficient for high-frequency, high-precision AC voltage measurement [25, 27] (upper dotted line in the figure); that dotted line is the result of an 8-bit AD converter with a 100 MHz clock. The voltage-acquisition method based on the differential or least-common-multiple period, phase stepping and an expanded signal waveform can therefore deliver high-precision AC voltage measurements at the highest possible frequencies. From this analysis and the current state of high-frequency voltage measurement, some very valuable development prospects open up for digital techniques at high frequency. Present high-frequency RMS voltage measurement mostly relies on substitution methods, with complex equipment, difficult operation and limited precision. For digital voltage measurement at high frequency there is a contradiction between the number of bits and the rate of AD conversion: the conversion rate of AD devices
Fig. 9.50 Comparison of measurement results
with more bits is low, while high-speed AD devices offer only a small number of bits. For direct sampling of high-frequency waveforms, the number of sampling points obtained per cycle of the measured signal is therefore very limited, and it has become a common view that high-frequency voltage cannot be measured with a method like that of the digital voltmeter. The method described above, which resembles the differential beat period used in frequency-standard comparison, performs equivalent phase-shifted sampling of the voltage signal under test, so that more voltage data are collected within one cycle of the signal under test and of the reproduced waveform. This benefits the high-precision measurement of the signal's voltage parameters, such as the RMS, average and peak values. Among the errors introduced by this method, the difference in frequency stability between the clock and the measured signal must be considered: poor frequency stability distorts the recovered waveform, and the RMS voltage calculated from it then carries a waveform-distortion error. The corresponding measurement results were shown in detail in the earlier section on transient and short-term frequency stability. The method also plays a key role in calibrating the linearity of special waveform signals, in precision testing of amplifier frequency-response characteristics and in measuring the input–output transfer characteristics of frequency-conversion circuits. Digital high-frequency voltage measurement works on the basis of a fractional or multiplicative frequency relationship between the clock and the measured signal combined with a tiny frequency difference, which improves the sampling of the measured signal. For voltage measurement extending toward higher frequency ranges, the clock frequency can be far lower than the sampling frequency required by earlier measurement methods. When the completion time of the sampling is referenced to the least-common-multiple period, or combined differential period, between the clock signal and the signal under test, a fractional or multiple frequency relationship between the two, together with a certain small specific frequency difference, is guaranteed. Such clock sampling yields one or several successive, stepwise, densely phase-expanded sweep samples of the signal under test, each delayed by a full clock period plus a small phase difference. This is shown in Figs. 9.51 and 9.52. From the sampling, the waveform of the measured signal is recovered and reconstructed from the voltage samples ordered by their corresponding phase-difference values, and the AC voltage parameters are then calculated. Figure 9.51 depicts a fractional relationship, with a certain frequency difference, between the clock signal and the signal under test, in which successive clock pulses scan the phase positions on the waveform of the signal under test. The voltage of the measured signal is sampled over many of its cycles, and the sampled voltages are shifted in phase by one step from each other. Such sampling ensures that the waveform of the measured signal can be recovered
Fig. 9.51 Fractional clock sample point shift characteristics
Fig. 9.52 Voltage sampling point shift characteristics for a multiplicative clock with a frequency difference
after all the phase states of the measured signal have been sampled, and the RMS value of the voltage can be calculated.

F_c = f_x/m + Δf  (9.22)

ΔT = τ_1 · Δf/f  (9.23)
When τ_1 is equal to the period of the clock signal (i.e., m times the period of the measured signal), the clock's sampling process consists of sampling and measuring the voltage at successive multiples of τ_1; it is a linear, stepwise process along the time axis, and ΔT then represents the phase step produced under the action of the clock. For convenient use and calculation, Δf should, on the one hand, be much smaller than the frequencies of the two compared signals (so that the time-expansion effect of the sampling appears in the measurement); on the other hand, Δf should stand in a simple fractional relation to the frequencies of the two compared signals. When the main frequencies of the clock signal and the measured signal are in a fractional or multiplicative relationship, the least common multiple period is exactly equal to the period of Δf [3, 4]. For example, if Δf is 10, 1 and 100 Hz, the corresponding least common multiple periods are 0.1 s, 1 s and 10 ms. For the same signal under test, the longer the least common multiple period, the more voltage data can be collected, which is more conducive to improving the precision. The situation is somewhat more complicated when the frequency of the measured signal is essentially arbitrary.
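The role of the least common multiple period can be illustrated with a minimal sketch, assuming integer-hertz frequencies so that the greatest common factor frequency [3] is exact; the function name and printout are illustrative only, not part of the measurement software:

```python
from math import gcd

def lcm_period_and_phases(f_x_hz: int, f_c_hz: int) -> tuple[float, int]:
    """Least common multiple period of the two signals and the number of
    distinct sampling phases visited on the measured waveform."""
    f_common = gcd(f_x_hz, f_c_hz)   # greatest common factor frequency
    lcm_period = 1.0 / f_common      # the combined pattern repeats after this time
    phases = f_c_hz // f_common      # clock samples (distinct phases) per repetition
    return lcm_period, phases

# Fractional clock with offset: 10 MHz signal, F_c = f_x/10 + 100 Hz = 1.0001 MHz
# (the combination used later for Fig. 9.53).
period, phases = lcm_period_and_phases(10_000_000, 1_000_100)
print(f"LCM period = {period * 1e3:.0f} ms, distinct sample phases = {phases}")
print(f"equivalent phase resolution = {1e12 / (10_000_000 * phases):.1f} ps")
# -> LCM period = 10 ms, distinct sample phases = 10001
# -> equivalent phase resolution = 10.0 ps
```

The longer the repetition time, the finer the equivalent phase grid on the recovered waveform, which is exactly the precision benefit described above.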
In Fig. 9.52, when the clock signal and the signal under test are in a multiplicative relationship with a certain frequency difference, sampling according to a suitable clock sequence is likewise fully capable of scanning the phase positions of the signal under test with respect to its voltage variation. Here a given clock sequence acquires, for each period of the measured signal, voltage data that advance by one phase step, eventually covering all phase positions. The calculation of the RMS voltage of the signal can thus be supported from, for example, a clock signal frequency of

F_c = n·f_x + Δf  (9.24)
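The multiplicative case of Eq. 9.24 can be sketched in the same spirit; the numbers below are illustrative assumptions, not parameters taken from the book's experiments:

```python
from math import gcd

def multiplicative_clock_coverage(f_x_hz: int, n: int, delta_f_hz: int):
    """For F_c = n*f_x + Δf (Eq. 9.24): roughly n samples fall in every cycle of
    the measured signal, and the whole sample comb drifts by a small step each
    cycle until every phase has been visited. Integer-hertz values assumed."""
    f_c = n * f_x_hz + delta_f_hz
    f_common = gcd(f_x_hz, f_c)
    coverage_time = 1.0 / f_common                   # time to visit all phases
    total_phases = f_c // f_common                   # samples in that time
    drift_per_cycle = delta_f_hz / (n * f_x_hz**2)   # approx. comb shift per signal cycle (cf. Eq. 9.25)
    return f_c, coverage_time, total_phases, drift_per_cycle

# Assumed example: 1 MHz signal, n = 4, Δf = 100 Hz, i.e. a 4.0001 MHz clock.
f_c, t_cov, phases, drift = multiplicative_clock_coverage(1_000_000, 4, 100)
print(f"clock = {f_c / 1e6:.4f} MHz, coverage time = {t_cov * 1e3:.0f} ms")
print(f"distinct phases = {phases}, comb drift ≈ {drift * 1e12:.0f} ps per signal cycle")
# -> clock = 4.0001 MHz, coverage time = 10 ms
# -> distinct phases = 40001, comb drift ≈ 25 ps per signal cycle
```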
In the periodic sampling of the measured signal by the clock, the clock sequence, after n samples spanning one cycle interval of the measured signal f_x, returns approximately to a specific phase value of f_x; here F_c is the frequency of the clock signal. The offset Δf, however, acts according to the relation

ΔT = τ · Δf/f  (9.25)
This relation expresses the phase-step condition between the signals, where τ is the comparison time; when the clock signal and the measured signal are in a multiplicative relationship with a certain frequency difference, the shortest sampling step should be very close to the period of the measured signal. This method achieves a breakthrough in the frequency range of digital high-frequency AC voltage measurement. On the one hand, the fractional or multiplicative clock with a small deviation performs phase-shifted, stepwise sampling over many cycles of the measured signal, and ordering the samples by their equivalent phase-difference values greatly reduces the clock frequency actually required. On the other hand, especially in the fractional-clock-plus-deviation case, the clock frequency can lie far below that of the measured signal, since each single sample, taken once per several cycles of the measured signal, still captures the voltage corresponding to a distinct phase difference. Acquisition of the waveform elements, and reconstruction and recovery of the waveform, for ever higher measured-signal frequencies can therefore be done at significantly lower clock frequencies [28]. Figure 9.53 shows the voltage waveforms recovered after digitally sampling a 10 MHz AC voltage signal with a 1.0001 MHz and a 1.001 MHz clock signal, respectively. The blue line is the result obtained with the 1.0001 MHz clock; the red line is the result obtained with the 1.001 MHz clock. The voltage peaks and RMS values of the two measurements are identical. From the digital point of view, the waveform sampled with the smaller clock frequency difference is very faithful, and in terms of phase-voltage resolution the blue line offers somewhat more resolution. The least common multiple period between that clock and the signal under test is 1/100 Hz = 10 ms, whereas for the latter clock it is 1/1 kHz = 1 ms. In both cases the equivalent period of the recovered waveform equals the period of the signal under test, 100 ns. If there is distortion in the
Fig. 9.53 Example of fractional frequency clock relationship sampling
signal, or if it is a square wave, triangle wave, specific pulse wave, etc., it should still be represented realistically. It is therefore reasonable to believe that waveform-display instruments such as oscilloscopes suitable for periodic voltage signals at higher frequencies can be developed using the time–frequency processing method described above and the principle of the border effect; this also poses a challenge to conventional instruments. Figures 9.54 and 9.55 show, respectively, the reconstructed waveform of a 20 MHz pulse wave acquired with a 10.01 MHz clock and the recovered waveform of a 5 MHz triangular wave with noise acquired with a 1.0001 MHz clock, using the sampling measurements described above. For pulsed and other non-sinusoidal signals, a lower-frequency clock signal can thus display the waveform of a complex signal under test and yield its important waveform parameters. As the comparison of measurement results in Fig. 9.50 shows, it is precisely by using the high-frequency AC voltage measurement method and system of Fig. 9.47 that the measurement precision of the AC voltage is greatly improved; for sine-wave signals, the RMS measurement precision is significantly higher than that of other traditional methods. Similarly, for the non-sinusoidal signals of Figs. 9.54 and 9.55, such as pulse and triangle waves, especially at high frequency, we used signals generated by an AFG3101C arbitrary waveform and function generator as the measured input of the system. The voltage varying with phase within each cycle of the signal can be sampled with high resolution, so the waveform recovered from the sampled values is also highly precise; the two figures faithfully reflect the waveforms of the non-sinusoidal signals. These results also show that the measurement method and system of Fig. 9.47 can measure and display different waveform signals at high frequency without distortion.
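To make the reconstruction and parameter extraction concrete, the following is a minimal simulation sketch under idealized assumptions (perfectly stable clock, ideal quantizer, assumed parameter names); it is not the book's actual measurement system or algorithm, only the reordering idea applied to the Fig. 9.53 frequencies:

```python
import numpy as np

def equivalent_time_reconstruct(f_x, f_c, n_samples, waveform=np.sin, adc_bits=8, v_fs=2.0):
    """Sample `waveform(phase_in_radians)` at the clock rate f_c, quantize with an
    ideal ADC of `adc_bits` bits over a full scale of ±v_fs/2, then reorder the
    samples by their phase within one period of f_x."""
    t = np.arange(n_samples) / f_c       # acquisition instants
    v = waveform(2 * np.pi * f_x * t)    # instantaneous signal voltage
    lsb = v_fs / 2 ** adc_bits
    v_q = np.round(v / lsb) * lsb        # ideal quantization
    phase = (t * f_x) % 1.0              # position within the signal period (0..1)
    order = np.argsort(phase)            # reorder onto one equivalent cycle
    return phase[order], v_q[order]

# Fig. 9.53 case: 10 MHz sine, 1.0001 MHz clock; one beat period holds 10001 samples.
phase, volts = equivalent_time_reconstruct(10e6, 1.0001e6, 10001)
rms = np.sqrt(np.mean(volts ** 2))
print(f"points on the recovered cycle: {volts.size}")
print(f"RMS = {rms:.4f} (ideal sine RMS = {1 / np.sqrt(2):.4f}), "
      f"peak = {volts.max():.4f}, mean = {volts.mean():+.5f}")
```

Substituting a square, triangle or pulse function for `np.sin` mimics the non-sinusoidal cases of Figs. 9.54 and 9.55 under the same idealized assumptions.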
Fig. 9.54 Waveform plot of the reconstructed 20 MHz pulse wave recovered from acquisition with a 10.01 MHz clock
Fig. 9.55 Waveform plot of the reconstructed 5 MHz triangular wave with noise recovered from acquisition with a 1.0001 MHz clock
The experiments here essentially incorporate a phase-processing approach into digital voltage measurement, and in waveform acquisition and processing they draw in particular on the broadband nature of phase processing. The larger measurement bandwidth obtained with a fractional, lower-frequency clock should enable highly precise recovery of the signal waveform at a sampling rate that is not very high (the exact opposite of a conventional digital sampling oscilloscope, whose displayed waveform frequency is well below the instrument's sampling rate), substantially raising the upper measurement frequency of the instrument. As a waveform-display instrument, it also provides quantitative measurement of parameters such as voltage and time, which greatly improves its technical specifications.
9.7 Prospects for Further Development of the Phase Question

The conceptual deepening, the expansion of processing methods and the opening of new application ideas around the phase question have allowed phase processing not only to maintain and relatively improve measurement precision, but also to significantly expand its range of applications. Further, the frequency correlation between signals offers still more extensive and deeper applications. It is worth noting that digital linear phase precision and processing have also made considerable progress in the frequency domain; the benefits for single-sideband phase-noise measurement have already become apparent. Although a gap remains compared with the most advanced international instrumentation, the exceptionally low cost and convenience of use show a strong competitive edge. Our work has also delved into frequency sources. The atomic clock is a very widely used frequency base and standard at home and abroad. While most conventional atomic-clock development has focused on improving the physics package and hardware, our work shows that its technological progress cannot be separated from software and the phase-processing-related signal-processing techniques specific to atomic clocks. There is also value in revealing and applying time–frequency fingerprints in more depth, which will be useful for the further development of frequency sources. Applying time–frequency methods to periodic signals, including digitized AC voltage measurement, and more generally to waveform acquisition, processing, reconstruction and display, will be work of significant market and academic value; besides extending the upper frequency limits of digital AC voltage measurement, it will also be an important avenue toward new, higher-precision digital sampling oscilloscope technology. This range of approaches, combined with computer technology and resources, has every opportunity to develop new digital computer-based instrumentation systems that will change the paradigm for the development of existing instrumentation technology (Table 9.9).
Table 9.9 Selected advances in the application of border effects after years of research

Serial number | Measurement target | Direct parameters | Border | Improvement factor
1 | Frequency (direct) | Multicycle phase synchronization | Phase-recombination envelope borders | 10²–10⁴
2 | Linear phase comparison | Digital phase change rate | A/D conversion jump edge | 10¹–10²
3 | Digital-voltage frequency response (high-frequency voltmeter development direction) | Rate of voltage change | A/D conversion jump edge | Breaks into the high-frequency range; 10² or more
4 | Crystal frequency source | Resonant state equivalent compression | Resonance curve border | 10¹ or more
5 | Sensor output | Sensing signal change rate | Analog-to-digital conversion jump edge | 10¹–10²
6 | Other digital measurements | Numerical rate of change with time | Jump edge with analog fitting information | 10²
7 | Digital-voltage AC signal | Rate of change of voltage and waveform parameters | A/D conversion jump edge fitting | 10² or more
8 | Phase noise measurements | Rate of phase change and its handling | A/D conversion jump edge with phase undulation | 10¹ or more; time-domain and frequency-domain balance
Acknowledgements This book was supported by grants from National Natural Science Foundation of China (NSFC) projects 11773022 and 11873039. We hereby express our gratitude.
References

1. Zhou W, Ou X, Zhou H et al (2006) Time-frequency measurement and control techniques, 1st edn. Xidian University Press
2. Allan DW, Daams H (1975) Picosecond time difference measurement system. In: 29th Annual symposium on frequency control, pp 404–411
3. Zhou W (1992) The greatest common factor frequency and its application in the accurate measurement of periodic signals. In: Proceedings of the 1992 IEEE frequency control symposium, Hershey, PA, USA, pp 270–273. https://doi.org/10.1109/FREQ.1992.270004
4. Zhou W (2000) Systematic research on high accuracy frequency measurements and control. Doctoral dissertation, Shizuoka University, pp 31–39
5. Ruan J, Wang YB, Chang H et al (2015) Current status of development of time-frequency reference devices. J Phys 64(16):60–68
6. Zhou W, Li Z, Qiao W (2019) The most potential technology in time frequency domain—digital linear phase comparison. National academic conference on time and frequency, 2019
7. Zhou W, Li Z, Bai L et al (2014) Verification and application of the border effect in precision measurement. Chin Phys Lett 31(10):100602
8. Zhou W et al. A method of time-frequency fingerprint detection with a frequency source. Invention patent, application no. 202110600832.7
9. Huang BY, Zhou W, Zhang YB et al (1996) Handbook of test and measurement techniques, vol 11, Time Frequency, 1st edn. China Metrology Press, Beijing, pp 73–82, 143–149
10. Lance AL, Seal WD, Labaar F et al (1984) Phase noise and AM noise measurements in the frequency domain. Infrared Millimeter Waves 11:239–289
11. Hang S (2011) Research on phase noise testing techniques for continuous wave signals. Xidian University, Xi'an
12. Shi Y (2011) A novel phase noise measurement method. Xi'an University of Electronic Science and Technology, Xi'an
13. Li HH (2010) Precision measurement of time-frequency signals. Science Press, Beijing
14. Dyukin AB, Medvedev SY, Yakimov AV (2007) Application of direct digital synthesis and cross correlation solution in phase noise measurement system. AIP Conf Proc 922(1):411–414
15. Allan CE (2014) A digital method for phase noise measurement. University of Washington, Washington
16. Symmetricom (2009) 5125A high-performance, extended-range phase noise and Allan deviation test set
17. Xu LF, Luo D, Zhou W et al (2018) A frequency stability measurement method for full response time. J Xi'an Univ Electron Sci Technol 45(1):79–84
18. Bai L, Yifei W, Zhou W et al (2019) Advances in measurement methods from transient to full domain stability. J Xi'an Univ Electron Sci Technol 46(3):8–13
19. Bai L, Su X, Zhou W et al (2015) On precise phase difference measurement approach using border stability of detection resolution. Rev Sci Instrum 86(1):015106
20. http://www.bdtic.com/en/linear/LTC2217
21. Ren QH, Yu YF (2014) Application of Welch power spectrum estimation algorithm in phase noise measurement. Automat Instrument 35(1):16–19
22. Wang YT (2012) Atomic clocks and time-frequency systems (anthology). National Defense Industry Press, Beijing
23. Zhou W, Bai L et al (2013) Generalized phase measurement and processing with application in time-frequency measurement control and link. In: Proceedings of the 2013 joint European frequency and time forum and international frequency control symposium
24. Zhou W et al. A phase locked loop based on digital direct linear phase matching. Invention patent, application no. CN202010450932.1
25. Bai L, Liu H, Zhou W et al (2017) ADC border effect and suppression of quantization error in the digital dynamic measurement. Chin Phys B 26(9):090601
26. Zhou W, Li Z, Bai L et al (2014) Verification and application of the border effect in precision measurement. Chin Phys Lett 31:100602
27. Liu HD (2017) Clock cursor principle and its application in direct digital measurement. Xi'an University of Electronic Science and Technology, Xi'an
28. Zhou W (2008) Equivalent phase comparison frequency and its characteristics. In: Proceedings of the 2008 IEEE frequency control symposium, pp 468–470