New digital signal processing methods 9783030453589, 9783030453596



English, 458 pages, 2020


Table of contents :
Preface......Page 6
Symbols......Page 10
Contents......Page 14
Abbreviations......Page 19
1.1 Classical Approach: Formulation of One-Dimensional Regression Problems......Page 21
1.1.2 Regression Model......Page 22
1.1.4 Assumption About the Regression Function and Its Possible Recognition......Page 23
1.1.5 Analysis of Remnants......Page 25
1.2 Procedure of Optimal Linear Smoothing of Noisy Data......Page 26
1.2.1 Possible Generalizations......Page 29
1.3 Description of the Eigen-Coordinates Method......Page 31
1.3.1 Determining the Basic Linear Relationship for Functions with Nonlinear Fitting Parameters......Page 32
1.3.2 Using the Orthogonal Variables......Page 35
1.3.3 Selection of the Most Suitable Hypothesis......Page 37
1.4 Generalizations and Recommendations for the Eigen-Coordinates Method......Page 45
1.4.1 Using a Priori Information......Page 47
1.4.2 The Problem of Elimination of Depending Constants......Page 52
1.5 Concluding Remarks......Page 61
1.7 Exercises......Page 65
References......Page 67
2.1 Introduction and Problem Formulation......Page 69
2.2 The Reduced Fractal Model and its Realisation in the Fractal-Branched Structures......Page 73
2.3 The Fitting Procedure of Functions Containing Log-Periodic Oscillations......Page 78
2.4 Application of the Original Procedure to Real Data......Page 81
2.4.1 Description of the Bronchial Asthma Disease......Page 82
2.4.2 Description of Fragments of the Queen Bee Acoustic Signals......Page 83
2.4.3 Description of Acoustic Data Recorded from Car Valves in the Idling Regime......Page 90
2.5 Concluding Remarks on Fundamental Results and Open Questions......Page 92
2.A Appendix 1: Evaluation of the Product (2.4) for the Case ξb ≪ 1......Page 95
2.B Appendix 2: Synthesis of the Eigen-Coordinates Method and the Basic Linear Relationship......Page 97
2.C Appendix 3: Computation of the Power-Law Exponent ν in L0(t)......Page 101
2.D Appendix 4: Basic Linear Relationship for Inoculating Log-Periodic Functions......Page 102
2.F Questions for Self-Testing......Page 103
2.G Exercises......Page 104
References......Page 105
3.1 Introduction and Formulation of the Problem......Page 107
3.2 Evaluation of Statistical Stability of Random Sequences by Higher Moments......Page 109
3.3 The Approximate Expression for the Generalised Mean Value Function, Fractional and Complex Moments, and External Correlations......Page 113
3.4 The Generalised Pearson Correlation Function, External and Internal Correlations, and Some Useful Inequalities......Page 119
3.5.1 Statistical Protection of Valuable Documents......Page 126
3.5.2 Detection of a Small Signal and Statistical Proximity......Page 131
3.5.4 Dielectric Data and Calibration Curve......Page 135
3.5.5 Fractional Exponential Reduction Moments Approach......Page 138
3.5.6 Integration and Differentiation Pre-processing......Page 142
3.5.7 Some Simulation Results......Page 143
3.6 Concluding Remarks and Open Problems......Page 156
References......Page 157
4.1 Introduction and Formulation of the Problem......Page 160
4.2 The Universal Distribution of Stable Points......Page 163
4.3 Representation of Data from Complex Systems Without an Explicit Model by Means of the Generalised Gaussian Distribution......Page 167
4.3.1 Detection of Few 'Strange' Points Located in a Narrow Interval......Page 168
4.3.2 The Presence of Random Points That Disturb the Whole Interval l......Page 172
4.4.1 Similar Random Sequences Without Trend (Triple-Correlations of the Transcendental Numbers π and e)......Page 178
4.4.2 Random Sequences with a Trend: The 'Forex' Currency Market Data......Page 184
4.4.3 Quantitative Classification of Earthquakes......Page 188
4.4.4 Coding Information with the Help of Quantum Dots......Page 196
4.5 Basic Results and Open Problems......Page 199
4.6.1 Introduction and Formulation of the Problem......Page 201
4.6.2 Scaling Properties of the Beta-Distribution and Description of the Treatment Procedure......Page 204
4.6.3 Treatment of Long-Times Membrane Current Series......Page 214
4.6.4 Clusterization of Final Parameters Based on the Generalised Pearson Correlation Function......Page 217
4.6.5 Results and Discussion......Page 222
4.8 Exercises......Page 223
References......Page 224
5.1 Introduction and Formulation of the Problem......Page 226
5.2 Description of a Quasi-Periodic Process in Terms of the Prony's Spectrum......Page 228
5.3 Description of the General Detection Algorithm......Page 231
5.4 Detection of Quasi-Periodic Processes from Real Data......Page 235
5.4.1 Detection from Raman Spectra Recorded for Pure Water......Page 236
5.4.2 Detection from Random Geophysical Acoustic Signals......Page 242
5.5 Results and Discussion......Page 245
5.A Appendix: Generalization of the Model for Quasi-Periodic Processes to Consider Incommensurable Periods......Page 249
References......Page 251
6.1 Introduction and Formulation of the Problem: The Reproducible Experiments and their Description......Page 253
6.2 The Basis of the General Theory of Reproducible Experiments. The Physical Meaning of the Prony Decomposition......Page 254
6.3 Description of the General Algorithm and Its Testing on Available Data......Page 262
6.3.1 The Raman Spectra of Distilled Water......Page 264
6.3.2 Two Wireless Sensor Nodes Exchanging Packets in a Noisy Wireless Channel......Page 270
6.4 Generalisation of Results for Quasi-Reproducible (Non-stationary) Measurements......Page 275
6.4.1 Self-Consistent Solutions of the Functional Equation (6.30)......Page 278
6.4.2 The Clusterization Procedure and Reduction to an "Ideal Experiment"......Page 282
6.5 Validation of the General Theory on Experimental Data......Page 287
6.6 Final Results and Further Perspectives......Page 298
6.7 Questions for Self-Testing......Page 303
References......Page 304
7.1 Describing Complex Signals with Beatings......Page 306
7.2 Basics of the Non-orthogonal Amplitude Frequency Analysis of Smoothed Signals (NAFASS) Approach: Evaluation of the Initial.........Page 310
7.3.1 The Fitting Function for the Integer Case......Page 313
7.3.2 The Fitting Function for the Fractional Case......Page 315
7.4.1 Application to Economic Data......Page 318
7.4.2 Application to the Noise Created by Transcendental Numbers......Page 323
7.5 New Type of Fluctuation Spectroscopy Based on the NAFASS Approach......Page 332
7.6 The NAFASS Approach and Chaos......Page 340
7.7 Concluding Remarks and the Basic Principles of Fluctuation Metrology......Page 351
7.A Appendix......Page 354
7.B Questions for Self-Testing......Page 356
References......Page 357
Chapter 8: Applications of NIMRAD in Electrochemistry......Page 359
8.1.1 Formulation of the Problem......Page 360
8.1.2.1 Preliminary Considerations: The Second-Order DGIs......Page 361
8.1.2.2 The General Theory of the Discrete Geometrical Invariants Based on the Higher-Order Curves and the Fourth-Order GDI......Page 363
8.1.2.3 Application of the Statistics of the Fractional Moments (SFM) and Use of the Internal Correlation Factor......Page 366
8.1.3.2 Algorithm Description......Page 368
8.1.4 Results and Discussion......Page 373
8.2.1 Formulation of the Problem......Page 375
8.2.2 Experimental Set-Up and Preliminary Data Analysis......Page 379
8.2.3 The Mathematical Section of the PCA and the Modified F-Transform......Page 386
8.2.4 Application of the Modified Platform to Real Data......Page 391
8.2.5 Results and Discussion......Page 396
8.3.1 Formulation of the Problem......Page 398
8.3.2 Foundations of the General Theory of Percolation Currents......Page 400
8.3.3.2 Experimental Measurements......Page 406
8.3.4.2 Representation of Initial Data in the Uniform Logarithmic Scale......Page 407
8.3.4.3 Clusterization of all Measurements to the Averaged Triad Cluster......Page 409
8.3.4.4 The Final Fit......Page 413
8.A Appendix......Page 417
8.B Questions for Self-Testing......Page 420
References......Page 421
9.1 Compact Universal Parameters to Reduce Initial Data......Page 425
9.2 The "Struggle" Principle and Justification of the Chosen Set......Page 427
9.3 Verification by Model Data......Page 431
9.4 Finding Quantitative Differences when the Input Is in a Descriptive Form......Page 432
References......Page 444
Epilogue......Page 446
Index......Page 448

Raoul R. Nigmatullin Paolo Lino Guido Maione

New Digital Signal Processing Methods Applications to Measurement and Diagnostics


Raoul R. Nigmatullin Radioelectronics and Informative-Measurement Technics Department Kazan National Research Technical University named after A.N. Tupolev (KNRTU-KAI) Kazan, Tatarstan Republic, Russia

Paolo Lino Department of Electrical and Information Engineering Polytechnic University of Bari Bari, Italy

Guido Maione Department of Electrical and Information Engineering Polytechnic University of Bari Bari, Italy

ISBN 978-3-030-45358-9    ISBN 978-3-030-45359-6 (eBook)
https://doi.org/10.1007/978-3-030-45359-6

© Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

Many research investigations are based on collecting and processing large amounts of experimental data, and they share a common problem. The development of statistical methods that are free from unjustified assumptions, simplifications, and treatment-induced systematic errors is a "golden" dream, which is, at first sight, unachievable. Therefore, if a demanding reader opens this book, what kind of information might he/she find, and which knowledge might he/she receive? Non-invasive methods, that is, methods that are free from uncontrollable treatment errors, are possible by careful selection from the wide spectrum of approaches and generalizations available in modern statistics. Fundamental supporting evidence comes from more than 20 years of research and scientific publications in this field. The main ideas, with the associated approaches and data processing techniques, are collected in this book and organized under a unique framework. Another argument in favor of the proposed non-invasive methods is their validation by real data obtained from different laboratories and independent sources. Given this premise, the major goal of the book is to present methods and algorithms, initially scattered across many publications, in the form of a reference manual, which can be useful to Master's and PhD students and young researchers. The authors' intention is to provide a manual for readers interested in innovative signal and data processing techniques that are suitable in measurement, diagnostics, and control applications. Namely, the authors had useful and instructive experiences in presenting these ideas in lectures to students. They then decided to properly define and make available a new approach for a wider audience, while keeping in mind that many research and practical applications could benefit from, for example, what they called "non-invasive methods for a reduced analysis of data." There is a basic idea that will follow and help the reader throughout the book.
It consists in providing a definite answer to the following main question: is it possible to find deterministic elements in random sequences of data, if the so-called "best fit" describing model is absent? If a researcher tries to understand the random behavior of some system data flow, he/she may put the following question: does the system response reflect an absolute


random behavior, or is there any "deterministic color" due to hidden system properties or characteristics, such that the initial randomness is only due to ignorance of those properties and characteristics? It is shown that, in many cases, the second situation occurs. One can put forward some natural principles to find a fitting function for a wide class of data, without knowing the specific model behind the process that produces the data. Actually, a fitting function containing a small number M of parameters allows us to reduce the initial high number N of data points to a much lower number of fitting parameters (M ≪ N). This function, which is obtained from some general principles, contains a controllable fitting error; hence the fitting procedure is free from uncontrollable errors. Stated in this way, it looks obvious. However, students usually do not like complex or too advanced mathematical calculations that are beyond the scope of traditional university courses. Hence this book tries to avoid mathematical formulas related to traditional continuous mathematics. Namely, all measured data have a discrete structure, and only simple algebraic expressions employing discrete vectors of data are used. However, the reader should have a fundamental knowledge of algebra, calculus, and analytical geometry. This knowledge is usually covered by books of the first 2 years of engineering programs of study in universities and institutions. Hopefully, certain algebraic manipulations (e.g., taking the derivatives and integrals of discrete data) are understandable by most readers. Some simple exercises at the end of each chapter help to deeply understand the contents of the chapter. Moreover, it should be relatively easy to code programs for verifying the basic ideas on data from experiments or, at least, from simulation models. To better understand the idea of this book, why not have a look at the existing data and signal processing methods from a bird's-eye view?
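The reduction of N noisy points to M fitting parameters with a controllable error can be sketched in a few lines of numpy (an illustrative example; the basis functions, noise level, and parameter values are invented here, not taken from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 500)                      # N = 500 measured points
y = 1.0 + 2.0 * x - 3.0 * x**2 \
    + 0.01 * rng.standard_normal(x.size)            # noisy discrete data

# Reduce N = 500 data points to M = 3 fitting parameters (M << N)
# with the conventional linear least-squares method (LLSM).
X = np.column_stack([np.ones_like(x), x, x**2])     # basis functions X_k(x_j)
C, *_ = np.linalg.lstsq(X, y, rcond=None)           # fitting coefficients C_k

# The fitting error is controllable: it can be computed from the data itself.
rel_err = np.linalg.norm(y - X @ C) / np.linalg.norm(y)
```

The 500 raw points are replaced by 3 coefficients plus one scalar quality measure, which is the "reduced analysis of data" idea in miniature.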
The techniques to process measurements or other kinds of data form the cornerstone of all natural sciences, and any attempt to push this stone over the hump seems useless. Many excellent and popular books, reviews, and papers, written by outstanding mathematicians, statisticians, experimentalists, and theoreticians, provide a stable trend in the science of data and signal processing. All the references cited in this book represent the great efforts of the last decades in data processing, fitting, and forecasting in several application domains, on the basis of different approaches and principles. However, the question posed in this book should sound unexpected and paradoxical to many researchers: is it possible to create a unified theory or approach, which is rather general and free from uncontrollable treatment errors, to process all reproducible data? The aim is to find and justify a positive answer. Many of the presented methods should lead to reconsidering the conventional point of view and create a new trend, known as the theory of reproducible measurements. The main contents of each chapter are described as follows.
Chapter 1 puts the following question: is it possible to extend the limits of the conventional linear least square method (LLSM) and apply it to the fitting of functions that initially contain nonlinear fitting parameters? The answer is positive, and the limits of the conventional LLSM can be essentially extended. Moreover, the reader will find some interesting peculiarities that are not considered in books related to the LLSM.
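To give the flavor of Chapter 1's question, here is a toy sketch of moving a nonlinear parameter into a linear relationship (an invented illustration in the spirit of the idea, not the book's eigen-coordinates algorithm itself): for y(x) = A·exp(λx), the identity y(x) − y(x0) = λ·∫ y(t)dt is linear in the nonlinear parameter λ, so the LLSM applies directly to discrete data.

```python
import numpy as np

# Toy illustration (not the book's exact procedure): for y(x) = A*exp(lambda*x)
# the relation   y(x) - y(x0) = lambda * integral_{x0}^{x} y(t) dt
# is *linear* in the nonlinear parameter lambda.
x = np.linspace(0.0, 2.0, 400)
y = 2.5 * np.exp(-1.3 * x)                           # "measured" discrete data

# discrete (trapezoidal) integral of the data: J(x_j) = int_{x0}^{x_j} y dt
J = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))
Y = y - y[0]

lam = float(J @ Y / (J @ J))                         # one-parameter LLSM solution
A = float(np.mean(y * np.exp(-lam * x)))             # amplitude then follows linearly
```

Both parameters are recovered by purely linear algebra on discrete vectors, with no iterative nonlinear search.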


Chapter 2 considers the so-called eigen-coordinates method and the non-trivial problem of fitting blow-like signals (BLS). The chapter includes a useful combination of fitting methods for achieving the desired result. Moreover, it describes the mesoscopic model of the formation of BLS to explain their origin in many systems.
Chapter 3 generalizes the statistics related to integer moments and introduces moments that may assume fractional and even complex values. The statistics of fractional moments creates an additional source of information and fits many random signals in the fractional moments space. Many practical examples based on models and real data are considered to demonstrate the usefulness of these additional statistics.
Chapter 4 proposes a quantitative universal label to read relative fluctuations and compare them with each other. This methodology is effective in many real situations. Moreover, it is shown that the so-called integrated "sequence of ranged amplitudes" can be fitted by the beta-distribution function. This allows us to describe relative fluctuations from a general point of view and to create a specific fluctuation spectroscopy based on its scaling properties. An interesting example, distinguishing the "living" noise from the "dead" noise recorded from the brain cells of a young rat, serves as a confirmation of the proposed algorithm.
Chapter 5 shows how to find an additional fitting function for describing quasi-periodic experiments. After a discussion of the physical and statistical meaning of the conventional Fourier transform, it is proved that real experimental data should be expressed in terms of the segments corresponding to the Prony decomposition. The results are instructive and show that the measured data actually contain two fitting functions.
The first is traditional and corresponds to the chosen microscopic model; the second fitting function follows from the partly correlated experiments and is tightly associated with the chosen Prony decomposition.
Chapter 6 generalizes the results of the previous chapter to the class of quasi-reproducible experiments. This general theory is tested on some non-trivial data determined by heartbeats and demonstrates its generality and wide applicability. Suitable applications are expected soon, especially in the analysis of different complex systems for which the "best fit" model is absent.
Chapter 7 suggests a new way of presenting random signals in the form of a multi-periodic decomposition of nonlinear and non-stationary signals. If one puts forward the principle that an initial signal is a combination of all possible beatings, then finding a multi-spectrum of the signal (i.e., the dispersion law) and its amplitude-frequency response is possible. The examples prove the reliability of the proposed principle and its applicability for describing data.
Chapter 8 contains combinations and modifications of the new methods and demonstrates their power in electrochemistry. Data obtained from the Ufa Petroleum Institution are utilized. The reason for applying the new statistical methods in electrochemistry comes from the author Nigmatullin's desire to pay his debt to the distinguished professor and founder of molecular electronics in the former USSR, Prof. Rashid Sh. Nigmatullin, who was the author's father.
Chapter 9 is very recent and proposes a "universal" set of quantitative parameters that are free from any treatment errors and contain only experimental


errors. The methodology dramatically reduces an initial rectangular matrix of size N × M, with N > M ≫ 1, to a more compact matrix of size n × n, where n is the number of universal parameters obtained by the proposed approach. Typically, n < 10. The method detects some monotone behavior with respect to an input external factor and finds the quantitative differences between external factors that are expressed in a descriptive (qualitative) way. This useful and rather effective innovation is tested on model-based simulation data and on real data available from experiments. To conclude, it is remarked that the methods proposed in each chapter are almost independent of each other; hence they can be applied and tested independently. I hope that the reader will find many useful algorithms and new ideas to understand and establish deterministic relationships that are hidden in various complex phenomena.

Kazan, Russia Bari, Italy Bari, Italy January 2020

Raoul R. Nigmatullin Paolo Lino Guido Maione

Symbols

Note: The list of symbols is organized by chapters, avoiding repetition of symbols previously declared.

Chapter 1
ε(x), err(x) - Random error functions
θ - Vector of the given fitting parameters
⟨ε(x)⟩, ⟨...⟩ - Arithmetic mean value of an error function, arithmetic mean of any function
K(.) - Gaussian kernel function
ninj - Noisy random sequence
Gsm(.), gsm(.) - Generalized smoothed function in the POLS
w - Fitting parameter
rndj - Random sequence
Jninj - Integrated random sequence
RelErr(.) - Relative error function
σ(θ) - Relative dispersion for a set of the fitting parameters
Ck - Set of the fitting coefficients
Xk(xj) - Set of the fitting functions in the LLSM
r(x) - Remnant random function
Ψk(xj) - Orthogonal set of functions in the LLSM
D - Differentiation operator
(A,B) - Scalar product of vectors A and B
yin(x) - Initial fitting function


Chapter 2
b, k, ξ - Scaling factors/parameters figuring in the fractal model
Φ(t), f(z) - Relaxation function, microscopic relaxation function
z - Dimensionless temporal variable
Rm(t) - Remnant function
Ack, Ask - Fitting coefficients of the log-periodic Fourier sequences

Chapter 3
Δ_N^(p) - Integer or fractional moment of the p-th order for a sequence of N data points
Mp - Relative moments counted off the first-order moment
G_N^(p), F_N^(p) - Generalized mean value function, normalized function of the moments
u + iv - Parameter defining the function of the complex moments
νk - Values of the power-law exponents
GMVp(s1, s2, ..., sK) - Generalized mean value function for k random sequences
0 < nrmj(y) < 1 - Normalized function for the interval [0,1]
mom_p = exp(Ln_p) - Exponential moments
GPCFp - Generalized Pearson correlation function of the p-th order
CCF, ICF - Complete correlation factor, internal correlation factor
ws, λs - Normalized statistical weights, set of exponential factors
E(k) - Relative fitting error

Chapter 4
ak - Various products of the given set (at a fixed value of k) of roots
R(±)(p) - Roots distributions for the positive and negative parts, respectively
R(p, θ) - Generalized Gaussian distribution including the fourth-order power-law exponents
Mj - Weierstrass-Mandelbrot function
Tgi, Bi - Tangent and Intercept for the segment of the straight line
Jb(x) - Beta-distribution function
A, B, α, β - The fitting parameters of the Beta-distribution function
b, ξ - Scaling parameters
w1,2 - Weighting factors (multipliers)
H, H′ - Heights of the initial and scaled Beta-distributions, respectively
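A minimal sketch of the "sequence of ranged amplitudes" that the beta-distribution function Jb(x) is fitted to in Chapter 4 (assuming, as the name suggests, that the SRA is simply the amplitudes sorted in ascending order over a normalized support):

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.standard_normal(2000)        # a trendless random sequence

# sequence of ranged amplitudes (assumed: the values sorted ascending)
sra = np.sort(y)
# normalized support [0, 1] on which the beta-distribution Jb(x) is fitted
u = np.linspace(0.0, 1.0, y.size)
```

Sorting turns an arbitrary trendless sequence into a monotone curve, which is what makes a universal fitting function such as the beta-distribution applicable.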


Chapter 5
Pr(t), F(t) - Periodic and quasi-periodic functions, respectively
P(λ) - Characteristic polynomial
Ack^(r), Ask^(r) - Decomposition coefficients figuring in the Prony sequence
f(r + Σ_{i=1..3} n_i a_i) - Spatial translation function in the 3D-space
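For readers who want to experiment with the symbols above, here is a compact textbook variant of Prony's method (a generic sketch; the book's detection algorithm builds on, but is not identical to, this classical scheme):

```python
import numpy as np

def prony_decomposition(y, p):
    """Fit y[n] ~ sum_k c_k * z_k**n with p exponential modes (classical Prony)."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    # 1) Linear prediction: y[n] = a_1*y[n-1] + ... + a_p*y[n-p] for n >= p
    A = np.column_stack([y[p - k: N - k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(A, y[p:], rcond=None)
    # 2) The modes z_k are the roots of the characteristic polynomial P(lambda)
    z = np.roots(np.concatenate(([1.0], -a)))
    # 3) Decomposition coefficients by ordinary least squares
    V = z[np.newaxis, :] ** np.arange(N)[:, np.newaxis]
    c, *_ = np.linalg.lstsq(V, y.astype(complex), rcond=None)
    return z, c

n = np.arange(50)
y = 2.0 * 0.9**n + 1.0 * (-0.5)**n       # two real exponential modes
z, c = prony_decomposition(y, 2)
```

Unlike the Fourier basis, the modes z_k are estimated from the data itself, which is why the Prony decomposition can describe damped and incommensurable periodicities.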

Chapter 6
⟨Tx⟩ - Arithmetic mean of the period along the OX axis
κl (l = 0,1,2,...,L−1) - A set of the roots of the characteristic polynomial
PR(x) - Conventional Prony function
Rsl = Δup + Δdn - Quality and accuracy factor of the equipment
F_L(x) = Σ_{l=0..L−1} ⟨a_l(x)⟩ F_l(x), ⟨a_l(x)⟩ - Function describing the quasi-reproducible experiment, the mean values taking into account the influence of external factors
σ(x) - Functional dispersion
K_{s,l}(x) - Pair correlation functions
κl(x) - Functional roots of the characteristic polynomial
Sl_m (m = 1,2,...,M) - Slopes distribution (m: number of measurements)
Rt - Quality parameter for the performed experiment

Chapter 7
Ωk (k = 0,1,...,K−1) - Dispersion law for the given signal
w1, w2, ..., wp - Weighting factors determining contributions of each basic frequency to the generated frequencies
EΩk ≝ Ωk+1 - Shifting operator
Δα Ωk ≡ (E − α)Ωk - Generalized beating
Y_{1,k} = det[Ωk+2, Ωk; Ωk+3, Ωk+1], Y_{2,k} = det[Ωk+1, Ωk+2; Ωk+2, Ωk+3] - Invariant frequency combinations used for the testing of the desired dispersion law
χ(iω) = χ′(ω) − iχ″(ω) - Complex susceptibility
K(t) = iQ^(−1) Sp(e^(−βH)[A(t), B]) - Pair correlation function used in statistical mechanics, with H the Hamiltonian and Q the partition function
σαβ(iω), α, β = x,y,z - Complex conductivity tensor
vq (q = 1,2,...,Q) - Set of input variables


Chapter 8
L_k^(2,4)(x, y) - Quadratic and fourth-order forms for an arbitrary point M(x,y) located in the 2D-plane, respectively
A2,4, B2,4, C2,4 - Coefficients of the corresponding forms
K2,4(X, Y) - Desired quadratic and fourth-order forms obtained after the application of the averaging procedure
σB,C, SA,B,C - Coefficients entering in the final forms
Inv - Value of the corresponding invariant
V, U - Voltage
PC1,2,3 - Principal components
T_j^(a) - Score matrix
P^T - Load-transposed matrix
E_{i,j} - Error matrix
M = USP^T, S(σ1 σ2 ... σNr) - Presentation of the initial matrix M in the PCA, with σ a set of eigen-values of the matrix M belonging to the diagonal matrix S
Jl(z) - Current distribution in the l-th percolation cluster, with z = U/U0 the sizeless potential
(ξ1 ξ2 ... ξn)^(1/n) - Geometric mean of the scaling factors
bl(z) - Log-periodic scaling factor
Dl(z) - Fractal dimension depending on the applied potential z
σ(z) - Functional dispersion

Chapter 9
p1 = Rg(Dy) - Range of the given sequence Dy
p2 = Rg(|Dy|) - Relative contribution of the given amplitudes
p3 = ΔN = N(x+) − N(x−) - Number of amplitudes located on the opposite sides of the trendless sequence
p4 = 2(N(x+) + N(x−))/N(roots) - Number of data points corresponding approximately to one oscillation
p5 = ΔS = S(y+) + S(y−), S(y±) = Σ_{j=1..N±} y(±)_j - Total contribution of all amplitudes corresponding to positive/negative amplitudes
p6 = Ymx - Maximal value of the beta-distribution corresponding to the given sequence
p7 = xmx - Location of the extreme point of the beta-distribution
p8 = Δx = (1/2)(x0 + xN) − xmx - Measure of the vertical asymmetry
p9 = α/β, r = (xmx − x0)/(xN − xmx) - Measure of asymmetry relative to the extreme point
Rt_s(p) = |Md^(s)(p)/Mn^(s)(p) − 1| - Ratio of the reduced matrices
Md^(s)(p), Mn^(s)(p) - Reduced matrices depending on parameter p
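A hypothetical numerical reading of a few of the parameters above for a trendless sequence (the sign conventions for counting positive/negative amplitudes are assumed for illustration, not taken from the book):

```python
import numpy as np

rng = np.random.default_rng(2)
Dy = rng.standard_normal(1000)               # a trendless sequence Dy

p1 = float(Dy.max() - Dy.min())              # p1 = Rg(Dy): range of the sequence
Np = int(np.sum(Dy > 0))                     # amplitudes above the trendless level
Nm = int(np.sum(Dy < 0))                     # amplitudes below it
p3 = Np - Nm                                 # p3 = N(x+) - N(x-)
p5 = float(Dy[Dy > 0].sum() + Dy[Dy < 0].sum())   # p5 = S(y+) + S(y-)
```

A handful of such scalars, computed identically for every measurement, is what turns an N × M data matrix into the compact n × n comparison matrix mentioned in the Preface.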

Contents

1  The Eigen-Coordinates Method: Reduction of Non-linear Fitting Problems . . . 1
   1.1  Classical Approach: Formulation of One-Dimensional Regression Problems . . . 1
        1.1.1  Initial Data . . . 2
        1.1.2  Regression Model . . . 2
        1.1.3  Basic Assumptions About Errors . . . 3
        1.1.4  Assumption About the Regression Function and Its Possible Recognition . . . 3
        1.1.5  Analysis of Remnants . . . 5
   1.2  Procedure of Optimal Linear Smoothing of Noisy Data . . . 6
        1.2.1  Possible Generalizations . . . 9
   1.3  Description of the Eigen-Coordinates Method . . . 11
        1.3.1  Determining the Basic Linear Relationship for Functions with Nonlinear Fitting Parameters . . . 12
        1.3.2  Using the Orthogonal Variables . . . 15
        1.3.3  Selection of the Most Suitable Hypothesis . . . 17
   1.4  Generalizations and Recommendations for the Eigen-Coordinates Method . . . 25
        1.4.1  Using a Priori Information . . . 27
        1.4.2  The Problem of Elimination of Depending Constants . . . 32
   1.5  Concluding Remarks . . . 41
   1.6  Questions for Self-Testing . . . 45
   1.7  Exercises . . . 45
   References . . . 47

2  The Eigen-Coordinates Method: Description of Blow-Like Signals . . . 49
   2.1  Introduction and Problem Formulation . . . 49
   2.2  The Reduced Fractal Model and its Realisation in the Fractal-Branched Structures . . . 53
   2.3  The Fitting Procedure of Functions Containing Log-Periodic Oscillations . . . 58
   2.4  Application of the Original Procedure to Real Data . . . 61
        2.4.1  Description of the Bronchial Asthma Disease . . . 62
        2.4.2  Description of Fragments of the Queen Bee Acoustic Signals . . . 63
        2.4.3  Description of Acoustic Data Recorded from Car Valves in the Idling Regime . . . 70
   2.5  Concluding Remarks on Fundamental Results and Open Questions . . . 72
   2.A  Appendix 1: Evaluation of the Product (2.4) for the Case ξb ≪ 1 . . . 75
   2.B  Appendix 2: Synthesis of the Eigen-Coordinates Method and the Basic Linear Relationship . . . 77
   2.C  Appendix 3: Computation of the Power-Law Exponent ν in L0(t) . . . 81
   2.D  Appendix 4: Basic Linear Relationship for Inoculating Log-Periodic Functions . . . 82
   2.E  Appendix 5: Reduction to Three Incident Points . . . 83
   2.F  Questions for Self-Testing . . . 83
   2.G  Exercises . . . 84
   References . . . 85

3  The Statistics of Fractional Moments and its Application for Quantitative Reading of Real Data . . . 87
   3.1  Introduction and Formulation of the Problem . . . 87
   3.2  Evaluation of Statistical Stability of Random Sequences by Higher Moments . . . 89
   3.3  The Approximate Expression for the Generalised Mean Value Function, Fractional and Complex Moments, and External Correlations . . . 93
   3.4  The Generalised Pearson Correlation Function, External and Internal Correlations, and Some Useful Inequalities . . . 99
   3.5  Applications of the Statistics of Fractional Moments . . . 106
        3.5.1  Statistical Protection of Valuable Documents . . . 106
        3.5.2  Detection of a Small Signal and Statistical Proximity . . . 111
        3.5.3  Real Data Processing . . . 115
        3.5.4  Dielectric Data and Calibration Curve . . . 115
        3.5.5  Fractional Exponential Reduction Moments Approach . . . 118
        3.5.6  Integration and Differentiation Pre-processing . . . 122
        3.5.7  Some Simulation Results . . . 123
   3.6  Concluding Remarks and Open Problems . . . 136
   3.7  Questions for Self-Testing . . . 137
   3.8  Exercises . . . 137
   References . . . 137

4  The Quantitative "Universal" Label and the Universal Distribution Function for Relative Fluctuations. Qualitative Description of Trendless Random Functions
   4.1  Introduction and Formulation of the Problem
   4.2  The Universal Distribution of Stable Points
   4.3  Representation of Data from Complex Systems Without an Explicit Model by Means of the Generalised Gaussian Distribution
        4.3.1  Detection of Few 'Strange' Points Located in a Narrow Interval
        4.3.2  The Presence of Random Points That Disturb the Whole Interval l
   4.4  Calculation of the Quantitative Universal Label for Real Data
        4.4.1  Similar Random Sequences Without Trend (Triple-Correlations of the Transcendental Numbers π and e)
        4.4.2  Random Sequences with a Trend: The 'Forex' Currency Market Data
        4.4.3  Quantitative Classification of Earthquakes
        4.4.4  Coding Information with the Help of Quantum Dots
   4.5  Basic Results and Open Problems
   4.6  The Universal Distribution Function for Relative Fluctuations and Application of Beta-Distribution for Quantitative Analysis of Long-Time Series
        4.6.1  Introduction and Formulation of the Problem
        4.6.2  Scaling Properties of the Beta-Distribution and Description of the Treatment Procedure
        4.6.3  Treatment of Long-Times Membrane Current Series
4.6.4 Clusterization of Final Parameters Based on the Generalised Pearson Correlation Function . . . . . . . . . . . . . 4.6.5 Results and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.7 Questions for Self-Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Description of Partly Correlated Random Sequences: Replacement of Random Sequences by the Generalised Prony Spectrum . . . . . . . 5.1 Introduction and Formulation of the Problem . . . . . . . . . . . . . . . 5.2 Description of a Quasi-Periodic Process in Terms of the Prony’s Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3 Description of the General Detection Algorithm . . . . . . . . . . . . . 5.4 Detection of Quasi-Periodic Processes from Real Data . . . . . . . . 5.4.1 Detection from Raman Spectra Recorded for Pure Water . 5.4.2 Detection from Random Geophysical Acoustic Signals . . 5.5 Results and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

141 141 144

148 149 153 159 159 165 169 177 180

182 182 185 195 198 203 204 204 205

. 207 . 207 . . . . . .

209 212 216 217 223 226

xvi

Contents

5.A

Appendix: Generalization of the Model for Quasi-Periodic Processes to Consider Incommensurable Periods . . . . . . . . . . . . . 230 5.B Questions for Self-Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232 6

7

The General Theory of Reproducible and Quasi-Reproducible Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.1 Introduction and Formulation of the Problem: The Reproducible Experiments and their Description . . . . . . . . . . . . . . . . . . . . . . . 6.2 The Basis of the General Theory of Reproducible Experiments. The Physical Meaning of the Prony Decomposition . . . . . . . . . . 6.3 Description of the General Algorithm and Its Testing on Available Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3.1 The Raman Spectra of Distilled Water . . . . . . . . . . . . . . 6.3.2 Two Wireless Sensor Nodes Exchanging Packets in a Noisy Wireless Channel . . . . . . . . . . . . . . . . . . . . . . 6.4 Generalisation of Results for Quasi-Reproducible (Non-stationary) Measurements . . . . . . . . . . . . . . . . . . . . . . . . . 6.4.1 Self-Consistent Solutions of the Functional Equation (6.30) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.4.2 The Clusterization Procedure and Reduction to an “Ideal Experiment” . . . . . . . . . . . . . . . . . . . . . . . . 6.5 Validation of the General Theory on Experimental Data . . . . . . . 6.6 Final Results and Further Perspectives . . . . . . . . . . . . . . . . . . . . 6.7 Questions for Self-Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . The Non-orthogonal Amplitude Frequency Analysis of Smoothed Signals Approach and Its Application for Describing Multi-Frequency Signals . . . . . . . . . . . . . . . . . . . . . 7.1 Describing Complex Signals with Beatings . . . . . . . . . . . . . . . . 7.2 Basics of the Non-orthogonal Amplitude Frequency Analysis of Smoothed Signals (NAFASS) Approach: Evaluation of the Initial Multi-Frequency Spectrum . . . . . . . . . . . . . . . . . . . . . . . 7.3 The Fitting of the Optimal Spectrum . . 
. . . . . . . . . . . . . . . . . . . 7.3.1 The Fitting Function for the Integer Case . . . . . . . . . . . . 7.3.2 The Fitting Function for the Fractional Case . . . . . . . . . . 7.4 Application and Verification of the NAFASS Approach by Available Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.4.1 Application to Economic Data . . . . . . . . . . . . . . . . . . . . 7.4.2 Application to the Noise Created by Transcendental Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.5 New Type of Fluctuation Spectroscopy Based on the NAFASS Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. 235 . 235 . 236 . 244 . 246 . 252 . 257 . 260 . . . . .

264 269 280 285 286

. 289 . 289

. . . .

293 296 296 298

. 301 . 301 . 306 . 315

Contents

xvii

7.6 7.7

. 323

The NAFASS Approach and Chaos . . . . . . . . . . . . . . . . . . . . . . Concluding Remarks and the Basic Principles of Fluctuation Metrology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.A Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.B Questions for Self-Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

9

. . . .

334 337 339 340

Applications of NIMRAD in Electrochemistry . . . . . . . . . . . . . . . . . 8.1 Application of the Discrete Geometrical Invariants for the Analysis of Solute Background . . . . . . . . . . . . . . . . . . . . . . . . . . 8.1.1 Formulation of the Problem . . . . . . . . . . . . . . . . . . . . . . . 8.1.2 Description of the Method . . . . . . . . . . . . . . . . . . . . . . . . 8.1.3 An Application of the Method to Electrochemistry . . . . . . . 8.1.4 Results and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.2 The Generalized Principal Component Analysis and Its Application in Electrochemistry . . . . . . . . . . . . . . . . . . . . 8.2.1 Formulation of the Problem . . . . . . . . . . . . . . . . . . . . . . . 8.2.2 Experimental Set-Up and Preliminary Data Analysis . . . . . 8.2.3 The Mathematical Section of the PCA and the Modified F-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.2.4 Application of the Modified Platform to Real Data . . . . . . . 8.2.5 Results and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.3 The Fractal Theory of Percolation and its Application in Electrochemistry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.3.1 Formulation of the Problem . . . . . . . . . . . . . . . . . . . . . . . 8.3.2 Foundations of the General Theory of Percolation Currents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.3.3 The Experimental Set-Up . . . . . . . . . . . . . . . . . . . . . . . . . 8.3.4 Description of the Algorithm . . . . . . . . . . . . . . . . . . . . . . 8.A Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.B Questions for Self-Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

343

Reduction of Trendless Sequences of Data by Universal Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.1 Compact Universal Parameters to Reduce Initial Data . . . . . . . . . 9.2 The “Struggle” Principle and Justification of the Chosen Set . . . . 9.3 Verification by Model Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.4 Finding Quantitative Differences when the Input Is in a Descriptive Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.5 Questions for Self-Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . .

344 344 345 352 357 359 359 363 370 375 380 382 382 384 390 391 401 404 405 409 409 411 415

. 416 . 428 . 428

Epilogue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433

Abbreviations

ACF         Auto-Correlation Function
AFR         Amplitude-Frequency Response
AUC         Area Under the Curve
AVC         Advanced Video Coding
BLC         Bell-Like Curve
BLR         Basic Linear Relationship
BLS         Blow-Like Signal
CCF         Complete Correlation Factor
CFFT        Complex Fast Fourier Transform
DEL         Double Electric Layer
DFA         Detrended Fluctuation Analysis
DGI         Discrete Geometrical Invariant
ECs         Eigen-Coordinates
FB          Frictionless Bearing
FERMA       Fractional Exponential Reduction Moments Approach
FFT         Fast Fourier Transform
FLSM        Functional Least Squares Method
FNS         Fluctuation-Noise Spectroscopy
FPS         Frames per Second
FT          Fourier Transform
GCE         Glassy Carbon Electrode
GGD         Generalized Gaussian Distribution
GMV         Generalized Mean Value
GMVF        Generalized Mean Value Function
GoP         Group of Pictures
GPCA        Generalized Principal Component Analysis
GPCF        Generalized Pearson Correlation Function
GPS         Generalized Prony Spectrum
GTE         General Theory of Experiment
HF          High-Frequency
ICF         Internal Correlation Factor
IM          Intermediate Model
LLSM        Linear Least Square Method
LRD         Long Range Dependence
MDN         Mercaptophenyl Diazonium Nanofilms
MV          Multi-View
MVC         Multi-View Video Coding
NAFASS      Non-Orthogonal Amplitude Frequency Analysis of the Smoothed Signals
NIMRAD      Non-Invasive Methods of Reduced Analysis of Data
OMA         One-Mode Approximation
PCA         Principal Component Analysis
PCC         Pearson Correlation Coefficient
PEN         Pseudo-Ergodic Noise
POLS        Procedure of Optimal Linear Smoothing
PSCV        Principle of the Strongly Correlated Variables
QM          Quantum Mechanics
QoS         Quality of Service
QP          Quantization Parameter
QP-Process  Quasi-Periodic Process
QT-Property Quasi-Translational Property
QUL         Quantitative Universal Label
REMV        Reduced Experiment to its Mean Value(s)
RFM         Reduced Fractal Model
RS          Raman Spectrum (Spectra)
SFM         Statistics of Fractional Moments
SRA         Sequence of the Ranged Amplitudes
SS-Process  Self-Similar Process
STFT        Short-Time Fourier Transform
SVD         Singular Values Decomposition
TLS         Trendless Sequence
Trps        Tryptophans
UDFRF       Universal Distribution Function for Relative Fluctuations
VAG         Voltammogram
VBR         Variable-Bit-Rate
WT          Wavelet Transform

Chapter 1

The Eigen-Coordinates Method: Reduction of Non-linear Fitting Problems

Abstract This chapter provides essential information about the fitting procedure. The conventional linear least squares method (LLSM), proposed by C. F. Gauss at the end of the eighteenth century and later refined by other mathematicians, is very efficient when the fitting coefficients enter linearly. Nonlinear fitting, on the contrary, represents a serious problem that at present has no general solution, because the search for the global minimum in the space of the fitting parameters remains unsolved. The eigen-coordinates method introduced here is based on the following result: for many functions with initially nonlinear fitting parameters, the differential equations they satisfy possess a new set of linear parameters. In this case, one can therefore derive a basic linear relationship and reduce the fitting problem to an application of the LLSM. In this chapter, the reader can see how to fit combinations of exponential, power-law, and other functions. Some exercises help to evaluate this innovation and to use it in various research problems. Besides, the chapter explains the procedure of optimal linear smoothing (POLS), which helps to smooth noisy data.

Keywords Linear least-squares method (LLSM) · Eigen-coordinates method (ECs) · Nonlinear fitting method · Procedure of the optimal linear smoothing (POLS)

1.1 Classical Approach: Formulation of One-Dimensional Regression Problems

Any scientist or engineer with experimental data at their disposal faces the necessity of selecting a function that is "the most suitable" to fit the given (initial) data, and of deriving a model of the process that originated the data.

© Springer Nature Switzerland AG 2020 R. R. Nigmatullin et al., New Digital Signal Processing Methods, https://doi.org/10.1007/978-3-030-45359-6_1


1.1.1 Initial Data

In fact, there are three basic problems related to the initial data:

1. All data are available only in a limited interval of the measured variables.
2. The data set is a random sample, which is always accompanied by a measurement error and by the limitations of the apparatus function of the measurement device.
3. The data set can be fitted, within the limits of an admissible error variance, by a certain set of functions containing some number of fitting parameters.

Figure 1.1 represents a typical situation. Initially, a random sequence (x_j, y_j) (j = 1, 2, ..., N) is given, where the set x = {x_1, ..., x_N} is considered as the independent (input) variable, usually determined as a set of independent factors, and the set y = {y_1, ..., y_N} depends on the given set of experimental factors and determines the response.

1.1.2 Regression Model

Usually, it is assumed that the measured response values are divided into two parts: (1) the first, basic part of y depends strongly on x and is therefore determined by the deterministic function f(x); (2) the second part reflects the influence of other, uncontrollable factors and is therefore determined by a random function (error) ε(x) with respect to the given factor x. It holds:

Fig. 1.1 Realization of a random sampling of R(x) given in the limited interval of x and aggravated by a random error

y = f(x) + ε(x).    (1.1)

Usually, the random function is associated mainly with experimental error, i.e. it includes all possible errors that appear as a result of: (a) the experimental measurements; (b) the equipment used; and (c) the influence of different uncontrollable factors. Putting ε_j = ε(x_j), the general relationship can be written in the equivalent form

y_j = f(x_j) + ε_j,  j = 1, 2, ..., N.    (1.2)

1.1.3 Basic Assumptions About Errors

In the conventional models of regression analysis, it is not possible to separate the function f(x_j) and the random function ε_j. Nevertheless, an approximate separation is possible if one can replace the function f(x_j) by the regression function f(x_j, θ), where θ is the fitting vector belonging to the set of fitting parameters, θ ∈ Θ. Usually, the realization of the random function ε(x) in one set of experiments is assumed: (a) to be totally independent of its realization in another set of experiments (however, this supposition is not true in many real cases, see Chap. 7); (b) to follow a distribution function that remains the same (i.e. it keeps its statistical proximity during the whole process of measurements), which can be valid only in the case of stable experiments (see Chap. 7). These two assumptions are not mandatory, as shown below. It is possible to consider cases in which the error function keeps a systematic component and does not follow the normal or uniform distribution. Several examples from real applications given below confirm this point of view.

1.1.4 Assumption About the Regression Function and Its Possible Recognition

It is supposed that the set of admissible functions belongs to the parametric family f(x_j, θ). The fitting vector θ figuring inside the arguments should minimize (in a certain sense) the random function ε(x). So, the classical problem of one-dimensional regression analysis can be formulated as

y_j = f(x_j, θ) + ε_j,  j = 1, 2, ..., N.    (1.3)

It is necessary to restore the dependence between the set of y_j and the fitting function f(x_j, θ) (usually defined as a possible hypothesis) for certain values of the fitting vector θ. These components should minimize the error dispersion. Mathematically, this requirement can be expressed in the following simple form:

σ(θ) = min_θ Σ_{j=1}^{N} ε_j² = min_θ Σ_{j=1}^{N} [y_j − f(x_j, θ)]².    (1.4)
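When the hypothesis is linear in its parameters, the minimization (1.4) is solved in closed form by the LLSM. A minimal numpy sketch; the quadratic model, noise level, and parameter values are illustrative and not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
theta_true = np.array([1.0, -0.5, 0.2])                  # illustrative parameters

# Model linear in theta: f(x, theta) = theta0 + theta1*x + theta2*x**2
y = theta_true[0] + theta_true[1] * x + theta_true[2] * x ** 2
y_noisy = y + rng.normal(0.0, 0.1, x.size)               # add the error eps(x)

# Design matrix: columns are the known functions evaluated at each x_j
X = np.column_stack([np.ones_like(x), x, x ** 2])

# LLSM: minimize sum_j (y_j - f(x_j, theta))**2, i.e. expression (1.4)
theta_hat, *_ = np.linalg.lstsq(X, y_noisy, rcond=None)
sigma = np.sum((y_noisy - X @ theta_hat) ** 2)           # residual sum of squares
```

Internally, `lstsq` solves the normal equations via a singular value decomposition, which is numerically more stable than forming XᵀX explicitly.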

As shown below, the eigen-coordinates (ECs) method makes it possible to reduce the non-linear fitting problem to one solvable by the linear least-squares method (LLSM), in which the components of the fitting vector θ enter into (1.3) in a linear way:

f(x, θ) ≡ Y(x_j) = Σ_{k=1}^{s} C_k X_k(x_j).    (1.5)

Note that X_k is the k-th component of a set of functions (X_1(x_j), ..., X_s(x_j)) known a priori. For example, it can be a set of polynomials or another set of functions suitable for the description of the chosen Y(x_j). Throughout the following chapters, it is shown how to determine (1.5) as the basic linear relationship (BLR). In writing expression (1.5), all functions are assumed to be shifted relative to their mean values as follows:

Y(x_j) ⇒ Y(x_j) − (1/N) Σ_{j=1}^{N} Y(x_j) ≡ Y(x_j) − ⟨...⟩,
X_k(x_j) ⇒ X_k(x_j) − (1/N) Σ_{j=1}^{N} X_k(x_j) ≡ X_k(x_j) − ⟨...⟩.    (1.6)

This procedure is very useful and can be considered in many cases as obligatory, because it keeps the deviations of the transformed error function ε̃(x) = ε(x) − ⟨ε(x)⟩ near its mean value, which equals zero, i.e. ⟨ε̃(x)⟩ = 0. In other words, subtracting the mean value eliminates the systematic error of the chosen model. In the opposite case, when ⟨ε̃(x)⟩ ≠ 0, one expects an uncontrollable increase of the initial and systematic error ε(x) as a result of the integrations and other transformations applied to expressions (1.3), (1.4), and (1.5). Indeed, integrating the modified error (Jε̃(x)) provides another random function whose average value does not coincide with zero: ⟨Jε̃(x)⟩ ≠ 0. Here the fitting vector θ is represented by a finite linear combination of the independent constants C_k (k = 1, 2, ..., s), and the fitting function f(x_j, θ) is presented by a linear combination of the functions X_k(x_j). Besides the solution of a non-linear fitting problem, which in most cases reduces to a linear problem, it will be shown how to recognize/identify the proper hypothesis among at least two competitive ones; a reliable criterion to select the most suitable hypothesis will be formulated.

As is well known, in several cases the uncertainty in the selection of the proper hypothesis can be decreased by presenting some curves in the form of straight lines in suitably chosen coordinates. For example, the exponential hypothesis is recognized in a semilog scale and the power-law hypothesis in a double-log scale, but such a representation is not known, or is impossible, for many other cases. Table 1.1 shows the simplest functions that can be presented in the form of straight lines.

Table 1.1 Simple functions that admit presentation by the straight line Y = aX(x) + b

Recognized function               Presentation in the form of a straight line
y = A exp(λx)                     Y = ln(y), a = λ, X(x) = x, b = ln(A)
y = A x^λ                         Y = ln(y), a = λ, X(x) = ln(x), b = ln(A)
y = 1/(a f(x) + b)                Y = 1/y, X(x) = f(x)
y = 1/(A e^{λx} + 1)              Y = ln(1/y − 1), X(x) = x, a = λ, b = ln(A)
y = 1/(1 + ((x − x0)/w)²)         Y = sqrt(1/y − 1), X(x) = x, a = 1/w, b = −x0/w

The proper coordinates should be determined in such a way that presenting the initial data in these coordinates gives a curve close to a straight line. Definitely, the list of functions in Table 1.1 is not complete and can be extended. However, analysis of these functions leads to the conclusion that the list could be considerably increased if the fitting function f(x_j, θ) were presented in the form of the BLR (1.5). So, one can try to find another form of the initial fitting function (possible hypothesis) in order to express it in the equivalent form (1.5). Is it possible to realize this idea in practice? Figure 1.1 above represents a typical example. The associated data are measured in a limited interval of the variable x, contain measurement error, and can be fitted reasonably well either by a Gaussian or by some Lorentzian bell-like profile. However, a careful investigation shows that this curve is neither Gaussian nor Lorentzian. The best fit is realized with the help of the function R(x; A) = B x^μ exp(a1 x − a2 x²/2), with A = {a1, a2} and B = 1.55, μ = 0.6, a1 = 1.5, a2 = 0.3.
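The first row of Table 1.1 can be sketched as follows: the exponential hypothesis y = A exp(λx) becomes a straight line Y = ln(y) = ln(A) + λx, so the nonlinear parameters (A, λ) are recovered by the LLSM in the transformed coordinates. All numerical values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
A_true, lam_true = 2.0, -0.3                  # illustrative values
x = np.linspace(0.1, 5.0, 100)
# Multiplicative noise keeps y > 0, so the logarithm below stays defined
y = A_true * np.exp(lam_true * x) * np.exp(rng.normal(0.0, 0.01, x.size))

# Table 1.1, row 1: Y = ln(y) = ln(A) + lam*x, a straight line Y = a*X(x) + b
Y = np.log(y)
a, b = np.polyfit(x, Y, 1)                    # LLSM in the transformed coordinates
lam_hat, A_hat = a, np.exp(b)
```

The same pattern applies to the other rows of the table: transform y so that the unknown nonlinear parameters appear only in the slope a and the intercept b.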

1.1.5 Analysis of Remnants

After the final fitting with the chosen hypothesis is realized, it is useful to analyze the error function

err(x_j) ≡ r(x_j) = y_j − f(x_j, θ),    (1.7)

which is usually defined as the remnant function (or the function of remnants). If the chosen hypothesis coincides with the true one, then the remnant function (1.7) should behave as a random function equally distributed in the vicinity of zero, without a clearly visible trend. If there is some trend, it can prompt some unexpected dependence that should be taken into account in the initial fitting procedure. Special attention should be paid to outliers. They can distort the values of the fitting parameters considerably; hence two problems appear:

(a) The elimination of the outliers.
(b) The smoothing of the initial data to decrease the values of the outliers and of the relative error as a whole (the solution of this problem by the procedure of optimal linear smoothing is considered in Sect. 1.2).

In spite of the fact that the LLSM and its possible generalizations are considered in many books (e.g. [1–5]), some interesting problems remain unsolved even in one-dimensional regression tasks. In this book, as a contribution to modern mathematical statistics, the ECs method is considered in order to reduce the problem of fitting a wide class of non-linear functions (when the fitting vector θ enters f(x_j, θ) in a non-linear way) to a problem solved by the well-known LLSM.
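The remnant analysis can be sketched numerically: after a fit, the remnants (1.7) should have mean close to zero and no residual trend; a simple check is the mean and the slope of a straight line fitted through them. The data and the linear hypothesis below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 300)
y = 1.0 + 0.5 * x + rng.normal(0.0, 0.05, x.size)   # illustrative data

# Fit the (correct) linear hypothesis and form the remnant function (1.7)
c1, c0 = np.polyfit(x, y, 1)
r = y - (c0 + c1 * x)

mean_r = np.mean(r)               # ~0: the LLSM removes the systematic component
slope_r, _ = np.polyfit(x, r, 1)  # ~0: no hidden trend left in the remnants
```

A clearly non-zero slope or a structured pattern in r would signal that the chosen hypothesis misses a dependence present in the data.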

1.2 Procedure of Optimal Linear Smoothing of Noisy Data

Any real data contain errors expressed in the form of a random error function ε(x). If the random function accepts large values, then it is necessary to decrease them. More specifically, if there is an approach that enables the procedure of data smoothing, then one may increase the quality of the regression procedure and, more importantly, improve the selection of the proper hypothesis. In this section, the procedure of optimal linear smoothing (POLS) is suggested for decreasing the values of the initial random function. For a better understanding of the basic elements of the smoothing procedure, it is useful to consider a model based on mimic (artificially created) data. Consider two functions defined in the interval [x_mn, x_mx] on the given number of discrete points x_j = x_mn + (j/N)(x_mx − x_mn), with j = 0, 1, ..., N:

y_{1,2}(x_j) = A0 + A1 (x_j)^{ν1} + A2 (x_j)^{ν2} [cos(Ω·ln(x_j)) + sin(Ω·ln(x_j))].    (1.8)

The parameters figuring in (1.8) accept the following values, respectively: A0 = {1, 2}; A1 = {4, 6}; A2 = {6, 4}; ν1 = {0.9, 1.0}; ν2 = {1.0, 0.9}; Ω = {3.0, 2.0}. Then these functions are randomized by means of the relationship

nin_{1,2}(x_j) = y(x_j) + (−Δ + 2Δ·Pr_{1,2}(x_j))·max(y),  Δ = 0.5.    (1.9)

Here Δ determines the value of the deviation relative to the maximal value max(y). This parameter is set equal to 50% in order to create large random deviations relative to the chosen model function (1.8). The expressions Pr_{1,2}(x_j) in (1.9) determine two different random functions, which generate random numbers in the given interval [0, 1]. They are chosen as a linear combination of uniform, exponential and normal random functions (in equal proportions), in order to obtain a final random function that does not follow the conventionally accepted normal distribution. The plots of the functions expressed by (1.8) are given in Fig. 1.2a. Then, the POLS is applied as the first and fundamental element of the new treatment approach. The essence of the POLS was described in the papers [6, 7], but it is instructive to repeat here the basic elements. To smooth the initial noisy data (1.9), a linear procedure based on convolution with the Gaussian kernel is used. The employed expression is written as

ỹ_i = Gsm(x, nin, w) = [Σ_{j=1}^{N} K((x_i − x_j)/w)·nin_j] / [Σ_{j=1}^{N} K((x_i − x_j)/w)],  K(t) = exp(−t²).    (1.10)

Here, K(t) defines the Gaussian kernel and w the fixed width of the smoothing window. The set nin_j (j = 1, 2, ..., N) defines the initial noisy random sequence. Although many smoothing functions are embedded in numerical codes, the chosen function has two important features: (a) the smoothed function (1.10) results from a linear transformation and hence does not introduce an uncontrollable error; (b) w is an adjustable (fitting) parameter and can accept any value. (As an exercise, the reader is invited to find the limiting values of the expression at small (w → 0) and large (w ≫ 1) values of the smoothing window.) In a certain sense, the smoothing function is a pseudo-fitting function, which is not directly associated with a specific model describing the considered process. The value of the optimal window w_opt is chosen from the conditions

Δn_j = Gsm(x, ỹ_j, w0) − Gsm(x, ỹ_j, w),  w0 < w,
min RelErr(w),  RelErr(w) = [stdev(Δn(w)) / mean(|ỹ(w)|)] · 100%.    (1.11)

This iterative procedure automatically decreases the value of the initial fluctuations and helps to find w_opt by minimizing the relative error RelErr(w) in the vicinity of its first local minimum. In many model calculations realized with random sequences having clear or hidden trends, w_opt does exist and helps to find the optimal smoothed curve (trend) describing the large-scale fluctuations.
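The POLS of (1.10)–(1.11) can be sketched directly: the kernel width w is scanned over a grid and the window minimizing RelErr(w) is taken. The choice w0 = w/2, the window grid, and the model trend below are illustrative assumptions; the book only requires w0 < w:

```python
import numpy as np

def gsm(x, nin, w):
    """Gaussian-kernel linear smoothing, expression (1.10)."""
    d = (x[:, None] - x[None, :]) / w
    K = np.exp(-d ** 2)
    return (K @ nin) / K.sum(axis=1)

def rel_err(x, nin, w):
    """Relative error of expression (1.11); w0 = w/2 is an illustrative choice."""
    ysm = gsm(x, nin, w)
    dn = gsm(x, ysm, 0.5 * w) - gsm(x, ysm, w)
    return 100.0 * np.std(dn) / np.mean(np.abs(ysm))

rng = np.random.default_rng(3)
x = np.linspace(0.1, 20.0, 400)
y = 1.0 + 4.0 * x ** 0.9                                      # smooth "ideal" trend
nin = y + 0.5 * np.max(y) * (2.0 * rng.random(x.size) - 1.0)  # +-50% noise, as in (1.9)

ws = np.linspace(0.1, 2.0, 20)                 # trial windows (illustrative grid)
errs = np.array([rel_err(x, nin, w) for w in ws])
w_opt = ws[np.argmin(errs)]                    # window minimizing RelErr(w)
trend = gsm(x, nin, w_opt)
```

In practice, as the text notes, one should look for the first local minimum of RelErr(w) rather than the global one; the simple argmin above is enough for this smooth model example.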

Fig. 1.2 (a) The functions y1,2(x), given by expressions (1.8) and shown by solid (green and pink) lines, are strongly influenced by a "noise" that describes external random factors; the strong influence of the fluctuations generated by expression (1.9) is shown by the scattered points. (b) Creation of an artificial trend by integration. The initial random sequences (right panel) do not have a trend; a possible trend coincides with the positive part of the X-axis. Integration by expression (1.12) provides curves with clearly expressed trends (central panel). The solid lines show the smoothed curves obtained by the POLS; the optimal window is w_opt ≅ 0.5


It is also remarked, from the analysis of model data, that if the initial random sequence rnd_j (j = 1, 2, ..., N) does not have a trend, then the first local minimum is absent. In such a case, it is preferable to create a trend artificially by trapezoidal integration of the initial sequence through the recurrence

Jnin_j = Jnin_{j−1} + 0.5·(x_j − x_{j−1})·(rnd_{j−1} + rnd_j).    (1.12)

After integration, an initial random sequence rnd_j that oscillates around the X-axis receives a trend, and the integrated sequence Jnin_j has fewer deviations than the initial sequence. Figure 1.2b shows this phenomenon, which is interesting for applications. The minimal values of the functions RelErr(w) for the model example are shown in Fig. 1.3a. The optimal trend is found as

ỹ(x) = Gsm(x, nin, w̃),  w̃ ≈ w_opt.    (1.13)

The optimal trends determined by the POLS for both functions (1.9) are shown in Fig. 1.3b. However, how can one prove that these optimal trends are the most adequate in comparison with other smoothed curves? One can suggest a criterion proving that the trend found by expressions (1.10), (1.11), and (1.12) is optimal, or at least close to the optimal one. Integration of the initial random sequences nin_{1,2}(x) with respect to the argument x provides the less "noisy" functions Jnin_{1,2}(x). There is evidence that any integrated random sequence essentially decreases the range of the high-frequency fluctuations in comparison with the initial one. If, at the same time, the optimal trend (1.13) is integrated, then the integrated trends Jỹ(x) are expected to serve as pseudo-fitting functions for the functions Jnin_{1,2}(x) and should be close to the integrated ideal functions (1.8) that determine the perfect trend. This observation has been completely confirmed on many real and mimic data, and Fig. 1.3c confirms it: the integrated optimal trends Jỹ_{1,2}(x) can serve as the pseudo-fitting functions for Jnin_{1,2}(x) and are very close to the integrated ideal functions Jy_{1,2}(x) determined by expression (1.8).
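The trend-creation step (1.12) is a cumulative trapezoidal integration; a short sketch (the trendless input sequence is illustrative) shows that the integrated sequence varies far more gently from point to point than the initial one:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 20.0, 500)
rnd = rng.normal(0.0, 1.0, x.size)   # trendless sequence oscillating around the X-axis

# Recurrence (1.12): Jnin_j = Jnin_{j-1} + 0.5*(x_j - x_{j-1})*(rnd_{j-1} + rnd_j)
jnin = np.concatenate(([0.0],
                       np.cumsum(0.5 * np.diff(x) * (rnd[:-1] + rnd[1:]))))
```

The same computation is available as `scipy.integrate.cumulative_trapezoid`; the explicit recurrence is kept here to mirror (1.12).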

1.2.1 Possible Generalizations

Further tests of the general expression (1.10) show that smoothing by the Gaussian kernel is close to being optimal. Taking the more general expression

ỹ_i = Gsm(x, nin, w) = [Σ_{j=1}^{N} K_θ(|x_i − x_j|/w)·nin_j] / [Σ_{j=1}^{N} K_θ(|x_i − x_j|/w)],  K_θ(t) = exp(−|t|^θ),    (1.14)


Fig. 1.3 (a) Application of the POLS to the curves defined by (1.11). The optimal smoothing windows (w1 = 0.4723, w2 = 0.6131), which minimize the relative errors and are located approximately near the first local minimum, are shown by arrows. (b) Application of the POLS to curves "one" (up) and "two" (down) from Fig. 1.2a. The smoothed curves (black solid lines) are close to the ideal curves given by expression (1.8) (violet solid lines) but do not coincide with them; the reason is that the remaining strong fluctuations "swing" the initial curves non-uniformly in the up-down directions. It should also be taken into account that in real situations the ideal curve remains unknown. (c) Additional verification of the POLS based on the integrated curves. Integration of the strongly fluctuating curves and of the smoothed curves shows that the initially smoothed curves can be considered as pseudo-fitting functions

and allowing the additional kernel parameter θ to vary in the interval 0 < θ < 6, does not give essential benefits in comparison with (1.10). Therefore, in many practical cases, the simplified expression (1.10) remains optimal. It is interesting to note, however, that the "game" with other kernels remains open: an expression for the optimal kernel in the general case is not known. One would then need to formulate a criterion that helps to find the optimal kernel; such a formulation is not known and remains an open problem.
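A minimal sketch of the smoother (1.14) is given below; the function name, the demo signal, and the window value are illustrative assumptions, and θ = 2 reproduces the near-optimal Gaussian kernel:

```python
import numpy as np

def kernel_smooth(x, nin, w, theta=2.0):
    """Weighted-mean smoother of Eq. (1.14): each smoothed value is a mean of
    the noisy samples nin_j with weights K_theta(|x_i - x_j|/w) = exp(-t**theta)."""
    x = np.asarray(x, float)
    nin = np.asarray(nin, float)
    t = np.abs(x[:, None] - x[None, :]) / w   # |x_i - x_j| / w for all pairs
    K = np.exp(-t**theta)                     # kernel K_theta(t)
    return (K @ nin) / K.sum(axis=1)          # normalized weighted mean

# Illustrative demo: a smooth trend plus uniform noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 400)
ideal = np.sin(x) + 0.5 * x
noisy = ideal + rng.uniform(-0.5, 0.5, x.size)
smoothed = kernel_smooth(x, noisy, w=0.3)
```

Scanning w (and, if desired, θ) against the relative-error criterion of (1.11) reproduces the POLS search for the optimal window.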

1.3 Description of the Eigen-Coordinates Method

The ECs method reduces the nonlinear problem of fitting a chosen hypothesis, initially containing a finite set of nonlinear fitting parameters, to a basic linear relationship of the type (1.5). If this reduction is realized, then the LLSM can be applied. The basic idea and principles are illustrated first, and then the method is applied to problems described by a set of functions defined as in (1.8).

1.3.1 Determining the Basic Linear Relationship for Functions with Nonlinear Fitting Parameters

In which cases, and how, can one obtain the basic linear relationship for functions that initially contain a set of nonlinear fitting parameters? To give a positive answer, it is necessary to obtain the corresponding differential equation that is satisfied by the chosen hypothesis. If the unknown parameters {Ck (k = 1, 2, ..., s)} of the differential equation form a linear combination (this set of parameters can be related to the initial set {Al (l = 1, 2, ..., q)} in a nonlinear way), then the answer to the previous question is positive. In other cases, it can be negative. An example is useful to specify this statement. Consider the function R(x; A) = B x^μ exp(−a1 x − a2 x²/2). Taking the natural logarithm of both sides gives

$$\ln R = \ln B + \mu \ln(x) - a_1 x - a_2 \frac{x^2}{2}. \qquad (1.15)$$

Comparing the structure of (1.15) with (1.5) allows one to identify the desired BLR written relative to the function Y(x) = ln[R(x)]. However, this presentation is not acceptable, because it leads to strong distortions when the function R(x) takes small values close to zero: ln[R(x)] then has large negative values and the initial error is amplified. To avoid these large deviations, it is necessary to differentiate (1.15) and present the BLR in the equivalent form

$$x\,\frac{dR}{dx} = \mu R - a_1 x R - a_2 x^2 R. \qquad (1.16)$$

Expression (1.16) is preferable to (1.15) for further analysis. Nevertheless, it is still unacceptable as it stands, because any numerical differentiation of R(x; A), which is known only at the measured points, creates new uncontrollable errors. To overcome this drawback, the differential Eq. (1.16) must be transformed into a Volterra integral equation whose variable figures as the upper limit of integration. This simple procedure, realized numerically by the trapezoid method, decreases the error value (something that should always be kept in mind when transforming any hypothesis containing unknown errors!) or at least keeps the initial error within the same limits. Integration of (1.16) then yields the desired BLR of the following type:

$$Y(x) = \sum_{k=1}^{3} C_k X_k(x), \qquad (1.17)$$

where

$$
\begin{aligned}
Y(x) &= x\,R(x) - \int_{x_0}^{x} R(u)\,du - \langle \ldots \rangle, \\
X_1(x) &= \int_{x_0}^{x} R(u)\,du - \langle \ldots \rangle, & C_1 &= \mu, \\
X_2(x) &= \int_{x_0}^{x} u\,R(u)\,du - \langle \ldots \rangle, & C_2 &= -a_1, \\
X_3(x) &= \int_{x_0}^{x} u^2 R(u)\,du - \langle \ldots \rangle, & C_3 &= -a_2.
\end{aligned} \qquad (1.18)
$$

Here the value x0 corresponds to the initial point of the considered discrete data. As before, all possible constants are eliminated by subjecting expression (1.17) to the condition ⟨ε(x)⟩ = 0. Then the LLSM allows one to determine the unknown fitting parameters μ, a1, and a2 from expressions (1.18). The last unknown constant B is computed from the simple relationship

$$R(x) = B\,X_4(x), \qquad B = \frac{\left(R(x) \cdot X_4(x)\right)}{\left(X_4(x) \cdot X_4(x)\right)}, \qquad X_4(x) = x^{\mu} \exp\left(-a_1 x - a_2 x^2/2\right). \qquad (1.19)$$

Here and below the symbol (A·B) denotes the scalar product in the discrete space of the given number of measured points (j = 1, 2, ..., N):

$$(A \cdot B) = \sum_{j=1}^{N} A_j B_j. \qquad (1.20)$$
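The chain (1.16)-(1.20) can be sketched numerically. The example below works on mimic, noise-free data with illustrative parameter values, builds the eigen-coordinates by trapezoid integration, centers them so that ⟨ε⟩ = 0, and applies the LLSM; the sign convention C2 = −a1, C3 = −a2 is an assumption of this sketch (minus signs are easily lost in the printed formulas):

```python
import numpy as np

def cumtrapz0(f, x):
    """Cumulative trapezoid integral from x[0] (the Volterra-type integrals of (1.18))."""
    out = np.zeros_like(f)
    out[1:] = np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
    return out

# Mimic data for R(x) = B * x**mu * exp(-a1*x - a2*x**2/2), illustrative parameters.
B, mu, a1, a2 = 2.0, 0.7, 0.4, 0.1
x = np.linspace(0.5, 6.0, 2000)
R = B * x**mu * np.exp(-a1 * x - a2 * x**2 / 2)

# Eigen-coordinates of Eqs. (1.17)-(1.18), centered (the <...> operation).
Y = x * R - cumtrapz0(R, x)
X = np.column_stack([cumtrapz0(R, x), cumtrapz0(x * R, x), cumtrapz0(x**2 * R, x)])
Yc = Y - Y.mean()
Xc = X - X.mean(axis=0)

C, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)       # the LLSM step
mu_est, a1_est, a2_est = C[0], -C[1], -C[2]       # C1 = mu, C2 = -a1, C3 = -a2

# The last constant B from the projection (1.19).
X4 = x**mu_est * np.exp(-a1_est * x - a2_est * x**2 / 2)
B_est = (R @ X4) / (X4 @ X4)
```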

Therefore, it may be concluded that many functions initially containing an unknown set of fitting parameters satisfy differential equations in which the initial set of nonlinear parameters forms the desired linear combination. In this case, the ECs method reduces the nonlinear fitting problem to the classical problem solved by the well-known LLSM. Moreover, it opens a quite new possibility of calculating the fitting parameters of many statistical distributions by means of a simple and well-developed method. Another possibility is opened in fitting many special functions, which are normally expressed by infinite series and whose fitting to actual data generally represents a specific and difficult problem. For example, consider the differential equation of the second order

$$x(x-1)\frac{d^2 y}{dx^2} + (ax + b)\frac{dy}{dx} + c\,y = 0. \qquad (1.21)$$


At a = α + β + 1, b = −γ, c = αβ, this differential equation is satisfied by a linear combination of two hypergeometric functions [8] (γ takes non-integer values):

$$y(x) = A_1 F(\alpha, \beta, \gamma, x) + A_2\, x^{1-\gamma} F(\alpha - \gamma + 1, \beta - \gamma + 1, 2 - \gamma, x), \qquad (1.22)$$

where the function F(α, β, γ, x) is written in the form of the infinite series

$$F(\alpha, \beta, \gamma, x) = 1 + \sum_{k=1}^{\infty} \frac{\alpha(\alpha+1)\cdots(\alpha+k-1)\,\beta(\beta+1)\cdots(\beta+k-1)}{k!\,\gamma(\gamma+1)\cdots(\gamma+k-1)}\, x^k. \qquad (1.23)$$

As is known, the hypergeometric function is widely used in mathematical physics to describe different natural phenomena. If the fitting parameters α, β, γ are not known, then fitting actual data by the function y(x) in (1.22) is a nontrivial problem. However, it can easily be solved by integrating the differential Eq. (1.21) twice, so that the following BLR is obtained:

$$Y(x) = \sum_{k=1}^{4} C_k X_k(x), \qquad (1.24)$$

$$
\begin{aligned}
Y(x) &= x(x-1)\,y(x) + 2\int_{x_0}^{x} (x - 3u + 1)\,y(u)\,du - \langle \ldots \rangle, \\
X_1(x) &= \int_{x_0}^{x} (2u - x)\,y(u)\,du - \langle \ldots \rangle, & C_1 &= -a, \\
X_2(x) &= \int_{x_0}^{x} y(u)\,du - \langle \ldots \rangle, & C_2 &= -b, \\
X_3(x) &= \int_{x_0}^{x} (x - u)\,y(u)\,du - \langle \ldots \rangle, & C_3 &= -c, \\
X_4(x) &= x - \langle \ldots \rangle.
\end{aligned} \qquad (1.25)
$$

A recommended exercise for the reader is to reproduce the relationships (1.25). Here the constant C4, which includes the unknown value of the first derivative at the initial point x0, is not essential for the calculations and hence its value is not used further. The function y(u) represents the actual data fitted by the hypothesis (1.22). Given Eqs. (1.24) and (1.25), the unknown constants C1, C2 and C3 can be determined by means of the conventional LLSM. Then it is possible to find the desired values of the unknown parameters α, β and γ from the following equations:

$$\alpha, \beta = -\left(\frac{1 + C_1}{2}\right) \pm \sqrt{\left(\frac{1 + C_1}{2}\right)^2 + C_3}, \qquad \gamma = C_2. \qquad (1.26)$$

The unknown constants A1 and A2 are then obtained from (1.22) by the LLSM, because the other fitting constants entering into (1.22) are already known. Other examples of the application of the ECs method are considered below.
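The inversion step (1.26) can be checked with a simple round trip. The helper names below are illustrative, and the sign convention (C1 = −a, C2 = −b, C3 = −c, matching the double integration of (1.21)) is an assumption of this sketch:

```python
import math

def abc_from_parameters(alpha, beta, gamma):
    """Coefficients of Eq. (1.21): a = alpha + beta + 1, b = -gamma, c = alpha*beta."""
    return alpha + beta + 1.0, -gamma, alpha * beta

def parameters_from_blr(C1, C2, C3):
    """Invert Eq. (1.26): recover (alpha, beta) as an unordered pair, and gamma."""
    half = -(1.0 + C1) / 2.0                 # equals (alpha + beta) / 2
    disc = math.sqrt(half**2 + C3)           # equals |alpha - beta| / 2
    return half + disc, half - disc, C2

# Round trip for illustrative parameter values (gamma non-integer).
a, b, c = abc_from_parameters(0.3, 1.2, 0.5)
alpha, beta, gamma = parameters_from_blr(-a, -b, -c)
```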

1.3.2 Using the Orthogonal Variables

It is well known that in practical applications of the LLSM the determinant may take a value close to zero. When can this happen? All fitting constants Ck entering into the BLR (1.5) should be independent of each other; possible errors can distort the initial BLR and create a situation in which the determinant becomes close to zero. In these uncomfortable cases, the values of the fitting parameters Ck determined from (1.5) contain large errors and sometimes cannot be calculated at all. For these cases, one may suggest transforming the initial variables Xk(x) into another set of variables that are orthogonal to each other. The transformation makes the functions statistically independent of each other and helps to avoid zeros in the determinants that appear in the calculation of the fitting coefficients Cp. With Eq. (1.5) in mind, any BLR is presented in the form

$$Y(x_j) = \sum_{k=1}^{s} C_k X_k(x_j). \qquad (1.27)$$

The relationship (1.27) is reminiscent of the decomposition of a wave function Y(xj) over the finite set of eigenfunctions {Xk(xj)}k=1,2,...,s. By using the process of orthogonalization, one can choose a set of orthogonal functions {Ψk(xj)}k=1,2,...,s and represent the initial function Y(xj) as a linear combination of them:

$$Y(x_j) = \widetilde{C}_1 \Psi_1(x_j) + \ldots + \widetilde{C}_s \Psi_s(x_j). \qquad (1.28)$$

Here

$$\Psi_1(x_j) = X_1(x_j), \qquad \Psi_k(x_j) = X_k(x_j) - \sum_{p=1}^{k-1} \frac{(\Psi_p \cdot X_k)}{(\Psi_p \cdot \Psi_p)}\, \Psi_p(x_j), \qquad (1.29)$$

for k = 1, 2, ..., s. As before, the following notation in (1.29)


$$(\Psi_p \cdot X_k) = \sum_{j=1}^{N} \Psi_p(x_j)\, X_k(x_j) \qquad (1.30)$$

indicates a scalar product in the discrete space of dimension N, where N is the number of measured points. It is instructive to write down the first four functions obtained:

$$\Psi_1(x_j) = X_1(x_j), \qquad (1.31a)$$

$$\Psi_2(x_j) = X_2(x_j) - \frac{(\Psi_1 \cdot X_2)}{(\Psi_1 \cdot \Psi_1)}\,\Psi_1(x_j), \qquad (1.31b)$$

$$\Psi_3(x_j) = X_3(x_j) - \frac{(\Psi_1 \cdot X_3)}{(\Psi_1 \cdot \Psi_1)}\,\Psi_1(x_j) - \frac{(\Psi_2 \cdot X_3)}{(\Psi_2 \cdot \Psi_2)}\,\Psi_2(x_j), \qquad (1.31c)$$

$$\Psi_4(x_j) = X_4(x_j) - \frac{(\Psi_1 \cdot X_4)}{(\Psi_1 \cdot \Psi_1)}\,\Psi_1(x_j) - \frac{(\Psi_2 \cdot X_4)}{(\Psi_2 \cdot \Psi_2)}\,\Psi_2(x_j) - \frac{(\Psi_3 \cdot X_4)}{(\Psi_3 \cdot \Psi_3)}\,\Psi_3(x_j). \qquad (1.31d)$$

It is easy to see from (1.31) that the new set of functions {Ψk(xj)}k=1,2,...,s is orthogonal, i.e. it holds that

$$\left(\Psi_p, \Psi_q\right) = \left(\Psi_p, \Psi_p\right)\delta_{pq}. \qquad (1.32)$$

The initial set of constants Ck is found from the linear system of equations

$$\widetilde{C}_k = C_k + \sum_{p=k+1}^{s} \frac{(\Psi_k, X_p)}{(\Psi_k, \Psi_k)}\, C_p, \qquad \widetilde{C}_k = \frac{(Y, \Psi_k)}{(\Psi_k, \Psi_k)}, \qquad \widetilde{C}_s = C_s. \qquad (1.33)$$

The transformation matrix obtained from (1.33) has a triangular form

$$
T = \begin{pmatrix}
1 & \dfrac{(\Psi_1, X_2)}{(\Psi_1, \Psi_1)} & \cdots & \cdots & \dfrac{(\Psi_1, X_s)}{(\Psi_1, \Psi_1)} \\
0 & 1 & \dfrac{(\Psi_2, X_3)}{(\Psi_2, \Psi_2)} & \cdots & \dfrac{(\Psi_2, X_s)}{(\Psi_2, \Psi_2)} \\
\vdots & & \ddots & & \vdots \\
0 & \cdots & 0 & 1 & \dfrac{(\Psi_{s-1}, X_s)}{(\Psi_{s-1}, \Psi_{s-1})} \\
0 & \cdots & \cdots & 0 & 1
\end{pmatrix} \qquad (1.34)
$$

with det(T) = 1. Using the orthogonal functions {Ψk(xj)}k=1,2,...,s reduces the correlation matrix to the set of matrix elements located on the main diagonal and makes the whole procedure more stable against the influence of the initial error.
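The orthogonalization (1.29) and the recovery of the initial constants through the triangular system (1.33) can be sketched as follows; the regressors and coefficients of the demo are illustrative (nearly collinear monomials, a typical case where a direct normal-equation solution becomes unstable):

```python
import numpy as np

def gram_schmidt(X):
    """Eq. (1.29): Psi_1 = X_1, Psi_k = X_k - sum_p (Psi_p.X_k)/(Psi_p.Psi_p) Psi_p."""
    Psi = np.zeros_like(X, dtype=float)
    for k in range(X.shape[1]):
        v = X[:, k].astype(float).copy()
        for p in range(k):
            v -= (Psi[:, p] @ X[:, k]) / (Psi[:, p] @ Psi[:, p]) * Psi[:, p]
        Psi[:, k] = v
    return Psi

def fit_via_orthogonal(Y, X):
    """C-tilde_k = (Y, Psi_k)/(Psi_k, Psi_k), then back-substitution through
    the triangular relations (1.33) to recover the original C_k."""
    Psi = gram_schmidt(X)
    s = X.shape[1]
    Ct = np.array([(Y @ Psi[:, k]) / (Psi[:, k] @ Psi[:, k]) for k in range(s)])
    C = Ct.copy()                            # C_s = C-tilde_s
    for k in range(s - 2, -1, -1):
        for p in range(k + 1, s):
            C[k] -= (Psi[:, k] @ X[:, p]) / (Psi[:, k] @ Psi[:, k]) * C[p]
    return C, Psi

# Demo: exact linear combination of nearly collinear columns.
x = np.linspace(0.0, 1.0, 200)
X = np.column_stack([np.ones_like(x), x, x**2, x**3])
C_true = np.array([1.0, -2.0, 0.5, 3.0])
Y = X @ C_true
C_est, Psi = fit_via_orthogonal(Y, X)
```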

1.3.3 Selection of the Most Suitable Hypothesis

The purpose of the ECs method described in this section consists in the selection of the most suitable hypothesis from two alternative ones. Namely, besides the calculation of the desired set of constants Ck (k = 1, 2, ..., s), the method helps to select the proper hypothesis by verifying two BLRs, which are tuned for the selection and simultaneous verification of competitive hypotheses based on a finite set of measured points. The general idea is the following. Each verified function has its own differential equation. If the value of the initial error is rather small, then there is a one-to-one correspondence between the function and the differential equation that this partial solution should satisfy. So, a "strange" function ystr(x) presented in the eigen-coordinates belonging to a "native" function ynat(x) should exhibit an additional dependence of the constants Ck (k = 1, 2, ..., s) on the variable x. In other words, the set of straight lines

$$\widetilde{Y}(x) = C_k X_k(x), \qquad \widetilde{Y}(x) = Y(x) - \sum_{p\,(p \neq k)}^{s} C_p X_p(x), \qquad (1.35)$$

identifying the native function (Ck = const, with k = 1, 2, ..., s), is distorted when a strange function (one that does not satisfy the set of equalities (1.27) corresponding to the native function) is verified. This simple procedure, which uses additional information based on the properties of the corresponding differential equations, makes the method preferable to, and more adequate than, traditional methods when two alternative hypotheses are compared [2]. Consider a simple example that illustrates this observation. Two simple functions similar to each other are:

$$y_1(x) = \frac{A_1}{(1 + b_1 x)^{\theta}}, \qquad y_2(x) = \frac{A_2}{1 + b_2 x^{\nu}}. \qquad (1.36)$$

Simple manipulations easily lead to the BLRs for both functions. They can be written as follows:

$$
\begin{aligned}
Y_1(x) &= C_1^{(1)} X_1^{(1)}(x) + C_2^{(1)} X_2^{(1)}(x), \\
Y_1(x) &= y_1(x) - \langle \ldots \rangle, \\
X_1^{(1)} &= x\,y_1(x) - \langle \ldots \rangle, & C_1^{(1)} &= -b_1, \\
X_2^{(1)} &= \int_{x_0}^{x} y_1(u)\,du - \langle \ldots \rangle, & C_2^{(1)} &= b_1(1 - \theta).
\end{aligned} \qquad (1.37a)
$$


After calculation of the fitting parameters b1 and θ, the relations given below easily determine the last parameter A1:

$$y_1(x) = A_1 X_A(x), \qquad A_1 = \frac{\left(X_A(x) \cdot y_1(x)\right)}{\left(X_A(x) \cdot X_A(x)\right)}, \qquad X_A(x) = (1 + b_1 x)^{-\theta}. \qquad (1.37b)$$

Similarly, for the function y2(x), one can obtain another BLR:

$$
\begin{aligned}
Y_2(x) &= C_1^{(2)} X_1^{(2)}(x) + C_2^{(2)} X_2^{(2)}(x), \\
Y_2(x) &= x\,y_2(x) - \int_{x_0}^{x} y_2(u)\,du - \langle \ldots \rangle, \\
X_1^{(2)}(x) &= \int_{x_0}^{x} y_2(u)\,du - \langle \ldots \rangle, & C_1^{(2)} &= -\nu, \\
X_2^{(2)}(x) &= \int_{x_0}^{x} \big(y_2(u)\big)^2\,du - \langle \ldots \rangle, & C_2^{(2)} &= \frac{\nu}{A_2}.
\end{aligned} \qquad (1.38a)
$$

The unknown parameter b2 is found from the relationship

$$Y_b(x) = b_2 X_b(x), \qquad b_2 = \frac{\left(Y_b(x) \cdot X_b(x)\right)}{\left(X_b(x) \cdot X_b(x)\right)}, \qquad X_b(x) = x^{\nu}, \qquad Y_b(x) = \frac{A_2}{y_2(x)} - 1. \qquad (1.38b)$$

The plots of the functions (1.36), which are visually close to each other, are given in Fig. 1.4a. Is it possible to recognize the proper hypothesis and notice the distortions that can appear in the corresponding BLRs? The calculations show that, for a more reliable recognition of the native hypothesis, it is convenient to present the straight lines in the following form:

$$
\begin{aligned}
Y(x) - \frac{(Y(x) \cdot X_2(x))}{(X_2(x) \cdot X_2(x))}\, X_2(x) &= C_1 \left( X_1(x) - \frac{(X_1(x) \cdot X_2(x))}{(X_2(x) \cdot X_2(x))}\, X_2(x) \right), \\
Y(x) - \frac{(Y(x) \cdot X_1(x))}{(X_1(x) \cdot X_1(x))}\, X_1(x) &= C_2 \left( X_2(x) - \frac{(X_1(x) \cdot X_2(x))}{(X_1(x) \cdot X_1(x))}\, X_1(x) \right).
\end{aligned} \qquad (1.39)
$$
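The recognition procedure built on (1.36)-(1.39) can be tried on synthetic, noise-free data. The sketch below passes both functions of (1.36) through the BLR (1.37a)-(1.37b) of the first hypothesis; the grid, the parameter values, and the exact form of the relative error (assumed here in the spirit of (1.11)) are illustrative:

```python
import numpy as np

def cumtrapz0(f, x):
    """Cumulative trapezoid integral starting from 0 at x[0]."""
    out = np.zeros_like(f)
    out[1:] = np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
    return out

def relerr(y, fit):
    """Relative fitting error in percent (a common definition, assumed here)."""
    return 100.0 * np.sqrt(np.mean((y - fit)**2) / np.mean(y**2))

def fit_y1_hypothesis(x, y):
    """BLR (1.37a) for the hypothesis y1 = A1/(1 + b1*x)**theta:
    y = C1*(x*y) + C2*Int(y), with C1 = -b1 and C2 = b1*(1 - theta)."""
    Y = y - y.mean()
    X1, X2 = x * y, cumtrapz0(y, x)
    D = np.column_stack([X1 - X1.mean(), X2 - X2.mean()])
    (C1, C2), *_ = np.linalg.lstsq(D, Y, rcond=None)
    b1 = -C1
    theta = 1.0 - C2 / b1
    XA = (1.0 + b1 * x)**(-theta)
    A1 = (y @ XA) / (XA @ XA)                 # projection (1.37b)
    return b1, theta, A1, A1 * XA

x = np.linspace(0.05, 40.0, 800)
y_native = 10.0 / (1.0 + 0.5 * x)**0.7        # y1: native for this BLR
y_strange = 10.0 / (1.0 + 0.37 * x**0.84)     # y2: the "strange" function

b1, theta, A1, fit_n = fit_y1_hypothesis(x, y_native)
_, _, _, fit_s = fit_y1_hypothesis(x, y_strange)
err_native, err_strange = relerr(y_native, fit_n), relerr(y_strange, fit_s)
```

The rule "native function - native BLR" gives the smaller relative error (`err_native < err_strange`), which is the quantitative selection criterion described in the text.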

What happens if the two hypotheses y1(x) and y2(x) pass through the ECs tuned for the recognition of the first hypothesis? The result is presented in Fig. 1.4b.

[Figure 1.4: panels (a)-(e); plots omitted. Panel (a): y1(x), y2(x) vs. x. Panel (b): the fits yft1(x), yft2(x). Panels (c), (d): the straight lines Y1n(x), Y1s(x) vs. X1n(x), X1s(x) with the error distributions errn(x), errs(x); in panel (c) the "native" slope is −0.4975 and the "strange" slope is −0.3683. Panel (e): Y2n(x), Y2s(x) vs. X2n(x), X2s(x).]

Fig. 1.4 (a) Two different functions, y1(x) and y2(x) in (1.36), respectively marked by white points and black stars. The values of the parameters are: b1 = 0.5, A1 = 10, θ = 0.7; b2 = 0.37, A2 = 10, ν = 0.84. The value of the error does not exceed 2%. (b) The central figure shows the fitting of the "native" function to expressions (1.36). The figure shown on the right-hand side shows the results of fitting the "strange" function y2(x) to the same expressions (1.36). Visually, there are no principal differences between these two results. However, one quantitative result remains important at any level of the external error: the value of the relative error remains smaller if the chosen hypothesis follows the rule "native function – native BLR". For this case, the relative error equals 2.495%; for the second situation, "strange function – native BLR", where the "strange" function is fitted with the BLR tuned for recognition of the other hypothesis, the relative error is 3.352%. (c) This figure demonstrates the desired distortions that appear in the values of the constants when the native function corresponds to the native BLR (black points) and when the strange function y2(x) is passed through the same BLR (white points). The distortions are noticeable, also in the distribution of the absolute error: for the native function this distribution looks uniform, while in the opposite case the remnant function is clearly noticeable. (d) An increased value of the initial error (up to 3.5%) makes the recognition procedure more uncertain; however, an "experienced eye" can still note possible distortions. (e) This figure demonstrates the behavior of the distortions in the case of a large external error (3.5%). When the native function is verified against its native BLR, the distortions are concentrated near the middle of the distorted segment. When the correspondence of the strange function to the same BLR is verified, this observation is violated: distortions evoked by the presence of the strange function leave the region of distortions generated by the external error. In this figure, a specific "hockey stick" located on the left-hand side is clearly noticeable.

It is recalled here that the value of the relative error is defined by (1.11). Is it possible to detect the small differences between the two hypotheses in terms of the distortions that appear in the behaviour of the constants C1 and C2?


The verification of the second hypothesis y2(x), which becomes native for the BLR given by expressions (1.38), leads to the same results. The value of the relative error is decreased when the rule "native hypothesis – native BLR" is satisfied. In the opposite cases, the distortions in the behavior of the constants, which are expressed by Eq. (1.33), are noticeable. In the presence of the external error, the recognition procedure becomes more uncertain, but one basic feature is conserved: the distortions evoked by the external error keep the straight line in the middle of the given segment, while the distortions evoked by the verification of a strange function can leave the middle of the calculated segment.

At the end of this section, it is instructive to formulate some general observations that can be useful in selecting the proper hypothesis. Usually, the "true" hypothesis is unknown, and the "truth" of the chosen hypothesis is evaluated on the basis of preliminary information (e.g. some theoretical results justifying the selection) and of the results of the fitting procedure. The ECs method gives additional information about the adequacy of the selected hypothesis. Some observations are in order and can be formulated as useful recommendations.

Recommendation 1 It is suggested to work with "clean" data containing a large number of measured points. If the values of the external error are rather high, as in Fig. 1.2a, then it is useful to apply the POLS for smoothing and cleaning the initial data.

Recommendation 2 If possible, for each selected hypothesis, it is necessary to calculate its BLR, preliminarily verified on mimic data.

Recommendation 3 If the calculation of the BLR is not possible, then the smallest value of the relative error, see (1.11), can serve as a quantitative criterion in the selection of the proper hypothesis.

Recommendation 4 The specific behaviour of the constants C1 and C2 and their distortions calculated by expressions (1.39) gives additional information for the selection of the true hypothesis.

In conclusion, go back to the example presented by the functions (1.8) and (1.9). After smoothing these functions, it is necessary to have the BLR for the recognition of a function having at least six fitting parameters. Following the general recommendations, it is necessary to obtain the differential equation satisfied by functions of the form (1.8). It is easy to note that the corresponding differential equation is the conventional Euler equation of the third order, which can be written in the form

$$D^3 y(x) + a_1 D^2 y(x) + a_2 D y(x) + a_3 y(x) = a_0, \qquad \text{with } D \equiv x\frac{d}{dx} = \frac{d}{d(\ln x)}. \qquad (1.40)$$

Any function of the type


$$y(x) = A_0 + \sum_{k=1}^{3} A_k x^{\nu_k}, \qquad (1.41)$$

at certain initial conditions, satisfies the differential Eq. (1.40). By integrating (1.40) three times, one can obtain the desired BLR of the type

$$Y(x) = \sum_{k=1}^{6} C_k X_k(x), \qquad (1.42)$$

$$
\begin{aligned}
Y(x) &= y(x) - \langle \ldots \rangle, \\
X_1(x) &= \int_{x_0}^{x} \frac{y(u)}{u}\,du - \langle \ldots \rangle, & C_1 &= -a_1, \\
X_2(x) &= \int_{x_0}^{x} (\ln x - \ln u)\,\frac{y(u)}{u}\,du - \langle \ldots \rangle, & C_2 &= -a_2, \\
X_3(x) &= \frac{1}{2}\int_{x_0}^{x} (\ln x - \ln u)^2\,\frac{y(u)}{u}\,du - \langle \ldots \rangle, & C_3 &= -a_3, \\
X_4(x) &= \ln x - \langle \ldots \rangle, \quad X_5(x) = \ln^2 x - \langle \ldots \rangle, \quad X_6(x) = \ln^3 x - \langle \ldots \rangle.
\end{aligned} \qquad (1.43)
$$

The values of the constants C1, C2 and C3 can be found from this BLR. After that, the desired power-law exponents are found as the roots of the cubic equation

$$\nu^3 + a_1 \nu^2 + a_2 \nu + a_3 \equiv (\nu - \nu_1)(\nu - \nu_2)(\nu - \nu_3) = 0. \qquad (1.44)$$

The values of the constants C4, C5 and C6 entering into (1.42) contain the unknown values of the derivatives at the initial point x0, and their calculation is of no interest, so they can be omitted. The values of the constants A0, A1, A2 and A3 are found from (1.41) by the conventional LLSM once the values of the power-law exponents from (1.44) are known. In realizing the fitting procedure, the importance of the POLS should be stressed, because large initial errors essentially distort the values of the calculated fitting parameters. Another remark should also be made: in spite of the application of the POLS, some fitting parameters are still calculated with relatively large errors. This happens in all cases when the fitting function contains a large number of fitting parameters. Consider first the fitting of the smoothed functions obtained by the POLS; these plots are shown in Fig. 1.5a below.
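The step from the BLR constants to the power-law exponents through (1.44) is a root-finding exercise. A minimal sketch is given below; the sign convention ak = −Ck is an assumption of this sketch, and the exponent values are illustrative:

```python
import numpy as np

def exponents_from_blr(C1, C2, C3):
    """Roots of nu**3 + a1*nu**2 + a2*nu + a3 = 0 (Eq. (1.44)), assuming a_k = -C_k."""
    return np.sort(np.roots([1.0, -C1, -C2, -C3]).real)

# Round trip: exponents -> Euler coefficients (Vieta's formulas) -> exponents.
nu1, nu2, nu3 = 0.9, 1.0, 3.0
a1 = -(nu1 + nu2 + nu3)
a2 = nu1 * nu2 + nu1 * nu3 + nu2 * nu3
a3 = -nu1 * nu2 * nu3
nu_back = exponents_from_blr(-a1, -a2, -a3)
```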


[Figure 1.5: panels (a) and (b); plots omitted. Panel (a): ysm1,2(x) and yft1,2(x) vs. x; panel (b): nin1,2(x) and yft1,2(x) vs. x.]

Fig. 1.5 (a) The plots demonstrate the application of the BLR to the smoothed functions for calculation of the fitting parameters initially contained in the functions (1.8). The values of the calculated fitting parameters are shown in Table 1.2. (b) The plots demonstrate the application of the BLR to the initial data that are strongly distorted by an initial error. The calculated fitting parameters (see Table 1.2) deviate from the initial ones. It means that, for a successful application of the ECs method, accurate or smoothed data are needed.

After the smoothing procedure, the desired values of the fitting parameters can be obtained by means of the BLR (1.42) and (1.43) and are very close to the initial ones; see the results presented in Table 1.2. If this smoothing procedure is not used, then the values of the calculated fitting parameters deviate from the initial ones (given in parentheses), and the quality of the fitting procedure, described by the value of the relative error, deteriorates by roughly ten times. So, the smoothing procedure plays an important role in the data fitting.

Table 1.2 The set of fitting parameters calculated for the function (1.8), initially subjected to the POLS ("smoothed") and without the smoothing procedure ("raw"); exact values are given in parentheses

| Curve | ν1 | ν2 | Ω | A0 | A1 | A2 | A3 | RelErr (%) | PCC |
|---|---|---|---|---|---|---|---|---|---|
| 1, smoothed | 0.9037 (0.9) | 1.0198 (1.0) | 3.0123 (3) | 4.4495 (24) | 4.6132 (4) | 5.5804 (6) | 5.4677 (6) | 4.4036 | 0.99882 |
| 1, raw | 0.4825 | 1.2827 | 2.3998 | 5.3297 | 0.8596 | 4.1747 | 3.6339 | 44.7565 | 0.8972 |
| 2, smoothed | 1.0801 (1) | 0.9098 (0.9) | 2.0723 (2) | 2.7777 (22) | 5.5221 (6) | 3.9546 (4) | 3.8837 (4) | 7.3585 | 0.99559 |
| 2, raw | 0.8718 | 0.9499 | 2.6112 | 0.8617 | 6.5086 | 1.6879 | 3.8017 | 51.9292 | 0.8645 |

Some comments on Table 1.2 are as follows. The exact values of the fitting parameters are given in parentheses together with the values calculated for the smoothed curves; the fitting parameters for the unsmoothed curves are given in the corresponding "raw" rows. As these values show, the POLS has undoubted advantages over the fitting procedure based on the ECs method applied to raw (initial) data; if the initial data are strongly deteriorated by initial errors, the POLS becomes an effective pre-processing step. Note also that the amplitudes deviate more than the calculated values of the power-law exponents; this phenomenon is observed in both cases.

1.4 Generalizations and Recommendations for the Eigen-Coordinates Method

The ECs method has been applied to data from many real systems to prove its effectiveness. The basics and concrete examples of the method were considered in many papers [10–19], and the specific details are not repeated here. However, it is instructive to specify some problems that deserve investigation, because they receive high interest in mathematical statistics as a whole. First, it is explained why this method is called the eigen-coordinates method. The origin of this definition is the following. As verified in the previous developments, the method gives not only the fitting procedure corresponding to the global fitting minimum (even when seed values of the fitting parameters are absent), but also the possibility to select the most suitable hypothesis. Imagine verifying numerically the eigenvalues for the stationary Schrödinger equation

$$H\Psi_k = E_k \Psi_k, \qquad H\Psi_k \equiv Y_k. \qquad (1.45)$$


Here the set Ψk forms the eigenfunctions and H is the Hamiltonian of a system. If Ψk forms a set of eigenfunctions, then the set of functions

$$Y_k = E_k \Psi_k, \qquad (1.46)$$

in the coordinates (Yk, Ψk), should give a set of straight segments with slopes equal to Ek. If the set Yk does not coincide with the eigenfunctions of the chosen Hamiltonian, then this set of segments is distorted, and the eigenvalues Ek start to depend on some current variable. This important property is stressed in the definition of the method.

Moreover, as shown above, the ECs method helps to solve some basic problems of the theory of hypothesis admission. The method is based on transforming some analytical function F(x, A), initially containing a set of nonlinear fitting parameters, into a new set of fitting parameters C(A), which become linear with respect to the chosen function yi ≡ F({xi^(p)}, A). Such a transformation becomes possible if the chosen function satisfies a linear/nonlinear differential equation in which the new set of parameters C(A) forms a linear combination with respect to the independent variable x, the dependent variable y and the corresponding derivatives. In other words, the applicability of the ECs method rests on the following structure of the corresponding linear/nonlinear differential equation:

$$Y(x, y, y', \ldots) = C_1 X_1(x, y, y', \ldots) + \ldots + C_s X_s(x, y, y', \ldots). \qquad (1.47)$$

The set of functions Y(x, y, y′, y″, ...), Xk(x, y, y′, y″, ...), with k = 1, 2, ..., is determined totally by the chosen function yi = F({xi^(p)}, A). For example, the functions defined by the relationships

$$y_1(x) = A_1 \exp(-\lambda_1 x) + A_2 \exp(-\lambda_2 x), \qquad (1.48a)$$

$$y_2(x) = A\,x^{\nu} \exp(-\gamma x^{\mu}), \qquad (1.48b)$$

$$y_3(x) = A_1 x^{-\nu_1} + A_2 x^{-\nu_2}, \qquad (1.48c)$$

satisfy corresponding differential equations of the following structure:

$$\frac{d^2 y_1}{dx^2} + C_1 \frac{dy_1}{dx} + C_2 y_1 = 0, \qquad C_1 = \lambda_1 + \lambda_2, \quad C_2 = \lambda_1 \lambda_2; \qquad (1.49a)$$

$$x\frac{dy_2}{dx} = C_1 y_2 + C_2 y_2 \ln y_2 + C_3 \ln(x)\, y_2, \qquad C_1 = \nu - \mu \ln A, \quad C_2 = \mu, \quad C_3 = -\mu\nu; \qquad (1.49b)$$

$$D^2 y_3 + C_1 D y_3 + C_2 y_3 = 0, \qquad D \equiv x\frac{d}{dx}, \qquad C_1 = \nu_1 + \nu_2, \quad C_2 = \nu_1 \nu_2. \qquad (1.49c)$$
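A structure such as (1.49b) can be verified pointwise on a grid; the sketch below uses illustrative parameter values, an analytic derivative, and the sign convention C3 = −μν assumed here:

```python
import numpy as np

# Illustrative parameters of y2(x) = A * x**nu * exp(-gamma * x**mu).
A, nu, gamma, mu = 2.0, 0.8, 0.3, 1.5
x = np.linspace(0.2, 5.0, 50)
y2 = A * x**nu * np.exp(-gamma * x**mu)
dy2 = y2 * (nu / x - gamma * mu * x**(mu - 1.0))  # analytic derivative of y2

# Constants of Eq. (1.49b), under the sign convention assumed above.
C1, C2, C3 = nu - mu * np.log(A), mu, -mu * nu

# Both sides of x * y2' = C1*y2 + C2*y2*ln(y2) + C3*ln(x)*y2 should coincide.
lhs = x * dy2
rhs = C1 * y2 + C2 * y2 * np.log(y2) + C3 * np.log(x) * y2
```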


The dimension of the vector C = (C1, C2, ..., Ck), connected with the initial vector A = (a1, a2, ..., ak) by linear/nonlinear relationships, coincides with the dimension (number of fitting components) of the initial vector A or can be smaller. Any other function presented in these ECs will be deformed and take the form of a curve, because of the peculiarities of the construction of the ECs. These invertible deformations are guaranteed by the Picard theorem [9]: according to this theorem, the function that satisfies the corresponding differential equation at the given initial conditions is unique. So, the problem of identifying the analytical relationship is visually simplified, and the hypothesis about the acceptance or non-acceptance of the chosen function F(x, A) for the relationship of Y with X is based on a high level of significance (at least within the limits of the external error dispersion). For example, most of the special functions used in mathematical physics satisfy the structure of the differential equation shown in (1.47) [8], as do the most usual functions, when the initial set of fitting parameters presented by the vector A = (a1, a2, ..., ak) enters into the differential equation in a linear way; see (1.49). So, in comparison with the previous structure (1.5), a BLR of the type (1.47) can be nonlinear and include the fitting function y and its derivatives y′, y″, etc. In these cases, the integration of the initial BLR (1.47) is necessary. As previously shown, the integration procedure, alongside the POLS, decreases the value of the initial error.

1.4.1 Using a Priori Information

Assume that some constant Ck is known a priori, in the sense that it is located in some interval: ak ≤ Ck ≤ bk. How can this information be taken into account to make the calculation in the frame of the LLSM more definite and statistically stable? A new variable t, located in the interval [0, 1], is introduced so that

$$C_k(t) = a_k + \Delta b_k\, t, \qquad t \in [0, 1], \qquad \Delta b_k = b_k - a_k. \qquad (1.50)$$

The limiting values ak and bk are supposed to be known. For this case, the minimized value of the total error has the form

$$\varepsilon_j(t) = \left( Y_j - a_k X_{k,j} - \sum_{p\,(p \neq k)}^{s} C_p X_{p,j} \right) - t\,\Delta b_k X_{k,j} \equiv \varepsilon_{0,j} - t\,\varepsilon_{1,j}, \qquad j = 1, 2, \ldots, N. \qquad (1.51)$$

It is necessary to minimize the dispersion σ corresponding to the error εj with respect to the values of the remaining constants Cp (p ≠ k):


$$\sigma = \frac{1}{N}\sum_{j=1}^{N} \left( Y_j - a_k X_{k,j} - \sum_{p\,(p \neq k)}^{s} C_p X_{p,j} - t\,\Delta b_k X_{k,j} \right)^2 \equiv \frac{1}{N}\sum_{j=1}^{N} \left( \varepsilon_{0,j} - t\,\varepsilon_{1,j} \right)^2. \qquad (1.52)$$

Minimization of expression (1.52) with respect to the constants Cp (p ≠ k) leads to a system of linear equations of the following type:

$$\sum_{p\,(p \neq k)}^{s} C_p \left( X_p \cdot X_m \right) = (Y \cdot X_m) - (a_k + t\,\Delta b_k)(X_k \cdot X_m). \qquad (1.53)$$

From the solution of the linear system (1.53) it follows that the supposition (1.50) implies that any constant Cp starts to depend on the variable t, forming a "fork" located between the unknown constants ap and bp:

$$C_p(t) = a_p + \Delta b_p\, t, \qquad t \in [0, 1], \qquad \Delta b_p = b_p - a_p, \qquad p = 1, 2, \ldots, s, \; p \neq k. \qquad (1.54)$$

The total error of the initial BLR is written in the form

$$\varepsilon_j = \left( Y_j - a_k X_{k,j} - \sum_{p\,(p \neq k)}^{s} a_p X_{p,j} \right) - t \left( \Delta b_k X_{k,j} + \sum_{p\,(p \neq k)}^{s} \Delta b_p X_{p,j} \right) \equiv \varepsilon_{0,j} - t\,\varepsilon_{1,j}. \qquad (1.55)$$

This form allows the unknown constants to be calculated. Since t is an independent variable, the unknown limits entering (1.54) are calculated at the limiting cases t = 0 and t = 1. Therefore, for these cases, the following relationships hold:

$$Y_j - a_k X_{k,j} = \sum_{p \neq k}^{s} a_p X_{p,j}, \qquad \Delta b_k X_{k,j} = -\sum_{p \neq k}^{s} \Delta b_p X_{p,j}. \tag{1.56}$$

These two BLRs help to find the unknown limits [a_p, b_p] of the other variables when the values of a_k and b_k are known. After calculation of the desired limits of the constants C_p (p ≠ k), the errors ε_{0,j} and ε_{1,j} from (1.55) become known. Then the total dispersion is minimized with respect to the variable t:

1.4 Generalizations and Recommendations for the Eigen-Coordinates Method

$$\sigma(t) = \frac{1}{N}\sum_{j=1}^{N}\big(\varepsilon_{0,j} - t\,\varepsilon_{1,j}\big)^{2} \equiv \langle \varepsilon_0^2 \rangle - 2t\,\langle \varepsilon_0 \cdot \varepsilon_1 \rangle + t^2\,\langle \varepsilon_1^2 \rangle. \tag{1.57}$$

The extreme values t_min and σ_min are easy to calculate, with the value t_min located in the interval [0, 1]:

$$t_{\min} = \frac{\langle \varepsilon_0 \cdot \varepsilon_1 \rangle}{\langle \varepsilon_1^2 \rangle}, \quad 0 < t_{\min} < 1, \qquad \sigma_{\min} = \langle \varepsilon_0^2 \rangle - \langle \varepsilon_0 \cdot \varepsilon_1 \rangle\, t_{\min}. \tag{1.58}$$

So, the desired values of the unknown constants are found as

$$C_p(t_{\min}) = a_p + \Delta b_p\, t_{\min}, \quad \Delta b_p = b_p - a_p, \quad p = 1, 2, \ldots, s, \; p \neq k. \tag{1.59}$$
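The whole procedure of Eqs. (1.55)-(1.59) can be sketched numerically. The following snippet is a minimal illustration assuming numpy is available; the function name `fit_with_prior` and the calling convention are illustrative, not taken from the original text.

```python
import numpy as np

def fit_with_prior(Y, X, k, ak, bk):
    """A priori interval LLSM, sketch of Eqs. (1.55)-(1.59).

    Y : (N,) data vector of the BLR; X : (N, s) matrix of basic functions.
    The k-th constant is known a priori to lie in the interval [ak, bk]."""
    N, s = X.shape
    idx = [p for p in range(s) if p != k]
    Xr = X[:, idx]                                  # remaining basic functions
    # Eq. (1.56): two ordinary LLSM problems give a_p and Db_p = b_p - a_p
    a_p, *_ = np.linalg.lstsq(Xr, Y - ak * X[:, k], rcond=None)
    db_p, *_ = np.linalg.lstsq(Xr, -(bk - ak) * X[:, k], rcond=None)
    # Eq. (1.55): zero- and first-order errors
    eps0 = Y - ak * X[:, k] - Xr @ a_p
    eps1 = (bk - ak) * X[:, k] + Xr @ db_p
    # Eq. (1.58): optimal t and minimal dispersion
    t_min = np.mean(eps0 * eps1) / np.mean(eps1 ** 2)
    sigma_min = np.mean(eps0 ** 2) - np.mean(eps0 * eps1) * t_min
    # Eq. (1.59): the desired constants at t = t_min
    C = np.empty(s)
    C[k] = ak + (bk - ak) * t_min
    C[idx] = a_p + db_p * t_min
    return C, t_min, sigma_min
```

For noiseless data whose k-th constant lies strictly inside [ak, bk], the recovered t_min reproduces the exact constants; noise shifts t_min inside the interval instead of destabilizing the full LLSM solution.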

It can be concluded that, in the presence of a priori information, it is sufficient to find the value of t_min in order to calculate the true values of the desired constants from (1.59) at t = t_min. If an initial nonlinear fitting constant A_k entering the fitting function F(x, A) satisfies a relationship of the type (1.50), it is difficult to give a general procedure for deriving the expressions for the whole set of nonlinear constants {A_k}: the relationships A(C) or C(A) (where the set of constants C is considered as linear) remain nonlinear, and the relationship of type (1.50) for the desired constant A_k should be considered separately. Some examples based on (1.48) show that the solution of this problem is not difficult. For the example in (1.48a), suppose that one exponent lies in the interval [a_1, b_1], so that this constant can be presented as

$$\lambda_1(t) = a_1 + \Delta b_1\, t, \quad \Delta b_1 = b_1 - a_1, \quad t \in [0,1]. \tag{1.60}$$

The basic linear relationship for the function (1.48a) has the form

$$Y(x) = C_1 X_1(x) + C_2 X_2(x) + C_3 X_3(x),$$
$$Y(x) = y - \langle \ldots \rangle, \qquad X_1(x) = \int_{x_0}^{x} y(u)\,du - \langle \ldots \rangle, \quad C_1 = \lambda_1 + \lambda_2,$$
$$X_2(x) = \int_{x_0}^{x} (x-u)\,y(u)\,du - \langle \ldots \rangle, \quad C_2 = -\lambda_1 \lambda_2, \qquad X_3(x) = x - \langle \ldots \rangle. \tag{1.61}$$

Inserting the variable (1.60) into the BLR (1.61) yields the following relationship:

$$Y(x) - a_1 X_1(x) = \lambda_2\big(X_1(x) - a_1 X_2(x)\big) + C_3 X_3(x) + t\,\Delta b_1\big(X_1(x) - \lambda_2 X_2(x)\big). \tag{1.62}$$

The relationship (1.62) should be minimized at t = 0 and t = 1. These calculations are realized by the conventional LLSM and provide the confidence limits for the two other variables λ_2 and C_3:

$$a_2 < \lambda_2 < b_2, \qquad a_3 < C_3 < b_3. \tag{1.63}$$

These limiting values (a_2, a_3 and b_2, b_3) are related in a nonlinear way to the values a_1 and b_1, which are supposed to be known. Then the following errors are defined:

$$\varepsilon_{0,j} = Y(x_j) - (a_1 + a_2)\,X_1(x_j) + a_1 a_2\, X_2(x_j) - a_3\, X_3(x_j),$$
$$\varepsilon_{1,j} = (\Delta b_1 + \Delta b_2)\,X_1(x_j) - (a_1 \Delta b_2 + a_2 \Delta b_1)\,X_2(x_j) + \Delta b_3\, X_3(x_j),$$
$$\varepsilon_{2,j} = \Delta b_1 \Delta b_2\, X_2(x_j), \qquad \Delta b_k = b_k - a_k. \tag{1.64}$$

The expression for the dispersion that is minimized with respect to the variable t takes the form

$$\sigma(t) = \frac{1}{N}\sum_{j=1}^{N}\big(\varepsilon_{0,j} - t\,\varepsilon_{1,j} - t^2\,\varepsilon_{2,j}\big)^{2} \equiv \langle\varepsilon_0^2\rangle - 2t\,\langle\varepsilon_0 \cdot \varepsilon_1\rangle + t^2\big(\langle\varepsilon_1^2\rangle - 2\langle\varepsilon_0 \cdot \varepsilon_2\rangle\big) + 2t^3\,\langle\varepsilon_1 \cdot \varepsilon_2\rangle + t^4\,\langle\varepsilon_2^2\rangle. \tag{1.65}$$

The main difference between the previous result (1.57) and the expression (1.65) is that the nonlinear dependence between the initial coefficients {A_k} and the coefficients {C_k} figuring in the corresponding BLR generates a polynomial dependence on the variable t in σ(t), so that local minima of σ(t) become possible. From the set of (at most three) stationary points defined by

$$-\langle\varepsilon_0 \cdot \varepsilon_1\rangle + t\big(\langle\varepsilon_1^2\rangle - 2\langle\varepsilon_0 \cdot \varepsilon_2\rangle\big) + 3t^2\,\langle\varepsilon_1 \cdot \varepsilon_2\rangle + 2t^3\,\langle\varepsilon_2^2\rangle = 0, \tag{1.66}$$

the global minimum with 0 < t_min < 1 should be chosen.

Consider now the example in (1.48b). The basic linear relationship for this equation has the form


$$Y(x) = C_1 X_1(x) + C_2 X_2(x) + C_3 X_3(x),$$
$$Y(x) = x\,y - \int_{x_0}^{x} y(u)\,du - \langle \ldots \rangle, \tag{1.67}$$
$$X_1(x) = \int_{x_0}^{x} y(u)\,du - \langle \ldots \rangle, \quad C_1 = \nu - \mu \ln A,$$
$$X_2(x) = \int_{x_0}^{x} \ln[y(u)]\,y(u)\,du - \langle \ldots \rangle, \quad C_2 = \mu,$$
$$X_3(x) = \int_{x_0}^{x} \ln[u]\,y(u)\,du - \langle \ldots \rangle, \quad C_3 = \nu\mu. \tag{1.68}$$

The limits of the interval for the power-law exponent μ are supposed to be known:

$$\mu(t) = a + (b-a)\,t, \quad t \in [0,1]. \tag{1.69}$$

Taking this relationship into account, simple algebraic manipulations bring the zero-order error ε_{0,j} and the first-order error ε_{1,j} to the following form:

$$\varepsilon_{0,j} = Y(x_j) - a\,X_2(x_j) - \nu\big(X_1(x_j) + a\,X_3(x_j)\big) + a \ln(A)\, X_1(x_j),$$
$$\varepsilon_{1,j} = -(b-a)\ln(A)\, X_1(x_j) + (b-a)\, X_2(x_j) + \nu\,(b-a)\, X_3(x_j). \tag{1.70}$$

The total error ε_j(t) and the dispersion σ(t) are written in the form

$$\varepsilon_j(t) = \varepsilon_{0,j} - t\,\varepsilon_{1,j}, \qquad \sigma(t) = \frac{1}{N}\sum_{j=1}^{N}\big(\varepsilon_j(t)\big)^{2}. \tag{1.71}$$

From the expressions in (1.70), one can find the limits (at t = 0, 1) of the confidence intervals corresponding to the other pair of variables:

$$a_2 < \ln(A) < b_2, \qquad a_3 < \nu < b_3. \tag{1.72}$$

After that, the problem is reduced to the minimization of a dispersion of the type (1.65) with the following coefficients:

$$\varepsilon_{0,j} = Y(x_j) - (a_3 - a\,a_2)\,X_1(x_j) - a\,X_2(x_j) - a\,a_3\, X_3(x_j),$$
$$\varepsilon_{1,j} = \big(\Delta b_3 - a\,\Delta b_2 - a_2\,\Delta b\big)\,X_1(x_j) + \Delta b\, X_2(x_j) + \big(a\,\Delta b_3 + a_3\,\Delta b\big)\,X_3(x_j),$$
$$\varepsilon_{2,j} = \Delta b\,\Delta b_2\, X_1(x_j) - \Delta b\,\Delta b_3\, X_3(x_j), \tag{1.73}$$

with Δb = b − a, Δb_2 = b_2 − a_2, Δb_3 = b_3 − a_3. Similar calculations are not repeated here because, by the substitution x → ln(x), the problem is reduced to the previous example (1.48a). Thus, these concrete examples show how to calculate the desired fitting constants when some variables are confined to confidence intervals.

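When the interval-valued constant enters the BLR nonlinearly, σ(t) is a quartic polynomial in t, as in (1.65), and its global minimum on [0, 1] can be located from the real roots of the cubic (1.66) together with the interval endpoints. A minimal numpy sketch (the function name and moment shorthand are illustrative, not from the original text):

```python
import numpy as np

def minimize_quartic_dispersion(eps0, eps1, eps2):
    """Global minimum of sigma(t) of Eq. (1.65) over t in [0, 1].

    Stationary points solve the cubic (1.66); candidates are its real
    roots inside [0, 1] plus the interval ends t = 0 and t = 1."""
    m = lambda a, b: np.mean(a * b)                 # moment <a . b>
    e00, e11, e22 = m(eps0, eps0), m(eps1, eps1), m(eps2, eps2)
    e01, e02, e12 = m(eps0, eps1), m(eps0, eps2), m(eps1, eps2)

    def sigma(t):                                   # Eq. (1.65)
        return (e00 - 2*t*e01 + t**2*(e11 - 2*e02)
                + 2*t**3*e12 + t**4*e22)

    # Eq. (1.66), coefficients ordered from highest power of t
    roots = np.roots([2*e22, 3*e12, e11 - 2*e02, -e01])
    cand = [r.real for r in roots if abs(r.imag) < 1e-10 and 0 <= r.real <= 1]
    cand += [0.0, 1.0]
    t_min = min(cand, key=sigma)
    return t_min, sigma(t_min)
```

Because (1.66) has at most three real roots, the candidate set is tiny and the comparison of σ(t) at the candidates selects the global (not merely local) minimum.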
1.4.2 The Problem of Elimination of Dependent Constants

Another problem that can be a serious drawback in the application of the conventional LLSM is the mutual dependence of the constants C_k (k = 1, 2, ..., s) entering the BLR. In this case, the direct application of the LLSM becomes impossible because the determinant of the main correlation matrix

$$K = \begin{pmatrix} (X_1 \cdot X_1) & (X_1 \cdot X_2) & \cdots & (X_1 \cdot X_s) \\ (X_2 \cdot X_1) & (X_2 \cdot X_2) & \cdots & (X_2 \cdot X_s) \\ \vdots & \vdots & \ddots & \vdots \\ (X_s \cdot X_1) & (X_s \cdot X_2) & \cdots & (X_s \cdot X_s) \end{pmatrix} \tag{1.74}$$

reduces to zero. The Gram-Schmidt orthogonalization procedure then becomes useless because the basic relationship (1.21) is applicable only to a set of linearly independent initial vectors. Therefore, this case requires special treatment. The simplest applicable procedure is the elimination of dependent variables. To understand this case better, consider a simple example. Suppose that the BLR has the following structure:

$$Y(x) = C_1 X_1(x) + F(C_1, C_3)\, X_2(x) + C_3 X_3(x). \tag{1.75}$$

The constant C_2 = F(C_1, C_3) is a dependent constant. It is excluded by means of the following linear procedure:

$$\frac{(Y \cdot X_2)}{(X_2 \cdot X_2)}\, X_2(x) = \left[ C_1 \frac{(X_1 \cdot X_2)}{(X_2 \cdot X_2)} + F(C_1, C_3) + C_3 \frac{(X_3 \cdot X_2)}{(X_2 \cdot X_2)} \right] X_2(x). \tag{1.76}$$


Subtracting expression (1.76) from (1.75) finally gives the BLR for the independent variables:

$$\tilde{Y}_1(x) = C_1 \tilde{X}_1(x) + C_3 \tilde{X}_3(x). \tag{1.77a}$$

Here the new variables are defined as

$$\tilde{Y}_1(x) = Y(x) - \frac{(Y \cdot X_2)}{(X_2 \cdot X_2)}\, X_2(x), \quad \tilde{X}_1(x) = X_1(x) - \frac{(X_1 \cdot X_2)}{(X_2 \cdot X_2)}\, X_2(x), \quad \tilde{X}_3(x) = X_3(x) - \frac{(X_3 \cdot X_2)}{(X_2 \cdot X_2)}\, X_2(x). \tag{1.77b}$$

This procedure can be continued, yielding

$$\tilde{Y}_{11}(x) = C_1 \tilde{X}_{11}(x), \qquad \tilde{Y}_{13}(x) = C_3 \tilde{X}_{13}(x),$$
$$\tilde{Y}_{11}(x) = \tilde{Y}_1(x) - \frac{(\tilde{Y}_1 \cdot \tilde{X}_3)}{(\tilde{X}_3 \cdot \tilde{X}_3)}\, \tilde{X}_3(x), \quad \tilde{X}_{11}(x) = \tilde{X}_1(x) - \frac{(\tilde{X}_1 \cdot \tilde{X}_3)}{(\tilde{X}_3 \cdot \tilde{X}_3)}\, \tilde{X}_3(x),$$
$$\tilde{Y}_{13}(x) = \tilde{Y}_1(x) - \frac{(\tilde{Y}_1 \cdot \tilde{X}_1)}{(\tilde{X}_1 \cdot \tilde{X}_1)}\, \tilde{X}_1(x), \quad \tilde{X}_{13}(x) = \tilde{X}_3(x) - \frac{(\tilde{X}_3 \cdot \tilde{X}_1)}{(\tilde{X}_1 \cdot \tilde{X}_1)}\, \tilde{X}_1(x). \tag{1.77c}$$
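The successive projections of Eq. (1.77) can be sketched numerically with numpy. The function name `final_eigen_coordinates` is illustrative; the snippet projects all remaining basic functions out of both Y and X_k, which is equivalent to the chained eliminations above when the remaining functions are first orthogonalized among themselves:

```python
import numpy as np

def final_eigen_coordinates(Y, X, k):
    """Final ECs Y^(k)(x) = C_k X^(k)(x), sketch of Eqs. (1.77)-(1.78).

    Y : (N,) data vector; X : (N, s) matrix of basic functions."""
    proj = lambda u, v: (u @ v) / (v @ v) * v       # projection of u onto v
    Yk, Xk = Y.copy(), X[:, k].copy()
    others = [X[:, p].copy() for p in range(X.shape[1]) if p != k]
    # orthogonalize the remaining functions among themselves (Gram-Schmidt)
    basis = []
    for v in others:
        for b in basis:
            v = v - proj(v, b)
        basis.append(v)
    # remove every remaining direction from Y and X_k
    for b in basis:
        Yk = Yk - proj(Yk, b)
        Xk = Xk - proj(Xk, b)
    Ck = (Yk @ Xk) / (Xk @ Xk)                      # slope of the final EC
    return Yk, Xk, Ck
```

After the deflation, the scatter plot of Yk against Xk should collapse onto a straight line through the origin whose slope is the desired constant C_k; deviations from this line signal either external noise or a wrongly chosen hypothesis.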

This procedure can be applied in the general case to finally obtain a structure of the type

$$Y^{(k)}(x) = C_k X^{(k)}(x). \tag{1.78}$$

It is natural to define the relationships (1.78) as the final ECs. The simplest form of the BLR (1.78) can also be obtained in another way, based on the following considerations. Suppose that the BLR, after exclusion of the dependent variables, can be presented in the form

$$Y(x) - C_k X_k(x) = \sum_{p \neq k}^{s} C_p X_p(x). \tag{1.79}$$

If, in the last equation, the parameter C_k is considered as an independent variable, then any C_p from the right-hand side of Eq. (1.79), in accordance with the ideas discussed above (see Eqs. (1.51) and (1.54)), can be written as

$$Y(x) - C_k X_k(x) = \sum_{p \neq k}^{s} C_p X_p(x) \cong \sum_{p \neq k}^{s} \big(a_p + C_k b_p\big)\, X_p(x). \tag{1.80a}$$

The unknown constants (a_p, b_p) are found by the LLSM from the equations

$$Y(x) = \sum_{p \neq k}^{s} a_p X_p(x), \qquad -X_k(x) = \sum_{p \neq k}^{s} b_p X_p(x). \tag{1.80b}$$

After calculation of these constants, the desired functions figuring in (1.78) are found from the relationships

$$Y^{(k)}(x) = Y(x) - \sum_{p \neq k}^{s} a_p X_p(x), \qquad X^{(k)}(x) = X_k(x) + \sum_{p \neq k}^{s} b_p X_p(x), \qquad Y^{(k)}(x) = C_k X^{(k)}(x). \tag{1.81}$$

The general expressions look cumbersome, so the case s = 3 is given explicitly. In this case, the BLR is of the type

$$Y(x) - C_1 X_1(x) = (a_2 + C_1 b_2)\, X_2(x) + (a_3 + C_1 b_3)\, X_3(x). \tag{1.82}$$

The constants a_2, a_3 and b_2, b_3 are found from the linear equations

$$(Y \cdot X_2) = a_2 \langle X_2^2 \rangle + a_3 (X_2 \cdot X_3), \qquad (Y \cdot X_3) = a_2 (X_2 \cdot X_3) + a_3 \langle X_3^2 \rangle,$$
$$a_2 = \frac{(Y \cdot X_2)\langle X_3^2 \rangle - (Y \cdot X_3)(X_2 \cdot X_3)}{\langle X_2^2 \rangle\langle X_3^2 \rangle - (X_2 \cdot X_3)^2}, \qquad a_3 = \frac{(Y \cdot X_3)\langle X_2^2 \rangle - (Y \cdot X_2)(X_2 \cdot X_3)}{\langle X_2^2 \rangle\langle X_3^2 \rangle - (X_2 \cdot X_3)^2},$$
$$a_{2,3}(Y) \to b_{2,3}(-X_1): \qquad -(X_1 \cdot X_2) = b_2 \langle X_2^2 \rangle + b_3 (X_2 \cdot X_3), \qquad -(X_1 \cdot X_3) = b_2 (X_2 \cdot X_3) + b_3 \langle X_3^2 \rangle. \tag{1.83}$$

After calculation of these constants, the final coordinates for the desired C_1 are determined from (1.82):

$$Y^{(1)}(x) = Y(x) - a_2 X_2(x) - a_3 X_3(x), \qquad X^{(1)}(x) = X_1(x) + b_2 X_2(x) + b_3 X_3(x), \qquad Y^{(1)}(x) = C_1 X^{(1)}(x). \tag{1.84}$$

By complete analogy with expressions (1.84), the corresponding functions for the constants C_2 and C_3 can be found. To stress the importance of these constants for further considerations, consider the previous example presented by hypothesis (1.38a). It was stressed above that these ECs play an important role in the recognition of the corresponding hypothesis. Now their sensitivity to small external errors is shown. The final ECs for the


constants C_1 and C_2 are defined by expressions (1.39) (Figs. 1.6, 1.7, 1.8, and 1.9). The influence of the external error is taken into account through the expressions

$$y_{in1}(x) = y_1(x)\,\big(1 + \mathrm{Error}(x)\big), \qquad \mathrm{Error}(x) = 2D \cdot \mathrm{rnd}(x) - D, \quad 0 \le \mathrm{rnd}(x) \le 1. \tag{1.85}$$
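The multiplicative noise model of Eq. (1.85) is straightforward to reproduce; the following sketch (the function name is illustrative) uses numpy's uniform generator for rnd(x):

```python
import numpy as np

def add_relative_error(y, D, seed=None):
    """Multiplicative uniform noise of Eq. (1.85):
    y_in(x) = y(x) * (1 + Error(x)),  Error(x) = 2*D*rnd(x) - D."""
    rng = np.random.default_rng(seed)
    error = 2.0 * D * rng.random(y.shape) - D       # uniform in [-D, D]
    return y * (1.0 + error)
```

Setting D = 0.01, 0.025 and 0.07 reproduces the 1%, 2.5% and 7% error levels used in Figs. 1.6-1.8.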

Therefore, this example and other similar examples show the effectiveness of the final eigen-coordinates. On the one hand, they are sensitive detectors of a "strange" function, when the ECs do not belong to the verified function. On the other hand, the direct integration procedure helps to "dull" their extreme sensitivity and, in the case of proper recognition of the verified hypothesis, to provide acceptable accuracy in the presence of relatively large values of the external error. Moreover, this procedure helps to determine the limits of the confidence intervals if they are not known beforehand. Suppose that, as a result of the calculations, it becomes possible to determine the limits of the confidence interval of a constant C:

$$C(t) = a + (b-a)\,t. \tag{1.86}$$

In this simplest case, the error (j = 1, 2, ..., N) and the dispersion are

$$\varepsilon_j(t) = \varepsilon_{0,j} - t\,\varepsilon_{1,j}, \qquad \sigma(t) = \frac{1}{N}\sum_{j=1}^{N}\big(\varepsilon_j(t)\big)^{2}, \tag{1.87}$$

with the values of the errors ε_{0,j} and ε_{1,j} given by

$$\varepsilon_{0,j} = Y_j - a\,X_j, \qquad \varepsilon_{1,j} = (b-a)\,X_j. \tag{1.88}$$

The value of t minimizing the mean-square dispersion is determined by expression (1.58):

$$t_{\min} = \frac{\langle \varepsilon_0 \cdot \varepsilon_1 \rangle}{\langle \varepsilon_1^2 \rangle}, \quad 0 < t_{\min} < 1, \qquad \sigma_{\min} = \langle \varepsilon_0^2 \rangle - \langle \varepsilon_0 \cdot \varepsilon_1 \rangle\, t_{\min}. \tag{1.89}$$

As previously seen, under the influence of the external error, t_min can exceed unity and leave the limits of the admissible interval [a, b]. In this case, one can suggest a transformation that keeps the new variable t̃_min, depending on the initial value t_min, within the desired limits [0, 1] when t_min exceeds unity. Assume that the influence of the external error leads to a linear deformation of the functions (1.88):

$$\tilde{\varepsilon}_{0,j} = \varepsilon_{0,j} + p_1\,\varepsilon_{1,j}, \qquad \tilde{\varepsilon}_{1,j} = p_2\,\varepsilon_{0,j} + \varepsilon_{1,j}. \tag{1.90}$$

As a result of these distortions, the new minimal value t̃_min takes the form


[Fig. 1.6a, b: plots of the final eigen-coordinates Y^(1)(x) vs X^(1)(x) and Y^(2)(x) vs X^(2)(x) at 1% error, with slopes C1 = -0.4944 (exact -0.5) and C2 = -0.3463 (exact -0.35); the integrated coordinates JY^(k), JX^(k) give C1 = -0.5046 and C2 = -0.3525.]

Fig. 1.6 (a) The influence of a small error (1%, D = 0.01) on the distortions of the constants figuring in expressions (1.38a). As numerical calculations show, the plots for the constants C1 and C2 serve as the most sensitive indicators of the presence of an external error. (b) Direct numerical integration of expressions (1.38a) usually leads to a decrease of the external error. For small values of errors,


Fig. 1.6 (continued) the integration is not so important, but this procedure becomes necessary (see the figures below) if the influence of the external error is essential. (c) The final fitting of the function yin1(x) (error = 1%) based on the values of the constants taken from Fig. 1.6a, b

$$\tilde{t}_{\min}(t_{\min}) = \frac{\langle \tilde{\varepsilon}_0 \cdot \tilde{\varepsilon}_1 \rangle}{\langle \tilde{\varepsilon}_1^2 \rangle} = \frac{\big\langle (\varepsilon_0 + p_1\varepsilon_1)(p_2\varepsilon_0 + \varepsilon_1) \big\rangle}{\big\langle (\varepsilon_1 + p_2\varepsilon_0)^2 \big\rangle} = \frac{p_2 r^2 + (1 + p_1 p_2)\,t_{\min} + p_1}{p_2^2 r^2 + 2 p_2\, t_{\min} + 1}. \tag{1.91}$$

Here the value of r is determined as

$$r = \sqrt{\frac{\langle \varepsilon_0^2 \rangle}{\langle \varepsilon_1^2 \rangle}}. \tag{1.92}$$

The correction parameters p_1 and p_2 in (1.91) can be found from the following requirements:

$$\tilde{t}_{\min}(0) = 0, \quad \tilde{t}_{\min}(1) = 1 \ \text{ if } t_{\min} \in [0,1]; \qquad \tilde{t}_{\min}(t_{\min}) \to 1 \ \text{ if } t_{\min} \gg 1. \tag{1.93}$$

Simple calculations show that the desired relationship between t̃_min and t_min satisfying the conditions (1.93) takes the form

$$\tilde{t}_{\min}(t_{\min}) \cong t_{\min} \ \text{ for } t_{\min} \in [0,1],$$
$$\tilde{t}_{\min}(t_{\min}) = \frac{(1 + p_1 p_2)\, t_{\min}}{p_2^2 r^2 + 2 p_2\, t_{\min} + 1}, \quad t_{\min} > 1,$$
$$p_1 = 1 - \sqrt{1 + r^2}, \qquad p_2 = \frac{-1 + \sqrt{1 + r^2}}{r^2}. \tag{1.94}$$


[Fig. 1.7a, b: final eigen-coordinate plots at 2.5% error, with slopes C1 = -0.4383 (exact -0.5) and C2 = -0.3144 (exact -0.35); after integration, C1 = -0.5107 and C2 = -0.3585.]

Fig. 1.7 (a) If the value of the error increases (error = 2.5%, D = 0.025), then the slopes for the constants C1 and C2 give values which deviate strongly from the exact values, given in parentheses. (b) The plots show the effectiveness of the integration procedure. After direct integration of the coordinates Y^(k)(x), X^(k)(x) (k = 1, 2), the influence of the external error is decreased



Fig. 1.7 (continued) and the calculated values of the constants C1,2 are kept in the vicinity of the exact values (C1 = -0.5, C2 = -0.35). (c) The fitting of the function y1(x) (error = 2.5%) realized with the values of the constants C1 = -0.4383, C2 = -0.3144. It resembles a "conventional" fit when the value of the external error is rather small and the fitting curve is located inside a "cloud" of deviated points

These expressions turn out to be very useful when it is necessary to evaluate the values of C_k if the limits of the confidence interval (C_k ∈ [a_k, b_k]) are known and the influence of the external error is essential. The plot of the function t̃_min(t_min) for different values of r is shown in Fig. 1.10a. The values of r were taken from the relationships

$$\varepsilon_{0,j} = Y^{(1)}(x_j) - a_1\, X^{(1)}(x_j), \qquad \varepsilon_{1,j} = (b_1 - a_1)\, X^{(1)}(x_j), \qquad a_1 = -0.55, \; b_1 = -0.35, \tag{1.95}$$

which correspond to the calculation of the constant C_1 figuring in expressions (1.86), (1.87), and (1.88). The plots of these random functions at different values of the external error are given in Fig. 1.10b. After some algebraic transformations, the dependence of t̃_min(t_min) on r, see (1.94), can be expressed as

$$\tilde{t}_{\min}(t_{\min}) = \frac{(1 + p_1 p_2)\, t_{\min}}{p_2^2 r^2 + 2 p_2\, t_{\min} + 1} = \frac{t_{\min}}{\sqrt{1 + r^2} + t_{\min}} < 1 \quad (r > 0). \tag{1.96}$$

As one can notice from (1.96), the dependence on the value of r is weak. Therefore, in the previous expression (1.94), one can approximately replace the current parameter t by the modified variable



Fig. 1.8 (a) Sensitivity of the eigen-coordinates Y^(k)(x), X^(k)(x) (k = 1, 2) to a relatively large external error (error = 7%, D = 0.07). The calculated values of the constants, C1 = -0.2017 and C2 = -0.1771, are too inaccurate to provide an acceptable fit. (b) This figure demonstrates the unsatisfactory fit of the function y1(x) in the presence of the large error (7%). To obtain more acceptable values of the constants, direct integration is necessary. (c) Again, integration helps to obtain more accurate values of the constants, C1 = -0.4966 and C2 = -0.3494. So, the integration procedure increases the stability of the initial eigen-coordinates against large values of the external error



$$\tilde{t} = \begin{cases} t, & t \in [0, 1], \\[4pt] \dfrac{t}{1+t}, & t \in [1, \infty), \end{cases} \tag{1.97}$$

and keep all the preceding considerations. Therefore, at the end of this section, some innovations can be suggested that improve the traditional LLSM with respect to the influence of relatively large values of the external error.
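The correction of Eqs. (1.91)-(1.97) can be sketched in a few lines; the function name `corrected_t` is illustrative, not from the original text. It leaves t_min untouched inside [0, 1] and maps overshooting values back into the unit interval using the r-dependent parameters p1, p2 of (1.94):

```python
import numpy as np

def corrected_t(t_min, r):
    """Correction of Eqs. (1.94)/(1.97): keeps the optimal t inside [0, 1]
    when the external error pushes t_min beyond the unit value."""
    if 0.0 <= t_min <= 1.0:
        return t_min                                # first line of Eq. (1.94)
    p1 = 1.0 - np.sqrt(1.0 + r**2)
    p2 = (-1.0 + np.sqrt(1.0 + r**2)) / r**2
    # algebraically equal to t / (sqrt(1 + r^2) + t), cf. Eq. (1.96)
    return (1.0 + p1 * p2) * t_min / (p2**2 * r**2 + 2.0 * p2 * t_min + 1.0)
```

Note that the returned value stays strictly below 1 for any r > 0, in agreement with the bound stated in (1.96).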

1.5 Concluding Remarks

This chapter showed how to extend the limits of applicability of the LLSM. If the differential equation for the fitting function contains a linear combination of the constants, as in (1.47), then, by integration, one can apply the ECs method. Chapter 7 will describe how to generalize the LLSM to the functional least-squares method. That method appears as a result of searching for an intermediate model for quasi-reproducible experiments.



Fig. 1.9 (a) This plot demonstrates the dependence of the calculated values of the constant C1 (black triangles) and the corresponding values of the constant JC1 obtained by the integration procedure (upturned white triangles). The advantages of the integration procedure are obvious; only above a 9% error does the integration lose its stability, and in those cases the POLS becomes effective. (b) The same tendency is observed for the other constant C2 obtained from the initial eigen-coordinates and the corresponding values obtained after their direct integration


[Fig. 1.10a: curves of t̃(t) for r = 0.6617 (1%), r = 1.5011 (2.5%) and r = 2.9032 (7%). Fig. 1.10b: the random functions ε0(x) and ε1(x) at 1% and 2.5% error.]


[Fig. 1.10c: C1(Error) with confidence-interval limits -0.55 and -0.35 and exact value -0.5. Fig. 1.10d: C2(Error) with confidence-interval limits -0.45 and -0.25 and exact value -0.35.]

Fig. 1.10 (a) The dependencies correspond to the analytical expression (1.95). By increasing the value of r, the second branches of the curves (starting from the values t > 1) go down. The first branches t ∈ [0, 1] of these curves coincide with each other. (b) Random functions calculated in accordance with

1.6 Questions for Self-Testing

1. How is the conventional fitting problem formulated?
2. What is the regression model curve?
3. Try to formulate the basic drawback of the nonlinear regression problem.
4. How is the linear least-squares method formulated? In which cases can it be applied?
5. What is the BLR and how is it obtained?
6. Why is it necessary to eliminate the mean value of the error ⟨ε(x)⟩ from the BLR?
7. If two competitive hypotheses are available for the fitting of the available data, which one is preferable?
8. Assume the basic determinant of the LLSM is close to zero. What does this mean?

1.7 Exercises

1. Find the BLR for the function

$$y(x) = A_0 + A_1 \exp(-\lambda_1 x)\cos(\omega_0 x) + A_2 \exp(-\lambda_1 x)\sin(\omega_0 x),$$

and investigate the value of the relative error with respect to the external error. Note that in this case it is necessary to modify the fitting function as

$$n_{in1,2}(x_j) = y(x_j) + \big(2\Delta \cdot \mathrm{Pr}_{1,2}(x_j) - \Delta\big) \cdot \max(y), \quad \Delta = 0.5.$$

The same questions refer to the other functions listed below.

2. Find the BLR for the beta-distribution

$$y(x) = A\,(x - x_0)^{\alpha}\,(x_N - x)^{\beta}.$$

The limits of the interval (x_0, x_N) are assumed to be known.

Fig. 1.10 (continued) expressions (1.95). With an increasing external error, the value of r also increases. If the confidence interval is properly defined, then the optimal calculated value of the constant C1 is found easily in spite of the large external error. (c) This plot clearly demonstrates the advantages of the confidence-interval approach compared with direct integration. At sufficiently large external errors, the deviations in the calculation of the constant C1 obtained by integration become larger than those obtained with the confidence interval. Two horizontal lines show the limits of the chosen interval. (d) The same tendency is observed for the second constant C2. So, a priori detection of the limits of the desired interval is important for achieving the stability of the ECs method


3. Find the BLR for the Gaussian function and its modifications

$$y(x) = A\,x^{\alpha} \exp\big(a_2 x^2 + a_1 x\big) \cong A\,x^{\alpha} \exp\left[-\left(\frac{x - x_0}{\sigma}\right)^{2}\right].$$

4. Find the BLR for the log-normal distribution

$$y(x) = A \exp\big(a_2 \ln^2(x) + a_1 \ln(x)\big).$$

5. Find the BLR for the generalized Gaussian distribution

$$y(x) = A \exp\left[\sum_{k=1}^{4} a_k (x - x_0)^{k}\right].$$

The center of the distribution x_0 is assumed to be known.

6. Compare two alternative and competitive hypotheses that are represented by two similar functions:

$$y(x) = \frac{A}{\big[1 + a(x - x_0)^2\big]^{\theta}}, \quad \theta > 0, \qquad y(x) = \frac{A}{\big[1 + a|x - x_0|^{\theta}\big]^{2}}, \quad \theta > 0.$$

The center of these distributions x_0 is assumed to be known.

7. Find the BLR for the function

$$y(x) = A\,\frac{\exp\big(-b(x - x_0)^2\big)}{1 + a(x - x_0)^2}.$$

8. Find the BLR for the three-exponential function

$$y_3(x) = A_0 + \sum_{k=1}^{3} A_k \exp(-\lambda_k x).$$

9. Find the BLR for the combination of exponential and power-law functions:

$$y(x) = A_0 + A_1 x^{p} + A_2 \exp(-\lambda x).$$


The power-law parameter p, with p > 0, is assumed to be known. How does this result change if the power-law parameter p is located in the interval [p_min, p_max]?

10. Find the BLR for the combination of polynomial and exponential functions:

$$y(x) = A_1 x + A_2 x^2 + A_3 \exp(-\lambda x).$$

References

1. L. Janossy, Theory and Practice of the Evaluation of Measurements (Clarendon Press, Oxford, 1965)
2. N.L. Johnson, F.C. Leone, Statistics and Experimental Design in Engineering and the Physical Sciences (Wiley, New York, 1977)
3. D.J. Hudson, Statistics. Lectures on Elementary Statistics and Probability (CERN, Geneva, 1964)
4. M.A. Sharaf, D.L. Illman, B.R. Kowalski, Chemometrics (Wiley, New York, 1986)
5. P.V. Novitsky, I.A. Zograf, The Evaluation of Errors in Measurement Results (Energoatomizdat, Leningrad, 1985) (in Russian)
6. M.L. Ciurea, S. Lazanu, I. Stavaracher, A.-M. Lepadatu, V. Iancu, M.R. Mitroi, R.R. Nigmatullin, D.M. Baleanu, Stress-induced traps in multilayered structures. J. Appl. Phys. 109, 013717 (2011)
7. R.R. Nigmatullin, C. Ionescu, D. Baleanu, NIMRAD: novel technique for respiratory data treatment. J. Signal Image Video Process., 1–16 (2012). https://doi.org/10.1007/s11760-012-0386-1
8. E. Kamke, Differentialgleichungen: Lösungsmethoden und Lösungen, 6th improved edn. (Leipzig, 1959); Russian translation: Moscow, 1971, 576 pp.
9. G. Korn, T. Korn, Mathematical Handbook for Scientists and Engineers (McGraw-Hill, New York, 1961)
10. R.R. Nigmatullin, Eigen-coordinates: new method of identification of analytical functions in experimental measurements. Appl. Magn. Reson. 14, 601–633 (1998)
11. R.R. Nigmatullin, Recognition of nonextensive statistic distribution by the eigen-coordinates method. Physica A 285, 547–565 (2000)
12. R.R. Nigmatullin, M.M. Abdul-Gader Jafar, N. Shinyashiki, S. Sudo, S. Yagihara, Recognition of a new universal permittivity for glycerol by the use of the eigen-coordinates method. J. Non-Crystalline Solids 305, 96–111 (2002)
13. M. Al-Hasan, R.R. Nigmatullin, Identification of the generalized Weibull distribution in wind speed data by the eigen-coordinates method. Renew. Energy 28(1), 93–110 (2003)
14. R.R. Nigmatullin, G. Smith, Fluctuation-noise spectroscopy and a 'universal' fitting function of amplitudes of random sequences. Physica A 320, 291–317 (2003)
15. R.R. Nigmatullin, S.O. Nelson, Recognition of the "fractional" kinetic equations from complex systems: dielectric properties of fresh fruits and vegetables from 0.01 to 1.8 GHz. J. Signal Process. 86, 2744–2759 (2006)
16. R.R. Nigmatullin, A.A. Arbuzov, F. Salehli, A. Gis, I. Bayrak, H. Catalgil-Giz, The first experimental confirmation of the fractional kinetics containing the complex power-law exponents: dielectric measurements of polymerization reactions. Physica B (Physics of Condensed Matter) 388, 418–434 (2007)


17. R.R. Nigmatullin, Strongly correlated variables and existence of the universal distribution function for relative fluctuations. Phys. Wave Phenomena 16(2), 119–145 (2008)
18. R.R. Nigmatullin, R.A. Giniatullin, A.I. Skorinkin, Membrane current series monitoring: essential reduction of data points to finite number of stable parameters. Front. Comput. Neurosci. 8, Article 120 (2014). https://doi.org/10.3389/fncom.2014.00120
19. R.R. Nigmatullin, C. Ceglie, G. Maione, D. Striccoli, Reduced fractional modeling of 3D video streams: the FERMA approach. Nonlinear Dyn. 80(4), 1869–1882 (2015). https://doi.org/10.1007/s11071-014-1792-4

Chapter 2

The Eigen-Coordinates Method: Description of Blow-Like Signals

Abstract This chapter considers the application of the eigen-coordinates (ECs) method to the nontrivial example of the analysis of blow-like signals (BLS). These signals show a typical behaviour that starts in some segment of time, then rises and achieves a peak value, and finally decreases; they can originate in very different scenarios, e.g. the propagation of earthquakes, or the dissemination of sensational news on the Internet and of the comments that accompany it. The characteristic feature of most BLS is their branching and fractal structure. This observation helps to find the desired, simplified fitting function by starting from an initial function containing the nonlinear fitting parameters. This chapter carries out an in-depth analysis of available data (e.g. from asthma disease, acoustic signals from queen bees, car valves in the idling regime) with the help of the ECs method described in the first chapter. However, it is stressed that the proposed fractal model makes it possible to fit the envelopes of the BLS only. The internal structure of an arbitrary BLS, associated with high-frequency oscillations, deserves a separate analysis. The information provided could help researchers to find their own BLS and analyse their structure.

Keywords Fractal model · Blow-like signals (BLS) · The envelope of the BLS and its fit · The bronchial asthma disease, songs of queen bees and car valves noise

2.1 Introduction and Problem Formulation

The term "blow-like signal" (BLS) denotes the response of a complex system having a finite duration in time or space, starting from zero, achieving maximal/minimal values during a limited time interval, and in the end tending to zero again. Figure 2.1a–d depict typical smoothed BLSs. The formation of BLSs and their quantitative description are affected by many controllable and uncontrollable factors and draw the interest of many researchers (see the references at the end of the chapter for details on the origins of the problem and some related models). Many branches of applied sciences and engineering deal with BLSs. BLSs could be signals resulting from the analysis of medical diseases (Fig. 2.1a), acoustic signals identified as a part

© Springer Nature Switzerland AG 2020
R. R. Nigmatullin et al., New Digital Signal Processing Methods, https://doi.org/10.1007/978-3-030-45359-6_2



Fig. 2.1 (a) Blow-like fragment of an acoustic signal recorded from a patient affected by bronchial asthma. The initial treatment involves the two steps described at the beginning of paragraph 4. The yellow colour marks the “up” and “down” envelopes of this signal, which result from the application of criterion (2.20) in this chapter. (b) Blow-like fragment of the generated acoustic



Fig. 2.1 (continued) signal with a finite duration corresponding to the queen bee song. The entire song consists of 10–15 fragments having approximately the same length as depicted in this figure. For data treatment, arbitrary units are used for both the time variable and the signal. (c) Fragment of the initial signal from car valves. The spikes correspond to the valve knocks. The BLSs shown in Fig. 2.8 below result from the elimination of the trend by the procedure of optimal linear smoothing (POLS). (d) A BLS recorded from an earthquake of small intensity. The cyan colour denotes the "up" and "down" envelopes. The correlation coefficient between them achieves only the value 0.7. The absolute values of the duration and amplitudes of this earthquake are not essential, and so they are given in arbitrary units


of a “song” recorded from insects (Fig. 2.1b), car engine noise (Fig. 2.1c), signals from seismology data (Fig. 2.1d), etc. As shown in Fig. 2.1a, the envelope of the real signal (in yellow) does not precisely correspond to the definition of a BLS given above: some oscillations are evident, while a clearly expressed peak is not present. However, the data from which the plots are obtained do not contradict the definition of BLSs. Why is it so? The developments in this chapter provide the answer.

The quantitative description of BLSs (in terms of a finite set of fitting parameters) is, in general, a difficult problem. The existing models, which give a qualitative description of these signals, are not grounded in reliable assumptions or justifications. Besides, a surprising result, influenced by many factors, can emerge: similar signals of finite duration registered from different phenomena can be described in the same way by a relatively small set of fitting parameters.

More specifically, the problem can be formulated as follows. Is it possible to find a general reason justifying the generation of these BLSs? Is it possible to describe the BLSs quantitatively in terms of a general set of fitting parameters? This chapter shows that a general averaging procedure can describe the BLSs both at the micro- and mesoscale levels. Although this is an intuitive result, to the best of the authors’ knowledge, a broad mathematical analysis of the problem was absent in the literature. This chapter demonstrates how to reduce the function f(z), describing a set of micromotions, to a newly defined function that keeps only the asymptotic values of the original one, but on a mesoscale level. The mesoscale region denotes the intermediate level of scales (λ < η < Λ) at which the complex system studied exhibits its self-similar (fractal) properties in space or time.
For simple models describing a disordered medium (which could be in the form of self-similar branched fractals/channels), it becomes possible to derive and generalize the well-known Kohlrausch–Williams–Watts function [1, 2] and to find a new "universal" function (expressed, for some partial cases, in the form of the well-known log-normal distribution). This function could even be used to describe collective motions in a disordered medium. In this chapter, three different sets of BLSs are considered to assess the validity of the derived model:

1. The envelopes of the BLSs corresponding to bronchial asthma;
2. The envelopes of the acoustic songs generated by the queen bees (Apis mellifera L.);
3. The envelopes of acoustic signals recorded from car engine valves in the idling regime.

For additional justification, a particular fitting procedure based on the generalization of the eigen-coordinates (ECs) method is used to reduce the initial nonlinear fitting problem to a linear one [3, 4]. The new set of parameters corresponds to the desired global fitting minimum, which follows from the linear least-squares method (LLSM). This transformation makes it possible to avoid the initial guess of the fitting parameters, which is required by any nonlinear fitting


program used for finding the desired global minimum corresponding to a true set of the fitting parameters. The calculated set of fitting parameters describes the envelopes of the BLSs with high accuracy and can be influenced by some predominant/external factors. This observation helps to construct the so-called calibration curve, i.e. the dependence of a fitting parameter on some external factor (concentration of an additive, temperature, pressure, etc.) without detailed knowledge of the microscopic mechanisms underlying the complex phenomenon under study. In conclusion, the main results presented in this chapter can be summarized as follows. First, a reduced fractal model defined by a finite set of fitting parameters provides a characterization of BLSs. The reduced model takes advantage of self-similarity in the averaged micromotions, which is a distinctive property considered in the representation of many other complex systems [5, 6]. Second, the opportunity to describe complex BLSs by a finite set of fitting parameters opens new research perspectives in this field.

2.2 The Reduced Fractal Model and its Realisation in the Fractal-Branched Structures

The time-domain evolution of a BLS through a self-similar medium can be described by using, as a prototype, the collective model of relaxation developed to describe the cooperative behaviour of microemulsion droplets near the percolation threshold [7]. The following assumptions are made to obtain the generalized model of collective motion in the mesoscale:

A1. An elementary event transferring excitation/relaxation along the channel of length L_j is described by the microscopic relaxation function f(z/z_j), where z_j is a dimensional variable characterizing the jth stage of the whole cooperative process, and z is an intensive (not depending on the system size) variable characterizing the considered self-similar branching structure. Throughout this chapter, the basic variable z coincides with the dimensionless time z = t/τ. The parameter τ can coincide with a specific time associated with an elementary event of excitation/relaxation.

A2. It is assumed that z_j = a·L_j, where L_j is the "effective" length of a channel of relaxation on the jth stage of similarity, and a is a coefficient of proportionality. The possible lengths L_j are distributed by the self-similar law

L_j = L_0 k^j,   (2.1)

where L_0 is a minimal scale and k is a scaling factor.

A3. The number of neighbours (here one can imply excitation/transfer centres) involved in the local transfer/relaxation process located along the length L_j also obeys the scaling law


n_j = n_0 b^j,   (2.2)

where b is the scaling factor (b > 1), and n_0 is the minimal number of neighbour centres located near the fixed excitation centre. In other words, the propagation of a signal through the medium has a branching structure.

To describe the cooperative relaxation, one can start with the ideas developed in the papers [7–11], where the authors considered the transfer of excitations from a donor molecule to an acceptor molecule in various heterogeneous media. If an excitation transfer takes place through many parallel channels, then the following relationship for this cooperative type of relaxation holds:

Φ(t) = ∏_j [1 − c + c·exp(−W(R_j)·t)],   (2.3)

where Φ(t) is the relaxation function normalized to unity, t is the current time, c is the concentration of donors in the system, R_j is the distance between a donor and an acceptor (located on the jth site), and W(R_j) is the microscopic relaxation rate of the excitation transfer from the donor to the acceptor at the distance R_j. The product extends over all structure sites except for the origin. For the fractal-branched structures, relationship (2.3) can be generalized and written as

Φ_N(z) = ∏_{j=0}^{N−1} [f(zξ^j)]^{n_0 b^j} = exp[ n_0 Σ_{j=0}^{N−1} b^j ln f(zξ^j) ] ≡ exp[S_N(z)],   (2.4)

where z = t/(aL_0), ξ = 1/k, and the value of N refers to the last stage of the self-similarity transfer process taking place in the considered self-similar structure.

In general, the process of BLS propagation can be divided into two parts. The first part, after the application of an external potential, is related to the excitation (the act of excitation is described by the function g(z)), while the second part is related to the relaxation process and the disappearance of the initial excitation because of its dissipation inside the self-similar branching structure. Therefore, it is natural to generalize expression (2.4) and present the whole process in the form of the product of two factors:

Φ_N(z) = E_N(z)·R_N(z) = ∏_{j=1}^{N} g(z/z_j) · ∏_{j=1}^{N} [f(z/z_j)]^{n_j}.   (2.5)
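As an illustration, the product (2.4) can be evaluated numerically. The sketch below (a minimal Python example, with an assumed microscopic function f(z) = exp(−z) and arbitrarily chosen parameters n_0, b, ξ satisfying (2.7)) computes Φ_N(z) directly from the definition:

```python
import numpy as np

def phi_N(z, f, n0=1.0, b=1.5, xi=0.5, N=30):
    """Evaluate the cooperative relaxation function (2.4):
    Phi_N(z) = exp[ n0 * sum_{j=0}^{N-1} b**j * ln f(z * xi**j) ]."""
    j = np.arange(N)
    S = n0 * np.sum(b**j * np.log(f(z * xi**j)))  # S_N(z) from (2.4)
    return np.exp(S)

# An assumed microscopic relaxation function with f(0) = 1
f = lambda z: np.exp(-z)

z_grid = np.linspace(0.0, 5.0, 51)
phi = np.array([phi_N(z, f) for z in z_grid])
```

For this particular f(z) the sum is geometric in (bξ), so Φ_N(z) reduces to a simple exponential; less trivial choices of f(z) produce the stretched-exponential behaviour discussed below (2.6).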

Here it is supposed that an elementary or microscopic event of excitation is described by the finite function f(z), with z_j proportional to L_j as before, and with the effective excitation length defined by (2.1). Now it is necessary to argue why expressions (2.4) and (2.5) can be assumed as the most probable formulations for the description of the cooperative excitation/relaxation process in self-similar complex systems. To this aim, the following principal reasons can be considered:

1. For many partial forms imposed on the function f(z), expression (2.4) can lead to the stretched-exponential law of relaxation that is widely used for the description of cooperative processes in many disordered systems [9–11];
2. The evaluation of the general formula (2.4) for N >> 1 in the continuous approximation (by the Euler summation formula) leads to the expression

Φ(z) = A z^μ exp(−γz^ν − λz),   (2.6)

where the power-law exponents μ and ν depend on the asymptotic behaviour of the function f(z) (see equations (2.9a) and (2.9b), and the more accurate analysis in Appendix 1). The simplified expression (2.6) has been applied to describe the collective relaxation process in the complex system of microemulsion droplets near the percolation threshold [7] and in other disordered systems. The parameters γ and λ of the relaxation function are expressed in the form of integral expressions that functionally depend on the concrete type of the function f(z) describing the evolution of a variety of micromotions on a microscopic level. In this chapter, it is shown how to evaluate expression (2.4) not only in the continuous approximation but also for discrete variables, taking into account the influence of the existing log-periodic oscillations. For a broad class of functions f(z), expression (2.6) and its general forms (given below) can serve as a basic analytical expression for the description of many excitation/relaxation phenomena in different disordered systems.

The evaluation of (2.4) mainly depends on the asymptotic behaviour of the function f(z) and on the interval of location of the scaling parameters ξ and b. Initially, it is supposed that these scaling parameters satisfy the inequality

0 < ξb ≤ 1,   (2.7)

while the other case, ξb ≥ 1, is considered in Appendix 1. From expression (2.4), it follows that the sum S_N(z) satisfies a scaling equation of the type

S_N(zξ) = (1/b) S_N(z) + n_0 b^{N−1} ln f(zξ^N) − (n_0/b) ln f(z).   (2.8)
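The scaling relation (2.8) is an exact consequence of definition (2.4) for any finite N, which can be checked numerically; the function f and the parameter values below are arbitrary choices made only for the test:

```python
import numpy as np

def S_N(z, f, n0, b, xi, N):
    """S_N(z) = n0 * sum_{j=0}^{N-1} b**j * ln f(z * xi**j), as in (2.4)."""
    j = np.arange(N)
    return n0 * np.sum(b**j * np.log(f(z * xi**j)))

f = lambda z: np.exp(-z)          # assumed microscopic function for the check
n0, b, xi, N = 2.0, 1.4, 0.6, 25
z = 1.3

# Left- and right-hand sides of the scaling equation (2.8)
lhs = S_N(z * xi, f, n0, b, xi, N)
rhs = (S_N(z, f, n0, b, xi, N) / b
       + n0 * b**(N - 1) * np.log(f(z * xi**N))
       - (n0 / b) * np.log(f(z)))
```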

The following asymptotic behaviour of the function f(z) is considered:

• for |z| << 1, f(z) ≅ c_0 − c_1 z + …;   (2.9a)

• for |z| >> 1 (with β, r > 0), it results

f(z) = (A/z^β) exp(−rz) + (A_1/z^{1+β}) exp(−2rz) + … .   (2.9b)

The last relationship combines the exponential and the power-law asymptotic behaviours. For β ≠ 0 and r = 0, it describes the power-law asymptotic behaviour; conversely, a pure exponential asymptotic behaviour is obtained if r ≠ 0 and the z-powers are not present. In the limit case N >> 1, one can obtain a simplified functional equation for the sum S_∞(z) ≡ S(z):

S(zξ) = (1/b) S(z) − (n_0/b)(c_1/c_0 − r) z + (n_0/b) β ln(z) − (n_0/b) ln(A/c_0)
      ≡ (1/b) S(z) + Bz + C ln(z) + D,

B = −(n_0/b)(c_1/c_0 − r), C = (n_0/b) β, D = −(n_0/b) ln(A/c_0).   (2.10)

The derivation procedure of the last three terms in (2.10), following equations (2.9a) and (2.9b), is given in Appendix 1. The limits of the intermediate asymptotic behaviours for |z| >> 1 and |z| << 1 …

X_1(p) = ∫_{p_0=1}^{p} F_N^{(u)} du − ⟨…⟩, C_1 = λ_1 + λ_2 + λ_3,

X_2(p) = ∫_{p_0=1}^{p} (p − u) F_N^{(u)} du − ⟨…⟩, C_2 = −(λ_1 λ_2 + λ_1 λ_3 + λ_2 λ_3),   (3.23a)

X_3(p) = (1/2) ∫_{p_0=1}^{p} (p − u)^2 F_N^{(u)} du − ⟨…⟩, C_3 = λ_1 λ_2 λ_3,

X_4(p) = p^3 − ⟨…⟩, X_5(p) = p^2 − ⟨…⟩, X_6(p) = p − ⟨…⟩.   (3.23b)

The constants C_4, C_5, C_6 contain the unknown values of the derivatives y_3^{(r)}(p_0) (r = 3, 2, 1) at the initial point and are not essential for the calculation of the desired roots {λ_r} (r = 3, 2, 1) by the linear least-squares method (LLSM). The symbol ⟨…⟩ in (3.23a) and (3.23b) denotes the corresponding arithmetic mean, which should be subtracted from the corresponding functions Y(p), X_q(p) to satisfy the basic requirement of the LLSM that each variable has zero mean (see Chap. 1 and [22]). The unknown constants A_n (n = 0, 1, …, s) are also found by the LLSM from (3.20). The following important advantages are highlighted because they could enable a wide application of the GMV-function for the analysis of different random sequences:

(a) The approximate formal expression (3.21) provides a "universal" quantitative reduction of any random sequence to a set of parameters (A_n, λ_n), including the AUC and y_max values.

(b) These fitting parameters (A_n, λ_n) allow separating the values (amplitudes) of a random sequence y_j into the n optimal statistical groups (clusters) that correspond to a reduced description of the considered random sequence.

(c) If the number of samples is large (N > 100), this reduced presentation can be more informative with respect to an external factor than the numerical evaluation of the k roots from the system of equations (3.18). In this case, one can fit each function G_N^{(p)} and G_k^{(p)} entering into expression (3.19) separately and then compare their proximity in terms of the fitting parameters (A_n, λ_n). The comparison of the higher moments forming (in general) two different samplings (k ≠ N) is more precise and adequate. At k = 1, 2, the conventional reduction expressed in terms of the mean value and the standard deviation is obtained. The universal description expressed in terms of the initial integer moments Δ_1, Δ_2 can be unsatisfactory in most cases.

3 The Statistics of Fractional Moments and its Application for Quantitative. . .

(d) Numerical calculations show that the graphical representation of G_k^{(p)} with respect to G_N^{(p)} is very informative in comparing two random sequences. If they are statistically close to each other, the plot should give a straight line with slope 1 and intercept close to 0. Possible deviations from these values make it possible to detect something "strange" (i.e. the presence of a signal or a modification of the previous statistical behaviour) in the compared random samplings. Examples of this convenient representation are shown in Sect. 3.6. This simple analysis helps to detect self-similar (fractal) components located in the random sequence. Suppose that there is a segment in two compared random sequences where the amplitudes approximately satisfy the condition

y_j / u_m ≅ λ, (λ ≠ 1), j, m = l, l + 1, …, M.   (3.24a)

For this segment, it is easy to note that the two GMV-functions are proportional to each other:

G(y)_{M−l+1}^{(p)} = λ G(u)_{M−l+1}^{(p)}.   (3.24b)
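The GMV-function of definition (3.12) (the generalized, fractional-order mean of a positive sequence introduced earlier in the chapter) and the proportionality (3.24b) can be sketched as follows; the sequences and the moment grid below are arbitrary test choices:

```python
import numpy as np

def gmv(y, p):
    """Generalised mean value (GMV) function of order p (cf. (3.12)):
    G_N^{(p)} = [ (1/N) * sum_j y_j**p ]**(1/p), for a positive sequence y."""
    y = np.asarray(y, dtype=float)
    return (np.mean(y**p))**(1.0 / p)

rng = np.random.default_rng(7)
u = rng.uniform(0.5, 2.0, size=500)    # positive random sequence
lam = 3.0
y = lam * u                            # self-similar segment, y_j / u_j = lam

p_grid = np.linspace(-5, 5, 21)
p_grid = p_grid[p_grid != 0]           # p = 0 is handled as a limit, skipped here
ratios = np.array([gmv(y, p) / gmv(u, p) for p in p_grid])
```

The ratio G(y)^{(p)}/G(u)^{(p)} stays equal to λ for every moment p, which is exactly the detection criterion (3.24b) for self-similar segments.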

Condition (3.24b) allows detecting segments in two random sequences having self-similar (fractal) behaviour. It is worth noting that self-similar behaviour can be identified in many real random sequences if they are ordered into the so-called rank plot [15]. Expression (3.31), given below, can be used as a universal fitting function for describing the recognized fractal behaviour of random sequences. Equation (3.24b) also allows defining the statistical proximity (external correlations) of two different sequences. If they have N_1 and N_2 points, respectively, and, for some segment, satisfy the condition

G_{N_1}^{(p)} = λ G_{N_2}^{(p)} + b,   (3.25)

(λ ≅ 1 and b an arbitrary constant) in the space of moments, then these sequences can be considered as statistically close to each other. In this case, the external correlations between them are high. If λ takes an arbitrary value and b ≅ 0, then the sequences can be considered as self-similar. If λ and b start to depend on the parameter p, then the sequences can be considered as statistically different.

(e) The GMV-function approach allows a generalization of definition (3.12) to complex values of p. This two-dimensional generalization can be useful for a finer differentiation of one-dimensional random sequences. If the parameter p is described by a complex number p = u + iv, then the GMV-function (3.12) is transformed into a surface

G_N^{(p)} = Re G_N^{(p)} + i Im G_N^{(p)} = [ (1/N) Σ_{j=1}^{N} y_j^{u+iv} ]^{1/(u+iv)} = [ M_c(u, v) + i M_s(u, v) ]^{1/(u+iv)}
        = exp[ (1/(u + iv)) ln( (1/N) Σ_{j=1}^{N} exp((u + iv) ln y_j) ) ].   (3.26)

After some algebraic transformations, the real and imaginary parts of the GMV-function are defined by the following expressions:

M_c(u, v) = (1/N) Σ_{j=1}^{N} y_j^u cos(v ln y_j), M_s(u, v) = (1/N) Σ_{j=1}^{N} y_j^u sin(v ln y_j).   (3.27)

Re G_N^{(p)}(u, v) = exp[F(u, v)] cos[Φ(u, v)], Im G_N^{(p)}(u, v) = exp[F(u, v)] sin[Φ(u, v)],

F(u, v) = [ u·ln√(G_1²(u, v) + G_2²(u, v)) + v·φ(u, v) ] / (u² + v²),
Φ(u, v) = [ −v·ln√(G_1²(u, v) + G_2²(u, v)) + u·φ(u, v) ] / (u² + v²).   (3.28)

In turn, the functions G_1(u, v), G_2(u, v) and their argument φ(u, v) are determined as:

In turn, the functions G1(u, v), G2(u, v) and their argument φ(u, v) are determined as: "

# N    1 X  u y cos v  ln y j G1 ðu, vÞ ¼ ln ¼ ln ðM c ðu, vÞÞ, N j¼1 j " # N    1 X  u y sin v  ln y j G2 ðu, vÞ ¼ ln ¼ ln ðM s ðu, vÞÞ, N j¼1 j

G2 ðu, vÞ φðu, vÞ ¼atan : G1 ðu, vÞ

ð3:29Þ

The last expressions need a special investigation to identify the cases when they can be the most suitable and informative. (f) Definition (3.12) admits the generalization for two-dimensional random sequences:


"

N1 N2 X  1 X Gðp, qÞ ¼ Y N 1 N 2 j ¼1 j ¼1 1

pþq j1 ,j2

1 #pþq

,

ð3:30aÞ

2

This surface can be analyzed similarly to expression (3.12) if one replaces the external parameter p with p + q or with p_1 + q_1. For a random sequence having m different components, by analogy with (3.30a), one can write:

G(Σ_{i=1}^{m} p_i) = [ (1/(N_1 N_2 ⋯ N_m)) Σ_{j_1=1}^{N_1} Σ_{j_2=1}^{N_2} … Σ_{j_m=1}^{N_m} (Y_{j_1, j_2, …, j_m})^{Σ_{i=1}^{m} p_i} ]^{1/(Σ_{i=1}^{m} p_i)}.   (3.30b)

In [15], the authors identified a universal function that describes a wide class of real random sequences. It describes the envelope of the ordered amplitudes determined as the sequence of ranged amplitudes (SRA), and it can be obtained for detrended random sequences having relatively large sampling volumes (N ≥ 1000). In many real cases, this envelope (or rank plot) is described by the function

y(t) = A_0 + A_1 t^{ν_1} + A_2 t^{ν_2}.   (3.31)

The calculated fitting parameters of this function, A_1(f), A_2(f), ν_1(f), ν_2(f), considered with respect to an external factor f, can be analyzed in terms of integer moments by using (3.5) or with the help of the GMV-function (3.12). The SFM, based on the use of fractional moments, can be exploited for the construction of calibration curves, which show the variations of distinct quantitative parameters characterizing any random sequence (containing a trend) with respect to the desired external factor (concentration of an additive, value of the external field, temperature, pressure, pH-factor, etc.). Here it is again stressed that these new approaches based on the calculation of higher moments do not use any model assumptions based on traditional Gaussian statistics and its modifications. It is remarked that the application of the SFM method depends on the ratio between the identified number of additional points (k) and the volume of the sampling (N). If the sampling volume is large (N >> 1), when many initial moments are close to each other, the approach based on the approximate function (3.21) with subsequent calculation of the fitting parameters by the ECs method is preferred. Sometimes, for the construction of the required calibration curves, it is sufficient to use representation (3.25) and then take the AUC calculated for the relative difference (G_{N_2}^{(p)} − G_{N_1}^{(p)}) / G_{N_1}^{(p)} as a quantitative parameter. Examples of this simplified approach for the detection of the influence of an external factor are considered in the sequel.
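The SRA construction and the fitting function (3.31) can be sketched as follows; for simplicity, the exponents ν_1, ν_2 are held fixed so that the amplitudes A_0, A_1, A_2 follow from the LLSM (a simplification — in the full ECs treatment the exponents are fitted as well):

```python
import numpy as np

def sra(y):
    """Sequence of ranged amplitudes: amplitudes sorted in descending order."""
    return np.sort(np.asarray(y, dtype=float))[::-1]

def fit_rank_plot(y, nu1, nu2):
    """LLSM fit of the rank plot by (3.31), y(t) = A0 + A1*t**nu1 + A2*t**nu2,
    for fixed trial exponents nu1, nu2."""
    r = sra(y)
    t = np.arange(1, len(r) + 1, dtype=float)
    A = np.column_stack([np.ones_like(t), t**nu1, t**nu2])
    coef, *_ = np.linalg.lstsq(A, r, rcond=None)
    return coef, A @ coef

# synthetic, monotonically decreasing rank plot generated directly from (3.31)
t = np.arange(1, 1001, dtype=float)
truth = 5.0 - 0.5 * t**0.3 - 0.01 * t**0.8
coef, fit = fit_rank_plot(truth, 0.3, 0.8)
```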

3.4 The Generalised Pearson Correlation Function, External and Internal Correlations, and Some Useful Inequalities

This section considers the internal correlations and, to this aim, generalizes the conventional Pearson correlation coefficient, defined as the cosine of the angle between two vectors in the space of N measurements:

cos(φ) = (V_1 · V_2) / [ √(V_1²) √(V_2²) ], (V_1 · V_2) = Σ_{j=1}^{N} Δy_{1j} Δy_{2j},
Δy_j = y_j − ⟨y⟩, ⟨y⟩ = (1/N) Σ_{j=1}^{N} y_j.   (3.32)

Generalization of this expression is based on the complete set of the fractional moments. For the calculation of the internal correlations, a more general relationship is used:

GPCF_p = GMV_p(s_1, s_2) / √( GMV_p(s_1, s_1) · GMV_p(s_2, s_2) ),   (3.33)

where GPCF stands for generalized Pearson correlation function, and the GMV-function of the K-th order is

GMV_p(s_1, s_2, …, s_K) = [ (1/N) Σ_{j=1}^{N} ( nrm_j(s_1) nrm_j(s_2) ⋯ nrm_j(s_K) )^{mom_p} ]^{1/mom_p}.   (3.34)

It employs normalized sequences nrm_j(y), with 0 < nrm_j(y) < 1, and the current value of the moment, mom_p. More specifically, for j = 1, 2, …, N, it holds:

nrm_j(y) = [ (y_j + |y_j|) − min_j(y_j − |y_j|) ] / [ max_j(y_j + |y_j|) − min_j(y_j − |y_j|) ],
or
nrm_j(y) = [ y_j − min_j(y_j) ] / [ max_j(y_j) − min_j(y_j) ]  (for an initially positive sequence),   (3.35)

where y_j denotes the initial random sequence, which can contain a trend or is compared with another trendless sequence. The sequences are normalized so that the minimum of the GMV-function is zero, while the maximum coincides


with the maximum of the normalized sequence. Moreover, the set of moments is computed as follows:

mom_p = e^{Ln_p}, Ln_p = mn + (p/P)(mx − mn), p = 0, 1, …, P,   (3.36)

where Ln_p takes values between mn and mx, which are the limits of the moments in the uniform logarithmic scale. Usually, for most practical cases, mn = −15, mx = 15, and 50 ≤ P ≤ 100. This choice is supported by the fact that the transition region of the random sequences expressed in the form of GMV-functions is usually concentrated in the interval [−10, 10]. The extension to [−15, 15] is considered for an accurate calculation of the limit values of the functions in the space of fractional moments. Finally, note that GPCF_p determined by (3.33) coincides with the conventional Pearson correlation coefficient at mom_p = 1 (Ln_p = 0). If the limits mn and mx have opposite signs and take sufficiently large absolute values, then the GPCF has two plateaus: GPCF_mn = 1 for small values of mn, and another limit value GPCF_mx that depends on the degree of internal correlation between the two compared random sequences. This right-hand limit, say Lm, satisfies the following condition:

M ≡ min(GPCF_p) ≤ Lm ≡ GPCF_mx ≤ 1.   (3.37)

The appearance of the two plateaus implies that all information on possible internal correlations is complete, and a further increase of |mn| and mx becomes useless. Several tests showed that the highest degree of correlation between two random sequences is achieved when Lm = 1, and the lowest when Lm = M. Here, consider some instructive examples related to the comparison of the statistical properties of two distributions, i.e. the beta and the exponential distributions. Figure 3.1a, b show the comparison of the conventional beta-distribution (selected as the pattern one) with segments of the same distribution and with a random sequence that represents a mixture of the beta and the exponential distributions. The comparison results in three different behaviours of the internal correlations in terms of the GPCF (Fig. 3.2). Numerous tests showed that the maximal correlation is achieved in the limit Lm = 1, while the minimal correlations are obtained at Lm = M. This condition holds for all random sequences and allows introducing a new quantitative parameter, the so-called complete correlation factor (CCF):

CCF = (Lm − M)/(1 − M), 0 ≤ CCF ≤ 1; ICF = M·Lm, M² < ICF < M.   (3.38)

In some cases, it is convenient to consider also the internal correlation factor (ICF), which is located in the interval [M², M]. The last factor takes into account the existing non-zero correlations.
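The chain (3.33)–(3.36) and the CCF of (3.38) can be sketched as follows. For simplicity, only the positive-sequence branch of the normalization (3.35) is used, and the moment limits are narrowed to [−5, 5] to stay comfortably inside floating-point range (an implementation choice for this sketch, not part of the method):

```python
import numpy as np

def nrm(y):
    """Normalisation to [0, 1] — the positive-sequence branch of (3.35)."""
    y = np.asarray(y, dtype=float)
    return (y - y.min()) / (y.max() - y.min())

def gmv_k(p, *seqs):
    """GMV-function of K-th order, (3.34), for already normalized sequences."""
    prod = np.prod(np.vstack(seqs), axis=0)
    return np.mean(prod**p)**(1.0 / p)

def gpcf(y1, y2, mn=-5.0, mx=5.0, P=50):
    """Generalised Pearson correlation function (3.33) evaluated on the
    log-uniform moment grid (3.36)."""
    s1, s2 = nrm(y1), nrm(y2)
    moms = np.exp(mn + np.arange(P + 1) / P * (mx - mn))
    return np.array([gmv_k(p, s1, s2)
                     / np.sqrt(gmv_k(p, s1, s1) * gmv_k(p, s2, s2))
                     for p in moms])

rng = np.random.default_rng(3)
a = rng.uniform(0.0, 1.0, size=300)
g_same = gpcf(a, a)                                   # identical sequences
g_diff = gpcf(a, rng.uniform(0.0, 1.0, size=300))     # independent sequences

M, Lm = g_diff.min(), g_diff[-1]                      # as in (3.37)
ccf = (Lm - M) / (1.0 - M)                            # CCF of (3.38)
```

For two identical sequences the GPCF equals 1 for every moment, while for independent sequences it drops below 1 away from mom_p = 1, which is exactly what the two-plateau picture describes.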


" # 8 k P > > λ x q > > 0 > >

> > :

meanðΔN ðxÞÞ

9 > > > > = 100%

> > > > ;

ð3:53Þ

To synthesize, the reduction procedure is defined by (3.51)–(3.53) and solves (3.48) to determine the reduced set of parameters {(ws, λs), s ¼ 1, 2,. . ., k}, with k >1 X  k¼1

    x x Ack cos 2πk þ Ask sin 2πk : Tx Tx

ð6:2Þ

Only a segment of the Fourier series is deliberately shown because, in practice, data points are always discrete, and the number of "modes" k = 1, 2, …, K (coinciding with the coefficients of the Fourier decomposition) remains finite. Hereafter, the capital letter K denotes the finite mode. The choice of K affects the accuracy in fitting the experimental data. As shown later, it is possible to determine the value of K necessary to limit the relative error within the acceptable interval [1–10%]. This interval provides the desired fit of the measured function y(x) to Pr(x) with an initially chosen number of modes k = 1, 2, …, K. From (6.1) and (6.2), one important conclusion follows. For an ideally reproducible experiment, which fulfils condition (6.1), the F-transform (6.2) can be used as an intermediate model (IM), and the 2K + 2 decomposition coefficients (A_0, Ac_k, As_k), together with the unknown value of T_x (which should be included independently as an additional nonlinear fitting parameter), constitute the set of fitting parameters of the IM. This set defines the conventional "amplitude-frequency" response (AFR) associated with the recorded "signal" y(x) ≅ Pr(x) and coinciding with the measured function y(x) ∈ {y_m(x)} (m = 1, 2, …, M). Here the limits of interpretation of the conventional F-transform are expanded to any deterministic variable x (including frequency, if the input variable x coincides with some current frequency ω), and it is shown that a segment of this transformation (following definition (6.1)) can describe an ideal experiment. Another functional equation generalises expression (6.1):

F(x + T_x) = aF(x) + b.   (6.3)

This functional equation was first introduced in [11] by the first author and was later defined as a quasi-periodic (QP) process [12]. The solution of this equation can be written in the following form [11]:

a ≠ 1: F(x) = exp(λ x/T_x) Pr(x) + c_0, λ = ln(a), c_0 = b/(1 − a);
a = 1: F(x) = Pr(x) + b·(x/T_x).   (6.4)
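That (6.4) indeed solves the functional equation (6.3) can be verified numerically; the periodic function Pr(x) and the constants below are arbitrary test choices:

```python
import numpy as np

T, a, b = 2.0, 1.3, 0.4            # assumed period and QP constants (a != 1)
lam = np.log(a)                    # lambda = ln(a), as in (6.4)
c0 = b / (1.0 - a)                 # c0 = b / (1 - a)

def Pr(x):
    """Any T-periodic function works; a short Fourier segment is used here."""
    return 1.0 + 0.5 * np.cos(2 * np.pi * x / T) - 0.2 * np.sin(4 * np.pi * x / T)

def F(x):
    """Solution (6.4) for the case a != 1."""
    return np.exp(lam * x / T) * Pr(x) + c0

x = np.linspace(0.0, 10.0, 201)
lhs = F(x + T)                     # left-hand side of (6.3)
rhs = a * F(x) + b                 # right-hand side of (6.3)
```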

The interpretation of this equation is given in [11, 12] and in the previous chapter. From Eq. (6.3) it follows:

F(x + mT_x) = aF(x + (m − 1)T_x) + b, m = 1, 2, …, M.   (6.5)


6 The General Theory of Reproducible and Quasi-Reproducible Experiments

This result can be interpreted as the repetition of a set of successive measurements corresponding to an ideal experiment with the shortest memory, when the subsequent response "remembers" only the nearest measurement. It is assumed that the stable properties of the object studied during the period T_x are preserved, i.e. the constants a and b do not depend on time or on the input variable x. Such a situation, in spite of its initial attractiveness, cannot occur in reality, because uncontrollable factors influence the parameters of the function describing the object (as shown later in the chapter). In practice, it is expected that all the constant parameters, including the period T_x, depend on the current number of the measurement m. It is convenient to evaluate this dependence with respect to the mean ⟨y⟩:

y_m(x) = a_m⟨y⟩ + b_m, or F(x + mT_x(m)) = a_m⟨F(x)⟩ + b_m, m = 1, 2, …, M,
⟨y(x)⟩ = (1/M) Σ_{m=1}^{M} y_m(x).   (6.6)

This relationship was used in [13] on different available data for a quantitative presentation in terms of the AFR belonging to the Prony spectrum. Nevertheless, the solution (6.4) is valid in this case also, so that it is possible to express approximately the current measurement y_m(x) in terms of the function (6.4) corresponding to the chosen IM. From this IM, one can obtain a fitting function to describe the reproducible measurements with the shortest memory (6.6). Therefore, for each measurement, one can easily derive the following fitting function from expression (6.4):

y_m(x) ≅ F_m(x) = B_m + E_m exp(λ_m x/T_x(m)) + Σ_{k=1}^{K} [ Ac_k(m) yc_k(x, m) + As_k(m) ys_k(x, m) ],
yc_k(x, m) = exp(λ_m x/T_x(m)) cos(2πk x/T_x(m)),
ys_k(x, m) = exp(λ_m x/T_x(m)) sin(2πk x/T_x(m)).   (6.7)

The period T ≅ T_x(m) determines the time interval in which one cycle of measurement terminates. However, an experimentalist prefers to work not only with the temporal variable: depending on the experimental conditions and the type of equipment used, it is frequent to refer to different variables (e.g. wavelength, scattering angle, electric current, magnetic field intensity, frequency, etc.). In this case, the relationship between the imposed "period" T_x defined above and the true period T is not known. However, the nonlinear fitting parameter T_x in (6.7) can be calculated from the fitting procedure to obtain an accurate fit (by minimising the relative error). The optimal value T_opt is reasonably located within the interval [0.5·T_max, 2·T_max], where the seed (inoculating) value of T_max, in turn, should be defined as T_max = Δx·L. The value Δx coincides with the discretisation step, and L = x_max − x_min determines the length of the interval associated with the current discrete variable x. This


important observation helps to find the optimal values of T_opt and K from the minimisation of the surface of the relative error between the measured function y(x) and the fitting function (6.7):

min(RelErr(T, K)) ≅ min{ stdev[ y(x) − F(x, T_opt, K) ] / mean|y(x)| } · 100%,
1% < min(RelErr) < 10%, T_opt ∈ [0.5·T_max, 2·T_max],
T_max = (x_j − x_{j−1})·(x_max − x_min) ≡ Δx·L.   (6.8)
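The fitting scheme implied by (6.2) and (6.8) — a linear least-squares fit of the Fourier segment for each trial period, followed by a scan of T over [0.5·T_max, 2·T_max] — can be sketched as follows (synthetic data with a known period; the grid density is an arbitrary choice):

```python
import numpy as np

def fourier_fit(x, y, T, K):
    """LLSM fit of the truncated Fourier segment (6.2) for a trial period T."""
    cols = [np.ones_like(x)]
    for k in range(1, K + 1):
        cols.append(np.cos(2 * np.pi * k * x / T))
        cols.append(np.sin(2 * np.pi * k * x / T))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

def rel_err(y, fit):
    """Relative fitting error, as in (6.8), in percent."""
    return 100.0 * np.std(y - fit) / np.mean(np.abs(y))

# synthetic 'measurement' with a known period
x = np.linspace(0.0, 10.0, 500)
T_true = 2.5
y = 1.0 + np.cos(2 * np.pi * x / T_true) + 0.3 * np.sin(6 * np.pi * x / T_true)

K = 5
T_grid = np.linspace(0.5 * T_true, 2.0 * T_true, 61)
errs = np.array([rel_err(y, fourier_fit(x, y, T, K)) for T in T_grid])
T_opt = T_grid[np.argmin(errs)]
```

Because the fit is linear once T is fixed, the two-dimensional minimisation over (T, K) reduces to a cheap one-dimensional scan at a chosen K, exactly as the text argues below (6.8).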

Direct calculations show that, instead of minimising the surface RelErr(T, K) with respect to the unknown variables T_x and K, one can minimise its cross-section at a fixed value of K. This initially chosen value of K should satisfy the condition given in the second row of (6.8). This procedure should be applied to each successive measurement; therefore, one can omit the index m (m = 1, 2, …, M) in (6.8). The experimentalist desires to realize conditions close to the "ideal" experiment with memory expressed by relationship (6.3). To this aim, after averaging the set of constants a_m and b_m together with the measured functions y_m(x), Eq. (6.6) can be substituted by an approximate equation that is close to the ideal case (6.3):

Y(x + ⟨T_x⟩) ≅ ⟨a⟩Y(x) + ⟨b⟩,
Y(x + ⟨T_x⟩) = (1/(M − 1)) Σ_{m=2}^{M} y_m(x), Y(x) = (1/(M − 1)) Σ_{m=1}^{M−1} y_m(x).   (6.9)

The second row in (6.9) defines the averaged functions obtained from the given set of reproducible measurements. This functional equation defines the experiment reduced to its mean values (REMV). The constants a_m and b_m are calculated from (6.6) as slopes and intercepts with respect to the mean value ⟨y⟩:

a_m = slope(⟨y(x)⟩, y_m(x)), b_m = intercept(⟨y(x)⟩, y_m(x)), m = 1, …, M,
⟨a⟩ = (1/M) Σ_{m=1}^{M} a_m, ⟨b⟩ = (1/M) Σ_{m=1}^{M} b_m.   (6.10)
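The computation of the constants a_m and b_m in (6.10) reduces to an ordinary linear regression of each measurement against the mean curve; a minimal sketch with exactly reproducible (noise-free) synthetic measurements, so that the constants are recovered exactly:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100)
base = np.sin(2 * np.pi * x) + 2.0          # underlying mean behaviour
a_true = np.array([0.9, 1.0, 1.1])          # chosen so that <a> = 1
b_true = np.array([-0.2, 0.0, 0.2])         # chosen so that <b> = 0
Y = np.array([a * base + b for a, b in zip(a_true, b_true)])  # M = 3 measurements

y_mean = Y.mean(axis=0)                     # <y(x)> from (6.6)
slopes, intercepts = [], []
for ym in Y:
    a_m, b_m = np.polyfit(y_mean, ym, 1)    # slope/intercept w.r.t. <y>, (6.10)
    slopes.append(a_m)
    intercepts.append(b_m)
```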

The complete set of measurements is necessary to justify the functional equation (6.6). The requirement of the shortest memory is not met in real experiments, and correlations exist in the general case. Therefore, instead of Eq. (6.6), it is necessary to consider an ideal situation in which the memory covers L neighbouring measurements. For this case, it is possible to write a more general functional equation:

F(x + L·T_x) = Σ_{l=0}^{L−1} a_l F(x + l·T_x) + b.   (6.11)


6 The General Theory of Reproducible and Quasi-Reproducible Experiments

The set of parameters {al, b} can easily be calculated by the linear least-squares method (LLSM) by assuming that L = M, where M coincides with the last measurement. However, it is not yet known how to calculate the true value of L, which belongs to the interval [1, M]. The true value of L is probably associated with deeper physical reasons and is influenced by the type of equipment used; its determination calls for further research. The set of constants {al} (l = 0, 1, ..., L−1) can be interpreted quantitatively as the influence of memory (strong correlations) between successive measurements. The solution of the generalized functional equation (6.11) was given in the previous chapter and can be found in [11–13]. This solution can be put in two different forms:

$$A{:}\quad \sum_{l=0}^{L-1} a_l \neq 1{:}\quad F(x) = \sum_{l=1}^{L} (\kappa_l)^{x/T_x}\,\mathrm{Pr}_l(x) + c_0, \qquad c_0 = \frac{b}{1 - \sum_{l=0}^{L-1} a_l},$$
$$B{:}\quad \sum_{l=0}^{L-1} a_l = 1{:}\quad F(x) = \sum_{l=1}^{L} (\kappa_l)^{x/T_x}\,\mathrm{Pr}_l(x) + c_1\,\frac{x}{T_x}, \qquad c_1 = \frac{b}{L - \sum_{l=0}^{L-1} l\,a_l}. \tag{6.12}$$

Here, the functions Prl(x) coincide with a set of periodic functions (l = 1, 2, ..., L) defined by expression (6.2), and the values κl coincide with the roots of the characteristic polynomial

$$P(\kappa) = \kappa^L - \sum_{l=0}^{L-1} a_l\,\kappa^l = 0. \tag{6.13}$$
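The roots κl of (6.13) can be obtained numerically from the fitted constants al. A minimal numpy sketch (function name illustrative):

```python
import numpy as np

def memory_roots(a):
    """Roots of the characteristic polynomial (6.13):
    P(κ) = κ^L − a_{L−1} κ^{L−1} − ... − a_1 κ − a_0 = 0.
    `a` holds [a_0, ..., a_{L−1}] obtained from the LLSM fit of (6.11)."""
    # numpy.roots expects coefficients ordered from the highest power down
    coeffs = np.concatenate(([1.0], -np.asarray(a, dtype=float)[::-1]))
    return np.roots(coeffs)
```

For example, a0 = −0.5 and a1 = 1.5 give Σ al = 1 (case B), and indeed one of the computed roots equals 1, in line with the remark below about the unit root.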

In general, these roots can be positive, negative, g-fold degenerate (with degeneracy value g), and even complex-conjugated. It is worth noting that, for the case (B), one of the roots κl coincides with the unit value (κ1 = 1), which leads to the purely periodic solution. As before, the finite set of the unknown periodic functions Prl(x, Tx) (l = 1, 2, ..., L) is determined by the decomposition coefficients $Ac_k^{(l)}$ and $As_k^{(l)}$, with l = 1, 2, ..., L and k = 1, 2, ..., K:

$$\mathrm{Pr}_l(x, T_x) = A_0^{(l)} + \sum_{k=1}^{K\gg 1}\left[Ac_k^{(l)}\cos\left(2\pi k\frac{x}{T_x}\right) + As_k^{(l)}\sin\left(2\pi k\frac{x}{T_x}\right)\right]. \tag{6.14}$$

The conventional Prony decomposition [14–16] is widely used in the processing of different random signals and is usually expressed as

$$PR(x) = p_0 + \sum_{k=1}^{K}\left[ac_k\,e^{\lambda_k x}\cos(\omega_k x) + as_k\,e^{\lambda_k x}\sin(\omega_k x)\right]. \tag{6.15}$$

Equation (6.15) contains 2K nonlinear parameters (λk, ωk) and 2K + 1 linear parameters (p0, ack, ask). This decomposition, which does not have any specific meaning, is used as an alternative to other transformations (Fourier, wavelet, Laplace, etc.) commonly adopted in signal processing. The fit of different random signals by (6.15) represents a serious problem in itself [14–16]. An original solution to this problem was proposed recently in papers [17, 18], but the criterion that justifies the specific advantages of this decomposition over others remains unknown. The principal difference emerges by comparing solutions (A) and (B) of (6.12) with expression (6.15). Equations (6.12) have an additional interpretation associated with the memory of successive measurements and contain only one nonlinear parameter, Tx. All possible solutions of the general functional equation (6.11) for different roots were considered in [11–13]. Indeed, a set of reproducible measurements having a memory associated with L neighbouring measurements should satisfy the following functional equation:

$$y_m(x) \cong F\bigl(x + (L+m)T_x(m)\bigr) = \sum_{l=0}^{L-1} a_l^{(m)}\,F\bigl(x + (l+m)T_x(m)\bigr) + b_m, \quad m = 1, 2, \ldots, M. \tag{6.16}$$

As before, one can apply the REMV procedure close to the ideal case (6.11), initially with the help of the relationships

$$Y(x + L\langle T_x\rangle) = \sum_{l=0}^{L-1} \langle a_l\rangle\,Y(x + l\langle T_x\rangle) + \langle b\rangle,$$
$$Y(x + l\langle T_x\rangle) = \frac{1}{M-l+1}\sum_{m=l}^{M} F\bigl(x + (m+l)T_x(m)\bigr), \quad l = 1, 2, \ldots, L,\ L < M. \tag{6.17}$$

The second row in (6.17) represents a sliding-window averaging procedure to calculate the desired mean functions. From a mathematical point of view, the functional equations (6.17) and (6.11) are similar to each other, but they have a different meaning. The functional equation (6.11) is associated with an ideal experiment with memory, while the functional equation in (6.17) describes the typical situation of a real experiment, when the random behaviour of the initially measured functions is reduced to its successive mean values. The averaged coefficients ⟨al⟩ and ⟨b⟩ in (6.17) can be found by the LLSM from the first row of (6.17). In practice, it is desirable to use the minimum value of L, because L considerably increases the number of fitting parameters needed for the final fit of the measured function. The general solution of this equation is similar to (6.12) and can be omitted.
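The REMV averaging and the LLSM estimation of ⟨al⟩ and ⟨b⟩ in (6.17) can be sketched as follows. The mapping of shifts to measurement indices is an assumption of this sketch (a common window of M − L successive measurements per shift), and the names are illustrative.

```python
import numpy as np

def remv_averages(Y, L):
    """Sliding-window averages over successive measurements (cf. the second
    row of (6.17)). Y: (M, N) array. Returns L+1 averaged curves Y_0..Y_L,
    where Y_l averages the W = M − L measurements starting at index l."""
    M = Y.shape[0]
    W = M - L
    return [Y[l:l + W].mean(axis=0) for l in range(L + 1)]

def fit_memory(Y, L):
    """LLSM for <a_0..a_{L−1}>, <b> in Y_L = Σ_l <a_l> Y_l + <b>
    (first row of (6.17))."""
    Yl = remv_averages(Y, L)
    A = np.column_stack(Yl[:L] + [np.ones_like(Yl[0])])
    coef, *_ = np.linalg.lstsq(A, Yl[L], rcond=None)
    return coef[:L], coef[L]
```

When the measurements exactly obey y_{m+1} = a·y_m + b, the averaged curves inherit the same relation, so the fit recovers a and b.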


Equation (6.17) has a clear meaning: it corresponds to the linear representation of the potential memory effect existing between repeated measurements after the averaging procedure. These coefficients also reflect (to some extent) the influence of uncontrollable experimental factors coming from the impact of the measurement equipment. Earlier, such factors were taken into account only statistically, through some unjustified suppositions, while the proposed theory suggests a direct way for their evaluation. The objective here is to demonstrate how to exclude the influence of the uncontrollable factors, defined here and below as an apparatus (instrumental) function, and to reduce the given set of reproducible measurements to an ideal experiment containing the set of periodic functions only. From Eq. (6.11), it follows that the functions F(x), F(x + T), ..., F(x + (L−1)T) are linearly independent and available from experimental measurements in the averaged sense (see expression (6.17)) or in another sense. Therefore, the following system of linear equations can be written:

$$F(x) = \sum_{l=1}^{L} EP_l(x) + c_0,$$
$$F(x + T) = \sum_{l=1}^{L} \kappa_l\,EP_l(x) + c_0,$$
$$\cdots$$
$$F\bigl(x + (L-1)T\bigr) = \sum_{l=1}^{L} \kappa_l^{L-1}\,EP_l(x) + c_0, \tag{6.18}$$

where $EP_l(x) = (\kappa_l)^{x/T_x}\,\mathrm{Pr}_l(x)$, l = 1, ..., L, or $\mathrm{Pr}_l(x) = EP_l(x)\,(\kappa_l)^{-x/T_x}$. The solution of this linear system gives the unknown functions EPl(x) and then restores the unknown set of periodic functions Prl(x). Therefore, it becomes possible to reduce a wide class of reproducible measurements, initially presented in the frame of the desired IM and corresponding to the Prony decomposition, to an ideal experiment with only one periodic function. Note that the L-th order determinant of the system (6.18) coincides with the well-known Vandermonde determinant [19], which is not null if all roots of Eq. (6.13) are different. Finally, there is only one ideal periodic function that corresponds to the reduction of the real set of measurements to an ideal (perfect) experiment:

$$Pf(x) = \sum_{l=0}^{L-1} w_l\,\mathrm{Pr}_l(x), \quad \sum_{l=0}^{L-1} w_l = 1, \tag{6.19}$$

where the wl are weighting factors. In general, this function could serve as a "keystone" in the arch of the "bridge" connecting the proposed theory and experiment. In the simplest case (6.4), the following obvious relationships hold:

$$\langle a\rangle \neq 1{:}\quad \mathrm{Pr}(x) = (\langle a\rangle)^{-x/T_x}\left[Y(x) - \frac{\langle b\rangle}{1 - \langle a\rangle}\right],$$
$$\langle a\rangle = 1{:}\quad \mathrm{Pr}(x) = Y(x) - \langle b\rangle\,\frac{x}{T_x}. \tag{6.20}$$
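For the simplest case, the elimination of the apparatus function in (6.20) is a short computation. A minimal numpy sketch (names illustrative), assuming ⟨a⟩ and ⟨b⟩ have already been estimated:

```python
import numpy as np

def ideal_periodic(Y, x, a, b, Tx):
    """Recover the ideal periodic component Pr(x) from the averaged
    measurement Y(x) via (6.20), with a = <a>, b = <b>."""
    if np.isclose(a, 1.0):
        return Y - b * x / Tx                        # case <a> = 1
    return a ** (-x / Tx) * (Y - b / (1.0 - a))      # case <a> != 1
```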

Therefore, these simple formulas give a solution for the elimination of the apparatus (instrumental) function based on the verified suppositions (6.9) and (6.17). In the same manner, one can consider the case (B) in (6.12). The solution for this case is trivial and similar to the linear system (6.18), replacing the left-hand side by the functions

$$\Phi(x + lT) = F(x + lT) - c_1\left(\frac{x}{T} + l\right), \quad l = 0, 1, \ldots, L-1. \tag{6.21}$$

It is also instructive to give the formulas for the case L = 2. These expressions will be used for the approximate treatment of the real data considered later. In this case, simple algebraic manipulations yield the following expressions, corresponding to an ideal experiment:

$$Pf(x) = \mathrm{Pr}_1(x) + \mathrm{Pr}_2(x),$$
$$\mathrm{Pr}_1(x) = \frac{Y_1(x) - \kappa_2 Y_0(x)}{\kappa_1 - \kappa_2}\,\exp\left(-\frac{x}{T_x}\ln\kappa_1\right), \quad \mathrm{Pr}_2(x) = \frac{\kappa_1 Y_0(x) - Y_1(x)}{\kappa_1 - \kappa_2}\,\exp\left(-\frac{x}{T_x}\ln\kappa_2\right),$$
$$Y_0(x) = F(x) - c_0, \quad Y_1(x) = F(x + T_x) - c_0. \tag{6.22}$$

Before proceeding further, it is instructive to summarise the basic conclusions of the introduced theory.

2.1 The extended Fourier decomposition (6.2), for any input variable x, should be used as a fitting function only for the quantitative description of an "ideal" experiment (defined by expression (6.1)). This new interpretation of the F-transform should add new elements for a proper application of this transformation in random data processing.

2.2 Successive measurements in most reproducible experiments are strongly correlated (expression (6.11)) and, hence, have a memory. This fact was confirmed recently on different real data sets [20, 21]. For their quantitative description, it is necessary to use the Prony decomposition (6.12) as the fitting function associated with the IM.

2.3 The general expression (6.18) can be derived, which helps to eliminate the memory between reproducible experiments and reduce the quantitative description of successive measurements to an ideal experiment without memory (see expression (6.1)). This memory effect is presumably associated with the apparatus (instrumental) functioning.
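The L = 2 reduction (6.22) translates directly into code. A minimal sketch, assuming real, distinct, positive roots κ1 ≠ κ2 (names illustrative):

```python
import numpy as np

def reduce_L2(Y0, Y1, x, k1, k2, Tx):
    """Eq. (6.22): recover the two periodic components and the 'ideal'
    function Pf(x) from Y0(x) = F(x) − c0 and Y1(x) = F(x + Tx) − c0."""
    Pr1 = (Y1 - k2 * Y0) / (k1 - k2) * np.exp(-(x / Tx) * np.log(k1))
    Pr2 = (k1 * Y0 - Y1) / (k1 - k2) * np.exp(-(x / Tx) * np.log(k2))
    return Pr1, Pr2, Pr1 + Pr2
```

Shifting a Prony pair by one period multiplies each component by its root, which is exactly what the linear combination above inverts.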

6.3 Description of the General Algorithm and Its Testing on Available Data

A deep analysis of many data sets (see, for example, papers [20, 21]) brings out the efficiency of the following simplified algorithm, which can easily be implemented in many common programming languages. The proposed algorithm includes the following basic steps:

S1 From the available data, calculate the mean measurement as

$$\langle y(x)\rangle = \frac{1}{M}\sum_{m=1}^{M} y_m(x), \tag{6.23}$$

and then the distributions of the corresponding slopes and intercepts, which highlight the "marginal" measurements with the largest deviations from the mean value (considered as the centre of the statistical cluster):

$$SL_m = \mathrm{slope}\bigl(\langle y(x)\rangle, y_m(x)\bigr), \quad Int_m = \mathrm{intercept}\bigl(\langle y(x)\rangle, y_m(x)\bigr). \tag{6.24}$$

If ordered with respect to their deviations, these values form the desired unit-value distributions, corresponding to the averaged measurement with a slope equal to one and a zero intercept. By computing the slopes after eliminating the mean values, $\Delta y_m(x) = y_m(x) - D_m$ with $D_m = \frac{1}{N}\sum_{j=1}^{N} y_m(x_j)$, the values of the intercepts become negligible and their analysis can be omitted at the first stage. The remaining slopes distribution SLm (m = 1, 2, ..., M) can be characterised quantitatively by the range of this distribution with respect to the mean value. If all deviations in the positive and negative directions can be described by the values 1 + Δup and 1 − Δdn, then the quantitative characteristic

$$R_{sl} = 1 + \Delta_{up} - (1 - \Delta_{dn}) = \Delta_{up} + \Delta_{dn} \tag{6.25}$$

indicates the quality and accuracy of the equipment used for the measurements. In particular, the accuracy is high if this value is close to zero, while an increase of this value during successive measurements is related to instability and low accuracy of the equipment. Deeper evaluations to characterise the quality and stability of the equipment will be given below.

S2 Considering the distribution of the slopes, the measured functions with maximal deviations, identified as yup(x) and ydn(x), form two limits. In particular, they represent two limiting boundaries with respect to the set of mean functions ymn(x) forming the central part of the statistical cluster:

$$ymn(x) = a_1\,yup(x) + a_0\,ydn(x) + b. \tag{6.26}$$

The distribution of the slopes should be divided into approximately three independent parts. The averaged part of the measurements located near the unit value forms the function ymn(x); the other two averaged "marginal" functions, yup(x) and ydn(x), located on the upper and lower branches, describe the limits of the statistical cluster formed by all deviated measurements. The coefficients {a0,1, b} can be determined from (6.26) by using the LLSM. The deviated functions yup(x), ydn(x) reflect the essential influence of the apparatus function. From expressions (6.25) and (6.26) the obvious conclusion follows: if all slopes SLm = 1 and Intm = 0, then the constants in (6.26) should give a1 = 2, a0 = −1, b = 0, yielding the double-degenerate roots with the obvious values κ1,2 = 1.

S3 After calculating the fitting parameters {a0,1, b}, the desired roots κ1,2 are computed from the quadratic equation

$$\kappa^2 - a_1\kappa - a_0 = 0, \tag{6.27}$$

and the fit of the function ymn(x) to the Prony decomposition gives the optimal value of Tx from expression (6.8).

S4 The obtained values (κ1, κ2 and Tx(opt)), in turn, enable the calculation of the complete periodic function Pf(x) from (6.22) and the desired reduction of the measured data to an ideal experiment, where the final function (without the influence of memory) should be expressed only in terms of the parameters associated with the F-transform. In other words, only this single function Pf(x), obtained from reproducible data measurements, can be used as a representation for comparison with competitive hypotheses derived from existing theory. If a proper theoretical hypothesis ("best fit" model) is absent, then Pf(x), represented in the form of a finite segment of the Fourier series, and the AFR of this function (Tx, A0, Ack, Ask, k = 1, 2, ..., K), containing 2K + 2 fitting parameters, can be used as an IM for the considered experiment. Although the true value of L (the memory length) is not known, the proposed algorithm was effective on model data and on MR (magnetic resonance) data, as shown by the first author [13, 21]. To conclude this subsection, it is worth emphasising two aspects concerning the application of the proposed algorithm to model data.

3.1 If the distribution of the slopes is symmetrical (Δup ≅ Δdn) and the deviations are small (0.01 < Δup, Δdn < 0.1), then the influence of the apparatus function is minimal.

3.2 If the equipment has a malfunction or is inaccurate in the realisation of reproducible measurements, then the differences between the slopes are significant (Δup,dn/Δdn,up > 1) and it is necessary to take into account the distortions introduced by the apparatus.
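Steps S2 and S3 can be sketched as follows. The equal-thirds split of the slope distribution is a simplification of the cluster division described above, and all names are illustrative.

```python
import numpy as np

def cluster_by_slope(Y, slopes):
    """Step S2: order measurements by slope, split into three groups and
    average each one, giving ydn(x), ymn(x), yup(x)."""
    order = np.argsort(slopes)
    low, mid, up = np.array_split(order, 3)
    return Y[low].mean(axis=0), Y[mid].mean(axis=0), Y[up].mean(axis=0)

def roots_from_fit(a1, a0):
    """Step S3: roots of κ² − a1·κ − a0 = 0, eq. (6.27)."""
    return np.roots([1.0, -a1, -a0])
```

For ideal data (all slopes equal to 1, zero intercepts), a1 = 2 and a0 = −1, and the double-degenerate root κ = 1 is recovered.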


In the following, two typical experiments associated with stable measurements are considered.

6.3.1 The Raman Spectra of Distilled Water

Consider some reliable spectroscopic data recorded under given external conditions. During the experiment, the object under analysis (distilled water in the considered case) is subjected to the slight influence of some factor, and each test is repeated M times. The following question arises: is it possible to notice the small influence of an external factor and express its changes quantitatively? An answer to this question can be found by considering the Raman spectra (RS) of pure distilled water. Raman spectroscopy is a well-known method for the investigation of the vibrational modes of different substances and needs no general description here. For the proposed case study, six datasets of 1024 points each were considered, representing the Raman spectra of distilled water at a room temperature of 20 °C. The differences between them were as follows. Three files referred to pure water obtained by three different distillation techniques: double-distilled water, distilled water subjected to additional filtration in accordance with the conventional MQ (microcapillary) technology [20], and water prepared with the help of a double degasification procedure, respectively. Each Raman spectrum was recorded a minimum of 100 times (M = 100) for 1 minute. During the first set of experiments, the substance was irradiated by an alternating electromagnetic (EM) field with the help of a low-power laser (10⁻³ W), with an excitation frequency of 2·10⁹ Hz and a number of pulses equal to 100. The duration of the irradiation was 1, 3 and 5 minutes, respectively, and the corresponding files are denoted "FD1", "FD3" and "FD5" accordingly: file 4 (FD1) is associated with an irradiation duration of 1 minute, file 5 (FD3) with 3 minutes, and file 6 (FD5) with 5 minutes. For the experimentalists it was important to receive answers to the following questions:

1. Is there any difference between the Raman spectra corresponding to the different procedures of water distillation?
2. Is it possible to sense the influence of the low-power EM field on the properties of the Raman spectra recorded for pure distilled water (defined in these experiments as file 1)?

To this aim, the algorithm (S1–S4) described previously was applied. Figure 6.1a, b shows the RS in different representations recorded for file 1. It is evident that the RS is "noisy" (distorted by the influence of uncontrollable factors), and it is impossible to note any small difference between the original spectra. Figure 6.2a, b shows the distributions of the slopes and intercepts relative to the mean spectrum ⟨y(x)⟩. To decrease the influence of HF fluctuations, all the spectra are divided into three statistical groups with reference to the unit slope, which

Fig. 6.1 (a) Typical Raman spectra corresponding to the double-distilled water (ddw); only 5 randomly taken spectra, corresponding to the successive measurements m = 1, 2, 3, 98, 99, are shown (intensity versus wavenumber (cm⁻¹)/100). (b) Distribution of the current measurements with respect to the averaged Raman spectrum; this is another, more convenient presentation of the initial RS depicted in (a)

corresponds to the mean measurement. Then, one can calculate the averaged functions yup(x), ydn(x) and ymn(x), respectively. Figure 6.2a shows the clusterisation procedure. After averaging the measurements belonging to each group, the number of RS reduces to three (Fig. 6.3a, b). The proof of the existence of a long memory (6.17) at L = M between successive measurements is omitted. The decomposition coefficients {al, b} (l = 1, 2, ..., M−1) are determined by means of the LLSM. After calculating the functions yup(x), ydn(x) and ymn(x), it is necessary to compute the coefficients a0,1 and b in (6.26) by


Fig. 6.2 Distribution of the slopes with respect to the unit value for measurements 1 ≤ m ≤ 100 (Rsl = 0.72786); the upper branch defines yup(x), the central part (slope = 1) ymn(x), and the lower branch ydn(x)

… (y1 > y2 > … > yN, with N = 5000, see Fig. 6.6 for details), they form the sequence of the ranged amplitudes (SRA) that restores the hidden memory between the initial sequences. For the sake of brevity, the following analysis focuses on the most significant results that demonstrate the generality of the proposed approach. Firstly, Fig. 6.7 shows the distribution of the slopes. Since the distribution of the intercepts is very narrow (within the interval [10⁻¹⁵, 10⁻¹⁴]), their contributions are negligible.


Fig. 6.6 A set of ordered realizations Δym (m = 2, 5, 26, 27) relative to their mean values (Δym = ym − mean(ym(x))), plotted against x = N/1000. The influence of the quantized noise affecting the correct transmission (expressed by a solid line of random length) is evident; a solid grey line denotes the mean signal

Fig. 6.7 Distribution of the slopes with respect to the unit value, divided into three equal independent groups. The averaged curves (formed from the slopes belonging to each group) form three independent but strongly correlated curves, denoted yup(x), ydn(x) and ymn(x), respectively. The figure reports the number of realisations entering each group (6, 6, 15, respectively)

Fig. 6.8 Three independent but strongly correlated curves ydn(x), yup(x) and ymn(x) resulting from the division procedure of the slopes, plotted against the number of packets x/1000. The three curves approximately replace the conventional mean value ⟨y(x)⟩, which does not take into account the influence of the "device"

Figure 6.7 shows how to organise all realisations into three independent groups and replace the conventional definition of the mean function by the three functions yup(x)-6, ydn(x)-6 and ymn(x)-15. Figure 6.8 depicts the 3 functions replacing the 27 initial realizations. After fitting the central function ymn(x) by the functions yup(x) and ydn(x), steps 2 and 3 of the proposed algorithm give the unknown roots in (6.27). These roots, in turn, help to find the true periodic function Pf(x) from (6.22), corresponding to an ideal experiment. Figure 6.9 shows the difference between the fitting function ymn(x) and Pf(x). Fitting this function by the segment of the F-transform (6.2), and using the AFR (A0, Ack, Ask; k = 1, 2, ..., K), gives a description of this experiment. The high-quality fit of this function (with a relative error equal to 2.15%) emerges from Fig. 6.10. Figure 6.11 shows the distribution of these coefficients as a function of the mode number k (k = 1, 2, ..., K), with K = 19. The last three figures, 6.9, 6.10 and 6.11, summarise the main result of the general theory: the elimination of the apparatus function and the decomposition of Pf(x) into the segment of the Fourier series when the best-fit model is absent. In the same manner, one can consider the integrated (cumulative) curves that correspond to the initial SRAs depicted in Fig. 6.6. As before, the integrated curve is defined as

Fig. 6.9 Effects of the application of steps S2–S3 and of expression (6.22) to eliminate the "instrumental" distortions: ymn(x) and Pf(x) after elimination of the apparatus function. The solid green line represents the mean function that can be presented by the Fourier decomposition; the original line ymn(x) is denoted by the red line

Fig. 6.10 Fit of the true Fourier function Ytf(x) to the decomposition (6.2). Only 40 fitting parameters are necessary to fit 5000 data points with a relative error lower than 2.2%. The small frame shows the dependence of the relative error on the value of the period in the interval [0.5, 1.5]·Tin, with Tin = 5.0; the minimum of the relative error is achieved at Topt = 5.25 (RelErr = 2.15%)

Fig. 6.11 Dependence of the decomposition coefficients Ack and Ask in expression (6.2), together with the modulus Amdk = (Ack² + Ask²)^{1/2}, on the mode number k (1 ≤ k ≤ 20). Only K = 19 modes are necessary to obtain an accurate fit (RelErr = 2.15%). The value of the constant is A(k = 0) = 0.037, while the overall number of fitting parameters is Nfp = 2K + 2

$$Jy_j = Jy_{j-1} + \frac{1}{2}\bigl(x_j - x_{j-1}\bigr)\bigl(\Delta y_j + \Delta y_{j-1}\bigr),$$
$$\Delta y_j = y_j - \langle y\rangle, \quad \langle y\rangle = \frac{1}{N}\sum_{j=1}^{N} y_j. \tag{6.28}$$
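Equation (6.28) is a trapezoidal cumulative sum of the mean-subtracted signal; a minimal numpy sketch (name illustrative):

```python
import numpy as np

def integrated_curve(x, y):
    """Integrated (cumulative) curve (6.28):
    J_1 = 0, J_j = J_{j−1} + (x_j − x_{j−1})·(Δy_j + Δy_{j−1})/2,
    with Δy_j = y_j − <y>."""
    dy = y - y.mean()
    J = np.zeros_like(dy)
    J[1:] = np.cumsum(0.5 * np.diff(x) * (dy[1:] + dy[:-1]))
    return J
```

Subtracting the mean before integrating keeps the end value of J near zero over a whole record; the smoothing of the cumulative sum is what suppresses the HF fluctuations while preserving sensitivity to slow trends.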

This simple procedure helps in many cases to eliminate the HF fluctuations and makes the initial measurements more sensitive to the influence of a small external factor. It is important to note the following points:

1. The distribution of the slopes in the vicinity of the unit value is preserved, but the number of realizations that form the desired statistical cluster is changed. Instead of the combination yup(x)-6, ydn(x)-6 and ymn(x)-15 (in parentheses, the number of measurements entering each group is shown), there is the combination Jup(x)-7, Jdn(x)-5, Jmn(x)-15. This type of clusterization leads to other distortions of the apparatus function corresponding to the integrated curves (see Figs. 6.9 and 6.12).
2. After calculating the JPf(x) corresponding to the integrated curve, one can repeat the decomposition of this integrated curve to the F-transform and calculate the distribution of the coefficients JAck and JAsk, which represent quantitative parameters for the description of the integrated curves. They are plotted in Fig. 6.13a, b, where the same value of the limiting mode, K = 19, is considered for comparison.


Fig. 6.12 Integrated curves Jmn(x) and JPf(x) versus the normalized number of packets x/1000. Clusterization of the initial realizations plays an important role in the formation of the true function JPf(x). For the integrated curves (which become more sensitive after elimination of the HF fluctuations) the distortions are expressed more strongly in comparison with the ordered sequences; compare this figure with Fig. 6.9

6.4 Generalisation of Results for Quasi-Reproducible (Non-stationary) Measurements

At this point, the question arises as to whether the previous results can be generalised by a general theory, with a "universal" fitting function, for the quantitative description of a set of quasi-reproducible (QR) or non-stationary measurements, without invoking an a priori hypothesis. There exist many competitive theories/hypotheses claiming to describe data obtained from measurements on a vast collection of different systems using any kind of equipment. Recent excellent publications (see, for example, [1–9]) clearly express the conventional paradigms of data-treatment analysis and mathematical statistics. A specific (alternative) theory proposes models, in terms of a fitting function and parameters, derived from accurately measured data, assuming that the influence of uncontrollable external factors and the effect of the measurement equipment are negligible. But, in many cases, this ideal scheme does not work. In fact, the resulting model has a narrow limit of applicability, the experimental data are "noisy", and the comparison of the complex theoretical curve with data in many practically significant cases becomes unreliable and questionable.


Fig. 6.13 (a) Fit of the true Fourier function JPf(x) to the decomposition (6.2), plotted against the normalized number of packets x/1000. K = 19 modes are necessary to obtain an accurate fit of 5000 data points (RelErr = 0.05%); the optimal period is Topt = 5.25. (b) Dependence of the decomposition coefficients JAck and JAsk, together with JAmdk = (JAck² + JAsk²)^{1/2}, for the integrated curve JPf(x) in expression (6.2). To obtain an accurate fit (RelErr = 0.05%), K = 19 modes are necessary. The value of the constant is A(k = 0) = 0.4253, and the overall number of fitting parameters is Nfp = 2K + 2 = 40

The first question above can be reformulated in a different way: is it possible to propose an intermediate model (IM) that reconciles the measured data with the "best fit" curve that follows from a microscopic or phenomenological theory?


The basic idea is that the "true" theory, expressed in the unified parameters belonging to the IM, could find a common platform on which the reliable data (cleaned from the influence of uncontrollable factors) would be expressed quantitatively by means of a limited number of fitting parameters coming from the IM. This implies that any alternative theory should also be expressed by the same set of fitting parameters related to the IM. The aim of this section is to prove that a "bridge" connecting an experiment and a "true" theory does exist, and that the fitting function coming from the IM can be presented by a segment of the generalized Prony spectrum (GPS). This analysis can be considered a logical continuation of the results obtained earlier in papers [11–13, 22], where it was proved that reproducible or quasi-periodic measurements have a memory, which can be mathematically expressed as

$$F_L(x) = \sum_{l=0}^{L-1} a_l\,F_l(x) + b. \tag{6.29}$$

Here, $F_m(x) \equiv F(x + mT)$ represents a response function that coincides approximately with the current measurement ym(x), i.e. Fm(x) ≅ ym(x), where m = 0, ..., M−1 and M defines the total number of measurements; L is a parameter that determines the specific length of the "memory" (or partial correlations) that exists between reproducible measurements; x is an input variable (it can be associated with time (t), wavelength (λ), frequency (ω), the strength of an electromagnetic field (E, H), power (P), etc.); and T is a mean period of the measurements associated with the variable x. Equation (6.29) assumes that the set of constants al (which are found easily by the LLSM) does not change during the whole cycle of measurements. To generalize expression (6.29) and increase the limits of its applicability, it is necessary to replace the constants al by a set of functions ⟨al(x)⟩ (l = 0, 1, ..., L−1; L < M). If this set of functions can be found, the solution of the functional equation

$$F_L(x) = \sum_{l=0}^{L-1} \langle a_l(x)\rangle\,F_l(x) \tag{6.30}$$

is obtained for a set of functions Fm(x). Then, one can apply Eq. (6.30) for the description of the quasi-reproducible (QR) experiments, when the influence of different uncontrollable factors becomes significant and the so-called successive/reproducible measurements during the acquisition process differ from each other. This section contains justified arguments on the questions posed above and considers the description of the QR experiments in the frame of expression (6.30) using one assumption only: it is assumed that the functions ⟨al(x ± T)⟩ = ⟨al(x)⟩ are periodic with mean period T, but they can differ from each other in their other parameters.


It should be stressed that the proposed theory is self-consistent, because no a priori hypothesis is invoked, and a "universal" fitting function is derived from random functions associated only with the measured data. The second part of this section is organized as follows. Section 6.4.1 gives the basics of the general theory for the derivation of the desired fitting function and the quantitative description of the QR data. Section 6.5 describes the case study, which considers the single heartbeat (the desired ECG data are taken from reliable open-source databases). Finally, Sect. 6.6 discusses the obtained results and draws perspectives for further research.

6.4.1 Self-Consistent Solutions of the Functional Equation (6.30)

Consider the solution of the functional equation (6.30) when the length of the memory L is supposed to be known. It is assumed that all successive measurements obey the functional equation

F_{L+m}(x) = Σ_{l=0}^{L−1} ⟨a_l(x)⟩ F_{l+m}(x),   m = 0, 1, …, M−1.    (6.31)

In order to find the unknown functions ⟨a_l(x)⟩ (l = 0, 1, …, L−1; L < M), the well-known LLSM is generalized: the functional dispersion is required to take its minimal value,

σ(x) = (1/(M−L)) Σ_{m=0}^{M−L−1} [ F_{L+m}(x) − Σ_{l=0}^{L−1} ⟨a_l(x)⟩ F_{l+m}(x) ]² = min.    (6.32)

Computing the functional derivatives with respect to the unknown functions ⟨a_l(x)⟩ yields

δσ(x)/δ⟨a_l(x)⟩ = −(1/(M−L)) Σ_{m=0}^{M−L−1} F_{l+m}(x) [ F_{L+m}(x) − Σ_{s=0}^{L−1} ⟨a_s(x)⟩ F_{s+m}(x) ] = 0.    (6.33)

The averaging procedure has been applied to the whole set of measurements, assuming that the functions ⟨a_l(x)⟩ (l = 0, 1, …, L−1; L < M) do not depend on the index m. By introducing the pair correlation functions

K_{L,l}(x) = (1/(M−L)) Σ_{m=0}^{M−L−1} F_{L+m}(x) F_{l+m}(x),
K_{s,l}(x) = (1/(M−L)) Σ_{m=0}^{M−L−1} F_{s+m}(x) F_{l+m}(x),   s, l = 0, 1, …, L−1,    (6.34)

one obtains the system of linear equations for the unknown functions ⟨a_s(x)⟩:

Σ_{s=0}^{L−1} K_{s,l}(x) ⟨a_s(x)⟩ = K_{L,l}(x),   for l = 0, 1, …, L−1.    (6.35)
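As an illustration, system (6.35) can be solved pointwise: for every value of the input variable x, the pair correlations (6.34) are accumulated over the M recorded curves and an L×L linear system is solved. A minimal sketch in Python/NumPy follows; the function name and the array layout are illustrative assumptions, not part of the original method description.

```python
import numpy as np

def flsm_functions(F, L):
    """Pointwise solution of system (6.35).

    F is an (M, N) array: M successive measurements F_m(x) sampled at
    N points of the input variable x.  Returns an (L, N) array with the
    functions <a_l(x)>.
    """
    M, N = F.shape
    A = np.empty((L, N))
    for j in range(N):                  # loop over the input variable x
        K = np.empty((L, L))            # pair correlations (6.34)
        b = np.empty(L)
        for l in range(L):
            b[l] = np.mean(F[L:M, j] * F[l:M - L + l, j])          # K_{L,l}(x)
            for s in range(L):
                K[s, l] = np.mean(F[s:M - L + s, j] * F[l:M - L + l, j])
        # K is symmetric (K_{s,l} = K_{l,s}), so it can be used directly
        A[:, j] = np.linalg.solve(K, b)
    return A
```

For data generated by a two-term Prony-like model with constant roots, the recovered ⟨a_0(x)⟩ and ⟨a_1(x)⟩ reduce to the constants of the reproducible case.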

It makes sense to call this approach the functional least squares method (FLSM); it includes the conventional LLSM as a particular case. Returning to the functional equation (6.31), a solution is sought in the form

F_0(x) = [κ(x)]^{x/T} Pr(x),   F_m(x) = [κ(x)]^{m + x/T} Pr(x).    (6.36)

Here, the functions κ(x + T) = κ(x) and Pr(x + T) = Pr(x), in accordance with the supposition ⟨a_l(x + T)⟩ = ⟨a_l(x)⟩ made above, are periodic and can be represented by a segment of the Fourier series

Φ(x) = A_0 + Σ_{k=1}^{K≫1} [ A_{ck} cos(2πk x/T) + A_{sk} sin(2πk x/T) ].    (6.37)

Obviously, the decomposition coefficients A_{ck}, A_{sk} (k = 1, 2, …, K) depend on the type of the function Φ(x). Substituting solutions (6.36) into (6.31) yields the equation for calculating the unknown function κ(x):

[κ(x)]^L − Σ_{l=0}^{L−1} ⟨a_l(x)⟩ [κ(x)]^l = 0.    (6.38)
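Numerically, for each fixed x, Eq. (6.38) is an ordinary polynomial equation in κ and can be solved with a standard root finder. A minimal sketch (the coefficient layout and the function name are assumptions of this illustration):

```python
import numpy as np

def kappa_roots(a_vals):
    """Roots kappa_q of (6.38) at a single point x.

    a_vals = [a_0, a_1, ..., a_{L-1}] holds the values <a_l(x)> at that x;
    the polynomial is kappa^L - a_{L-1} kappa^{L-1} - ... - a_0 = 0.
    """
    coeffs = np.concatenate(([1.0], -np.asarray(a_vals, dtype=float)[::-1]))
    return np.sort(np.roots(coeffs))
```

For L = 2 with ⟨a_0⟩ = −0.45 and ⟨a_1⟩ = 1.4, the two roots are 0.5 and 0.9.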

If the "roots" κ_q(x), q = 1, 2, …, L of Eq. (6.38) can be found, then the general solution for the functions F_m(x) can be written in the form

F_0(x) = Σ_{q=1}^{L} [κ_q(x)]^{x/T} Pr_q(x),   F_m(x) = Σ_{q=1}^{L} [κ_q(x)]^{m + x/T} Pr_q(x),   m = 0, 1, …, M−1.    (6.39)

The number of periodic functions Pr_q(x) coincides with the number of roots calculated from Eq. (6.38). Therefore, relationships (6.31), (6.32), (6.33), (6.34), (6.35), (6.36), (6.37) and (6.38) give the general solution of the functional


equation (6.31). The physical interpretation of the basic Eq. (6.31) is the following: if the successive measurements "remember" each other and can vary during the mean period of measurement T, then a "universal" fitting function for the description of these measurements can be found self-consistently (i.e. without any additional suppositions about their evaluation); it is derived entirely from the whole set of measurements participating in this process. In contrast with the conventional approach, an a priori hypothesis and the evaluation of its fitting parameters are not needed; such a hypothesis can still be considered as a competitive model and estimated alongside other ones in terms of the fitting parameters given by the IM. Evidently, these results generalize the previous results presented in [22, 23] by replacing the constants a_l with the functions ⟨a_l(x)⟩. In comparison with the previous case, these experiments can be defined as quasi-reproducible, having in mind the dependence of the functions ⟨a_l(x)⟩ on the input variable x. It would be important to obtain solutions of Eq. (6.31) when the functions ⟨a_l(x)⟩ are not completely periodic. However, to the best of the author's knowledge, the theory of solutions of functional equations of arbitrary order is not a well-developed branch of mathematics [24]. Therefore, the general theory of experiments opens a new direction of research for mathematicians working in functional analysis, aimed at applications in physics, chemistry and engineering. The following detailed example considers the most common case, with a minimal "memory" of length L = 2 and a minimal number of fitting parameters in the IM; it will be used for fitting purposes in the next section. For L = 2, one has

F_{2+m}(x) = ⟨a_1(x)⟩ F_{1+m}(x) + ⟨a_0(x)⟩ F_m(x),   m = 0, 1, …, M−1.    (6.40)

In this case, Eq. (6.35) takes the form

K_{00}(x) ⟨a_0(x)⟩ + K_{10}(x) ⟨a_1(x)⟩ = K_{20}(x),
K_{10}(x) ⟨a_0(x)⟩ + K_{11}(x) ⟨a_1(x)⟩ = K_{21}(x).    (6.41)

The solution of Eq. (6.40) can be written as

F_0(x) = [κ_1(x)]^{x/T} Pr_1(x) + [κ_2(x)]^{x/T} Pr_2(x),
κ_{1,2}(x) = ⟨a_1(x)⟩/2 ± √( (⟨a_1(x)⟩/2)² + ⟨a_0(x)⟩ ).    (6.42)
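For L = 2 the root functions (6.42) follow from the quadratic formula applied pointwise; a minimal sketch (the inputs are assumed to be the sampled functions ⟨a_0(x)⟩ and ⟨a_1(x)⟩, and the name is illustrative):

```python
import numpy as np

def kappa12(a0, a1):
    """Root functions kappa_{1,2}(x) of (6.42), computed elementwise."""
    half = np.asarray(a1, dtype=float) / 2.0
    disc = np.sqrt(half ** 2 + np.asarray(a0, dtype=float))
    return half + disc, half - disc
```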

If one of the roots in (6.42) is negative (e.g. κ_2(x) < 0), then the general solution can be presented approximately as

F_0(x) = [κ_1(x)]^{x/T} Pr_1(x) + [|κ_2(x)|]^{x/T} cos(π x/T) Pr_2(x).    (6.43)

If the order of measurements is important, the proposed theory allows restoring all other measurements in accordance with the relationships

F_m(x) = [κ_1(x)]^{m + x/T} Pr_1(x) + [|κ_2(x)|]^{m + x/T} cos( π (x/T + m) ) Pr_2(x),   m = 0, 1, …, M−1.    (6.44)

Here, the periodic functions Pr_{1,2}(x + T) = Pr_{1,2}(x) keep their periodicity, but the decomposition coefficients A_{ck}, A_{sk} (k = 1, 2, …, K) entering into decomposition (6.37) can differ from those of the initial case m = 0, reflecting possible instability during the whole measurement process. How can one define the minimal number of measurements (the minimum of the parameter M) that keeps this approach valid? The answer to this question is important when cyclic measurements are performed, and it is necessary to know the minimum of M for the realization of the first quantitative description of the "initial" measurement. For the minimal memory length L = 2, the solution can be obtained as follows. From Eq. (6.40), for m = 0 it holds

F_2(x) = ⟨a_1(x)⟩ F_1(x) + ⟨a_0(x)⟩ F_0(x).    (6.45)

Equation (6.45) connects only two independent measurements (m = 0, 1; M = 3). In this case, the determinant of the system (6.41),

K_{00}(x) K_{11}(x) − (K_{10}(x))² = F_0(x)² F_1(x)² − (F_0(x) F_1(x))² = 0,    (6.46)

is identically zero. Therefore, the functions ⟨a_{0,1}(x)⟩ are replaced by the constants a_{1,0}, which can be found by the LLSM. This degeneracy is removed when M = 4, and the pair correlation functions (6.34) take the form

K_{2,l}(x) = (1/2) Σ_{m=0}^{1} F_{2+m}(x) F_{l+m}(x),   K_{s,l}(x) = (1/2) Σ_{m=0}^{1} F_{s+m}(x) F_{l+m}(x),   s, l = 0, 1.    (6.47)

Equation (6.40) allows finding the functional dependence of each measurement (m = 0, 1, …, M−1) if two independent ones, F_{0,1}(x), are known; but the quantitative description of these measurements in the frame of the proposed theory starts from the initial four measurements (M = 4). This result can be generalized for any L: in the general case M − L = 2, and this simple relationship connects the given value of L with the minimal number of measurements as M = L + 2.
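The degeneracy at M = 3 is easy to verify numerically: with a single term m = 0 in the sums (6.34), the determinant (6.46) vanishes identically for any pair of curves. A quick check (curve values are randomly generated for illustration):

```python
import numpy as np

# With M = 3 and L = 2 the sums in (6.34) contain the single term m = 0,
# so K00 = F0^2, K11 = F1^2, K10 = F0*F1 at every point x, and the
# determinant (6.46) K00*K11 - K10^2 is identically zero.
rng = np.random.default_rng(0)
F0, F1 = rng.random(50), rng.random(50)
det = (F0 * F0) * (F1 * F1) - (F0 * F1) ** 2
assert np.allclose(det, 0.0)
```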


If the true sequence of measurements is not important, and the results of the measurements remain invariant relative to their permutations, they can be combined into three clusters, which reduces the analysis to only these three mean measurements. This simple procedure significantly decreases the number of fitting parameters required for further analysis; it is described in the next section.

6.4.2 The Clusterization Procedure and Reduction to an "Ideal Experiment"

As stressed in [22], the evaluation of the "true" value of L in the general case remains an unsolved problem. If it is assumed that the true sequence of measurements (m = 0, 1, 2, …, M−1) is invariant relative to permutations during the experimental cycle, then the following clusterization procedure can be adopted. Firstly, the distribution of the slopes with respect to the mean measurement is considered:

Sl_m = slope(⟨y⟩, y_m) ≡ (y_m ⋅ ⟨y⟩) / (⟨y⟩ ⋅ ⟨y⟩),
⟨y⟩ = (1/M) Σ_{m=0}^{M−1} y_m,   (A ⋅ B) = Σ_{j=1}^{N} A_j B_j.    (6.48)

The parenthesis (A ⋅ B) determines the scalar product of two functions having j = 1, 2, …, N measured data points. Here it is supposed that the initial measurements y_m(x), m = 0, 1, …, M−1, approximately coincide with the functions F_m(x) (y_m(x) ≅ F_m(x)) figuring in the functional equation (6.30). By plotting Sl_m against the successive measurement index m, and then rearranging all measurements in descending order SL_0 > SL_1 > … > SL_{M−1}, it is possible to divide all measurements into three groups. The "upper" group has slopes located in the interval (1+Δ, SL_0); the mean group (denoted "mn") is characterized by slopes in (1−Δ, 1+Δ); the lower group (denoted "dn") has slopes in the interval (SL_{M−1}, 1−Δ). The value of Δ is chosen for each set of QR measurements separately; a 3-sigma-type criterion can be used, in which the total range Rg = SL_0 − SL_{M−1} is divided into three equal parts, i.e. Δ = Rg/3. This curve has great importance and reflects the quality of the realized successive measurements and of the equipment used. A preliminary analysis performed on different available datasets allows selecting the three different cases represented in Fig. 6.14a–c, respectively. The bell-like curve (BLC), which can be fitted with the help of four fitting parameters (α, β, A, B), is obtained after the elimination of the mean value and subsequent integration; it can be described by the beta-function.
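The slope distribution (6.48) and the Rg/3 rule reduce to a few lines of code; a minimal sketch (Y is an M×N array of measured curves; all names are illustrative):

```python
import numpy as np

def slope_distribution(Y):
    """Slopes (6.48) of each measurement with respect to the mean curve,
    the slopes rearranged in descending order (SL), and the band width
    Delta = Rg/3 used to form the three clusters."""
    mean = Y.mean(axis=0)                 # <y>(x)
    sl = Y @ mean / (mean @ mean)         # (y_m . <y>) / (<y> . <y>)
    SL = np.sort(sl)[::-1]                # SL_0 > SL_1 > ... > SL_{M-1}
    delta = (SL[0] - SL[-1]) / 3.0        # the Rg/3 criterion
    return sl, SL, delta
```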

[Fig. 6.14a Ordered slopes SL and the bell-like (BL) curve for a "good" experiment: Rt = 71%, Nup = 15, Nmn = 71, Ndn = 14 (slopes and SRA plotted versus 1 < m < 100).]

If Nmn > Nup + Ndn, the cycle of measurements is characterised as "good" (stable); if Nmn ≅ Nup + Ndn, the measurements (and the corresponding equipment) are characterised as "acceptable"; and if Nmn < Nup + Ndn, the measurements are characterised as "bad" (very unstable). Quantitatively, for each case one can compute the ratio

Rt = [ Nmn / (Nup + Ndn + Nmn) ] · 100% = (Nmn/M) · 100%,    (6.50)

where M denotes the total number of measurements. This ratio determines three classes of measurements (Fig. 6.14a–c): "good" (60% < Rt < 100%), "acceptable" (30% < Rt < 60%) and "bad" (0 < Rt < 30%). Therefore, if this clusterization is applied, instead of Eq. (6.40) it is possible to write

F_2(x) = ⟨a_1(x)⟩ F_1(x) + ⟨a_0(x)⟩ F_0(x),
F_2(x) ≡ Yup(x) = (1/Nup) Σ_{m=0}^{Nup−1} y_m^{(up)}(x),   1 + Δ < Sl_m < SL_0,
F_1(x) ≡ Ydn(x) = (1/Ndn) Σ_{m=0}^{Ndn−1} y_m^{(dn)}(x),   SL_{M−1} < Sl_m < 1 − Δ,
F_0(x) ≡ Ymn(x) = (1/Nmn) Σ_{m=0}^{Nmn−1} y_m^{(mn)}(x),   1 − Δ < Sl_m < 1 + Δ.    (6.51)
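The clusterization into the three mean curves of (6.51), together with the quality ratio (6.50), can be sketched as follows (the slopes of (6.48) are assumed to be already computed; names are illustrative):

```python
import numpy as np

def clusterize(Y, sl, delta):
    """Three cluster means of (6.51) and the quality ratio Rt of (6.50)."""
    up = sl > 1.0 + delta
    dn = sl < 1.0 - delta
    mn = ~(up | dn)
    Yup = Y[up].mean(axis=0)              # F_2(x) = Yup(x)
    Ymn = Y[mn].mean(axis=0)              # F_0(x) = Ymn(x)
    Ydn = Y[dn].mean(axis=0)              # F_1(x) = Ydn(x)
    Rt = 100.0 * mn.sum() / len(sl)       # quality ratio (6.50)
    return Yup, Ymn, Ydn, Rt
```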

Here, the function SL_m contains the slopes arranged in descending order, and the parameter Δ, associated with the value of the confidence interval, is selected for each specific set of measurements separately. These three "artificially" created measurements F_{2,1,0}(x) are added to the previous set y_m(x), so that the functions ⟨a_{0,1}(x)⟩ do not depend on the index m and remain almost the same as when this reduction procedure is not used. It is also assumed that the mean function Ymn(x) is identified with the initial measurement F_0(x), while the two other measurements F_{2,1}(x) coincide with the functions Yup(x) and Ydn(x), correspondingly. The solution of Eq. (6.51) is given by (6.42). This reduction procedure will be used below for the fitting of a single heartbeat (HB) from available ECG data. The next question of this section regards the reduction of the measurements to an "ideal" experiment. In accordance with the definition given in [10], an "ideal" experiment implies the condition

F_m(x) ≡ F(x + mT) = F_{m+1}(x) ≡ F(x + (m+1)T),    (6.52)

with the results of measurements remaining the same within one cycle. This means that the IM for this ideal case coincides with the segment of the Fourier series (6.37). Thus, the possibility of extracting the F-components Pr_q(x) (q = 1, 2, …, L) from the general solution (6.39) would give the opportunity to eliminate the influence of uncontrollable factors (associated with the functions κ_q(x) and the apparatus function). Therefore, a refined function would be available for further comparison with a theory claiming to describe the experimental results from a microscopic point of view. In the following, the desired expressions are obtained only for the simple case L = 2, having in mind that this situation is the most probable in practical


applications and that the generalizations of these expressions for any L are relatively simple.

1. L = 2, with κ_{1,2}(x) > 0:

F_0(x) = [κ_1(x)]^{x/T} Pr_1(x) + [κ_2(x)]^{x/T} Pr_2(x),
F_1(x) = [κ_1(x)]^{1 + x/T} Pr_1(x) + [κ_2(x)]^{1 + x/T} Pr_2(x).    (6.53)

From this system of equations, one can easily find the periodic function Pr(x) that presents a linear combination of the functions Pr_{1,2}(x):

Pr_1(x) = [κ_1(x)]^{−x/T} ( F_0(x) κ_2(x) − F_1(x) ) / ( κ_2(x) − κ_1(x) ),
Pr_2(x) = [κ_2(x)]^{−x/T} ( F_1(x) − F_0(x) κ_1(x) ) / ( κ_2(x) − κ_1(x) ),
Pr(x) = w_1 Pr_1(x) + w_2 Pr_2(x).    (6.54)

Here, the unknown constants w_{1,2} can be used at the final stage of comparison of the microscopic theory with the "pure" data. It is obvious also that the zeros of these functions do not define the desired periodic functions, and the case κ_2(x) − κ_1(x) = 0 should be considered separately.

2. L = 2, with κ_1(x) > 0, κ_2(x) < 0:

F_0(x) = [κ_1(x)]^{x/T} Pr_1(x) + [|κ_2(x)|]^{x/T} cos(π x/T) Pr_2(x),
F_1(x) = [κ_1(x)]^{1 + x/T} Pr_1(x) − [|κ_2(x)|]^{1 + x/T} cos(π x/T) Pr_2(x).    (6.55)

The solution for this case can be written in the form

Pr_1(x) = [κ_1(x)]^{−x/T} ( F_1(x) + |κ_2(x)| F_0(x) ) / ( κ_1(x) + |κ_2(x)| ),
Pr_2(x) = [|κ_2(x)|]^{−x/T} ( F_0(x) κ_1(x) − F_1(x) ) / ( κ_1(x) + |κ_2(x)| ),
Pr(x) = w_1 Pr_1(x) + w_2 Pr_2(x).    (6.56)

The cases of degenerate "roots", when the functions κ_1(x) and κ_2(x) coincide identically, and of complex-conjugate "roots" (κ_{1,2}(x) = Re κ(x) ± i Im κ(x)) are not considered here, because they have little chance to be realised in real measurements.
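Expression (6.54) (the case of two positive roots) translates directly into code; a minimal sketch with illustrative names:

```python
import numpy as np

def periodic_components(F0, F1, k1, k2, x, T):
    """Recover Pr1(x) and Pr2(x) via (6.54) for k1(x), k2(x) > 0."""
    u = x / T
    Pr1 = k1 ** (-u) * (F0 * k2 - F1) / (k2 - k1)
    Pr2 = k2 ** (-u) * (F1 - F0 * k1) / (k2 - k1)
    return Pr1, Pr2
```

Building F_0 and F_1 from known periodic components and then recovering them is a convenient self-check of the algebra.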


Fig. 6.15 Typical behaviour of the single HB for the person s0020are. The figure represents six randomly selected HBs among the 104 considered; the mean HB is marked by the bold black line. All signals occupy a common band and can be presented approximately by a set of straight lines

6.5 Validation of the General Theory on Experimental Data

This section aims at demonstrating the application of the general approach to the treatment of real data obtained from a complex system for which the "best-fit" function is not known. The dataset consists of ECG signals obtained from the reliable Internet resource PhysioNet Bank (http://physionet.org/cgi-bin/atm/ATM). In particular, after randomly selecting ECG data from the section ECG-ID Database (ecgiddb), approximately 100–104 single heartbeats (HBs) were prepared for each randomly selected person. The intensity of a heartbeat (depolarisation voltage) is measured in mV, and each file contains approximately 600 data points. The basic problem is to fit the single heartbeat (HB) and express its behaviour in terms of the fitting parameters belonging to the generalised Prony spectrum. The human heart acts as a biological "pump", and knowledge of the distribution of the amplitudes describing the individual heartbeat is an important problem for cardiology, which can be solved within the frame of the proposed theory. For fitting purposes, five anonymous patients were selected, denoted as s0020are (1), s0022lre (2), s0026lre (3), s0039lre (4) and s0043lre (5) in the database. A detailed analysis of the single HBs associated with different diseases can be the subject of separate research. The proposed algorithm can be divided into three basic stages.


Stage 1 The separation of all measurements into three mean curves
Figure 6.15 shows the results of this stage: some HBs from subject s0020are (1) are randomly selected (HB1, HB10, HB25, HB50, HB75, HB100) from the overall set (M = 104) and plotted as functions of time in the interval [0, 600] in arbitrary units (the HB intensity is given in mV). The subplot of Fig. 6.15 depicts the HBs with respect to their mean function ⟨y⟩, showing the strong correlations between measurements. The distribution of the slopes (see Eq. (6.48)) for all measurements m (m = 1, 2, …, M) is shown in Fig. 6.16a. Following the one-third criterion, the span between the ordered measurements (the curve SL) is divided into three parts: "up" with (1+Δup, max(SL)), "mn" with (1−Δdn, 1+Δup) and "dn" with (min(SL), 1−Δdn), where Δup = [max(SL) − 1]/3 and Δdn = [1 − min(SL)]/3. The numbers of measurements in the selected intervals are Nup = 25, Nmn = 63 and Ndn = 16. After subtraction of the unit value and subsequent integration, the bell-like curve in Fig. 6.16b is obtained. The quality of the measurements, calculated by expression (6.50), equals Rt = (Nmn = 63)/(M = 104) ≅ 60.58%. This value Rt > 60% can be interpreted as relatively stable work of the human heart; most of the beats are located in the vicinity of the unit slope. The clusterisation realised by Eq. (6.51) results in only three mean curves Yup, Ydn and Ymn (see Fig. 6.17). The averaged curves can be added to the original set of measurements; application of the procedure represented by expression (6.34) shows that the values of the averaged constants remain practically unchanged. These functions, together with the functional "roots" κ_{1,2}(t) (see expression (6.42)), are shown in Fig. 6.18.

Stage 2 Reduction to three incident points
This stage is very important to justify the scaling properties of the curves that are subjected to the fitting procedure.
The general solution (6.39) depends on the ratio x/T, so it remains invariant under the scaling transformation

x′/T′ = (x/ξ)/(T/ξ) ≡ x/T,    (6.57)

where ξ is an arbitrary scaling parameter. Therefore, this transformation helps to decrease the number of modes appearing in the GPS while keeping the same information in the shortened/scaled data. This procedure has been successfully applied to many random functions [15], proving their self-similar (fractal) properties. Groups of s = 1, 2, …, b = 6 points (Y_1, Y_2, …, Y_b) are chosen and reduced to three incident points (max(Y), mean(Y), min(Y)), invariant relative to all permutations inside the chosen b points. Having in mind the total number of data points N = 600 and the length of a small "cloud" of points b = 6, the reduced number R of data points is calculated as the integer part of the ratio [N/b] (R = 100), keeping the form of the initial curve almost unchanged under this transformation. As the new variable t, one takes the value of t_mn averaged

[Fig. 6.16a Slopes and SRA(SL) versus 1 < m < 104, with 1 + Δup = 1.02589 and 1 − Δdn = 0.96741. Fig. 6.16b Bell-like curve and its fit to the beta-function: Rt = 60.58% ("good" HBs), Nup = 25, Nmn = 63, Ndn = 16.]

Fig. 6.16 (a) Distribution of the slopes for 104 HBs. The horizontal lines show the limits between the three clusters and are calculated by following the procedure described in the text. (b) The bell-like curve demonstrates the number of measurements that enter each cluster and are used for the calculation of the three mean curves Yup (Nup = 25), Ymn (Nmn = 63) and Ydn (Ndn = 16). Following the criterion described in the text, most HBs are located in the band (1−Δdn, 1+Δup) and the work of the "human pump" is categorised as "good". The solid line shows the fit of this curve to the beta-distribution function (6.21). The necessary fitting parameters are given in Table 6.2



Fig. 6.17 Three mean HBs obtained after the clusterisation procedure. The subplot depicts the HBs with respect to the mean and shows that mean curves are close to each other. All specific extreme points defined conventionally as PQRST can be clearly seen

over b points in each chosen interval. Figure 6.19 shows the result of the reduction procedure applied to the curve Ymn(t_mn). Because of the strong correlations, Yup(t_mn) and Ydn(t_mn) are very similar to Ymn(t_mn) and are not considered. The similarity between Ymn(t) and its reduced replicas y_mn(t_mn), y_up(t_mn) and y_dn(t_mn) emerges from Fig. 6.19; the reduced curves y_up(t_mn) and y_dn(t_mn) are not considered further, because the reduction (b = 6) makes them practically identical to the curve y_mn(t_mn). The functions κ_{1,2}(t) calculated by Eq. (6.42) are depicted in Fig. 6.20. A direct application of the reduction procedure to these functions is impossible, because the HF fluctuations destroy the self-similar property [16]. To restore this property and then apply the reduction procedure again, the functions κ_{1,2}(t) should preliminarily be smoothed, with a correlation between the initial and smoothed curves equal to 0.9. The smoothed functions are shown in Fig. 6.20a by bold lines; after reduction, one obtains the reduced functions r_{1,2}(x) from the smoothed roots. These functions are represented in Fig. 6.20b, whose subplot shows the original smoothed roots for comparison.

Stage 3 The fitting of the mean reduced function


Fig. 6.18 Behaviour of the functions ⟨a_{0,1}(t)⟩ and the "roots" κ_{1,2}(t). The subplot shows that κ_1(t) > 0 and κ_2(t) < 0

The previous stages have a preparatory character. The main result is obtained by fitting the reduced function y(x) (y = Ymn, x = t_mn) to function (6.43). For convenience, this function is presented in the following form:

y(x) ≅ F(x; K, T_x) = A_0 E_0(x) + Σ_{k=1}^{K} [ A_{ck}^{(1)} E_{ck}^{(1)}(x) + A_{sk}^{(1)} E_{sk}^{(1)}(x) ] + Σ_{k=1}^{K} [ A_{ck}^{(2)} E_{ck}^{(2)}(x) + A_{sk}^{(2)} E_{sk}^{(2)}(x) ],

E_0(x) = [r_1(x)]^{x/T_x} + [|r_2(x)|]^{x/T_x} cos(π x/T_x),
E_{ck}^{(1)}(x) = [r_1(x)]^{x/T_x} cos(2πk x/T_x),   E_{sk}^{(1)}(x) = [r_1(x)]^{x/T_x} sin(2πk x/T_x),
E_{ck}^{(2)}(x) = [|r_2(x)|]^{x/T_x} cos(π x/T_x) cos(2πk x/T_x),   E_{sk}^{(2)}(x) = [|r_2(x)|]^{x/T_x} cos(π x/T_x) sin(2πk x/T_x).    (6.58)

Here, the known functions r_{1,2}(x) should be associated with the reduced values of the smoothed roots κ_{1,2}(t). The functions E_0(x), E_{ck}^{(2)}(x), E_{sk}^{(2)}(x) take into account



Fig. 6.19 Result of the application of the reduction procedure to three incident points. The subplot depicts Ymn(t). The main plot shows three reduced curves that occupy the length R = 100. From the analysis of these curves, one can conclude that the fit of the curve y_mn(t_mn) is sufficient, because the other two curves y_up(t_mn) and y_dn(t_mn) are similar to the central one.

Table 6.2 The fitting parameters that enter into the beta-distribution function (6.21)

Person   | A          | α       | β       | B          | xmx; ymx   | As(%)   | RelErr(%)
s0020are | 0.00302    | 0.76799 | 1.55191 | 6.46622E−4 | 52; 1.3897 | 0.48544 | 0.09681
s0022lre | 7.31894E−4 | 0.94533 | 1.90262 | 0.00462    | 51; 1.3004 | 0.49505 | 0.07439
s0026lre | 0.00151    | 1.16655 | 1.9679  | 0.03408    | 61; 3.9877 | 7.69231 | 0.21553
s0039lre | 0.00175    | 0.85788 | 1.66406 | 0.0054     | 56; 1.223  | 3.92157 | 0.04207
s0043lre | 0.00114    | 0.8368  | 1.70635 | 0.00318    | 52; 0.9138 | 1.51515 | 0.17451

the fact that the root r2(x) is negative. The function F(x; K, Tx) contains only two nonlinear fitting parameters, which can be computed by minimising the relative error surface



Fig. 6.20 (a) Smoothed functions (shown by bold lines) that can be reduced to three incident points (the same value b = 6). (b) Self-similar properties of the reduced roots r_{1,2}(t_mn) resulting from the reduction procedure to three incident points. The subplot shows the smoothed curves (marked by bold lines), which are similar to the curves depicted in the central figure



Fig. 6.21 Final fit of the single HB (mean reduced function) realised with the help of the fitting function (6.58). The value of the fitting error is less than 1.7%

min RelError = [ stdev( y(x) − F(x; K, T_x) ) / mean( |y(x)| ) ] · 100%,    (6.59)

i.e. the minimisation is carried out over the pair (K, T_x). Usually, the mean period T_x is not known and lies in the interval 0.5 T_in < T_x < 2 T_in, with T_in = (x_1 − x_0)·length(x). The minimal value of the final mode K derives from the condition that the relative error should lie in the acceptable interval (1–10%). After performing the minimisation in (6.59), the desired amplitudes A_0, A_{ck}^{(1,2)}, A_{sk}^{(1,2)} are obtained by the LLSM. Figure 6.21 presents the result of the fit of expression (6.58) to the reduced function y(x). The quality of the fitting curve (6.58) is rather high, because the total number of amplitudes, 4K = 72, is comparable with the number of reduced points R = 100. Figure 6.22a shows the overall distribution of amplitudes. This distribution, together with the other fitting parameters (see Table 6.1), represents the desired IM expressed in terms of the GPS. The importance of the BLC (Fig. 6.22b) is that it serves as a useful tool for the analysis of spectrograms containing a large number of discrete amplitudes (>100). Figure 6.23 shows the separate distributions of the amplitudes A_{ck}^{(1,2)}, A_{sk}^{(1,2)}. Other HBs from different subjects collected in the same database have been treated by following the same steps of the proposed algorithm; therefore, only the final results are shown in the following. Figure 6.24a–c show the desired BLCs and their fits. This new source of information signifies the stability and statistics of the treated HBs, formed by approximately M = 100 samplings. The heights of these distributions indicate the character of the deviations from the unity slope.
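Stage 3 — assembling the basis functions of (6.58) and determining the amplitudes by the LLSM — can be sketched as follows. This is a minimal sketch under the stated assumptions (r1 > 0 and r2 < 0 stand for the reduced smoothed roots sampled at x; all names are illustrative):

```python
import numpy as np

def fit_amplitudes(y, x, r1, r2, K, Tx):
    """LLSM amplitudes of the fitting function (6.58) and the
    relative fitting error (6.59), for fixed K and Tx."""
    u = x / Tx
    b1 = r1 ** u                                  # [r1(x)]^(x/Tx), r1 > 0
    b2 = np.abs(r2) ** u * np.cos(np.pi * u)      # branch of the negative root
    cols = [b1 + b2]                              # E_0(x)
    for k in range(1, K + 1):
        c = np.cos(2 * np.pi * k * u)
        s = np.sin(2 * np.pi * k * u)
        cols += [b1 * c, b1 * s, b2 * c, b2 * s]  # E^(1) and E^(2) modes
    E = np.column_stack(cols)
    A, *_ = np.linalg.lstsq(E, y, rcond=None)     # the LLSM step
    fit = E @ A
    rel_err = 100.0 * np.std(y - fit) / np.mean(np.abs(y))   # cf. (6.59)
    return A, fit, rel_err
```

In practice T_x is scanned over the interval (0.5 T_in, 2 T_in) and the mode number K is increased until rel_err falls into the acceptable 1–10% band, as described above.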

[Fig. 6.22a Total distribution of the 72 amplitudes (1 < k < 72). Fig. 6.22b The BLC of the amplitude distribution and its fit to the beta-distribution function: A = 10.653, α = 0.861, β = 1.656, B = 5.521, RelErr = 0.08%.]

Fig. 6.22 (a) The total distribution of the amplitudes in the fitting function (6.58). This presentation is convenient when the number of amplitudes is sufficiently large (in our case 72). The amplitudes form a specific "piano", and each mode shows the intensity of each "key". (b) For a large number of amplitudes, another convenient presentation is the BLC. This curve can be fitted by expression (6.21), and the parameters of this distribution characterise the GPS as a whole. For the case considered here (person s0020are), the fitting parameters are shown in the figure


Table 6.1 Additional quantitative parameters that enter into the fitting function (6.58)

Person   | Period T | ln(mean r1) | ln(mean r2) | A0      | Range(Ampl) | RelErr(%) | K
s0020are | 408      | −0.28498    | −0.83963    | 3.2843  | 708.651     | 1.66194   | 18
s0022lre | 480      | −0.06952    | −0.86644    | 82.7063 | 788.482     | 2.02615   | 18
s0026lre | 462      | −0.20376    | −0.91514    | 11.5059 | 349.228     | 4.10437   | 18
s0039lre | 426      | −0.1709     | −0.8792     | 273.108 | 2519.02     | 4.50235   | 18
s0043lre | 462      | −0.1709     | −0.8792     | 212.449 | 1851.55     | 9.12826   | 18

The value of the sixth column is determined as Range(f) = max(f) − min(f), and shows the range of all amplitudes that enter into the final function (6.58).

Fig. 6.23 Conventional presentation of the amplitudes figuring in the fitting function (6.58). The analysis of these amplitudes can be realised with traditional methods accepted in any spectroscopy

The larger heights of the BLCs correspond to the stronger deviations of the slopes from the unit value. Figure 6.24a, b demonstrate the fit of the functions defined above as y(x); they give a new source of information about the detailed behaviour of the individual HBs for each tested person. Figures 6.25a–d show the final result of the whole procedure, i.e. the distribution of the amplitudes forming each mean HB (Fig. 6.26). The analysis of the GPS of these signals can give a new source of information in cardiology with respect to the conventional approach in the diagnosis of different cardio-diseases. In Table 6.2, the seventh column gives the quantitative measure of the horizontal asymmetry:


[Fig. 6.24a BLC and its fit to the beta-distribution for person s0022lre: Rt = 46% ("acceptable" HBs), Nup = 30, Nmn = 47, Ndn = 26.]

SB = −2SA and, therefore, by (8.20), it follows that r(φ) = 0. In this case, expression (8.20) degenerates into a point located on the line y = x.


8 Applications of NIMRAD in Electrochemistry

It is worth noting that the proof of the proposed algorithm differs from the proofs in the abovementioned literature. One more possible fourth-order invariant can be found in the same way; its determination is left to the reader as an exercise.

8.1.2.3 Application of the Statistics of the Fractional Moments (SFM) and Use of the Internal Correlation Factor

The generalized Pearson correlation function (GPCF or GPC-function) was used in [13, 14] and recalled in Chap. 3. Hereafter are recalled some basic elements of the SFM, and it is introduced the internal correlation factor (ICF), which is preferable to the complete correlation factor (CCF). The GPC-function is GMV p ðs1 , s2 Þ GPC p ¼ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi , GMV p ðs1 , s1 Þ GMV p ðs2 , s2 Þ

ð8:22Þ

where the generalised mean value GMV-function of the K-th order is

$$\mathrm{GMV}_p(s_1, s_2, \ldots, s_K) = \left\{ \frac{1}{N}\sum_{j=1}^{N} \left[\mathrm{nrm}_j(s_1)\,\mathrm{nrm}_j(s_2)\cdots\mathrm{nrm}_j(s_K)\right]^{\mathrm{mom}_p} \right\}^{1/\mathrm{mom}_p}. \qquad (8.23)$$

It employs the normalised sequences nrm_j(y), with 0 ≤ nrm_j(y) ≤ 1, and the current value of the moment, denoted mom_p. More specifically, for j = 1, 2, …, N, it holds

$$\mathrm{nrm}_j(y) = \frac{y_j - \min(y_j)}{\max(y_j) - \min(y_j)}, \qquad (8.24)$$

where y_j denotes the initial random sequence that can contain a trend, or has to be compared with another trendless sequence. The initial sequences are chosen as follows: the minimum of the GMV-function is counted from the zero value, while the maximum coincides with the maximum of the normalised sequence. Moreover, it is convenient to present the set of moments on a uniform logarithmic scale:

$$\mathrm{mom}_p = e^{Ln_p}, \quad Ln_p = mn + \frac{p}{P}(mx - mn), \quad p = 0, 1, \ldots, P, \qquad (8.25)$$

where mn and mx are the minimum and maximum values of Ln_p, respectively, and define the limits of the moments on the uniform logarithmic scale. As mentioned in Chap. 3, it is common to set mn = −15, mx = 15, and 50 ≤ P ≤ 100. This choice is

8.1 Application of the Discrete Geometrical Invariants for the Analysis. . .


justified by the fact that the transition region of the random sequences expressed in the form of GMV-functions is usually concentrated within the interval [−10, 10]. The extension of this interval to [−15, 15] gives a more accurate computation of the limit values of the functions in the space of the fractional moments. Finally, note that GPC_p given by (8.22) coincides with the conventional definition of the Pearson correlation coefficient at mom_p = 1. If the above limits have opposite signs and assume sufficiently large values, the GPCF has two plateaus, with GPC_mn = 1 for small values of mn and another limiting value GPC_mx that depends on the degree of internal correlation between the two compared random sequences. This right-hand limit, say Lm, satisfies the following condition:

$$M \equiv \min_p \mathrm{GPC}_p \le Lm \equiv \mathrm{GPC}_{mx} \le 1. \qquad (8.26)$$

The appearance of the two plateaus implies that all information about possible correlations is complete and a further extension of the limits mn and mx is useless. Several tests showed that the highest degree of correlation between two random sequences is achieved when Lm = 1, and the lowest when Lm = M. For a quantitative comparison of the internal correlations between two random sequences, a different correlation parameter, the ICF, is introduced instead of the CCF:

$$\mathrm{ICF} = M \cdot Lm. \qquad (8.27)$$

Note that the ICF is determined by using the complete set of the fractional moments from the interval [e^{mn}, e^{mx}]. By setting mn = −15 and mx = 15, the ICF tends to M for high correlation, and to M² for the lowest (remnant) degree of correlation. Moreover, the ICF does not depend on the amplitudes of the two compared random sequences. Since 0 ≤ |y_j| ≤ 1 must hold for both sequences, (8.27) contains complete information about the internal correlation between the pair of compared random sequences, based on the similarity of their probability distribution functions, even if the latter are usually not known. Recent applications of the SFM have brought promising results [13, 14], suggesting the use of the ICF for the unification of the significant parameters. Namely, for a set of significant parameters referring to one qualitative factor, it holds

$$cf_{\min} = M^2 \le \mathrm{ICF} \le M, \qquad (8.28)$$

where always M < 1, and the lower limit cf_min is determined by the sampling volume and by the practical conditions of the compared random sequences, which should be almost the same, e.g. the first affected by a qualitative factor, the second by another factor such as an external "control" action. The clusterization method is then based on the comparison of the values of the ICF, extending the conventional method based on the Pearson correlation coefficient, which is not suitable for a detailed comparison of a pair of random sequences.
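As an illustration of the pipeline (8.22)–(8.28), the following Python sketch computes the GPC-function on the logarithmic moment grid and the ICF of two sequences. It is a minimal reading of the formulas above, not the authors' implementation; the log-domain evaluation and the small clipping floor `eps` (used to keep the normalised products strictly positive for very small moments) are numerical assumptions.

```python
import numpy as np

def nrm(y):
    """Normalise a sequence to [0, 1], as in (8.24)."""
    y = np.asarray(y, dtype=float)
    return (y - y.min()) / (y.max() - y.min())

def gmv(s1, s2, mom, eps=1e-12):
    """Pairwise GMV-function (8.23), evaluated in the log-domain for
    numerical stability (mom ranges from e^-15 to e^15)."""
    p = np.clip(nrm(s1) * nrm(s2), eps, None)  # keep products positive
    a = mom * np.log(p)                        # log(p ** mom)
    amax = a.max()
    log_mean = amax + np.log(np.mean(np.exp(a - amax)))
    return np.exp(log_mean / mom)

def gpc(s1, s2, mn=-15.0, mx=15.0, P=100):
    """GPC-function (8.22) over the uniform logarithmic grid (8.25)."""
    ln_p = mn + (mx - mn) * np.arange(P + 1) / P
    return ln_p, np.array(
        [gmv(s1, s2, m) / np.sqrt(gmv(s1, s1, m) * gmv(s2, s2, m))
         for m in np.exp(ln_p)])

def icf(s1, s2):
    """Internal correlation factor (8.27): ICF = M * Lm, where M is the
    minimum of the GPC-function and Lm its right-hand plateau value."""
    _, g = gpc(s1, s2)
    return g.min() * g[-1]
```

For identical sequences the GPC-function is identically one, so ICF = 1, and for any pair the resulting value stays inside the bounds (8.28).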


8.1.3 An Application of the Method to Electrochemistry

8.1.3.1 Experimental Set-Up

All measurements were conducted with the use of a potentiostat/galvanostat Elins-P30S (Chernogolovka, Russia) and a three-electrode cell, using a glassy carbon rod as the counter electrode and an Ag/AgCl (3.5 M KCl) electrode as the reference electrode. Graphite (C) and platinum (Pt) electrodes were used as working electrodes. The standard phosphate buffer solution (pH = 6.86, a mixture of Na2HPO4 and KH2PO4) served as the background electrolyte. A total of 1000 measurements were recorded continuously with constant stirring. Each measurement cycle included two stages:

1. Electrochemical regeneration: five successive cycles with a potential scan rate of 2.5 V/s.
2. Registration of the VAG in the background electrolyte at the potential scan rate of 0.5 V/s within the range [0, 1.5] V.

Figure 8.1 depicts the scheme of the measurement cycle.

8.1.3.2 Algorithm Description

The set of 1000 measurements resulting from the experiment was divided into ten groups of 100 measurements each (1–100, 101–200, 201–300, …, 901–1000), with the aim of detecting the surface modifications (due to ageing) of each electrode. Before introducing the proposed algorithm, the following question should be posed: which curve is more sensitive for the detection of possible electrode changes, (a) the differential curve dJ/dU without trend, or (b) the integral curve J(U) obtained after integration, with trend?

Fig. 8.1 Scheme of the measurement cycle. The time interval between measurements is 10 s. The period of the VAG registration is 3 s


An answer can be found by comparing two random curves with the help of the GPC-function (8.22), which is based on the complete set of the real moments and generalizes the conventional Pearson correlation coefficient. Therefore, as a first step, the data were prepared to eliminate the influence of the remnant currents. The prepared data satisfy the following requirements:

$$Yn_m = \frac{DV_m(U) - \langle DV_m(U)\rangle}{\mathrm{stdev}(DV_m(U))}, \quad DY_m = Yn_m - \langle Yn_m\rangle, \quad JY_m = \mathrm{Integral}(U, DY_m),$$
$$\langle DY\rangle_m = \frac{1}{M}\sum_{m=1}^{M} DY_m, \quad \langle JY\rangle_m = \frac{1}{M}\sum_{m=1}^{M} JY_m. \qquad (8.29)$$

Here, the parameter m = 1, 2, …, M defines the current measurement number over the total number of measurements in one set (M = 100) and the dimension of the measurement space, accordingly. DV_m(U) is the initial data file referring to the measurement m; the symbol ⟨…⟩ denotes the arithmetic mean for each fixed measurement and is calculated according to (8.14). The number of data points (j = 1, 2, …, N) for each measurement is N = 1181, and any dependence on the applied potential U_j defines the data space. As before, the symbol Integral(x, y) represents the conventional integral calculated by the trapezoidal rule:

$$\mathrm{Integral}(x, y) \Rightarrow J_j = J_{j-1} + \frac{1}{2}\left(x_j - x_{j-1}\right)\left(y_{j-1} + y_j\right). \qquad (8.30)$$

The final notation $\langle Y\rangle_m = M^{-1}\sum_m Y_m$ determines the couple of functions averaged over all measurements. This simple procedure essentially helps to eliminate the remnant currents and to obtain the desired averaged curves that can be used for the comparison of two sets of 100 measurements, from d + 1 to d + 100, with d = 0, 100, 200, …, 900. The typical curves corresponding to the comparison of the pair of sets (1–100, 101–200) for the Pt electrode are depicted in Fig. 8.2a, b and in Fig. 8.3a, b for the data obtained by dJ/dU and J(U) and for the measurement (DY_m and JY_m) spaces, respectively. The figures reveal the different sensitivity to the influence of an external factor of the curves in the data space, plotted as a function of the applied potential, in comparison with the diagrams of the ranges of the same curves plotted as a function of the measurement number (m = 1, 2, …, M = 100). The range of any random curve is defined by the conventional expression Range(y) = max(y) − min(y). Figure 8.4a, b show the differences between these curves expressed in the form of the GPC-functions (8.22). Moreover, Fig. 8.5a, b show the behaviour of the ICF for both types of curves in the data and measurement spaces, respectively. The analysis of these diagrams shows that, as the number of measurements increases (from 1 to 1000), the correlations between the sets of one hundred measurements diminish.
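The preparation step above can be sketched compactly in Python. This is one plausible reading of (8.29)–(8.30): the exact averaging order in (8.29) is not fully specified by the notation, so the per-set averaging below is an assumption.

```python
import numpy as np

def trapz_cum(x, y):
    """Cumulative trapezoidal integral (8.30): J_0 = 0 and
    J_j = J_{j-1} + (x_j - x_{j-1}) * (y_{j-1} + y_j) / 2."""
    J = np.zeros_like(y, dtype=float)
    J[1:] = np.cumsum(0.5 * np.diff(x) * (y[:-1] + y[1:]))
    return J

def prepare_set(DV, U):
    """Prepare one set of M measurements (rows of DV, N points each, taken
    over the potential grid U): centre and normalise each measurement,
    integrate it, and return the averaged differential curve <DY>, the
    averaged integral curve <JY>, and Range(y) = max(y) - min(y) of each
    integral curve."""
    Yn = (DV - DV.mean(axis=1, keepdims=True)) / DV.std(axis=1, keepdims=True)
    JY = np.array([trapz_cum(U, row) for row in Yn])
    return Yn.mean(axis=0), JY.mean(axis=0), JY.max(axis=1) - JY.min(axis=1)
```

Averaged curves from two different sets (e.g. measurements 1–100 and 101–200) can then be compared with the GPC-function (8.22), while the per-measurement ranges give distributions of the kind shown in Fig. 8.3.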


[Fig. 8.2: (a) dJ/dU vs. U for the sets (1–100) and (101–200); (b) the corresponding integral curves.]

Fig. 8.2 (a) Initial data after the elimination of the mean value, normalized by the standard deviation and averaged over 100 measurements. (b) The same curves as in Fig. 8.2a after the application of (8.29) and the integration procedure. The differences between the two curves are more evident than in Fig. 8.2a. The correlations between these curves, expressed in the form of the internal correlation factor (ICF), confirm these differences quantitatively

Figures 8.6a, b and 8.7 show the correlation parameters of the DGIs of the second and the fourth order, respectively.

[Fig. 8.3: (a) distribution of the ranges for the dJ/dU curves (1–100) and (101–200) over 1 ≤ m ≤ 100; (b) distribution of the ranges for the two integral curves.]

Fig. 8.3 (a) The behaviour of the ranges of the curves in Fig. 8.2a. The monotonicity and proximity of the curves are evident, bringing out some process on the Pt electrodes. (b) Distribution of the ranges for the two integral curves. The deviations of the ranges make them suitable for the detection of different additives, chemical changes of the electrode, and the influence of external factors (e.g. temperature, pressure, etc.)

[Fig. 8.4: (a) GPC-functions calculated for the integral (L = 0.99997, M = 0.99529) and differential (M = 0.98738) curves; (b) GPC-functions for the range-distribution curves: integral (L = M = 0.96913) and differential (L = 0.99996, M = 0.99991).]

Fig. 8.4 (a) Typical behaviour of the GPC-functions calculated for the curves in the previous figures. The comparison shows that the correlations for the integral curves are stronger than the correlations for the differential curves. The comparison with the other curves for the measurements (201–300),

8.1.4 Results and Discussion

The following remarks can better highlight the results obtained in the frame of the DGI parameters:

1. Achieving a stable sensor response is an essential problem for the whole of electroanalytical chemistry. Besides the mechanical cleaning of the sensor surface after its treatment by various solutes and reagents, an electrochemical regeneration of the sensor surface is carried out. The regeneration efficiency depends on the applied potential range, the scanning rate, the content of the background electrolyte, etc. The choice of the regeneration conditions in each specific situation has an empirical character. The direct control of the state of the sensor/electrode surface is limited, especially under routine analysis conditions. Besides, when the sensor sensitivity is low, its stability or temporal drift does not influence the accuracy of the complete analysis.

2. In nano-electroanalytics (which investigates the formation of different nanostructures on the electrode surface, the chemical reaction of the solute components with nanoparticles on the electrode surface, etc.), the final result strongly depends on the formed structure and on the nanostructure composition [15]. The results of electro-analysis have practical importance in any case, but primarily when massive data are obtained under long-term sensor-functioning conditions. The data collected in these situations are strongly influenced by the electrode material. In particular, graphite electrodes exhibit stronger surface variations than platinum electrodes.

3. The electrochemical intercalation in various background electrolytes used to modify the graphite surface produces the following alterations: (a) a partial carbon oxidation; (b) the opening of cavities between the layers of the graphite rings; (c) the penetration of molecules and ions from the solutes inside the formed carbon cavities; (d) the formation of new chemical compositions with carbon.
Under the conditions of the experiment described here, the process leads to the formation of some regions where the registered signal changes monotonically, and to the presence of sharp cross-breaks on the dynamical curves, where the formed graphite structure could have changed abruptly. These changes for the carbon electrodes are noticeable in Fig. 8.6a–c, which track the quantitative parameters belonging to the DGI of the second order. They can even be noticed in Fig. 8.7, especially when the background set (0–100) is compared with the set (401–500). This evident cross-break in the same region appears as a specific marker on all the

Fig. 8.4 (continued) …, (901–1000) shows that the correlations for the integral curves, as time grows, become weaker than those for the differential curves. (b) Comparison of the curves showing the range distributions for the integral (main plot) and differential (subplot) curves depicted in Fig. 8.3. While the correlation between the integral curves weakens, for the differential curves it is possible to observe strong correlations within the range 0.99991 < ICF < 1.0


[Fig. 8.5: (a) distribution of the correlations over the integral and differential curves for the Pt and C electrodes, from the pair (0–100)-(101–200) to (0–100)-(901–1000); (b) distributions of the correlations for the corresponding range curves.]

Fig. 8.5 (a) Distribution of the correlations over the integral and differential curves. These plots demonstrate the different sensitivity of the integral and differential curves in the data space for all sets of measurements covering the whole interval (1–1000). It is possible to note a general tendency common to all curves: the correlation between measurements decreases as time increases. For the integral curves, this tendency is stronger than for the differential curves. The almost monotone behaviour belongs to the integral curve for the Pt electrode, while for the carbon


corresponding curves characterising the behaviour of the C-electrode in Fig. 8.6a–c.

4. The Pt-electrode is a chemically inert material compared with the C-electrode, and the main changes take place on its metal surface. As a result of the electrochemical input, one obtains oxygen and hydrate films with the implantation of molecules and ions from the background electrolyte. On the surface of the C-electrode, one observes a constant changing of the graphite layers, which are chemically modified with increasing distance from the initial layer, while on the Pt-electrode a dense film grows, formed by the oxygen and hydrate atoms that have reacted with the pure Pt surface. After 1000-fold activation cycles, while the surface of the C-electrode becomes loose, the Pt-surface shows the opposite tendency, with the formation of the film, without a significant modification of the electrode structure [16, 17]. It is interesting to note that the creation process of the oxide film is monotone in time, as confirmed by Figs. 8.6 and 8.7.

A few short remarks can conclude this subsection. The proposed approach can help to quantitatively detect all the qualitative changes of the analysed VAGs, to detect the monotone regions, and to select the cross-break regions, where abrupt surface changes could occur. For this purpose, one can use the quantitative parameters of the DGI of the second order, namely the gravity-centre coordinates and the parameters A, C, α specified by expressions (8.9) and (8.10) above, and the most sensitive parameters, such as σB and σC, defined by (8.18) and belonging to the DGI of the fourth order. Moreover, besides the applications in electrochemistry, the approach is general and can be applied for the quantitative comparison of any couple of random sets located in a 2D-plane in various physical and chemical applications.

8.2 The Generalized Principal Component Analysis and Its Application in Electrochemistry

8.2.1 Formulation of the Problem

Fig. 8.5 (continued) (C) electrode these correlations weaken and their behaviour is not monotone. (b) Distribution of the correlations for the ranges referring to the integral and differential curves, respectively. The correlations for the integral curves weaken in comparison with the range distributions for the similar differential curves. However, this weakening-correlation tendency is not monotone.

The modern methods of analytical chemistry have vast possibilities in the quantitative and qualitative analysis of a broad set of various compounds. In the last decades, thanks to the development of new mathematical methods for signal data processing, multisensor methods and systems (e.g. the "electronic tongue" or the "electronic nose"), associated with the simultaneous treatment of a large amount of data, have been proposed. These systems enable the identification of a few components that are


[Fig. 8.6: (a) distribution of the gravity centres (xc, yc) for the two types of electrodes; (b) distribution of the ellipse sizes (A, C) for the two types of electrodes.]

Fig. 8.6 (a) Parameters of the DGI of the second order (Eq. (8.9)). The gravity centres (xc, yc) for both ellipses remain almost constant, while some of the parameters change. These changes for the carbon (C) electrode are expressed more clearly and are not monotone in comparison with the behaviour of the gravity centre of the Pt electrode (the lowest curve). (b) The behaviour of the parameters


[Fig. 8.6c: behaviour of the correlation angle α for the two types of electrodes.]

Fig. 8.6 (continued) A and C (see expressions (8.7) and (8.11)) characterizing the DGI of the second order. The parameters C for the two types of electrodes remain constant, while the parameters A change. Again, for the C-electrode these changes are expressed more clearly than for the Pt electrode. (c) The trend of the correlation angle α (in radians) for the two types of electrodes: it is monotone for the Pt-electrode, and non-monotone and less correlated for the C-electrode. This angle is computed with respect to π/4, clockwise

simultaneously present in the analysed solution, and complex solutions can be treated without a detailed quantitative analysis (e.g. the identification of quality, the evaluation of taste and smell, etc. [18]). In the recent past, there has been a widespread diffusion of voltammetric and potentiometric multisensor systems. Well-known commercial variants of these systems are the "α-Astree electronic tongue" (France) and the "Insent taste sensing system TS-5000Z" (Japan) [19]. The sensitivity and selectivity of the electrochemical sensor systems can be enhanced by incorporating sensors of various types. The treatment of the obtained experimental data can be performed by using different mathematical methods, depending on the analytical problem to be solved. For example, the PCA and SIMCA have been used for the qualitative detection of various solutions in [20, 21], while the PLS, artificial intellectual nets (AIN), and other methods have been applied in [22, 23] for quantitative analysis. Besides, wavelet analysis, the Fourier transform, and other methods [23] are suitable for the preliminary data treatment. The main difficulties in data analysis are associated with the recognition problems and the quantitative detection of substances having a similar structure. Therefore, many papers in the electroanalytical chemistry field


[Fig. 8.7: distribution of the statistical parameters σB and σC of the fourth-order curve for the Pt and C electrodes.]

Fig. 8.7 The behaviour of the parameters σB,C (given by (8.18)) of the DGI of the fourth order. These parameters are the most sensitive to possible changes on the electrodes (Pt, C) in the given solution. If the sequences are strongly correlated, then these parameters tend to the unit value. Again, the weakening-correlation phenomenon emerges for all the analyzed parameters, with a quasi-monotone behaviour

regard sensor surface modifications aimed at increasing the selectivity in the detection of similar substances. The isomers fall within these classes because they can be reduced and oxidized at near potentials. They are used as drug substances and have different biological activity depending on their molecular structure. Their pharmacological activity is presumably associated with the action of only one stereoisomer [24, 25]. This peculiarity stimulates the development of enantioselective sensors and multisensor systems that are used for the identification of biologically active enantiomers [26–29]. To this class of substances one can refer tryptophan and cysteine (amino acids), and propranolol and atenolol (used as anti-arrhythmia remedies). They exhibit electroactivity in the anode potential range, which enables the use of voltammetric methods as detectors in pharmacology. One of the approaches used for the creation of the enantioselective layer for enantioselective sensors is the electrochemical oxidation/reduction polarization of the electrode surface in the presence of one enantiomer [24, 29–36]. Variants of this realization are reported in [33, 34, 36]. There is a variety of voltammetric sensors based on inclusion complexes [37, 38], molecularly imprinted polymers [39–41], elements of living systems [42–44], and sensors including various organic and


nonorganic structures [45, 46]. The practical application of these sensors for multicomponent analysis is limited by their low selectivity with respect to the given enantiomer, their sensitivity to random/obstructive components, and the short life of the enantioselective layer during the electrolysis response time. When conventional treatment methods are applied (e.g. the analysis of the peak current or of the potential peak of oxidation/reduction), the evolution of the signal drift and its dispersion exceed the limiting values of the sensor resolvability. This problem could be overcome by creating new methods of registration of voltammetric data, including the activation of the sensor surface under conditions of continuous sensor functioning. As shown in [9], it is possible to obtain a new type of analytical signals, purified from random noise, even when the electroactive component is absent. The problem tackled in this chapter can be formulated as follows: is it possible to modify the conventional principal component analysis (PCA) to define a general and reliable method for the quantitative reading of any component of the score matrix? Besides, the solution to this problem should help to answer the following question: is it possible to differentiate the signals of sensors activated in the solutions of different enantiomers based on the proposed approach? This subsection proposes a combined method based on the modified Fourier transform and the generalised principal component analysis (GPCA). The combination gives additional quantitative information, useful for continuously functioning multi-sensor systems based on the multi-stage electrochemical modification of the working electrode.

8.2.2 Experimental Set-Up and Preliminary Data Analysis

The proposed multi-sensor system includes the sensor block with two glassy carbon electrodes (GCEs) and an electrochemical cell, the potentiostat/galvanostat Elins-P30S, and a mathematical treatment block (see Fig. 8.8 for details). The standard phosphate buffer solution (pH = 6.86, a mixture of Na2HPO4 and KH2PO4) is used as the background electrolyte. The analysed tryptophan solutions with 10^−3 M concentration (SIGMA-ALDRICH, assay 98.0% (HPLC)) are prepared by dissolving an accurately weighed portion in the background electrolyte. The electrochemical activation of the sensor layer takes place under the condition of continuous registration of 100 cycles of the polarisations. This process includes two stages:

1. Electrochemical regeneration: five successive cycles with the potential scan rate of 2.5 V/s in the presence of the solution of one (L- or D-) enantiomer.
2. Registration of the VAG oxidation/reduction curve in the solution at the given potential scan rate of 0.5 V/s in the range of potentials [0, 1.9] V.

The scheme of the activation cycle is the same as in Fig. 8.1.


Fig. 8.8 Block scheme of the experiment and the chemical reaction of tryptophan oxidation [33]

[Fig. 8.8 scheme: potentiostat-galvanostat, sensor block with two GCEs on a magnetic stirrer, mathematical treatment block, and the oxidation reaction of tryptophan. Fig. 8.9: overlapping voltammograms of L-Trp and D-Trp.]

Fig. 8.9 Voltammograms of the tryptophan isomers obtained without sensor activation

A set of experiments was performed to test the sensor system. After the activation of the sensor system in the conventional solution belonging to one of the enantiomers, the electrodes were placed in a different enantiomer solution, with the subsequent registration of the corresponding VAGs. The registration of each VAG took 4 seconds, and the time interval between successive VAGs was 16 seconds (Fig. 8.1). Therefore, the duration of the whole experiment was 100 · 20 = 2000 seconds ≅ 0.56 hours. The Faraday range of the voltammograms was used for the detection of the tryptophan oxidation peak in the interval [+600, +1700] mV. As an example, Fig. 8.9 shows the registered VAGs obtained in the absence of sensor activation; the integral voltammograms obtained in these conditions on the unmodified GCEs almost overlap.


Fig. 8.10 The averaged differential VAGs of the enantiomers: (a) activation in the L-Trp solution (initially the L-type is registered, then the D-type); (b) activation in the D-Trp solution (initially the D-type is registered, then the L-type). For the registration, the two-electrode cell is used. (a) After the sensor activation in the L-Trp solution, the VAGs of the L- and D-Trps differ from each other by the values of the peak oxidation potentials: the peak of D-Trp is shifted to the anode region. (b) The activation of the sensor in the D-Trp solution leads to the shifting of the D-oxidation peak to the anode region. This observed shifting is not related to the background changes of the electrode potentials in time; it is evoked by the electrochemical activation of the sensor due to the nature of the enantiomer used

This circumstance indicates the correct preparation of the isomer solutions and the equality of their molar concentrations. After the sensor activation procedure, the VAGs of the L- and D-tryptophans are easily differentiated with respect to their oxidation peak position (see Fig. 8.10). The experimental results show that the order of the replacement of the L- and D-solutions does not affect the location of the corresponding peaks. The peak shifting is the result of the directed (controlled)


changing of the sensor surface. The surface modification appears as the result of the electrode activation in the enantiomer solution. For a qualitative confirmation of the sensor surface modification, another experiment was performed, in which the impedance spectra were registered after each activation cycle. To highlight the originality of the proposed algorithm, it is worth spending a few words on impedance spectroscopy, which is a highly sensitive method for studying the phase boundary near the electrode surface. Impedance spectra can be registered for different processes [1, 47] in an electrochemical cell (charge transfer, semi-infinite and quasi-spherical diffusion, diffusion in some limited region, non-faradaic processes in substance sorption/desorption on the electrode surface, etc.) and, therefore, have different forms. Different mathematical methods and algorithms have been proposed for decoding the impedance spectra, depending on the specific problem. When analysing the electrochemical systems "electrode/solution", all problems can be divided into two sub-problems [48–51]: (a) the determination of the physicochemical parameters (the chemical reaction velocities, the diffusion coefficients, etc.) with the subsequent identification of the nature of these dynamical processes; (b) the identification of the kinetic reaction (nature) of the considered process. In [52, 53], based on the PCA, the authors proposed the combined impedance-kinetic method that allows determining the quantity and the intervals of time associated with the various stages of the physical and chemical processes related to the interaction with the electrode surface. Here, the PCA is generalised and, for the first time, the combined approach is applied to confirm the electrode activation under the condition of continuous sensor-system functioning.
The impedance spectra were recorded on an impedance meter "Elins" Z500R in the alternating-current frequency range from 500 kHz to 10 Hz, with an amplitude of 5 mV and constant mixing on a magnetic stirrer. By using the conventional PCA, Fig. 8.11 plots the lowest score-matrix components that correspond to the imaginary impedance components obtained during 160 sensor cycles with and without the activation procedure in the solution of D-Trp. Figure 8.11 shows the typical bends at the achievement of the 12th, 55th, and 120th activation cycles with the activation procedure. These bends indicate the changing of the sensor layer during the activation procedure inside the enantiomer solution. In the absence of an applied potential on the sensor (without activation), with the same experimental duration as with activation, only one bend at the 12th cycle can be observed (see Fig. 8.12). By continuing the experiment, no extreme peculiarities in the behaviour of the compared PCA-components are observed. This circumstance can be explained by the absence of changes in the sensor surface placed in the given enantiomer solution. The large share of the uncontrolled random component in the registered VAGs does not allow one to identify two enantiomers when their oxidation potentials have a small difference. Figure 8.13 depicts the standard deviations of the currents as a function of the applied potential.
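Score-matrix components of the kind plotted in Figs. 8.11 and 8.12 can be obtained with the conventional PCA via the singular value decomposition. The sketch below is illustrative only: the synthetic "spectra", the level shift after the 12th cycle, and the array sizes are assumptions, not the measured data.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Conventional PCA via SVD: rows of X are spectra (one per activation
    cycle), columns are frequency points. Returns the score matrix T = U*S."""
    Xc = X - X.mean(axis=0)                  # centre each frequency channel
    U, S, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :n_components] * S[:n_components]

# illustrative data: 160 cycles of a 40-point imaginary impedance spectrum
# with a level shift after cycle 12, mimicking a "bend" of the sensor layer
rng = np.random.default_rng(1)
freq = np.linspace(0.0, 1.0, 40)
X = np.array([np.exp(-3.0 * freq) * (1.0 if c < 12 else 1.3)
              + 0.01 * rng.standard_normal(40) for c in range(160)])
scores = pca_scores(X)
# the first score component separates the cycles before and after the change
```

Plotting `scores[:, 0]` against the cycle number reproduces the kind of kinetic curve on which the bends of Figs. 8.11 and 8.12 are read.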

8.2 The Generalized Principal Component Analysis and Its Application. . .


Fig. 8.11 PCA impedance-kinetic curves covering 160 activation cycles: (a) 1–160 activation cycles, (b) 12–160 activation cycles. The black arrow (from right to left) shows the direction of the activation order in time

From the analysis of Figs. 8.10 and 8.13, it follows that the relative deviation of the analytical signal in the region of the tryptophan oxidation can exceed 50%. The literature [54] dealing with the analysis of noise in electrochemical systems shows that it is possible to apply background signals for analytical purposes, i.e. for the detection of unknown substances, their structures, etc. [55–58]. For a preliminary analysis of the VAGs and their derivatives, one can use the PCA [28, 59, 60], which allows discovering the internal data structure and identifying new relationships in the analyzed data. Here, the GPCA is used in combination with


8 Applications of NIMRAD in Electrochemistry

Fig. 8.12 PCA impedance-kinetic curves obtained without activation: (a) 1–160 cycles, (b) 12–160 cycles. The black arrow shows the direction of increasing activation cycle number in time

the Fourier analysis to read the large amount of data quantitatively at any stage of their possible evolution.


Fig. 8.13 Standard deviations of the corresponding currents as a function of the applied potential U (U is given in mV): (a) activation of L- then D-Trp; (b) activation of D- then L-Trp. (a) The standard deviations related to the instantaneous current values depend on the applied potential and increase in the Faraday region. This peculiarity points to additional information in the massive data, associated with modifications of the sensor surface in time, and requires the application of special methods for reading this specific type of data. (b) The narrow peaks indicate more uniform changes of the sensor surface at activation in the D-Trp solution


8.2.3 The Mathematical Section of the PCA and the Modified F-Transform

This subsection recalls the basic expressions and definitions of the PCA. Consider the rectangular matrix M_{i,j} (i = 1, 2, \ldots, N_r; j = 1, 2, \ldots, N_c), where the indexes i, j run over the rows (N_r) and columns (N_c \le N_r), respectively. According to the general algorithm of the PCA, the initial matrix should be put in the form

M_{i,j} = T \cdot P^{T} + E \equiv \sum_{a=1}^{A} T_i^{(a)} P_j^{(a)} + E_{i,j},   (8.31)
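In practice, the decomposition (8.31) is usually obtained from the SVD of the centered data matrix. The sketch below is a minimal illustration with synthetic data; the function name, matrix sizes and the choice A = 3 are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pca_decompose(M, A):
    """Decompose a centered rectangular matrix M (Nr x Nc) as
    M = T @ P.T + E, keeping the first A principal components (cf. (8.31))."""
    # SVD: M = U @ diag(sigma) @ Vt
    U, sigma, Vt = np.linalg.svd(M, full_matrices=False)
    T = U[:, :A] * sigma[:A]   # score matrix (Nr x A)
    P = Vt[:A].T               # loading matrix (Nc x A)
    E = M - T @ P.T            # residual matrix
    return T, P, E

rng = np.random.default_rng(0)
M = rng.standard_normal((160, 40))   # e.g. 160 cycles x 40 spectral points
M -= M.mean(axis=0)                  # column centering
T, P, E = pca_decompose(M, A=3)
# The identity M = T @ P.T + E holds by construction
assert np.linalg.norm(M - (T @ P.T + E)) < 1e-10
```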

(a > 3) and prove that the further components, plotted as functions of one another and initially looking like a random distribution of data points, have a certain deterministic nature. Therefore, it is possible to retrieve more information from such representations than if they were presented in a different way. This modification of the conventional PCA solves the problem formulated above and broadens the limits of its application. To illustrate the new analysis approach, it is convenient to use the dimensionless


potential by simply normalizing the applied potential U to 1 V, i.e. u = U(V)/1 V, and using the uniform scale

u_i = \min(u) + \frac{i}{N_r}\left(\max(u) - \min(u)\right), \quad i = 0, 1, \ldots, N_r.   (8.34)
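A minimal sketch of the uniform dimensionless grid (8.34), together with a row-wise standardization in the spirit of (8.35) below; the potential range and array sizes are invented for illustration.

```python
import numpy as np

def uniform_grid(u, Nr):
    """Uniform grid u_i = min(u) + (i/Nr)*(max(u) - min(u)), i = 0..Nr, cf. (8.34)."""
    i = np.arange(Nr + 1)
    return u.min() + (i / Nr) * (u.max() - u.min())

def standardize_rows(M):
    """Row-wise standardization G_s = (M_s - mean(M_s)) / stdev(M_s - mean(M_s)),
    in the spirit of (8.35)."""
    centered = M - M.mean(axis=1, keepdims=True)
    return centered / centered.std(axis=1, keepdims=True)

U = np.linspace(-200.0, 1600.0, 1190)  # applied potential in mV (invented range)
u = U / 1000.0                         # dimensionless potential, U normalized to 1 V
grid = uniform_grid(u, Nr=1189)        # 1190 uniformly spaced points
```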

Then the normalized matrix G defined below is used, which is associated with the measurements of the background under the cycling potential U, for the GCE, covering the Faraday region, i.e. the current region associated with the oxidation of different organic compounds (if present in the solution). The main problem is to find a general platform for the quantitative description of the basic components of the score matrix T^{(a)}. Figure 8.14a, b plot the measurements for the normalised matrix

G_s = \frac{M_s - \mathrm{mean}(M_s)}{\mathrm{stdev}(M_s - \mathrm{mean}(M_s))}, \quad s = 1, 2, \ldots, N_r,   (8.35)

for s = 1, 10, 20, 50, 100, together with the mean measurement averaged over all experiments, for the two types of tryptophans (L- and D-Trps). Figure 8.14a demonstrates that the derivative curves dJ/du are more sensitive components for the GPCA analysis than the integral curves used in Sect. 8.1 for the DGI method. Differentiating these curves with respect to u eliminates the undesirable envelopes and gives a clearer expression of the desired shift. Moreover, Fig. 8.15 represents the second component of the score matrix, T^{(2)}, as a function of the vector T^{(1)}, after the application of the SVD procedure. The application of the Fourier transform to the vectors T^{(1)} belonging to the L- and D-tryptophans gives the results in Fig. 8.16. The plot shows two leading frequencies with predominant amplitudes, which approximately fulfil the condition \omega_0^{(1)} + \omega_1^{(1)} \cong 2\pi, and other frequencies with negligible amplitudes that can be omitted at the first stage. The dependence of the vector T^{(1)}(u) can be expressed mathematically, approximately, as

T^{(1)}(u) = A_0^{(1)} + A_{cm}^{(1)} \cos\left(\omega_0^{(1)} u - \varphi_0^{(1)}\right),   (8.36)

where the parameters A_0^{(1)}, A_{cm}^{(1)}, \omega_0^{(1)}, \varphi_0^{(1)} determine the initial vibration point, the initial amplitude, the frequency and the phase, respectively. The contribution of the second frequency \omega_1^{(1)} = 2\pi - \omega_0^{(1)} keeps (8.36) invariant and, therefore, is not considered initially. The application of the Fourier transform to the vector T^{(2)}(u) produces similar results, with the same leading frequencies \omega_1^{(1)} = 2\pi - \omega_0^{(1)}, while the other parameters may differ. So, one can write approximately


Fig. 8.14 (a) Randomly selected set of measured curves dJ/du obtained for L-Trps. These initial curves change essentially with the number (m = 1, 10, 20, 50, 100) of the experiment. The time interval between measurements is 16 seconds. All curves are normalized in accordance with (8.35). (b) Randomly selected set of measured curves dJ/du obtained for D-Trps. These initial curves also change essentially with the number (m = 1, 10, 20, 50, 100) of the experiment. The set is similar to the one depicted in Fig. 8.14a, but the peaks are shifted and the amplitudes for D-Trps slightly exceed the values calculated for L-Trps. All curves are normalized in accordance with (8.35)


Fig. 8.15 The T^{(2)}(u) component as a function of the T^{(1)}(u) component for L- and D-Trps, respectively. The plots form two "fat" ellipses with the same centre (see Fig. 8.16 for details)

Fig. 8.16 Fourier transform (FFT) of the components of the score matrix T^{(1)}(u) for L- and D-Trps, respectively. The F-spectra are very similar and present two leading frequencies. This peculiarity prompts the decomposition of the score matrix components into a segment of the Fourier series. The subplot depicts the ratio FFT(L)/FFT(D)


T^{(2)}(u) = A_0^{(2)} + A_{cm}^{(2)} \cos\left(\omega_0^{(2)} u - \varphi_0^{(2)}\right), \quad \omega_0^{(2)} = \omega_0^{(1)}, \quad \omega_0^{(2)} + \omega_1^{(2)} \cong 2\pi.   (8.37)

Therefore, the plots of T^{(2)} = F(T^{(1)}) in Fig. 8.15 reproduce the well-known Lissajous figures of the conventional theory of mechanical vibrations. In particular, for \varphi_0^{(1)} = \varphi_0^{(2)}, the Lissajous figure is a straight line, which indicates, in the conventional PCA, a strong correlation between the compared random sequences. The Lissajous figure becomes the classical ellipse for \varphi_0^{(2)} = \varphi_0^{(1)} + \pi/2, while it is an ellipse rotated counterclockwise by the angle \alpha for \varphi_0^{(2)} = \varphi_0^{(1)} + \alpha. For \omega_0^{(1)} \ne \omega_0^{(2)}, \varphi_0^{(2)} - \varphi_0^{(1)} = \alpha, different Lissajous figures are obtained, which can be observed in the case of clusterization of the data points forming the initial vectors T^{(1)} and T^{(2)}. Figure 8.15 demonstrates that the dependence T^{(2)} = F(T^{(1)}) results in "fat" ellipses for the considered case study. The envelope of each ellipse is slightly distorted by the influence of other small frequencies present in the Fourier-series decomposition forming the vectors T^{(1)} and T^{(2)}. The quantitative characterization and parameterization require the use of fragments of the Fourier series belonging to the vectors of the score matrix:

T^{(s)}(u) \cong A_0^{(s)} + \sum_{k=1}^{K}\left[A_{ck}^{(s)} \cos\left(\omega_k^{(s)} u\right) + A_{sk}^{(s)} \sin\left(\omega_k^{(s)} u\right)\right] \equiv A_0^{(s)} + \sum_{k=1}^{K} A_{mk}^{(s)} \cos\left(\omega_k^{(s)} u - \varphi_k^{(s)}\right),   (8.38)

A_{mk}^{(s)} = \sqrt{\left(A_{ck}^{(s)}\right)^2 + \left(A_{sk}^{(s)}\right)^2}, \quad \tan\varphi_k^{(s)} = \frac{A_{sk}^{(s)}}{A_{ck}^{(s)}}, \quad s = 1, 2, \ldots, N_c.

It is worth noting that, if \omega_{k+1}^{(s)} - \omega_k^{(s)} = a(s), with a(s) = 2\pi/\tau(s), and \tau(s) coincides with the period of vibration of the vector T^{(s)}, then the fragment (8.38) coincides with the Fourier decomposition. Conversely, when \omega_{k+1}^{(s)} - \omega_k^{(s)} \ne a(s), the decomposition recovers the NAFASS (Non-orthogonal Amplitude-Frequency Analysis of the Smoothed Signals) approach [61, 62], where the set of frequencies follows a different dispersion law. To conclude, one can say that the behaviour of T^{(s)}(u) can be characterized naturally by its Amplitude-Frequency Response (AFR), defined by \{\omega_k^{(s)}, A_{mk}^{(s)}\}, and, therefore, the dependencies T^{(s+1)}(u) = F(T^{(s)}(u)), initially looking like random dependencies, can have a natural (and deterministic) explanation. Actually, the parameterization of the score vectors has been found by using the dimensionless potential u. Therefore, as a criterion for neglecting a possible dependence T^{(s)}(u) at relatively large s, one can use the range of this vector. From the definition of T^{(s)}(u) (T = U \cdot S), when it is multiplied by the eigenvalue \sigma_s, its range decreases with increasing order s. However, there is no adequate chemical interpretation of


the AFR, especially for the spectrum \{\omega_k^{(s)}\} belonging to each component of the score matrix T^{(s)}. Therefore, although the problem of finding a suitable parameterization of the components T^{(s)}(u) has been solved, the following one naturally arises: what are the microscopic reasons that select a given set of frequencies and lead to the specific behaviour emerging from Figs. 8.15 and 8.16? This question is still open and needs further research.
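With the frequencies fixed in advance, the parameterization (8.38) reduces to a linear least-squares problem for the coefficients A_0^{(s)}, A_{ck}^{(s)}, A_{sk}^{(s)}, and the fit quality can be judged by the relative error of the type (8.39) used below. The sketch uses a synthetic signal and an ad hoc equidistant frequency set (i.e. the Fourier, rather than NAFASS, dispersion law); all names are illustrative.

```python
import numpy as np

def fourier_fit(u, y, omegas):
    """Least-squares fit of y(u) by A0 + sum_k [Ack cos(wk u) + Ask sin(wk u)],
    a truncated decomposition of the type (8.38); returns the fitted curve
    and the AFR amplitudes Amk = sqrt(Ack^2 + Ask^2)."""
    cols = [np.ones_like(u)]
    for w in omegas:
        cols += [np.cos(w * u), np.sin(w * u)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    Ack, Ask = coef[1::2], coef[2::2]
    return X @ coef, np.hypot(Ack, Ask)

def rel_error(y, yfit):
    """Relative fitting error in percent, cf. (8.39)."""
    return np.std(y - yfit) / np.mean(np.abs(y)) * 100.0

u = np.linspace(0.0, 1.0, 500)
rng = np.random.default_rng(1)
y = 1.0 + 3.0 * np.cos(2 * np.pi * 3 * u - 0.7) + 0.05 * rng.standard_normal(500)
yfit, Amk = fourier_fit(u, y, omegas=2 * np.pi * np.arange(1, 11))
assert rel_error(y, yfit) < 10.0   # the 0-10% acceptability criterion
```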

8.2.4 Application of the Modified Platform to Real Data

This subsection shows how to apply the modified method to detect the differences between L- and D-tryptophans. The algorithm can be outlined as follows:
1. Normalize the initial rectangular matrix according to (8.35).
2. Apply the SVD procedure to obtain the score components of the vector T^{(s)}(u).
3. Analyse the neighbouring plots T^{(s+p)}(u) (s, p = 0, 1, 2, 3, ...) with respect to T^{(s)}(u). This preliminary observation helps to single out the most different vectors. In the case considered here, it is sufficient to choose the initial vectors satisfying the condition s + p = 1, 3.
4. Decompose the selected vectors into the segment of the Fourier series according to (8.38). The dispersion law \omega_k^{(s)} = \omega^{(s)}(k) and the moduli A_{mk}^{(s)} are the markers that serve as necessary functions to describe any score vector T^{(s)}(u), especially in the absence of an adequate microscopic model.
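The four steps above can be sketched as follows; the matrix, its orientation (rows = potential grid points, columns = cycles) and the frequency set are illustrative assumptions, not the authors' data.

```python
import numpy as np

# Illustrative pipeline following steps 1-4 with synthetic data.
rng = np.random.default_rng(2)
M = rng.standard_normal((1190, 100))      # rows: potential grid, columns: cycles

# Step 1: normalize each measured curve (cf. (8.35))
G = M - M.mean(axis=0, keepdims=True)
G /= G.std(axis=0, keepdims=True)

# Step 2: SVD; the columns of T = U @ diag(sigma) are the scores T^(s)(u)
Uc, sigma, Vt = np.linalg.svd(G, full_matrices=False)
T = Uc * sigma

# Step 3: inspect neighbouring components, e.g. T^(2) against T^(1)
x, y = T[:, 1], T[:, 2]

# Step 4: decompose a selected component into a Fourier segment (8.38)
u = np.linspace(0.0, 1.0, T.shape[0])
omegas = 2 * np.pi * np.arange(1, 31)
X = np.column_stack([np.ones_like(u)] +
                    [f(w * u) for w in omegas for f in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(X, T[:, 1], rcond=None)
Amk = np.hypot(coef[1::2], coef[2::2])    # AFR marker for T^(1)(u)
```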

Fig. 8.17 Dispersion laws for the two types of tryptophans. For the "truncated" spectra, the set of frequencies located in the interval mean(\omega)/2 < \omega < max(\omega) is chosen. This interval ensures an acceptable fit of the components T^{(1)}(L,D), with a relative error within the interval 0 < Rel(Error) < 10%


Figures 8.14, 8.15, and 8.16 represent, in part, the results of the application of the algorithm. To complete the analysis of the first components, Fig. 8.17 depicts the dispersion laws for the L- and D-type tryptophans. These frequency spectra are obtained as a part of the whole set of frequencies and are located in the interval mean(\omega_k)/2 < \omega_k < max(\omega_k). The selected set is sufficient to provide an acceptable fit with a relative error

RError = \frac{\mathrm{stdev}(y - y_{fit})}{\mathrm{mean}|y|} \cdot 100\%,   (8.39)

within the interval 0 < RError < 10%. The function T^{(1)}(u) is associated with the corresponding component of the score vector T^{(s)}, and the fitting function is given by (8.38). Because of this requirement, a number of frequencies sufficient to calculate the desired AFR is considered. Figure 8.18a, b represent the fit of the components T^{(1)}(u) for L- and D-Trps, respectively. The fitting function provides an acceptable fit, with a relative error within the interval [0, 10]%, for the components T^{(1)}(u), which initially looked like "random" sequences. Moreover, Eq. (8.38) gives the opportunity of a different representation of the data, widely used in radioelectronics, i.e. the modified F-transform based on a limited number of frequencies. This representation helps to find easily the desired discrepancies expressed in the form of AFRs, which are represented in Fig. 8.19. From this simple analysis, one can notice the significant set of frequencies that allows differentiating the L- and D-Trps in the "frequency" language. Attention should be drawn to the following points. An acceptable fitting accuracy was obtained by using only 480 frequencies from the large set of N = 1190 points. The set of selected frequencies in the interval mean(\omega_k)/2 < \omega < \omega_{max} is sufficient for differentiating the two types of tryptophans with a fitting error defined by (8.39) and lying within the interval [0, 10]%. One more aspect regards the values of the calculated AFRs. Figure 8.17 shows that the values of the fitting vectors T^{(1)}(u) are about 10^6 times larger than the values of the initial components, which enables the detection of even small differences between the compared components. The analysis proceeds in the same way for the other components of the score matrix, such as T^{(0)}(u) and T^{(3)}(u) (the T^{(2)}(u) component is not considered here because its behaviour is similar to T^{(1)}(u)). The ellipses in Fig. 8.20 appear distorted if compared with those in Fig. 8.15. However, expression (8.38) will be applied even in this case. As before, a limited number of frequencies (K = 500) is considered, to obtain a fit with a relative error within the interval [0, 10]%. Figures 8.21 and 8.22 show the desired fit of the components T^{(0,3)}(u), while Figs. 8.23 and 8.24 represent their AFRs. Differences between the two types of tryptophans can be observed for almost all components, but with different amplitudes (the most substantial for the component T^{(1)}(u), if compared with the other components). Besides this new "frequency" platform based on the analysis of the functions A_{mk}(\omega_k), it is possible to retrieve additional information from the eigenvalues of the


Fig. 8.18 (a) The fit of T^{(1)}(u) associated with L-Trp. The fitting accuracy is 7.5%. The initially spurious random component can be described in terms of the AFR given by (8.38). (b) The fit of the component T^{(1)}(u) associated with D-Trp. The fitting accuracy is 7.5%. The initially spurious random component can be described in terms of the AFR given by (8.38)


Fig. 8.19 The AFR(s) for T(1)(u) components belonging to L-(red) and D-(blue) tryptophans

Fig. 8.20 The plot of T^{(3)}(u) as a function of T^{(0)}(u) for the L- and D-tryptophans

initial matrix S. Figure 8.25 represents the distributions of the eigenvalues in descending order for both the L- and D-Trps; they can serve as specific fingerprints to differentiate the two types of tryptophans.


Fig. 8.21 The fit obtained by (8.38) for T(0)(u). The subplot shows the dispersion laws containing 450 frequencies, together with the value of the fitting error

Fig. 8.22 Comparison of the AFRs for the score components T^{(0)}(L,D). In the modified approach, one can use the spectroscopic "language" for an in-depth analysis of possible differences between the compared vectors of the score matrix. The boxes highlight the frequencies that can serve to differentiate the components T^{(0)}(L,D) in the frequency domain


Fig. 8.23 The fit obtained by expression (8.38) for T^{(3)}(u), with the corresponding fitting errors. This acceptable fit is obtained by increasing the number of frequencies up to 500

8.2.5 Results and Discussion

The main results of the previous analysis can be summarised as follows.
1. The conventional PCA method has been modified by adding the possibility of a quantitative reading of the basic components T^{(s)} (s = 0, 1, 2, ...) of the score matrix. This possibility is based on Eq. (8.38) and provides a potential new spectroscopic "space" associated with the set of frequencies (\omega_k) and their amplitudes (A_{ck}, A_{sk}, A_{mk}) for the measured electrochemical data.
2. The following fact is also worth remarking. The application of the conventional Fourier transform is possible in the presence of leading frequencies in the corresponding spectrum (see, for example, Fig. 8.16). The proposed scheme can be modified to deal with situations in which these leading frequencies are absent. To this aim, in Eq. (8.38), the period \tau(s) and the number K of the final mode can be considered as nonlinear fitting parameters.
3. This combined platform is relatively flexible and universal. It can be applied to the analysis of different types of curves registered in an electrochemical cell, especially when a microscopic model is absent.
4. The results presented in this subsection substantially increase the possibilities of multi-sensor systems for studying compounds of a similar nature. Accumulation of the chemical information about the depolarizator nature, formation and quantitative identification of a massive amount of data from analytical


Fig. 8.24 Comparison of the AFRs for the score vector components T^{(3)}(L,D). As in Fig. 8.11, the spectroscopic differences between the two types of tryptophan are evident when analyzing the third components T^{(3)}(L,D) in the frequency domain. The comparison of the AFRs for the three vectors T^{(0,1,3)}(u) (see Figs. 8.17, 8.14 and 8.24) shows that the most significant differences regard the T^{(1)}(L,D) component

signals, which are specifically related to a small amount of a substance located in the given solution, becomes possible not only in the presence of a large number of sensors, as conventionally accepted in the practice of multi-sensor analysis. Namely, a different way is proposed here: the accumulation of a large amount of chemical information under the condition of continuous functioning of even one (initially non-selective) working electrode. The conventional multi-sensor analysis lies at the background of the formation of the electronic "tongues" and "noses" and implies a wide spectrum of "tuned" electrodes for the detection of a specific substance in the given solution. The multi-sensor formation can be obtained differently, by using only one electrode during the whole electrochemical experiment. Each activation cycle changes the sensitivity of the given sensor (and, hence, its response), and a large number of similar measurements can change significantly the surface of the electrode. A large number of activation cycles generates, in turn, a large number of different states of the working electrode. Therefore, a fictitious set of "pseudo-sensors" shifted in time can be created, and repeating similar measurements provides a large amount of information on the evolution of the given solution and its components. The combination of the GPCA with the fast Fourier transform represents a new


Fig. 8.25 Distribution of the eigenvalues of the rectangular matrix for both the L- and D-Trps. The subplot depicts the ratio Eval(L)/Eval(D). As one can note from this figure, only a few (8.4) initial components can be used for identification

informative block related to the analytical signals measured in the frequency domain, expressed in the form of the Fourier spectrum.

8.3 The Fractal Theory of Percolation and its Application in Electrochemistry
8.3.1 Formulation of the Problem

The development of the mathematical methods associated with a quantitative description of the measured voltammograms (VAGs) has been aimed at increasing their sensitivity and resolution when the material of the electrodes is replaced by another one. One more important aspect considered was the variation of the temporal conditions during the registration process, which can affect the form of the registered VAGs. Basic analytical equations were obtained in an explicit or numerical form in past research [2]. They connect the value of the peak current with the parameters of the investigated solution, the geometry of the measuring device and the type of electrochemical cell. The shape of the measured VAG depends mostly on the size (macro (10^{-2} m), micro (10^{-6} m), ultra-micro (10^{-8} m)) and geometry (plane, disc-shaped, cylindrical, spherical, and modifications such as stripe-shaped and needle-shaped) of the used electrodes [63]. This conventional classification system solves the problems related to electrode sensitivity only if the registration conditions of the corresponding VAGs


ensure their reproducibility and the absence of overlapping signals. These two features improve the signal/noise ratio. Effective mathematical methods and the corresponding algorithms to extract the peaks of the desired micro-components from overlapping signals were developed in [64]. However, these methods have a mostly phenomenological character and are "tuned" to extract physical/chemical parameters when the concentration of the detected reagents is rather high. In any case, the results of any voltammetric experiment depend primarily on the structure of the interfacial area defined as the double electric layer (DEL). In the recent literature, there are many specific models describing the formation of the DEL near the electrodes [65–67] but, up to now, a "universal" model of the DEL suitable for different experimental situations does not exist. However, it has been experimentally proved that the DEL structure mainly depends on the material used for the preparation of the electrodes, its porosity/surface irregularity, the presence of absorbed organic/nonorganic films on its surface, the nature of the background electrolyte, and the covering solution. Therefore, the interest in understanding the nature of the DEL is not exhausted. For a more in-depth understanding of the problem posed in this subsection, it is worth mentioning the important remark made by Z. Stojek in the well-known book [68]: "there is still no theory able to predict the behavior and the capacitance of the double layer in the entire available potential window and under all conditions. In consequence, there is still no satisfactory agreement between the experimental and theoretical data regarding capacitance of the double layer. Hopefully, the new experimental techniques, such as atomic force and scanning tunneling microscopies [69] and scanning electrochemical microscopy [70], will allow electrochemists to learn more about the structure of the double layer at the atomic level.
On the theoretical side, the new digital methods of calculation provide a possibility to simulate, in time, all the changes within the double layer." Actually, this extended textual fragment contains the formulation of the problem that will be tackled below: is it possible to quantitatively describe a wide set of measured VAGs based on their self-similar/fractal properties? This section contains arguments supporting the feasibility of the percolation theory, which takes into account the fractal properties of the solid electrodes and of the DEL in the mesoscale region. It is worth mentioning the relevant literature investigating the influence of the roughness/interfacial surface of the solid electrodes [71–75]. However, in spite of their initial attractiveness, these papers did not find further propagation and development in electrochemistry. The first real power-law exponents, found theoretically and confirmed experimentally for the description of solid electrodes, were not sufficient for a quantitative description of the whole curve that fits the measured VAGs. As shown later, the fractal dimension becomes complex and starts depending on an external factor (the applied potential U in the considered case). These two new elements (the complexity of the fractal dimension and its dependence on the external factor) can be used to create the desired theory.


8.3.2 Foundations of the General Theory of Percolation Currents

The following assumptions underlie the proposed theory. The structures of the electrodes and of the surrounding space between them (including the influence of the DEL), subject to the applied potential U, have a self-similar/fractal structure. It is supposed that the range of scales occupies some intermediate position (\lambda < \eta < \Lambda), where \lambda and \Lambda delimit the mesoscale region. This means that it is possible to introduce some percolation region, which produces the desired current J under the applied potential U. Mathematically, this statement can be represented in the form

J_l(z) = R_l \sum_{n=-N}^{N} \left(b_l(z)\right)^n f_l\left(z\xi^n\right),   (8.40)

where N determines the number of self-similar regions, the variable z = U/U_0 is the dimensionless potential, f_l(z) gives the distribution of the currents in the l-th percolation channel, and the function b_l(z) and the scaling parameter \xi are the scaling factors. The parameter R_l, which has the dimension of a current, quantitatively determines the geometric factor of the l-th percolation channel; its role in the percolation process is explained below. The first factor b_l(z) selects the number of scaling regions participating in the percolation process and may depend on the applied dimensionless potential z = U/U_0. The scaling factor \xi can be random, and here its mean value is considered (the reduction of the set of distributed scaling factors (\xi_1\xi_2\ldots\xi_n)^{1/n} to the modified mean value \xi is discussed in the Appendix to this subsection). Modern electrochemistry [2] is grounded on complicated diffusion equations describing the elementary act of electron transfer from one electrode, through the conducting solution, to the other electrode. Solutions of simple problems can be obtained analytically, or numerically for more general diffusion equations including convection/transfer terms. These solutions are used as a basis for the explanation of the observed VAGs. However, in many cases these "microscopic solutions" are automatically applied/extended to the case of macroscopic electrodes, while the influence of the heterogeneities of the electrodes and of the surrounding medium (including the complex structure of the DEL) has so far not been taken into account.
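For a feeling of how (8.40) behaves, the sum can be evaluated numerically; the choices of b_l(z), f_l and all parameter values below are placeholders, since the source specifies only their general properties (f must decay at both ends of the cluster).

```python
import numpy as np

def percolation_current(z, R=1.0, N=20, xi=1.5,
                        b=lambda z: 1.1,
                        f=lambda x: x * np.exp(-x)):
    """Evaluate J(z) = R * sum_{n=-N}^{N} b(z)^n * f(z * xi^n), cf. (8.40).
    Here f ~ x for small x and decays exponentially for large x, so the
    boundary terms vanish, as required for the truncation leading to (8.44)."""
    n = np.arange(-N, N + 1)
    return R * np.sum(b(z) ** n * f(z * xi ** n))

z = np.array([0.2, 0.5, 1.0, 2.0])
J = np.array([percolation_current(zi) for zi in z])
```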

8.3 The Fractal Theory of Percolation and its Application in Electrochemistry

385


In the suggested theory, a mathematically correct averaging procedure is introduced for the transformation of the elementary diffusion equations to the mesoscopic and macroscopic levels. It follows the reasonable supposition that the heterogeneities of the microscopic regions, where the microscopic diffusion phenomenon takes place, have a self-similar/fractal structure. Mathematically, the influence of the generalized diffusion processes under the applied dimensionless potential z = U/U_0 is expressed by the "microscopic" function f_l(z), which should be significant among other competitive microscopic functions/mechanisms describing the diffusion process and may differ for every percolation channel l. As shown later, the specific form of this function is not essential; what matters is its behaviour at the limits of the mesoscale region, f_l(z\xi^{\pm N}), where the number of self-similar regions is N \gg 1. The proposed theory suggests a specific "bridge" between the microscopic phenomena associated with the transfer of electrons from one chemical substance to another, and the macroscopic/measured VAG, which reflects the collective transfer of many electrons through the self-similar media covering the electrodes and the conducting medium. The percolation model thus occupies an intermediate place among other theories claiming to describe the observed VAGs. As previously stated, R_l determines the geometric factor, or effectiveness, of the l-th percolation channel, tightly associated with its geometry: it can be a line, a surface or a volume where the percolation process takes place. The sum in (8.40) satisfies the following property

  1  J l ðzÞ þ ðbl ðzÞÞN fl zξNþ1  ðbl ðzÞÞN1 fl zξN bl ðzÞ

ð8:41Þ

It is supposed that the function b_l(z) is log-periodic and can be represented by a segment of the log-periodic series


b_l(z\xi) = b_l(z) = Ac_0^{(l)} + \sum_{q=1}^{Q \gg 1}\left[Ac_q^{(l)} \cos\left(2\pi q \frac{\ln(z)}{\ln(\xi)}\right) + As_q^{(l)} \sin\left(2\pi q \frac{\ln(z)}{\ln(\xi)}\right)\right].   (8.42)

From (8.42) it follows that b_l(\ln z) = b_l(\ln z \pm \ln\xi), and the decomposition coefficients Ac_q^{(l)}, As_q^{(l)} for the given function b_l(z) and the final "mode" Q can be found by the Linear Least Squares Method (LLSM). If the contribution of the functions f_l(z) at the ends of the percolation cluster becomes negligible, e.g. at |z| \gg 1

f_l(z) \cong \frac{B_1^{(l)}}{z}\exp(-\lambda_l z) + \frac{B_2^{(l)}}{z^2}\exp(-2\lambda_l z) + \ldots,   (8.43b)

then the contribution of the last two terms in (8.41) tends to zero, and from (8.41) one obtains approximately the following functional equation:

J_l(z\xi) \cong \frac{1}{b_l(z)} J_l(z).   (8.44)
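The LLSM determination of the coefficients of the log-periodic segment (8.42) amounts to a linear regression on cosine/sine columns in the variable ln(z)/ln(ξ). The sketch below is a synthetic check with invented coefficients; the function name and parameter values are illustrative.

```python
import numpy as np

def logperiodic_lstsq(z, b, xi, Q):
    """Recover Ac0, Acq, Asq of the log-periodic segment (8.42)
    b(z) = Ac0 + sum_q [Acq cos(2 pi q ln z/ln xi) + Asq sin(2 pi q ln z/ln xi)]
    by the Linear Least Squares Method (LLSM)."""
    phase = np.log(z) / np.log(xi)
    cols = [np.ones_like(z)]
    for q in range(1, Q + 1):
        cols += [np.cos(2 * np.pi * q * phase), np.sin(2 * np.pi * q * phase)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), b, rcond=None)
    return coef

# Synthetic check: a log-periodic b(z) with known (invented) coefficients
xi = 2.0
z = np.logspace(-1, 2, 400)
b = 1.0 + 0.3 * np.cos(2 * np.pi * np.log(z) / np.log(xi))
coef = logperiodic_lstsq(z, b, xi, Q=3)
assert abs(coef[0] - 1.0) < 1e-6 and abs(coef[1] - 0.3) < 1e-6
```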

Equation (8.44) has the following solution [76–79]:

J_l(z) = \left(\frac{1}{b_l(z)}\right)^{\ln(z)/\ln\xi} Pr_l(\ln z) \equiv z^{D_l(z)} Pr_l(\ln z),
Pr(\ln z \pm \ln\xi) = Pr(\ln z), \quad D_l(z) = \frac{\ln(\kappa_l(z))}{\ln(\xi)}, \quad \kappa_l(z) = \frac{1}{b_l(z)},   (8.45)

where D_l(z) defines the generalised fractal dimension of the l-th channel, which depends on the external factor z. Therefore, from the last equation one can restore the basic fractal parameters that determine the conducting structure of the given percolation cluster. Actually, the behaviour of the total VAG is determined by many percolation clusters. For an accurate identification of the number of percolation clusters involved in the percolation process, a preliminary analysis of the specific electrochemical experiment is necessary. Hereafter, the case of two dominant channels is considered in detail:

J_{tot}(z) = J_1(z) + J_2(z).   (8.46)


This hypothesis is based on the following logical arguments: (a) it describes the structure of the fractal electrodes and, simultaneously, the fractal properties of the DEL associated with the conducting medium; (b) the final fitting function that follows from (8.46) contains the minimal number of fitting parameters. Based on expression (8.44), it approximately holds:

J_{tot}(z) = J_1(z) + J_2(z),
J_{tot}(z\xi) = \frac{1}{b_1(z)} J_1(z) + \frac{1}{b_2(z)} J_2(z),
J_{tot}(z\xi^2) = \frac{1}{(b_1(z))^2} J_1(z) + \frac{1}{(b_2(z))^2} J_2(z).   (8.47)

Excluding the local currents J_1(z) and J_2(z) from the first two lines and substituting the obtained expression into the third line yields

J_{tot}(z\xi^2) = a_1(z) J_{tot}(z\xi) + a_0(z) J_{tot}(z),
a_1(z) = \kappa_1(z) + \kappa_2(z), \quad a_0(z) = -\kappa_1(z)\,\kappa_2(z),
(\kappa(z))^2 - a_1(z)\kappa(z) - a_0(z) = 0, \quad \kappa_{1,2}(z) = \frac{1}{b_{1,2}(z)}.   (8.48)
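Given the coefficients a_1(z) and a_0(z), the functional roots of the quadratic in (8.48) and the corresponding fractal dimensions (8.45) follow directly; the numerical values below are purely illustrative.

```python
import numpy as np

def kappa_roots(a1, a0):
    """Functional roots of kappa^2 - a1*kappa - a0 = 0, cf. (8.48),
    where a1 = kappa1 + kappa2 and a0 = -kappa1*kappa2."""
    disc = np.sqrt(a1 ** 2 + 4 * a0)
    return (a1 + disc) / 2, (a1 - disc) / 2

def fractal_dimension(kappa, xi):
    """Generalised fractal dimension D = ln(kappa) / ln(xi), cf. (8.45)."""
    return np.log(kappa) / np.log(xi)

k1, k2 = 0.8, 0.5                   # illustrative channel roots
a1, a0 = k1 + k2, -k1 * k2          # coefficients as they enter (8.48)
r1, r2 = kappa_roots(a1, a0)
assert np.isclose(r1, k1) and np.isclose(r2, k2)
D1 = fractal_dimension(r1, xi=1.5)  # negative here, since kappa < 1 (i.e. b > 1)
```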

The functional equation (8.48) has the following solution [76, 78, 79]:

J_{tot}(z) = (\kappa_1(z))^{\ln(z)/\ln\xi} Pr_1(\ln z) + (\kappa_2(z))^{\ln(z)/\ln\xi} Pr_2(\ln z).   (8.49)

In (8.49), the functional roots \kappa_1(z) and \kappa_2(z) derive from the quadratic Eq. (8.48), while the functions Pr_1(\ln z) and Pr_2(\ln z) are log-periodic functions and can be represented by decompositions similar to (8.42). The solution (8.49) generalizes the previous results obtained for the case of a functional dependence of the roots \kappa_1(z) and \kappa_2(z) on z. Because of assumption (8.42), the corresponding roots \kappa_{1,2}(\ln z \pm \ln\xi) = \kappa_{1,2}(\ln z) are log-periodic functions, too. The theory can be generalised to an arbitrary number of channels (l = 1, 2, \ldots, L). In this case, by induction, one obtains

J_L(z\xi^L) = \sum_{l=0}^{L-1} a_l(z) J_L(z\xi^l),   (8.50)

where the functional equation (8.50) has the solution


J_L(z) = \sum_{l=1}^{L} (\kappa_l(\ln z))^{\ln z/\ln\xi} Pr_l(\ln z) \equiv \sum_{l=1}^{L} \exp\left(D_l(z)\ln z\right) Pr_l(\ln z), \quad D_l(z) = \frac{\ln(\kappa_l(\ln z))}{\ln\xi} = -\frac{\ln(b_l(z))}{\ln\xi}.   (8.51)

Here, the functional "roots" derive from the polynomial

(\kappa(z))^L - \sum_{l=0}^{L-1} a_l(z)(\kappa(z))^l = 0, \quad \kappa_l(z) = \frac{1}{b_l(z)}, \quad l = 1, 2, \ldots, L.   (8.52)

The analysis of an important dependence such as the fractal dimension D_l(z) brings out the following peculiarities. If the fractal dimension becomes negative, then b_l(z) > 1. This implies that all large "pieces" of the fractal structure are involved in the percolation process. In this case, the percolation process is accelerated, and for some values of the ratio ln(U/U_0) one can expect limiting minimal values in the behaviour of D_l(z). In the opposite case, when D_l(z) > 0 and b_l(z) < 1, one can expect that the percolation process is "frozen" or terminated. Therefore, in the general behaviour of the fractal dimension, one should observe one or more limiting minimal values of D_l(z), satisfying the condition 1 < |D_l(ln z)| < 3, where the percolation process is accelerated, while for the regions b_l(ln z)