De Gruyter Studies in Mathematical Physics 1

Editors
Michael Efroimsky, Bethesda, USA
Leonard Gamberg, Reading, USA
Dmitry Gitman, São Paulo, Brasil
Alexander Lazarian, Madison, USA
Leonid A. Mironovsky Valery A. Slaev
Strip-Method for Image and Signal Transformation
De Gruyter
Physics and Astronomy Classification 2010: 07.05.Pj, 07.59.Qx, 84.40.Az, 85.70.Kh, 85.70.Li, 87.61.Hk, 02.10.Yn, 89.70.Kn.
ISBN 978-3-11-025192-0 e-ISBN 978-3-11-025256-9
Library of Congress Cataloging-in-Publication Data

Mironovskii, L. A. (Leonid Alekseevich)
Strip-method for image and signal transformation / by Leonid A. Mironovsky, Valery A. Slaev.
p. cm – (De Gruyter studies in mathematical physics ; no. 1)
Includes bibliographical references and index.
ISBN 978-3-11-025192-0 (alk. paper)
1. Image processing–Mathematics. 2. Signal processing–Mathematics. 3. Finite strip method.
I. Slaev, Valery A. II. Title.
TA1637.M57 2011
621.36′7–dc23
2011021425
Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de.

© 2012 Walter de Gruyter GmbH & Co. KG, 10785 Berlin/Boston
Typesetting: RoyalStandard, Hong Kong, www.royalstandard.biz
Printing and binding: Hubert & Co. GmbH & Co. KG, Göttingen
Printed on acid-free paper
Printed in Germany
www.degruyter.com
Foreword

In transmitting signals over communication channels it is very important to decrease the level of channel interference (noise) and distortion introduced in various channel links or, in other words, to increase the accuracy (decrease the error) of signal transmission over a communication channel. The present monograph analyzes an original method for increasing the noise immunity of information storage and transmission systems, called the strip-method. Its main point consists in a preliminary transformation of the signal at the transmitting end: the signal is "split" into parts (strips) of equal duration (length), linear combinations of these parts are formed, and they are then "stuck" together into a single signal of the same (or greater) duration. At the receiving end the mixture of signal and noise received from the communication channel is subjected to the inverse procedure, which "stretches" pulse disturbances over the whole signal duration while simultaneously decreasing their amplitude. This lowers the relative noise level and, correspondingly, increases the noise immunity.

The same transformation of an image amounts to dissecting it into a great number of similar fragments (fragmentation), forming their linear combinations and transforming these fragments inversely (defragmentation). As a result a new image appears that does not outwardly resemble the original one. If this image is subjected to pulse noise that distorts or completely destroys some of the fragments, then after restoration the whole original image will be obtained, only its quality will be slightly worse.

In setting out the material the authors concentrate attention on the following issues:

• development of a method of isometric pre-distortion, i.e. the strip-method, which does not change the signal "volume" and increases its tolerance to the pulse noise acting in communication channels; the method is based on direct and inverse linear signal transformations described with the help of matrices;
• determination of requirements for strip-method operators, the fulfillment of which provides the continuity and "smoothness" of the transformed signal; uniformity of pulse noise distribution over the signal duration and image area; variance stationarity of the transmitted signal and alignment of its information density; reduction of the transmitted signal spectrum; and relative simplicity of the technical realization of the method;
• estimation of the potential noise immunity and efficiency of the strip-method for single and r-multiple pulse noise, as well as synthesis of the corresponding optimal pre-distortion algorithms;
• investigation of the possibility of introducing some information redundancy into the transmitted signal for detection, localization, identification and correction of pulse noise;
• search for invariants and optimal matrices of the two-dimensional strip-transformation, which make it possible to store and transmit images in a noise-immune manner;
• development of technical means for realizing the strip-method of linear pre-distortions and for filtering the transmitted signal.
Naturally, the strip-method is merely one of a number of methods used for increasing the accuracy of signal and image transmission over communication channels. A great number of publications are devoted to raising the noise resistance of information transmission systems [4, 5, 9, 12, 14, 22, 24–26, 33, 39, 40, 43, 49, 53, 54, 56, 61, 62, 67, 104, 105, 112, 116, 119, 120, 123, 137–139, 144–146, 149, 157 and others]. It is also necessary to mention some works in adjacent fields: the works on the cluster system of message transmission and linear pre-distortion of signals by the Russian researchers D. V. Ageev, V. K. Marigodov, D. S. Lebedev, B. S. Cibakov, Yu. N. Babanov, S. A. Suslonov, L. P. Yaroslavsky [1, 8, 17, 60, 69–73, 136 and others]; the works on the method of redundant variables by M. B. Ignatyev, L. A. Mironovsky, G. S. Britov [16, 143 and others]; and the works on linear transformation and block coding of signals and images by the American researchers G. R. Lang, W. H. Pierce, J. P. Costas, H. P. Kramer, M. V. Mathews, W. K. Pratt, H. C. Andrews [2, 3, 19, 57–59, 108, 112, 113 and others]. Pre-distortion methods based on linear matrix transformations are widely used for discrete and digital signals [17, 33, 36, 59, 60, 64, 108, 109, 111, 113, 115 and others]. Moreover, recently much attention has been paid to the development of various algorithms for noise-proof and anti-jamming image processing [15, 23, 48, 76, 101, 102, 129, 131, 151 and others]. Thus, noise control based on the introduction of pre-distortions at the stage of signal transmission and on optimal processing (noise correction) at the stage of signal reception is widely used in information transmission systems.

However, the majority of these works deal with pre-distortion and correction methods based on a root-mean-square criterion, whereas methods optimizing information transmission systems with respect to a minimax criterion have been developed to a significantly lesser degree. Therefore it may be useful to develop and study new methods for pulse interference suppression based on the minimax criterion and on modern computer processing of images and signals.

The monograph sums up the authors' work of many years in the field indicated above and includes the results of investigations of the
strip-method over a period of more than 30 years [77–100, 125–128 and others]. The priority of the research is confirmed by a number of the authors' invention certificates [82–93, 127]. The basic part of the investigations was conducted at the St. Petersburg State University of Aerospace Instrumentation (formerly LIAP) and at the D. I. Mendeleyev Institute for Metrology, where the authors have been working for more than 40 years. The authors are grateful to Mrs. T. N. Korzhakova for her help in translating the text into English.
Contents
Foreword
Introduction
1 Strip-method of signal transformation
  1.1 Strip-method of linear pre-distortions and problems it solves
  1.2 Assurance of the transformed signal continuity
  1.3 Equalization of non-stationary signal variance
  1.4 Equalization of the "informative ability" of a non-stationary signal
  1.5 Narrowing of the pre-distorted signal frequency spectrum
2 Optimal Chebyshev pre-distortion and filtration
  2.1 Preliminaries
  2.2 Problem statement
  2.3 Estimation of the potential noise immunity in the case of single noise
  2.4 Estimation of the potential noise immunity in the case of multiple noises
  2.5 Synthesis of the optimal Chebyshev filter
  2.6 Quasioptimal pre-distortions
  2.7 Introduction of redundancy in the strip-method of linear pre-distortions
    2.7.1 Decreasing the noise power in a reconstructed signal
    2.7.2 Detection, localization, identification and correction of pulse noise
    2.7.3 Possibilities for applying the strip-method in steganography
3 Strip-method of image transformation
  3.1 Two-dimensional strip-transformation
  3.2 Choice of optimal transformation matrices
  3.3 Examples of the strip-transformation of images
  3.4 Determination of critical multiplicity of noise
  3.5 Root images of the two-sided strip-transformation
4 Hardware implementation of the strip-method
  4.1 Implementation of the strip-method with usage of magnetic recording–reproducing instruments
  4.2 Implementation of the strip-method with a cyclic matrix
  4.3 The device for equalization of the signal variance
  4.4 Devices for introducing information redundancy
Conclusion
Appendix. Hadamard matrices and the matrices close to them
  A.1 Hadamard matrices
  A.2 Shortened Hadamard matrices
  A.3 Conference-matrices
  A.4 Optimal matrices of the odd order (M-matrices)
  A.5 Algorithm for determining optimal matrices
  A.6 Characteristics of optimal matrices
Bibliography
Index
Introduction
One of the central tasks of communication theory is to increase the accuracy of signal transmission over the channels of information measurement systems (IMS) and data transmission systems (DTS). This task can be fulfilled, in particular, by means of a noise-proof transformation of signals at the time of their transmission and reception.

All known methods and means for increasing the DTS noise resistance can be classified on the basis of various attributes (see Figure 0.1). One of the most important attributes is the type of interference (noise) that is characteristic of the given technical implementation of the system considered. The noise type and parameters are specified at the design stage for reasons of physics and a priori information. Moreover, it is possible to use data obtained either during the operation of other systems based on the same operational principles or as a result of special investigations [134] which make it possible to determine the statistical features of the noise. In practice, the most frequent types of noise interference are the following:

• pulse noise, which looks like short-term signal misses or peaks of high amplitude and short duration and has a wide frequency spectrum;
• fluctuation noise, which is characterized by some law of the probability density distribution of its values and by its power spectral density or correlation function;
• narrow-band noise, in particular harmonic noise, whose spectrum is located in a narrow frequency band.
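The three noise types above can be simulated numerically. The sketch below is a hypothetical illustration (the amplitudes, sample counts and distributions are assumed for the example, not taken from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                      # number of samples
t = np.arange(n) / n          # normalized time axis

# Pulse noise: a few rare, short, high-amplitude peaks (wide spectrum).
pulse = np.zeros(n)
pulse[rng.choice(n, size=3, replace=False)] = 10.0

# Fluctuation noise: described by a probability density (here Gaussian)
# and a power spectral density (here flat, i.e. white noise).
fluctuation = rng.normal(0.0, 0.5, size=n)

# Narrow-band noise: a harmonic component confined to one frequency.
narrowband = 0.5 * np.sin(2 * np.pi * 50 * t)

print(pulse.max(), round(float(np.abs(narrowband).max()), 2))
```

Note the qualitative difference that matters for the strip-method: the pulse noise is concentrated in time but wide in frequency, while the narrow-band noise is the opposite.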
Channels of wire and radio communication circuits intended for transmitting telephone and telegraph information, TV signals and telemetry data are the fields where the methods and means of noise control are used (Figure 0.2). They also include active and passive radar systems, trajectory measuring instruments and radio-navigation aids. Furthermore, the noise control methods are widely applied in the recording, storage and reproduction technology for information signals. Electromechanical loggers, self-recording measuring instruments, light-beam oscilloscopes, magnetic recording instruments (MRI) with an immobile or mobile carrier and other devices fall under this category. These devices are intended to register mainly one-dimensional signals, by which functions of one argument, most often time, are meant; a telephone signal or a TV signal at raster scanning can serve as an example. To register multidimensional signals, multichannel MRI, photographic systems, units of holographic recording and reproduction as well as other devices are used. Multidimensional signals are functions of several arguments. For
Figure 0.1. The classification of various attributes for methods and means of increasing the DTS noise resistance.
Figure 0.2. The fields where the methods and means of noise control are used.
Figure 0.3. The classification of the DTS signals and their informative parameters.
example, an immovable black-and-white image on a plane can be considered as a two-dimensional signal; a 3D graphic presentation, or a three-dimensional image varying with time, is a four-dimensional signal, and so on.

According to the presentation method, the input and output signals of a data transmission system are divided into discrete (pulse, digital) and continuous-time signals (Figure 0.3). The discrete signals are pulses or d.c. "steps" by their appearance. Very often they are code combinations corresponding to the values of an informative parameter of a message at fixed moments of time in a given basis of counting. When an input signal is of the continuous form, an output signal of the transmission system can have either continuous or discrete form. In the first case the system contains analogue transformers (in particular, scale transformers); in the second case it contains an analog-to-digital converter performing discretization, sampling and quantization of the process at the system input. When an input signal is of the discrete form, an output signal of the system can also have either discrete or continuous form. In the first case the system contains digital coders; in the second case it contains a digital-to-analog converter performing interpolation and smoothing of the output signal. Informative parameters of the input signal can be its instant values, amplitude, frequency, phase, spectrum or correlation function, as well as their combinations.

The classification of the methods for transforming information in DTSs, reasoning from a given purpose, is shown in Figure 0.4.
Figure 0.4. The classification of the methods for transforming information, reasoning from a given purpose.
One of the transformation purposes is the reduction of the volume of information transmitted. By volume is meant the product of the frequency band, the time of existence and the dynamic range of a signal [41]. Such a task is of current importance in transmitting signals with a great natural information redundancy [3, 106, 150 and others], e.g. telemetry data.
The signal volume reduction results in an increase of the information rate. Moreover, the same effect is achieved with decorrelation, i.e. the removal of the linear stochastic dependence between signal elements at the transmitting station and the noise at the receiving end of the information transmission system.

Another important goal is the improvement of the DTS noise immunity, i.e. noise control. Methods for improving the DTS noise immunity are diverse and include: isolating a channel from the noise, achieved by perfection of hardware; matching the signal and channel characteristics (with regard to power, frequency, amplitude); introducing some redundancy into the transmitted signal (using repetitions, acknowledgements of receipt, redundant and robust coding); and optimizing the reception and processing of information, which means the usage of optimal filtration [2, 50] and correction of the received signal.

According to their operating principles, the systems of noise cancellation can be divided into six groups [32]:

• limitation of the amplitude of the noise and signal mixture;
• interruption of the noise and signal mixture passage at the receiving station at the time of the noise activity;
• compensation of the noise activity effect;
• reduction of the noise and signal mixture to the level the signal has at the time when the noise is active, with the help of a regulating cascade;
• limitation of the rate of rise of the signals being received;
• transformation of the signal spectra so that the pulse noise is abated at the time of the inverse transformation.
The last of these groups is based on a sufficiently universal principle: introduction of pre-distortions at the time of signal transmission and an inverse transformation when the signal is received. This also provides the possibility to solve a number of other tasks, such as matching the signal and channel characteristics, filtration and correction of the noise, and makes it easier to introduce information redundancy, and so on.

The effect of noise attenuation significantly depends on the selected criterion of noise immunity estimation. In Figure 0.5 the classification of the applied criteria is shown. The choice of the criterion is determined by the tasks which are solved in the process of information transmission. Two of these tasks are basic:

• detecting or distinguishing signals;
• reproducing the signal at the receiving station or estimating signal parameters (identification).
Probability criteria [22, 33, 53, 62, 117, 144 and others] are widely used for solving the first task. They are based on the conditional probability density of estimates of the received signal given a definite message, the a priori probability density of various messages, and the joint probability density of transmitted and received messages. Various
Figure 0.5. The classification of criteria for noise immunity estimation.
types of criteria are known, including those of Bayes, Neyman-Pearson, Glover, Kuz'min, maximum likelihood and others. So, when specifying the significance function of the joint realization of a message and its estimate, the usage of the a priori distribution and the conditional probability density leads to the Bayes and Neyman-Pearson criteria. According to the Bayes criterion, the average risk of incorrect decision-making (signal miss and false alarm) is minimized. According to the Neyman-Pearson criterion, the conditional probability of improper decision-making corresponding to the probability of a signal miss is minimized at a given conditional probability of incorrect decision-making corresponding to the probability of a false alarm, and so on.

In the process of signal transmission, the criterion characterizing the decrease of information in a channel of the DTS [39, 52, 54, 103, 116, 137] is of significant interest. The difficulties connected with the fact that the information quantity of continuous-time signals is equal to infinity are overcome with the help of the ε-entropy concept.

The most acceptable criterion for restoring the signal and estimating its parameters is an error or uncertainty norm of transmission in a certain functional
space [55]. The root-mean-square criterion characterizing the error norm in the functional space L₂ is widely used:

σ = [E(x′ − x)²]^{1/2},

where σ is the root-mean-square deviation, E is the symbol of mathematical expectation, and x′, x are the received and transmitted signals.

The mean deviation module (the mean error)

Δ₁ = E|x′ − x|

is the metric in the space L₁. Optimization according to this criterion gives rise to the least-module method [30]. It is characteristic of the algorithms of the least-module method that signals or measurement results of great values are ignored.

The maximum of the deviation module

Δ∞ = max |x′(t) − x(t)|,  0 ≤ t ≤ T,

is the metric in the space L∞ (or C). Frequently this criterion is called the Chebyshev criterion. The optimization of a system according to this criterion provides a minimal permissible error, and that is why it is also known as the minimax criterion.

The signal-to-noise criterion, which is widely used in practice, is interpreted as the ratio of the mean power values of signal and noise and characterizes the equipment resolution, i.e. the ability to discriminate threshold signals against a background.

The classification of the noise-immune methods for transforming the transmitted signals is shown in Figure 0.6. As has been noted, a widely used principle of increasing the DTS noise immunity is the introduction of pre-distortions into the transmitted signal and an inverse transformation when it is received [1, 6, 8, 16, 17, 19, 20, 27, 31, 36, 44, 45, 57–60, 63, 64, 69–73, 108, 109, 111–113, 115, 122, 124, 130, 133, 136, 141, 147, 156]. The methods of signal pre-distortion can be divided into two large classes: linear and non-linear pre-distortions.

Classical examples of non-linear pre-distortion devices are the compressors and expanders of a dynamic range, as well as the systems of automatic gain control (AGC) [105]. The compressors, having a non-linear amplitude characteristic, realize compression of the dynamic range. The transfer gain of the compressor varies with the instantaneous values of the transmitted signal, and at the same time the curve form of this signal changes: high instantaneous values are "compressed" (limited) while low values go through the compressor almost without distortion. At the receiving DTS terminal an expander is used, the amplitude characteristic of which is inverse with regard to that of the compressor. It should be noted that a change of the curve
Figure 0.6. The classification of the noise-immune methods for transforming the transmitted signal.
Figure 0.7. Block diagram of a DTS channel: 1: a coder (a pre-distortion device or a block of the direct transformation of signal x); 2: a transmitter; 3: a medium (a channel, MRI and so on); 4: a noise source; 5: a receiver; 6: a decoder (a filter or a block of the inverse transformation of the received signal y′).
form results in a change of the spectrum width of the transmitted signal.

There is one more way to exert influence on the dynamic range of the signal. It consists in applying AGC, i.e. a non-stationary device with memory, which has a time constant. The AGC transfer gain changes with variation of the signal level (envelope) and does not depend on the instantaneous signal value.

However, changes of the signal shape due to frequency and phase distortions in the communication channel remain a disadvantage of all non-linear pre-distortion and restoration devices. This is true even if the amplitude characteristic of the restoration device (the expander) is exactly inverse to that of the compressor.

Methods of linear pre-distortion are very diverse. Most of them are based on the following idea (Figure 0.7). At the DTS transmitting station a pre-distorting four-terminal circuit is frequently used in the form of a filter [19, 44] with a characteristic H(ω) chosen so as to improve the signal-to-noise ratio at the receiving terminal of the noisy channel, the correcting four-terminal circuit at the receiving end having the transfer gain G(ω).

The simplest and most widely applied method of linear pre-distortion is that of amplitude-frequency pre-distortions. For example, it is known that the amplitude-frequency characteristic (AFC) of the system "magnetic head–magnetic tape–magnetic head" of an MRI with direct recording and induction reproducing heads increases linearly up to relatively high frequencies. To get a uniform AFC of the whole channel, amplitude-frequency pre-distortions are used in the recording amplifiers of the MRI [125, 132]. The aim of applying them is the "accentuation" of low frequencies of the input signal.
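The amplitude-frequency pre-distortion idea can be sketched in the frequency domain. The following is a simplified illustration (the characteristic H(ω) and the helper names are assumptions for the example, not the actual MRI recording-amplifier circuit): the signal spectrum is weighted by H(ω) before the channel and by 1/H(ω) after it, so any characteristic without zeros is exactly invertible in the absence of noise.

```python
import numpy as np

def predistort(x, H):
    """Weight the signal spectrum by H(ω) (transmit-side pre-distortion)."""
    return np.fft.irfft(np.fft.rfft(x) * H, n=len(x))

def restore(y, H):
    """Apply the inverse weighting 1/H(ω) at the receiving end."""
    return np.fft.irfft(np.fft.rfft(y) / H, n=len(y))

n = 256
t = np.arange(n) / n
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)

# A hypothetical emphasis characteristic: attenuate high frequencies
# relative to low ones (a low-frequency "accentuation").
freqs = np.fft.rfftfreq(n)
H = 1.0 / (1.0 + 10.0 * freqs)     # strictly positive, hence invertible

x_restored = restore(predistort(x, H), H)
print(np.allclose(x, x_restored))  # exact reconstruction without noise
```

The same skeleton describes any linear pre-distortion of the amplitude-frequency type; only the shape of H(ω) changes with the application.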
The method of phase-frequency pre-distortions is based on applying two filters [61], one at the transmitting and one at the receiving station of the DTS, with uniform amplitude-frequency characteristics and mutually conjugate phase-frequency characteristics (PFC) of an intrinsic shape. A signal passing through both filters is only delayed for a fixed time, whereas the frequency components of noise are "scattered" in the time domain.
In [136] the method of amplitude-phase pre-distortions is used. These pre-distortions consist of amplitude-frequency and phase-frequency distortions applied in sequence. The problem of introducing the amplitude-phase pre-distortions was considered with regard to deterministic signals, their shape being preserved in the presence of noise in a specified dynamic range at a maximum signal-to-noise ratio.

The method of time-frequency pre-distortions [8] is based on the inability of the human ear to notice time shifts in reproducing separate frequency components of a complicated audio signal, provided these shifts do not exceed a certain threshold. Hardware implementation of the method consists in introducing various time delays into narrow-band signals obtained from the original one with the help of band-pass filters. The signals obtained with the help of delay lines are then mixed, and at the output of the mixer a pre-distorted signal is obtained.

The authors of [124] solve the problem of minimizing the root-mean-square error of transmission of a stationary Gaussian process over a communication channel. At the transmitting terminal of the channel a pre-distortion in the form of a linear transformation of the convolution type with kernel K(·) is applied. This kernel is one of the variable functions in the optimization process, so we have here a pre-distortion method described by an integral equation.

Frequently, pre-distortions described by differential equations (pre-distortions of the differential type) are used. As an example, the magnetic tape recording of width-modulated pulses can be given. An optimal way of reconstructing the time position of the wave fronts of the reproduced pulses is the method of signal processing "by the zero of a derivative" [7], which assumes twofold differentiation of the signal arriving from the head; at the same time all disturbances are "accentuated".
In order to avoid the twofold differentiation of the reproduced signal, in the recording mode the pulses are obtained by differentiating width-modulated pulses of rectangular form, i.e. using a pre-distortion of the differential type.

In [57, 106] a voice coder for transmitting n continuous correlated signals over m channels (m < n) is described. Each of the m signals is a linear combination of the original n signals. The coefficients of this linear transformation, which form a matrix of dimension m × n, are constants. In the voice coder described, the speech is transmitted by a set of signals proportional to the energy of the speech signal within different frequency bands. These signals are strongly correlated, and the pre-distortion results in a significant decrease of the number of signals required for transmitting intelligible speech.

A similar method of pre-distortion, based on a linear transformation and reception of a pre-distorted signal as a linear combination of fragments of the original signal, was considered in [20]. The need to use multichannel equipment for storage and transmission of signals should be counted among the disadvantages of this method, since the pre-distortion gives rise to m signals. Moreover, the fragments of the original signal used in the linear combination are formed by transmitting the
original signal via band-pass filters. This complicates the equipment and imposes various demands on the necessary transmission band of the communication channels.

In what follows, the main attention is paid to the strip-method of signal and image transformation [77, 98 and others]. Comparison of this method with those considered above reveals the following characteristic features. In the strip-method the pre-distortions are realized by forming linear combinations of fragments of the original signal or image. As a result, each fragment of the transmitted message carries information about all fragments of the original signal. This permits restoring the whole image without noticeable distortions in case of loss or damage of one of the fragments. Moreover, there is no need to use filters or pre-distorting four-terminal devices. Here it is possible to draw an analogy with the holographic transformation of images, but the role of separate points is played by the strips of the signal or by the image fragments into which it is cut (as in children's games of the "puzzle" type).

It should be emphasized that in the process of signal and image fragmentation there is no loss of information, as takes place in the case of Fourier transformation or approximation. That is why in the absence of noise the restoration of information takes place without any methodical error.

Thus, the strip-method belongs to the group of methods realizing linear pre-distortion and is directed first of all toward pulse noise control. It is reasonable to estimate the degree of pulse noise suppression by the minimax criterion. When the strip-method is used, the procedure of pre-distortion (and that of reconstruction) requires a delay for the time of the signal duration. A more detailed description of this method is given below.
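The strip idea admits a minimal numerical sketch. The example below is illustrative (a normalized 4×4 Hadamard matrix is assumed as the transformation matrix, and `strip_transform`/`strip_restore` are hypothetical helper names; the choice of optimal matrices is treated later in the book): a single pulse hitting one transmitted strip is spread over all strips of the reconstructed signal with reduced amplitude.

```python
import numpy as np

# 4x4 Hadamard matrix, normalized so that A @ A = I (orthogonal mixing).
A = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]]) / 2.0

def strip_transform(x, A):
    """Cut x into n strips and replace each strip by a linear combination."""
    strips = x.reshape(A.shape[0], -1)        # one strip per row
    return (A @ strips).reshape(-1)

def strip_restore(y, A):
    """Inverse transformation at the receiving end."""
    strips = y.reshape(A.shape[0], -1)
    return (np.linalg.inv(A) @ strips).reshape(-1)

t = np.linspace(0.0, 1.0, 64, endpoint=False)
x = np.sin(2 * np.pi * t)                     # original signal

y = strip_transform(x, A)
y_noisy = y.copy()
y_noisy[20] += 4.0                            # one pulse disturbance

x_restored = strip_restore(y_noisy, A)
err = x_restored - x

# The pulse of amplitude 4 is spread over all 4 strips; each contribution
# has amplitude 4 * 0.5 = 2, i.e. half the original peak.
print(round(float(np.max(np.abs(err))), 6))   # 2.0 instead of 4.0
```

With an n×n matrix the peak error drops roughly as 1/√n for normalized Hadamard-type matrices, which is exactly the "stretching" of pulse disturbances described above.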
Chapter 1
Strip-method of signal transformation
1.1 Strip-method of linear pre-distortions and problems it solves
It is known that the concepts of memory and communication are deeply "interwoven" with each other. If the function of memory is the transmission of information in time, then the function of communication is the transmission of information in space. Both functions have a passive character: information is only stored and propagated, not subjected to active purposeful processing [111]. That is why the problems arising in the development and operation of information transmission systems (communication channels), on the one hand, and those occurring in the process of storing information in recording equipment, on the other hand, are in many respects similar. In particular, in both cases the increase of noise immunity, i.e. of the signal-to-noise ratio that directly influences the accuracy of signal transmission, is very important.

In technical applications pulse noise fairly often occurs in the form of noise peaks of short duration and high amplitude [24, 57, 128]. Another type of pulse noise is the signal miss of the kind of deep fading in radio communication [1, 69], or the signal "dropout" due to defects of a magnetic carrier in magnetic record–reproduction of information [119, 148]. Very often the pulse noise has a spectrum close to the spectrum of the signal being transmitted or recorded and cannot be damped by traditional methods, e.g. by methods of optimal filtration [2, 27, 50, 144, 155]. Therefore it is important to develop a method that decreases the influence of pulse noise without any change of the frequency band, the rate of information transmission, the average power or the duration of the signal in the time domain, i.e. without any change of the "volume" of the transmitted signal [41].
As shown in the Introduction, there is a universal principle of noise control in a communication channel: a signal is preliminarily transformed (pre-distorted) at the time of its transmission and then transformed back when it is received. Distinctive features of this principle consist in the specific limitations applied to an application domain and the aims pursued. Below, consideration is given to the possibility of applying the pre-distortion principle to the following problem. Let x(t) ∈ X be an original signal meant for transmission over a communication channel. The set X contains the signals characterized by the following properties:
• finiteness, i.e. time and amplitude limitations, 0 ≤ t ≤ T and |x| ≤ D correspondingly;
• continuity and differentiability (at least the absence of discontinuities of the first kind), providing "smoothness" of the signal.
The communication system shown in Figure 7 will be used for transmitting the signal x(t). Let us apply as coder 1 (Figure 7) a pre-distortion device characterized by an operator Φ, the general requirements for which are the following:

• presence of an inverse operator Φ⁻¹, providing accurate reconstruction of the signal in the absence of noise;
• conservation of the continuity and "smoothness" of the signal;
• a sufficient number of variable parameters to provide convenient adjustment;
• comparative simplicity of technical realization.
The operator Φ satisfying the above requirements can be used to solve various problems of matching the characteristics of the signal and the communication channel, of increasing the noise immunity of transmission, and so on. Let us move on to the description of a class of linear pre-distortion operators satisfying these requirements. It was suggested in [80, 81, 91, 96, 119]; there the issues of technical realization of the corresponding operators are also considered. A distinguishing feature of this class of operators is the finite-dimensional method of continuous signal transformation. According to this method, the original scalar signal x(t) is divided into n parts of equal duration, and n linear combinations of them are formed. These linear combinations compose the transformed signal y(t). The whole duration of the signal does not change; however, each of the parts now carries information about the whole signal x(t). At the receiving end an inverse transformation (decoding) takes place, which results in reconstruction of the original signal form. From the mathematical point of view, the operator Φ of the described signal transformation at the transmitting end and the inverse operator Φ⁻¹ of the signal reconstruction at the receiving end are described by the equations

Φ = S⁻¹AS,   Φ⁻¹ = S⁻¹A⁻¹S,   (1.1)

where S is the strip-operator transforming the original signal of duration T into an n-dimensional vector-function of duration T/n; S⁻¹ is the operator inverse to the strip-operator; and A is a constant nonsingular n × n matrix whose elements are the coefficients of the linear combinations of the signal being transformed. Figure 1.1 shows the scheme illustrating the procedure of direct and inverse linear transformations of the continuous time signal x(t).
Figure 1.1. Strip-method of linear pre-distortion, transmission and reconstruction of a signal.
x(t): the original signal of duration T; S: the strip-operator of signal "splitting" into parts of duration h = T/n; X(t): vector-function of duration h; A, A⁻¹: direct and inverse operators of the matrix transformation; Y(t): vector-function of the transformed signal, Y = AX; S⁻¹: the inverse strip-operator (of signal "sticking"); y(t): the signal of duration T transmitted through the communication channel; y′(t): the sum of the signal y(t) and the noise n(t) at the communication channel output; Y′(t): vector-function of the signal-and-noise mixture; X′(t): vector-function of the signal after the inverse linear transformation; x′(t): the received signal of duration T. This corresponds to the chain of equations

X = Sx,  Y = AX,  y = S⁻¹Y,  y′ = y + n,  Y′ = Sy′,  X′ = A⁻¹Y′,  x′ = S⁻¹X′.   (1.2)
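The chain of transformations (1.2) can be simulated in a few lines of code. The sketch below is a pure-Python illustration (the book itself works in MATLAB); the function names strip_op, strip_inv and mat_vec are illustrative, not from the book, and the 2 × 2 matrix is chosen only for brevity.

```python
def strip_op(x, n):
    """Strip-operator S: cut a sampled signal into n strips of equal length."""
    h = len(x) // n
    return [x[i * h:(i + 1) * h] for i in range(n)]

def strip_inv(X):
    """Inverse strip-operator S^-1: "stick" the strips back into one signal."""
    return [v for strip in X for v in strip]

def mat_vec(A, X):
    """Apply the n-by-n matrix A to the vector-function X, sample by sample."""
    n, h = len(X), len(X[0])
    return [[sum(A[i][j] * X[j][k] for j in range(n)) for k in range(h)]
            for i in range(n)]

# The noise-free chain (1.2): X = Sx, Y = AX, y = S^-1 Y, X' = A^-1 Y', x' = S^-1 X'.
x = [0.1 * k for k in range(8)]          # a toy "signal" of 8 samples
A    = [[1, 1], [1, -1]]                 # a nonsingular 2x2 matrix
Ainv = [[0.5, 0.5], [0.5, -0.5]]         # its inverse
y = strip_inv(mat_vec(A, strip_op(x, 2)))            # pre-distorted signal
x_rec = strip_inv(mat_vec(Ainv, strip_op(y, 2)))     # reconstruction
assert all(abs(a - b) < 1e-12 for a, b in zip(x, x_rec))  # exact restoration
```

With no noise in the channel the round trip is exact, which is precisely the invertibility requirement placed on the operator above.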
To simulate and study this scheme it is convenient to use the MATLAB package [21], which contains, in particular, the strips command acting in the same way as a strip-operator: it displays a "long" graph of a function cut into strips (sections). Unfortunately, the result of this command is inaccessible for further processing. Thus, to realize the strip-operator it is necessary to write a user function. The action of the strip-method can be explained by a simple example.

Example 1.1. Figure 1.2, a, shows the graph of the function

x(t) = e^(−0.1t) + 0.2 sin 6t,   0 ≤ t ≤ 8 s.
Figure 1.2. The example of the strip-transformation: a) original scalar signal x; b) vector signal X obtained using the strips command.
By "cutting" the graph into 8 strips, each of 1 s duration, the vector-function shown in Figure 1.2, b, is obtained. The graphs have been built in MATLAB with the help of the commands:

t=0:.01:8; x=exp(-.1*t)+.2*sin(6*t);
plot(t,x), grid, strips(x,100)
The last command "cuts" the original signal x(t), given by a set of 801 samples, into 8 strips of 100 samples each. In the general case, applying the strip-operator S is equivalent to "cutting" the "long" original signal x(t), 0 ≤ t ≤ T, into n strips of equal duration h = T/n, producing n "short" signals of the form

x_1(t) = x(t),
x_2(t) = x(t + T/n),
. . . . . . . . . . .
x_n(t) = x(t + (n − 1)T/n),   0 ≤ t ≤ T/n.   (1.3)
From these "short" signals the n-dimensional vector-function X(t) is formed:

X(t) = [x_1(t), …, x_n(t)]ᵀ,   0 ≤ t ≤ T/n.   (1.4)
Then the vector X(t) is transformed into the vector Y(t) with the help of the nonsingular matrix A = [a_ij], i, j = 1, …, n:

Y(t) = AX(t) = [y_1(t), …, y_n(t)]ᵀ,   0 ≤ t ≤ T/n.   (1.5)
Components of the vector Y(t) are defined by the formulas

y_i(t) = A_i X(t),   i = 1, 2, …, n,   (1.6)
where A_i is the i-th row of the matrix A. The operator S⁻¹ is inverse with respect to the operator S and performs the procedure of "assembling" the signals y_i(t), 0 ≤ t ≤ T/n, i = 1, 2, …, n, into one signal y(t) of duration T. With this the coding procedure (pre-distortion) comes to an end. Then the signal y(t) is transmitted over the channel with noise, and at the receiving end it is subjected to the decoding procedure, in which the matrix A⁻¹ is used (see Figure 1.1). Since the transformation described is based on application of the strip-operator, this method of pre-distortion and reconstruction of signals has been named the strip-method.

Example 1.2. Let us illustrate the transformation described assuming that T = 20, n = 4 and taking as the original signal the exponentially decaying sinusoid x(t) = e^(−0.1t) sin(πt/2) (Figure 1.3, a). As a consequence of the strip-transformation this signal turns into the vector signal X (Figure 1.3, b). Multiplying it by the matrix with unit determinant

A = [ 1 1 1 1
      3 3 2 1
      4 3 2 1
      2 2 2 1 ],
Figure 1.3. A transformation of the original signal using matrix A (1.7): a) original signal x; b) vector signal X ¼ Sx; c) vector signal Y ¼ AX; d) transmitted signal y ¼ S 1 Y .
we obtain the signal Y (Figure 1.3, c), which after transformation by the inverse strip-operator becomes the scalar signal y (Figure 1.3, d). This signal is transmitted over the communication channel, and at the receiving end it is transformed in a similar manner using the matrix

A⁻¹ = [  0 −1  1  0
         0  2 −1 −1
        −1 −1  0  2
         2  0  0 −1 ].   (1.7)
In the absence of noise the result will coincide with the original signal x(t). The procedure described was carried out in MATLAB with the help of the following sequence of commands:

% Program strip1 (pre-distortion of the original signal)
t=0:.01:20; tt=0:.01:5-.01;
x=sin(.5*pi*t).*exp(-.1*t);      % the original signal x(t)
figure(1); plot(t,x), grid
x1=x(1:500); x2=x(501:1000); x3=x(1001:1500); x4=x(1501:2000);
X=[x1;x2;x3;x4];                 % the strip-transformation
figure(2); plot(tt,X)
A=[1 1 1 1; 3 3 2 1; 4 3 2 1; 2 2 2 1];
Y=A*X; plot(tt,Y)
y=[Y(:)' 0];                     % the inverse strip-transformation
figure(3); plot(t,y), grid       % the transmitted signal y(t)
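For readers without MATLAB, the same pre-distortion can be sketched in plain Python. This is an illustrative translation under the assumptions of Example 1.2 (T = 20, n = 4, sampling step 0.01), not the authors' code; plotting is omitted.

```python
import math

n, N, h = 4, 2000, 500                # 4 strips of 500 samples each
t = [k * 0.01 for k in range(N)]
# the original signal x(t) = exp(-0.1 t) sin(pi t / 2)
x = [math.sin(0.5 * math.pi * tk) * math.exp(-0.1 * tk) for tk in t]

# the strip-transformation X = Sx
X = [x[i * h:(i + 1) * h] for i in range(n)]

A = [[1, 1, 1, 1],
     [3, 3, 2, 1],
     [4, 3, 2, 1],
     [2, 2, 2, 1]]                    # the matrix of Example 1.2 (det A = 1)

# Y = AX, applied sample by sample
Y = [[sum(A[i][j] * X[j][k] for j in range(n)) for k in range(h)]
     for i in range(n)]
y = [v for row in Y for v in row]     # transmitted signal y(t) = S^-1 Y

assert len(y) == N                    # duration (sample count) is unchanged
```

Note that, as in the text, the total duration of y(t) equals that of x(t); only the waveform changes.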
It should be noticed that, as a consequence of the transformation performed, information about the value of the signal x(t) at any moment of time 0 ≤ t ≤ T is contained in each of the components y_i(t). It is just this fact, completely similar to holographic image recording [63], that provides high noise immunity of the transmitted information with regard to pulse noise. Even in the case of total loss of information from one of the strips y_i(t), the original signal can be reconstructed (with a certain accuracy) in the process of transformation of the vector Y + ΔY, where ΔY is the vector-function of the noise. At the same time the error (uncertainty) ΔX of the signal reconstruction is determined by the formula

ΔX = A⁻¹ΔY   or   Δx_j(t) = Σ_{i=1}^{n} b_ji Δy_i(t),

where b_ji, i, j = 1, 2, …, n, are the elements of the matrix A⁻¹. If, under the action of pulse noise, one of the components of the vector ΔY appears to be non-zero, this in the general case leads to the appearance of a vector ΔX all components of which are non-zero. Thus, instead of a disturbance in one component, disturbances emerge over all the components, i.e. the pulse disturbance is "stretched" over the total duration of the signal. If the matrix A is orthogonal, then the amplitude of the disturbance decreases.
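The "stretching" effect is easy to verify numerically. The sketch below (pure Python, illustrative) applies the orthogonal Hadamard matrix of the fourth order, which is its own inverse, to a disturbance ΔY confined to one strip: every component of ΔX becomes non-zero, and the amplitude drops from 1 to 1/2 = 1/√n.

```python
n = 4
# Hadamard matrix of order 4, scaled so that A is orthogonal and A = A^-1
H = [[1,  1,  1,  1],
     [1, -1,  1, -1],
     [1,  1, -1, -1],
     [1, -1, -1,  1]]
Ainv = [[h / 2 for h in row] for row in H]

dY = [0.0, 1.0, 0.0, 0.0]            # pulse noise: only the 2nd strip is hit
# dX_j = sum_i b_ji dY_i, with b the elements of A^-1
dX = [sum(Ainv[j][i] * dY[i] for i in range(n)) for j in range(n)]

# The disturbance is spread over all strips ...
assert all(abs(v) > 0 for v in dX)
# ... and its amplitude is reduced by sqrt(n): from 1 to 1/2 for n = 4
assert max(abs(v) for v in dX) == 0.5
```

The energy of the disturbance is unchanged (the matrix is orthogonal); only its peak value is reduced.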
Example 1.3. Again let us take as the original signal the damped sine wave x(t) = e^(−0.1t) sin(πt/2) (Figure 1.4, a). The result of its transformation into the signal y(t) with the help of the orthogonal Hadamard matrix of the fourth order

A = (1/2) [ 1  1  1  1
            1 −1  1 −1
            1  1 −1 −1
            1 −1 −1  1 ] = A⁻¹   (1.8)

is shown in Figure 1.4, b. In Figure 1.4, c the signal y′(t) is shown, which differs from the signal y(t) by a complete miss of the signal on the second strip. At the same time the reconstructed signal
Figure 1.4. The illustration of a strip-method (example 1.3): a) original signal x; b) transmitted signal y; c) received signal y 0 ; d) reconstructed signal x 0 .
x′(t) (Figure 1.4, d) differs from the original signal x(t) over all strips; however, the degree of its distortion is relatively small, since the disturbance is spread uniformly over all four strips. The graphs shown were built using MATLAB with the help of the program istrip1, the surviving fragment of which is given below.

% Program istrip1 (reconstruction of the original signal at the receiving end)
ye=y.*(1-(t>5&t […]

[…] > 0, we obtain:

( d/da_km Σ_{i,j=1}^{n} |r^yy_ij| ) / ( d/da_km tr R^yy ) = γ_max,   k, m = 1, 2, …, n.   (1.51)
Elements of the matrix R^yy can be written as

r^yy_ij = Σ_{l=1}^{n} a_il a_jl λ_l = Ā_i Ā_jᵀ,   (1.52)

where Ā_i = [a_il σ_l] = (a_i1 σ_1, a_i2 σ_2, …, a_in σ_n), Ā_jᵀ = [a_jl σ_l]ᵀ,
and σ_l = √λ_l, l = 1, 2, …, n.
Let the numerator and denominator of fraction (1.51) be calculated in the following way:

d/da_km Σ_{i,j=1}^{n} |r^yy_ij| = d/da_km Σ_{i,j=1}^{n} | Σ_{l=1}^{n} a_il a_jl λ_l | = 2λ_m Σ_{p=1}^{n} a_pm sign(Ā_p Ā_mᵀ),   k, m = 1, 2, …, n;   (1.53)

d/da_km tr R^yy = d/da_km Σ_{i,j=1}^{n} a_ij² λ_j = 2λ_m a_km,   k, m = 1, 2, …, n.   (1.54)

By inserting these expressions for the derivatives into formula (1.51), we obtain

a_km = (1/γ_max) Σ_{p=1}^{n} a_pm sign(Ā_p Ā_mᵀ),   k, m = 1, 2, …, n.   (1.55)

This means that the numbers a_km do not depend on k (the row number of the matrix A), i.e. in any column all the elements are equal. Consequently, the matrix A has to be of the following form:

A = [ a_1 a_2 … a_n
      a_1 a_2 … a_n
      … … … …
      a_1 a_2 … a_n ].   (1.56)

Evidently, matrix (1.56) is singular (its rank equals 1) and is therefore not applicable in practice. One has to seek a class of non-singular matrices sufficiently close in their properties to matrices of type (1.56). It should be noticed that for any matrix A the sum Σ_{l=1}^{n} a_il a_jl λ_l (and consequently the value of the criterion γ) is greater when all elements of the matrix have the same sign, e.g. when all of them are positive. There exists a class of matrices in which replacing the signs of all the elements by one sign does not lead to degeneracy. Moreover, it is possible to point out positive matrices, i.e. matrices with positive elements [11], which, remaining nonsingular, can verge as closely as one wants towards the matrix with equal elements. A circulant matrix whose entries are a finite number of terms of an arithmetic progression a_1 = a, a_2 = a + h, a_3 = a + 2h, … serves as an example of such a matrix. It is obvious that for this matrix a_ij → a as h → 0.
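The circulant example can be checked numerically. The sketch below (pure Python, illustrative; the helper functions circulant and det are not from the book) builds the circulant from an arithmetic progression and evaluates its determinant by Gaussian elimination: it stays nonsingular for h > 0 and degenerates only in the limit of equal entries.

```python
def circulant(first_row):
    """Circulant matrix: each row is the previous one shifted right by one."""
    n = len(first_row)
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if abs(M[p][c]) < 1e-15:
            return 0.0
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return d

a, h, n = 1.0, 0.01, 4
C = circulant([a + k * h for k in range(n)])   # entries a, a+h, a+2h, a+3h
assert abs(det(C)) > 1e-9                      # nonsingular for h > 0 ...
assert det(circulant([a] * n)) == 0.0          # ... singular in the limit h -> 0
```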
Section 1.5 Narrowing of the pre-distorted signal frequency spectrum
Thus, the positive matrices are representatives of a class of non-singular matrices which are significantly close in their properties to matrices of type (1.56). The result obtained is a particular case of the Schoenberg theorem [29, 118], the sense of which consists in the following: in order that the number of sign reversals in a signal y(t), obtained from the signal x(t) with the help of linear transformation (1.6), be less than or equal to the number of sign reversals in the signal x(t), it is necessary and sufficient for the matrix A to be sign-determined. By a sign-determined matrix one means a matrix in which all non-zero minors of the p-th order have the same sign ε_p. If, in addition, all the minors of the p-th order are non-zero, then such a matrix is called strictly sign-determined. In particular, at ε_1 = ε_2 = ⋯ = ε_n = 1 the strictly sign-determined matrix is positive, which corresponds to the case considered above. Summing up, it is possible to conclude that the class of matrices which narrow the spectrum of the transformed signal in the best way, i.e. the matrices maximizing the criterion γ (0 < γ ≤ 1) while remaining non-singular, is the class of sign-determined matrices. Unfortunately, these matrices can be ill-conditioned, which may lead to problems in their inversion. In conclusion, let the issue of the frequency (spectral) interpretation of the strip-method be considered. Instantaneous values of the transformed signal are obtained here as the result of a scalar product of the i-th row of the matrix A and the column vector composed of readouts of the original signal x(t). In other words, they are obtained as coefficients of a generalized Fourier transformation. As the system of basis functions, a set of functions obtained from linearly independent vectors is taken, the coordinates of these vectors being the rows of the transformation matrix.
Thereby the basis functions are piecewise-constant and, in the general case, discontinuous (have breaks). The number of first-kind breaks of such basis functions can reach n − 1, not counting the boundary ones. In the linear transformation of a signal a finite number n of terms of the series is taken deliberately. To get a one-to-one correspondence between the signals x(t) and y(t), i.e. the original and the image, the coefficients of the Fourier expansion y_i(t) in the series necessarily have to be functions of time and not constant magnitudes. In this sense such an expansion represents a functional series with a finite number of terms. Using the example of Hadamard matrices it is possible to give a frequency interpretation of the linear transformation, using the conception of a generalized frequency introduced in [42, 142], where by the generalized frequency one means half the number of zero-level crossings per second. In this case by the frequency for the Hadamard matrix we mean half the number of sign reversals along each row [126]. It is possible to build Hadamard matrices of order n = 2^m whose frequencies are all the numbers from zero to n/2, following each other every 1/2. Such a frequency interpretation of
the Hadamard matrix rows allows one to speak about the equivalence of its rows to rectangular oscillations with an amplitude of 1 and a period of 2/n. Functions of this type are called Walsh functions [152]. They can be reduced to the Rademacher functions [114]. The Walsh functions wal(0, θ), cal(i, θ), sal(i, θ) compose a complete orthonormal basis on the interval θ = t/T ∈ [−1/2, 1/2]. The Fourier–Walsh transformation of a function x(θ) given on the interval −1/2 ≤ θ < 1/2 looks as follows:

x(θ) = a(0) wal(0, θ) + Σ_{i=1}^{∞} [a_c(i) cal(i, θ) + a_s(i) sal(i, θ)],

where

a(0) = ∫_{−1/2}^{1/2} x(θ) wal(0, θ) dθ,
a_c(i) = ∫_{−1/2}^{1/2} x(θ) cal(i, θ) dθ,
a_s(i) = ∫_{−1/2}^{1/2} x(θ) sal(i, θ) dθ.
Thus, from the point of view of the frequency interpretation, the strip-method with Hadamard matrices gives the expansion of the original signal into a Fourier–Walsh series with a finite number of terms, this expansion being exact rather than approximate. Summing up Chapter 1, let us notice that in this chapter the resources of the strip-method for solving various problems have been studied. They are related to the choice of the matrix A that provides "smoothness" of the transformed signal, equalization of its variance, uniformity of its information capacity and narrowing of its spectrum. In all cases the necessary and sufficient conditions which the matrix A has to satisfy have been obtained, or its practical form has been established. The next chapter is devoted to the investigation and solution of a group of problems connected with achieving maximal noise immunity in transmitting a signal within the framework of the strip-method.
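The frequency interpretation above can be checked directly: building the Hadamard matrix of order 2^m by the Sylvester construction and counting sign changes along its rows, the generalized frequencies (half the number of sign reversals per row) run through 0, 1/2, 1, … in steps of one half. A pure-Python sketch (illustrative, not the authors' code):

```python
def hadamard(m):
    """Sylvester construction of the Hadamard matrix of order n = 2^m."""
    H = [[1]]
    for _ in range(m):
        # [[H, H], [H, -H]]
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

H = hadamard(3)                       # order n = 8
n = len(H)
# generalized frequency of a row = half its number of sign reversals
freqs = sorted(sum(a * b < 0 for a, b in zip(row, row[1:])) / 2 for row in H)
assert freqs == [k / 2 for k in range(n)]   # 0, 1/2, 1, ..., (n-1)/2
```

Each row is thus equivalent to a rectangular oscillation of a distinct sequency, which is the Walsh-function picture described in the text.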
Chapter 2
Optimal Chebyshev pre-distortion and filtration
2.1
Preliminaries
Assurance of the noise immunity of message transmission, or "noise control", is one of the main problems of communication theory. A brilliant review of this problem and methods of its solution can be found in the classical monograph by A. A. Harkevich [40], as well as in the works of Wiener, Kalman, Kotel'nikov, Fink and other outstanding specialists. Without going into the history of this issue, let us mention that a generally recognized approach to the problem of noise filtration has been formed. It consists in the following: first, under certain assumptions and admissions concerning the characteristics of noise and signals, an estimate of the potentially attainable noise immunity is made. Then various filter designs or algorithms of signal processing (linear or nonlinear) are proposed, after which their real noise immunity is calculated and compared with the potentially attainable one. If they coincide, this serves as a proof of the optimality of the methods proposed. When they do not coincide, the comparison gives an objective estimate of the degree of noise immunity attained. Below we keep to this classical scheme while considering the problem of noise filtration according to the Chebyshev criterion. Having evaluated quantitatively the maximal attainable level of noise suppression, we will show that application of the strip-method in a number of practically important cases results in achievement of the potentially possible estimates. Therefore in these cases the strip-method is optimal over the variety of all available filtration algorithms under the chosen criterion. Before moving to a more accurate problem definition, let us consider one more argument of a general character. The potential efficiency of noise suppression does not depend on the methods used, but is completely determined by the available a priori information about signals and noise. If there is no such information, then noise control is, in principle, not possible.
The greater the volume of a priori information, the higher the possibilities of noise suppression. If, for example, the frequency spectra of the signal and noise are known, then the potential efficiency of filtration is determined by the degree of diversity or overlap of these spectra. It is known that in this case the optimal filtration on the basis of a root-mean-square criterion is provided by the Wiener and Kalman filters. If the form of the transmitted signal is a priori known and the matter concerns its selection (detection) against the background of noise, then the optimal quality of detection
on the basis of a signal-to-noise criterion is provided by the so-called matched filters. Thus the volume of a priori information about the properties of signals and noise (together with the chosen criterion for estimating the noise immunity) completely determines the potential resources for noise suppression and, consequently, the optimal processing algorithms. In the problem statement considered below, let us restrict ourselves to rather poor a priori information about signals and noise: only their durations (extensions in time) will be considered known. As in Chapter 1, the duration of the transmitted signal will be designated by T and the noise duration by h or d. At the same time the form of the transmitted signal, the form of the noise, as well as their spectra are considered a priori unknown and can be of any type. The single piece of information that is available (in the simplest case of pulse noise occurring once) is described by the inequality

T/h ≥ n   or   nh ≤ T,

where n is a known number (as a rule we consider it to be an integer). In other words, only the relative duration of the noise is known: it must not strike more than the 1/n-th part of the signal. As the criterion to be minimized, the Chebyshev norm of the noise in the restored signal, i.e. its amplitude (maximal absolute value on the interval [0, T]), is taken. At the same time it is assumed that the transmission and restoration of the signal are performed in accordance with the general scheme shown in Figure 1.7, and the operators of coding (pre-distortion, transformation) and decoding (restoration, reconstruction) satisfy two additional conditions:

• they are isometric (do not change the signal and noise energy);
• in the case when there is no noise in the communication channel, accurate restoration of the transmitted signal, x′(t) = x(t), is provided.
It should be noticed that the classical Wiener and Kalman filters do not satisfy either of these conditions. In Chapter 1 it was emphasized that the strip-method can be applied as an efficient means of pulse noise control in a communication channel. The basic idea consists in "stretching" a short pulse disturbance along the signal without any change of its energy, which results in a decrease of the noise amplitude. An example of such "stretching" is given in Figure 1.11, where the pulse noise is a complete loss of the signal on a section (strip) of duration h = T/n, where T is the duration of the signal and n is an integer. If the whole energy of the noise remains unchanged, its amplitude can in the limit be reduced by a factor of √n. Therefore, the greater n is (the shorter the noise), the greater
depression of the amplitude can be achieved. Here a figurative comparison can be made with spreading butter uniformly over bread. In the present chapter it will be shown that for a certain class of noises the strip-method provides the maximal available decrease of their amplitude over a set of filters. To prove this, it is necessary to give a more rigorous definition of the pulse noise being considered and to formulate a criterion characterizing the noise suppression effect. Below we define a single pulse disturbance under the assumption that the duration T of the signal is given and n is an integer equal to the number of conventional sections composing the interval [0, T].

Definition 1. The pulse noise f(t) is called single if its duration (time of existence) does not exceed the duration of a signal strip (section) h = T/n.

Note 1. If the pulse noise f(t) does not exceed h in duration but strikes two adjacent signal strips (covers their boundary), it remains single.

Note 2. If the pulse noise f(t) represents a "packet" of pulses but the whole duration of the "packet" (from the beginning of the first pulse to the end of the last pulse) does not exceed h, it remains single.

Note 3. If the pulse noise f(t) represents a series of pulses of total duration h which strike various sections of the signal, but upon superposition of all n sections the pulses do not overlap each other, then this noise remains single.

In the same way the concept of multiple pulse noise is introduced.

Definition 2. The pulse noise f(t) is called multiple if it is represented by one or several pulses and its duration (or their total duration) exceeds the duration of a signal strip h = T/n. The shape of the pulses and their location on the interval [0, T] are arbitrary and a priori unknown. An example of a double disturbance of this kind for n = 7 is shown in Figure 2.1, a.

Note 1. If a multiple disturbance f(t), represented by one pulse, occupies r strips of duration h each, then such a disturbance is called r-multiple.

Note 2. If a multiple disturbance f(t) is a series of pulses of equal or different duration, then its multiplicity is determined by the greatest number of pulses which overlap upon superposition of the initial points of all strips of duration h.

Let the set of single pulse noises introduced by Definition 1 be designated as N_1, and the set of r-multiple pulse noises introduced by Definition 2 as N_r.
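The definitions and notes above can be captured by a small procedure: fold the interval [0, T] into n strips of duration h = T/n and count how many strips are simultaneously struck after the folding (Note 2 of Definition 2); a result of 1 corresponds to single noise, including the non-overlapping series of Note 3. A hedged pure-Python sketch over a sampled noise record (the function name multiplicity is illustrative):

```python
def multiplicity(noise, n):
    """Multiplicity of a sampled pulse-noise record split into n strips.

    noise: list of samples over [0, T]; nonzero samples mark the disturbance.
    Returns the maximal number of strips that are simultaneously nonzero
    after superposing the initial points of all strips.
    """
    h = len(noise) // n
    strips = [noise[i * h:(i + 1) * h] for i in range(n)]
    return max(sum(strip[k] != 0 for strip in strips) for k in range(h))

T_samples, n = 70, 7                   # n = 7 strips, as in Figure 2.1, a
noise = [0] * T_samples
noise[12:17] = [1] * 5                 # a short pulse inside the 2nd strip
assert multiplicity(noise, n) == 1     # single noise (Definition 1)
noise[31:36] = [1] * 5                 # a second pulse, overlapping after folding
assert multiplicity(noise, n) == 2     # a double disturbance (Definition 2)
```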
Figure 2.1. The double disturbance (a) and the result of its processing (b) by Chebyshev filter with matrix (1.33).
2.2
Problem statement
Let the transmission of a signal over a communication channel be considered as a procedure performed in accordance with the block diagram shown in Figure 1.7. Leaving aside the issues of technical realization of the blocks of this diagram and performing the mathematical formalization, we present it in the form of a series connection of the three blocks shown in Figure 2.2. The continuous signal x(t) to be transmitted is transferred, with the help of a pre-distorting linear operator B, into the signal

y(t) = Bx(t),   0 ≤ t ≤ T.
It is this signal that is transmitted over the communication channel. At the receiving station a mixture of this signal and noise is obtained,
Figure 2.2. The simplified block-diagram of a strip-method realization.
y′(t) = y(t) + n(t),

where n(t) is additive noise belonging to the given class of noises N. Reconstruction of the signal is executed with the help of the inverse operator A = B⁻¹:

x′(t) = Ay′(t).

The noise in the reconstructed signal equals

m(t) = x′(t) − x(t) = A[y(t) + n(t)] − x(t) = An(t),   (2.1)

i.e. the noise m(t) is independent of the transmitted signal. Let us demand that the pre-distorting operator B be non-singular (non-degenerate) and keep the energy of the transformed signal unchanged (an isometric operator):

∫₀ᵀ x²(t) dt = ∫₀ᵀ y²(t) dt.   (2.2)
Since the operator A is then also isometric, it follows from (2.1) that the inverse transformation does not change the noise energy:

∫₀ᵀ m²(t) dt = ∫₀ᵀ [An(t)]² dt = ∫₀ᵀ n²(t) dt,   (2.3)

i.e. optimization of the communication system on the basis of a root-mean-square criterion is not possible. Let us state the problem of system optimization by the uniform Chebyshev criterion, in which the maximal value of the error (uncertainty) is minimized. Applying a criterion of this type is useful in a number of practically important cases, such as the transmission and storage of a TV signal, recording of a signal on a magnetic carrier, and others. The Chebyshev norm of a function f(t), 0 ≤ t ≤ T, is defined [31, 122] as

‖f(t)‖ = max_{0 ≤ t ≤ T} |f(t)|.   (2.4)
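Conditions (2.2)-(2.4) are easy to verify numerically for a matrix operator: an orthogonal A preserves the energy (the sum of squared samples) while the Chebyshev norm may change, and this is exactly the degree of freedom the optimization exploits. A pure-Python check with the orthogonal Hadamard matrix of the fourth order (illustrative sketch, not the authors' code):

```python
n = 4
# orthogonal Hadamard matrix of order 4, scaled by 1/2
A = [[0.5 * v for v in row] for row in
     [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]]

v = [1.0, 0.0, 0.0, 0.0]                          # a "pulse" vector
Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]

def energy(u):                                    # L2 energy, cf. (2.2), (2.3)
    return sum(s * s for s in u)

def cheb(u):                                      # Chebyshev norm, cf. (2.4)
    return max(abs(s) for s in u)

assert abs(energy(Av) - energy(v)) < 1e-12        # isometry: energy preserved
assert cheb(v) == 1.0 and cheb(Av) == 0.5         # amplitude reduced to 1/sqrt(n)
```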
46
Chapter 2 Optimal Chebyshev pre-distortion and filtration
The quality of filtration will be characterized by the ratio of the Chebyshev norms of the noises m(t) and n(t), taken for the worst case:

J = max_{n∈N_r, n≠0} ‖m(t)‖ / ‖n(t)‖ = max_{n∈N_r, n≠0} ‖An(t)‖ / ‖n(t)‖,   (2.5)
where the maximum is taken over all noises n(t) ∈ N_r. The number J indicates a guaranteed level of decrease of the noise magnitude. From the mathematical point of view, criterion (2.5) can be considered as the norm of the operator A matched with the Chebyshev norm of the function n(t) ∈ N_r. The best quality of filtration will be provided by the operator A with the minimal norm (2.5), since it transforms the noise n(t) in such a way as to minimize, without any change of energy, its maximal absolute value. The physical sense of minimizing the criterion J at invariable noise energy consists in an even distribution of the noise along the interval [0, T]. This has the most profound effect on short-term noise, for example noise connected with a short-duration signal miss, noise pulses and so on. "Stretching" such noise in time by filtration leads to small errors of approximately similar amplitude over the whole duration of the signal. Thus, the formal statement of the problem of optimal Chebyshev filtration is reduced to the following [77]. On the interval [0, T] a set N_r of r-multiple noise pulses n(t) is given. In accordance with Definition 2, they differ from zero on no more than r arbitrarily located subintervals of this interval, the duration of each subinterval being not greater than the known value d. The location of the subintervals on the interval [0, T] and the behavior of the noise inside the subintervals are a priori unknown. Moreover, for the set 𝒜 of linear invertible operators (filters) which implement an isometric mapping of the noise n(t), 0 ≤ t ≤ T, into the noise m(t), 0 ≤ t ≤ T, criterion (2.5) has been given, juxtaposing to each operator A ∈ 𝒜 a certain number J (the norm of the operator). It is required:

• to find the exact lower boundary of criterion (2.5) on the set of operators 𝒜, that is, to solve the problem of estimating the potential noise immunity of the Chebyshev filter;
• to indicate what r, d, T should be for the noise level to be decreased in terms of criterion (2.5), that is, to derive a condition of Chebyshev filtration efficiency;
• to find an operator A from the set 𝒜 for which the lower boundary of criterion (2.5) is achieved, that is, to solve the problem of optimal Chebyshev filter synthesis.
2.3
Estimation of the potential noise immunity in case of single noise
Let the exact lower boundary of criterion (2.5) on the set of operators A ∈ 𝒜, where 𝒜 is the set of non-singular isometric operators implementing the mapping n(t) ∈ L₂(0, T) → m(t) ∈ L₂(0, T), be denoted J₀. The value

J₀ = min_{A∈𝒜} max_{n∈N_r, n≠0} ‖An(t)‖ / ‖n(t)‖   (2.6)

characterizes the potential noise immunity of the Chebyshev filtration. Let the operator A for which J = J₀ be called the optimal Chebyshev filter. It is obvious that 0 < J₀ ≤ 1, the case J₀ = 1 meaning that a decrease of the noise amplitude (error) is impossible. Knowledge of the value J₀, which in the general case depends on the signal and noise durations as well as on the multiplicity of the noise, J₀ = J₀(r, d, T), is important from two points of view. First, it provides the possibility to compare the noise immunity of real filters with the principally attainable noise immunity and to estimate the unused resources. Secondly, it makes it possible to determine a priori the efficiency of applying Chebyshev filtration, thereby outlining the field of appropriate usage of such filters. The Chebyshev filtration will be called effective if J₀ < 1 and ineffective if J₀ = 1. Let us find an estimate of the value J₀ for the case of single noise, r = 1. Since the operator A is linear, it is sufficient to restrict ourselves to consideration of noises n(t) with unit norm (this will not affect the ratio of noise amplitudes). In this case (2.6) takes the following form:

J₀ = min_{A∈𝒜} max_{n∈N₁, n≠0} ‖m(t)‖_∞,   (2.7)
where N_1 is the set of functions (single noises) n(t) defined on the interval [0, T] and equal to zero everywhere except a subinterval of duration d. Among the various linear isometric transformations of a single noise, the one optimal in the sense of criterion (2.5) converts the noise n(t) into a noise m(t) whose absolute value is constant ("stretching", or "smearing", the noise over the whole duration of the restored signal). To prove this, let the minimum of \|m(t)\|_\infty be estimated over the set of all functions m(t) satisfying the restriction

\int_0^T m^2(t)\,dt = \int_0^T [A\,n(t)]^2\,dt = \int_0^T n^2(t)\,dt = c_1, \qquad (2.8)

where c_1 is a constant.
Chapter 2 Optimal Chebyshev pre-distortion and filtration
Replacing the function under the integral sign by its maximal absolute value, we obtain

\|m(t)\|_\infty^2 \, T \ge c_1, \qquad (2.9)

whence it follows that

\|m(t)\|_\infty \ge \sqrt{\frac{c_1}{T}}. \qquad (2.10)
The equality sign in (2.10) is attained for functions m(t) whose absolute value is constant on the interval [0, T]:

|m(t)| = q, \qquad q = \sqrt{\frac{c_1}{T}}. \qquad (2.11)

Therefore J_0 may be written as

J_0 \le \max_{n \in N_1,\, n \neq 0} q. \qquad (2.12)

The inequality sign reflects the fact that the set of linear isometric operators \mathcal{A} may contain no operator of the indicated type. Let us ascertain for which noise n(t) \in N_1 the value q = \sqrt{c_1/T}, which is the Chebyshev norm of the noise m(t), reaches its maximum, i.e. let us find the worst single noise. For a single noise of duration d and maximal value |n(t)| = 1, taking (2.8) into account, it is possible to write

c_1 = \int_0^T n^2(t)\,dt = d, \qquad q = \sqrt{\frac{d}{T}}. \qquad (2.13)

Therefore, the potential noise immunity of the Chebyshev filtration in the case of single noise is determined by the inequality

J_0 = \max_{n \in N_1,\, \|n\|_\infty = 1} \sqrt{\frac{c_1}{T}} \le \sqrt{\frac{d}{T}}. \qquad (2.14)

Further it will be shown that the equality sign in relationship (2.14) is attainable; as a proof, an example of the strip-operator based on Hadamard matrices can be used. Thus, for single noise there is an exact estimate of the noise attenuation coefficient

J_0 = \sqrt{\frac{d}{T}}. \qquad (2.15)
From (2.15) it is seen that in the case of a single noise of duration d < T, effective Chebyshev filtration is possible. The attenuation of the noise amplitude is proportional to the square root of the relative noise duration: the smaller the noise duration on a fixed interval [0, T], the greater the gain in the potential noise immunity. If, for example, the relative noise duration is 1/4 (the noise strikes no more than a quarter of the signal), then J_0 = 0.5, i.e. the noise amplitude can be decreased by half.
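A hypothetical numerical sketch (not from the book) illustrates the "smearing" effect. Modelling each of k strips by a single sample and using an orthonormal Hadamard strip matrix, a unit pulse confined to one strip (relative duration d/T = 1/k) is spread over all strips with amplitude reduced exactly to \sqrt{d/T}:

```python
import numpy as np
from scipy.linalg import hadamard

k = 4                            # number of strips; relative noise duration d/T = 1/k
A = hadamard(k) / np.sqrt(k)     # orthonormal (isometric) strip matrix

n = np.zeros(k)                  # single noise confined to one strip
n[1] = 1.0                       # unit Chebyshev norm, ||n||_inf = 1

m = A @ n                        # transformed noise, one sample per strip
J = np.max(np.abs(m)) / np.max(np.abs(n))

print(J, np.sqrt(1 / k))         # both equal 0.5: J0 = sqrt(d/T)
```

The energy of the noise is preserved (the transform is isometric), but its peak amplitude drops by the factor \sqrt{k}.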
2.4 Estimation of the potential noise immunity in the case of multiple noises
Let us move to the estimation of the value J_0 in the case of r-multiple noise. Such a noise is assumed to be a sum of r non-overlapping single noises

n(t) = n_1(t) + \dots + n_r(t) \qquad (2.16)

of the described form. Owing to the linearity of the filter, the noise at its output is equal to

m(t) = m_1(t) + \dots + m_r(t), \qquad (2.17)
where m_i(t) is the response to the noise n_i(t). In accordance with the minimax approach, for the sum to be minimized each of its terms has to be minimized.

Let the set of operators \mathcal{A} be split into two subsets \mathcal{A}' and \mathcal{A}''. The first contains the operators that map time-non-overlapping noises n_i(t) to non-overlapping noises m_i(t). This subset includes, in particular, the identity operator, the operator of cyclic shift of functions, as well as strip-operators that "cut" the interval [0, T] into strips and then permute them. The subset \mathcal{A}'' contains all remaining operators, i.e. the operators able to map non-overlapping noises n_i(t) to overlapping noises m_i(t). If the lower bounds

J_0' = \min_{A \in \mathcal{A}'} \max_{n \in N_r} \frac{\|An(t)\|_\infty}{\|n(t)\|_\infty}, \qquad J_0'' = \min_{A \in \mathcal{A}''} \max_{n \in N_r} \frac{\|An(t)\|_\infty}{\|n(t)\|_\infty} \qquad (2.18)

are found separately for the subsets \mathcal{A}' and \mathcal{A}'', then for the value J_0 it is possible to write

J_0 = \min(J_0', J_0''). \qquad (2.19)
The operators A \in \mathcal{A}' cannot in principle perform the "stretching" procedure; therefore a decrease of the amplitude of the noise m_i(t) as compared to the noise n_i(t) is impossible, and

J_0' = 1. \qquad (2.20)
For the operators A \in \mathcal{A}'' a decrease of the noise amplitude m_i(t) can be achieved by "stretching" n_i(t) over the interval [0, T], as was shown for single noise. Taking (2.15) into account, we obtain

J_0'' = \min_{A \in \mathcal{A}''} \; \max_{n \in N_r,\, \|n\|_\infty = 1} \|m(t)\|_\infty \le r\sqrt{\frac{d}{T}}. \qquad (2.21)

Taking (2.19)–(2.21) into account, we obtain

J_0 = \min\!\left(r\sqrt{\frac{d}{T}},\; 1\right). \qquad (2.22)
It is interesting to note that in the cases when effective Chebyshev filtration of r-multiple noise is possible, the corresponding optimal filter coincides with the optimal filter for single noise. This important circumstance significantly facilitates the solution of the Chebyshev filtration problem, reducing it in many respects to the investigation of the case r = 1.

From estimate (2.21) it follows that a gain in noise immunity is possible only when

r\sqrt{\frac{d}{T}} < 1, \qquad (2.23)

the case d r^2 = T being critical. Let us consider the intensity of the noise \mu, equal to the ratio of the total noise duration to the signal duration:

\mu = \frac{dr}{T}. \qquad (2.24)

Then the condition of effective filtration (2.23) can be written as

\mu r < 1. \qquad (2.25)
Thus, attenuation of the noise level in terms of criterion (2.5) at unchanged signal energy is possible only when the product of the noise multiplicity and the noise intensity is less than one.
The physical sense of this condition is that increasing the number of uncorrelated noise pulses at constant intensity makes the noise "more random" and the filtration more difficult. Conversely, the smaller the multiplicity of the noise, the "more deterministic" it is. Suppose, for example, that at the constant intensity \mu = 0.5 we have r = 2 in the first case and r = 1 in the second. In the first case there are two noise pulses, each of duration T/4, their relative position being unknown. In the second case there is a single noise of duration T/2, which can be considered as two pulses of duration T/4 located side by side. It is clear that in the second case more a priori information about the noise is available, which in the end allows a more effective filtration to be performed.

Effective Chebyshev filtration of disturbances of the white-noise type, acting over the entire duration of the signal, is impossible, since for such noises \mu = 1 and condition (2.25) is not fulfilled. Effective filtration becomes possible when the noise strikes only a part of the signal, which is why the noise model under consideration is used.

Formulae (2.22) and (2.25) answer two of the three questions that have been posed. In particular, they allow conclusions to be drawn concerning the principal resources of the Chebyshev filtration for noises of various types. For example, for the case shown in Figure 1.13a, d = T/7 and r = 2, i.e. the noise intensity \mu \approx 0.29. The maximal guaranteed degree of attenuation of the noise amplitude is determined by the value

J_0 = \min\!\left(2\sqrt{\tfrac{1}{7}},\; 1\right) = \sqrt{\tfrac{4}{7}} \approx 0.76.

This means that for any pulse shape (Figure 1.13a) an optimal Chebyshev filter will decrease the noise amplitude by no less than 1/J_0 \approx 1.3 times.
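Formulae (2.22) and (2.25) are easily wrapped into a small calculator; the function names below are illustrative, not from the book:

```python
import math

def chebyshev_J0(r, d, T):
    """Potential noise-immunity bound (2.22): J0 = min(r*sqrt(d/T), 1)."""
    return min(r * math.sqrt(d / T), 1.0)

def filtration_effective(r, d, T):
    """Efficiency condition (2.25): mu * r < 1, with intensity mu = d*r/T."""
    return (d * r / T) * r < 1.0

# Example from the text: d = T/7, r = 2
J0 = chebyshev_J0(2, 1, 7)
print(round(J0, 2))                  # 0.76 -> attenuation by 1/J0 ~ 1.3 times
print(filtration_effective(2, 1, 7)) # True: mu*r = 4/7 < 1
```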
2.5 Synthesis of the optimal Chebyshev filter
Let us consider the third of the problems posed, namely, to design a filter whose noise immunity reaches the potentially possible value J_0. The search for an operator providing the optimal filtration will be carried out within the framework of the strip-method, i.e. using not the whole set \mathcal{A} of linear isometric operators but only its subset \mathcal{A}_s described below. Let the interval [0, T] be divided into k equal strips of duration h = T/k, and let the segment of the function n(t), (i-1)h \le t \le ih, belonging to the i-th strip be denoted by n_i(s), 0 \le s \le h.
From the functions n_i(s) obtained, the vector function

N(s) = \begin{bmatrix} n_1(s) \\ n_2(s) \\ \vdots \\ n_k(s) \end{bmatrix}, \qquad 0 \le s \le h,

is formed. In this way the transition is performed from the univariate function n(t), defined on the interval [0, T], to the k-dimensional vector function N(s), defined on the interval [0, T/k]. It should be emphasized that this is not an approximate but an exact transition, since the components of the vector N(s) are the time-varying strips of the function, of duration h = T/k, rather than samples of the function n(t) in the form of constants. In a similar manner the univariate function m(t) is represented in the form of the vector function

M(s) = \begin{bmatrix} m_1(s) \\ m_2(s) \\ \vdots \\ m_k(s) \end{bmatrix}, \qquad 0 \le s \le h, \qquad (2.26)

where m_i(s) is the segment of the function m(t) belonging to the i-th strip of the division. Let the relationship

m(t) = x'(t) - x(t) = A[y(t) + n(t)] - x(t) = A\,n(t) \qquad (2.27)

be represented in the form

M(s) = A\,N(s), \qquad (2.28)

where A = [a_{ij}]_1^k is a square numerical matrix. The class of transformations described by relationship (2.28) is sufficiently wide: in particular, as k \to \infty it is possible to obtain differentiation operators, as well as Fredholm and Volterra integral operators. Let us find out what the matrix A has to be in order to provide an isometric operator. In accordance with the isometry condition

\int_0^T n^2(t)\,dt = \int_0^T m^2(t)\,dt = \int_0^T [A\,n(t)]^2\,dt \qquad (2.29)

we have

\int_0^h N^T(s)\,N(s)\,ds = \int_0^h N^T(s)\,A^T A\,N(s)\,ds. \qquad (2.30)
Hence A^T A = I, i.e. A is an orthogonal matrix, and its entries, in particular, have to satisfy the relationships

\sum_{j=1}^{k} a_{ij}^2 = 1, \qquad i = 1, \dots, k. \qquad (2.31)
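The isometry conditions (2.29)–(2.31) are easy to check numerically. The sketch below (an illustration, not the book's code) represents a signal by a matrix whose rows are its strips and applies an orthogonal Hadamard-based strip matrix:

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
k, samples = 8, 50                 # 8 strips, 50 samples per strip
A = hadamard(k) / np.sqrt(k)       # orthogonal: A.T @ A = I

# rows of N are the strips n_i(s) of a random signal n(t)
N = rng.standard_normal((k, samples))
M = A @ N                          # strip-wise transform M(s) = A N(s)

print(np.allclose(A.T @ A, np.eye(k)))         # True: A is orthogonal
print(np.isclose((M**2).sum(), (N**2).sum()))  # True: L2 energy preserved (2.29)
print(np.allclose((A**2).sum(axis=1), 1.0))    # True: row condition (2.31)
```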
Thus, the set of filtration operators \mathcal{A}_S under consideration includes all operators described by relationship (2.28), where A is a numerical orthogonal matrix. The norm of the vector function N(s) corresponding to the Chebyshev norm of the function n(t) has the form

\|N(s)\|_\infty = \max_{1 \le i \le k} \|n_i(s)\|_\infty = \max_{1 \le i \le k} \; \max_{0 \le s \le h} |n_i(s)|.

Within the framework of the redundant variables method it has been proved that in analog data processing it is enough to have k = r + 1 redundant components for the place and value of practically all r-multiple errors to be determined. Consider the case of a single noise, r = 1 (only one signal strip is disturbed), for k = r + 1 = 2. Let two residue signals \Delta_1 and \Delta_2 (2.53) be generated and n + 2 linear combinations of them be formed:

\Delta_i' = b_{2i}\,\Delta_1 - b_{1i}\,\Delta_2 = \begin{vmatrix} \Delta_1 & \Delta_2 \\ b_{1i} & b_{2i} \end{vmatrix}, \qquad i = 1, 2, \dots, n+2. \qquad (2.54)

It turns out that each residue \Delta_i' is invariant with respect to distortions of one of the strips of the signal y(t). Indeed, if due to distortion by noise the signal y_j turns into y_j + \Delta y_j, then \Delta_1 and \Delta_2 become, correspondingly,

\Delta_1 = b_{1j}\,\Delta y_j, \qquad \Delta_2 = b_{2j}\,\Delta y_j. \qquad (2.55)
Then for the linear combinations we obtain

\Delta_i' = \Delta y_j \,( b_{2i} b_{1j} - b_{1i} b_{2j} ), \qquad i = 1, 2, \dots, n+2. \qquad (2.56)
Hence it is seen that if none of the second-order determinants vanishes,

\begin{vmatrix} b_{1j} & b_{2j} \\ b_{1i} & b_{2i} \end{vmatrix} \neq 0, \qquad i \neq j, \qquad (2.57)

then all residue control signals except \Delta_j' will differ from zero. On this basis the number of the distorted strip is determined; the noise value is found from (2.55).

Let us consider a special case of signal pre-distortion, when the pre-distorted signal is obtained by adding two redundant strips to the original signal. The matrix A then takes the form

A = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ a_1 & a_2 & \cdots & a_n \\ b_1 & b_2 & \cdots & b_n \end{bmatrix}. \qquad (2.58)
Section 2.7 Introduction of redundancy in the strip-method of linear pre-distortions
Here the original signal remains unchanged, which results in a significant simplification of the direct and inverse transformations. The additional strips are formed in accordance with the formulae

y_{n+1}(t) = \sum_{j=1}^{n} a_j x_j(t), \qquad y_{n+2}(t) = \sum_{j=1}^{n} b_j x_j(t). \qquad (2.59)

Since in this case we have x_j(t) = y_j(t), j = 1, 2, \dots, n, relationships (2.53) take the form

\Delta_1 = a_1 y_1 + a_2 y_2 + \dots + a_n y_n - y_{n+1} = 0,
\Delta_2 = b_1 y_1 + b_2 y_2 + \dots + b_n y_n - y_{n+2} = 0. \qquad (2.60)
The coefficients a_j and b_j may be chosen, for example, as

a_j = 1, \qquad b_j = j, \qquad j = 1, 2, \dots, n. \qquad (2.61)

Then we obtain for the error (residue) signals

\Delta_1 = y_1 + y_2 + \dots + y_n - y_{n+1} = 0,
\Delta_2 = y_1 + 2y_2 + \dots + n y_n - y_{n+2} = 0. \qquad (2.62)

The number of the strip distorted by noise is determined by the ratio of the control signals:

N = \frac{\Delta_2}{\Delta_1}. \qquad (2.63)

In order to avoid division by zero, it is more convenient to rewrite (2.63) in the form \Delta_2 = N \Delta_1. On the plane (\Delta_1, \Delta_2) this expression presets n straight lines passing through the origin of coordinates, N being the angular coefficient (slope) of each line. For instance, when the first strip is distorted we obtain

\Delta_1 = \Delta_2 = \Delta y_1, \qquad N = 1;

when the second strip is distorted,

\Delta_1 = \Delta y_2, \qquad \Delta_2 = 2\Delta y_2, \qquad N = 2,

and so on up to the n-th strip.
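The whole scheme — appending two redundant strips by (2.59) with coefficients (2.61), locating the distorted strip by (2.63) and correcting it — can be sketched numerically as follows (toy data, not the book's redund program):

```python
import numpy as np

n = 5
x = np.arange(1.0, n + 1)          # strip values of the original signal (toy data)
a = np.ones(n)                     # a_j = 1   (2.61)
b = np.arange(1.0, n + 1)          # b_j = j   (2.61)

y = np.concatenate([x, [a @ x], [b @ x]])   # append y_{n+1}, y_{n+2}  (2.59)

y[2] += 7.5                        # pulse noise distorts strip 3 (index 2)

d1 = a @ y[:n] - y[n]              # residues (2.62); d1 equals the noise value
d2 = b @ y[:n] - y[n + 1]
N = round(d2 / d1)                 # slope gives the distorted strip number (2.63)
y[N - 1] -= d1                     # correction: subtract the residue

print(N, y[:n])                    # strip 3 found; original strips restored
```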
For the noise to be corrected it is enough to subtract the residue control signal \Delta_1 from the distorted strip, since in the case of a single error this residue control signal coincides with the noise: \Delta_1 = \Delta y_N.

Example 2.3. Let us consider an example of redundant signal formation with the help of matrix (2.58) and subsequent processing of this signal with the normalized coefficients (2.61). The calculations are performed in MATLAB with the help of the redund program, the text of which is given below.

Redund program

h=.01; t=0:h:8; tt=0:h:1-h;  % formation of time arrays 0<=t<=8 and 0<=tt<=1-h

A.1 Hadamard matrices

\begin{cases} x + y + z + w = n, \\ x + y - z - w = 0, \\ x - y + z - w = 0, \\ x - y - z + w = 0. \end{cases}
Hence, x = y = z = w = n/4, i.e. n is divisible by 4. It should be noted that only the necessity of this condition has been proved; it does not follow that for every n divisible by 4 a Hadamard matrix has to exist. The hypothesis that this condition is also sufficient has not yet been proved. In the language of geometry, the question of the existence of a Hadamard matrix of order n = 4k is equivalent to the question of whether a regular hypersimplex can be inscribed into a (4k - 1)-dimensional cube.

To obtain Hadamard matrices in practice it is possible to use the command hadamard of the MATLAB package. It allows Hadamard matrices to be built for the cases when n, n/12 or n/20 is a power of 2. Unfortunately, such n as 28, 36, 44, 52, 56, 60 and others divisible by 4 are not covered by these cases, though Hadamard matrices for them were found long ago. A list of all known Hadamard matrices, composed by N. Sloane, can be found on the site http://www.research.att.com/~njas/hadamardt. In Sloane's library all Hadamard matrices for n = 28 are given, and at least one matrix for every n divisible by 4, right up to n = 256. They have names of the type had.1.txt, had.2.txt, had.4.txt, had.8.txt, ..., 256.syl.txt, and are arranged as text files containing arrays of signs + and -, corresponding to positive and negative entries of the Hadamard matrices. The contents of several files of this type are given in Table A.1. The system of notation is clear from the first column, where both versions of recording the Hadamard matrix of order 4 are shown.
Table A.1. Examples of Hadamard-matrix txt-files: had.4.txt, had.8.txt, had.12.txt, had.16.0 (arrays of + and - signs; the first column shows both versions of recording the Hadamard matrix of order 4).
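A minimal check of the defining property H H^T = nI can be done outside MATLAB as well; note that scipy's hadamard (used here as a stand-in) covers only orders that are powers of 2, which is narrower than MATLAB's command:

```python
import numpy as np
from scipy.linalg import hadamard   # supports n a power of 2

for n in (2, 4, 8, 16, 64):
    H = hadamard(n)
    assert H.shape == (n, n) and set(np.unique(H)) == {-1, 1}
    assert np.array_equal(H @ H.T, n * np.eye(n, dtype=int))
print("H @ H.T = n I verified")
```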
Appendix
Hadamard matrices and the matrices close to them
Table A.2. Symmetrical Hadamard matrices.

n = 4:

\begin{bmatrix} -1 & 1 & 1 & 1 \\ 1 & -1 & 1 & 1 \\ 1 & 1 & -1 & 1 \\ 1 & 1 & 1 & -1 \end{bmatrix}

(the table also lists symmetric \pm 1 matrices of orders n = 8 and n = 12).
Let us notice that the Hadamard matrices of orders 2, 4, 8 and 12 are unique (up to equivalence). At n = 16 there are several different Hadamard matrices; in Sloane's library they are denoted had.16.0, had.16.hed, had.16.syl, had.16.twin, had.16.1, had.16.2, had.16.3, had.16.4. Three non-equivalent Hadamard matrices for n = 20 are denoted had.20.pal, had.20.will, had.20.toncheviv. Further, the library gives 60 matrices of order 24 and 487 matrices of order 28, as well as examples of Hadamard matrices for every n divisible by 4 up to 256 inclusive. In designing them, methods proposed by Paley, Plackett and Burman, Sylvester, Turyn, and Williamson were used. Certain information about these methods can be found in Chapter 7, "Orthogonal Arrays", by Hedayat, Sloane and Stufken, in the digest [18]: Contemporary Design Theory: A Collection of Essays, J. H. Dinitz and D. R. Stinson, editors, Wiley, New York, 1992. H. Kharaghani and B. Tayfeh-Rezaie constructed a Hadamard matrix of order 428 in 2004. The smallest order for which no Hadamard matrix is presently known is 668.
Not all Hadamard matrices represented in Table A.1 are symmetrical. Table A.2 gives versions of these matrices that are symmetrical relative to the main or secondary diagonal, which in a number of cases are more convenient for use in the strip-method. A Hadamard matrix is called regular if every row and every column contains the same number of entries "+1". Such matrices have the maximum number of "+1" entries among all possible Hadamard matrices of a given order. For example, the first row of Table A.2 contains a regular Hadamard matrix of order 4.
A.2 Shortened Hadamard matrices

By permuting rows and columns of a Hadamard matrix and multiplying them by -1, it is possible to bring it to a form with positive entries in the first row and the first column. Discarding this row and column, we obtain a shortened (reduced) matrix of order n - 1. This matrix is no longer orthogonal but becomes circulant: all its rows are obtained by cyclic shifts of the first one. This property is useful in processing signals by the strip-method, since it provides "smoothness" of the transmitted signal. Let some properties of the shortened Hadamard matrices be analyzed. At n = 4, taking as a basis the matrix from the first column of Table A.1, we obtain the shortened Hadamard matrix of the third order:
A_3 = \begin{bmatrix} -1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & -1 \end{bmatrix}, \qquad A_3^{-1} = \frac{1}{2}\begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}.
In the given case the inverse matrix is also circulant. Consider the eigenvalues \lambda_i and eigenvectors H_i of the matrix A_3:

\lambda_1 = 1, \quad \lambda_2 = -2, \quad \lambda_3 = -2;

H_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \quad H_2 = \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}, \quad H_3 = \begin{bmatrix} 1 \\ 1 \\ -2 \end{bmatrix}.

The first vector corresponds to a two-multiple noise that remains unchanged in filtration by the strip-method; other two-multiple noises, however, can increase.

For n = 8 we obtain the following shortened Hadamard matrix of the seventh order, together with its inverse:
A_7 is a circulant \pm 1 matrix of order seven (all rows being cyclic shifts of the first row), and A_7^{-1} is likewise circulant, with entries 0 and \pm 1 divided by 4.
The eigenvalues of the matrix A_7 are 1 and \pm 2\sqrt{2}. In the general case the eigenvalues of the shortened matrix obtained from a Hadamard matrix of order n are divided into three groups: one of them is always equal to 1, half of the remaining ones are equal to \sqrt{n}, and the other half to -\sqrt{n}.
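The shortening procedure and the eigenvalue claim can be checked numerically; the sketch below (an illustration, with scipy's hadamard in place of MATLAB's) verifies that the rows of the shortened matrix satisfy A A^T = nI - J and that the eigenvalue moduli are 1 and \sqrt{n}:

```python
import numpy as np
from scipy.linalg import hadamard

n = 4
H = hadamard(n)                    # first row and column all +1
A = H[1:, 1:]                      # shortened matrix of order n - 1

# the shortened rows are no longer orthogonal:
# A @ A.T = n I - J, where J is the all-ones matrix
J = np.ones((n - 1, n - 1))
print(np.allclose(A @ A.T, n * np.eye(n - 1) - J))   # True

# eigenvalue moduli: one equal to 1, the rest equal to sqrt(n)
mags = sorted(np.abs(np.linalg.eigvals(A)))
print(np.allclose(mags, [1, 2, 2]))                  # True for n = 4
```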
A.3 Conference-matrices

Definition. A conference matrix (C-matrix) is any matrix C of order n with zeros on the main diagonal and entries +1 and -1 elsewhere, satisfying the condition C^T C = (n-1)I. Thus, the rows (and columns) of C-matrices are pairwise orthogonal. The simplest C-matrices have the form

\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \qquad \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \qquad \begin{bmatrix} 0 & 1 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & -1 & -1 & 1 \\ 1 & 1 & 0 & 1 & -1 & -1 \\ 1 & -1 & 1 & 0 & 1 & -1 \\ 1 & -1 & -1 & 1 & 0 & 1 \\ 1 & 1 & -1 & -1 & 1 & 0 \end{bmatrix}, \qquad \begin{bmatrix} 0 & 1 & 1 & 1 \\ -1 & 0 & -1 & 1 \\ -1 & 1 & 0 & -1 \\ -1 & -1 & 1 & 0 \end{bmatrix}. \qquad (A.1)
The first and third of them are symmetrical; the second and fourth are skew-symmetrical. The skew-symmetric C-matrices, like the Hadamard matrices, exist only for n = 2 and n divisible by 4. From the point of view of the strip-method they are in all respects inferior to the Hadamard matrices and therefore will not be considered below.
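The defining property is easy to verify numerically; the matrix below is one standard symmetric conference matrix of order 6 (Paley construction), used here as an illustration:

```python
import numpy as np

# a symmetric conference matrix of order 6 (Paley construction)
C = np.array([[ 0, 1, 1, 1, 1, 1],
              [ 1, 0, 1,-1,-1, 1],
              [ 1, 1, 0, 1,-1,-1],
              [ 1,-1, 1, 0, 1,-1],
              [ 1,-1,-1, 1, 0, 1],
              [ 1, 1,-1,-1, 1, 0]])

n = C.shape[0]
print(np.array_equal(C.T @ C, (n - 1) * np.eye(n, dtype=int)))  # True
print(np.array_equal(C, C.T))                                   # True: symmetric

# after normalization by sqrt(n-1) the maximal entry is 1/sqrt(n-1),
# only slightly above the Hadamard value 1/sqrt(n)
print(round(1/np.sqrt(n - 1), 3), round(1/np.sqrt(n), 3))
```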
The symmetrical C-matrices of order n can exist only when n - 2 is divisible by 4 and n - 1 can be represented as a sum of squares of two integers. For example, for n = 2, 6, 10, 14, 18 they exist, while for n = 22 they do not, since the number 21 cannot be represented as a sum of two squares. For n = 26, 30 the C-matrices exist, since 25 = 3^2 + 4^2 and 29 = 2^2 + 5^2. For n = 34, as for n = 22, the answer is negative, and for n = 38, 42, 46 it is negative as well.

Let us consider two problems in which C-matrices arise.

1. Conference arrangement problem. Suppose n directors of some company have decided to arrange a conference by telephone in such a way that any director can speak to every one of his colleagues while the rest listen to their discussion. The construction of such conference communications is equivalent to the construction of a C-matrix.

2. Weighing problem. What is the best scheme of weighing if n objects are to be weighed in n weighings? The strategy of weighing is described by a C-matrix given by its entries c_ij: c_ij = 1 if in weighing i the object j is located on the left pan; c_ij = -1 if it is on the right pan; c_ij = 0 if the object j does not take part in weighing i. For n divisible by 4 the best weighing scheme is given by a Hadamard matrix, and for even n not divisible by 4 by a symmetrical C-matrix.

The normalized C-matrices, whose order differs by 2 from that of the Hadamard matrices, possess an extremal property similar to that of the Hadamard matrices: their entry maximal in absolute value is minimal for the class of orthogonal matrices. Further we will denote the maximal absolute value of an entry by \mu. For the C-matrices this value equals \mu = 1/\sqrt{n-1}, i.e. it is only a little inferior to the Hadamard matrices, which have \mu = 1/\sqrt{n}; for example, for n = 6 the difference is less than 10%. Taken together, these formulae describe the exact lower bound of the maximal-in-absolute-value entry of orthogonal matrices of even order: the first for n not divisible by 4 (in particular 6, 10, 14, 18, 26), the second for n divisible by 4 (in particular 4, 8, 12, 16, 20).

Table A.3 shows C-matrices for n = 10, 14, 18; the cases n = 2 and n = 6 have been considered above (A.1). If need be, the matrix C_18 (like C_14) can be reduced to a symmetrical form with a zero diagonal. Moreover, there is an analogue having two zero diagonals disposed cross-wise:
X_18 (an 18 x 18 matrix with entries \pm 1 and two crossed zero diagonals).

This matrix is rather close to the optimal one; after normalization the value of its maximal entry equals \mu = 1/\sqrt{n-2} = 0.25 (for the matrix C_18 we have \mu = 1/\sqrt{n-1} \approx 0.2425).
The Hadamard matrices and C-matrices are closely connected. In particular, it is possible to construct Hadamard matrices from C-matrices [160]. Suppose C is a symmetric C-matrix of order m. Then the matrix

A = \begin{bmatrix} C + I_m & C - I_m \\ C - I_m & -C - I_m \end{bmatrix}

is a Hadamard matrix of order 2m. Moreover, if C is a skew-symmetric C-matrix, then I + C is a Hadamard matrix of order m. Taken together, the Hadamard matrices and C-matrices give the solution of the orthogonal Procrustean problem (the problem of finding orthogonal matrices with the minimal entry maximal in absolute value) for almost all even n, with the exception of several values, such as n = 22 and n = 34, for which the solution is unknown to the authors. The situation for odd n is much worse: only a few optimal matrices for small values of n are known. Information about them is given below.
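The block construction above can be confirmed numerically; the sketch builds a Hadamard matrix of order 12 from a symmetric conference matrix of order 6 (the same Paley-type C used earlier, included here for self-containedness):

```python
import numpy as np

# symmetric C-matrix of order m = 6
C = np.array([[ 0, 1, 1, 1, 1, 1],
              [ 1, 0, 1,-1,-1, 1],
              [ 1, 1, 0, 1,-1,-1],
              [ 1,-1, 1, 0, 1,-1],
              [ 1,-1,-1, 1, 0, 1],
              [ 1, 1,-1,-1, 1, 0]])
m = C.shape[0]
I = np.eye(m, dtype=int)

A = np.block([[C + I, C - I],
              [C - I, -C - I]])        # the construction from the text

assert np.array_equal(A @ A.T, 2 * m * np.eye(2 * m, dtype=int))
print("A is a Hadamard matrix of order", 2 * m)
```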
Table A.3. C-matrices for n = 10, 14, 18 (the arrays of entries 0, \pm 1 of the matrices C_10, C_14 and C_18).
A.4 Optimal matrices of the odd order (M-matrices)

Let us call the matrices providing a solution of the orthogonal Procrustean problem for odd n minimax, or simply M-matrices. Their main property is the minimality of the value \mu, i.e. of the entry maximal in absolute value, over the class of all orthogonal matrices of a given dimension. Here it is possible to indicate three problems [159].

Problem 1. Search for particular M-matrices for various numbers n.

Problem 2. Determination of the exact lower bound for the value of the maximal entry of M-matrices depending on n: \mu = f(n).

Problem 3. Determination of the number k of entry levels in the M-matrix for different n. Thus, the Hadamard matrices can be called one-level, since all their entries are equal in absolute value; the C-matrices are two-level, the moduli of their entries being 0 or 1. For odd n the M-matrices appear to be k-level, with k depending on n.

It should be expected that the solution of all three problems will depend on the remainder (1 or 3) of dividing the odd number n by 4. Correspondingly, the set of M-matrices splits into two subsets that differ in lower bounds, number of levels k and type of matrices.

Let us move to the description of particular M-matrices for n = 3, 5, 7, 9, 11. The search for these matrices was performed by numerical and symbolic modelling in the MATLAB and MAPLE packages with the help of specially developed software(1). As a result we have managed to determine the analytic form of the entries of the optimal matrices M_3, M_5, M_7, M_9, as well as to find the matrix M_11 in numerical form, having preliminarily obtained a system of non-linear algebraic equations for determining its entries. A more detailed search procedure is explained below by the example of the matrix M_11.

For the case n = 3 the optimal matrix providing the solution of the orthogonal Procrustean problem has the form

M_3 = \frac{1}{3}\begin{bmatrix} -1 & 2 & 2 \\ 2 & -1 & 2 \\ 2 & 2 & -1 \end{bmatrix}. \qquad (A.2)
(1) In carrying out the computer experiments, software developed by D. V. Shintyakov [158] was used.
Figure A.1. Distribution of matrix M5 entries by levels.
It is orthogonal and symmetrical, and the value of its maximal entry equals \mu = 2/3. The matrix contains entries of two kinds, i.e. it has two levels. Its relation to the geometric problem of inscribing a regular octahedron into a cube of minimum size is discussed in Chapter 3 (Figure 3.5).

For n = 5 the optimal matrix turns out to have three levels:

M_5 = \frac{1}{11}\begin{bmatrix} 2 & 3 & 6 & 6 & 6 \\ 3 & -6 & 6 & -6 & 2 \\ 6 & 6 & 3 & -2 & -6 \\ 6 & -6 & -2 & 6 & -3 \\ 6 & 2 & -6 & -3 & 6 \end{bmatrix}. \qquad (A.3)
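A matrix of this three-level type can be checked numerically. The sign pattern below is one restoration consistent with the stated properties (the print loses minus signs in extraction), so it is an assumption, not a verbatim copy of the book's matrix:

```python
import numpy as np

# three-level orthogonal symmetric matrix with entries {2, 3, 6}/11
M5 = np.array([[ 2,  3,  6,  6,  6],
               [ 3, -6,  6, -6,  2],
               [ 6,  6,  3, -2, -6],
               [ 6, -6, -2,  6, -3],
               [ 6,  2, -6, -3,  6]]) / 11

print(np.allclose(M5 @ M5.T, np.eye(5)))        # orthogonal
print(np.array_equal(M5, M5.T))                 # symmetric
levels, counts = np.unique(np.rint(np.abs(11 * M5)).astype(int),
                           return_counts=True)
print(dict(zip(levels.tolist(), counts.tolist())))  # {2: 5, 3: 5, 6: 15}
```

The level counts (15 entries of modulus 6, and 5 each of moduli 3 and 2) match the distribution described in the text.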
It is also orthogonal and symmetrical, the value of its maximal entry being \mu = 6/11. The distribution of the absolute values of its entries by levels is shown in Figure A.1. Of its 25 entries, 15 lie on the upper level, and each of the remaining two levels contains 5 entries. Thus, the entries of the upper level amount to 60% of the total number (67% for the matrix M_3 and 100% for the Hadamard matrices).

In investigating the case n = 7, two matrices were found: the five-level matrix M_7 with \mu = (5 + 7\sqrt{7})/53 \approx 0.444 and the two-level matrix N_7 with \mu = (2 + 3\sqrt{2})/14 \approx 0.446. The structure of these matrices is the following:
140
Appendix
2
d;
c;
a;
a;
a;
c;
a;
a;
a;
a;
a;
d;
a;
a;
a;
a;
a;
c;
b;
b;
a;
a;
b;
e;
a;
a;
a;
b;
a;
d;
a;
a;
a;
b;
d;
e;
a;
a;
a;
a;
b;
b;
a;
b;
b;
a;
a;
b;
a;
b;
a;
b;
b;
a;
a;
a;
b;
b;
a; a;
b;
a;
b;
a;
b;
a;
b;
b;
a;
a;
a;
a;
b;
a;
a;
b;
a;
b;
a;
6 6 d; 6 6 6 c; 6 6 M7 ¼ 6 6 a; 6 6 a; 6 6 6 a; 4 2 6 6 6 6 6 6 6 N7 ¼ 6 6 6 6 6 6 6 4
Hadamard matrices and the matrices close to them
a
3
7 a 7 7 7 a7 7 7 b7 7; 7 d 7 7 7 e 7 5 a b
3
7 a7 7 7 a7 7 7 b 7 7: 7 a 7 7 7 b7 5 a
Unlike the preceding cases, the entries of these matrices are irrational. For the matrix M7 they contain √7:

a = 3 + 3√7,   b = 9,   c = 5 − √7,   d = −6 + 3√7,   e = 4 + √7.

In normalizing, all of them should be divided by 22 + √7. The entries of the matrix N7 contain √2: a = 2 + √2, b = 2; in normalizing, they should be divided by 2 + 4√2. Let us show the matrix M7 in detailed writing (without any normalization):

M7 =
[  3+3√7   6−3√7    5−√7   3+3√7  −3−3√7  −3−3√7  −3−3√7 ]
[  6−3√7    5−√7   3+3√7   3+3√7   3+3√7   3+3√7  −3−3√7 ]
[   5−√7   3+3√7   6−3√7   3+3√7  −3−3√7   3+3√7   3+3√7 ]
[  3+3√7   3+3√7   3+3√7   −5+√7      9      −9       9  ]
[ −3−3√7   3+3√7  −3−3√7      9    4+√7  −3−3√7   6−3√7  ]
[ −3−3√7   3+3√7   3+3√7     −9  −3−3√7   6−3√7   −4−√7  ]
[ −3−3√7  −3−3√7   3+3√7      9   6−3√7   −4−√7   3+3√7  ]
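As a check, the matrix just written out can be verified numerically. A sketch (the names A–E below stand for the five entry magnitudes; D = 6 − 3√7 is the one negative value among them):

```python
import numpy as np

s7 = np.sqrt(7.0)
A, B = 3 + 3*s7, 9.0           # the two most populous levels
C, D, E = 5 - s7, 6 - 3*s7, 4 + s7

# M7 as written out above (entries before normalization)
M7 = np.array([
    [ A,  D,  C,  A, -A, -A, -A],
    [ D,  C,  A,  A,  A,  A, -A],
    [ C,  A,  D,  A, -A,  A,  A],
    [ A,  A,  A, -C,  B, -B,  B],
    [-A,  A, -A,  B,  E, -A,  D],
    [-A,  A,  A, -B, -A,  D, -E],
    [-A, -A,  A,  B,  D, -E,  A],
])

norm = 22 + s7
print(np.allclose(M7, M7.T))                               # symmetric
print(np.allclose(M7 @ M7.T, norm**2 * np.eye(7)))         # rows orthogonal
print(np.isclose(np.abs(M7).max() / norm, (5 + 7*s7)/53))  # mu = (5 + 7*sqrt(7))/53
# level populations, from the smallest magnitude to the largest:
counts = np.unique(np.round(np.abs(M7), 9), return_counts=True)[1]
print(counts.tolist())                                     # [6, 4, 3, 6, 30]
```

The level populations 6, 4, 3, 6 and 30 are exactly those read off Figure A.2 below.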
Section A.4. Optimal matrices of the odd order (M-matrices)

The matrix N7 in detailed writing has the entries ±(2 + √2) and ±2 arranged according to the structure shown above: every row and every column contains four entries of the first kind and three of the second.   (A.4)

The distribution of the entry modulus for the normalized matrix M7 level by level, obtained in MATLAB with the help of the command plot(sort(abs(M7(:))),'*'), is shown in Figure A.2.
Figure A.2. Distribution of matrix M7 entries by levels.
From this figure it is seen that the bottom level contains 6 entries; the next ones contain 4, 3 and 6 entries, respectively. The most numerous, upper level contains 30 entries, which amounts to about 61 % of the total number (approximately as much as in the case of the matrix M5). For n = 9 the best of the matrices found has four levels and the value μ = (3 + √3)/12 ≈ 0.3943. Its structure and entries are the following:
M9 =
[ d  b  b  b  b  b  b  b  b ]
[ b  a  a  a  a  a  c  c  c ]
[ b  a  c  a  c  a  a  c  a ]
[ b  a  a  c  a  c  a  c  a ]
[ b  a  c  a  a  c  a  a  c ]
[ b  a  a  c  c  a  c  a  a ]
[ b  c  a  a  a  c  c  a  a ]
[ b  c  c  c  a  a  a  a  a ]
[ b  c  a  a  c  a  a  a  c ]

12a = 3 + √3,   6b = √(6√3 − 6),   4c = √3 − 1,   3d = 2√3 − 3;

a ≈ 0.3943,   b ≈ 0.3493,   c ≈ 0.1830,   d ≈ 0.1547.

Maximal entry μ = (3 + √3)/12 ≈ 0.394337. Here we deal with an irrationality of the type "a root from a root", arising from the solution of a biquadratic equation. The distribution of the modulus of the matrix M9 entries over the levels is shown in Figure A.3.
Figure A.3. Distribution of matrix M9 entries by levels.
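The exact expressions for the M9 entries can be checked against the unit norms of the rows of the normalized matrix: by the structure shown, the first row contains d once and b eight times, while every other row contains b once, a five times and c three times. A short check:

```python
import numpy as np

s3 = np.sqrt(3.0)
a = (3 + s3) / 12           # 0.3943...
b = np.sqrt(6*s3 - 6) / 6   # 0.3493...  ("a root from a root")
c = (s3 - 1) / 4            # 0.1830...
d = (2*s3 - 3) / 3          # 0.1547...

print(round(d**2 + 8*b**2, 12))           # 1.0  (first row of M9)
print(round(b**2 + 5*a**2 + 3*c**2, 12))  # 1.0  (each of the other rows)
```

Both identities hold exactly, which fixes b once a, c and d are known.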
From this figure it is seen that on the bottom level there is one entry, and on the next two levels there are 24 and 16 entries, respectively. On the upper level there are 40 entries, which amounts to 49 % of the total number. Unfortunately, n = 9 is the last case for which it has been possible to obtain explicit expressions for the entries of the M-matrix. For n = 11 the best orthogonal matrix found in MATLAB has a six-level structure:

M11 =
[ b  a  f  a  a  d  c  e  a  a  a ]
[ d  f  a  a  e  a  b  c  a  a  a ]
[ a  e  c  a  d  a  a  a  f  a  b ]
[ a  d  a  b  a  a  f  a  e  c  a ]
[ a  a  e  a  b  a  a  d  a  f  c ]
[ a  a  a  d  a  e  a  f  c  b  a ]
[ f  b  d  c  a  a  a  a  a  e  a ]
[ e  a  a  a  f  c  a  a  b  d  a ]
[ a  a  a  f  c  a  d  b  a  a  e ]
[ a  c  b  e  a  f  a  a  a  a  d ]
[ c  a  a  a  a  b  e  a  d  a  f ]   (A.5)

The numerical values of its entries are as follows:

a = 0.34295283,   b = 0.33572291,   c = 0.30893818,
d = 0.2439851,    e = 0.15671878,   f = 0.045364966.

The index μ = 0.3429 is equal to the value of the entry a. Let us note that the percentage of entries maximal in absolute value amounts to 6/11 ≈ 54.5 %, which exactly coincides with the value of the index μ for the matrix M5.
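Since the structure (A.5) gives the entry a six times in every row and each of b–f once, the printed values can be checked against the unit row norm that a normalized orthogonal matrix must have:

```python
a, b, c = 0.34295283, 0.33572291, 0.30893818
d, e, f = 0.2439851, 0.15671878, 0.045364966

# each row of M11: six entries +/-a and one each of +/-b, ..., +/-f
row_norm_sq = 6*a**2 + b**2 + c**2 + d**2 + e**2 + f**2
print(round(row_norm_sq, 4))  # 1.0
```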
A.5 Algorithm for determining optimal matrices

Let us describe, using the case n = 11 as an example, the computer procedure that has been used for searching for all the above-mentioned matrices. Four stages can be distinguished in this procedure.

Stage 1. Calculation of an approximate value for an optimal matrix M. The computing algorithm is constructed on the basis of iterations in which the maximal in absolute value entry a of the matrix is decreased step by step according to the rule a_{k+1} = a_k · k/(k + p), where k is the number of the iteration and p > 0 is a certain number. Since after this step the matrix ceases to be orthogonal, it is orthogonalized again by computing its polar decomposition. Let us recall that the polar decomposition represents a given matrix as the product of an orthogonal matrix and a symmetric one; it is the first of them that is used later on. During orthogonalization the maximal entry increases slightly but, as a rule, not enough to regain its former value. The iteration process converges to a certain orthogonal matrix, after which it is repeated many times, changing the initial matrix and keeping the best of the solutions obtained so far. The process mentioned can be recorded in the form of the following algorithm:

1. A square non-singular matrix is taken as an initial approximation.
2. The matrix is replaced with the orthogonal factor of its polar decomposition.
3. The maximal entry of the matrix is decreased.
4. Return to point 2 until the process converges to a certain matrix.

This algorithm was implemented in the form of a MATLAB program, the text of which is given below:

    function [y,X]=procrust(n);
    % program finds Procrustean matrix with min max(abs(a(:)))
    alpha=1; gam=2; k=10;
    for j=1:10
      A=rand(n);
      if rank(A) ...

The question for n > 11 remains open, just as the issues concerning the number of levels of these matrices for different n.
A.6 Characteristics of optimal matrices

Summing up the searches for orthogonal matrices of order n > 2 with a maximal entry that is minimal in absolute value, it is possible to note the following. There exist several classes of such matrices. Belonging to one or another class depends on the residue of dividing n by 4, i.e. on the number k = n (mod 4). Altogether there are four classes:

If k = 0, i.e. n = 4, 8, 12, 16, 20, 24, 28, . . ., then all these matrices are Hadamard matrices;
If k = 2, i.e. n = 6, 10, 14, 18, 26, 30, . . ., then all these matrices are C-matrices;
If k = 3, i.e. n = 3, 7, 11, 15, 19, 23, 27, . . ., then all these matrices are M-matrices of the 1st type;
If k = 1, i.e. n = 5, 9, 13, 17, 21, 25, 29, . . ., then all these matrices are M-matrices of the 2nd type.

The cases when k = 2 but no C-matrices exist, e.g. when n = 22, 34, 38, 42, 46, require special consideration. These matrices form a separate group of optimal matrices with their own characteristics. In particular, for n = 22 the best orthogonal matrix M22 (found in MATLAB by N. Balonin and G. Balonin) has a six-level structure:
M22 = [a 22 × 22 six-level matrix: each of its rows contains the entry ±a seventeen times and each of the entries ±b, ±c, ±d, ±e, ±f once]
The numerical values of its entries are as follows:

a = 0.226856387755926,   b = 0.222364976237293,   c = 0.177974816441021,
d = 0.157082977829506,   e = 0.120209983291265,   f = 0.0697734099264205.

These values were found by solving the following system of six algebraic equations [158]:

a − f − d = 0,
f² + e² + d² + c² + b² + 17a² = 1,
dc + ca − da − 2fa − 2ea − 2ba + 3a² = 0,
ec + ea + ca + 2fa − 2da − 2ba + a² = 0,
ed − ea − da − 2fa − 2ca − 2ba + 5a² = 0,
fb − fa − ba − 2ea − 2da − 2ca + 5a² = 0.

The system follows from the orthogonality conditions on the matrix rows. The distribution of the absolute values of the matrix M22 entries over the levels is shown in Figure A.5.
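The printed values can be substituted back into the system; all six residuals vanish to within the precision of the decimals given. A sketch of this check:

```python
a = 0.226856387755926
b = 0.222364976237293
c = 0.177974816441021
d = 0.157082977829506
e = 0.120209983291265
f = 0.0697734099264205

# residuals of the six orthogonality equations for M22
residuals = [
    a - f - d,
    f**2 + e**2 + d**2 + c**2 + b**2 + 17*a**2 - 1,
    d*c + c*a - d*a - 2*f*a - 2*e*a - 2*b*a + 3*a**2,
    e*c + e*a + c*a + 2*f*a - 2*d*a - 2*b*a + a**2,
    e*d - e*a - d*a - 2*f*a - 2*c*a - 2*b*a + 5*a**2,
    f*b - f*a - b*a - 2*e*a - 2*d*a - 2*c*a + 5*a**2,
]
print(max(abs(r) for r in residuals))  # close to zero (about 1e-5 or less)
```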
Figure A.5. Distribution of matrix M22 entries by levels.
Figure A.6. Dependence of the value μ√n of the optimal matrix on n.
From this figure it is seen that on the upper level there are 22 × 17 = 374 entries, and each of the remaining levels contains 22 entries. The upper level thus contains approximately 77 % of the total number of entries. The listed classes of optimal matrices differ in the lower bounds of their maximal elements, in the number of levels and in the structure of the matrices. Some idea of the lower boundary for the index μ (the value of the maximal entry of optimal matrices) can be obtained from Figure A.6. In this figure the dependence of the entry μ, maximal with respect to its absolute value, of the optimal matrix, multiplied by √n, on the dimension n is shown for 2 ≤ n ≤ 26. The points which lie on the level 1 refer to the Hadamard matrices; the points located slightly higher refer to the C-matrices. The highest positions are occupied by the points for odd values of n. Evidently, with increasing n all points will lie below a certain level, and one of the tasks to be solved is the evaluation of its value. The diagram of Figure A.6 is plotted on the basis of Table A.4, which contains all available results related to the optimal matrices providing the solution for n ≤ 30. Table A.4 contains four columns: in the first column the matrix dimension is indicated, in the second the value of the index μ is shown, in the third the number of matrix levels is given, and in the fourth the type of matrix is indicated. A question mark is used to show the cases for which no solution has been found or the optimality of the data is dubious. The matrices found in MATLAB with the help of the iterative algorithm are marked as MTL-matrices.
Table A.4. μ = max_{i,j} |a_ij|.

 n  | μ                      | Number of levels | Type of matrix
----+------------------------+------------------+---------------------------
  2 | 1/√2 = 0.707           | 1                | Hadamard matrix
  3 | 2/3                    | 2                | M-matrix
  4 | 1/2                    | 1                | Hadamard matrix
  5 | 6/11 = 0.545454        | 3                | M-matrix
  6 | 1/√5 = 0.447           | 2                | C-matrix
  7 | (7√7 + 5)/53 = 0.4438  | 5                | M-matrix
  8 | 1/√8 = 0.354           | 1                | Hadamard matrix
  9 | (3 + √3)/12 = 0.3943   | 4                | M-matrix
 10 | 1/3                    | 2                | C-matrix
 11 | 0.3431                 | 6                | M-matrix
 12 | 1/√12 = 0.289          | 1                | Hadamard matrix
 13 | 0.3100 ?               | ?                | MTL-matrix
 14 | 1/√13 = 0.2774         | 2                | C-matrix
 15 | 0.2890 ?               | ?                | MTL-matrix
 16 | 1/4 = 0.25             | 1                | Hadamard matrix
 17 | 0.2754 ?               | ?                | MTL-matrix
 18 | 1/√17 = 0.2425         | 2                | C-matrix
 19 | 0.2522 ?               | ?                | MTL-matrix
 20 | 1/√20 = 0.2236         | 1                | Hadamard matrix
 21 | 0.2439 ?               | ?                | MTL-matrix
 22 | 0.2269                 | 6                | MTL-matrix (no C-matrix)
 23 | 0.235 ?                | ?                | MTL-matrix
 24 | 1/√24 = 0.2041         | 1                | Hadamard matrix
 25 | 0.226 ?                | ?                | A matrix is unknown
 26 | 0.2                    | 2                | C-matrix
 27 | ?                      | ?                | A matrix is unknown
 28 | 1/√28 = 0.1890         | 1                | Hadamard matrix
 29 | ?                      | ?                | A matrix is unknown
 30 | 1/√29 = 0.1857         | 2                | C-matrix
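The diagram in Figure A.6 follows directly from the table: for the Hadamard matrices μ = 1/√n, so μ√n = 1 exactly, while for the C-matrices listed there μ = 1/√(n − 1), so μ√n = √(n/(n − 1)) lies slightly above 1 and tends to 1 as n grows. A sketch of this computation:

```python
import math

def mu_times_sqrt_n(n, kind):
    # index values implied by Table A.4 for the two regular families
    if kind == "Hadamard":       # n = 0 (mod 4): mu = 1/sqrt(n)
        mu = 1 / math.sqrt(n)
    elif kind == "C":            # n = 2 (mod 4): mu = 1/sqrt(n - 1)
        mu = 1 / math.sqrt(n - 1)
    else:
        raise ValueError(kind)
    return mu * math.sqrt(n)

print([round(mu_times_sqrt_n(n, "Hadamard"), 3) for n in (4, 8, 12, 16)])  # all 1.0
print([round(mu_times_sqrt_n(n, "C"), 3) for n in (6, 10, 14, 18)])
# [1.095, 1.054, 1.038, 1.029] -- slightly above 1, decreasing toward 1
```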
Bibliography [1] Ageyev D. V., Yurlov F. F.: Dvuhelementnaya gruppovaya sistema radioveschaniya. Izvestiya VUZov SSSR, seriya ‘‘Radioelectronica’’, 1969, t. 12, No. 7. S. 712–716 (in Russian). [2] Anderson B.D.O., Moore J. B.: Optimal filtering. N.-J.: Englewood Cliffs, 1979. [3] Andrews C. A., Davies J. M., Schwarz G. R.: Adaptive data compression. Proceedings of the IEEE. 1967, V. 55, Issue 3 – pp. 267–277. [4] Andrews H. C.: Computer Techniques in Image Processing. Academic Press: 1970. – 187 p. [5] Arsenin V. Ya., Ivanov V. V.: Vosstanovleniye formi signala, svobodnoy ot iskajeniy, obuslovlennih apparaturoy i kanalom peredachi. Izmeritel’naya tehnika, 1975, No. 12. S. 25–37 (in Russian). [6] Atahanov R. M., Lebedev D. S., Yaroslavskiy L. P.: Podavlenie impul’snih pomeh v televizionnom priyemnom ustroystve. Tehnika kino i televideniya, 1971, No. 7. S. 55– 57 (in Russian). [7] Axenov V. A., Viches A. I., Gitlitc M. B.: Tochnay magnitnaya zapis’. Moscow: Energiya, 1973, 280 s. (in Russian). [8] Babanov Yu. N.: Povishenie pomehoustoychivosti priema posredstvom rastiagivaniya impul’snih pomeh vo vremeni. Izvestiya VUZov SSSR, seriya ‘‘Radiotehnika’’, 1959, t. 2, No. 2. S. 234–238 (in Russian). [9] Balalaev V. A., Slaev V. A., Siniakov A. I.: Potencial’naya tochnost’ izmereniy. Pod red. V. A. Slaeva – Sankt-Peterburg: Professional, 2005. 104 s. ISBN 5-98371-026-5 (in Russian). [10] Baskakov S. I.: Radiotehnicheskie cepi i signali. – Moscow: Vischaya shkola, 2000. – 462 s. (in Russian). [11] Bellman R.: Introduction to Matrix Analysis. 2nd edition, McGraw-Hill, 1970. [12] Bendat J. S.: Principles and Applications of Random Noise Theory. New York, John Wiley & Sons; 1958. [13] Bendat J. S., Piersol A. G.: Random Data: Analysis and Measurement Procedures. Edition Number 4. 2010. 604 p. [14] Bomshteyn B. D., Kiselev L. K., Morgachev E. T.: Metodi bor’bi s pomehami v kanalah provodnoy sviazi. – Moscow: Sviaz’, 1975. 248 s. (in Russian). [15] Burger W., Burge M. 
J.: Digital Image Processing: An Algorithmic Approach Using Java. Springer. 2007. ISBN 1-846-28379-5, ISBN 3-540-30940-3. http://www.imagingbook.com/. [16] Buriakov A. P.: Vraschatel’noye preobrazovanie signalov dlia bor’bi s impul’snimi pomehami v sistemah sviazi. Trudi LIAP. Leningrad, 1966, vip. 48. S. 277–286 (in Russian).
[17] Cibakov B. S.: Lineynoe kodirovanie izobrajeniy. Radiotehnika i elektronika, 1962, t. 7, No. 3. – S. 375–385 (in Russian). [18] Contemporary Design Theory: A Collection of Essays, of Dinitz J. H., Stinson D. R., editors, Wiley, New York. 1992 (Chapter 7. Orthogonal Arrays by Hedayat, Sloane and Stufken). [19] Costas J. P.: Coding with linear systems. PIRE, 1952, V. 40, No. 9. pp. 1101–1103. [20] Crowther W. R, Rader C. M.: Efficient coding of vocoder channel signals using linear transformation. Proceedings of the IEEE. Issue Date: Nov. 1966. Volume 54. Issue 11. – pp. 1594–1595. [21] D’yakonov V. P., Abramenkova I.: MATLAB. Obrabotka signalov i izobrajeniy. SanktPeterburg: Piter, 2002 (in Russian). Davis T. A. MATLAB Primer, 8th edition, CRC Press, 2011. 232 p. [22] Fal’kovich S. E.: Ocenka parametrov signala. Moscow: Sovetskoe radio, 1970. – 336 s. (in Russian). [23] Fisher R., Dawson-Howe K., Fitzgibbon A., Robertson C., Trucco E.: Dictionary of Computer Vision and Image Processing. John Wiley. 2005. ISBN 0-470-01526-8. [24] Fomin A. F.: Pomehoustoychivost’ system peredachi neprerivnih soobscheniy. Moscow: Sovetskoe radio, 1975. – 352 s. (in Russian). [25] Franks L. E.: Signal Theory, Englewood Cliffs, New Jersey, Prentice-Hall. 1969. [26] Funkcii s dvoynoy ortogonal’nost’yu v radioelektronike i optike. Perevod i nauchnaya obrabotka M. K. Razmahnina, V. P. Yakovleva. Moscow: Sovetskoe radio, 1971. – 256 s. (in Russian). [27] Gabor D.: Theory of Communication. J. IEEE, 1946, Pt. 3, T. 98. pp. 429–457. [28] Gantmacher F.: Matrix Theory. New York: Chelsea Publishing: 1959. [29] Gantmacher F. R., Kreyn M. G.: Osciliacionnie matrici, yadra i malie kolebaniya mehanicheskih sistem. Moscow – Leningrad: Fismatgiz. 1950. 360 s. (in Russian). [30] Gil’bo E. P., Chelpanov I. B.: Obrabotka signalov na osnove uporiadochennogo vibora (majoritarnoe i blizkoe k nemu preobrazovaniya). Moscow: Sovetskoe radio, 1976. 344 s. (in Russian). 
[31] Glover K.: All optimal Hankel-norm approximations of linear multivariable systems and their L 1 -error bounds. Int. J. Control, 1984, V. 39, No. 6. pp. 1115–1193. [32] Gol’dberg A. P.: Harakteristiki sistem podavleniya impul’snih pomeh. Elektrosviaz’, 1966, No. 2. S. 31–42 (in Russian). [33] Golomb S. W., ed.: Digital Communications with Space Applications. Prentice-Hall, Englewood Cliffs, NJ: 1964. [34] Golub G. H., van Loan C. F.: Matrix computations, 3rd edition, John Hopkins University Press. 308 p. [35] Goncharov A. V., Lazarev V. I., Parhomenko V. I., Shteyn A. B.: Tehnika magnitnoy videozapisi. Moscow: Energiya, 1970. – 328 s. (in Russian).
[36] Goodman: Redundancy removal using binary linear transformation. IEEE Trans., V. 55, No. 3. [37] Haar A.: Zur Theorie der Orthogonalen Funktionen System. Inaugural Dissertation, Math. Ann., 1955, No. 5, pp. 17–31. [38] Hadamard J.: Resolution d’une question relative aux determinants. Bull. Sci. Math., 1893, Ser. 2, V. 17, Pt. 1. pp. 240–246. [39] Hamming R.: Coding and Information Theory. Prentice Hall: 1980, 2nd edition. – 1986. [40] Harkevich A. A.: Bor’ba s pomehami. Moscow: Nauka, 1965. – 276 s. (in Russian). [41] Harkevich A. A.: Spektri i analiz. Moscow: GIFML, 1962. – 236 s. (in Russian). [42] Harmuth H. F.: A generalized concept of frequency and some applications. IEEE Trans., 1968, V. IT – 14. pp. 375–382. [43] Harmuth H. F.: Transmission of Information by Orthogonal Functions. Berlin/ Heidelberg/New York. 1972. Springer-Verlag. – 393 p. [44] Hlopotin V. S.: Ob odnom sposobe kodirovaniya informacii dlia zapisi na magnitnuyu lentu. Voprosi radioelektroniki, seriya ‘‘Elektronno-vichislitel’naya tehnika’’, 1969, vip. 3. – S. 38–43 (in Russian). [45] Hromov L. I., Resin V. I.: Informacionniy raschet lineynih prediskajayuschih ustroystv v televidenii. Radiotehnika, 1965, t. 20, No. 2. – S. 41–44 (In Russian). [46] Hurgin Ya. I., Yakovlev V. P.: Finitniye funkcii v fizike i tehnike. Moscow: Nauka, 1971. – 408 s. (in Russian). [47] Iohvidov I. O.: Gankelevi i teplicevi matrici i formi. Moscow: Nauka, 1974. 264 s. (in Russian). [48] Ja¨hne B.: Digital Image Processing. Springer. 2002. ISBN 3-540-67754-2. [49] Jeleznov N. A.: Nekotoriye voprosi teorii informacionnih elektricheskih sistem. Leningrad, LKVVIA im. A. F. Mjayskogo, 1960. – 156 s. (in Russian). [50] Kallman H. E.: Transversal filters. PIRE, 1940, V. 38. pp. 302–311. ber Linearen Methoden in der Wahrscheinlichjeitsrechnung. Ann. [51] Karhunen K.: U Acad. Sci. Fennical, Ser. A, 1946, V. 1, No. 2. [52] Kavalerov G. I., Mandel’shtam S. M.: Vvedenie v informacionnuyu teoriyu izmereniy. 
Moscow: Energiya, 1974. – 376 s. (in Russian). [53] Klovskiy D. D.: Teoriya peredachi signalov. Moscow: Sviaz’, 1973. 376 s. (in Russian). [54] Kl’uev N. I.: Informacionniye osnovi peredachi soobscheniy. Moscow: Sovetskoe radio, 1966. 360 s. (in Russian). [55] Kolmogorov A. N., Fomin S. V.: Elementi teorii funkciy i funkcional’nogo analiza. Moscow: Nauka, 1972. 496 s. (in Russian). [56] Kotel’nikov V. A.: Teoriya potencial’noy pomehoustoychivosti. Moscow: Gosenergoizdat: 1956. – 151 s. (in Russian). [57] Kramer H. P.: The covariance matrix of vocoder speech. Proceedings of the IEEE. Issue Date: March 1967, Volume 55. Issue 3. pp. 439–440.
[58] Kramer H. P., Mathews M. V.: A linear coding for transmitting a set of correlated signals. IRE Trans., 1956, V. IT–2. pp. 41–46. [59] Lang G. R.: Rotational transformation of signals. IEEE Trans., 1963, V. IT – 9, No. 3. pp. 191–198. [60] Lebedev D. S.: Lineyniye dvumerniye preobrazovaniya izobrajeniy, uvelichivayuschie pomehoustoychivost’ peredachi. V sb. ‘‘Ikonika’’. Moscow: Nauka, 1968. S. 15–7 (in Russian). [61] Lectures on Communication System Theory, ed. by E. Baghdady. McGraw-Hill: New York, 1961. [62] Lee R.: Optimal Estimation, Identification and Control. Publisher: The MIT Press: 1964. – 152 p. [63] Leith E. N., Upatnieks J.: Reconstructed wavefronts and communication theory. J. of the Optical Society of America, 1962, V. 52, No. 10. pp. 1123. [64] Levenshteyn V. I.: Primenenie matric Adamara k odnoy zadache kodirovaniya. Problemi kibernetiki, 1961, vip. 5. S. 123–136 (in Russian). [65] Linnik Yu. V.: Metod naimen’shih kvadratov i osnovi teorii obrabotki nabl’udeniy. Moscow: Fizmatgiz, 1958. – 334 s. (in Russian). [66] Loeve M.: Probability theory I, 4 ed., Springer: 1977. – 438 p. Loeve M.: Probability theory II, 4 ed., Springer: 1978 – 427 p. [67] Manovcev A. P.: Osnovi teorii radiotelemetrii. Moscow: Energiya: 1973. – 592 s. (in Russian). [68] Marcus M., Minc H.: Survey of matrix theory and matrix inequalities. Prindle, Weber & Schmidt: 1964 Dover, 1992. – 198 p. [69] Marigodov V. K.: Effektivnost’ prediskajeniy pri additivnih pomehah s proizvol’nim energeticheskim spektrom. Izvestiya VUZov SSSR, seriya ‘‘Radiotehnika’’, 1969, t. 12, No. 7. S. 746–749 (in Russian). [70] Marigodov V. K.: K voprosu o pomehozaschischennosti AM i ChM signalov pri nalichii prediskajeniy. Radiotehnika, 1970, t. 25, No. 1. S. 21–24 (in Russian). [71] Marigodov V. K.: O pomehoustoychivosti metoda prediskajeniy neprerivnih soobscheniy. Izvestiya VUZov SSSR, seriya ‘‘Radioelektronika’’, 1971, t. 14, No. 8. S. 875–880 (in Russian). [72] Marigodov V. 
K.: Ob umen’shenii veroyatnosti oshibki v kanalah s ochen’ glubokimi zamiraniyami. Resp. mejved. sb. ‘‘Otbor i peredacha informacii’’. Kiev, Naukova dumka, 1973, vip. 34. S. 56–59 (in Russian). [73] Marigodov V. K.: Optimal’niye prediskajeniya radioimpul’snih signalov. Resp. mejved. sb. ‘‘Otbor i peredacha informacii’’. Kiev, Naukova dumka, 1973, vip. 34. S. 53–56 (in Russian). [74] Medianik A. I.: O postroenii matric Adamara. Matem. fizika, analiz, geometriya, 1995, t. 2, No. 1. – S. 87–93 (in Russian). [75] Medianik A. I.: Vpisanniy v kub pravil’niy simpleks i matrici Adamara polucirkuliantnogo tipa. Matem. fizika, analiz, geometriya, 1997, t. 4, No. 4. – S. 458–471 (in Russian).
[76] Metodi komp’yuternoy obrabotki izobrajeniy. Pod red. V. A. Soyfera. Moscow: Fizmatlit. 2001 – 780 s. (in Russian). [77] Mironovskii L. A., Slaev V. A.: Optimal Chebishev preemphasis and filtering. Measurement Techniques, V. 45. No. 2, 2002. – pp. 126–136. [78] Mironovskii L. A., Slaev V. A.: The strip method of noise-immune image transformation. Measurement Techniques, V. 49. No. 8, 2006. – pp. 745–754. [79] Mironovskii L. A., Slaev V. A.: The strip method of transforming signals containing redundancy. Measurement Techniques, V. 49. No. 7, 2006. – pp. 631–638. [80] Mironovsky L. A.: Functional diagnosis of dynamic systems – A survey. Automation and Remote Control, 1980, V. 41, pp. 1122–1143. [81] Mironovsky L. A.: Funkcional’noe diagnostirovaniye dinamicheskih sistem. Moscow: MGU: 1998. – 304 s. ISBN 5-88141-040-8 (in Russian). [82] Mironovsky L. A., Slaev V. A.: Avt. svid. No. 572936, MKI H04B 1/10. Priyemnik analogovih signalov. No. 1968655/09; zayavleno 10.10.73; opubl. 15.09.77. – UDK 621.396:621.5(088.8) Otkritiya. Izobreteniya. 1977, No. 34. – S. 158–159 (in Russian). [83] Mironovsky L. A., Slaev V. A.: Avt. svid. No. 355648, MKI G11B 5/02. Sposob magnitnoy zapici i vosproizvedeniya analogovogo signala. No. 1383642/18-10, zayavleno 10.12.69; opubl. 16.10.72. – UDK 681.327.63(088.8) Otkritiya. Izobreteniya. 1972, No. 31. – S. 173 (in Russian). [84] Mironovsky L. A., Slaev V. A.: Avt. svid. No. 432684, MKI H041 1/00. Ustroystvo dlia lineynogo prediskajeniya signala. No. 1762858/26-9; zayavleno 27.03.72; opubl. 15.06.74. – UDK 621.394.5(088.8) Otkritiya. Izobreteniya. 1974, No. 22. – S. 186– 187 (in Russian). [85] Mironovsky L. A., Slaev V. A.: Avt. svid. No. 433526, MKI G08c 19/16. Ustroystvo dlia peredachi analogovogo signala. No. 1819586/18-24; zayavleno 11.08.72; opubl. 25.06.74. – UDK 621.398:654.94(088.8) Otkritiya. Izobreteniya. 1974, No. 23. – S. 142 (in Russian). [86] Mironovsky L. A., Slaev V. A.: Avt. svid. No. 604161, MKI H04B 3/04. 
Ustroystvo dlia peredachi analogovogo signala. No. 2194423/09; zayavleno 26.11.75; opubl. 25.04.78. – UDK 621.391.15:621.397(088.8) Otkritiya. Izobreteniya. 1978, No. 15. – S. 207–208 (in Russian). [87] Mironovsky L. A., Slaev V. A.: Avt. svid. No. 932637, MKI H04l 3/02. Ustroystvo dlia podavleniya impul’snih pomeh v signale s informacionnoy izbitochnoct’yu. No. 2962781/18-09; zayavleno 16.07.80; opubl. 30.05.82. – UDK 621.391.837: 681.3(088.8) Otkritiya. Izobreteniya. 1982, No. 20. – S. 293 (in Russian). [88] Mironovsky L. A., Slaev V. A.: Avt. svid. No. 349113, MKI H04l 3/02. Ustroystvo dlia polucheniya signala s postoyannoy dispersiyey. No. 1474793/26-9; zayavleno 24.09.70; opubl. 23.08.72. – UDK 621. 373.43(088.8) Otkritiya. Izobreteniya. 1972, No. 25. – S. 204 (in Russian). [89] Mironovsky L. A., Slaev V. A.: Avt. svid. No. 476677, MKI HO3k 13/02. Ustroystvo dlia preobrazovaniya analogovogo signala. No. 1860752/26-9; zayavleno 19.12.72; opubl. 05.07.75 – UDK 621.376.56(088.8) Otkritiya. Izobreteniya. 1975, No. 25. – S. 161 (in Russian).
[90] Mironovsky L. A., Slaev V. A.: Avt. svid. No. 477549, MKI H04l 3/02. Ustroystvo dlia preobrazovaniya signala. No. 1906139/26-21; zayavleno 12.04.73; opubl. 15.07.75. – UDK 681.3.055(088.8) Otkritiya. Izobreteniya. 1975, No. 26. – S. 149 (in Russian). [91] Mironovsky L. A., Slaev V. A.: Avt. svid. No. 339922, MKI G06k 7/00. Ustroystvo dlia zapisi i vosproizvedeniya informacii na lentochnom nositele. No. 1384136/18-24; zayavleno 10.12.69; opubl. 24.05.72. – UDK 681.327.11(088.8) Otkritiya. Izobreteniya. 1972, No. 17. – S. 158 (in Russian). [92] Mironovsky L. A., Slaev V. A.: Avt. svid. No. 417837, MKI G11B 5/86. Zapominayuscheye ustroystvo. No. 1474801/18-24; zayavleno 24.09.70; opubl. 28.02.74 – UDK 681.327(088.8) Otkritiya. Izobreteniya. 1974, No. 8. – S. 149 (in Russian). [93] Mironovsky L. A., Slaev V. A., Finojenok G. A.: Avt. svid. No. 1080715. Ustroystvo dlia peredachi analogovogo signala. No. 3509222; zayavleno 03.11.82; opubl. 15.11.83 Otkritiya. Izobreteniya. 1984, No. 10. – S. 208 (in Russian). [94] Mironovsky L. A., Slaev V. A.: Invariants in metrology and technical diagnostics. Measurement Techniques, V. 39. No. 6, 1996. – pp. 577–593. [95] Mironovsky L. A., Slaev V. A.: Metod lineynih prediskajeniy s izbitochnost’yu dlia korrekcii pomeh. Trudi metrologicheskih institutov SSSR ‘‘Issledovaniya v oblasti teorii i tehniki izmeritel’nih system’’. – Leningrad, 1975, vip. 170 (230). – S. 31–34 (in Russian). [96] Mironovsky L. A., Slaev V. A.: Optimal’naya fil’traciya po minimaksnomu kriteriyu. Trudi mejdunarodnoy NTK ‘‘Aktual’niye problemi elektronnogo priborostroeniya’’. Novosibirsk, 1992, t. 6, Ch. 2. – S. 9–14 (in Russian). [97] Mironovsky L. A., Slaev V. A.: Sposob pomehoustoychivogo hraneniya i peredachi analogovoy informacii. Dokladi IV Simpoziuma po probleme izbitochnosti v informacionnih sistemah. Leningrad, 1970, Ch. II. – S. 689–697 (in Russian). [98] Mironovsky L. A., Slaev V. A.: Umen’sheniye impul’snih pomeh v analogovih signalah. 
Avtometriya, 1973, No. 6. – S. 49–54 (in Russian). [99] Mironowsky L. A., Slayev V. A.: Equalization of the Variance of a Nonstationary Signal. Telecommunications and Radio Engineering, 1975, V. 29 – 30, No. 5. [100] Mironovsky L. A., Slayev V. A.: Technical Diagnostics of Dynamic Systems on the Basis of Algebraic Invariants. Proceedings of the III-rd Symposium of the IMEKO, TC on Technical Diagnostics, IMEKO Secretariat, Budapest, 1983. pp. 243–251. [101] Morris T.: Computer Vision and Image Processing. Palgrave Macmillan. 2004. ISBN 0-333-99451-5. [102] Nachtegael M., Van Der Weken D., Kerre E. E.: Soft computing in image processing: recent advances. Springer, 2007. 500 p. [103] Novickiy P. V.: Osnovi informacionnoy teorii izmeritel’nih ustroystv. Leningrad, Energiya: 1968. – 248 s. (in Russian). [104] Ol’hovskiy Yu. B., Novoselov O. N., Manovcev A. P.: Sjatie dannih pri teleizmereniyah. Moscow: Sovetskoe radio, 1971. – 304 s. (in Russian). [105] Ott G.: Noise Reduction Techniques in Electronic Systems. Wiley-Interscience: 1976. [106] Palermo G. J., Palermo R. V., Horwitz H.: The use of data omission for redundancy removal. Rec. Int. Space electronics and telemetry Symp., 1965. P. (11) D1–D16.
[107] Paley R.E.A.C.: On orthogonal matrices. J. Math. Phys., 1933, V. 12. pp. 311–320. [108] Pierce W. H.: Linear – real codes and coders. Bell Syst. Techn. J., 1968, 47, No. 6. pp. 1067–1097. [109] Pierce W.: Linear-real coding. IEEE. Internat. Conv. Rec., 1966, pt. 7, Vol. 14. – pp. 44–53. [110] Polya G., Szego G.: Isoperimetric inequalities in mathematical physics. Princeton University Press: 1951. – 279 p. [111] Ponsen: Ispol’zovaniye preobrazovaniya Adamara dlia kodirovaniya i sjatiya signalov izobrajeniya. Zarubejnaya radioelektronika, 1972, No. 3. – S. 30–56 (in Russian). [112] Pratt W. K.: Digital Image Processing. (Fourth Edition) Wiley. 2007. – 807 p. [113] Pratt W. K., Andrews H. C.: Application of Fourier-Hadamard Transformation to Bandwidth Compression. In: ‘‘Picture Bandwidth Compression’’, 1972. pp. 515–554. [114] Rademacher H.: Einige Sa¨tze von allgemeinen Orthogonal Functionen. Math. Ann., 1922, V. 87. pp. 122–138. [115] Rao K. R., Narasimhan M. A., Revuluri K.: Image Data Processing by Hadamard – Haar Transforms. IEEE Trans. Computers, C – 23, 9, 1975. pp. 888–896. [116] Rozi A. M.: Teoriya informacii v sviazi. Moscow: Energiya, 1971. – 184 s. (in Russian). [117] Sage E., Mels J.: Theory of Estimation and its Applications in Communications and Control. New York: McGraw Hill. 1971. [118] Schoenberg I. J.: An isoperimetric inequality for dozed curves convex in even – dimensional Euclidean spaces. Acta Math., 1954, 91. pp. 143–164. [119] Seliakov I. S.: Analiz i komp’yuternoe modelirovanie STRIP-preobrazovaniya izobrajeniy. (Diss. na soisk. uchen. stepeni magistra) Sankt-Peterburg: GUAP. 2005 (in Russian). [120] Shirman Ya. D.: Razreshenie i sjatie signalov. Moscow: Sovetskoe radio, 1974. – 360 s. (in Russian). [121] Shirokov S. M., Grigor’yev I. V.: Metod podavleniya impul’snih pomeh pri obrabotke signalov i izobrajeniy s ispol’zovaniem nelineynih fazovih fil’trov. SGAU, jurnal ‘‘Komp’yuternaya optika’’, No. 16. 1996 (in Russian). [122] Shteyn V. 
M.: O raschete lineynih prediskajayuschih i korrektiruyuschih ustroystv. Radiotehnika, 1956, t. 11, No. 1. – S. 60–63 (in Russian). [123] Siebert W.: Circuits, Signals and Systems. MIT Press: Cambridge, Massachusetts, London, England, 1986. [124] Sinay Ya. G.: Naimen’shaya oshibka i nailuchshiy sposob peredachi stacionarnih soobscheniy pri lineynom kodirovanii i dekodirovanii v sluchae Gaussovskih kanalov sviazi. Problemi peredachi informacii. Moscow: Izd. AN SSSR, 1959, vip. 2. – S. 40– 48 (in Russian). [125] Slaev V. A.: Metrologicheskoe obespechenie apparaturi magnitnoy zapisi: Nauchnoe izdanie. – Sankt-Peterburg: Mir i Sem’ya, 2004. – 174 s. ISBN 5-94365-060-1 (in Russian). [126] Slaev V. A.: Razrabotka sredstv izmereniy dlia ocenki metrologicheskih harakteristik apparaturi magnitnoy zapisi i issledovanie metodov povisheniya ee tochnosti. Leningrad, VNIIM, 1973. 201 s. (Diss. na soisk. uchen. stepeni kand. tehn. nauk) (in Russian).
Bibliography
[127] Slaev V. A., Mironovsky L. A., Ignat'yev M. B., Civirko G. P.: Avt. svid. No. 598255, MKI H04B 1/10. Ustroystvo dlia selekcii signalov s izbitochnost'yu. No. 2348898/18-09; zayavleno 15.04.76; opubl. 15.03.78. – UDK 621.391.837: 621.397(088.8). Otkritiya. Izobreteniya, 1978, No. 10. – S. 211 (in Russian).
[128] Slaev V. A., Naletov V. V.: Ispol'zovanie vvedeniya izbitochnosti dlia povisheniya ustoychivosti sistem peredachi informacii k impul'snim pomeham. Dokladi VIII Simpoziuma po probleme izbitochnosti v informacionnih sistemah. Leningrad, 1983, Ch. 3. – S. 161–164 (in Russian).
[129] Sonka M., Hlavac V., Boyle R.: Image Processing, Analysis, and Machine Vision. PWS Publishing, 1999. ISBN 0-534-95393-X.
[130] Sorokin V. N.: Raspoznavanie rechi pri pomoschi analiza ee izobrajeniya. Izvestiya AN SSSR. Tehnicheskaya kibernetika, 1966, No. 5. – S. 93–98 (in Russian).
[131] Soyfer V. A.: Komp'yuternaya obrabotka izobrajeniy. Sorosovskiy obrazovatel'niy jurnal, No. 2, 1996 (in Russian).
[132] Spravochnik po tehnike magnitnoy zapisi. Pod red. O. V. Porickogo, E. N. Travnikova. Kiev: Tehnika, 1981. – 319 s. (in Russian).
[133] Stahov A. P., Lihtcinder B. Ya., Orlovich Yu. P., Storojuk Yu. A.: Kodirovanie dannih v informacionno-izmeritel'nih sistemah. Kiev: Tehnika, 1985. – 127 s. (in Russian).
[134] Statistika oshibok pri peredache cifrovoy informacii. Sbornik perevodov pod red. S. I. Samoylenko. Moscow: Mir, 1966. – 304 s. (in Russian).
[135] Stinson D.: Combinatorial Designs. Constructions and Analyses. Springer, 2004.
[136] Suslonov S. A.: Sintez signalov s fazoamplitudnimi prediskajeniyami. Izvestiya VUZov SSSR, seriya "Radioelektronika", 1971, t. 14, No. 8. – S. 881–888 (in Russian).
[137] Teoriya informacii i ee prilojeniya (sbornik perevodov). Pod red. A. A. Harkevicha. Moscow: GIFML, 1956. – 328 s. (in Russian).
[138] Teoriya peredachi elektricheskih signalov pri nalichii pomeh. Sb. perevodov. Pod red. N. A. Jeleznova. Moscow: IIL, 1953. – 288 s. (in Russian).
[139] Tihonov V. I.: Statisticheskaya radiotehnika. Moscow: Sovetskoe radio, 1966. – 678 s. (in Russian).
[140] Toeplitz matrices, translation kernels and a related problem in probability theory. Duke Math. J., 1954, 21, pp. 501–509.
[141] Totty R. E., Clark G. C.: Reconstruction error in waveform transmission. IEEE Trans., 1967, IT, pp. 333–338.
[142] Trahtman A. M.: Vvedenie v obobschennuyu spektral'nuyu teoriyu signalov. Moscow: Sovetskoe radio, 1972. – 352 s. (in Russian).
[143] Upravlenie vichislitel'nimi processami. Pod red. M. B. Ignat'yeva. Leningrad: LGU, 1973. – 298 s. (in Russian).
[144] Van Trees H. L.: Detection, Estimation, and Modulation Theory. Part I. New York: Wiley, 1969; Part II. New York: Wiley, 1971.
[145] Varakin L. E.: Teoriya sistem signalov. Moscow: Sovetskoe radio, 1978. – 304 s. (in Russian).
[146] Vasilenko G. I.: Teoriya vosstanovleniya signalov. Moscow: Sovetskoe radio, 1979. – 272 s. (in Russian).
[147] Vaynshteyn G. G.: Ocenka effektivnosti lineynogo prediskajeniya pri peredache koordinatnih signalov. V sb. "Ikonika". Moscow: Nauka, 1968. – S. 8–14 (in Russian).
[148] Viches A. I., Smirnov V. A.: Issledovanie vliyaniya nestabil'nosti kontakta na parametri vihodnogo signala pri magnitnoy zapisi s VCh podmagnichivaniem. Radiotehnika, 1977, No. 1. – S. 70–76 (in Russian).
[149] Viterbi A. J.: Principles of Coherent Communication. McGraw-Hill, 1966.
[150] Vittih V. A.: Sjatiye mnogomernih signalov s ispol'zovaniem ih funkcional'nih sviazey. Trudi UPI, Ul'ianovsk, Radiotehnika, 1972, t. 8, vip. 3. – S. 419–424 (in Russian).
[151] Vorobel' R. A., Juravel' I. M.: Povishenie kontrasta izobrajeniy s pomosch'yu modificirovannogo metoda kusochnogo rastiajeniya. Otbor i obrabotka informacii, No. 14 (90), 2000. – S. 116–121 (in Russian).
[152] Walsh J. L.: A closed set of normal orthogonal functions. Am. J. Math., 1923, V. 45, pp. 5–24.
[153] Williamson J.: Hadamard's Determinant Theorem and the sum of four squares. Duke Math. J., 1944, 11, pp. 65–81.
[154] Yudovich S. V.: Pomehoustoychivaya komp'yuternaya obrabotka signalov i izobrajeniy metodom Chebishevskoy fil'tracii s ispol'zovaniem preobrazovaniya Adamara. Sankt-Peterburg: GUAP, 2001. – 64 s. (Diss. na soisk. uchen. stepeni magistra) (in Russian).
[155] Zadeh L. A., Ragazzini J. R.: Optimum filters for the detection of signals in noise. PIRE, 1952, V. 40, pp. 1223–1231.
[156] Zlotnikov S. A., Marigodov V. K.: Ocenka tochnosti approksimacii i optimizacii harakteristik prediskajeniy. Trudi uchebnih institutov sviazi, 1968, No. 344. – S. 90–96 (in Russian).
[157] Z'uko A. G.: Pomehoustoychivost' i effektivnost' sistem sviazi. Moscow: Sviaz', 1972. – 360 s. (in Russian).
[158] Shintyakov D. V.: Algorithm for finding Hadamard matrices of odd order. Science Session GUAP: Trans., Part 2, Technical Science. SPb: GUAP, 2006. – S. 207–211 (in Russian).
[159] Balonin N. A., Mironovsky L. A.: Hadamard matrices of odd order. Informacionno-upravliajuschie sistemy, 2006, No. 3. – S. 46–50 (in Russian).
[160] Arasu K. T., Chen Y. Q., Pott A.: Hadamard and Conference Matrices. Journal of Algebraic Combinatorics, 14 (2001), pp. 103–117.
Index
algorithm
– for determining optimal matrices, 143, 145
amplitude-frequency characteristic (AFC), 9
automatic gain control (AGC), 7
boundary
– problem, 22
– conditions, 24, 26
breaks, 23, 25, 26, 39
channel, V, VI, 1, 5, 6, 9–14, 16, 17, 20–22, 27, 36, 42, 44, 63, 64, 74, 76–79, 81–83, 85, 92, 93, 97, 100, 103, 114, 116, 118, 120, 121, 124–127
characteristic
– matching of signal and channel, 5
counter, 123, 124
criterion, 5, 6, 33–39, 43, 62, 127
– functional space metric, 6, 7
– maximum of deviation module (Chebyshev), VI, 7, 11, 41, 42, 45–47, 50, 53, 55, 62, 63, 75, 127, 128
– mean deviation module, 7, 75
– probability, 6
– Bayes, 6
– Neyman-Pearson, 6
– root-mean-square, VI, 7, 32, 45, 62, 63, 75
– signal to noise ratio, 7, 42
data transmission systems (DTS), 1
decoder, 9, 123, 124
defragmentation, V, 82
delay line, 114, 118–126
divider, 121, 123, 124
efficiency
– potential, 41
eigenvalue, 32, 101, 104, 110, 133, 134
entry, 34, 56, 85, 86, 135, 136, 138, 139, 141–145, 147, 150
expansion
– Fourier, 39
– Fourier-Walsh, 40
– Karhunen-Loève, 32, 36
filter, filtering, VI, 9–11, 43
– Chebyshev, 44, 46, 47, 49
– Kalman, 41, 42
– optimal, 50–58
– Wiener, 41, 42
fragmentation, V, 11, 34, 53, 82, 84
gate, 120, 122–126
gluing operator, 77
image
– processing, VI, 87
– transformation, 11, 79–97, 129
information
– amount of, 32, 76
– capacity, 32–34, 36, 40, 127
– compression, 7, 32
– decorrelation, 5, 32
– measurement system (IMS), 1
informative ability
– equalization, 27, 32–36
integrator, 124, 125
magnetic head, 9, 74, 114–118
magnetic recording instruments (MRI), 1
MAPLE, 108, 112, 138, 145–147
MATLAB, 14, 15, 18, 20, 28, 30, 59, 68, 78, 87, 97, 104, 106, 111, 131, 138, 141, 143, 144, 147, 148, 150
matrix
– autocorrelation, 32, 33, 37
– conference (C-matrix), 58, 59, 99, 134, 135, 136, 151
– cyclic, 78, 117, 119
– Hadamard, 28, 30, 33, 39, 40, 57–59, 62, 65, 78, 86, 88, 89, 91–93, 97, 98–100, 102–104, 106, 108, 110, 111, 113, 129–134, 135, 136, 151
– ill-conditioned, 39
– non-singular, 101, 129, 144
– optimal, 54, 87, 138, 139, 143, 145, 147, 150
– orthogonal, 19, 33, 37, 53, 79–81, 85, 100, 110, 130, 143, 144, 148
– permutation, 110, 111
– shortened Hadamard, 30, 59, 133, 134
– Toeplitz, 24
measurement ruler, 114
noise
– abatement, 5
– attenuation, 5, 48, 86, 98
– critical multiplicity, 97–100, 128
– decreasing, 21, 42, 43, 46, 47, 54, 63, 64, 86, 117
– depression, 11, 41, 43, 63
– extension, 42, 59
– immunity, V, 5–7, 12, 13, 18, 22, 40–42, 47–51, 57–59, 63, 65, 75, 77, 127, 128
– potential, V, 46–49, 57–59, 127
– pulse, 62, 68, 72, 105, 113, 125
– resistance, VI, 1, 2, 116
– smearing, 47, 88, 100
operator
– filtration, 53, 56
– Fredholm, 21, 52
– isometrical, 45
– pre-distorting, 21, 45
– Volterra, 52
pan, 135
phase-frequency characteristic (PFC), 9
pre-distortion, V, VI, 5, 7, 9, 10–19, 21, 22, 41, 42, 58, 63, 66, 75, 77, 112, 115, 127
problem
– Procrustean, 136, 138
– root-matrix, 101, 113
pulse
– correction, VI, 5, 22, 63, 65, 76, 77, 121, 123, 124
– detection, VI, 22, 41, 61, 76, 77, 121, 123
– disturbance, V, 9, 10, 18, 20, 22, 42–44
– identification, VI, 5, 22, 65, 121, 123
– localization, VI, 22, 65, 77
– noise, V, VI, 1, 5, 11, 12, 18, 22, 42, 43, 60, 62, 63, 65, 70, 74, 75, 79, 82–85, 95, 97, 121, 125–127, 129
– r-multiple, 46, 49, 50, 75, 76, 98
– single, 47–50, 59, 66, 73, 74, 76, 85, 86, 91, 97, 125
realization, implementation, V, 1, 13, 27, 44, 117–120, 127
– hardware, 10, 114, 126, 128
– software, 126–128
real number, 62, 101, 102
reconstruction, 11, 13, 14, 16, 18, 20, 21, 42, 45, 115
redundancy
– information, 85, 87, 99, 159–162, 165
– introduction, 84, 101, 159
– natural, 99
– removal, 199
residue, 65–74, 124, 148
root images, 100–113
rows of a matrix, 24, 29, 39, 40, 57, 62, 64, 72, 78, 87, 98, 109, 112, 129, 130, 133, 134, 149
rupture, 22, 23, 25, 26
section, 21, 36, 41, 46, 58, 85, 93
sector, 14, 27, 29, 34, 42, 43, 64, 69
segment, 51, 52, 59, 76, 77, 79, 84, 100, 122–124, 127
shaper, 125, 126
signal
– continuity, V, 13, 22–24, 26, 27
– cutting, 14, 15, 49, 69, 77, 79
– dimension, 1–3, 10, 13, 19, 20, 22, 52, 77–81, 84, 85, 88, 91, 92, 127
– discretization, 3
– non-stationary, 27, 29–32, 36, 119
– processing, VI, 5, 7, 10, 12, 14, 41, 42, 44, 66, 68, 75, 79, 127, 133
– quantization, 3
– stretching, V, 18, 22, 46, 47, 49, 50
– variance, V, 22, 27–31, 34, 36, 40, 119, 120, 127
steganography, 77, 113, 128
strip-method, V–VII, 11, 12, 14, 16, 19, 22, 27, 30, 40–42, 45, 51, 63–65, 74, 75, 77, 79, 84, 91–93, 97, 114, 115, 117, 120, 126–129, 133, 134
– efficiency, V, 41, 45, 47, 59, 78, 97, 127
strip-operator, 13–17, 48, 49, 77
strip-transformation, 15, 16, 18, 20, 30, 32, 33, 68–70, 80–83, 85, 87, 89, 90, 92–94, 97–100, 103, 118, 127, 128
subtraction block, 123–126
summator, 114–125
switch, 115, 118, 119, 121
synchronizer, 118–120, 122–126
synthesis of the optimal filter, V, 46, 51, 53, 55, 57, 127
tap, 114, 118, 120–124
tape, 9, 10, 74, 75, 114–118
tape driving mechanism, 114, 115, 117
theorem
– Schoenberg, 39
– Wiener-Khinchin, 36
transformation
– bilateral, two-sided, 82, 83, 85, 91, 93, 98–100, 106, 113
– congruent, 32, 37
– Fourier, 11, 36, 39
– Fourier-Walsh, 40
– holographic, 11
– inverse, 5, 7, 9, 13, 18, 22, 45, 67, 77, 79, 82, 83, 88, 91, 93, 100, 115, 117, 119, 120, 129
– isometric, 20, 47, 79, 80, 129
– one-sided, 81, 82
– Rademacher, 40
– similarity, 33, 105
– straight, 9, 21, 77, 115, 122, 129
– Walsh, 40
uniform distribution, 22, 59, 62, 73, 79, 85, 86, 89, 90, 129
value
– absolute, 28, 34, 36, 46–48, 54, 56, 62, 86, 98, 127, 129, 135, 136, 138, 139, 143, 145, 147, 149, 150
vernier, 114