CHAPTER 4 (CONTINUED). SMOOTHING
Everything stated above concerns the case of a parameter without noise. If the parameter carries an interfering noise, smoothing is necessary. Obviously, anomalous measurements must be removed before smoothing; otherwise they may be smoothed to such an extent that they no longer look like anomalous measurements, yet introduce an invalid distortion into the information.
There is one ambiguity in this problem. If the number of anomalous measurements in a pack is large, the error of restoring the conjectured true values of the parameter by interpolation over the interpolated section can be rather large, much larger than the digitization error. It is quite natural that using such restored values can then be inexpedient. Three solutions are therefore possible:
- to discard anomalous measurements altogether, without restoring them;
- to allow restoration of anomalous measurements by linear interpolation, in a limited number;
- to allow restoration of an unlimited number of anomalous measurements.
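The second of these policies can be sketched as follows. This is a minimal illustration; the function and variable names are mine, not from the text. A pack of anomalous measurements is replaced by values interpolated linearly between its last normal neighbour before the pack and the first normal neighbour after it.

```python
# Illustrative sketch: restore a limited pack of anomalous measurements
# samples[start:end] by linear interpolation between the normal neighbours
# samples[start-1] and samples[end]. Names are hypothetical.

def restore_pack(samples, start, end):
    """Linearly interpolate samples[start:end] between its normal neighbours."""
    a, b = samples[start - 1], samples[end]
    n = end - start + 1                  # number of steps across the gap
    step = (b - a) / n
    for j in range(start, end):
        samples[j] = a + (j - start + 1) * step
    return samples
```

For example, `restore_pack([0, 100, 100, 3], 1, 3)` replaces the two anomalous values 100 by 1.0 and 2.0, the points of the straight line from 0 to 3.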
It is easy to notice that the described algorithm also has a filtering property. Indeed, suppose there is a parameter distorted by a noise of large amplitude (Fig. 30). We have a criterion Δ_d, and if a given measurement falls outside the limits of Δ_d, we replace it by the value a_{i+1} = a_i + Δ_d. The restored points look like point 2 in Fig. 30. Obviously, the noise decreases sharply. If there is a redundancy of the polling frequency, the smoothing described above can be applied; the points obtained in this way are marked 3. The figure shows visually that the parameter is filtered. At the same time, single anomalous measurements (point 4) are removed as well. The advantage of this method is that both smoothing and removal of anomalous measurements are performed by one algorithm. Its defect is that, in the case of a pack of anomalous measurements or a discontinuity of the function, the algorithm becomes less effective. Indeed, this algorithm is nothing other than a tracker, and a tracker has inertia. While tracking a pack of anomalous measurements, it deviates from the true value of the parameter. After the pack ends, the filter returns rather slowly to the expected value of the parameter (Fig. 31). As a result, a large error appears in the low-frequency region, which cannot be removed in any way. A way out, probably, is to switch the filter off whenever the number of consecutive anomalous measurements exceeds m (for example, 3), until they vanish.
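The tracking behaviour described above, a_{i+1} = a_i ± Δ when a measurement leaves the corridor, can be sketched as follows. This is an illustrative reading of the text with invented names, not the author's exact implementation.

```python
# Illustrative tracking filter: when a new measurement deviates from the
# current estimate by more than delta, the estimate steps toward it by
# exactly delta; otherwise the measurement is accepted as-is. This both
# removes single outliers and smooths large-amplitude noise.

def tracking_filter(samples, delta):
    out = [samples[0]]
    a = samples[0]
    for x in samples[1:]:
        if x - a > delta:
            a = a + delta        # measurement above the corridor: step up
        elif a - x > delta:
            a = a - delta        # measurement below the corridor: step down
        else:
            a = x                # inside the corridor: accept as-is
        out.append(a)
    return out
```

A single outlier is clipped: `tracking_filter([0, 10, 0, 0], 1)` gives `[0, 1, 0, 0]`. But a pack of outliers drags the estimate along: `tracking_filter([0, 10, 10, 10], 1)` gives `[0, 1, 2, 3]`, which is exactly the inertia defect noted above.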
Thus, on the whole, the algorithm of smoothing and removal of anomalous measurements can be implemented either as a single unit or as separate blocks, whose combination and choice of criteria can vary. Since tuning the algorithm for each particular case is rather labour-consuming, it makes sense to place at the beginning of the algorithm a word describing its configuration. This word can contain identifiers defining which criteria are in effect; such words can be stored in the program, and their selection can be determined by the identifiers of a parameter.
3. Similarly to a pack of anomalous measurements, a sequence of unipolar samples of an interfering noise can also arise. Obviously, under the operation of the filter described above, unipolar samples produce the same effect as anomalous measurements (Fig. 31). Here an error can appear which in its dynamic characteristics is similar to the measured parameter and is consequently unremovable later.
It is understandable that a narrow-band noise close to the frequency range of the measured parameter will generate a sequence of samples with alternating signs (Fig. 32). The efficiency of the filter can then be increased by reducing the Δ of the filter, but not to such an extent that it affects the accuracy of restoring the parameter. It follows that the multiplicity of unipolar samples is proportional to the ratio f_o/f_d. For example, at f_o/f_d = 4 we can have up to 4 unipolar samples. Smoothing by averaging and the smoothing of the tracking algorithm are independent, which allows the efficiency of smoothing to be increased sharply; that efficiency is mainly limited by the amplitude of the noise. With growth of the noise amplitude its high-frequency component grows too, and therefore the probability of packs of unipolar samples, and hence the measurement error, increases as well. Even so, the efficiency of the filter is very high.
When interference of large amplitude and frequency is present, the probability of a sample landing inside the zone Δ is small and can be neglected. In this case each sample takes the value either +Δ or -Δ. Such a model allows separate samples to be considered independent, and this case can be represented as coin tossing. Let us construct such a realization (Table 1); the table is built from two sessions of the experiment, 100 throws each. Let us determine the distribution of multiplicity (Fig. 33) over the 200 throws. Since we are speaking about unipolar samples, the two branches of Fig. 33 can be averaged, treating them as 400 throws.
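The coin-tossing model can also be reproduced numerically instead of by hand. The sketch below (names mine, not from the text) generates ±1 signs and collects the lengths of unipolar runs; their histogram approximates the distribution of Fig. 33.

```python
# Illustrative simulation of the coin-tossing model: generate random +/-1
# samples and measure the lengths of unipolar runs. The share of runs of
# length n should fall off roughly as (0.5)^n, as in Fig. 34.

import random

def run_lengths(n_throws, seed=0):
    rng = random.Random(seed)
    signs = [rng.choice((+1, -1)) for _ in range(n_throws)]
    runs, length = [], 1
    for prev, cur in zip(signs, signs[1:]):
        if cur == prev:
            length += 1          # run continues
        else:
            runs.append(length)  # run ended, record its length
            length = 1
    runs.append(length)
    return runs

lengths = run_lengths(400)       # 400 throws, as in the averaged Fig. 33
```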
[Table 1, Fig. 33 and Fig. 34 appear here in the original.]
The probability of a pack of n measurements arising is determined as the joint probability of n events each having probability 0.5. Therefore P_n = (0.5)^n. Accordingly: P_1 = 1/2, P_2 = 1/4, P_3 = 1/8, P_4 = 1/16, P_5 = 1/32, P_6 = 1/64, P_7 = 1/128, P_8 = 1/256. This ideal curve is shown in Fig. 34. From it, the probability of packs of unipolar samples allows the probability of an error to be determined. Let us determine it at the 0.95 quantile:
Treat the pack length n as a continuous variable with density f(n) = 2^(-n) and seek n such that P[length < n] = 0.95. Integrating,

∫ 2^(-n) dn = ∫ e^(-n ln 2) dn = -2^(-n)/ln 2 + const,

so the integral from 0 to ∞ equals 1/ln 2. Normalizing by this total, the 0.95 quantile satisfies

1/ln 2 - 2^(-n)/ln 2 = 0.95 / ln 2, that is 2^(-n) = 1 - 0.95 = 0.05,

hence 2^n = 20, from which n = log2 20 ≈ 4.3 ≈ 4.
Thus, packs longer than 4 need not interest us, since the probability of such an event is small enough.
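The quantile computation above reduces to a one-liner: assuming the 2^(-n) density, the 0.95 quantile is n = log2(1/0.05).

```python
# Check of the 0.95-quantile derivation: 1 - 2^(-n) = 0.95 gives
# n = log2(1 / 0.05) = log2(20).

import math

n_star = math.log2(1 / (1 - 0.95))   # = log2(20)
print(round(n_star, 1))              # 4.3, rounded down to 4 in the text
```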
The approach described above rests on some idealization. In a real situation the noise frequently has a periodic character. As a result, the distribution of Fig. 34 is somewhat distorted, and the probability of readouts of smaller multiplicity increases.
Further, we have so far considered the smoothing process with Δ equal to the length of one step in relative magnitudes. Now it must be taken into account that the polling frequency is taken 4 times above f_d. The digitization error then decreases quadratically, that is, 16 times, so the Δ of the following filter can be 16 times smaller. Owing to this, the length of a pack can be 16 times larger, which allows multiple interference of up to 16 × 4 = 64 samples per digitization interval to be neglected. But it must also be taken into account that the digitization error grows quadratically with the digitization interval, while the filter error grows only linearly. Therefore, keeping the Δ of the filter constant, it must be taken N times smaller than the digitization error, where N = f_o/f_d. Then, for a pack of length N, the error will not exceed the digitization error. In particular, at N = 4 the probability of longer packs is negligibly small. Together with the further smoothing, and with the property of suppressing interference during digitization, this makes the filter as a whole effective enough.
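The scaling argument of this paragraph can be restated numerically under its own assumptions (a quadratic decrease of the digitization error with the oversampling ratio N; the variable names are mine):

```python
# Numeric restatement of the scaling argument: polling at N times the
# digitization frequency reduces the digitization error by N^2, so the next
# filter's delta can shrink by the same factor and packs N^2 * N samples
# long can be tolerated per digitization interval.

N = 4                               # oversampling ratio f_o / f_d
error_reduction = N ** 2            # quadratic decrease: 16 times
max_pack = error_reduction * N      # 16 * 4 = 64 samples
print(error_reduction, max_pack)    # 16 64
```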
4. The performance of the filter can be improved. Indeed, the longer a pack of unipolar samples is, the less probable it is. Obviously, the longer the pack, the more probable the appearance of a sample of the other polarity becomes. It is usually considered that each of the independent samples has an independent probability. That is correct under the condition that we neglect the values of the previous samples. But as soon as we have the possibility to store and analyse the previous samples, we also have the possibility to predict the values of subsequent ones. To illustrate this not so well-known phenomenon, let us play a game as a model, using Table 2.
[Table 2 appears here in the original.]
The essence of the game is the following. Assume that we toss a coin. Supposing that heads (0) and tails (1) are equiprobable, it makes sense to treat "1" as a win and "0" as a loss, expressing this by the function U = ±1. We also introduce a number, the stake C. A win is quantified by the product U·C. We shall consider two strategies:
- first: since U is equiprobable, the stake can be set as a constant and left unchanged, for example C = 1;
- second: the probability of a given sample depends on the previous ones, and after, for example, U = +1 the value U = -1 should fall out; therefore the stake is placed on the sign opposite to the previous value of U.
The longer a pack of same-sign samples U is, the larger the stake on the expected sample. Quantitatively we define it by the formula C = 2^(n-1), where n is the length of the previous pack. Let us consider the game finished after a number of trials specified beforehand, for example 50. The more, the better, but already at N = 10 the tendencies begin to show through quite clearly. Having completed the game, we compare the outcomes of both strategies. The winning strategy is the one whose sum of winnings is larger; quantitatively the outcome is determined as the difference of the winnings.
So, having made 50 tosses of a coin, we fill in the table (Table 2). As a result, the following is obtained:
winnings of the first strategy: B1 = 6;
winnings of the second strategy: B2 = 24.
The net gain of the second strategy is 18 points.
Let us represent the process graphically (Fig. 35). As for the first strategy, it is understandable that the total sum should only fluctuate near zero, since the numbers of samples of both signs should tend to equality as N grows. This is exactly the sense in which the probabilities of the events in this experiment are equiprobable and equal to 1/2. As for the second strategy, the figure shows a tendency toward growth of the winnings. The rate of this growth equals N/B2 and, as N grows to infinity, tends to 2. Hence the winnings tend to P·N, or in this case N/2.
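The game of Table 2 can be replayed programmatically rather than by hand. The sketch below (all names illustrative, not from the text) implements both strategies: strategy 1 always stakes 1 on the same side, strategy 2 stakes C = 2^(n-1) against the sign that has just run n times. Different seeds let the reader repeat the experiment many times and examine the spread of outcomes.

```python
# Illustrative replay of the coin game: returns the winnings of the
# constant-stake strategy (b1) and of the doubling "bet against the run"
# strategy (b2) for one session of n_tosses throws.

import random

def play(n_tosses, seed):
    rng = random.Random(seed)
    tosses = [rng.choice((+1, -1)) for _ in range(n_tosses)]  # U = +/-1
    b1 = sum(tosses)             # strategy 1: stake 1 on "+1" every time
    b2, run, prev = 0, 0, None
    for u in tosses:
        if prev is not None:
            stake = 2 ** (run - 1)   # C = 2^(n-1), n = current run length
            bet = -prev              # bet against the previous sign
            b2 += stake if u == bet else -stake
        run = run + 1 if u == prev else 1
        prev = u
    return b1, b2
```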
The essence of the problem is that in theory the events following one after another in time are considered incompatible. That is, however many times we toss a coin, before the next toss we suppose that the probability of tails falling out is 0.5. And that is correct. But on the other hand, if we have memory and, tossing coins sequentially, we store the outcomes, it becomes, generally speaking, all the same whether we have thrown ten coins at once, each coin in sequence, or one and the same coin ten times. The outcomes will be almost identical, and as the number of trials tends to infinity, the difference in outcomes tends to zero.
Proceeding from this, if we tossed a coin and heads fell out ten times, it would surprise us, and we should think that it is probably time for tails to fall out too. And if stakes were placed, any normal person would bet on tails. Certainly, it is possible to lose, but the probability of that is small: it equals 1/11, while the probability of winning is 10/11. That is, we suppose that the probability of heads falling out on the next throw is 1/11, and of tails, 10/11. This inconsistency with the concept of classical probability theory is explained only by the fact that through memory we join the past trials, the throws, into one experiment, and the incompatible events become, under these conditions, joint.
We can likewise develop an algorithm which, by storing the previous anomalous readouts, can return to normal more quickly after a pack of anomalous measurements ends. For this purpose we can introduce the condition that a measurement is recognized as normal if its value falls into the field of an allowable error determined by the formula Δ·2^(n-1), where n equals the number of previous anomalous measurements in the pack. The character of the process is shown in Fig. 36.
Under the action of a pack of a unipolar error, the tracking filter accumulates an error while following the anomalous measurements. When the pack ends, it returns to normal measurements (zone X). With the acceptance zone of the tracking filter widened in this way, at the end of a pack of anomalous measurements the measured parameter, limited in its dynamics, will appear in the zone of allowable values and is seized by the tracking filter. Thanks to this, the energy of the interference from anomalous measurements decreases essentially, making the error from the effect of anomalous measurements more acceptable.
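The widened acceptance window Δ·2^(n-1) can be sketched as a single predicate (names mine, not from the text): the more anomalous measurements have passed, the wider the zone in which the next sample is accepted as normal, so the filter re-locks quickly once a pack ends.

```python
# Illustrative acceptance test: after n_anomalous consecutive anomalous
# measurements, a new sample x is treated as normal if it lies within
# delta * 2**(n_anomalous - 1) of the last trusted value.

def accept(last_normal, x, delta, n_anomalous):
    """Return True if measurement x should be treated as normal."""
    window = delta * 2 ** (n_anomalous - 1) if n_anomalous else delta
    return abs(x - last_normal) <= window
```

For example, with delta = 1 a deviation of 3 is rejected right after a normal sample (`accept(0, 3, 1, 0)` is False) but accepted after a pack of three anomalous samples (`accept(0, 3, 1, 3)` is True, since the window has grown to 4).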
5. Thus, we have considered a set of methods of struggle against interference of various natures. These methods can be incorporated into one computing algorithm, an example of which we shall consider below.
The basic data for the algorithm are the digitization error Δ and the number M of anomalous measurements in a pack above which the signal "Failure" is produced. Besides, the number N, defined by the volume of buffer memory No. 1 (BP1), is set. This memory is necessary for processing the information by a tracking filter, and its volume is determined by the number of allowable anomalous measurements in one pack; roughly, N can be taken equal to M.
Notation used in the algorithm:
x_i - the next measurement;
i - the number of the next measurement;
j - the number of an anomalous measurement;
m - the total number of anomalous measurements;
Δx - the module of the difference between the next and the previous (possibly fitted) measurement;
ξ_i - a sign flag equal to "1" if x_i > x_{i-1} and "0" if x_i < x_{i-1} (written ξ here to distinguish it from the measurement x_i).
Depending on the situation, the time for one pass of the algorithm can vary and reaches a maximum at T = Δt × M, where Δt is the interval between measurements. Besides, the information is processed over several channels and, finally, to accelerate the input of information into the computer, the information must be packed into files of a certain size for full processing. By virtue of this, the recording of information after smoothing must be made into a two-page buffer storage: the information from the filter is first written onto one page while input to the computer proceeds from the other, and then the pages swap.
So, the algorithm works as follows. On receipt of the first measurement, under the condition i = 1, the value x_1 is written into the RAM for further use in the algorithm and into the buffer memory BP1 for further smoothing and recording into BP2.
Further, on receipt of the measurement x_2, it is compared with x_1. If x_2 > x_1, then ξ_i = 1, otherwise 0. The value ξ_i is stored for subsequent use, and the latest x_i is stored for comparison with the next x_{i+1}. In the case i = 1, x_i is not compared with x_{i-1}, since there is simply no previous measurement, and the transition goes at once to the next block, the determination of the difference between the next measurement x_i and the previous x_{i-1}, that is, the evaluation of the formula Δx = |x_i - x_{i-1}|.
Δx is then compared with double the digitization error, Δx ≤ 2Δ. If this condition is satisfied, the measurement is considered normal and goes for recording into the RAM as x_i and into BP1. If Δx > 2Δ, the measurement is anomalous. In that case the following is done. First, the fact of an anomalous measurement is noted in a counter (Σ_i = m) and m is compared with the number M. If m < M, the quantity m·4Δ is calculated and stored for further use in the algorithm. After these manipulations the further processing of the measurement proceeds.
If x_{i+1} > x_i, the fitted value x_{i+1} is obtained by adding the magnitude 2Δ to x_i; in the contrary case, 2Δ is subtracted from x_j. Since it is not known whether the pack of anomalous measurements {x_j} has ended (and j can also equal 1), the fitted x_{i+1} is not passed to the output of the filter, but is used inside the filter as the readout for comparison with the next measurement. That is, control passes back to the block that evaluates ξ_i (whether x_i > x_{i-1}) and Δx = |x_i - x_{i-1}|.
This continues until the pack of anomalous measurements ends, or until the condition m > M is fulfilled, that is, until the signal "Failure" is produced, upon which the process of information input must be stopped until the causes of the failure are cleared up.
The fact of the ending of a sequence of anomalous measurements is determined from a change of the sign of the difference of successive measurements, that is, if ξ_i ≠ ξ_{i-1}. In this case a transition to another branch of the program is made. First, the module of the difference is determined again and compared with the magnitude 4mΔ. This magnitude is determined as the widened zone of indeterminacy around the last normal measurement, that is 2Δ·m·2, where m is the number of anomalous measurements in the pack. The doubling is made because the zone of indeterminacy is symmetric with respect to the last normal measurement.
If Δx ≤ 4mΔ, the measurement is considered normal and the algorithm passes to interpolation over the zone of the pack of anomalous measurements, that is, the values are restored with the step (x_{i+m+1} - x_i)/(m+1): x_{i+j} = x_i + j·(x_{i+m+1} - x_i)/(m+1). The calculation proceeds until j exceeds m+1, that is, until the last measurement is reached. After that the interpolation is finished and the value m is set to zero. Simultaneously with the interpolation, the obtained measurement values go for recording into the RAM as x_i and into BP1.
If Δx > 4mΔ, a correction of the obtained value is made by calculating x*_i = x_{i+m} + 4mΔ if ξ_i = 1 and x*_i = x_{i+m} - 4mΔ if ξ_i = 0. Further interpolation is made in the same way. However, the probability of such an event is smallest.
After an array of information has been formed in BP1, its size defined by the number of information words Σ_i = N, the information goes for smoothing under the formula (x_{i-1} + 2x_i + x_{i+1})/4. Such smoothing provides unbiasedness in the time of the measurements and a high quality of smoothing. The filtered information arrives in the buffer memory BP2.
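The final 1-2-1 smoothing formula can be sketched as follows. The treatment of the two end points (passed through unchanged) is my own convention, since the text does not specify it.

```python
# Illustrative sketch of the smoothing step: the binomial kernel
# (x[i-1] + 2*x[i] + x[i+1]) / 4 applied across the buffered array, with the
# end points passed through unchanged (an assumed convention).

def smooth(samples):
    if len(samples) < 3:
        return list(samples)
    out = [samples[0]]
    for i in range(1, len(samples) - 1):
        out.append((samples[i - 1] + 2 * samples[i] + samples[i + 1]) / 4)
    out.append(samples[-1])
    return out
```

Because the kernel is symmetric about x_i, the filtered value is attributed to the same instant as the central measurement, which is the unbiasedness in time mentioned above.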
The described algorithm can be modified by adding, for example, normalization of the arrays in BP2 according to a certain law, and so on. But those are problems going beyond the framework of the present work.
6. So, the problem of smoothing by computing means has been considered. It is necessary to note the following. The struggle against interference has a complex character: it should be waged at all stages of obtaining, transferring, processing and recording the measurement information. Methodically, the following directions can be defined:
- introduction of primary converters stable against interference and, especially, against anomalous measurements;
- correct placement of the sensors and qualitative calculation of the damping platforms;
- screening of the sensors and analog smoothing in the sensors;
- application of noise-resistant discrete elements in the digital parts of the information transfer channel;
- introduction of noise-resistant communication lines, in particular with frequency modulation, and also fibre-optical, laser and other communication lines;
- application of noise-resistant coding, for example on the basis of Hamming codes;
- correct choice of the frequency bands of the sensors in correspondence with the specificity of the problems to be solved;
- realization of digital smoothing on each of the measured parameters;
- realization of smoothing over a population of parameters (especially mutually correlated ones), on the basis of, for example, discrete Kalman filters.
Each of these directions has its advantages and defects, and also demands certain costs. The greatest effect can be reached by an optimum choice of the set of methods.
(Site "Through thorns to stars". Anatol Grigorenco)
Copyright © 2001