LOCAL DIFFERENCE THRESHOLD LEARNING IN FILTERING NORMAL WHITE NOISE

The article studies the process of learning the local difference threshold (LDT) when filtering normal white noise. Existing learning algorithms for image processing are analyzed and their advantages and disadvantages identified. The influence of normal white noise on the recognition process is considered. A method is developed for organizing the learning process of a correlator with image preprocessing by the generalized Q-preparation (GQP) method. The dependence of the average value of the samples of the rank cross-correlation function (RCCF) of the GQPs of the reference and current images, which represent realizations of normal white noise, on the probability of formation of zero-GQP samples is determined. Two versions of a learning algorithm based on the described method are proposed, together with a technique for estimating their efficiency.


Introduction
Pattern recognition is a relevant and promising area of information technology whose scope of application expands every year. Recognition [4,5] comprises separate tasks that must be solved to obtain the most accurate result: the recognition process itself, image preprocessing to detect and eliminate noise, segmentation, real-time recognition, etc.
The choice of the value ψ of the local difference threshold (LDT) that maximally suppresses additive normal white noise in the image is relevant for the generalized Q-transformation (GQT) of the image, or after its partial Q-summation (in the latter case, when creating wide-field correlators) [3,17]. For the particular case of correlation detection of a signal of constant amplitude against noise of the specified type, this issue is partially covered in [2,6,13].
To implement image recognition, a high-precision and fast learning algorithm is required. Existing algorithms have a number of shortcomings, which creates the need for new learning methods. Good recognition accuracy is provided by preprocessing with the GQP method, so this article develops a method based on such preprocessing. The method allows effective training of the system for recognition under the influence of normal white noise on the image [8,10,13].

Purpose and tasks of the study
This article describes a method for organizing the learning process of a correlator with image preprocessing by the GQP method, carried out in order to select the value ψ_opt of the LDT that minimizes the average value of the samples of the rank cross-correlation function (RCCF) of the reference and current images, which are realizations of normal white noise [7,14,16].

Materials and research methods
Consider the graph of the probability density p(U) of the white-noise amplitude distribution. It is shown in Fig. 1a and is expressed by the formula

p(U) = (1 / (σ√(2π))) · exp(−U² / (2σ²)),   (1)

where σ is the standard deviation (SD) of the noise signal amplitude. Expression (1) is valid for describing the noise of a television image transmission channel or the noise of a television photodetector, but cannot be used to describe a unipolar signal from a random background image with a normal distribution of the brightness probability density [8,15].
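As a quick numerical check of (1) (illustrative only; σ and the sample size below are arbitrary), the empirical density of generated normal noise near U = 0 can be compared with the formula:

```python
import math
import random

def normal_pdf(u, sigma):
    # p(U) = 1/(sigma*sqrt(2*pi)) * exp(-U^2 / (2*sigma^2)), formula (1)
    return math.exp(-u * u / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

random.seed(1)
sigma = 2.0
samples = [random.gauss(0.0, sigma) for _ in range(200_000)]

# Empirical density in a narrow bin around U = 0 should approach p(0)
half = 0.05
frac = sum(1 for u in samples if abs(u) < half) / len(samples)
empirical = frac / (2 * half)
print(round(empirical, 3))
```

The agreement improves with the sample size, as expected for a histogram estimate of a smooth density.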
The distribution law of the probability density p(ΔU) of the difference ΔU of samples of normal white noise (Fig. 1b) has the following form:

p(ΔU) = (1 / (2σ√π)) · exp(−ΔU² / (4σ²)).   (2)

This law is also valid for the difference of signal samples from a random background with a normal law of distribution of the brightness probability density. Therefore, the conclusions regarding suppression of the contribution of additive normal white noise to the RCCF can be extended to suppression of the contribution of the random background signal to this correlation function [2,9,12].
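The SD √2·σ of the difference signal implied by (2) can be verified on simulated noise (an illustrative check, not part of the original derivation):

```python
import math
import random

random.seed(2)
sigma = 1.5
u = [random.gauss(0.0, sigma) for _ in range(100_000)]

# Difference of adjacent independent samples: dU_i = U_{i+1} - U_i,
# whose distribution (2) has standard deviation sigma*sqrt(2)
d = [u[i + 1] - u[i] for i in range(len(u) - 1)]
mean = sum(d) / len(d)
sd = math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))
print(round(sd, 2))
```

The empirical SD of the differences lands near σ√2 ≈ 2.12 for σ = 1.5, confirming the variance doubling under differencing.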
The quantization of the difference signal ΔU_{i,j} = U_{i,j} − U_{i+1,j+1}, whose quantized value r(i,j) is the rank of the generalized Q-preparation, leads to replacing the continuous distribution function p(ΔU), which is unlimited in extent, with a lattice function p(Δm), where Δm = ΔU/(√2·σ), limited (Fig. 2) in extent by the number m_b. This number is related to the confidence interval 2tσ√2 (t is a coefficient) and the confidence probability P by relations (3) and (4), where INT[·] in (3) denotes the integer part of a number, and (4) is P = 2Φ₀(t), with Φ₀(t) the tabulated probability integral.

Fig. 2. Lattice function graph
From (3) and (4) it follows that the functional dependence m_b(P), for the case of equality in (3), is as reflected in Table 1. Since preparation usually follows quantization, (3) takes the form (7). From (7) it follows that, with generalized contour preparation, the bit depth of the analog-to-digital converter (ADC) should be twice as large as with conventional correlation processing.
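The relationship P = 2Φ₀(t) between the coefficient t and the confidence probability uses the tabulated probability integral (Laplace function), which can be evaluated through the standard error function; the function names below are illustrative:

```python
import math

def phi0(t):
    # Tabulated probability integral (Laplace function): Phi0(t) = erf(t / sqrt(2)) / 2
    return math.erf(t / math.sqrt(2.0)) / 2.0

def confidence(t):
    # Probability that a zero-mean normal value falls inside +/- t standard deviations
    return 2.0 * phi0(t)

print(round(confidence(1.96), 2))  # → 0.95, the classical 95% interval
```

The same routine reproduces the confidence probabilities tabulated against m_b in Table 1 once the correspondence between m_b and t from (3) is fixed.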
From Table 1 it follows that the small bit depth (m_b ≤ 64) of high-speed ADCs (for example, type 1107 PVZ) provides processing of the prepared image with a confidence probability P ≤ 0.9844. Let M be the length of the GQP of the one-dimensional reference noise image E(x′, y′):

M = M₀ + M₊ + M₋.   (8)

For the prepared images we omit the index r for brevity. The number N of possible reference images having the same numbers of samples of the zero (M₀), positive (M₊) and negative (M₋) GQPs for a given length M is defined as follows:

N = M! / (M₀! · M₊! · M₋!).   (9)

The probability of each such combination (M₀, M₊, M₋) is

P(M₀, M₊, M₋) = P₀^{M₀} · ((1 − P₀)/2)^{M₊+M₋},   (10)

where P₀ is the sample probability of the zero GQP. The total probability of all possible combinations,

Σ N · P(M₀, M₊, M₋) = (P₀ + (1 − P₀)/2 + (1 − P₀)/2)^M = 1,   (11)

follows from the multinomial theorem. The maximum of (12) determines the most probable value of m, and the initial moment of the first order is found as follows. The distribution is mainly concentrated in the range [1, 2M/3] of values of m, being nearly symmetrical relative to m = M/3; for this range the initial moment of the first order equals half the range, i.e. M/3.
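The multinomial count of reference images and the probability of each (M₀, M₊, M₋) combination can be sanity-checked numerically; the closed forms below are the standard multinomial expressions with P₊ = P₋ = (1 − P₀)/2, assumed to match the originals lost in extraction:

```python
from math import factorial

def n_images(m0, mp, mn):
    # Number of reference images with m0 zero, mp positive, mn negative GQP samples
    m = m0 + mp + mn
    return factorial(m) // (factorial(m0) * factorial(mp) * factorial(mn))

def combo_prob(m0, mp, mn, p0):
    # Probability of one such combination, positive/negative samples equally likely
    return p0 ** m0 * ((1.0 - p0) / 2.0) ** (mp + mn)

M, p0 = 12, 1.0 / 3.0
total = sum(
    n_images(m0, mp, M - m0 - mp) * combo_prob(m0, mp, M - m0 - mp, p0)
    for m0 in range(M + 1)
    for mp in range(M - m0 + 1)
)
print(total)
```

Summing over all combinations gives 1 (up to floating-point error), confirming that the combinations exhaust the probability space.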
After transformations we obtain (21). Consider the case P₀ → 1/3, with P₊ = P₋ = (1 − P₀)/2, so that M₁,₀ → M/3. Let us find the value m corresponding to the maximum probability density for the sum (21); equating the derivative to zero gives condition (22). From (22), using the Stirling formula

z! ≈ √(2πz) · (z/e)^z

for the approximate calculation of the factorials, and noting that the change of ln m is much less than the change of m, it follows that R₁ ≈ M/3. This conclusion is explained by the data in Table 3. For the case P₀ > 1/3 the analysis is analogous. For the existence of a single extremum it is necessary to fulfil the condition (at P₀ = 1/3)

M = 3g,   (31)

where g ∈ ℕ (the set of natural numbers).
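The role of the divisibility condition M = 3g can be checked numerically: in a multinomial model with P₊ = P₋, the marginal distribution of M₀ is binomial(M, P₀), and at P₀ = 1/3 its single mode lands exactly on M/3 when M is a multiple of 3. This is an illustrative sketch, not the article's derivation:

```python
from math import comb

def prob_m0(m0, M, p0):
    # Marginal distribution of the zero-sample count M0 is binomial(M, p0)
    return comb(M, m0) * p0 ** m0 * (1 - p0) ** (M - m0)

M, g = 12, 4                 # M = 3g: the GQP length is a multiple of 3
p0 = 1.0 / 3.0
mode = max(range(M + 1), key=lambda m0: prob_m0(m0, M, p0))
print(mode == g)             # the single maximum sits exactly at M0 = M/3
```

For M not divisible by 3 the binomial mode no longer coincides with M·P₀, which is consistent with requiring M = 3g for a single clean extremum.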
To implement an ideal LDT learning process it is necessary that a minimum change of the threshold ψ by one conventional unit (one quantization discrete) in the vicinity of ψ = 0.435 produce a change by unity of the numbers M₀, M₊ or M₋, as expressed by relations (32)–(34). The specific distribution of samples within the rank mask of the reference should be symmetrical, in order to sharpen the peak of the rank autocorrelation function of the GQP reference [10,11]. The block diagram of the learning-process algorithm is based on the calculation of the RCCF of the GQP of the noise image of the current frame (line) and the generated GQP.
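The three-level (ternary) preparation underlying the counts M₀, M₊, M₋ can be sketched as follows; the one-dimensional difference scheme and the helper name `gqp` are illustrative assumptions, not the article's exact operator:

```python
import random

def gqp(signal, psi):
    # Generalized Q-preparation sketch: three-level quantization of the
    # difference signal by the local difference threshold psi
    prep = []
    for i in range(len(signal) - 1):
        d = signal[i + 1] - signal[i]
        if d > psi:
            prep.append(+1)
        elif d < -psi:
            prep.append(-1)
        else:
            prep.append(0)
    return prep

random.seed(3)
noise = [random.gauss(0.0, 1.0) for _ in range(10_001)]
prep = gqp(noise, 0.6)
m0, mp, mn = prep.count(0), prep.count(+1), prep.count(-1)
print(m0, mp, mn)
```

By symmetry of the difference noise, M₊ and M₋ stay balanced, and M₀ + M₊ + M₋ equals the GQP length M.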
The formation of a generalized contour preparation at the second step of the algorithm is not fundamental; preprocessing by the method of rank sign delta modulation [10] can also be used. However, since in this case M₀(ψ) = 0 (the values M₊(ψ) and M₋(ψ) do not change), a corresponding adjustment of formulas (32)–(34) is necessary. LDT training stops when the specified error ε > 0 is reached, i.e. when conditions (35) are met. Despite the advantage of such an algorithm, which consists in the uniformity of the LDT learning process (the difference lies only in the reference images), it is difficult to implement and can be significantly simplified. The simplification is based on the fact that relation (36) holds for the current noise image; this relation corresponds to formula (29) for R₁ if M is taken as the length of the reference noise image. The main advantage of the simplified algorithm is its extreme simplicity: it does not require calculating the RCCF at all and can be combined with the input and preparation of the current image frame.
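A minimal sketch of the simplified variant, under the illustrative assumption that learning only needs to drive the zero-sample fraction P₀ toward the optimal 1/3; bisection stands in for whatever update rule the original algorithm uses, and no RCCF is computed:

```python
import random

def zero_fraction(signal, psi):
    # Fraction P0 of zero GQP samples: |difference| does not exceed the threshold psi
    diffs = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
    return sum(1 for d in diffs if abs(d) <= psi) / len(diffs)

def learn_psi(signal, target=1.0 / 3.0, steps=40):
    # P0(psi) grows monotonically with psi, so bisection drives the
    # zero-sample fraction toward the target without any RCCF calculation
    lo = 0.0
    hi = max(abs(signal[i + 1] - signal[i]) for i in range(len(signal) - 1))
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if zero_fraction(signal, mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

random.seed(4)
noise = [random.gauss(0.0, 1.0) for _ in range(20_001)]
psi = learn_psi(noise)
print(round(psi, 3))
```

Since only a counter over the current frame is needed, this loop can indeed be combined with frame input, as the text notes.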
The learning process of the LDT thus reduces to finding the average modulus Δ̄ of the sample mean values of the positive and negative differences of the difference noise (or background) image and setting the LDT ψ = Δ̄. Taking into account the stationarity of the noise in adjacent image frames, to save memory it is advisable to combine the calculation of the estimate Δ̄ with the input of the current image frame, and to use the threshold ψ = Δ̄ in LDT preprocessing of the subsequent frames (whose number is determined by the stationarity time of the noise or background, respectively) [1,11,12].
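Computing the estimate Δ̄ as the average modulus of the sample means of the positive and negative differences can be sketched as follows; the exact normalization of the article's formula was lost in extraction, so the check below compares only against the half-normal mean of the difference noise:

```python
import random

def threshold_estimate(frame):
    # LDT estimate: average modulus of the sample means of the positive and
    # negative differences of the frame (computable during frame input)
    diffs = [frame[i + 1] - frame[i] for i in range(len(frame) - 1)]
    pos = [d for d in diffs if d > 0]
    neg = [d for d in diffs if d < 0]
    mean_pos = sum(pos) / len(pos)
    mean_neg = sum(neg) / len(neg)
    return (abs(mean_pos) + abs(mean_neg)) / 2.0

random.seed(5)
sigma = 1.0
frame = [random.gauss(0.0, sigma) for _ in range(50_001)]
psi = threshold_estimate(frame)
# Mean of the positive half of N(0, 2*sigma^2) is sqrt(2)*sigma*sqrt(2/pi) ~ 1.13*sigma
print(round(psi, 2))
```

The single pass over the differences makes the estimate cheap enough to fold into frame input, matching the memory-saving argument above.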
For a confidence probability 0 < P < 1 (the relation also holds for P = 1), such an estimate is calculated by the following formula:

Research results
As a result of the study it was determined that the dependence R₁(P₀) of the average value of the samples of the rank CCF (RCCF) of the GQPs of the reference and current images, representing realizations of normal white noise, on the probability P₀ of the formation of zero-GQP samples, which in turn depends on the LDT ψ, has a clearly defined extremum at the point P₀ = P₀₀ = 0.33. In the extremum region the dependences R₁(ψ) are close to the asymptotes R₁(ψ) = 0.5·(1 − ψ/(√2σ)) and R₁(ψ) = ψ/(√2σ), which intersect at ψ = 0.33·√2σ, i.e. at 0.33 of the standard deviation √2σ of the difference noise image. The extremum of the dependence R₁(P₀) corresponds to the maximum of the total absolute margin of the reference image, which ensures minimization, on average, of the false-peak amplitude of the total CCF of the GQPs of the reference and current images when working on arbitrary backgrounds.
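The location of the extremum can be illustrated analytically if one assumes, for the sketch, that an RCCF sample is proportional to the rank-coincidence rate of two independent ternary GQPs; under that assumption the mean coincidence rate f(P₀) = P₀² + (1 − P₀)²/2 is minimized exactly at P₀ = 1/3 ≈ 0.33:

```python
def mean_match(p0):
    # Coincidence rate of two independent ternary GQPs with zero-sample
    # probability p0 and positive/negative each (1 - p0)/2:
    # f(p0) = p0^2 + 2*((1 - p0)/2)^2 = p0^2 + (1 - p0)^2 / 2
    return p0 ** 2 + (1.0 - p0) ** 2 / 2.0

grid = [i / 1000.0 for i in range(1001)]
best = min(grid, key=mean_match)
print(best)  # grid minimum lands at 0.333, i.e. P0 ~ 0.33
```

Setting f′(P₀) = 3P₀ − 1 = 0 gives the same point in closed form, consistent with the extremum reported above.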
Algorithms for the learning process have been developed that minimize the average value of the RCCF samples of the reference and current noise images preprocessed by the GQP method. One algorithm is based on calculating the RCCF and requires only minor changes to the implementation of the GQP correlator.
The other algorithm is simpler to implement and is based on counting the number of single samples of the GQP of the current image. The determination of the estimate Δ̄ ≈ ψ was shown, which requires minimal calculations and equals the average modulus of the sample means of the positive and negative differences of the current image.

Conclusions
This article considered the recognition of discretized images under image noise, namely, training for the recognition of such images. The corresponding learning algorithms were analyzed and their advantages and disadvantages presented. A method was developed for organizing the learning process of the correlator with image preprocessing by the GQP method, carried out in order to select the value ψ_opt of the LDT. The dependence R₁(P₀) of the average value of the samples of the rank CCF of the GQPs of the reference and current images, representing realizations of normal white noise, on the probability P₀ of zero-GQP sample formation was identified, together with the extremum of this dependence.
Two variants of the learning algorithm based on the described method were proposed. One algorithm is based on calculating the RCCF and requires only minor changes to the implementation of the GQP correlator. The other is simpler to implement and is based on counting the number of single samples of the GQP of the current image. A definition of the algorithm's estimate was proposed, equal to the average modulus of the sample mean values of the positive and negative differences of the current image. The proposed method is characterized by a small number of calculations and ease of implementation. Thus, the obtained method can be used for training in the processing of sampled images under the influence of normal white noise, while increasing recognition efficiency.