KR-102961950-B1 - APPARATUS OPERATING METHOD FOR IMAGE PREPROCESSING BASED ON NEURAL NETWORK AND APPARATUS THEREOF
Abstract
According to one embodiment, an input image preprocessing method using a neural network, the neural network comprising a first neural network, a second neural network, and a third neural network, may include the steps of: applying an input image to the first neural network, which is an encoder, to obtain a first vector, a latent vector containing feature information of the input image; applying the first vector to the second neural network, which is a decoder, to generate an output image; applying the first vector to a noise removal filter to obtain a second vector, in which noise of the input image has been removed based on boundary information of the input image; and applying the second vector to the third neural network, which is an identifier trained to receive a vector containing feature information of the input image and determine whether that vector was generated by the first neural network, to determine whether the second vector was generated by the first neural network.
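The four steps in the abstract can be sketched, purely for illustration, as toy linear maps in NumPy. Every shape, weight, and function name below is a hypothetical stand-in (the patent specifies no concrete architecture), and the moving-average `denoise` merely marks where the claimed noise removal filter would sit in the pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the patent does not specify any architecture.
D_IN, D_LATENT = 64, 16
W_enc = 0.1 * rng.normal(size=(D_LATENT, D_IN))   # first network (encoder)
W_dec = 0.1 * rng.normal(size=(D_IN, D_LATENT))   # second network (decoder)
w_disc = 0.1 * rng.normal(size=D_LATENT)          # third network (identifier)

def encode(x):
    """Input image -> first vector (latent vector with feature information)."""
    return np.tanh(W_enc @ x)

def decode(z):
    """First vector -> output image."""
    return W_dec @ z

def denoise(z, k=3):
    """Stand-in for the noise removal filter: a moving average over the
    latent vector, NOT the claimed guided filter."""
    return np.convolve(z, np.ones(k) / k, mode="same")

def discriminate(z):
    """Identifier: probability that z was produced by the encoder."""
    return 1.0 / (1.0 + np.exp(-(w_disc @ z)))

x = rng.normal(size=D_IN)   # a flattened "input image"
z1 = encode(x)              # step 1: first vector
y = decode(z1)              # step 2: output image
z2 = denoise(z1)            # step 3: second vector
p = discriminate(z2)        # step 4: identifier's decision on the second vector
```

In an adversarial setup of this kind, the encoder and the identifier would be trained against each other; the sketch only shows the forward data flow of the four claimed steps.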
Inventors
- 김선영
- 강창호
Assignees
- 국립군산대학교산학협력단
Dates
- Publication Date
- 20260507
- Application Date
- 20220705
Claims (11)
- A method for preprocessing an input image using a neural network in an electronic device, the neural network comprising a first neural network, a second neural network, and a third neural network, the method comprising: applying the input image to the first neural network, which is an encoder, to obtain a first vector, a latent vector containing feature information of the input image; applying the first vector to the second neural network, which is a decoder, to generate an output image; applying the first vector to a noise removal filter to obtain a second vector, in which noise of the input image has been removed based on boundary information of the input image; and applying the second vector to the third neural network, which is an identifier trained to receive a vector containing feature information of the input image and determine whether that vector was generated by the first neural network, to determine whether the second vector was generated by the first neural network; wherein the noise removal filter includes a guided filter that removes noise contained in the input image based on the boundary information, the guided filter removing the noise based on at least one of a patch size, covering at least part of the input image, and a normalization parameter that determines the coefficients of the guided filter.
- The method of claim 1, wherein the input image is an image converted from a color image to a grayscale image.
- (Deleted)
- (Deleted)
- The method of claim 1, wherein the first neural network is trained based on a first loss function, and the first loss function is determined based on at least one of the similarity between the input image and the output image and the output of the third neural network.
- The method of claim 5, wherein the similarity is determined based on the similarity between the distribution of the input image and the distribution of the output image.
- The method of claim 6, wherein the similarity includes a value in which the error of the similarity, calculated based on the difference between the sample mean of the distribution of the input image and the sample mean of the distribution of the output image, has been corrected.
- The method of claim 5, wherein the similarity is determined based on the amount of data change required to transform the distribution of the input image into the distribution of the output image.
- The method of claim 1, wherein the third neural network is trained based on a second loss function, and the second loss function trains the third neural network to lower the probability of correctly predicting that a vector input to the third neural network was generated by the first neural network, and to increase the probability of correctly predicting that a vector input to the third neural network is based on an actual image.
- A computer program stored on a computer-readable medium to execute, in combination with hardware, the method of any one of claims 1, 2, and 5 to 9.
- An electronic device for input image preprocessing, comprising: one or more processors; and a memory; wherein the processor applies an input image to a first neural network, which is an encoder, to obtain a first vector, a latent vector containing feature information of the input image; applies the first vector to a second neural network, which is a decoder, to generate an output image; applies the first vector to a noise removal filter to obtain a second vector, in which noise of the input image has been removed based on boundary information of the input image; and applies the second vector to a third neural network, which is an identifier trained to receive a vector containing feature information of the input image and determine whether that vector was generated by the first neural network, to determine whether the second vector was generated by the first neural network; wherein the noise removal filter includes a guided filter that removes noise contained in the input image based on the boundary information, the guided filter removing the noise based on at least one of a patch size, covering at least part of the input image, and a normalization parameter that determines the coefficients of the guided filter.
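The guided filter named in claims 1 and 11 can be sketched with the standard self-guided formulation, where the window radius `r` plays the role of the claimed patch size and `eps` the normalization parameter that determines the filter coefficients. The patent gives no concrete formulation, so this is an illustrative assumption, not the claimed implementation:

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1) x (2r+1) window with edge padding (naive version)."""
    pad = np.pad(a, r, mode="edge")
    out = np.empty_like(a, dtype=float)
    h, w = a.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

def guided_filter(I, p, r, eps):
    """Self-guided guided filter.

    I   : guidance image (here the image is its own guide)
    p   : image to filter
    r   : window radius  -- the "patch size" of the claims
    eps : regularization -- the "normalization parameter" that
          determines the coefficients a and b
    """
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    var_I = box_mean(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)   # near 1 at strong edges, near 0 in flat areas
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)

# Demo: smoothing a noisy flat image reduces its variance.
rng = np.random.default_rng(0)
noisy = 0.5 + rng.normal(0.0, 0.1, (16, 16))
smoothed = guided_filter(noisy, noisy, r=2, eps=0.04)
```

Because `a` approaches 1 where local variance is large relative to `eps`, the filter preserves boundaries while averaging out flat regions, which matches the claims' description of noise removal "based on boundary information."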
Description
Apparatus operating method for image preprocessing based on a neural network and apparatus thereof

The embodiments relate to a neural-network-based image preprocessing method and an apparatus for performing it. Since it has been experimentally shown that artificial neural network structures can richly learn the feature space of data, they are actively used in research fields such as image classification, object detection, and image generation. In particular, because of their strength in processing image information, they are widely applied to object recognition and identification algorithms in systems that rely on image data, such as autonomous vehicles and drones. Furthermore, in the defect inspection of manufactured products, a key technology for smart factories, they demonstrate more efficient and effective detection performance than traditional machine learning approaches.

Various factors affect the quality of image measurements; representative examples include Gaussian noise, impulse noise, and clutter. If the quality of the input image is degraded by these factors, the accuracy of identifying objects in the image can drop significantly. Consequently, research on filtering methods (image preprocessing methods) that remove noise from images is becoming increasingly active, and various AI-based image preprocessing techniques are being developed to overcome the performance limits of existing filtering methods.

Figure 1 is a flowchart of an input image preprocessing method using a neural network in an embodiment. Figure 2 illustrates a neural network structure for input image preprocessing in an embodiment. Figure 3 is a diagram illustrating a guided filter in an embodiment. Figure 4 is a block diagram illustrating an electronic device in various embodiments.

Hereinafter, embodiments are described in detail with reference to the attached drawings.
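For illustration only (this example is not from the patent), the two noise types named above can be simulated as follows: Gaussian noise perturbs every pixel slightly, while impulse (salt-and-pepper) noise forces a small fraction of pixels to the extreme values:

```python
import numpy as np

rng = np.random.default_rng(42)
clean = np.full((16, 16), 0.5)   # a flat gray "image" with values in [0, 1]

# Gaussian noise: zero-mean additive perturbation on every pixel.
gaussian = np.clip(clean + rng.normal(0.0, 0.05, clean.shape), 0.0, 1.0)

# Impulse (salt-and-pepper) noise: ~5% of pixels forced to 0 or 1.
impulse = clean.copy()
mask = rng.random(clean.shape) < 0.05
impulse[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
```

The two corruptions call for different remedies (averaging filters suit Gaussian noise; median-style or edge-aware filters suit impulse noise), which is part of the motivation for learned, adaptive preprocessing.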
However, various modifications may be made to the embodiments, and the scope of the patent application is therefore not limited or restricted by them. It should be understood that all modifications, equivalents, and substitutions of the embodiments fall within the scope of the rights. The terms used in the embodiments are for illustrative purposes only and should not be interpreted as limiting. Singular expressions include plural expressions unless the context clearly indicates otherwise. In this specification, terms such as "comprising" or "having" indicate the existence of the features, numbers, steps, actions, components, parts, or combinations thereof described herein, and should be understood as not precluding the existence or addition of one or more other features, numbers, steps, actions, components, parts, or combinations thereof. Unless otherwise defined, all terms used herein, including technical and scientific terms, have the meanings generally understood by those skilled in the art to which the embodiments pertain. Terms defined in commonly used dictionaries should be interpreted consistently with their meaning in the context of the relevant technology, and not in an idealized or overly formal sense unless explicitly so defined in this application. In addition, when describing with reference to the attached drawings, identical components are assigned the same reference numeral regardless of drawing symbols, and redundant descriptions are omitted. In describing the embodiments, where a detailed description of related prior art could unnecessarily obscure the essence of the embodiments, that description is omitted. Terms such as first, second, A, B, (a), and (b) may be used when describing the components of the embodiments.
These terms are intended merely to distinguish one component from another, and the nature, order, or sequence of the components is not limited by them. Where a component is described as being "connected," "combined," or "coupled" to another component, it should be understood that while the component may be directly connected or coupled to that other component, a further component may also be "connected," "combined," or "coupled" between them. Components included in any one embodiment and components having common functions are described using the same names in other embodiments. Unless otherwise stated, the description of any one embodiment also applies to the other embodiments, and overlapping descriptions are omitted.

Figure 1 is a flowchart of an input image preprocessing method using a neural network in an embodiment. As adversarial generative neural networks have demonstrated successful performance in image generation and feature extraction tasks, many algorithms based on their training have been proposed. However, since traini