CN-116485749-B - Self-encoder-based method for identifying dirt in lens module

Abstract

The invention relates to the technical field of camera inspection, in particular to a self-encoder-based method for identifying dirt in a lens module. It addresses the problem that existing approaches require manual intervention when acquiring lens images and are therefore unsuitable for automated production lines. The technical scheme comprises the following steps: S1, collecting sample pictures of a plurality of clean lenses; S2, constructing a self-coding neural network whose structure comprises an encoder and a decoder; S3, preprocessing the input image and cropping the lens area from the picture; S4, training the self-coding network; S5, training a lens-dirt classifier. The beneficial effect of the invention is that it can automatically identify whether the lens in a finished module is dirty, without manual intervention.

Inventors

  • Hu Bin
  • Yao Zhangyan
  • Li Hao
  • Li Yuehua

Assignees

  • Nantong University (南通大学)

Dates

Publication Date
2026-05-05
Application Date
2023-04-23

Claims (1)

  1. A method for identifying dirt in a lens module based on a self-encoder, characterized by comprising the following steps:

S1, collecting sample pictures of a plurality of clean lenses; S2, constructing a self-coding neural network, the network structure consisting of an encoder and a decoder; S3, preprocessing an input image and cropping the lens area from the image; S4, training the self-coding network; S5, training a lens-dirt classifier.

Step S2 specifically comprises: the input x passes through an encoder E to obtain f = E(x), and then through a decoder D to obtain the output x' = D(f), where the encoder and the decoder have symmetrical structures; x is the picture input to the encoder E and x' is the picture output by the decoder D.

Step S3 specifically comprises:

S3.1, applying an edge detection algorithm to the input picture;

S3.2, applying the Hough transform to the result of step S3.1 to find the circular regions in the image, selecting the circle with the smallest distance to a predefined circle (x_0, y_0, r_0) as the lens area, and obtaining the circle (x_d, y_d, r_d) in which the lens is located; here (x_0, y_0) are the predefined center coordinates and r_0 the predefined radius. A set S of C circles is obtained by a circle-fitting algorithm; the set is traversed, and for an element (x_c, y_c, r_c) of the set, where (x_c, y_c) are the center coordinates and r_c the radius:

(1) initialize Δ = +∞ and D = +∞, where Δ measures the radius difference and D the center difference;

(2) compute the radius difference Δ_c = |r_0 − r_c| and the center distance D_c = √((x_0 − x_c)² + (y_0 − y_c)²); if Δ_c < Δ and D_c < D, update Δ = Δ_c and D = D_c; D_c denotes the distance between the center of a circle in the set and the predefined center;

(3) after traversing the set S, the parameters attaining the minimum values computed in step (2) give the circle (x_d, y_d, r_d) in which the lens is located, where (x_d, y_d) are its center coordinates and r_d its radius.

S3.3, cropping the lens area picture from the picture according to the result of step S3.2, with the crop width and height determined by predefined parameters; setting the pixels outside the circular area in the crop to 0, and scaling the crop to width W and height H.

Step S4 specifically comprises:

S4.1, reading an input picture from the sample set and feeding the picture x cropped in step S3 into the network to obtain the output x';

S4.2, computing the reconstruction error Loss = ‖x − x'‖_1, the L1 loss, where N is the number of training samples, x_i and x'_i are the values of the ith pixel of x and x', and P is the pixel set;

S4.3, updating the encoder and decoder parameters by gradient descent;

S4.4, repeating steps S4.1–S4.3 until the model converges, yielding the encoder parameters E_theta.

Step S5 specifically comprises:

S5.1, randomly selecting M pictures from the sample set and feeding them to the encoder with parameters E_theta obtained in step S4, obtaining the feature set {f_1, f_2, …, f_M}, where M is the number of selected pictures and f_i is the feature of the ith sample picture;

S5.2, training a classifier using the feature set of step S5.1 as the input of a one-class SVM.
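As an illustration of the reconstruction error of step S4.2, a minimal sketch of the per-pixel L1 loss; the mean over the pixel set is an assumption, since the exact normalization formula is not reproduced in the source:

```python
import numpy as np

def l1_loss(x, x_rec):
    # Step S4.2: Loss = ||x - x'||_1, taken here as the mean absolute
    # per-pixel difference over the pixel set P (normalization assumed).
    return float(np.mean(np.abs(x - x_rec)))

# Hypothetical 3-pixel example: an input and its reconstruction.
x = np.array([0.2, 0.5, 0.9])
x_rec = np.array([0.1, 0.5, 0.7])
print(l1_loss(x, x_rec))  # mean of 0.1, 0.0, 0.2, i.e. about 0.1
```

A clean lens reconstructs with a small loss; dirt that the autoencoder never saw during training reconstructs poorly, which is what makes this error usable downstream.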

Description

Self-encoder-based method for identifying dirt in lens module

Technical Field

The invention relates to the technical field of camera inspection, in particular to a self-encoder-based method for identifying dirt in a lens module.

Background

One prior application discloses a method for detecting whether an optical lens has dirt adhering to it: an image capturing unit is focused on the peripheral area of the optical lens to generate a first original image, and on the optical area to generate a second original image; image homogenization is applied to both to obtain a first and a second homogenized image; image processing then yields first and second image data; each pixel gray-scale value of the first and second image data is compared with a first threshold to judge whether the edge detection area and the central detection area of the optical lens contain dirt. Another application, published 2021-07-20, discloses a camera-lens dirt processing method, a mobile terminal and a computer storage medium; with that approach, dirt on the camera lens can be found in time and handled accordingly, improving shooting effect and efficiency.

At present, the lens module may pick up dirt on the lens during production and assembly, which affects the quality of the finished product. Conventional lens-dirt detection only tests individual lenses and often cannot make a determination on finished modules.
The prior art mainly uses traditional image processing to judge whether a lens is dirty. For example, CN103245676B analyzes the mean pixel value, which is easily affected by the environment; CN113141462A requires manual intervention when acquiring lens images and is unsuitable for automated production lines.

Disclosure of Invention

The invention aims to provide a self-encoder-based method for identifying dirt in a lens module. To achieve this aim, the technical scheme adopted by the invention comprises the following steps: S1, collecting sample pictures of a plurality of clean lenses; S2, constructing a self-coding neural network whose structure consists of an encoder and a decoder; S3, preprocessing an input image and cropping the lens area from the image; S4, training the self-coding network; S5, training a lens-dirt classifier.

Step S2 specifically comprises: the input x passes through an encoder E to obtain f = E(x), and then through a decoder D to obtain the output x' = D(f), where the encoder and the decoder have symmetrical structures; x is the picture input to the encoder E and x' is the picture output by the decoder D.
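The encoder/decoder relation of step S2 can be sketched as follows. This is a minimal single-linear-layer sketch in NumPy: the patent fixes only the symmetric encoder/decoder structure, not layer sizes or types, so the dimensions and weight initialization here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_CODE = 16, 4                         # hypothetical dimensions
W_e = rng.normal(0.0, 0.1, (D_CODE, D_IN))   # encoder E weights
W_d = rng.normal(0.0, 0.1, (D_IN, D_CODE))   # decoder D weights, symmetric to E

def encode(x):
    return W_e @ x    # f = E(x)

def decode(f):
    return W_d @ f    # x' = D(f)

x = rng.normal(size=D_IN)      # stand-in for a flattened lens picture
x_rec = decode(encode(x))      # x' = D(E(x))
assert x_rec.shape == x.shape  # reconstruction matches the input shape
```

In practice the encoder and decoder would be convolutional stacks trained on the cropped lens pictures, but the compose-then-reconstruct contract is the same.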
Step S3 specifically comprises:

S3.1, applying an edge detection algorithm to the input picture.

S3.2, applying the Hough transform to the result of step S3.1 to find the circular regions in the image, selecting the circle with the smallest distance to the predefined circle (x_0, y_0, r_0) as the lens area image, and obtaining the circle (x_d, y_d, r_d) in which the lens is located; here (x_0, y_0) are the predefined center coordinates and r_0 the predefined radius. A set S of C circles is obtained by a circle-fitting algorithm. The set is traversed; for an element (x_c, y_c, r_c), where (x_c, y_c) are the center coordinates and r_c the radius:

(1) Initialize Δ = +∞ and D = +∞. The fitted set of circles is traversed to find the circle with the smallest radius difference and center difference, i.e. the circle closest to the predefined circle.

(2) Compute the absolute radius difference Δ_c = |r_0 − r_c| and the center distance D_c = √((x_0 − x_c)² + (y_0 − y_c)²). If Δ_c < Δ and D_c < D, update Δ = Δ_c and D = D_c. Each circle in the set is compared with the known circle by radius difference and center distance to find the circle in the set with the smallest deviation from the known circle.

(3) After traversing the set S, the parameters attaining the minimum values computed in step (2) give the circle (x_d, y_d, r_d) in which the lens is located, where (x_d, y_d) are its center coordinates and r_d its radius.

S3.3, cropping the lens area picture from the picture according to the result of step S3.2.
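Steps (1)–(3) of S3.2 can be sketched in pure Python as follows: select, from the fitted set S, the circle closest to the predefined circle (x_0, y_0, r_0). The function name and sample candidate circles are hypothetical:

```python
import math

def select_lens_circle(circles, x0, y0, r0):
    """Steps (1)-(3) of S3.2: pick from the fitted set S the circle
    closest to the predefined circle (x0, y0, r0)."""
    best = None
    delta, dist = math.inf, math.inf    # (1) initialize delta = +inf, D = +inf
    for (xc, yc, rc) in circles:        # traverse the set S
        delta_c = abs(r0 - rc)                  # radius difference
        d_c = math.hypot(x0 - xc, y0 - yc)      # center distance
        if delta_c < delta and d_c < dist:      # (2) joint update rule
            delta, dist = delta_c, d_c
            best = (xc, yc, rc)                 # (3) current minimizer
    return best

# Hypothetical Hough/fitting output: three candidate circles.
candidates = [(12.0, 10.0, 30.0), (10.5, 10.2, 20.5), (50.0, 40.0, 5.0)]
print(select_lens_circle(candidates, 10.0, 10.0, 20.0))
# → (10.5, 10.2, 20.5): smallest radius and center deviation
```

Note that the patent's update rule requires both the radius difference and the center distance to improve simultaneously, so a candidate that improves only one of the two criteria is skipped.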