
CN-122023248-A - Generalized detection method for lens defects

CN122023248A

Abstract

The invention discloses a generalized detection method for lens defects. Existing defect-bearing lens images of multiple lens models, together with their corresponding defect labels, are collected as an original data set, and the defect labels in the original data set are merged and unified. Unifying the labels resolves the training difficulties caused by inconsistent annotation of multi-model lens defect data. An intensity-controllable defect generation model is then used to generate defect maps that supplement the data sets of a color-image lens defect detection model and a black-and-white-image lens defect detection model, mitigating the effect of class imbalance during training. Detection runs on two channels: color images and black-and-white images are separated and processed by their respective detection models. Together these measures markedly improve the generalization of the models over defect features and enable effective detection of previously unseen lens defects.

Inventors

  • YU HUAFU
  • HONG GE
  • WEN YONGFU

Assignees

  • 江西高瑞光电股份有限公司

Dates

Publication Date
2026-05-12
Application Date
2025-12-24

Claims (8)

  1. A generalized detection method for lens defects, characterized by comprising the following steps: collecting existing defect-bearing lens images of multiple lens models and their corresponding defect labels as an original data set, merging and unifying the defect labels in the original data set, and forming a first data set from all lens images in the original data set together with the merged and unified defect labels; adding defect intensity control parameters to a diffusion model and training it to obtain an intensity-controllable defect generation model; establishing, from the first data set, a Poisson counting model of defect type and quantity and a spatial probability model based on kernel density estimation; generating defect maps of lenses based on defect-free lens images, the intensity-controllable defect generation model, the defect type-quantity counting model, and the spatial probability model, the defect maps of all lenses forming a second data set; combining the first data set and the second data set into a third data set, dividing the third data set into a color-channel data set and a black-and-white-channel data set according to image color, and training a detection model on each to obtain a trained color-image lens defect detection model and a trained black-and-white-image lens defect detection model; and acquiring a color image and a black-and-white image of the lens under test, feeding the color image to the color-image lens defect detection model to obtain a first detection result, feeding the preprocessed black-and-white image to the black-and-white-image lens defect detection model to obtain a second detection result, screening each detection result, and combining the screened first and second detection results into the final detection result of the lens under test, wherein each detection result comprises a defect type, the position of a defect detection frame, and a confidence level.
  2. The method according to claim 1, wherein merging and unifying the defect labels comprises: standardizing and unifying each defect label; generating, with a word vector model, a corresponding text vector for each standardized defect label; calculating the inter-class divergence of each type of defect label in the original data set and normalizing it, the inter-class divergence being calculated as

     $$D_k = n_k \left\lVert \mathbf{c}_k - \mathbf{c} \right\rVert^2$$

     wherein $D_k$ denotes the inter-class divergence of the $k$-th type of defect label in the original data set, $n_k$ the number of text vectors corresponding to that label type, $\mathbf{c}_k$ the centroid of those text vectors, and $\mathbf{c}$ the centroid in vector space of the text vectors of all defect labels in the original data set; comparing the normalized inter-class divergence with a preset inter-class divergence threshold, and removing from the original data set the defect labels whose inter-class divergence exceeds the threshold; for the text vectors of the defect labels remaining in the pruned original data set, computing a first similarity between each pair of text vectors; generating, with a ResNet-50 network, a visual feature vector for each remaining defect label, and computing a second similarity between each pair of visual feature vectors; for each first similarity and second similarity, computing a composite score, comparing the composite score with a preset score threshold, and classifying the two corresponding defect labels as the same defect type when the composite score exceeds the threshold, the composite score being calculated as

     $$S_{ij} = \alpha\, s^{\text{text}}_{ij} + \beta\, s^{\text{vis}}_{ij}$$

     wherein $S_{ij}$ denotes the composite score between the $i$-th and $j$-th defect labels, $\alpha$ and $\beta$ the weights of the first and second similarity respectively, $s^{\text{text}}_{ij}$ the first similarity between the text vectors of the two defect labels, and $s^{\text{vis}}_{ij}$ the second similarity (cosine similarity) between their visual feature vectors; and resetting the defect labels of the pruned original data set according to the classification result to obtain the first data set, the defect label types of the first data set being those produced by the classification.
  3. The method according to claim 1, wherein adding defect intensity control parameters to the diffusion model and training it to obtain the intensity-controllable defect generation model comprises: collecting defect-free lens images of different lens models to form an intensity-controllable defect generation model data set, and dividing it into a training set, a validation set, and a test set; adding defect intensity control parameters, comprising defect intensity and defect scale, to the diffusion model; and training the diffusion model with the added defect intensity control parameters on the intensity-controllable defect generation model data set to obtain the trained intensity-controllable defect generation model.
  4. The method according to claim 2, wherein the spatial probability model based on kernel density estimation is expressed as

     $$p(\mathbf{x}) = \frac{1}{N} \sum_{k=1}^{N} \frac{1}{2\pi\sqrt{\lvert \Sigma_k \rvert}} \exp\!\left(-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_k)^{\top}\Sigma_k^{-1}(\mathbf{x}-\boldsymbol{\mu}_k)\right)$$

     wherein $p(\mathbf{x})$ is the probability density value at the predicted point, $N$ the total number of defect labels in the first data set, $\mathbf{x}$ the pixel coordinates of the predicted point in the lens image, $\boldsymbol{\mu}_k$ the coordinates of the center point of the region corresponding to the $k$-th defect label in the first data set, and $\Sigma_k$ the covariance matrix of the $k$-th defect class in the first data set.
  5. The method according to claim 1, wherein generating defect maps of lenses based on the defect-free lens images, the intensity-controllable defect generation model, the defect type-quantity counting model, and the spatial probability model, the defect maps of all lenses forming a second data set, comprises: taking each defect-free lens image in the test set of the intensity-controllable defect generation model data set as a base sample; for each defect-free lens image in the test set, randomly sampling defect types and their prior probabilities with the Poisson counting model of defect type and quantity, constructing from the sampling result a defect type-prior probability combination for the current defect-free lens image, the defect types in the combination belonging to the defect label types of the first data set, comparing each prior probability in the combination with a preset probability, retaining the corresponding defect type when its prior probability exceeds the preset probability, counting the number of each retained defect type, and taking all retained defect types and their counts as the defect types and counts required by the current defect-free lens image; substituting each pixel of the current defect-free lens image as a predicted point into the spatial probability model based on kernel density estimation to obtain a probability density value per pixel, and normalizing these values into probability weights, the probability weights of all pixels forming a spatial probability weight map of the same size as the current defect-free lens image; sorting the required defect types and their counts in descending order of prior probability, sequentially selecting from the spatial probability weight map, in that order, the center position of each required defect, passing the required defect types, their counts, and the center positions of the required defects into the intensity-controllable defect generation model as prompt words, taking the current defect-free lens image as the model input, and having the model output a noise distribution of the same size as the current defect-free lens image; modulating the spatial probability weight map with a dual-threshold function to obtain a modulation coefficient, expressed as

     $$m(w) = \begin{cases} 0, & w < T_l \\ \dfrac{w - T_l}{T_h - T_l}, & T_l \le w \le T_h \\ 1, & w > T_h \end{cases}$$

     wherein $m(w)$ denotes the modulation coefficient, $w$ the probability weight of a pixel of the current defect-free lens image, and $T_l$ and $T_h$ the low and high thresholds of the probability weight; weighting each pixel of the current defect-free lens image against the spatial probability weight map according to

     $$\varepsilon'(\mathbf{x}) = m\bigl(w(\mathbf{x})\bigr)\,\varepsilon_g(\mathbf{x}) + \bigl(1 - m\bigl(w(\mathbf{x})\bigr)\bigr)\,\varepsilon_0(\mathbf{x})$$

     wherein $\varepsilon'(\mathbf{x})$ denotes the weighted result at pixel $\mathbf{x}$, $\varepsilon_g(\mathbf{x})$ the noise of the output noise distribution at the current pixel, and $\varepsilon_0(\mathbf{x})$ the original noise at the current pixel of the defect-free lens image; and superimposing the weighted result onto the corresponding base sample to obtain a defect map, the defect maps corresponding to all defect-free lens images in the test set forming the second data set.
  6. The method according to claim 1, wherein the detection model, the color-image lens defect detection model, and the black-and-white-image lens defect detection model are each implemented with a YOLO11 network.
  7. The method according to claim 1, wherein the color-image lens defect detection model and the black-and-white-image lens defect detection model each use a loss function of the form

     $$L = \lambda_1 L_{\text{box}} + \lambda_2 L_{\text{obj}} + \lambda_3 L_{\text{cls}}$$

     wherein $L$ denotes the loss of the color-image or black-and-white-image lens defect detection model, $L_{\text{box}}$ the regression loss of the detection frame, $L_{\text{obj}}$ the target confidence loss, $L_{\text{cls}}$ the focal classification loss, and $\lambda_1$, $\lambda_2$, $\lambda_3$ the weight parameters. The regression loss incorporates an aspect-ratio penalty term, with weight $\eta$, and a direction-consistency penalty term, defined over the predicted detection frame $B_p$ and the ground-truth frame $B_g$, wherein $w_p$, $h_p$, $\theta_p$ denote the width, height, and angle of the predicted detection frame and $w_g$, $h_g$, $\theta_g$ the width, height, and angle of the ground-truth frame. The confidence loss incorporates a Gaussian weighting factor, the local contrast of the image, and a mapping smoothing factor. The focal classification loss takes the form

     $$L_{\text{cls}} = -\sum_{c} \alpha_c \,(1 - p_c)^{\gamma} \log p_c$$

     wherein $\alpha_c$ denotes the weight parameter of the $c$-th defect type, $\gamma$ the parameter of the modulation factor, $p_c$ the probability that the model identifies the $c$-th defect type, and $c$ indexes the defect types.
  8. The method according to claim 1, wherein, when each detection result is screened, screening is performed against a preset confidence threshold: detection results whose confidence exceeds the threshold are retained, and the rest are discarded.
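The label-merging procedure of claim 2 can be sketched in Python. All function names here are illustrative, not the patent's reference implementation; the divergence is read as a count-weighted squared distance of each class centroid from the global centroid, and the composite score as a weighted sum of text and visual cosine similarities.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def interclass_divergence(label_vectors):
    """Normalized inter-class divergence per defect label.

    One plausible reading of the patent's definition: the count-weighted
    squared distance of each label's text-vector centroid from the global
    centroid, scaled to [0, 1] by the maximum.
    """
    all_vecs = [v for vecs in label_vectors.values() for v in vecs]
    g = centroid(all_vecs)
    div = {}
    for label, vecs in label_vectors.items():
        c = centroid(vecs)
        div[label] = len(vecs) * sum((ci - gi) ** 2 for ci, gi in zip(c, g))
    scale = max(div.values()) or 1.0
    return {k: v / scale for k, v in div.items()}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def composite_score(text_i, text_j, vis_i, vis_j, alpha=0.5, beta=0.5):
    """S_ij = alpha * first (text) similarity + beta * second (visual) similarity."""
    return alpha * cosine(text_i, text_j) + beta * cosine(vis_i, vis_j)
```

Labels whose normalized divergence exceeds the preset threshold would be pruned before pairwise scoring, and pairs whose composite score exceeds the score threshold would be merged into one defect type.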
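The kernel-density spatial probability model of claim 4 can be sketched as a mixture of per-defect 2-D Gaussians evaluated at every pixel and normalized into a weight map, as claim 5 requires. This is a minimal pure-Python version; the function names and the grid-based normalization are illustrative assumptions.

```python
import math

def gaussian2d(x, y, mu, cov):
    """Density of a 2-D Gaussian at (x, y), covariance given as [[a, b], [b, c]]."""
    (a, b), (_, c) = cov
    det = a * c - b * b
    dx, dy = x - mu[0], y - mu[1]
    # quadratic form (dx, dy) * inv(cov) * (dx, dy)^T for a 2x2 matrix
    q = (c * dx * dx - 2 * b * dx * dy + a * dy * dy) / det
    return math.exp(-0.5 * q) / (2 * math.pi * math.sqrt(det))

def spatial_weight_map(width, height, defects):
    """p(x) = (1/N) * sum_k N(x; mu_k, Sigma_k) over pixels, then
    normalized so the probability weights over the image sum to 1."""
    n = len(defects)
    p = [[sum(gaussian2d(x, y, mu, cov) for mu, cov in defects) / n
          for x in range(width)] for y in range(height)]
    total = sum(sum(row) for row in p)
    return [[v / total for v in row] for row in p]
```

The resulting map peaks where historical defects cluster, so sampled defect centers favor realistic positions.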
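The dual-threshold modulation and noise weighting of claim 5 amount to a per-pixel linear ramp that blends generated defect noise into the original image noise. A minimal sketch, assuming the ramp is 0 below the low threshold, linear between the thresholds, and 1 above the high threshold (names and default thresholds are illustrative):

```python
def modulate(w, t_low, t_high):
    """Dual-threshold ramp: 0 below t_low, linear in between, 1 above t_high."""
    if w < t_low:
        return 0.0
    if w > t_high:
        return 1.0
    return (w - t_low) / (t_high - t_low)

def blend_noise(weight_map, gen_noise, orig_noise, t_low=0.2, t_high=0.8):
    """Per pixel: m * generated defect noise + (1 - m) * original noise."""
    out = []
    for wrow, grow, orow in zip(weight_map, gen_noise, orig_noise):
        row = []
        for w, g, o in zip(wrow, grow, orow):
            m = modulate(w, t_low, t_high)
            row.append(m * g + (1 - m) * o)
        out.append(row)
    return out
```

Pixels in low-probability regions keep their original noise, so generated defects appear only where the spatial model says defects are plausible.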
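Claim 7 describes the classification term as a focal loss with per-class weights; a standard focal form consistent with the listed symbols (per-class weight, modulation-factor parameter, predicted class probability) is sketched below, together with the top-level weighted sum. The exact confidence and box-regression terms are not spelled out here, and all names are illustrative.

```python
import math

def focal_class_loss(probs, target, class_weights, gamma=2.0):
    """Focal loss for one prediction: -alpha_c * (1 - p_c)^gamma * log(p_c)
    for the target class c. A standard focal form; a sketch, not the
    patent's exact sub-term."""
    p = probs[target]
    return -class_weights[target] * (1 - p) ** gamma * math.log(max(p, 1e-12))

def total_loss(l_box, l_obj, l_cls, lambdas=(1.0, 1.0, 1.0)):
    """L = lambda1 * L_box + lambda2 * L_obj + lambda3 * L_cls."""
    return lambdas[0] * l_box + lambdas[1] * l_obj + lambdas[2] * l_cls
```

The `(1 - p_c)^gamma` factor down-weights easy examples, which is what makes the loss useful for the sparse defect classes the background section highlights.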

Description

Generalized detection method for lens defects

Technical Field

The invention belongs to the technical field of optical lens defect detection, and particularly relates to a generalized detection method for lens defects.

Background

With the rapid development of the optical manufacturing industry, appearance inspection of optical lenses plays a key role in quality improvement. Lens defect detection mainly relies on supervised deep learning models, whose training efficiency and performance depend heavily on annotation quality and sample distribution. In practical production, however, the prior art faces the following prominent problems:

  1. The defect label systems of different lens models are not uniform. Lenses of different models are annotated by different people or teams, and the labels are seriously inconsistent: for example, "dirt", "spot dirt", "block dirt", "pit", and "sheet dirt" all point to the same type of defect, while labels such as "reverse" and "mouth watermark" apply only to specific lens models. The training data are therefore hard to unify, which seriously hampers model training.
  2. The distribution of defect types is extremely unbalanced, and sparse classes lack samples. Dirt-type defect samples dominate lens defect data sets while scratches, bubbles, and similar defects are scarce, so model recall on the sparse classes is low.
  3. Lens images come from diverse sources (color and black-and-white), so the feature expression of a model differs markedly between them: color images carry RGB color information while black-and-white images rely mainly on texture and brightness gradients, and a single model struggles to handle both image types at once.

Disclosure of Invention

The invention aims to solve the problems described above and provides a generalized detection method for lens defects.
In order to achieve the above purpose, the invention adopts the following technical scheme. The invention provides a generalized detection method for lens defects, comprising the following steps: collecting existing defect-bearing lens images of multiple lens models and their corresponding defect labels as an original data set, merging and unifying the defect labels in the original data set, and forming a first data set from all lens images in the original data set together with the merged and unified defect labels; adding defect intensity control parameters to a diffusion model and training it to obtain an intensity-controllable defect generation model; establishing, from the first data set, a Poisson counting model of defect type and quantity and a spatial probability model based on kernel density estimation; generating defect maps of lenses based on defect-free lens images, the intensity-controllable defect generation model, the defect type-quantity counting model, and the spatial probability model, the defect maps of all lenses forming a second data set; combining the first data set and the second data set into a third data set, dividing the third data set into a color-channel data set and a black-and-white-channel data set according to image color, and training a detection model on each to obtain a trained color-image lens defect detection model and a trained black-and-white-image lens defect detection model; and acquiring a color image and a black-and-white image of the lens under test, feeding the color image to the color-image lens defect detection model to obtain a first detection result, feeding the preprocessed black-and-white image to the black-and-white-image lens defect detection model to obtain a second detection result, screening each detection result, and combining the screened first and second detection results into the final detection result of the lens under test, wherein each detection result comprises a defect type, the position of a defect detection frame, and a confidence level.

Preferably, merging and unifying the defect labels comprises: standardizing and unifying each defect label; generating, with a word vector model, a corresponding text vector for each standardized defect label; calculating the inter-class divergence of each type of defect label in the original data set and normalizing it, the inter-class divergence being calculated as $D_k = n_k \lVert \mathbf{c}_k - \mathbf{c} \rVert^2$, wherein $D_k$ denotes the inter-class divergence of the $k$-th type of defect label in the original data set, $n_k$ the number of text vectors corresponding to that label type, and $\mathbf{c}_k$ the centroid of the text vector
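The final screening and merging step of the method reduces to a confidence filter per channel followed by a union of the surviving detections. A minimal sketch, with hypothetical result dictionaries holding the defect type, detection-frame position, and confidence named in claim 1:

```python
def screen(results, conf_threshold):
    """Keep only detections whose confidence exceeds the threshold."""
    return [r for r in results if r["confidence"] > conf_threshold]

def merge_results(color_results, bw_results, conf_threshold=0.5):
    """Screen the color-channel and black-and-white-channel detections,
    then union them into the final result for the lens under test."""
    return screen(color_results, conf_threshold) + screen(bw_results, conf_threshold)
```

Each channel's model only sees its own image type, so the union covers defects visible in color (RGB cues) as well as those visible mainly in texture and brightness gradients.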