CN-115761241-B - Image enhancement method and application thereof

Abstract

The application discloses an image enhancement method and its application. The image enhancement method applies constant-brightness processing to a captured image via a constructed camera response model; the enhanced image is then converted into a grayscale matrix image and denoised. The application further provides an edge extraction method built on the image enhancement method, which performs edge extraction with a new semantic edge detection (SED) model based on dynamic feature fusion: the denoised grayscale image is semantically segmented, converted into a binary image, the amplitude scales of its multi-layer features are normalized, and dynamic feature fusion is applied. By upgrading both the image enhancement and edge extraction methods, the application has broad application prospects, can in particular meet the demands of large-batch, high-precision product size detection, and reduces manual workload.

Inventors

  • LI JUN
  • GAO YIN
  • LI QIMING
  • XIE YINHUI

Assignees

  • 闽都创新实验室 (Mindu Innovation Laboratory)

Dates

Publication Date
2026-05-08
Application Date
2022-11-08

Claims (12)

  1. An image enhancement method, comprising: (1) acquiring a captured image, selecting a camera response model, and computing its model parameters to obtain an enhanced image, wherein step (1) comprises: (11) decomposing the captured image into a reflection component and an illumination component to obtain a reflection image; (12) computing an exposure-rate map and a reflectivity map from the reflection image, wherein the reflectivity map R is the reflection component of the image and is obtained by inverting the expression for the reflection image I (formula omitted), in which R is the reflectivity map, T is the illumination, t = log(T), i = log(I), r = log(R) is the logarithmic reflectivity, c1 and c2 are positive parameters, u is a supplementary variable, v is an error term, n is the iteration count, and λ is the weight coefficient, with a further threshold (symbol omitted) segmenting foreground from background; and wherein the exposure-rate map is computed by a formula (omitted) in which I is the reflection image and I_min is the illumination minimum; (13) obtaining the logarithmic reflectivity from the reflectivity map and computing a spatial variation function; (14) computing the probability density function of the reconstructed image from the spatial variation function; (15) computing a mapping function from that probability density function; (16) feeding the captured image into the mapping function and enhancing it according to the exposure-rate map and the image intensity to obtain the enhanced image; and (2) converting the enhanced image into a grayscale matrix image and denoising the grayscale matrix image.
  2. The image enhancement method according to claim 1, wherein the spatial variation function is computed by a formula (omitted) in which q denotes the coordinates of a pixel, N(q) is the set of coordinates adjacent to q, U(·) is an upsampling operator with factor 2^i, i is the resolution level, L is the total number of levels, and l is the number of terms over which the variable is geometrically averaged.
  3. The image enhancement method according to claim 2, wherein the probability density function of the reconstructed image, denoted p_a(k), is computed by a formula (omitted) in which δ denotes the Kronecker delta, a(q) is the intensity of each pixel q with k ∈ [0, K], and K is the total number of intensities.
  4. The image enhancement method according to claim 3, wherein the mapping function is computed by a formula (omitted) in which the cumulative distribution function is denoted P_a(k) and P_b(k) denotes the cumulative distribution function of the output image.
  5. The image enhancement method according to claim 4, wherein the enhancement according to the exposure-rate map and the image intensity uses a formula (omitted) to obtain the enhanced image, in which P is the image intensity, S is the exposure-rate map, and ⊘ denotes element-wise division.
  6. The image enhancement method according to claim 1, wherein the denoising uses a physics-based noise formation model for extreme low-light raw-image denoising.
  7. An edge extraction method, comprising, after the image enhancement method according to any one of claims 1 to 6, the steps of: performing semantic segmentation on the denoised grayscale image; converting the segmented image into a binary image; and normalizing the amplitude scales of the multi-layer features of the binary image and then performing dynamic feature fusion to obtain the required edge features, thereby extracting the image edges.
  8. The edge extraction method according to claim 7, wherein the semantic segmentation of the denoised grayscale image comprises: feeding the denoised grayscale image sequentially into a deep convolutional neural network (DCNN) and an atrous convolution to extract features; and post-processing with a fully connected CRF model to obtain the semantic segmentation result, namely: the atrous convolution outputs a coarse segmentation result; the coarse segmentation result is restored to the resolution of the original image by bilinear interpolation; and the result is fed into the fully connected CRF model to obtain the semantic segmentation result; the segmented image is then converted into a binary image by adaptive thresholding.
  9. The edge extraction method according to claim 7, wherein the dynamic feature fusion uses machine learning to predict adaptive fusion weights for different positions of the multi-layer feature maps, employing at least one of position-invariant fusion weights and position-adaptive fusion weights: position-invariant fusion weights treat all positions in a feature map equally and adaptively learn a universal fusion weight from the input; position-adaptive fusion weights are adjusted adaptively according to the image features at each position, increasing the contribution of low-level features to accurately localizing edges along the target contour.
  10. A method of applying the edge extraction method according to any one of claims 7 to 9 to product specification detection, comprising: capturing an image of the product under test and calibrating the camera; obtaining the contour of the product under test by the edge extraction method; and converting the contour of the product under test into a specification according to the scale of the camera-calibrated calibration plate, to obtain the actual specification of the product under test.
  11. The method according to claim 10, wherein capturing the image of the product under test comprises: acquiring an image of the product under test with a monocular industrial camera; and calibrating the acquired image according to a single-point distortion-free camera imaging model.
  12. The application method according to claim 10, wherein the actual specification of the product under test includes at least one of its actual area and its actual length and width; the actual area is computed by: measuring the area inside the largest contour of the product to obtain the area represented by its pixels in the image, then scaling according to the area of the camera-calibrated calibration-plate scale to obtain the actual area; the actual length and width are computed by: determining the centre point of the minimum bounding rectangle of the product's rotated contour and the coordinates of the four corner points of the calibration frame, drawing the calibration frame, and using the standard size of the calibration plate as the scale to obtain the actual length and width.
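The amplitude-scale normalization and position-invariant fusion weights described in claims 7 and 9 can be sketched as follows. This is an illustrative stand-in, not the patent's implementation: the patent learns the weights with machine learning, and its position-adaptive variant would predict a separate weight per location rather than one scalar per layer.

```python
import numpy as np

def fuse_features(layers, weights):
    """Fuse multi-layer edge-feature maps: normalize each layer's
    amplitude scale, then combine with per-layer (position-invariant)
    fusion weights."""
    fused = np.zeros_like(layers[0], dtype=float)
    for layer, w in zip(layers, weights):
        # amplitude-scale normalization: divide by the layer's peak magnitude
        scale = max(float(np.abs(layer).max()), 1e-12)
        fused += w * (layer / scale)
    return fused
```

After normalization each layer contributes on the same scale, so the fusion weights alone control how much low-level detail versus high-level semantics reaches the final edge map.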
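The scale conversion of claims 10 to 12 reduces to multiplying pixel measurements by the calibration-plate scale. A minimal sketch, with function names and the millimetres-per-pixel parameter chosen for illustration (they do not appear in the patent):

```python
def real_area(pixel_area, mm_per_px):
    # an area scales by the square of the linear scale factor
    return pixel_area * mm_per_px ** 2

def real_length_width(w_px, h_px, mm_per_px):
    # side lengths of the minimum bounding rectangle, in millimetres
    return w_px * mm_per_px, h_px * mm_per_px
```

For example, if a 10 mm calibration square spans 200 px, the scale is 0.05 mm/px, so a contour covering 40 000 px² corresponds to about 100 mm².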

Description

Image enhancement method and application thereof

Technical Field

The application belongs to the field of intelligent industry, and in particular relates to an image enhancement method and its application.

Background

At present, in target screening and sorting, detection of a product's area and size mainly depends on a die and requires manual intervention, with the final detection calibration finished using a graduated instrument, so efficiency and accuracy are low. Among image enhancement methods, contrast enhancement and histogram equalization cannot suppress noise; smoothing can remove noise, but in doing so it shifts edge positions in the result image and blurs or even loses detail, further blurring image edges. Nonlinear filtering better preserves edge position and detail, but the algorithm is harder to implement than linear filtering. Among edge extraction methods, the Sobel operator is not very accurate, while the Laplacian operator is relatively sensitive to noise.

Disclosure of Invention

According to one aspect of the present application, an image enhancement method is provided that reveals details hidden in images while preserving their naturalness, so that the images appear more visually attractive and scientifically useful. The image enhancement method comprises the following steps: (1) acquiring a captured image, selecting a camera response model, and computing its model parameters to obtain an enhanced image; (2) converting the enhanced image into a grayscale matrix image and denoising the grayscale matrix image.
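The two-stage flow of steps (1) and (2) can be sketched as follows. This is a minimal stand-in, assuming BT.601 luminance weights for the grayscale-matrix conversion and a 3×3 box filter in place of the patent's raw-image denoising model:

```python
import numpy as np

def to_gray(rgb):
    # ITU-R BT.601 luminance weights as a stand-in for the patent's
    # (unspecified) grayscale-matrix conversion
    return rgb @ np.array([0.299, 0.587, 0.114])

def box_denoise(gray):
    # 3x3 box filter as a simple placeholder for the denoising step
    p = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += p[dy:dy + h, dx:dx + w]
    return out / 9.0
```

In the patent the enhanced image would be produced first (via the camera response model) and only then converted and denoised; the two helpers above cover the conversion and denoising stages.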
Preferably, step (1) comprises: (11) decomposing the captured image into a reflection component and an illumination component to obtain a reflection image; (12) computing an exposure-rate map and a reflectivity map from the reflection image; (13) obtaining the logarithmic reflectivity from the reflectivity map and computing a spatial variation function; (14) computing the probability density function of the reconstructed image from the spatial variation function; (15) computing a mapping function from that probability density function; (16) feeding the captured image into the mapping function and enhancing it according to the exposure-rate map and the image intensity to obtain the enhanced image.

Preferably, the reflectivity map is the reflection component of the image and is obtained by inverting the following expression for the reflection image I (formula omitted), subject to r_n ≤ 0 and t ≤ i_n, where R is the reflectivity map, T is the illumination, t = log(T), i = log(I), r = log(R) is the logarithmic reflectivity, c1 and c2 are positive parameters, u is a supplementary variable, and v is an error term.
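As a rough illustration of step (11) and the log-domain variables above, consider the standard decomposition I = T·R: taking logarithms gives i = t + r, so the log reflectance is r = i − t. The exposure-rate helper below is a simple stand-in that clamps the illumination at a minimum I_min, since the patent's actual formula is not reproduced in this record:

```python
import numpy as np

def log_reflectance(image, illumination, eps=1e-6):
    # i = log(I), t = log(T); log reflectance r = i - t (since I = T * R)
    i = np.log(np.maximum(image, eps))
    t = np.log(np.maximum(illumination, eps))
    return i - t

def exposure_rate(illumination, i_min=0.01):
    # illustrative stand-in: brighten dark regions in inverse
    # proportion to the illumination, clamped at i_min
    return 1.0 / np.maximum(illumination, i_min)
```

With I = 0.5 and T = 0.25, the recovered reflectance exp(r) is 2, as expected from R = I / T.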
n is the iteration count and λ is the weight coefficient.

Preferably, the exposure-rate map is computed by a formula (omitted) in which I is the reflection image and I_min is the illumination minimum.

Preferably, the spatial variation function is computed by a formula (omitted) in which q denotes the coordinates of a pixel, N(q) is the set of coordinates adjacent to q, U(·) is an upsampling operator with factor 2^i, i is the resolution level, L is the total number of levels, and l is the number of terms over which the variable is geometrically averaged.

Preferably, the probability density function pdf of the reconstructed image, denoted p_a(k), is computed by a formula (omitted) in which δ denotes the Kronecker delta, a(q) is the intensity of each pixel q with k ∈ [0, K], and K is the total number of intensities.

Preferably, the mapping function is computed by a formula (omitted) in which the cumulative distribution function cdf is denoted P_a(k) and P_b(k) denotes the cumulative distribution function of the output image.

Preferably, the enhancement according to the exposure-rate map and the image intensity J(T) uses a formula (omitted) to obtain the enhanced image, where P is the image intensity, S is the exposure-rate map, and ⊘ denotes element-wise division.

Preferably, the denoising uses a physics-based noise formation model for extreme low-light raw-image denoising.
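The pdf/cdf machinery above is, in essence, histogram equalization. A minimal sketch, assuming 8-bit intensities and the Kronecker-delta counting of p_a(k):

```python
import numpy as np

def equalize(gray, K=256):
    # p_a(k): fraction of pixels whose intensity equals k
    # (the Kronecker-delta sum over pixels q)
    pdf = np.bincount(gray.ravel(), minlength=K) / gray.size
    # P_a(k): cumulative distribution, used as the mapping function
    cdf = np.cumsum(pdf)
    # map each intensity through the cdf back onto [0, K-1]
    return np.round(cdf[gray] * (K - 1)).astype(np.uint8)
```

Because the cdf is monotone, the mapping preserves intensity ordering while spreading the occupied intensity levels across the full output range.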
According to a further aspect of the present application, an edge extraction method is provided, comprising, after the image enhancement method above: performing semantic segmentation on the denoised grayscale image; converting the segmented image into a binary image; and normalizing the amplitude scales of the multi-layer features of the binary image and then performing dynamic feature fusion to obtain the required edge features, thereby extracting the image edges. Preferably, the seman