
CN-121258962-B - PLC vision detection linkage control method and system based on AI image recognition

CN 121258962 B

Abstract

The invention belongs to the technical field of industrial vision detection and discloses a PLC vision detection linkage control method and system based on AI image recognition. The method comprises: automatically adjusting the exposure parameters of a camera based on an evaluation of the real-time brightness and contrast of an industrial product image to be detected, to obtain a corrected industrial product image; extracting illumination-invariant features from the corrected image to generate illumination-invariant feature vectors; inputting the original image of the same industrial product to be detected and the corresponding illumination-invariant feature vector into a pre-trained feature optimization model and outputting an optimized illumination-invariant feature vector; and inputting industrial product images shot under different exposure conditions into a pre-trained neural network model for fusion to obtain a shared feature vector. The method addresses the missed detections that illumination changes cause for traditional vision algorithms in high-speed production.

Inventors

  • GUO CHENGGUANG
  • DONG XUEJIAO

Assignees

  • 广成工业技术(苏州)有限公司 (Guangcheng Industrial Technology (Suzhou) Co., Ltd.)

Dates

Publication Date
2026-05-12
Application Date
2025-10-24

Claims (9)

  1. A PLC vision detection linkage control method based on AI image recognition, characterized by comprising the following steps: automatically adjusting the exposure parameters of a camera based on an evaluation of the real-time brightness and contrast of an industrial product image to be detected, to obtain a corrected industrial product image; extracting illumination-invariant features from the corrected industrial product image to generate illumination-invariant feature vectors; inputting an original image of the same industrial product to be detected and the corresponding illumination-invariant feature vector into a pre-trained feature optimization model, and outputting an optimized illumination-invariant feature vector, wherein the training method of the feature optimization model comprises the following steps: constructing a neural network model comprising an input layer, a hidden layer and an output layer; collecting a plurality of groups of industrial product images shot under different illumination conditions to obtain a training data set covering different exposure conditions, and dividing the data into a training set, a validation set and a test set; denoising and normalizing each image in the training set to obtain a preprocessed image and a corresponding illumination-invariant feature vector; applying enhancement processing to each training image under different illumination conditions to obtain enhanced images, extracting illumination-invariant features from each enhanced image, and generating enhanced illumination-invariant feature vectors; inputting the illumination-invariant feature vectors of the original image and the enhanced image into the neural network model for forward propagation; calculating a loss function from the difference between the output of the neural network model and the ground-truth label, wherein the objective of the loss function is to minimize the similarity error of the feature vectors between the enhanced images; updating the weight parameters of the neural network model from the calculated loss value using a back propagation algorithm; evaluating the training process on the validation set and adjusting the hyperparameters of the neural network; and obtaining a trained feature optimization model through multiple rounds of training and optimization; inputting the industrial product images shot under different exposure conditions into a pre-trained neural network model for fusion to obtain a shared feature vector, wherein the shared feature vector is obtained by extracting intermediate features at the hidden layer of the neural network model and fusing the intermediate features at the output layer, the intermediate features being extracted from the hidden layer and comprising edge shape information, texture continuity information and corner feature information shared by the images under different exposures; and inputting the optimized illumination-invariant feature vector and the shared feature vector into a pre-trained defect detection model, and outputting the confidence level and defect position of each detected defect.
  2. The PLC vision detection linkage control method based on AI image recognition according to claim 1, wherein: the edge shape information comprises pixel features that reflect, via gradient amplitude and direction, the outer edge and contour changes of the industrial product to be detected; the texture continuity information comprises pixel features that describe, via a local binary pattern, the gray-level distribution of adjacent pixels so as to represent the trend of the fine surface texture; and the corner feature information comprises pixel features that identify, via a corner response value, the positions of geometric boundary points and represent the neighborhood gray-level variation.
  3. The PLC vision detection linkage control method based on AI image recognition according to claim 1, wherein the method of automatically adjusting the exposure parameters of the camera comprises: converting the industrial product image to be detected from the RGB color space to a grayscale color space to obtain a grayscale image; averaging the gray values of all pixels of the grayscale image to obtain a brightness evaluation value; obtaining a contrast evaluation value from the standard deviation of the gray values of all pixels of the grayscale image; subtracting the brightness evaluation value from a preset target brightness value to obtain a brightness error; subtracting the contrast evaluation value from a preset target contrast value to obtain a contrast error; multiplying preset linear proportional coefficients by the brightness error and the contrast error respectively to obtain an exposure time parameter and a gain parameter as the exposure adjustment parameters; and applying the exposure adjustment parameters to the camera exposure control to obtain the corrected industrial product image.
  4. The PLC vision detection linkage control method based on AI image recognition according to claim 1, wherein the method of generating the illumination-invariant feature vector comprises: denoising and normalizing the industrial product image to be detected to obtain a preprocessed image; performing a gradient operation on the preprocessed image to obtain a gradient map; performing a local binary pattern calculation on the preprocessed image to obtain a texture feature map; performing Harris corner detection on the preprocessed image to obtain a corner feature map; and concatenating the pixel values of the gradient map, the texture feature map and the corner feature map in a fixed order to obtain the illumination-invariant feature vector.
  5. The PLC vision detection linkage control method based on AI image recognition according to claim 4, wherein the method of obtaining the gradient map comprises: performing a convolution operation on the preprocessed image with a Sobel operator to calculate the horizontal and vertical gradients of the preprocessed image; and combining the horizontal and vertical gradients to obtain the gradient amplitude and direction of the preprocessed image, forming the gradient map.
  6. The PLC vision detection linkage control method based on AI image recognition according to claim 4, wherein the method of obtaining the texture feature map comprises: dividing the preprocessed image into a plurality of small blocks, and comparing each pixel in each small block with its adjacent pixels to generate binary values; and computing the binary value of each small block to generate the texture feature map.
  7. The PLC vision detection linkage control method based on AI image recognition according to claim 4, wherein the method of obtaining the corner feature map comprises: calculating a corner response value for each pixel of the preprocessed image using the Harris corner detection algorithm; and selecting the pixels whose response values are larger than a set response threshold as corner points, forming the corner feature map.
  8. The PLC vision detection linkage control method based on AI image recognition according to claim 1, wherein the training method of the neural network model comprises: collecting a plurality of groups of industrial product images shot under different exposure conditions, and dividing each group of images into a training set, a validation set and a test set; constructing the neural network model framework, and determining the structures and parameters of its input layer, hidden layer and output layer; performing forward propagation on the neural network model with the training-set images, and calculating an initial loss value between the output of the neural network model and the ground-truth labels; updating the weight parameters of the neural network model from the initial loss value through a back propagation algorithm; periodically evaluating the performance of the neural network model with the validation-set images during training, and adjusting the hyperparameters of the model according to the evaluation results; repeating the steps of forward propagation, back propagation, weight updating and performance evaluation until the performance of the neural network model on the validation set reaches a preset convergence condition; and testing the neural network model with the test-set images to obtain the trained neural network model.
  9. A PLC vision detection linkage control system based on AI image recognition for implementing the method according to any one of claims 1 to 8, characterized by comprising: an illumination correction module for automatically adjusting the exposure parameters of the camera based on an evaluation of the real-time brightness and contrast of the industrial product image to be detected, to obtain a corrected industrial product image; a feature extraction module for extracting illumination-invariant features from the corrected industrial product image and generating illumination-invariant feature vectors; a feature optimization module for inputting the original image of the same industrial product to be detected and the corresponding illumination-invariant feature vector into the pre-trained feature optimization model and outputting the optimized illumination-invariant feature vector, wherein the training method of the feature optimization model comprises: constructing a neural network model comprising an input layer, a hidden layer and an output layer; collecting a plurality of groups of industrial product images shot under different illumination conditions to obtain a training data set covering different exposure conditions, and dividing the data into a training set, a validation set and a test set; denoising and normalizing each image in the training set to obtain a preprocessed image and a corresponding illumination-invariant feature vector; applying enhancement processing to each training image under different illumination conditions to obtain enhanced images, extracting illumination-invariant features from each enhanced image, and generating enhanced illumination-invariant feature vectors; inputting the illumination-invariant feature vectors of the original image and the enhanced image into the neural network model for forward propagation; calculating a loss function from the difference between the output of the neural network model and the ground-truth label, wherein the objective of the loss function is to minimize the similarity error of the feature vectors between the enhanced images; updating the weight parameters of the neural network model from the calculated loss value using a back propagation algorithm; evaluating the training process on the validation set and adjusting the hyperparameters of the neural network; and obtaining a trained feature optimization model through multiple rounds of training and optimization; a feature fusion module for inputting the industrial product images shot under different exposure conditions into a pre-trained neural network model for fusion to obtain a shared feature vector, wherein the shared feature vector is obtained by extracting intermediate features at the hidden layer of the neural network model and then fusing the intermediate features at the output layer; and a defect detection module for inputting the optimized illumination-invariant feature vector and the shared feature vector into a pre-trained defect detection model and outputting the confidence level and defect position of each detected defect.
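
As an illustrative aid (not part of the patent text), the proportional exposure control described in claim 3 can be sketched in Python. The target brightness/contrast values and the linear proportional coefficients below are assumed placeholders; the patent leaves them as unspecified presets.

```python
import numpy as np

# Assumed preset values for illustration only; the patent does not fix them.
TARGET_BRIGHTNESS = 128.0  # preset target brightness value
TARGET_CONTRAST = 50.0     # preset target contrast value
K_EXPOSURE = 0.05          # linear proportional coefficient for exposure time
K_GAIN = 0.02              # linear proportional coefficient for gain

def exposure_adjustment(image_rgb: np.ndarray) -> tuple[float, float]:
    """Return (exposure-time adjustment, gain adjustment) for one RGB frame."""
    # RGB -> grayscale using the ITU-R BT.601 luma weights
    gray = image_rgb @ np.array([0.299, 0.587, 0.114])
    brightness = gray.mean()  # brightness evaluation value (mean gray level)
    contrast = gray.std()     # contrast evaluation value (std of gray levels)
    brightness_error = TARGET_BRIGHTNESS - brightness
    contrast_error = TARGET_CONTRAST - contrast
    # products of the linear proportional coefficients with the two errors
    return K_EXPOSURE * brightness_error, K_GAIN * contrast_error
```

The two returned deltas would then be written to the camera's exposure-time and gain registers before the next frame is captured.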
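
The gradient/LBP/Harris feature extraction of claims 4 to 7 might look like the following numpy-only sketch. The kernel sizes, the Harris constant k, and the min-max normalization step are illustrative choices (the patent specifies the operators but not their parameters), and denoising is omitted for brevity.

```python
import numpy as np

def _box3(a):
    """3x3 box filter with edge padding (used to smooth the Harris products)."""
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def sobel_magnitude(img):
    """Gradient magnitude from horizontal/vertical Sobel responses
    (the gradient direction from claim 5 is omitted here for brevity)."""
    p = np.pad(img, 1, mode="edge")
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return np.hypot(gx, gy)

def lbp_map(img):
    """8-neighbour local binary pattern codes (claim 6)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    code = np.zeros((h, w), dtype=np.int64)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        code |= (neigh >= img).astype(np.int64) << bit
    return code

def harris_response(img, k=0.04):
    """Harris corner response map (claim 7), using central differences."""
    p = np.pad(img, 1, mode="edge")
    ix = (p[1:-1, 2:] - p[1:-1, :-2]) / 2.0
    iy = (p[2:, 1:-1] - p[:-2, 1:-1]) / 2.0
    sxx, syy, sxy = _box3(ix * ix), _box3(iy * iy), _box3(ix * iy)
    return sxx * syy - sxy * sxy - k * (sxx + syy) ** 2

def illumination_invariant_vector(img):
    """Normalize, then concatenate the three maps in a fixed order (claim 4)."""
    img = img.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    return np.concatenate([sobel_magnitude(img).ravel(),
                           lbp_map(img).ravel().astype(np.float64),
                           harris_response(img).ravel()])
```

Because the min-max normalization cancels any global affine illumination change (a positive scale plus an offset), the resulting vector is identical for a frame and its uniformly brightened copy, which is the "illumination invariance" the claims rely on.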
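
One plausible reading of the training objective in claims 1 and 9 (minimizing "the similarity error of the feature vectors between the enhanced images") is a consistency loss on normalized vectors. The exact formula is not fixed by the patent; the mean squared difference of L2-normalized vectors below is an assumption for illustration.

```python
import numpy as np

def consistency_loss(f_orig: np.ndarray, f_enh: np.ndarray) -> float:
    """Similarity error between the original-view and enhanced-view feature
    vectors: mean squared difference after L2 normalization (assumed form)."""
    a = f_orig / (np.linalg.norm(f_orig) + 1e-12)
    b = f_enh / (np.linalg.norm(f_enh) + 1e-12)
    return float(np.mean((a - b) ** 2))
```

During training, this scalar would be backpropagated through the feature optimization network so that differently illuminated views of the same product converge to the same feature vector.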
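
The fusion step in claims 1 and 9 (extracting intermediate features at a hidden layer and fusing them at the output layer) could be sketched as follows. All layer sizes, the tanh activation, the averaging fusion, and the random weights are hypothetical placeholders; the patent does not disclose the network's dimensions or fusion operator.

```python
import numpy as np

rng = np.random.default_rng(0)
W_HIDDEN = rng.normal(scale=0.1, size=(64, 16))  # input -> hidden layer (assumed sizes)
W_OUT = rng.normal(scale=0.1, size=(16, 8))      # hidden -> output layer (assumed sizes)

def shared_feature_vector(exposure_views):
    """exposure_views: list of flattened 64-pixel images of the same product
    taken under different exposure conditions; returns one fused vector."""
    # intermediate features extracted at the hidden layer, one per view
    hidden = [np.tanh(v @ W_HIDDEN) for v in exposure_views]
    # fuse across views before the output layer (mean is one simple choice)
    fused = np.mean(hidden, axis=0)
    return fused @ W_OUT
```

In the claimed system this shared vector is concatenated with the optimized illumination-invariant vector and passed to the defect detection model.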

Description

PLC vision detection linkage control method and system based on AI image recognition

Technical Field

The invention relates to the technical field of industrial vision detection, in particular to a PLC vision detection linkage control method and system based on AI image recognition.

Background

In current high-speed discrete manufacturing, illuminance, light-source angle and shadow distribution change continuously with the production takt, and traditional vision algorithms based on fixed thresholds or a single preprocessing step struggle to output reliable judgments continuously within the PLC cyclic-scan constraint, so missed detections and false detections occur alternately, affecting the product pass rate and production efficiency. The Chinese patent with grant publication number CN119205777B provides an automatic detection method and system for surface defects of a flexible touch screen, comprising: S1, compensating the illumination of a surface image of the flexible touch screen with the Retinex method and converting it to the HSV color space to obtain a converted image; S2, calculating a directional feature map based on the converted image; S3, constructing a Gaussian mixture model based on the directional feature map and clustering the features with an expectation-maximization algorithm to obtain an initial defect area; S4, optimizing the initial defect area with morphological processing to obtain an optimized defect area; S5, splicing the converted image and the optimized defect area to obtain an input image, and performing defect detection on the input image with a multi-scale feature fusion network based on deep learning.
Although Retinex can balance global brightness to a certain extent, its parameters must be set manually according to the on-site illumination, and when the light-source angle or local shadows change sharply, the compensation result is easily over-enhanced or loses detail, making the HSV threshold judgment inaccurate. In addition, that patent uses serial CPU computation, so the overall inference latency cannot stay synchronous with the 4 ms-level PLC scan cycle, and no interface scheme cooperating with the real-time control logic is provided. The prior-art publication CN116030002A provides an image defect classification method and system based on high-dynamic-range imaging, in which an industrial camera captures mode-field images of a material film to be detected while the film is moving; bright-field and dark-field images are extracted from the mode-field images and spliced into a bright-field group image and a dark-field group image respectively; a bright-field defect area is identified in the bright-field group image; defect detection is performed on the bright-field group image according to the illumination-field defect area and a preset defect detection rule to obtain a defect detection area; and defect classification is performed on the dark-field group image according to the defect detection area and a preset defect classification rule to obtain a defect classification result. This method can expand the gray-scale range of the image and relieve local overexposure or underexposure.
However, the multi-exposure frames are realized by mechanical shutter switching or light-source switching, so the sampling window is often longer than 30 ms, which is unsuitable for high-speed lines running above 120 ppm; moreover, the HDR results are fed directly into a traditional convolutional network for classification without deep feature-level fusion of the different exposure views, which limits the algorithm's ability to distinguish complex texture defects. This solution also lacks a real-time data write-back mechanism to the PLC, making it difficult to build closed-loop rejection control. Comparing the two, patent CN119205777B focuses on single-frame global compensation and struggles with local dynamic shadows, while patent CN116030002A focuses on hardware multi-exposure, has a long sampling period and cannot fuse features deeply. Both rely on fixed thresholds or static network weights, lack an adaptive online correction strategy for production, and do not consider the cooperative implementation of illumination-invariant feature embedding, consistency-regularization training and OPC UA/PLC edge inference. Therefore, the prior art cannot effectively solve the instability that illumination changes cause in industrial visual detection systems; in particular, on high-speed production lines, the missed detections and false detections caused by illumination fluctuation are not fully addressed.