CN-122024257-A - Digital ID recognition system for components on automatic assembly line

CN 122024257 A

Abstract

The invention discloses a digital ID recognition system for parts on an automatic assembly line, and relates to the technical field of digital recognition. The system comprises a vision sensor module, an industrial personal computer, a trained picture digital recognition model and a digital recognition module. The vision sensor module is arranged on a fixed support directly above the automatic assembly line and shoots, in real time, a part shell picture of each moving part on the assembly line; the part shell picture contains the digital ID of the part. The industrial personal computer preprocesses the obtained part shell picture to obtain a plurality of preprocessed segmentation blocks and their corresponding position codes, inputs the preprocessed segmentation blocks into the trained picture digital recognition model to obtain the number corresponding to each segmentation block, and combines the recognized numbers with the corresponding position codes to obtain the digital ID of the part contained in the part shell picture. The system dynamically identifies component IDs on the assembly line accurately and efficiently and supports automatic feeding.

Inventors

  • LI SHUAI
  • YU WENHUA
  • WANG JUN
  • ZHAO KAI
  • SHI KAIXUAN
  • ZHANG HUAWEI
  • CUI XIAOYAN
  • SUN SHILONG
  • WANG ZHI

Assignees

  • Beijing Huahang Radio Measurement Research Institute (北京华航无线电测量研究所)

Dates

Publication Date
2026-05-12
Application Date
2024-11-25
Priority Date
2024-11-11

Claims (10)

  1. A digital ID recognition system for components on an automated assembly line, comprising: a vision sensor module, arranged on a fixed bracket directly above the automatic assembly line and used for shooting, in real time, a part shell picture of a moving part on the assembly line, wherein the part shell picture contains the digital ID of the part; and an industrial personal computer, used for preprocessing the obtained part shell picture to obtain a plurality of preprocessed segmentation blocks and corresponding position codes, inputting the preprocessed segmentation blocks into a trained picture digital recognition model for recognition to obtain the number corresponding to each segmentation block, and combining the recognized numbers with the corresponding position codes to obtain the digital ID of the part contained in the part shell picture.
  2. The system of claim 1, further comprising: a data storage module, used for storing the part shell pictures shot in real time by the vision sensor module, the recognized digital IDs of the parts, and the system operation log; and a user visualization module, used for setting a main product ID and a digital ID set of the parts belonging to the main product ID, and matching the digital ID of each part recognized in real time against the digital ID set; if the matching succeeds, the assembly line feeds the part automatically and displays success, otherwise the part that failed to match enters the assembly line isolation section and warning information is displayed.
  3. The system of claim 1, wherein the vision sensor module transmits the part shell pictures acquired in real time to the industrial personal computer in real time through a built-in Wi-Fi module.
  4. The system of claim 1, wherein the picture digital recognition model recognizes the part shell picture based on a residual convolution network and comprises an input layer module, a convolution calculation module, a residual calculation module, a flattening module, a full connection module and an output layer module; inputting the preprocessed segmentation blocks into the trained picture digital recognition model comprises: inputting the preprocessed segmentation blocks into the convolution calculation module through the input layer module, and sequentially extracting the convolution extraction features of the picture through a plurality of convolution layers, wherein a Swish activation function layer and a pooling layer are arranged behind each convolution layer; inputting the convolution extraction features into the residual calculation module, and calculating the residual extraction features of the picture through a plurality of residual blocks; inputting the residual extraction features into the flattening module, and flattening the multidimensional feature map of the residual extraction features into a one-dimensional feature vector; classifying the one-dimensional feature vector through the full connection module; and converting, by the output layer module, the classified one-dimensional feature vector into a probability distribution over the classes using a Softmax function.
  5. The system of claim 4, wherein the convolution calculation module comprises a first, a second and a third convolution layer in series, each followed by a Swish activation function layer and a pooling layer; the first convolution layer receives the preprocessed segmentation blocks and extracts primary features of the picture using 32 convolution kernels, and the primary features are Swish-activated and pooled to obtain first convolution features; the number of convolution kernels of the second convolution layer is increased to 64 to further extract more complex local features from the first convolution features, and the local features are Swish-activated and pooled to obtain second convolution features; the number of convolution kernels of the third convolution layer is further increased to 128 to extract deep features from the second convolution features, and the deep features are Swish-activated and pooled to obtain third convolution features, which serve as the convolution extraction features output by the convolution calculation module; wherein the Swish activation function dynamically adjusts the activation strength based on the input features, and the pooling layers reduce the spatial dimensions of the input features.
  6. The system of claim 4, wherein the residual calculation module comprises first, second and third residual blocks in parallel; each residual block comprises, in sequence, a first convolution layer, a ReLU activation layer, a second convolution layer, a BN batch normalization layer, a summation layer and a ReLU activation layer, and the input of each residual block is skip-connected to its summation layer; each convolution layer of the first residual block comprises 128 convolution kernels of size 3×3 for extracting preliminary complex features of the convolution extraction features as first residual features; each convolution layer of the second residual block comprises 256 convolution kernels of size 5×5 for extracting deeper features of the first residual features as second residual features; each convolution layer of the third residual block comprises 512 convolution kernels of size 7×7 for extracting high-level features of the second residual features as third residual features; the first, second and third residual features are stitched to obtain stitched features; and the number of channels of the convolution extraction features is adjusted with a 1×1 convolution to match the dimensions of the stitched features, followed by a skip connection with the stitched features to obtain the residual extraction features of the picture.
  7. The system of claim 6, wherein, for an input x to the residual calculation module, the first, second and third residual features output by the first, second and third residual blocks are stitched as y1 = F1(x), y2 = F2(x), y3 = F3(x), y = [y1; y2; y3], wherein y1, y2 and y3 are the first, second and third residual features respectively, y is the result of stitching the three, and F1(x), F2(x) and F3(x) are the results of processing x through the first convolution layer, ReLU activation layer, second convolution layer and BN batch normalization layer of the first, second and third residual blocks respectively.
  8. The system of claim 1, wherein the picture digital recognition model is trained by: constructing a training data set of the picture digital recognition model, wherein the training data set comprises sample pictures and corresponding sample labels; training the picture digital recognition model based on the training data set, performing gradient fusion of the cross entropy loss and the mean square error loss as the loss function, and continuously adjusting the parameters through backpropagation and gradient descent optimization to minimize the loss function; and, after the loss function converges, ending the training and saving the parameters of the picture digital recognition model.
  9. The system of claim 8, wherein the cross entropy loss L_CE and the mean square error loss L_MSE are gradient fused as the loss function: ∇L = ∇L_CE/||∇L_CE|| + ∇L_MSE/||∇L_MSE||, wherein ∇L_CE and ∇L_MSE are the gradients of the cross entropy loss and the mean square error loss with respect to the picture digital recognition model parameters, and ||∇L_CE|| and ||∇L_MSE|| are the norms of those gradients; the cross entropy loss is L_CE = -(1/N) Σ_{i=1..N} Σ_{c=1..C} y_ic log(p_ic); the mean square error loss is L_MSE = (1/N) Σ_{i=1..N} (y_i − ŷ_i)²; wherein N is the total number of samples, C is the total number of categories, y_i and ŷ_i are the true and predicted values of the i-th sample, y_ic is an indicator variable (y_ic = 1 if sample i belongs to class c, otherwise y_ic = 0), and p_ic is the probability with which the model predicts that sample i belongs to class c.
  10. The system of claim 1, wherein preprocessing the part shell picture comprises: converting the RGB color part shell picture into a grayscale picture; binarizing the grayscale picture to obtain a binarized picture; denoising the binarized picture; performing connected region analysis on the denoised binarized picture, and identifying and marking all connected white regions; screening out candidate regions conforming to digit characteristics by comparing the area and shape of the connected white regions with predefined digit characteristic thresholds; taking each screened candidate digit region as a segmentation block to obtain a plurality of segmentation blocks containing digits, wherein the center position coordinates of each segmentation block serve as its position code; and unifying the sizes of the segmentation blocks to a predetermined picture size.
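The preprocessing pipeline of claim 10 can be sketched roughly as below: grayscale conversion, fixed-threshold binarization, connected-region analysis by flood fill, area-based screening, and resizing each block to a unified size. The threshold of 127, the area bounds, the 4-connectivity, and the 8×8 block size are illustrative assumptions, not values disclosed in the patent.

```python
import numpy as np
from collections import deque

def preprocess(rgb, min_area=4, max_area=400, out_size=8):
    """Sketch of the claim-10 pipeline: grayscale -> binarize -> connected
    white regions -> screen by digit-like area -> crop and resize each
    region to a unified block size; all thresholds are illustrative."""
    gray = rgb.mean(axis=2)                     # RGB -> grayscale
    binary = (gray > 127).astype(np.uint8)      # fixed-threshold binarization
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    blocks, next_label = [], 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not labels[sy, sx]:
                next_label += 1
                pixels, q = [], deque([(sy, sx)])
                labels[sy, sx] = next_label
                while q:  # BFS flood fill over 4-connected white pixels
                    y, x = q.popleft()
                    pixels.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not labels[ny, nx]:
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                if not (min_area <= len(pixels) <= max_area):
                    continue                    # reject non-digit-like regions
                ys, xs = zip(*pixels)
                crop = binary[min(ys):max(ys)+1, min(xs):max(xs)+1]
                # nearest-neighbour resize to the unified block size
                iy = np.arange(out_size) * crop.shape[0] // out_size
                ix = np.arange(out_size) * crop.shape[1] // out_size
                centre = (sum(xs) / len(xs), sum(ys) / len(ys))  # position code
                blocks.append((crop[np.ix_(iy, ix)], centre))
    return blocks

# Synthetic check: one white "digit" blob on a dark background.
img = np.zeros((20, 20, 3))
img[2:8, 2:8, :] = 255
blocks = preprocess(img)
print(len(blocks), blocks[0][0].shape)  # -> 1 (8, 8)
```

A real implementation would likely use OpenCV (`cv2.threshold`, `cv2.connectedComponentsWithStats`, `cv2.resize`) instead of the hand-rolled flood fill; the pure-NumPy version above is only meant to make the claimed steps concrete.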

Description

Digital ID recognition system for components on an automatic assembly line

Technical Field

The invention belongs to the technical field of automatic assembly lines, and particularly relates to a digital ID recognition system for components on an automatic assembly line.

Background

With the rapid development of industrial production in the field of automated assembly lines, recognizing the digital IDs printed on components from pictures is a key technique for ensuring correct assembly and traceability of components. Traditional image recognition technology is mainly based on manually designed feature extraction and matching methods, such as the Hough transform and edge detection. These methods require manual design of features and are inefficient and less accurate for digit recognition tasks in complex images. With the development of pattern recognition and artificial intelligence, digit recognition in images has advanced to a certain extent, mainly based on machine learning methods such as neural networks and decision trees, for example multi-layer perceptrons and regression trees. These methods can learn features automatically, but for large-scale, high-dimensional image data they still cannot satisfy the real-time and accuracy requirements of component digital ID recognition on a pipeline. Existing machine learning methods perform poorly when processing large-scale, high-dimensional image data, especially when several automatic assembly lines run simultaneously. On an automated assembly line, images must be processed and digital IDs recognized in real time, but the prior art cannot meet these real-time requirements, especially on high-speed assembly lines.
Disclosure of the Invention

In view of the above analysis, embodiments of the invention provide a digital ID recognition system for components on an automated assembly line, to solve the technical problems of low recognition efficiency and low accuracy for the digits on component shells on automated assembly lines in the prior art. To solve these technical problems, the main technical scheme adopted by the invention is a digital ID recognition system for components on an automated assembly line, comprising: a vision sensor module, arranged on a fixed bracket directly above the automatic assembly line and used for shooting, in real time, a part shell picture of a moving part on the assembly line, wherein the part shell picture contains the digital ID of the part; and an industrial personal computer, used for preprocessing the obtained part shell picture to obtain a plurality of preprocessed segmentation blocks and corresponding position codes, inputting the preprocessed segmentation blocks into a trained picture digital recognition model for recognition to obtain the number corresponding to each segmentation block, and combining the recognized numbers with the corresponding position codes to obtain the digital ID of the part contained in the part shell picture.
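The matching-and-routing behaviour of the user visualization module (recognized ID checked against the configured digital ID set, with automatic feeding on success and isolation on failure) can be sketched as follows. The function name, return values, and example IDs are assumptions made purely for illustration.

```python
# Illustrative sketch of the matching logic: the recognised digital ID is
# checked against the digital ID set configured for the main product.

def route_part(recognized_id, product_id_set):
    """Return 'feed' if the ID is in the configured set, else 'isolate'."""
    if recognized_id in product_id_set:
        return "feed"     # assembly line feeds the part automatically
    return "isolate"      # part is diverted to the isolation section

id_set = {"100372", "100373", "100379"}
print(route_part("100379", id_set))  # -> feed
print(route_part("999999", id_set))  # -> isolate
```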
Further, the system further comprises: a data storage module, used for storing the part shell pictures shot in real time by the vision sensor module, the recognized digital IDs of the parts, and the system operation log; and a user visualization module, used for setting a main product ID and a digital ID set of the parts belonging to the main product ID, and matching the digital ID of each part recognized in real time against the digital ID set; if the matching succeeds, the assembly line feeds the part automatically and displays success, otherwise the part that failed to match enters the assembly line isolation section and warning information is displayed.

Further, the vision sensor module transmits the part shell pictures acquired in real time to the industrial personal computer in real time through a built-in Wi-Fi module.

Further, the picture digital recognition model recognizes the part shell picture based on a residual convolution network and comprises an input layer module, a convolution calculation module, a residual calculation module, a flattening module, a full connection module and an output layer module; inputting the preprocessed segmentation blocks into the trained picture digital recognition model comprises: inputting the preprocessed segmentation blocks into the convolution calculation module through the input layer module, and sequentially extracting the convolution extraction features of the picture through a plurality of convolution layers, wherein a Swish activation function layer and a pooling layer are arranged behind each convolution layer; inputting the convolution extraction features into the residual calculation module, and calculating the residual extraction features of the picture through a plurality of residual blocks
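The stitching and skip connection described in claims 6 and 7 (y = [F1(x); F2(x); F3(x)], added to a 1×1-projected copy of the input x) can be sketched in NumPy. Note the branches below are stand-in 1×1 channel mixes with ReLU, not the patent's 3×3/5×5/7×7 convolution + BN stacks; only the concatenation, projection, and summation structure is being illustrated.

```python
import numpy as np

def conv1x1(x, weight):
    """1x1 convolution = per-pixel channel mixing: (C_in,H,W) -> (C_out,H,W)."""
    return np.tensordot(weight, x, axes=([1], [0]))

def residual_module(x, branches, proj):
    """Sketch of the claim-6/7 stitching: concatenate the three branch
    outputs along the channel axis, project the input x with a 1x1
    convolution to the same channel count, and add (skip connection)."""
    ys = [b(x) for b in branches]        # y1, y2, y3 = F1(x), F2(x), F3(x)
    y = np.concatenate(ys, axis=0)       # stitched feature [y1; y2; y3]
    return conv1x1(x, proj) + y          # jump connection with input x

rng = np.random.default_rng(0)
c_in, h, w = 64, 8, 8
x = rng.standard_normal((c_in, h, w))
# Stand-in branches producing the claimed 128/256/512 output channels.
branches = [
    (lambda W: (lambda t: np.maximum(conv1x1(t, W), 0)))(rng.standard_normal((c, c_in)))
    for c in (128, 256, 512)
]
proj = rng.standard_normal((128 + 256 + 512, c_in))  # 1x1 projection for x
out = residual_module(x, branches, proj)
print(out.shape)  # -> (896, 8, 8)
```

The 1×1 projection exists only to make the channel count of x (here 64) match the stitched feature's 128 + 256 + 512 = 896 channels so the two can be summed.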