
CN-121616591-B - Adhesive quality detection method and device based on image recognition


Abstract

The application discloses an adhesive quality detection method and device based on image recognition, belonging to the technical field of adhesive quality detection. The detection method comprises: acquiring a digital image of a target adhesive to be subjected to quality detection, and acquiring a trained recognition network, wherein the recognition network is configured to receive the digital image and generate category information for the digital image, thereby completing the quality detection of the target adhesive. The recognition network is provided with a feature capture module comprising a mutation capture unit, a semantic capture unit, and an adjacent fusion unit which are connected with one another; the mutation capture unit and the semantic capture unit both contain convolution layers. The recognition network can simultaneously capture local mutation defects and overall semantic-distribution defects, greatly improving the accuracy and reliability of adhesive quality recognition.

Inventors

  • He Bin
  • Liu Renlong
  • Teng Xurong
  • Fan Long
  • Luo Yongli

Assignees

  • Chongqing University

Dates

Publication Date
2026-05-12
Application Date
2026-02-02

Claims (8)

  1. An adhesive quality detection method based on image recognition, characterized by comprising the following steps: acquiring a digital image of a target adhesive to be subjected to quality detection, and acquiring a trained recognition network, wherein the recognition network is configured to receive the digital image and generate category information for the digital image, thereby completing the quality detection of the target adhesive; the recognition network is provided with a feature capture module, the feature capture module comprises a mutation capture unit, a semantic capture unit and an adjacent fusion unit which are connected with one another, the mutation capture unit and the semantic capture unit each contain convolution layers, the mutation capture unit captures image mutation information and generates a mutation feature map, the semantic capture unit captures image semantic-distribution information and generates a semantic feature map, and the adjacent fusion unit matches and fuses the mutation feature map with the semantic feature map; the calculation process of the mutation capture unit comprises: performing first capture processing, second capture processing and third capture processing on the feature map received by the mutation capture unit, correspondingly generating a first feature map, a second feature map and a third feature map, wherein the first capture processing contains a common convolution layer, the second capture processing contains a point-by-point convolution layer, and the third capture processing maps feature values in the feature map to weight values; generating a first transition feature based on the first feature map and the second feature map; performing fourth capture processing on the first feature map to generate a fourth feature map, and performing fifth capture processing on the second feature map to generate a fifth feature map; generating a second transition feature based on the fourth feature map and the fifth feature map; and generating the mutation feature map based on the third feature map, the first transition feature and the second transition feature; the calculation process of the semantic capture unit comprises: respectively extracting the spatial distribution characteristics of the first transition feature and the second transition feature, generating a corresponding first distribution feature and second distribution feature; multiplying the fourth feature map element-wise with the second distribution feature to generate a first structural feature; multiplying the fifth feature map element-wise with the first distribution feature to generate a second structural feature; performing semantic capture processing on the first structural feature and the second structural feature respectively, generating a corresponding third structural feature and fourth structural feature; and fusing the third structural feature with the fourth structural feature to generate the semantic feature map (a code sketch of these units follows the claims).
  2. The method of claim 1, wherein one or more of the following conditions are met: A. the first capture processing comprises a common convolution layer and an activation layer connected in series; B. the second capture processing comprises a point-by-point convolution layer and an activation layer connected in series; C. the third capture processing comprises a common convolution layer and a softmax layer connected in series; D. the fourth capture processing comprises a common convolution layer and an activation layer connected in series; E. the fifth capture processing comprises a point-by-point convolution layer and an activation layer connected in series.
  3. The method of claim 1, wherein one or more of the following conditions are met: F. generating the first transition feature based on the first feature map and the second feature map comprises taking the element-wise difference of the first feature map and the second feature map and activating the result with a Tanh function; G. generating the second transition feature based on the fourth feature map and the fifth feature map comprises taking the element-wise difference of the fourth feature map and the fifth feature map and activating the result with a Tanh function; H. generating the mutation feature map based on the third feature map, the first transition feature and the second transition feature comprises multiplying corresponding elements of the third feature map, the first transition feature and the second transition feature and then activating the result to generate the mutation feature map.
  4. The method of claim 1, wherein one or more of the following conditions are met: I. extracting the spatial distribution characteristics of the first transition feature and of the second transition feature comprises processing each with a channel global pooling layer and an activation layer connected in series; J. the semantic capture processing comprises processing with a dimension-reduction convolution layer and an activation layer connected in series; K. fusing the third structural feature with the fourth structural feature comprises concatenating the third structural feature and the fourth structural feature and then processing the result with a common convolution layer and an activation layer connected in series.
  5. The method of claim 1, wherein the calculation process of the adjacent fusion unit comprises: generating a first adjacent feature based on the first distribution feature and the second distribution feature; multiplying the first adjacent feature element-wise with the mutation feature map and then activating the result to generate a second adjacent feature; and, after pooling, adding the second adjacent feature element-wise to the semantic feature map to generate a matching feature map, completing the matched fusion of the mutation feature map and the semantic feature map.
  6. The method of claim 5, wherein corresponding elements of the first distribution feature and the second distribution feature are added and the result is activated to generate the first adjacent feature.
  7. The method of claim 1, wherein the recognition network is further provided with a classification module, and the classification module generates the category information of the digital image based on the matched-fusion result of the adjacent fusion unit.
  8. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when executing the computer program.
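For concreteness, the following is a minimal PyTorch sketch of the feature capture module described in claims 1 to 6. It is one possible reading, not the patented implementation: the claims specify only the layer types (common convolution, point-by-point convolution, Tanh, softmax, channel global pooling, dimension-reduction convolution) and the order of operations, so all channel counts, kernel sizes, the ReLU/Sigmoid activation choices, the softmax dimension, and the stride-1 average pooling in the fusion unit are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MutationCaptureUnit(nn.Module):
    # Captures local abrupt-change ("mutation") information (claims 1-3).
    def __init__(self, ch):
        super().__init__()
        self.cap1 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())  # A: common conv + activation
        self.cap2 = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.ReLU())             # B: point-by-point conv + activation
        self.cap3 = nn.Conv2d(ch, ch, 3, padding=1)                            # C: common conv; softmax applied in forward
        self.cap4 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())  # D: common conv + activation
        self.cap5 = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.ReLU())             # E: point-by-point conv + activation

    def forward(self, x):
        f1, f2 = self.cap1(x), self.cap2(x)
        f3 = torch.softmax(self.cap3(x), dim=1)  # C: map feature values to weights (channel dim is an assumption)
        t1 = torch.tanh(f1 - f2)                 # F: first transition feature
        f4, f5 = self.cap4(f1), self.cap5(f2)
        t2 = torch.tanh(f4 - f5)                 # G: second transition feature
        m = torch.relu(f3 * t1 * t2)             # H: element-wise product, then activation
        return m, t1, t2, f4, f5

class SemanticCaptureUnit(nn.Module):
    # Captures overall semantic-distribution information (claims 1 and 4).
    def __init__(self, ch):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # I: "channel global pooling" read as per-channel global average pooling
        self.reduce1 = nn.Sequential(nn.Conv2d(ch, ch // 2, 1), nn.ReLU())  # J: dimension-reduction conv + activation
        self.reduce2 = nn.Sequential(nn.Conv2d(ch, ch // 2, 1), nn.ReLU())
        self.fuse = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())  # K: concat -> common conv + activation

    def forward(self, t1, t2, f4, f5):
        d1 = torch.sigmoid(self.pool(t1))  # first distribution feature
        d2 = torch.sigmoid(self.pool(t2))  # second distribution feature
        s1 = f4 * d2                       # first structural feature (element-wise, broadcast over space)
        s2 = f5 * d1                       # second structural feature
        u1, u2 = self.reduce1(s1), self.reduce2(s2)  # third / fourth structural features
        sem = self.fuse(torch.cat([u1, u2], dim=1))  # semantic feature map
        return sem, d1, d2

class AdjacentFusionUnit(nn.Module):
    # Matches and fuses the mutation and semantic feature maps (claims 5-6).
    def forward(self, m, sem, d1, d2):
        a1 = torch.sigmoid(d1 + d2)  # claim 6: add distribution features element-wise, then activate
        a2 = torch.relu(a1 * m)      # second adjacent feature
        # Claim 5: pool the second adjacent feature, then add it to the semantic
        # feature map; a stride-1 average pool keeps the spatial sizes aligned.
        return F.avg_pool2d(a2, kernel_size=3, stride=1, padding=1) + sem

class FeatureCaptureModule(nn.Module):
    # Wires the three interconnected units together (claim 1).
    def __init__(self, ch):
        super().__init__()
        self.mutation = MutationCaptureUnit(ch)
        self.semantic = SemanticCaptureUnit(ch)
        self.fusion = AdjacentFusionUnit()

    def forward(self, x):
        m, t1, t2, f4, f5 = self.mutation(x)
        sem, d1, d2 = self.semantic(t1, t2, f4, f5)
        return self.fusion(m, sem, d1, d2)  # matching feature map

Note that the two branches share the transition features t1/t2 and the intermediate maps f4/f5, which is what allows the adjacent fusion unit to match the mutation and semantic outputs element by element.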

Description

Adhesive quality detection method and device based on image recognition

Technical Field

The invention belongs to the technical field of adhesive quality detection, and in particular relates to an adhesive quality detection method and device based on image recognition.

Background

As an important functional material, adhesives are widely used in many core fields such as electronics, aerospace, building materials, medical health, and automobile manufacturing, and their quality stability directly determines the bonding reliability, structural safety, and service life of the end product. In actual production, however, adhesives readily develop quality defects, including insufficient material uniformity, particle agglomeration, gas-phase inclusions, and embedded foreign impurities, under the influence of factors such as raw-material quality, the control of stirring, temperature, and pressure parameters, and the cleanliness of the production environment. Adhesive quality inspection has long relied on the traditional mode of "manual visual observation plus off-line sampling", which is costly, inefficient, and prone to missed judgments.

Disclosure of the Invention

In view of the above, the invention provides an adhesive quality detection method and device based on image recognition, which uses computer vision to recognize acquired adhesive images, thereby achieving automatic, real-time detection of adhesive quality and providing a new technical path for intelligent production and inspection in the adhesive industry.

The adhesive quality detection method based on image recognition comprises the following steps: acquiring a digital image of a target adhesive to be subjected to quality detection, and acquiring a trained recognition network, wherein the recognition network is configured to receive the digital image and generate category information for the digital image, thereby completing the quality detection of the target adhesive (a minimal inference sketch is given below); the recognition network is provided with a feature capture module comprising a mutation capture unit, a semantic capture unit, and an adjacent fusion unit which are connected with one another; the mutation capture unit and the semantic capture unit each contain convolution layers; the mutation capture unit captures image mutation information and generates a mutation feature map; the semantic capture unit captures image semantic-distribution information and generates a semantic feature map; and the adjacent fusion unit matches and fuses the mutation feature map with the semantic feature map.
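As a concrete illustration of this pipeline, the following minimal sketch acquires one digital image, runs it through a trained recognition network, and reads off the category information. The label set, input size, preprocessing, and the detect_quality function name are illustrative assumptions; the patent does not fix any of them.

import torch
from PIL import Image
from torchvision import transforms

CLASSES = ["qualified", "bubble", "impurity", "agglomeration"]  # hypothetical category set

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # assumed network input size
    transforms.ToTensor(),
])

def detect_quality(image_path, net):
    # Classify one adhesive image with the trained recognition network.
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    net.eval()
    with torch.no_grad():
        logits = net(img)  # category information of the digital image
    return CLASSES[logits.argmax(dim=1).item()]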
In some possible embodiments, the calculation process of the mutation capture unit comprises: performing first capture processing, second capture processing, and third capture processing on the feature map received by the mutation capture unit, correspondingly generating a first feature map, a second feature map, and a third feature map, wherein the first capture processing contains a common convolution layer, the second capture processing contains a point-by-point convolution layer, and the third capture processing maps feature values in the feature map to weight values; generating a first transition feature based on the first feature map and the second feature map; performing fourth capture processing on the first feature map to generate a fourth feature map, and fifth capture processing on the second feature map to generate a fifth feature map; generating a second transition feature based on the fourth feature map and the fifth feature map; and generating the mutation feature map based on the third feature map, the first transition feature, and the second transition feature.

In some possible embodiments, the above method satisfies one or more of the following conditions (a shape check of the sketched module follows this list): A. the first capture processing comprises a common convolution layer and an activation layer connected in series; B. the second capture processing comprises a point-by-point convolution layer and an activation layer connected in series; C. the third capture processing comprises a common convolution layer and a softmax layer connected in series; D. the fourth capture processing comprises a common convolution layer and an activation layer connected in series; E. the fifth capture processing comprises a point-by-point convolution layer and an activation layer connected in series; F. generating the first transition feature based on the first feature map and the second feature map comprises taking the element-wise difference of the first feature map and the second feature map and activating the result with a Tanh function; G. generating the second transition feature based on the fourth feature map and the fifth feature map comprises taking the element-wise difference of the fourth feature map and the fifth feature map and activating the result with a Tanh function.
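Under the assumptions of the sketch given after the claims, a quick shape check confirms the tensor flow described by conditions A to G: every branch preserves the spatial resolution, so the matching feature map has the same shape as the input feature map. The sizes below are arbitrary.

x = torch.randn(1, 32, 56, 56)       # one 32-channel feature map (assumed size)
module = FeatureCaptureModule(ch=32)
print(module(x).shape)               # torch.Size([1, 32, 56, 56])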