CN-121981952-A - Method for detecting the void ratio of a bottom terminal component
Abstract
The invention belongs to the technical field of image recognition and electronic-manufacturing inspection, and in particular relates to a method for detecting the void ratio of bottom-terminated components. First, a YOLOv model is used to accurately locate the solder-joint regions in an X-ray image; then a semantic segmentation model, EAUNet, with an edge auxiliary task is applied to finely identify the void regions and the solder regions; finally, the solder-joint void ratio is computed by pixel-level statistics. The method constitutes a complete automated inspection pipeline that greatly shortens the inspection cycle. Deployed on an actual SMT production line, it verifies the engineering feasibility of using deep learning for quality and efficiency improvement in electronic-manufacturing quality inspection.
Inventors
- Zhang Shaohua
- Wu Tingzhang
- Li Jiacong
- He Wenduo
- Song Yang
- Li Ye
Assignees
- AVIC Xi'an Flight Automatic Control Research Institute (中国航空工业集团公司西安飞行自动控制研究所)
Dates
- Publication Date: 2026-05-05
- Application Date: 2025-12-24
Claims (10)
- 1. A method for detecting the void ratio of a bottom terminal component, characterized by comprising the following steps: Step 1, collecting X-ray images of the solder joints of the bottom-terminated component; Step 2, marking the solder-joint region in the X-ray image of each component with a rectangular box, to form a solder-joint region data set; Step 3, semantic-segmentation marking, namely annotating the void regions and the solder regions within each solder-joint region, to form a void-region data set and a solder-region data set; Step 4, training a YOLOv model on the solder-joint region data set to obtain a solder-joint target detection model; Step 5, training semantic segmentation models for three region types in the solder-joint images, namely voids, top-view solder, and side-view solder, on the void-region data set and the solder-region data set using the improved EAUNet algorithm, to obtain a void semantic segmentation model and a solder semantic segmentation model; Step 6, performing target detection on a solder-joint image with the solder-joint target detection model to obtain the solder-joint region; Step 7, performing semantic segmentation on the solder-joint region with the void semantic segmentation model and the solder semantic segmentation model to obtain the void region and the solder region; Step 8, counting the number of pixels in the void region and the number of pixels in the solder region, and calculating the solder-joint void ratio.
- 2. The method according to claim 1, characterized in that step 1 specifically comprises: acquiring top-view X-ray images and side-view X-ray images of the solder joints of the bottom-terminated components.
- 3. The method of claim 2, wherein in step 5 the improved EAUNet algorithm comprises an edge auxiliary task and a multi-category semantic segmentation task.
- 4. The method according to claim 3, wherein in the edge auxiliary task, discrete Laplacian operators with different step sizes are applied to the original label image by convolution to extract multi-scale edge information, and the multi-scale edge maps are concatenated to obtain the ground-truth image of the edge labels.
- 5. The method according to claim 3, wherein the total loss function of the improved EAUNet algorithm is $L_{total} = \lambda_1 L_{seg} + \lambda_2 L_{edge}$, wherein $L_{edge}$ is the edge auxiliary task loss function, $L_{seg}$ is the cross-entropy loss function, and $\lambda_1$ and $\lambda_2$ are weight coefficients used to adjust the contribution of the main task and the auxiliary task during training.
- 6. The method of claim 5, wherein the edge auxiliary task loss function is $L_{edge} = L_{BCE} + L_{Dice}$, wherein $L_{BCE}$ is the BCE loss of the edge auxiliary task and $L_{Dice}$ is the Dice loss of the edge auxiliary task.
- 7. The method of claim 6, wherein $L_{BCE}$ is calculated as $L_{BCE} = -\frac{1}{N}\sum_{i=1}^{N}\left[e_i \log p_i + (1 - e_i)\log(1 - p_i)\right]$, wherein $e_i$ is the real label indicating whether pixel $i$ is an edge pixel or a non-edge pixel, $p_i$ is the model's predicted probability for that pixel, and $N$ is the total number of pixels in the image.
- 8. The method of claim 7, wherein $L_{Dice}$ is calculated as $L_{Dice} = 1 - \frac{2\sum_{i=1}^{N} e_i p_i + \epsilon}{\sum_{i=1}^{N} e_i + \sum_{i=1}^{N} p_i + \epsilon}$, wherein $e_i$ is the real label of an edge pixel, $p_i$ is the model's predicted probability for that pixel, and $\epsilon$ is a small constant to prevent division by zero.
- 9. The method of claim 8, wherein $e_i$ takes the value 1 for an edge pixel and 0 for a non-edge pixel.
- 10. The method of claim 9, wherein the cross-entropy loss function is $L_{seg} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c} \log p_{i,c}$, wherein $N$ is the total number of pixels in the solder region and the void region, $C$ is the number of categories, $y_{i,c}$ is the real label indicating that pixel $i$ belongs to class $c$, and $p_{i,c}$ is the model's predicted probability that pixel $i$ belongs to class $c$.
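Claims 6 through 10 combine BCE, Dice, and multi-class cross-entropy terms under weight coefficients. A minimal NumPy sketch of these loss terms follows; the weight values `lam_seg` and `lam_edge` are illustrative placeholders, since the patent does not disclose the actual coefficients:

```python
import numpy as np

def bce_loss(e, p, eps=1e-7):
    """Binary cross-entropy over edge labels e (1 = edge) and predicted probs p."""
    p = np.clip(p, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(e * np.log(p) + (1 - e) * np.log(1 - p))

def dice_loss(e, p, eps=1e-6):
    """Dice loss over edge labels and predictions; eps prevents division by zero."""
    return 1.0 - (2.0 * np.sum(e * p) + eps) / (np.sum(e) + np.sum(p) + eps)

def edge_loss(e, p):
    """Edge auxiliary task loss: sum of BCE and Dice terms."""
    return bce_loss(e, p) + dice_loss(e, p)

def seg_ce_loss(y_onehot, p, eps=1e-7):
    """Multi-class cross-entropy over (pixels, classes) one-hot labels."""
    p = np.clip(p, eps, 1.0)
    return -np.mean(np.sum(y_onehot * np.log(p), axis=-1))

def total_loss(y_onehot, p_seg, e, p_edge, lam_seg=1.0, lam_edge=0.5):
    """Weighted sum of the main segmentation loss and the edge auxiliary loss."""
    return lam_seg * seg_ce_loss(y_onehot, p_seg) + lam_edge * edge_loss(e, p_edge)
```

In a real training setup these terms would be computed on framework tensors (e.g. PyTorch) so gradients flow; the NumPy version only illustrates the arithmetic.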
Description
Method for detecting the void ratio of a bottom terminal component

Technical Field
The invention belongs to the technical field of image recognition and electronic-manufacturing inspection, and in particular relates to a method for detecting the void ratio of bottom-terminated components. The method is suitable for automatic, high-precision inspection and evaluation of the solder-joint quality of bottom-terminated components in the electronic assembly process.

Background
Bottom-terminated component packages are widely used in modern high-density electronic equipment, and the quality of their solder joints directly affects the electrical-connection reliability and mechanical strength of the product. During soldering of bottom-terminated components, void defects frequently arise from the reflow soldering process or material problems, and the void ratio is one of the important indexes for measuring solder-joint quality. Currently, X-ray inspection is the main means of evaluating the internal quality of the solder joints of bottom-terminated components. Traditional inspection methods rely mainly on manual interpretation or fixed-threshold image processing algorithms. Manual inspection suffers from low efficiency, strong subjectivity, and poor consistency, and the accuracy of traditional image processing methods drops markedly in the presence of image blurring, noise interference, or complex solder-joint layouts. In recent years, deep learning has made remarkable progress in image recognition and segmentation: the YOLO series of models offers both high speed and high accuracy in target detection, and networks such as UNet excel at semantic segmentation.
However, the prior art contains no complete method or system that applies YOLOv and UNet networks in combination to X-ray solder-joint inspection, and in particular there is a technical gap in automatic solder-joint region identification and pixel-level void-ratio computation.

Disclosure of Invention
The invention aims to provide a method for detecting the void ratio of a bottom terminal component that realizes automatic identification of solder-joint regions, pixel-level segmentation of the voids inside solder joints, and automatic calculation and quality assessment of the void ratio, thereby solving the problems of the prior art: low detection accuracy, dependence on manual interpretation, and poor adaptability to complex images. The technical scheme is as follows. A method for detecting the void ratio of a bottom terminal component comprises the following steps: Step 1, collecting X-ray images of the solder joints of the bottom-terminated component; Step 2, marking the solder-joint region in the X-ray image of each component with a rectangular box, to form a solder-joint region data set; Step 3, semantic-segmentation marking, namely annotating the void regions and the solder regions within each solder-joint region, to form a void-region data set and a solder-region data set; Step 4, training a YOLOv model on the solder-joint region data set to obtain a solder-joint target detection model; Step 5, training semantic segmentation models for three region types in the solder-joint images, namely voids, top-view solder, and side-view solder, on the void-region data set and the solder-region data set using the improved EAUNet algorithm, to obtain a void semantic segmentation model and a solder semantic segmentation model; Step 6, performing target detection on a solder-joint image with the solder-joint target detection model to obtain the solder-joint region; Step 7, performing semantic segmentation on the solder-joint region with the void semantic segmentation model and the solder semantic segmentation model to obtain the void region and the solder region; Step 8, counting the number of pixels in the void region and the number of pixels in the solder region, and calculating the solder-joint void ratio. Further, step 1 specifically comprises: acquiring top-view X-ray images and side-view X-ray images of the solder joints of the bottom-terminated components. Further, in step 5, the improved EAUNet algorithm comprises an edge auxiliary task and a multi-category semantic segmentation task. Further, in the edge auxiliary task, discrete Laplacian operators with different step sizes are applied to the original label image by convolution to extract multi-scale edge information, and the ground-truth image of the edge labels is obtained after concatenation. Further, the total loss function of the improved EAUNet algorithm is $L_{total} = \lambda_1 L_{seg} + \lambda_2 L_{edge}$, wherein $L_{edge}$ is the edge auxiliary task loss function, $L_{seg}$ is the cross-entropy loss function, and $\lambda_1$ and $\lambda_2$ are weight coefficients.
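The edge-label generation of the edge auxiliary task and the final pixel-statistics step can be sketched as a minimal NumPy illustration. This is not the patented implementation: the 5-point Laplacian kernel and the definition of the void ratio as void pixels divided by total (void plus solder) pixels are assumptions made for illustration, since the patent text does not spell them out.

```python
import numpy as np

def laplacian_edges(label, step=1):
    """Edge map of a binary label image via a 5-point discrete Laplacian with
    neighbor offset `step` (larger step -> coarser, multi-scale edges).
    Note: np.roll wraps at the image borders; real label images would be padded."""
    y = label.astype(np.int32)
    s = step
    lap = (np.roll(y, s, axis=0) + np.roll(y, -s, axis=0) +
           np.roll(y, s, axis=1) + np.roll(y, -s, axis=1) - 4 * y)
    return (lap != 0).astype(np.uint8)  # nonzero response = edge pixel

def void_ratio(void_mask, solder_mask):
    """Pixel-statistics step: count void and solder pixels and form the ratio
    (assumed here as void / (void + solder); returns 0.0 for empty masks)."""
    n_void = int(np.count_nonzero(void_mask))
    n_solder = int(np.count_nonzero(solder_mask))
    total = n_void + n_solder
    return n_void / total if total else 0.0
```

In the full pipeline, `void_mask` and `solder_mask` would come from the two segmentation models applied inside the solder-joint box returned by the detector, and `laplacian_edges` would be run at several `step` values on the annotation masks to build the edge-label ground truth.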