CN-121978138-A - Chip defect diagnosis method based on computer vision and detection system thereof

CN121978138A

Abstract

The application discloses a chip defect diagnosis method and detection system based on computer vision. The method comprises the following steps: S1, synchronously or sequentially collecting first modal data from a first sensing unit and second modal data from a second sensing unit for the same chip under test, wherein the first sensing unit and the second sensing unit operate on different physical principles; S2, inputting the paired first modal data and second modal data into a pre-trained feature fusion neural network model. By comprehensively utilizing the complementary information of the different modalities for cross-verification and comprehensive analysis, the method and system significantly reduce the misjudgment and omission rates inherent in any single detection mode and improve the overall accuracy and reliability of defect diagnosis.

Inventors

  • SU JIE
  • QIN YING

Assignees

  • Chengdu University (成都大学)

Dates

Publication Date
2026-05-05
Application Date
2026-04-03

Claims (10)

  1. A chip defect diagnosis method based on computer vision, characterized by comprising the following steps: S1, synchronously or sequentially collecting first modal data from a first sensing unit and second modal data from a second sensing unit for the same chip under test, wherein the first sensing unit and the second sensing unit operate on different physical principles; S2, inputting the paired first modal data and second modal data into a pre-trained feature fusion neural network model; S3, the feature fusion neural network model processing the data through the following substeps: S31, processing the first modal data through a first feature extraction branch to obtain a first feature map, and processing the second modal data through a second feature extraction branch to obtain a second feature map; S32, dynamically generating, through an attention weight generation module, a first weight coefficient w1 and a second weight coefficient w2 according to the contents of the first feature map and the second feature map, wherein w1 + w2 = 1; S33, multiplying the first feature map by the first weight coefficient w1, multiplying the second feature map by the second weight coefficient w2, and summing the results to obtain a fused feature map; S4, generating, through an output layer of the feature fusion neural network model and based on the fused feature map, a defect detection result containing defect type, position information, and quantization parameters.
  2. The method according to claim 1, wherein the first and second sensing units are specifically a combination of an X-ray sensing unit and an optical sensing unit, or a combination of an X-ray sensing unit and an ultrasonic sensing unit, wherein: when the X-ray sensing unit and the optical sensing unit are combined, the first modal data are three-dimensional data representing the internal structure of the chip, and the second modal data are three-dimensional morphology data representing the surface morphology of the chip; when the X-ray sensing unit and the ultrasonic sensing unit are combined, the first modal data are three-dimensional data representing the internal structure of the chip, and the second modal data are ultrasonic scanning image data representing the internal interface state of the chip.
  3. The method of claim 2, wherein, when the first modal data is X-ray three-dimensional volume data, the quantization parameter in the defect detection result includes a defect volume calculated from the defect regions segmented on the fused feature map.
  4. The method of claim 1, wherein in step S32 the attention weight generation module is a lightweight convolutional neural network that outputs the scalar weights w1 and w2 by performing channel concatenation, convolution, global pooling, and Softmax normalization on the first feature map and the second feature map, with w1, w2 ∈ (0, 1) and w1 + w2 = 1.
  5. The method according to claim 1, wherein in step S1 a hardware trigger signal is sent to the first sensing unit and the second sensing unit by a synchronization trigger controller to achieve time synchronization of data acquisition; and, before data acquisition, a conversion relation between the coordinate systems of the first sensing unit and the second sensing unit is established through a spatial calibration procedure, the conversion relation being used to spatially align the first modal data and the second modal data.
  6. The method of claim 5, wherein the spatial calibration procedure comprises: placing a standard calibration object with known three-dimensional geometric characteristics on a carrier and establishing a fixed carrier coordinate system; controlling the first sensing unit and the second sensing unit to scan the standard calibration object respectively to obtain respective calibration data; computing a first transformation matrix from the coordinate system of the first sensing unit to the carrier coordinate system and a second transformation matrix from the coordinate system of the second sensing unit to the carrier coordinate system; and, in subsequent detection, unifying the collected data into the carrier coordinate system using the first transformation matrix and the second transformation matrix.
  7. The method of claim 1, wherein the output layer of the feature fusion neural network model is a multi-task output head that simultaneously performs defect segmentation, defect classification, defect parameter regression, and severity assessment.
  8. A detection system for implementing the method of any one of claims 1-7, comprising: a multi-modal sensing acquisition module provided with a first sensing unit based on a first physical principle and a second sensing unit based on a different second physical principle; a synchronization control module comprising a synchronization trigger controller and configured to control the acquisition actions of the first sensing unit and the second sensing unit to achieve time synchronization; a digital twin mapping module that constructs a three-dimensional digital twin model from the design file of the chip under test and maps and marks the defect position and type information of the defect detection result onto the corresponding position of the three-dimensional digital twin model; and a computation and analysis module in communication with the multi-modal sensing acquisition module and configured to store and run the pre-trained feature fusion neural network model.
  9. The system of claim 8, wherein in the multi-modal sensing acquisition module the probes of the first sensing unit and the second sensing unit are mounted at the z-axis end of a six-axis linear module; the first sensing unit is an X-ray sensing unit, and the second sensing unit is an optical sensing unit or an ultrasonic sensing unit.
  10. The system of claim 8, further comprising a stage and environment control module, the stage and environment control module comprising: a temperature-controlled carrier internally provided with a semiconductor cooling plate and a temperature sensor and used to directly heat or cool the chip under test; an environment isolation chamber surrounding the temperature-controlled carrier and used to maintain a locally constant-temperature atmosphere; and a temperature controller in communication with the semiconductor cooling plate, the temperature sensor, and the synchronization trigger controller, and used to adjust and maintain the chip temperature at a preset value according to instructions from the synchronization trigger controller.
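The coordinate-unification step of claim 6 can be sketched numerically. This is a minimal illustration assuming rigid 4×4 homogeneous transformation matrices as the "first transformation matrix" and "second transformation matrix"; the matrices and points below are invented examples, not calibration results from the patent.

```python
import numpy as np

# Sketch of claim 6: data from each sensing unit is mapped into the shared
# carrier coordinate system via a homogeneous transformation matrix obtained
# during spatial calibration. T1 and T2 here are illustrative assumptions.

def to_carrier_frame(points, T):
    """Map (N, 3) sensor-frame points into the carrier coordinate system."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
    return (T @ homogeneous.T).T[:, :3]

# Invented calibration results: sensor 1 offset 10 mm along x, sensor 2 offset 5 mm along z
T1 = np.eye(4); T1[0, 3] = 10.0   # first sensing unit -> carrier
T2 = np.eye(4); T2[2, 3] = 5.0    # second sensing unit -> carrier

p1 = np.array([[0.0, 0.0, 0.0]])    # a point as seen by the first sensing unit
p2 = np.array([[10.0, 0.0, -5.0]])  # the same physical point in the second unit's frame
c1 = to_carrier_frame(p1, T1)
c2 = to_carrier_frame(p2, T2)
# After unification both observations coincide in the carrier frame
assert np.allclose(c1, c2)
```

Once both modalities live in the carrier frame, cross-verifying a candidate defect between, say, the X-ray volume and the ultrasonic scan reduces to comparing values at the same carrier-frame coordinates.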

Description

Chip defect diagnosis method based on computer vision and detection system thereof

Technical Field

The application relates to the technical field of measurement and detection, and in particular to a chip defect diagnosis method based on computer vision and a detection system thereof.

Background

Chip defect diagnosis currently relies mainly on nondestructive detection modes based on different physical principles, including X-ray detection, optical detection, and ultrasonic detection. Each mode has its own characteristic direction of misjudgment. X-ray detection may misjudge image artifacts produced by overlapping internal structures as true defects, or miss tiny foreign matter whose density is similar to that of the base material; optical detection may misjudge environmental interference caused by surface reflection or cleanliness problems as physical damage, and cannot detect internal defects at all. This variability in the direction of misjudgment makes the result of any single detection mode uncertain.

Disclosure of Invention

The present application aims to solve, at least to some extent, one of the technical problems in the related art. An object of the present application is therefore to provide a chip defect diagnosis method based on computer vision and a detection system thereof, which can comprehensively utilize the complementary information of different modal data for cross-verification and comprehensive analysis, thereby significantly reducing the misjudgment and omission rates inherent in a single detection mode and improving the overall accuracy and reliability of defect diagnosis.
To achieve the above object, an embodiment of the first aspect of the present application provides a chip defect diagnosis method based on computer vision, comprising the steps of: S1, synchronously or sequentially collecting first modal data from a first sensing unit and second modal data from a second sensing unit for the same chip under test, wherein the first sensing unit and the second sensing unit operate on different physical principles; S2, inputting the paired first modal data and second modal data into a pre-trained feature fusion neural network model; S3, the feature fusion neural network model processing the data through the following substeps: S31, processing the first modal data through a first feature extraction branch to obtain a first feature map, and processing the second modal data through a second feature extraction branch to obtain a second feature map; S32, dynamically generating, through an attention weight generation module, a first weight coefficient w1 and a second weight coefficient w2 according to the contents of the first feature map and the second feature map, wherein w1 + w2 = 1; S33, multiplying the first feature map by the first weight coefficient w1, multiplying the second feature map by the second weight coefficient w2, and summing the results to obtain a fused feature map; S4, generating, through an output layer of the feature fusion neural network model and based on the fused feature map, a defect detection result containing defect type, position information, and quantization parameters.
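Substeps S31-S33 can be sketched as follows. This is a minimal numerical illustration assuming single-channel feature maps; a plain global average pool stands in for the patent's "channel stitching, convolution, global pooling" weight-generation network, and only the Softmax constraint w1 + w2 = 1 and the weighted sum of S33 are taken from the text.

```python
import numpy as np

# Sketch of S32-S33: attention weights from a Softmax over per-map scores,
# then an element-wise weighted sum of the two feature maps.
# Global average pooling stands in for the patent's weight-generation CNN.

def softmax(scores):
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def fuse(f1, f2):
    """Return (fused_map, (w1, w2)) with w1 + w2 = 1."""
    w1, w2 = softmax(np.array([f1.mean(), f2.mean()]))
    return w1 * f1 + w2 * f2, (w1, w2)

# Toy 4x4 feature maps standing in for the two branch outputs of S31
f1 = np.full((4, 4), 2.0)   # e.g. X-ray branch
f2 = np.full((4, 4), 1.0)   # e.g. optical branch
fused, (w1, w2) = fuse(f1, f2)
assert abs(w1 + w2 - 1.0) < 1e-9                     # Softmax normalization
assert (fused > 1.0).all() and (fused < 2.0).all()   # fused map lies between inputs
```

Because the weights come from a Softmax, each lies in (0, 1) and they sum to 1, so the fused map is always a convex combination of the two branch outputs.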
In addition, the chip defect diagnosis based on computer vision according to the present application may have the following additional technical features. In one embodiment of the application, the first and second sensing units are specifically a combination of an X-ray sensing unit and an optical sensing unit, or a combination of an X-ray sensing unit and an ultrasonic sensing unit, wherein: when the X-ray sensing unit and the optical sensing unit are combined, the first modal data are three-dimensional data representing the internal structure of the chip, and the second modal data are three-dimensional morphology data representing the surface morphology of the chip; when the X-ray sensing unit and the ultrasonic sensing unit are combined, the first modal data are three-dimensional data representing the internal structure of the chip, and the second modal data are ultrasonic scanning image data representing the internal interface state of the chip. In one embodiment of the present application, when the first modal data is X-ray three-dimensional volume data, the quantization parameter in the defect detection result includes a defect volume calculated from the defect region segmented on the fused feature map. In one embodiment of the present application, in step S32 the attention weight generation module is a lightweight convolutional neural network that outputs the scalar weights w1 and w2 by performing channel concatenation, convolution, global pooling, and Softmax normalization on the first feature map and the second feature map, with w1, w2 ∈ (0, 1) and w1 + w2 = 1. In one embodiment of the present application, in step S1, a hardware trigger signal is sent to