
CN-121982326-A - AI chip image information extraction method based on deep learning

CN 121982326 A

Abstract

The invention discloses an AI chip image information extraction method based on deep learning, comprising the following steps: S1, collecting multi-source image data of an AI chip, and calling design layout data and process parameters; S2, preprocessing the multi-source image data and outputting an aligned image sequence; S3, dividing functional areas and generating a structure guide graph, area function labels and hierarchical codes; S4, outputting a fusion feature tensor; S5, performing image analysis to generate a structure boundary, a semantic annotation and a defect annotation; S6, performing boundary calibration on the structure boundary through an improved SAM mechanism, and performing consistency calibration if the structure boundary is inconsistent with a layout geometric prompt; and S7, performing self-supervised training and process migration updating, performing distribution alignment updating if the process parameters change, and generating an image recognition result. The invention improves the accuracy and recognition consistency of AI chip microstructure extraction.

Inventors

  • NIE XIAOWEN

Assignees

  • 深圳市立印达科技有限公司 (Shenzhen Liyinda Technology Co., Ltd.)

Dates

Publication Date
2026-05-05
Application Date
2026-02-24

Claims (10)

  1. An AI chip image information extraction method based on deep learning, characterized by comprising the following steps: S1, acquiring multi-source image data of an AI chip, and calling design layout data and process parameters; S2, preprocessing the multi-source image data, generating a pixel-level alignment mapping relation based on the design layout data, and outputting an aligned image sequence; S3, dividing functional areas based on the aligned image sequence and the pixel-level alignment mapping relation, and generating a structure guide graph, area function labels and hierarchical codes; S4, inputting the aligned image sequence into a depth feature coding branch, inputting the structure guide graph into a semantic coding branch, inputting the process parameters into a process modulation branch, and outputting a fusion feature tensor; S5, performing image analysis based on the fusion feature tensor to generate a structure boundary, a semantic annotation and a defect annotation; S6, performing boundary calibration on the structure boundary through an improved SAM mechanism, and if the structure boundary is inconsistent with a layout geometric prompt, performing consistency calibration and outputting a consistent boundary result, wherein the improved SAM mechanism introduces the layout geometric prompt and a boundary representation, the layout geometric prompt is generated from the design layout data and the pixel-level alignment mapping relation, and the boundary representation is generated from the area function labels and the hierarchical codes; and S7, performing self-supervised training and process migration updating based on the consistent boundary result, the semantic annotation and the defect annotation, and if the process parameters change, performing distribution alignment updating to generate an image recognition result.
  2. The AI chip image information extraction method based on deep learning according to claim 1, wherein S1 specifically comprises: assigning a chip number to the AI chip and collecting multi-source image data, wherein the multi-source image data comprises a microscopic image, a variable illumination image, a multi-focal-plane image and an inclination angle image; retrieving design layout data corresponding to the chip number from a design data source, wherein the design layout data comprises layout geometric figure data, layout level coding data and functional area division data; and establishing the correspondence among the chip number, the multi-source image data, the design layout data and the manufacturing process parameters.
  3. The AI chip image information extraction method based on deep learning according to claim 1, wherein S2 specifically comprises: performing brightness normalization, noise suppression, distortion correction and resolution unification on the multi-source image data; selecting layout geometric figure data and layout level coding data corresponding to the multi-source image data according to the design layout data and the chip number; calculating a pixel-level alignment mapping relation based on pixel coordinates in the multi-source image data and layout coordinates in the design layout data; and performing coordinate transformation on the microscopic image, the variable illumination image, the multi-focal-plane image and the inclination angle image in the multi-source image data according to the pixel-level alignment mapping relation, and outputting an aligned image sequence.
  4. The AI chip image information extraction method based on deep learning according to claim 1, wherein S3 specifically comprises: selecting functional area division data and layout level coding data from the design layout data based on the aligned image sequence and the pixel-level alignment mapping relation; performing region division on pixel coordinates in the aligned image sequence according to the functional area division data to generate a functional area division result; marking the boundary of the functional area to which each pixel belongs according to the functional area division result to generate a structure guide graph; and assigning an area function label to each functional area according to the functional area division data, assigning a hierarchical code to each functional area according to the layout level coding data, and outputting the structure guide graph, the area function labels and the hierarchical codes.
  5. The AI chip image information extraction method based on deep learning according to claim 1, wherein S4 specifically comprises: inputting the aligned image sequence into a depth feature coding branch and performing convolution processing to generate a depth feature characterization, wherein the depth feature coding branch is derived from a standard convolutional feature extraction structure in which a convolution layer, a batch normalization layer and a nonlinear activation layer are sequentially combined; inputting the structure guide graph into a semantic coding branch and performing mapping processing to generate a semantic characterization, wherein the semantic coding branch is derived from a standard semantic segmentation coding structure in which a convolution layer and a mapping layer are sequentially combined; inputting the process parameters into a process modulation branch and performing parameter coding to generate a process modulation characterization, wherein the process modulation branch is derived from a process parameter characterization processing structure in which a parameter mapping layer and a parameter coding layer are sequentially combined; and splicing the depth feature characterization, the semantic characterization and the process modulation characterization in a unified dimension order to output a fusion feature tensor.
  6. The AI chip image information extraction method based on deep learning according to claim 1, wherein S5 specifically comprises: the image analysis comprises structure boundary analysis, semantic annotation analysis and defect annotation analysis; the structure boundary analysis comprises sequentially performing a convolution operation, edge response extraction and connectivity detection based on the fusion feature tensor to generate a connectivity detection result, determining a geometric edge contour according to the connectivity detection result, and outputting the structure boundary in the form of a coordinate set; the semantic annotation analysis comprises sequentially performing a semantic channel separation operation, mapping index matching and region filling annotation based on the fusion feature tensor to generate a semantic region marking set, and outputting the semantic annotation in the form of corresponding labels and coordinates; and the defect annotation analysis comprises sequentially performing pixel difference extraction, threshold judgment and partition marking based on the fusion feature tensor to generate a defect set, and outputting the defect set in the form of coordinates and marks as the defect annotation.
  7. The AI chip image information extraction method based on deep learning according to claim 1, wherein S6 specifically comprises: extracting boundary coordinates corresponding to the functional areas based on the design layout data and the pixel-level alignment mapping relation to generate a layout geometric prompt, wherein the layout geometric prompt comprises boundary coordinate information corresponding to the functional areas; generating a boundary representation based on target boundary coordinates of the functional areas extracted according to the area function labels and the hierarchical codes, wherein the boundary representation comprises target boundary coordinate information corresponding to the functional areas; introducing the layout geometric prompt and the boundary representation through the improved SAM mechanism, and calculating a pixel difference region between the structure boundary and the layout geometric prompt in a unified coordinate system; and if the pixel difference region exists, adjusting the positions of the boundary points of the structure boundary within the pixel difference region according to the boundary coordinates given by the layout geometric prompt and the boundary representation, and outputting a consistent boundary result, and if the pixel difference region does not exist, taking the structure boundary as the consistent boundary result.
  8. The AI chip image information extraction method based on deep learning according to claim 7, wherein the step of extracting boundary coordinates corresponding to the functional areas based on the design layout data and the pixel-level alignment mapping relation to generate the layout geometric prompt specifically comprises: reading functional area division data and layout level coding data from the design layout data; extracting boundary contour lines of the functional areas from the design layout data, and recording the layout coordinates of the boundary contour lines; converting the layout coordinates of the boundary contour lines of the functional areas into pixel coordinates on the imaging plane based on the pixel-level alignment mapping relation; performing a coordinate correction operation on the converted pixel coordinates to form a functional area boundary coordinate set; and recording the functional area boundary coordinate set in the order of the functional area numbers, and outputting the functional area boundary coordinate set as the layout geometric prompt.
  9. The AI chip image information extraction method based on deep learning according to claim 7, wherein the step of generating the boundary representation based on the target boundary coordinates of the functional areas extracted according to the area function labels and the hierarchical codes specifically comprises: reading a functional area number list from the functional area division data, reading the corresponding area function label for each functional area in the functional area number list, and reading the corresponding hierarchical code for each functional area in the functional area number list; recording the area function labels and the hierarchical codes in one-to-one correspondence with the functional area numbers, and extracting the boundary pixel set of each functional area from the structure guide graph; and recording each boundary pixel set together with its area function label and its hierarchical code to form a boundary record item for each functional area, combining all boundary record items in the order of the functional area numbers, and outputting them as the boundary representation, wherein each boundary record item comprises the functional area number, the area function label, the hierarchical code and the boundary pixel coordinate set.
  10. The AI chip image information extraction method based on deep learning according to claim 1, wherein S7 specifically comprises: constructing a self-supervised training sample set based on the consistent boundary result, the semantic annotation and the defect annotation in combination with the aligned image sequence, using the consistent boundary result, the semantic annotation and the defect annotation as supervision signals corresponding to the fusion feature tensor by pixel coordinates, calculating pixel-level error information, and updating the depth feature coding branch parameters, the semantic coding branch parameters and the process modulation branch parameters according to the pixel-level error information to form an updated fusion feature tensor; calculating feature distribution statistics of the fusion feature tensors corresponding to different process parameters and recording them as a process feature distribution record; and if the process parameters change, performing distribution alignment updating on the fusion feature tensor corresponding to the new process parameters based on the process feature distribution record, outputting updated structure boundaries, semantic annotations and defect annotations, and summarizing them in the functional area dimension to generate an image recognition result, wherein the distribution alignment updating comprises performing a standardization transformation and a mapping transformation on the fusion feature tensor to generate an aligned feature representation, and performing image analysis processing on the aligned feature representation.
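The pixel-level alignment mapping recited in claims 3 and 8 converts layout coordinates of functional-area boundary contours into pixel coordinates on the imaging plane. The patent does not specify the transform; the following is a minimal sketch assuming a least-squares affine fit from matched layout/pixel control points (the names `fit_affine` and `layout_to_pixel`, and the affine model itself, are illustrative assumptions, not from the source):

```python
import numpy as np

def fit_affine(layout_pts: np.ndarray, pixel_pts: np.ndarray) -> np.ndarray:
    """Fit a 2x3 affine matrix A so that pixel ~= A @ [x, y, 1]."""
    n = layout_pts.shape[0]
    homog = np.hstack([layout_pts, np.ones((n, 1))])   # (n, 3)
    # Solve homog @ A.T ~= pixel_pts in the least-squares sense.
    A_T, *_ = np.linalg.lstsq(homog, pixel_pts, rcond=None)
    return A_T.T                                       # (2, 3)

def layout_to_pixel(A: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Apply the alignment mapping to layout coordinates."""
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return homog @ A.T

# Example: a pure scale-and-shift between layout and imaging plane.
layout = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
pixel = layout * 4.0 + np.array([100.0, 50.0])         # synthetic ground truth
A = fit_affine(layout, pixel)
mapped = layout_to_pixel(A, np.array([[5.0, 5.0]]))
# mapped ~= [[120., 70.]]
```

A projective (homography) model would be fitted the same way if the inclination-angle images of claim 2 introduce perspective distortion; the affine case is shown only because it keeps the least-squares step linear.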

Description

AI chip image information extraction method based on deep learning

Technical Field

The invention relates to the technical field of integrated circuit structure detection and image recognition, and in particular to an AI chip image information extraction method based on deep learning.

Background

As the structure density and multi-level interconnect complexity of advanced-process-node integrated circuits continue to increase, the need for high-precision image resolution of AI chip internal structural features, metal trace integrity and potential defects is growing. Existing chip image detection methods mainly rely on microscopic images or single-focal-plane imaging for structure identification and extract device boundaries with traditional image segmentation or edge detection algorithms, but the following problems commonly arise in actual wafer and chip deconstruction detection. Because of structural shadows, metal reflection and layer overlapping interference produced by multi-focal-length imaging, inclination-angle imaging and variable illumination conditions, conventional image boundary positioning methods can hardly establish an accurate correspondence between the logical boundary in the layout and the geometric boundary in the image, leading to functional layer division errors and to false detection or omission of key interconnect structures. Conventional boundary calibration and structure extraction methods lack a geometric-constraint comparison mechanism against the design layout, so false boundaries caused by noise, diffraction and material reflection cannot be judged and excluded automatically, which degrades the accuracy of subsequent defect annotation and structure recovery. Existing systems also lack an adaptive consistency calibration mechanism for cases in which the real image boundary disagrees with the theoretical boundary of the design layout, so deviations at key positions such as stacked structures, via edges and wiring endpoints cannot be uniformly adjusted, further causing false alarms or omissions in defect detection. In addition, the lack of a joint modulation mode between the depth feature extraction and semantic segmentation process and the process parameters makes the feature appearance of the same structure differ markedly across batches under different process conditions, which is unfavorable to model generalization and cross-process migration. Therefore, how to provide an AI chip image information extraction method based on deep learning is a problem to be solved by those skilled in the art.

Disclosure of Invention

The invention aims to provide an AI chip image information extraction method based on deep learning, which sequentially collects multi-source image data and design layout data, generates a pixel-level alignment mapping relation and an aligned image sequence, divides functional areas based on the mapping relation to generate a structure guide graph, area function labels and hierarchical codes, constructs a depth feature coding branch, a semantic coding branch and a process modulation branch to form a fusion feature tensor, performs image analysis on the fusion feature tensor to generate a structure boundary, a semantic annotation and a defect annotation, performs boundary calibration on the structure boundary by introducing an improved SAM mechanism, performs consistency calibration based on the layout geometric prompt and the boundary representation to generate a consistent boundary result, and performs self-supervised training and process migration updating based on the consistent boundary result, the semantic annotation and the defect annotation to form an image recognition result. The invention has the advantages of stable structure boundaries, high safety and high accuracy.
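The three-branch fusion described above (depth feature coding, semantic coding, process modulation) ends in splicing along a unified dimension. A minimal NumPy sketch, assuming channel-first feature maps and a process parameter vector broadcast over the spatial grid (the shapes and the name `fuse_features` are illustrative assumptions; the patent does not fix tensor layouts):

```python
import numpy as np

def fuse_features(depth_feat: np.ndarray,
                  semantic_feat: np.ndarray,
                  process_vec: np.ndarray) -> np.ndarray:
    """Concatenate the three branch outputs along the channel axis.

    depth_feat:    (C1, H, W) from the depth feature coding branch
    semantic_feat: (C2, H, W) from the semantic coding branch
    process_vec:   (C3,)      encoded process parameters, broadcast over H x W
    """
    _, h, w = depth_feat.shape
    # Tile the per-chip process encoding so it aligns spatially with the maps.
    process_map = np.broadcast_to(process_vec[:, None, None],
                                  (process_vec.shape[0], h, w))
    return np.concatenate([depth_feat, semantic_feat, process_map], axis=0)

fused = fuse_features(np.zeros((64, 32, 32)),   # depth characterization
                      np.zeros((16, 32, 32)),   # semantic characterization
                      np.ones(8))               # process modulation vector
# fused.shape == (88, 32, 32)
```

In a trained system the three inputs would come from learned convolution/mapping layers; only the final splicing step of S4 is sketched here.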
According to an embodiment of the invention, the AI chip image information extraction method based on deep learning comprises the following steps: S1, acquiring multi-source image data of an AI chip, and calling design layout data and process parameters; S2, preprocessing the multi-source image data, generating a pixel-level alignment mapping relation based on the design layout data, and outputting an aligned image sequence; S3, dividing functional areas based on the aligned image sequence and the pixel-level alignment mapping relation, and generating a structure guide graph, area function labels and hierarchical codes; S4, inputting the aligned image sequence into a depth feature coding branch, inputting the structure guide graph into a semantic coding branch, inputting the process parameters into a process modulation branch, and outputting a fusion feature tensor; S5, performing image analysis based on the fusion feature tensor to generate a structure boundary, a semantic annotation and a defect annotation; S6, performing boundary calibration on the structure boundary through an improved SAM mechanism
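The consistency calibration of step S6 can be illustrated as follows. The patent only states that boundary points inside the pixel difference region are position-adjusted according to the layout geometric prompt and the boundary representation; the nearest-point snapping rule, the `tol` threshold, and the name `calibrate_boundary` below are assumptions for illustration:

```python
import numpy as np

def calibrate_boundary(structure_pts: np.ndarray,
                       layout_pts: np.ndarray,
                       tol: float = 2.0) -> np.ndarray:
    """Snap predicted boundary points that deviate from the layout prompt.

    Each predicted point is compared with its nearest layout-prompt point;
    points farther than `tol` pixels are treated as lying in the pixel
    difference region and are moved onto that nearest prompt point, while
    in-tolerance points are kept unchanged (the 'consistent boundary').
    """
    calibrated = structure_pts.copy()
    for i, p in enumerate(structure_pts):
        dists = np.linalg.norm(layout_pts - p, axis=1)
        j = int(np.argmin(dists))
        if dists[j] > tol:
            calibrated[i] = layout_pts[j]
    return calibrated

# Layout geometric prompt: corners of a rectangular functional area.
layout = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
# Predicted structure boundary with one outlier at (14, 0).
pred = np.array([[0.5, 0.2], [14.0, 0.0], [10.0, 10.0], [0.0, 9.6]])
out = calibrate_boundary(pred, layout)
# The outlier is snapped to the layout corner (10, 0);
# in-tolerance points are kept unchanged.
```

A production system would operate on dense contour sets per functional area and weight the adjustment by the area function label and hierarchical code from the boundary representation; this sketch shows only the geometric comparison step.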