CN-116486069-B - Condylar CBCT image segmentation method based on edge and texture features
Abstract
The invention discloses a condylar CBCT image segmentation method based on edge and texture features. First, a condylar segmentation model extracts the region of interest of the whole condyle from the original condylar CBCT image. The condylar region of interest is then input into a segmentation model for condylar cortex and cancellous bone: a texture feature extraction module produces the corresponding texture feature data, which is input, together with the output of a feature coding network, into a feature decoding network containing texture attention to obtain texture fusion features; at the same time, the low-level and high-level features of the feature coding network are input into an edge extraction module to obtain edge information of the condylar cortex and cancellous bone. The edge information and the texture fusion features are input into a fusion module, finally yielding a visual result map of the condylar cortex and cancellous bone. The invention enables the segmentation network to capture more characteristics of cortex and cancellous bone and improves the segmentation precision of the condylar cortex and cancellous bone.
Inventors
- Wu Fuli
- Jin Linxiao
- Hao Pengyi
Assignees
- Zhejiang University of Technology (浙江工业大学)
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2023-01-18
Claims (7)
- 1. A condylar CBCT image segmentation method based on edge and texture features, characterized by comprising the following steps: inputting an original condylar CBCT image into a condylar segmentation model to obtain a detection result for the whole condyle, and cropping the original condylar CBCT image according to the detection result to obtain a region-of-interest image I_ROI of the condyle; inputting the region-of-interest image I_ROI into a feature coding network of a cortical and cancellous bone segmentation model to obtain feature maps F_en at different scales; inputting the region-of-interest image I_ROI into a texture feature extraction module of the cortical and cancellous bone segmentation model to extract texture features F_texture; inputting the feature maps F_en into a feature decoding network of the cortical and cancellous bone segmentation model to decode region features at different scales, obtaining region features F_de; inputting the low-level features F_e_low and the high-level features F_e_high in the feature maps F_en into an edge feature extraction module of the cortical and cancellous bone segmentation model to obtain edge features F_edge; and inputting the edge features F_edge and the region features F_de at different scales into a feature fusion module of the cortical and cancellous bone segmentation model to obtain a visual result prediction map of condylar cortex and cancellous bone segmentation; wherein inputting the region-of-interest image I_ROI into the feature coding network of the cortical and cancellous bone segmentation model to obtain the feature maps F_en at different scales comprises: step 2.1, inputting the region-of-interest image I_ROI into a convolution block with a convolution kernel size of 3×3 to obtain an output feature map; step 2.2, performing maximum pooling on the feature map of step 2.1 and inputting the result into a convolution block with a convolution kernel size of 3×3 to obtain an output feature map; step 2.4, performing maximum pooling on the feature map of step 2.2 and inputting the result into a convolution block with a convolution kernel size of 3×3 to obtain an output feature map; step 2.5, performing maximum pooling on the feature map of step 2.4 and inputting the result into a convolution block with a convolution kernel size of 3×3 to obtain an output feature map; step 2.6, performing maximum pooling on the feature map of step 2.5 and inputting the result into a convolution block with a convolution kernel size of 3×3 to obtain an output feature map; the feature maps obtained by the respective convolution blocks together constitute the feature maps F_en at different scales; and wherein inputting the low-level features F_e_low and the high-level features F_e_high in the feature maps F_en into the edge feature extraction module of the cortical and cancellous bone segmentation model to obtain the edge features F_edge comprises: upsampling F_e_high so that its size is consistent with that of F_e_low, concatenating the two by channel, and inputting the result into the edge extraction module to obtain an edge feature map F_edge whose size is consistent with that of F_e_low. (A minimal sketch of this encoder and of the edge extraction step is given after the claims.)
- 2. The condylar CBCT image segmentation method based on edge and texture features according to claim 1, characterized in that inputting the original condylar CBCT image into the condylar segmentation model to obtain the detection result for the whole condyle, and cropping the original condylar CBCT image according to the detection result to obtain the region-of-interest image I_ROI of the condyle, comprises: step 1.1, inputting the original condylar CBCT image into a convolution block with a convolution kernel size of 3×3 to obtain an output feature map; step 1.2, performing maximum pooling on the feature map of step 1.1 and inputting the result into a convolution block with a convolution kernel size of 3×3 to obtain an output feature map; step 1.3, performing maximum pooling on the feature map of step 1.2 and inputting the result into a convolution block with a convolution kernel size of 3×3 to obtain an output feature map; step 1.4, performing maximum pooling on the feature map of step 1.3 and inputting the result into a convolution block with a convolution kernel size of 3×3 to obtain an output feature map; step 1.5, performing maximum pooling on the feature map of step 1.4 and inputting the result into a convolution block with a convolution kernel size of 3×3 to obtain an output feature map; step 1.6, upsampling the feature map of step 1.5, concatenating it channel-wise with the encoder feature map of the corresponding scale, and inputting the result into a convolution block with a convolution kernel size of 3×3 to obtain a feature map whose dimension is consistent with that encoder feature map; step 1.7, upsampling the feature map of step 1.6, concatenating it channel-wise with the encoder feature map of the corresponding scale, and inputting the result into a convolution block with a convolution kernel size of 3×3 to obtain a feature map whose dimension is consistent with that encoder feature map; step 1.8, upsampling the feature map of step 1.7, concatenating it channel-wise with the encoder feature map of the corresponding scale, and inputting the result into a convolution block with a convolution kernel size of 3×3 to obtain a feature map whose dimension is consistent with that encoder feature map; step 1.9, upsampling the feature map of step 1.8, concatenating it channel-wise with the encoder feature map of the corresponding scale, and inputting the result into a convolution block with a convolution kernel size of 3×3 to obtain a feature map whose dimension is consistent with that encoder feature map; step 1.10, inputting the feature map of step 1.9 into a convolution block with a convolution kernel size of 1×1 to obtain the detection result; step 1.11, cropping the original condylar CBCT image according to the detection result to obtain the region-of-interest image I_ROI of the condyle. (A U-Net-style sketch of this condylar segmentation model is given after the claims.)
- 3. The condylar CBCT image segmentation method based on edge and texture features according to claim 1, characterized in that inputting the region-of-interest image I_ROI into the texture feature extraction module of the cortical and cancellous bone segmentation model to extract the texture features F_texture comprises: inputting the region-of-interest image I_ROI into the texture feature extraction module and performing feature extraction by the HOG (histogram of oriented gradients) method to obtain the texture feature map F_texture.
- 4. The condylar CBCT image segmentation method based on edge and texture features according to claim 3, characterized in that the feature extraction by the HOG method comprises: step 3.1, slicing the region-of-interest image I_ROI along the Z-axis direction into condylar image slices and performing pixel-value normalization on each condylar image slice, finally obtaining a slice set; step 3.2, dividing each condylar image slice into multiple identical, non-overlapping windows of a given size, dividing each window into multiple identical, non-overlapping blocks of a given size, and dividing each block into multiple identical, non-overlapping cells of a given size; step 3.3, for each pixel point (x, y) in a cell, computing the horizontal gradient G_x(x, y) and the vertical gradient G_y(x, y) as G_x(x, y) = H(x+1, y) − H(x−1, y) and G_y(x, y) = H(x, y+1) − H(x, y−1), where H(x, y) denotes the pixel value at (x, y) after image normalization; from these, the gradient magnitude G(x, y) and the gradient direction α(x, y) of the pixel point are computed as G(x, y) = √(G_x(x, y)² + G_y(x, y)²) and α(x, y) = arctan(G_y(x, y) / G_x(x, y)); step 3.4, computing a gradient-direction histogram within each cell: the orientation plane is divided into intervals over 0–180°, with directions in 181–360° mapped onto the corresponding intervals; the accumulated gradient magnitudes within each angular interval form the vertical axis of the histogram and the interval boundaries form the horizontal axis, giving the HOG feature of each cell; step 3.5, normalizing the HOG features of the cells within each block to obtain the HOG feature of the block; step 3.6, combining the HOG features of the blocks within each window to finally obtain the HOG feature of the slice; step 3.7, repeating steps 3.1–3.6 over the slice set to obtain the HOG feature corresponding to each slice; step 3.8, arranging the HOG features corresponding to all slices in the slice set in slice order, thereby obtaining the texture feature map F_texture. (A minimal HOG sketch is given after the claims.)
- 5. The condylar CBCT image segmentation method based on edge and texture features according to claim 1, characterized in that inputting the feature maps F_en into the feature decoding network of the cortical and cancellous bone segmentation model to decode the region features at different scales and obtain the region features F_de comprises: step 4.1, inputting two feature maps of the corresponding scale together with the texture features F_texture into a texture feature attention module to obtain a texture fusion feature; step 4.2, inputting two feature maps of the next scale together with the texture features F_texture into a texture feature attention module to obtain a texture fusion feature; step 4.3, inputting two feature maps of the next scale together with the texture features F_texture into a texture feature attention module to obtain a texture fusion feature; step 4.4, inputting two feature maps of the final scale together with the texture features F_texture into a texture feature attention module to obtain a texture fusion feature; the texture fusion features of the four scales finally constitute the region features F_de at different scales.
- 6. The condylar CBCT image segmentation method based on edge and texture features according to claim 5, characterized in that the texture attention module performs the following operations: step 5.1, inputting the texture features F_texture into a fully connected layer with a 1×1 convolution kernel so that their size is consistent with that of the first input feature map, concatenating the result with that feature map by channel, upsampling so that the size is consistent with that of the second input feature map, concatenating with that feature map by channel, inputting the result into a convolution block with a convolution kernel size of 3×3, and obtaining a texture fusion feature through normalization and a ReLU activation function; step 5.2, repeating the operation of step 5.1 at the second scale to obtain the corresponding texture fusion feature; step 5.3, repeating the operation at the third scale to obtain the corresponding texture fusion feature; step 5.4, repeating the operation at the fourth scale to obtain the corresponding texture fusion feature. (A sketch of this texture attention module is given after the claims.)
- 7. The condylar CBCT image segmentation method based on edge and texture features according to claim 5, characterized in that inputting the edge features F_edge and the region features F_de at different scales into the feature fusion module of the cortical and cancellous bone segmentation model to obtain the visual result prediction map of condylar cortex and cancellous bone segmentation comprises: step 6.1, performing maximum pooling on the edge features F_edge so that their size is consistent with that of the first texture fusion feature, concatenating them with that texture fusion feature by channel, and inputting the result into a convolution block with a convolution kernel size of 3×3 to obtain a new fusion feature; step 6.2, performing the same operation with the second texture fusion feature to obtain a new fusion feature; step 6.3, performing the same operation with the third texture fusion feature to obtain a new fusion feature; step 6.4, performing the same operation with the fourth texture fusion feature to obtain a new fusion feature; step 6.5, concatenating the four fusion features by channel, inputting the result into a convolution block with a convolution kernel size of 1×1 to obtain the final edge-enhanced region feature x, and applying an activation function to x to obtain the output feature result map. (A sketch of this fusion module is given after the claims.)
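The encoder and edge-extraction steps of claim 1 can be illustrated with a minimal PyTorch sketch. It assumes each "convolution block" is a 3×3 convolution followed by batch normalization and ReLU, with 2×2 max pooling between levels, and uses illustrative channel widths; the patent's actual dimensions are not reproduced here, and `edge_module` is a hypothetical placeholder for the edge extraction module supplied by the caller.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    """Assumed composition of the claim's 'convolution block': 3x3 conv + BN + ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class Encoder(nn.Module):
    """Five-level feature coding network (steps 2.1-2.6): a 3x3 conv block per level,
    with 2x2 max pooling between levels. Channel widths are illustrative."""
    def __init__(self, in_ch=1, widths=(64, 128, 256, 512, 1024)):
        super().__init__()
        self.blocks = nn.ModuleList()
        prev = in_ch
        for w in widths:
            self.blocks.append(ConvBlock(prev, w))
            prev = w
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        feats = []  # F_en: feature maps at different scales
        for i, blk in enumerate(self.blocks):
            if i > 0:
                x = self.pool(x)
            x = blk(x)
            feats.append(x)
        return feats

def extract_edge(f_low, f_high, edge_module):
    """Edge step of claim 1: upsample the high-level feature to the low-level feature's
    size, concatenate by channel, and run the (caller-supplied) edge module."""
    f_high_up = F.interpolate(f_high, size=f_low.shape[2:], mode="bilinear", align_corners=False)
    return edge_module(torch.cat([f_low, f_high_up], dim=1))  # F_edge, same size as f_low
```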
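Claim 2 describes a U-Net-style detection network (steps 1.1–1.11). The sketch below is one plausible reading, assuming bilinear upsampling, channel-wise concatenation with the encoder feature of the matching scale, and illustrative channel widths; none of these specifics are fixed by the extracted text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv3x3(in_ch, out_ch):
    # Assumed 3x3 "convolution block": conv + batch norm + ReLU
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class CondyleSegModel(nn.Module):
    """Condylar segmentation model per steps 1.1-1.11: a 5-level encoder (1.1-1.5),
    a decoder with upsampling plus skip concatenation (1.6-1.9), and a 1x1 head (1.10)."""
    def __init__(self, in_ch=1, widths=(64, 128, 256, 512, 1024), n_classes=1):
        super().__init__()
        self.enc = nn.ModuleList()
        prev = in_ch
        for w in widths:
            self.enc.append(conv3x3(prev, w))
            prev = w
        self.pool = nn.MaxPool2d(2)
        self.dec = nn.ModuleList()
        for w_skip, w_in in zip(reversed(widths[:-1]), reversed(widths[1:])):
            self.dec.append(conv3x3(w_in + w_skip, w_skip))
        self.head = nn.Conv2d(widths[0], n_classes, kernel_size=1)  # step 1.10

    def forward(self, x):
        skips = []
        for i, blk in enumerate(self.enc):
            if i > 0:
                x = self.pool(x)
            x = blk(x)
            skips.append(x)
        x = skips.pop()  # deepest feature
        for blk in self.dec:
            skip = skips.pop()
            x = F.interpolate(x, size=skip.shape[2:], mode="bilinear", align_corners=False)
            x = blk(torch.cat([skip, x], dim=1))
        return self.head(x)  # detection result used to crop I_ROI (step 1.11)
```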
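The HOG computation of claim 4 (steps 3.3–3.5) can be sketched in NumPy as below; the cell size and bin count are illustrative assumptions, and the window/block grouping of steps 3.2 and 3.5–3.6 is only indicated in a comment.

```python
import numpy as np

def hog_cell_features(slice_2d, cell=8, n_bins=9):
    """Minimal HOG sketch for one normalized condyle slice:
    central-difference gradients, magnitude/orientation, and an unsigned
    (0-180 degree) orientation histogram per non-overlapping cell."""
    H = slice_2d.astype(np.float64)
    gx = np.zeros_like(H)
    gy = np.zeros_like(H)
    gx[:, 1:-1] = H[:, 2:] - H[:, :-2]   # horizontal gradient G_x(x, y)
    gy[1:-1, :] = H[2:, :] - H[:-2, :]   # vertical gradient G_y(x, y)
    mag = np.hypot(gx, gy)               # gradient magnitude G(x, y)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # directions in 181-360° folded onto 0-180°

    rows, cols = H.shape[0] // cell, H.shape[1] // cell
    feats = np.zeros((rows, cols, n_bins))
    for r in range(rows):
        for c in range(cols):
            m = mag[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell].ravel()
            a = ang[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0.0, 180.0), weights=m)
            feats[r, c] = hist           # per-cell HOG histogram (step 3.4)
    # Step 3.5 would then L2-normalize the cell histograms within each block,
    # and step 3.6 would concatenate block features within each window.
    return feats
```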
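A minimal sketch of the texture attention module of claim 6, assuming the 1×1 "fully connected layer" is a 1×1 convolution for channel projection plus bilinear resizing, and that the two input feature maps are an encoder feature and a decoder feature; which feature maps are actually paired at each scale is not specified in the extracted text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextureAttention(nn.Module):
    """One texture attention step (claim 6): project the texture feature with a 1x1
    convolution, resize it to the first feature map and concatenate by channel,
    upsample to the second feature map and concatenate again, then a 3x3 conv +
    BN + ReLU yields the texture fusion feature. Channel widths are illustrative."""
    def __init__(self, tex_ch, enc_ch, dec_ch, out_ch):
        super().__init__()
        self.tex_proj = nn.Conv2d(tex_ch, enc_ch, kernel_size=1)   # 1x1 "fully connected layer"
        self.fuse = nn.Sequential(
            nn.Conv2d(enc_ch * 2 + dec_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, f_texture, f_enc, f_dec):
        t = self.tex_proj(f_texture)
        t = F.interpolate(t, size=f_enc.shape[2:], mode="bilinear", align_corners=False)
        x = torch.cat([t, f_enc], dim=1)   # concatenate with the first feature map
        x = F.interpolate(x, size=f_dec.shape[2:], mode="bilinear", align_corners=False)
        x = torch.cat([x, f_dec], dim=1)   # concatenate with the second feature map
        return self.fuse(x)                # texture fusion feature
```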
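A sketch of the feature fusion module of claim 7. The max pooling that matches F_edge to each texture fusion feature, the per-scale 3×3 conv blocks, and the final 1×1 reduction follow steps 6.1–6.5; resizing the four fused maps to a common scale before concatenation and the final sigmoid activation are assumptions, since the patent's activation formula is not reproduced in this text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeRegionFusion(nn.Module):
    """Feature fusion module (claim 7): per scale, pool F_edge to the texture fusion
    feature's size, concatenate by channel, and apply a 3x3 conv block (steps 6.1-6.4);
    then concatenate the four results and reduce with a 1x1 convolution (step 6.5)."""
    def __init__(self, edge_ch, fusion_chs=(64, 128, 256, 512), out_ch=64, n_classes=1):
        super().__init__()
        self.per_scale = nn.ModuleList([
            nn.Sequential(nn.Conv2d(edge_ch + c, out_ch, 3, padding=1),
                          nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for c in fusion_chs
        ])
        self.reduce = nn.Conv2d(out_ch * len(fusion_chs), n_classes, kernel_size=1)

    def forward(self, f_edge, texture_fusions):
        fused = []
        for blk, f_t in zip(self.per_scale, texture_fusions):
            e = F.adaptive_max_pool2d(f_edge, output_size=f_t.shape[2:])  # match sizes by pooling
            fused.append(blk(torch.cat([e, f_t], dim=1)))
        # Assumption: bring all fused maps to a common size before channel-wise concatenation.
        target = fused[0].shape[2:]
        fused = [F.interpolate(f, size=target, mode="bilinear", align_corners=False) for f in fused]
        x = self.reduce(torch.cat(fused, dim=1))   # edge-enhanced region feature
        return torch.sigmoid(x)                    # assumed activation for the prediction map
```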
Description
Condylar CBCT image segmentation method based on edge and texture features

Technical Field
The invention belongs to the field of medical image processing, and particularly relates to a condylar CBCT image segmentation method based on edge and texture features.

Background
Temporomandibular joint disorders (TMD) are the most common diseases of the oromaxillofacial region: a group of diseases whose pathogenesis is not completely understood and in which the temporomandibular joint (TMJ) cannot function normally. TMD has a high incidence rate and ranks fourth among common oral diseases. Temporomandibular joint osteoarthritis (TMJ-OA) is a subtype of TMD that can lead to severe joint pain, dysfunction, malocclusion, and degradation of health-related quality of life. On imaging, TMJ-OA can be diagnosed by changes in the microstructure of the condyle, since the affected condyle may exhibit alterations in bone microstructure, including bone hardening and erosion blurring of the cortical bone. Early recognition of changes in condylar bone microstructure therefore plays an important role in the diagnosis and treatment of temporomandibular joint osteoarthritis. Cone Beam Computed Tomography (CBCT) has now been recommended as one of the most reliable methods of diagnosing changes in the mandibular condyle bone, because the integrity of the condylar cortex and the underlying cancellous bone can be observed. However, the mandibular condyle remains one of the most difficult structures to segment on CBCT images: the condyle has varied and complex forms, relatively low bone density, and a blurred glenoid fossa near the articular disc, making it difficult to describe anatomically. In addition, there are currently few studies that use deep learning for fine segmentation of the condyle in CBCT images. In the prior art, segmenting the condyle in the image with a model only yields a whole-condyle semantic segmentation; the condylar cortex and the underlying cancellous bone cannot be further finely segmented, so the diagnostic and treatment efficiency of doctors is not improved. Although some techniques achieve instance segmentation, no targeted deep learning method has been used to improve the segmentation precision of the condylar cortex and cancellous bone; the various kinds of feature information between cortex and cancellous bone are not fully considered, and the problem of blurred boundaries between cortex and cancellous bone after pathological change cannot be solved. When the condyle is further segmented, the segmentation of cortex and cancellous bone is not accurate enough, and the cortex of an abnormal condyle may be mispredicted as cancellous bone.

Disclosure of Invention
The application aims to overcome the defects in the prior art and provides a condylar CBCT image segmentation method based on edge and texture features, which further realizes fine segmentation of the condylar cortex and cancellous bone on the basis of improving the prior art and solves the problem of difficult segmentation caused by the blurred boundary between the condylar cortex and the cancellous bone.
In order to achieve the above purpose, the technical scheme of the application is as follows. A condylar CBCT image segmentation method based on edge and texture features comprises the following steps: inputting the original condylar CBCT image into a condylar segmentation model to obtain a detection result for the whole condyle, and cropping the original condylar CBCT image according to the detection result to obtain a region-of-interest image I_ROI of the condyle; inputting the region-of-interest image I_ROI into a feature coding network of a cortical and cancellous bone segmentation model to obtain feature maps F_en at different scales; inputting the region-of-interest image I_ROI into a texture feature extraction module of the cortical and cancellous bone segmentation model to obtain texture features F_texture; inputting the feature maps F_en into a feature decoding network of the cortical and cancellous bone segmentation model to decode the region features at different scales, obtaining region features F_de; inputting the low-level features F_e_low and the high-level features F_e_high in the feature maps F_en into an edge feature extraction module of the cortical and cancellous bone segmentation model to obtain edge features F_edge; and inputting the edge features F_edge and the region features F_de at different scales into a feature fusion module of the cortical and cancellous bone segmentation model to obtain a visual result prediction map of condylar cortex and cancellous bone segmentation. Further, the step of inputting the original condylar CBCT image into the condylar segmentation model to obtain a detection result of the whole condyle, and cutting the original condylar CBCT image according to the detection result to obtain a