CN-121982048-A - Automatic segmentation method and system for oral and maxillofacial CT image
Abstract
The invention relates to the technical field of edge segmentation, and in particular to an automatic segmentation method and system for oral and maxillofacial CT images. First, local geometric texture matching is performed between the current bone model data and historical bone model data, and key feature points representing key textures in the CT slice images are screened out in the spatial dimension; then, the edge probability and the edge feature value of each pixel position are determined from the coincidence and the slice-to-slice variation of edge positions across the CT slice images; finally, the real edge lines are screened out based on the distribution of the edge feature values and the positions of the key feature points, so that image segmentation of the current CT slice image of the oral and maxillofacial region based on the real edge lines achieves higher accuracy.
Inventors
- DU HANG
- QIN XUCAI
- TONG JIN
Assignees
- Xi'an International Medical Center Co., Ltd. (西安国际医学中心有限公司)
Dates
- Publication Date: 2026-05-05
- Application Date: 2026-03-16
Claims (10)
- 1. An automatic segmentation method for an oral and maxillofacial Computed Tomography (CT) image, the method comprising: acquiring all CT slice images and current bone model data corresponding to the oral and maxillofacial region of a current patient, and historical bone model data corresponding to the oral and maxillofacial regions of all historical patients; screening key feature points in the CT slice images according to the local geometric texture matching between the current bone model data and the historical bone model data; determining the edge probability of each pixel position in each CT slice image according to the coincidence of edge positions at each pixel position across the CT slice images; determining the edge feature value of each pixel position in the current CT slice image according to the variation of the edge probability at each pixel position across the CT slice images; and performing image segmentation on the current CT slice image corresponding to the oral and maxillofacial region according to the distribution of the edge feature values and the positions of the key feature points.
- 2. The automatic oral and maxillofacial CT image segmentation method according to claim 1, wherein the key feature points are obtained by: projecting each historical bone model data set onto the current bone model data to obtain, for each initial feature point, a corresponding reference feature point in each historical bone model data set; determining a local matching coefficient of each initial feature point under each historical bone model data set according to the similarity in curvature and normal vector between the initial feature point in the current bone model data and its corresponding reference feature point; determining an overall matching degree as the average of the local matching coefficients of each initial feature point over all historical bone model data sets; and determining, according to the position of each screening feature point in the current bone model data, the corresponding key feature points in each CT slice image.
- 3. The automatic oral and maxillofacial CT image segmentation method according to claim 2, wherein the local matching coefficients are obtained by: normalizing the difference between the curvature of each initial feature point in the current bone model data and the curvature of its corresponding reference feature point in each historical bone model data set to determine a curvature deviation coefficient of the initial feature point under that historical bone model data set; normalizing the included angle between the normal vector of each initial feature point in the current bone model data and that of its corresponding reference feature point in each historical bone model data set to determine a normal vector deviation coefficient of the initial feature point under that historical bone model data set; and performing a negative-correlation mapping on the product of the curvature deviation coefficient and the normal vector deviation coefficient to determine the local matching coefficient of the initial feature point under that historical bone model data set.
- 4. The automatic oral and maxillofacial CT image segmentation method according to claim 2, wherein the screening feature points are obtained by: taking the initial feature points whose overall matching degree is greater than a preset matching threshold as the screening feature points in the current bone model data.
- 5. The automatic oral and maxillofacial CT image segmentation method according to claim 1, wherein the edge probability is obtained by: determining an edge information parameter for each pixel position according to the distribution of edge information in each CT slice image; sequentially taking each CT slice image as the target slice image, taking the remaining CT slice images as reference CT slice images, and sequentially taking each pixel position as the target position; performing a positive-correlation mapping on the included angle between the normal vector of the edge point at the target position in the target slice image and the normal vector of the corresponding edge point in each reference CT slice image to determine the edge direction consistency, wherein when the target position is not an edge point in the target slice image or in the corresponding reference CT slice image, the edge direction consistency is set to a preset edge parameter; determining the edge credibility of the target position under each reference CT slice image according to the overall magnitude of and the relative deviation between the edge information parameters of the target slice image and each reference CT slice image, together with the edge direction consistency; and normalizing the mean of the edge credibilities of the target position over all reference CT slice images to determine the edge probability of the target position in the target slice image.
- 6. The automatic oral and maxillofacial CT image segmentation method according to claim 5, wherein the edge information parameter is obtained by: among all edge information images corresponding to each CT slice image, taking the ratio of the number of edge information images having an edge point at each pixel position to the total number of edge information images as the edge information parameter of that pixel position in that CT slice image.
- 7. The automatic oral and maxillofacial CT image segmentation method according to claim 5, wherein the edge credibility is obtained by: taking the sum of the edge information parameter of the target position in the target slice image and its edge information parameter in each reference CT slice image as the edge significance; performing a negative-correlation mapping on the difference between the edge information parameter of the target position in the target slice image and its edge information parameter in each reference CT slice image to determine the edge significance consistency; and determining the edge credibility of the target position under each reference CT slice image as the product of the edge direction consistency, the edge significance consistency and the edge significance.
- 8. The automatic oral and maxillofacial CT image segmentation method according to claim 1, wherein the edge feature value is obtained by: in the CT slice image sequence, performing a negative-correlation mapping on the difference between the edge probability of each pixel position in each CT slice image and its edge probability in the previous CT slice image to determine the edge change parameter, taken as the edge validity, of each pixel position in each CT slice image; and determining the edge feature value of each pixel position in the current CT slice image as the product of its edge probability in the current CT slice image and its edge validity.
- 9. The automatic oral and maxillofacial CT image segmentation method according to claim 1, wherein the image segmentation of the current CT slice image corresponding to the oral and maxillofacial region according to the distribution of the edge feature values and the positions of the key feature points comprises: computing the mean of the edge feature values over all pixel positions on each edge line to determine its edge confidence; among all edge lines containing no key feature point, taking the edge lines whose edge confidence is smaller than a preset confidence threshold as pseudo edge lines; taking the remaining edge lines as the real edge lines in the current CT slice image; and performing image segmentation on the current CT slice image corresponding to the oral and maxillofacial region according to the real edge lines.
- 10. An automatic oral and maxillofacial CT image segmentation system, comprising: a data acquisition and preprocessing module, configured to acquire all CT slice images and current bone model data corresponding to the oral and maxillofacial region of a current patient, and historical bone model data corresponding to the oral and maxillofacial regions of all historical patients; a parameter determination module, configured to screen key feature points in the CT slice images according to the local geometric texture matching between the current bone model data and the historical bone model data, to determine the edge probability of each pixel position in each CT slice image according to the coincidence of edge positions at each pixel position across the CT slice images, and to determine the edge feature value of each pixel position in the current CT slice image according to the variation of the edge probability at each pixel position across the CT slice images; and an image segmentation module, configured to perform image segmentation on the current CT slice image corresponding to the oral and maxillofacial region according to the distribution of the edge feature values and the positions of the key feature points.
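Claims 5 through 7 describe a per-pixel scoring scheme whose overall shape can be illustrated with a short sketch. This is a minimal, non-authoritative Python illustration: the claims fix only the direction of each mapping, so the cosine form of the positive-correlation mapping of the normal angle, the `exp(-x)` negative-correlation mapping, the preset edge parameter of 0.5, and normalization by the maximum attainable credibility of 2 are all assumptions.

```python
import math

def edge_direction_consistency(theta, is_edge_t, is_edge_r, preset=0.5):
    """Claim 5: map the included angle between edge-point normals to [0, 1].
    A cosine mapping is assumed; when the target position is not an edge
    point in either slice, a preset edge parameter (0.5 here) is used."""
    if not (is_edge_t and is_edge_r):
        return preset
    return (1.0 + math.cos(theta)) / 2.0

def edge_credibility(p_t, p_r, theta, is_edge_t, is_edge_r):
    """Claim 7: edge credibility is the product of the edge direction
    consistency, the edge significance consistency (a negative-correlation
    mapping, exp(-x) assumed, of the difference between the two edge
    information parameters) and the edge significance (their sum)."""
    significance = p_t + p_r
    significance_consistency = math.exp(-abs(p_t - p_r))
    direction = edge_direction_consistency(theta, is_edge_t, is_edge_r)
    return direction * significance_consistency * significance

def edge_probability(credibilities):
    """Claim 5: normalize the mean credibility over all reference slices.
    Division by the maximum attainable credibility (2) is assumed."""
    return sum(credibilities) / len(credibilities) / 2.0
```

Under these assumptions, a pixel that is a strong, direction-consistent edge in both the target and every reference slice scores an edge probability near 1, while a pixel that is an edge in only one slice falls back to the preset direction parameter and scores lower.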
Description
Automatic segmentation method and system for oral and maxillofacial CT image

Technical Field

The invention relates to the technical field of edge segmentation, and in particular to an automatic segmentation method and system for an oral and maxillofacial CT image.

Background

Automatic segmentation of oral and maxillofacial CT images is a technique in which a computer automatically processes CT images and extracts the structures and regions of the oral cavity and the maxillofacial area. It improves the efficiency of medical image analysis and is therefore widely applied in medical data mining. In the prior art, edge detection is generally performed directly on the CT slice images of the oral and maxillofacial region by the Canny edge detection method, and image segmentation is performed on the resulting edge connected components. In practice, however, patients present diverse pathological conditions: the contrast between soft tissues is low, and structural boundaries may be blurred, especially the boundary between soft tissue and bone tissue; in addition, false edges produced by noise interference degrade segmentation accuracy. Consequently, segmenting an image from the edge detection result of a single CT slice image, as in the prior art, yields low accuracy.
Disclosure of Invention

In order to solve the technical problem that image segmentation based on the edge detection result of a single CT slice image has low accuracy in the prior art, the application aims to provide an automatic segmentation method and system for an oral and maxillofacial CT image. The adopted technical scheme is as follows:

A first aspect of the application provides an automatic segmentation method for an oral and maxillofacial CT image, comprising the following steps: acquiring all CT slice images and current bone model data corresponding to the oral and maxillofacial region of a current patient, and historical bone model data corresponding to the oral and maxillofacial regions of all historical patients; screening key feature points in the CT slice images according to the local geometric texture matching between the current bone model data and the historical bone model data; determining the edge probability of each pixel position in each CT slice image according to the coincidence of edge positions at each pixel position across the CT slice images; determining the edge feature value of each pixel position in the current CT slice image according to the variation of the edge probability at each pixel position across the CT slice images; and performing image segmentation on the current CT slice image corresponding to the oral and maxillofacial region according to the distribution of the edge feature values and the positions of the key feature points.
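The last two steps of the first aspect, computing edge feature values from the slice-to-slice change in edge probability and then screening the real edge lines, can be sketched as follows. This is a hedged illustration, not the patent's implementation: the `exp(-x)` negative-correlation mapping, the reading of the edge change parameter as the edge validity, and the representation of edge lines as lists of pixel indices are all assumptions.

```python
import math

def edge_feature_values(prob_prev, prob_cur):
    """Edge change parameter: a negative-correlation mapping (exp(-x)
    assumed) of the difference between each pixel's edge probability in
    the current slice and in the previous slice. The edge feature value
    multiplies the current edge probability by this value, read here as
    the 'edge validity'."""
    return [p1 * math.exp(-abs(p1 - p0))
            for p0, p1 in zip(prob_prev, prob_cur)]

def screen_real_edges(edge_lines, has_key_point, feature_values, threshold):
    """An edge line's confidence is the mean feature value over its pixel
    indices. Among lines without key feature points, those below the
    preset confidence threshold are pseudo edges and are discarded; lines
    containing a key feature point are kept."""
    real = []
    for line, keyed in zip(edge_lines, has_key_point):
        confidence = sum(feature_values[i] for i in line) / len(line)
        if keyed or confidence >= threshold:
            real.append(line)
    return real
```

The intuition: a pixel whose edge probability is stable between adjacent slices keeps its full probability as its feature value, whereas an edge that appears in only one slice (likely noise) is attenuated, which is what lets the pseudo-edge threshold separate the two.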
Further, the key feature points are obtained as follows: projecting each historical bone model data set onto the current bone model data to obtain, for each initial feature point, a corresponding reference feature point in each historical bone model data set; determining a local matching coefficient of each initial feature point under each historical bone model data set according to the similarity in curvature and normal vector between the initial feature point in the current bone model data and its corresponding reference feature point; determining an overall matching degree as the average of the local matching coefficients of each initial feature point over all historical bone model data sets; and determining, according to the position of each screening feature point in the current bone model data, the corresponding key feature points in each CT slice image. Further, the local matching coefficient is obtained as follows: normalizing the difference between the curvature of each initial feature point in the current bone model data and the curvature of its corresponding reference feature point in each historical bone model data set to determine a curvature deviation coefficient of the initial feature point under that historical bone model data set; normalizing the included angle between the normal vector of each initial feature point in the current bone model data and that of its corresponding reference feature point in each historical bone model data set to determine a normal vector deviation coefficient of the initial feature point under that historical bone model data set; and performing a negative-correlation mapping on the product of the curvature deviation coefficient and the normal vector deviation coefficient to determine the local matching coefficient of the initial feature point under that historical bone model data set.
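The local matching coefficient just described can be sketched as below. The description fixes only the direction of each mapping, so the concrete normalizations (relative curvature difference, included angle divided by pi) and the `exp(-x)` negative-correlation mapping are assumptions made for illustration.

```python
import math

def local_matching_coefficient(curv_cur, curv_ref, n_cur, n_ref):
    """Local matching coefficient of one initial feature point against its
    reference point in one historical bone model (cf. claim 3). The exact
    normalizations and the exp(-x) mapping are assumptions."""
    # Curvature deviation coefficient: relative curvature difference in [0, 1].
    d_curv = abs(curv_cur - curv_ref) / (abs(curv_cur) + abs(curv_ref) + 1e-9)
    # Normal-vector deviation coefficient: included angle normalized by pi.
    dot = sum(a * b for a, b in zip(n_cur, n_ref))
    na = math.sqrt(sum(a * a for a in n_cur))
    nb = math.sqrt(sum(b * b for b in n_ref))
    angle = math.acos(max(-1.0, min(1.0, dot / (na * nb + 1e-9))))
    d_norm = angle / math.pi
    # Negative-correlation mapping of the product of the two deviations.
    return math.exp(-d_curv * d_norm)

def overall_matching_degree(coefficients):
    """Mean local matching coefficient over all historical bone models
    (cf. claim 2); points above a preset threshold become screening
    feature points (cf. claim 4)."""
    return sum(coefficients) / len(coefficients)
```

A point whose curvature and normal agree with its reference across models scores close to 1, while curvature or orientation mismatch drives the coefficient down, so thresholding the overall matching degree selects the stable geometry that the method treats as key texture.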