
CN-121981962-A - Periodontitis stage diagnosis auxiliary method based on deep learning

CN121981962A

Abstract

The invention relates to the technical field of image recognition, and in particular to a deep-learning-based auxiliary method for periodontitis stage diagnosis. The method comprises: collecting a panoramic dental radiograph of a patient and preprocessing it to obtain a preprocessed image; segmenting the maxillofacial region from the preprocessed image with a semantic segmentation model; identifying the landmark positions of each tooth in the maxillofacial region with a mask segmentation model; generating an RBL (radiographic bone loss) value for each tooth from the landmark positions; and outputting key attention regions. Prior-art schemes that have a model directly learn periodontitis features for auxiliary identification offer poor interpretability of the diagnostic process. In this scheme, different models instead segment the maxillofacial region step by step to obtain single teeth, specific landmark positions are extracted for each tooth, RBL values are computed from pixel coordinates, and teeth worth attention are screened out, so the processing flow is highly interpretable and provides doctors with more reference information.

Inventors

  • LU JIAWEI
  • HE MENGKE
  • DUAN HUI
  • LUO LIJUN
  • YAO ZIHE
  • CHEN LINGFAN

Assignees

  • Shanghai Tongji Stomatological Hospital (Stomatological Hospital Affiliated to Tongji University) (上海市同济口腔医院(同济大学附属口腔医院))

Dates

Publication Date
2026-05-05
Application Date
2025-12-29

Claims (10)

  1. A deep-learning-based auxiliary method for periodontitis stage diagnosis, characterized by comprising the following steps: S1, collecting a panoramic dental radiograph of a patient and preprocessing it to obtain a preprocessed image; S2, segmenting the preprocessed image with a semantic segmentation model to obtain a maxillofacial region; S3, identifying the maxillofacial region with a mask segmentation model to obtain the landmark position of each tooth; and S4, generating an RBL value for each tooth according to the landmark positions and outputting key attention regions.
  2. The method according to claim 1, wherein step S1 comprises: S11, collecting the panoramic dental radiograph and carrying out noise filtering on it to obtain a denoised image; and S12, carrying out normalization processing on the denoised image to obtain the preprocessed image with uniform brightness and contrast.
  3. The method according to claim 1, wherein in step S2 the semantic segmentation model is implemented using a pre-trained Res-U-Net model.
  4. The method according to claim 3, further comprising a first training process for training the semantic segmentation model with a first data set generated in advance before step S1 is performed; the method further comprises a labeling process for generating the first data set; the labeling process comprises the following steps: A11, collecting standard medical images and preprocessing them to obtain preprocessed standard images; A12, marking the cementoenamel junction boundary, the alveolar crest boundary, and the root apex position of each tooth in the preprocessed standard images to form labeled images; and A13, generating the first data set based on the labeled images.
  5. The method according to claim 4, further comprising, after step A12 is performed: B12, scaling and rotating the labeled images to augment their number.
  6. The method according to claim 4, wherein the recognition result of the semantic segmentation model is measured with the intersection-over-union (IoU) ratio during the first training process.
  7. The method according to claim 1, further comprising a second training process for training the mask segmentation model before step S1 is performed; the mask segmentation model is realized by training based on the Mask R-CNN model; in the second training process, the model loss is measured with the Mask R-CNN loss function: L = L_cls + L_box + L_mask, where L is the overall loss, L_cls is the classification loss, L_box is the bounding-box regression loss, and L_mask is the mask prediction loss.
  8. The method according to claim 1, wherein in step S4 the RBL value generation process includes: wherein CEJ_L and CEJ_R denote the cementoenamel junction points on the left and right sides of the tooth, crest_L and crest_R denote the alveolar crest points on the two sides, and root_L and root_R denote the root apex points on the two sides.
  9. The method for assisting periodontitis stage diagnosis according to claim 7, wherein step S4 comprises: S41, extracting single-tooth regions from the segmented image output by the mask segmentation model; S42, acquiring the segmented landmark positions for each single-tooth region, the landmark positions comprising the cementoenamel junction point, the alveolar crest point, and the root apex point; S43, generating an RBL value for each tooth according to the landmark positions; and S44, screening with the RBL values against a preset threshold to obtain target teeth, and outputting the corresponding single-tooth regions as the key attention regions according to the target teeth.
  10. A storage medium comprising computer instructions which, when executed by a computer device, perform the method for assisting periodontitis stage diagnosis according to any one of claims 1 to 9.
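The per-tooth RBL computation and threshold screening described in claims 8 and 9 can be sketched as follows. The claims do not reproduce the exact formula, so the geometry used here is an assumption: bone loss (CEJ to alveolar crest distance) as a fraction of root length (CEJ to root apex distance), averaged over the left and right sides of the tooth, all in pixel coordinates. The function names, the landmark dictionary keys, and the 0.33 threshold are likewise hypothetical.

```python
from math import dist  # Euclidean distance between two pixel coordinates


def rbl_value(cej, crest, apex):
    """RBL for one side of a tooth: bone loss (CEJ to alveolar crest)
    as a fraction of total root length (CEJ to root apex).

    This ratio is an assumed reconstruction; the patent's formula is
    not reproduced in the source text."""
    return dist(cej, crest) / dist(cej, apex)


def tooth_rbl(landmarks):
    """Average the left- and right-side RBL values for one tooth (step S43)."""
    left = rbl_value(landmarks["cej_l"], landmarks["crest_l"], landmarks["root_l"])
    right = rbl_value(landmarks["cej_r"], landmarks["crest_r"], landmarks["root_r"])
    return (left + right) / 2


def screen_teeth(all_landmarks, threshold=0.33):
    """Step S44: flag teeth whose RBL exceeds a preset threshold,
    returning {tooth_id: rbl} for the key attention regions."""
    return {tooth: rbl
            for tooth, rbl in ((t, tooth_rbl(lm)) for t, lm in all_landmarks.items())
            if rbl > threshold}
```

For example, a tooth whose CEJ, crest, and apex lie at (0, 0), (0, 4), and (0, 10) on both sides would yield an RBL of 0.4 and be flagged under the assumed 0.33 threshold.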
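The training metric in claim 6 (its "cross-correlation ratio" is most plausibly a translation of the intersection-over-union, IoU, the standard segmentation metric) can be illustrated with a minimal sketch. Binary masks are shown as nested 0/1 lists purely for clarity; real training code would operate on tensors, and the function name `mask_iou` is hypothetical.

```python
def mask_iou(pred, target):
    """Intersection-over-union between two binary masks of equal shape."""
    inter = 0
    union = 0
    for row_p, row_t in zip(pred, target):
        for p, t in zip(row_p, row_t):
            inter += p & t  # pixel in both masks
            union += p | t  # pixel in either mask
    # Two all-zero masks agree perfectly; avoid division by zero.
    return inter / union if union else 1.0
```

An IoU of 1.0 means the predicted maxillofacial mask exactly matches the annotation; values near 0 indicate little overlap.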

Description

Periodontitis stage diagnosis auxiliary method based on deep learning

Technical Field

The invention relates to the technical field of image recognition, and in particular to a deep-learning-based auxiliary method for periodontitis stage diagnosis.

Background

Periodontitis is a chronic inflammation caused by bacterial infection, which develops mainly from gingivitis. In dental clinical and radiographic (X-ray) diagnosis, the cementoenamel junction (CEJ), the alveolar crest, and the root apex are key reference points for judging the severity and progression of periodontitis. From the relative positional change of these three points, the physician can quantify the degree of damage to the periodontal tissue. Computer-based schemes for assisting in the identification of periodontitis exist in the prior art. For example, patent document CN202510097300.4 discloses a periodontitis identification system based on periodontal photographs, which automatically identifies periodontitis from such photographs. The system comprises five modules: an image acquisition and preprocessing module, a multi-instance learning model training module, a classification and diagnosis module, a common user side, and an expert user side. The system preprocesses the acquired periodontal photographs with image processing and object detection techniques, eliminates low-quality pictures, crops out tooth regions, classifies the images using a multi-instance learning framework combined with a contrastive loss, and identifies the risk level of periodontitis. The user side provides visualized diagnosis results, and the expert side provides more detailed analysis information and suggestions for the expert.
That invention can effectively improve the diagnostic precision for periodontitis, lighten the clinical workload, realize early automatic identification of periodontitis, and provide timely and effective treatment advice for patients. As another example, patent document CN202510097299.5 discloses a multi-dimensional periodontitis identification and detection method based on incomplete supervision information. The method first obtains labeled periodontal disease images, removes unrecognizable images, and crops the parts to be identified from the remaining images; it classifies the images according to whether the gums are inflamed and the degree of periodontal recession, estimates the label distribution of all samples with a multi-dimensional classification model, and modifies the training loss for each type of data; it then generates pseudo labels for unlabeled periodontitis image data with the multi-dimensional classification model, obtains more accurate pseudo labels by estimating the class bias of the existing parameters, and retrains the model with the corrected labels; finally, the trained model generates the final labels corresponding to the images to obtain the final identification results. Using a semi-supervised learning framework, in a scenario with only a small amount of labeled data and imbalanced classes, an unbiased loss is obtained by estimating the label distribution and the pseudo labels are corrected by estimating the class bias, so the problems of class imbalance and limited labeled data are alleviated and the model achieves higher recognition accuracy. However, in practical implementation the inventors found that such schemes generally adopt a deep learning model to directly learn and identify features of periodontitis, and the identification process has relatively poor interpretability.
Disclosure of Invention

Aiming at the problems in the prior art, a deep-learning-based auxiliary method for periodontitis stage diagnosis is provided. The specific technical scheme is as follows: an auxiliary method for periodontitis stage diagnosis based on deep learning, comprising the following steps: S1, collecting a panoramic dental radiograph of a patient and preprocessing it to obtain a preprocessed image; S2, segmenting the preprocessed image with a semantic segmentation model to obtain a maxillofacial region; S3, identifying the maxillofacial region with a mask segmentation model to obtain the landmark position of each tooth; and S4, generating an RBL value for each tooth according to the landmark positions and outputting key attention regions. On the other hand, step S1 includes: S11, collecting the panoramic dental radiograph and carrying out noise filtering on it to obtain a denoised image; and S12, carrying out normalization processing on the denoised image to obtain the preprocessed image with uniform brightness and contrast. On the other hand, in step S2 the semantic segmentation model is implemented with a pre-trained Res-U-Net model. On the other hand, a first train