CN-121982312-A - Automatic oral cavity image segmentation and identification method
Abstract
The invention relates to the technical field of image processing, and in particular to an automatic oral-cavity image segmentation and identification method. The method comprises: determining whether to perform image supplementation according to a comparison of a target identification model's identification error rate with a preset error rate; during image supplementation, executing a supplementation mode based on the identification-feature difference degree for the target identification model; judging whether the supplementation mode includes supplementation based on out-of-standard parameters according to the dislocation anomaly degree and the artifact interference degree of an anomalously identified image; judging whether the supplementation mode includes refinement-analysis supplementation according to the pixel confusion comparison degree of the anomalously identified image; and training the target identification model on the supplementation images together with the training images. The invention screens the model's training data so as to improve image recognition accuracy.
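The decision flow described in the abstract can be sketched as a threshold-driven pipeline. The patent defines only the comparisons, not numeric values or an implementation, so every name and preset below is an illustrative assumption:

```python
from dataclasses import dataclass


@dataclass
class SupplementThresholds:
    # All preset values are hypothetical placeholders; the patent
    # specifies the comparisons but not the numbers.
    error_rate: float = 0.10
    dislocation_anomaly: float = 0.5
    artifact_interference: float = 0.5
    pixel_confusion: float = 0.5


def choose_supplement_modes(error_rate, dislocation, artifact, confusion,
                            t=SupplementThresholds()):
    """Return the supplementation modes implied by the abstract's comparisons."""
    if error_rate < t.error_rate:
        return []  # model is accurate enough; no image supplementation
    modes = ["feature_difference"]  # executed whenever supplementation runs
    # Out-of-standard-parameter supplementation: dislocation OR artifact over preset
    if dislocation >= t.dislocation_anomaly or artifact >= t.artifact_interference:
        modes.append("out_of_standard_parameters")
    # Refinement-analysis supplementation: pixel confusion comparison over preset
    if confusion >= t.pixel_confusion:
        modes.append("refinement_analysis")
    return modes
```

For example, `choose_supplement_modes(0.2, 0.6, 0.1, 0.7)` would select all three modes, while an error rate below the preset skips supplementation entirely.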
Inventors
- SUN LU
- XU LAIQING
- YANG YANG
- LI HUA
- HAO SIWEI
Assignees
- 中国人民解放军总医院第一医学中心 (First Medical Center, Chinese PLA General Hospital)
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2026-01-23
Claims (10)
- 1. An automated oral image segmentation and recognition method, comprising: determining whether to perform image supplementation according to a comparison of the recognition error rate of a target recognition model with a preset error rate, so as to obtain supplementation images; during image supplementation, executing a supplementation mode based on the recognition-feature difference degree for the target recognition model, judging whether the supplementation mode includes supplementation based on out-of-standard parameters according to the dislocation anomaly degree and the artifact interference degree of an anomalously recognized image, and judging whether the supplementation mode includes refinement-analysis supplementation according to the pixel confusion comparison degree of the anomalously recognized image; during refinement-analysis supplementation, determining supplementation based on the region fit degree, or performing interference-analysis supplementation, according to the overlap feature value and the texture association degree; during interference-analysis supplementation, executing a refined supplementation mode based on an edge interference coefficient for the target recognition model, and determining whether to adjust the refined supplementation mode to regular-texture-analysis supplementation according to the aggregation degree of interference pixel points; during regular-texture-analysis supplementation, performing image supplementation based on either a texture distribution entropy or a texture regularity coefficient, selected according to the texture regularity; and training the target recognition model on the supplementation images and the training images.
- 2. The automated oral image segmentation and recognition method according to claim 1, wherein image supplementation is performed for a target recognition model whose recognition error rate is greater than or equal to the preset recognition error rate.
- 3. The automated oral image segmentation and recognition method according to claim 2, wherein the supplementation mode is determined to include supplementation based on out-of-standard parameters for a target recognition model whose dislocation anomaly degree is greater than or equal to a preset dislocation anomaly degree or whose artifact interference degree is greater than or equal to a preset artifact interference degree.
- 4. The automated oral image segmentation and recognition method according to claim 3, wherein the supplementation mode is determined to include refinement-analysis supplementation for a target recognition model whose pixel confusion comparison degree is greater than or equal to a preset pixel confusion comparison degree.
- 5. The automated oral image segmentation and recognition method according to claim 4, wherein supplementation is determined based on the region fit degree for a target recognition model whose overlap feature value is less than a preset overlap feature value and whose texture association degree is greater than or equal to a preset texture association degree.
- 6. The automated oral image segmentation and recognition method according to claim 5, wherein interference-analysis supplementation is performed for a target recognition model whose overlap feature value is greater than or equal to the preset overlap feature value or whose texture association degree is less than the preset texture association degree.
- 7. The automated oral image segmentation and recognition method according to claim 6, wherein, when the refined supplementation mode based on the edge interference coefficient is executed for the target recognition model, an influence region is determined based on an edge expansion width, a candidate edge interference coefficient is determined based on the number of interference pixel points and the sub-aggregation degree within the influence region, and candidate images are supplemented in descending order of the edge interference coefficient.
- 8. The automated oral image segmentation and recognition method according to claim 7, wherein the refined supplementation mode is adjusted to regular-texture-analysis supplementation for a target recognition model whose interference pixel point aggregation degree is less than a preset interference pixel point aggregation degree.
- 9. The automated oral image segmentation and recognition method according to claim 8, wherein image supplementation is performed based on the texture regularity coefficient for a target recognition model whose texture regularity is greater than or equal to a preset texture regularity.
- 10. The automated oral image segmentation and recognition method according to claim 9, wherein supplementation is based on the texture distribution entropy for a target recognition model whose texture regularity is less than the preset texture regularity.
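Claims 5 through 10 together define a nested branch structure over the measured quantities. A minimal sketch of that branching, with hypothetical variable names and presets (the claims specify only the comparisons, not values):

```python
def refinement_branch(overlap, texture_assoc, aggregation, regularity,
                      preset_overlap=0.5, preset_assoc=0.5,
                      preset_aggregation=0.5, preset_regularity=0.5):
    """Map the quantities from claims 5-10 to a supplementation branch."""
    # Claim 5: low overlap and strong texture association -> region fit
    if overlap < preset_overlap and texture_assoc >= preset_assoc:
        return "region_fit"
    # Claim 6: otherwise perform interference analysis.
    # Claim 8: sparse interference pixels -> switch to regular texture analysis
    if aggregation < preset_aggregation:
        # Claims 9-10: pick the texture statistic by regularity
        if regularity >= preset_regularity:
            return "texture_regularity_coefficient"
        return "texture_distribution_entropy"
    # Claim 7: otherwise keep the edge-interference-coefficient mode,
    # supplementing candidate images in descending coefficient order.
    return "edge_interference_coefficient"
```

This sketch returns only the branch label; claim 7's actual ranking of candidate images by edge interference coefficient would happen inside the `edge_interference_coefficient` branch.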
Description
Automatic oral cavity image segmentation and identification method

Technical Field
The invention relates to the technical field of image processing, and in particular to an automatic oral image segmentation and identification method.

Background
When a model is used to recognize oral cavity images, the training data often suffer from uneven quality, incomplete feature coverage, and unbalanced data distribution. As a result, the model struggles to fully learn the effective features of oral tissue and is prone to recognition deviations such as region misjudgment. How to screen the model's training data so as to improve image recognition accuracy is therefore a technical problem to be solved urgently by those skilled in the art.

Chinese patent publication No. CN114926627A discloses an oral structure positioning model training method, positioning method, device, and electronic equipment. The method comprises: obtaining a plurality of oral images with marked oral feature regions; selecting a training set and a verification set from the plurality of oral images; inputting any oral image in the training set into an initial oral structure positioning model to obtain a feature map corresponding to each oral image in the training set; performing region-feature aggregation on each region of interest in the feature map to determine the oral feature region; performing image semantic segmentation on the region of interest, comparing the segmentation result with the oral feature region, and adjusting the training parameters of the initial oral structure positioning model; and, when the training condition is met, verifying with the verification set to obtain the target oral structure positioning model.
This technical scheme therefore has the following problems: the image selection mode is single; the model's training data cannot be adaptively supplemented according to the actual application scene; and the model's generalization capability and adaptability to complex scenes are poor.

Disclosure of Invention
Accordingly, the invention provides an automatic oral image segmentation and recognition method to solve the problems in the prior art that the image selection mode is single, that the model's training data cannot be adaptively supplemented according to the actual application scene, and that the model's generalization capability and adaptability to complex scenes are poor. To achieve the above object, the present invention provides an automated oral image segmentation and recognition method, comprising: determining whether to perform image supplementation according to a comparison of the recognition error rate of a target recognition model with a preset error rate, so as to obtain supplementation images; during image supplementation, executing a supplementation mode based on the recognition-feature difference degree for the target recognition model, judging whether the supplementation mode includes supplementation based on out-of-standard parameters according to the dislocation anomaly degree and the artifact interference degree of an anomalously recognized image, and judging whether the supplementation mode includes refinement-analysis supplementation according to the pixel confusion comparison degree of the anomalously recognized image; during refinement-analysis supplementation, determining supplementation based on the region fit degree, or performing interference-analysis supplementation, according to the overlap feature value and the texture association degree; during interference-analysis supplementation, executing a refined supplementation mode based on an edge interference coefficient for the target recognition model, and determining whether to adjust the refined supplementation mode to regular-texture-analysis supplementation according to the aggregation degree of interference pixel points; during regular-texture-analysis supplementation, performing image supplementation based on either a texture distribution entropy or a texture regularity coefficient, selected according to the texture regularity; and training the target recognition model on the supplementation images and the training images. Further, image supplementation is performed for the target recognition model whose recognition error rate is greater than or equal to the preset recognition error rate. Further, for the target recognition model whose dislocation anomaly degree is greater than or equal to the preset dislocation anomaly degree or whose artifact interference degree is greater than or equal to the preset artifact interference degree, the determined supplementation mode includes supplementation based on out-of-standard parameters. Further, for the target recognition model whose pixel confusion compar
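The "texture distribution entropy" used in the regularity branch is not given a formula in the text; a common reading is the Shannon entropy of a histogram of texture values, where a more uniform (less regular) distribution yields higher entropy. A sketch under that assumption, with the binning scheme chosen for illustration:

```python
import math
from collections import Counter


def texture_distribution_entropy(texture_values, bins=16):
    """Shannon entropy (in bits) of a histogram of texture values in [0, 1).

    Both the binning and the use of Shannon entropy are assumptions made
    for illustration; the patent does not specify the formula.
    """
    # Bucket each value into one of `bins` equal-width bins, clamping 1.0
    counts = Counter(min(int(v * bins), bins - 1) for v in texture_values)
    n = len(texture_values)
    # H = -sum(p * log2 p) over the non-empty bins
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A perfectly regular texture (all values in one bin) gives entropy 0, and splitting the values evenly between two bins gives entropy 1 bit, consistent with entropy falling as regularity rises.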