CN-122023876-A - Multi-mode image-based rat thyroid classifying method
Abstract
The application relates to the technical field of image analysis, and in particular to a rat thyroid classification method based on multi-modal images. The method comprises: obtaining digital pathology images of SD rat thyroid tissue and preprocessing them to obtain preprocessed image data; training a multi-task deep learning model on an image database containing annotation information, the model processing the preprocessed image data to output image analysis information comprising image-region classification results and quantitative morphological features; and inputting the digital pathology images of the SD rat thyroid tissue to be analyzed into the trained multi-task deep learning model and performing model inference to obtain preliminary analysis results. The method can automatically and accurately analyze complex tissue images of the rat thyroid.
Inventors
- WANG YONGLI
- GUO JIANPING
- LI SHUFANG
- GUO XUELI
Assignees
- China Institute for Radiation Protection (中国辐射防护研究院)
Dates
- Publication Date: 2026-05-12
- Application Date: 2025-12-29
Claims (10)
- 1. A method for classifying the rat thyroid based on multi-modal images, characterized by comprising the following steps: acquiring a digital pathological image of SD rat thyroid tissue, and preprocessing the digital pathological image to obtain preprocessed image data; training a multi-task deep learning model using an image database containing annotation information, wherein the multi-task deep learning model processes the preprocessed image data to output image analysis information containing image-region classification results and quantitative morphological features; inputting a digital pathological image of the SD rat thyroid tissue to be analyzed into the trained multi-task deep learning model, and performing model inference to obtain a preliminary analysis result; updating a pathology knowledge database based on the preliminary analysis result and user feedback information, wherein the pathology knowledge database stores structured image-analysis case data; retrieving, according to screening conditions set by a user, matching historical image-analysis data from the pathology knowledge database, and performing integrated statistical inference on the current experimental image-analysis data and the historical image-analysis data based on a hierarchical Bayesian model to generate a statistical evaluation report; and triggering incremental training of the multi-task deep learning model to update model parameters, based on the user feedback information and the updated state of the pathology knowledge database.
- 2. The multi-modal image-based rat thyroid classification method of claim 1, wherein the preprocessing specifically comprises: performing structure-aware color normalization and main histological-structure identification on the H&E-stained image, and selecting a corresponding standardized staining vector according to the identified histological-structure category to perform color deconvolution and reconstruction.
- 3. The multi-modal image-based rat thyroid classification method of claim 2, wherein the preprocessing further comprises: non-linear registration of the paired H&E-stained image with the Ki-67 immunohistochemically stained image, the non-linear registration including local elastic-deformation correction based on a B-spline free-form deformation model to spatially align the Ki-67 immunohistochemically stained image with the H&E-stained image.
- 4. The multi-modal image-based rat thyroid classification method of claim 3, wherein the multi-task deep learning model comprises a dual-stream encoder, a cross-modal attention fusion gate, and a multi-task prediction head; the dual-stream encoder comprises a morphological-feature stream for processing H&E-stained images and an auxiliary-feature stream for processing Ki-67 immunohistochemically stained images; the cross-modal attention fusion gate dynamically fuses the dual-stream features at a plurality of network levels of the morphological and auxiliary features; and the multi-task prediction head comprises at least a main classification head outputting image-region classification probabilities, a regression head outputting a mitotic-density heat map, and a scoring head outputting a capsule-structure score.
- 5. The multi-modal image-based rat thyroid classification method of claim 4, wherein the cross-modal attention fusion gate dynamically fuses the dual-stream features using the following formulas: α_l = σ(W_l ∗ [M_l, A_l]); F_l = α_l ⊙ M_l + (1 − α_l) ⊙ A_l; wherein α_l is the attention map of layer l, M_l is the morphological feature of the H&E-stained image at layer l, A_l is the auxiliary feature of the Ki-67 immunohistochemically stained image at layer l, σ is the Sigmoid function, W_l is the convolution kernel applied to the concatenation [M_l, A_l], ⊙ denotes element-wise multiplication, and F_l is the fused feature of layer l.
- 6. The multi-modal image-based rat thyroid classification method of claim 1, wherein the steps of inputting the digital pathological image of the SD rat thyroid tissue to be analyzed into the trained multi-task deep learning model and performing model inference to obtain the preliminary analysis result specifically comprise: enabling Monte Carlo dropout during model inference, performing T forward passes on the same image block to obtain T groups of prediction results, and calculating prediction uncertainty based on the T groups of prediction results.
- 7. The multi-modal image-based rat thyroid classification method of claim 6, wherein calculating the prediction uncertainty comprises: calculating the entropy of the mean vector of the T groups of prediction results based on the classification probability vectors in the T groups of prediction results, and taking the entropy as the classification uncertainty; and/or calculating the variance of the regression prediction values in the T groups of prediction results, and taking the variance as the regression uncertainty.
- 8. The multi-modal image-based rat thyroid classification method of claim 7, wherein the steps of inputting the digital pathological image of SD rat thyroid tissue to be analyzed into the trained multi-task deep learning model, performing model inference, and obtaining the preliminary analysis result further comprise: generating candidate regions of interest through a spatial clustering algorithm based on the prediction results of all image blocks; for each candidate region of interest, aggregating the multiple prediction results of the pixels within it, and determining the image analysis result and quantitative morphological features of the region through weighted voting; and integrating the results of all candidate regions of interest to generate an image analysis report for the whole slide, the report containing the prediction uncertainty.
- 9. The multi-modal image-based rat thyroid classification method of claim 1, wherein the step of performing integrated statistical inference on the current experimental image-analysis data and the historical image-analysis data based on the hierarchical Bayesian model to generate a statistical evaluation report specifically comprises: assuming the historical data come from S independent image-analysis data sources, assigning each data source a source-specific incidence parameter θ_s, and assuming that all θ_s follow a common population distribution Beta(α, β); obtaining the posterior distribution of the hyperparameters (α, β) through inference, and constructing a predictive prior distribution of the background incidence rate for the current image-analysis data based on this posterior; establishing a Bayesian logistic regression model, wherein the prior of the intercept term is derived from the predictive prior distribution and the prior of the treatment-effect term is centered at zero; taking the image-analysis data of the control and experimental groups of the current experiment as the likelihood, and performing posterior inference to obtain the posterior distribution of the treatment effect; and, based on the posterior distribution of the treatment effect, calculating the posterior distribution of the risk ratio, the highest-density interval of the risk ratio, and the posterior probability that the incidence in the experimental group is higher than in the control group.
- 10. The multi-modal image-based rat thyroid classification method of claim 1, wherein the step of triggering incremental training of the multi-task deep learning model to update model parameters based on the user feedback information and the updated state of the pathology knowledge database specifically comprises: monitoring a model-retraining candidate queue in the pathology knowledge database; automatically starting an incremental training process when the accumulated high-quality training samples in the queue reach a preset threshold or a preset period elapses; and fine-tuning the multi-task deep learning model with the samples in the queue, and deploying the updated model after it passes performance verification.
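As a concrete illustration of the fusion gate recited in claims 4 and 5, the following NumPy sketch blends the two feature streams per pixel under a sigmoid attention map. It is a minimal sketch under stated assumptions: the function name `attention_fusion_gate`, the 1×1 weight vector `w`, and the bias `b` are illustrative, not the patent's implementation, which would use learned convolutions at several encoder levels.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_fusion_gate(m_l, a_l, w, b=0.0):
    """Hypothetical cross-modal attention fusion gate.

    m_l: H&E morphological feature map, shape (C, H, W)
    a_l: Ki-67 auxiliary feature map, shape (C, H, W)
    w:   1x1 'convolution' weights over the 2C concatenated channels, shape (2C,)
    Returns the fused feature map F_l, shape (C, H, W).
    """
    concat = np.concatenate([m_l, a_l], axis=0)          # (2C, H, W)
    alpha = sigmoid(np.tensordot(w, concat, axes=1) + b) # (H, W) attention map
    # convex per-pixel blend of the two streams, as in claim 5
    return alpha * m_l + (1.0 - alpha) * a_l

# toy check: with zero weights the gate is sigmoid(0) = 0.5 everywhere,
# so the fused map is the midpoint of the two streams
m = np.ones((4, 8, 8))
a = np.zeros((4, 8, 8))
f = attention_fusion_gate(m, a, w=np.zeros(8))
```

In a trained network the gate would learn, per spatial location, how much weight to give the Ki-67 evidence relative to the H&E morphology.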
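The uncertainty quantities in claims 6 and 7 can be sketched directly: run T Monte Carlo dropout passes, then take the entropy of the mean class-probability vector as classification uncertainty and the variance of the regression outputs as regression uncertainty. The function name and array shapes below are illustrative assumptions.

```python
import numpy as np

def mc_dropout_uncertainty(class_probs, reg_preds):
    """Uncertainty from T stochastic forward passes (sketch of claims 6-7).

    class_probs: (T, K) softmax vectors from T MC-dropout passes
    reg_preds:   (T,)   regression outputs (e.g. mitotic density) per pass
    """
    mean_p = class_probs.mean(axis=0)
    # classification uncertainty: entropy of the mean probability vector
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12))
    # regression uncertainty: variance of the T regression predictions
    variance = reg_preds.var()
    return entropy, variance

# two passes on one image block, two classes
probs = np.array([[0.9, 0.1],
                  [0.8, 0.2]])
ent, var = mc_dropout_uncertainty(probs, np.array([1.0, 3.0]))
```

High entropy flags blocks where the classes disagree across passes; high variance flags unstable density estimates, both of which claim 8's report surfaces to the user.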
Description
Multi-mode image-based rat thyroid classifying method

Technical Field

The application relates to the technical field of image analysis, and in particular to a rat thyroid classification method based on multi-modal images.

Background

In rat histopathological image analysis, and in particular in the automated analysis of SD rat thyroid tissue sections, accurately classifying regions of different morphological structure in the images (e.g., distinguishing normal follicles, proliferative regions, adenomatous regions, and cancerous regions) is a key technical step. Existing automated classification methods rely mainly on machine learning models trained on H&E-stained slide images. However, these methods have clear technical shortcomings that limit the reliability and efficiency of the analysis results and their practical value in toxicology research. In particular, existing classification methods are generally static and isolated: training relies on a fixed, finite, and costly initial data set, and once deployed, the model's classification capability is frozen. It cannot absorb newly generated, expert-verified image data to continuously improve, so it readily becomes outdated, loses precision, and cannot adapt to image-characteristic differences between laboratories or between batches of experimental animals.

Disclosure of Invention

To remedy these shortcomings of the prior art, the invention provides a multi-modal image-based rat thyroid classification method for automatically and accurately analyzing complex tissue images of the rat thyroid.
To achieve the above purpose, the present invention provides a method for classifying the rat thyroid based on multi-modal images, comprising the following steps: acquiring a digital pathological image of SD rat thyroid tissue, and preprocessing the digital pathological image to obtain preprocessed image data; training a multi-task deep learning model using an image database containing annotation information, wherein the multi-task deep learning model processes the preprocessed image data to output image analysis information containing image-region classification results and quantitative morphological features; inputting a digital pathological image of the SD rat thyroid tissue to be analyzed into the trained multi-task deep learning model, and performing model inference to obtain a preliminary analysis result; updating a pathology knowledge database based on the preliminary analysis result and user feedback information, wherein the pathology knowledge database stores structured image-analysis case data; retrieving, according to screening conditions set by a user, matching historical image-analysis data from the pathology knowledge database, and performing integrated statistical inference on the current experimental image-analysis data and the historical image-analysis data based on a hierarchical Bayesian model to generate a statistical evaluation report; and triggering incremental training of the multi-task deep learning model to update model parameters, based on the user feedback information and the updated state of the pathology knowledge database.
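The hierarchical borrowing idea behind the statistical evaluation step can be illustrated with a small conjugate sketch: fit a Beta(α, β) population distribution to the source-level incidence rates of the historical data, then update it with the current control group's counts. This only illustrates the prior-construction part of the method (the full model is a Bayesian logistic regression); all numbers and variable names below are made-up assumptions.

```python
import numpy as np

# Historical lesion counts from S = 3 hypothetical data sources: (lesions, animals)
historical = [(2, 50), (1, 40), (3, 60)]
rates = np.array([k / n for k, n in historical])

# Method-of-moments fit of Beta(alpha, beta) to the source-level rates,
# standing in for full posterior inference over the hyperparameters
m, v = rates.mean(), rates.var(ddof=1)
common = m * (1.0 - m) / v - 1.0
alpha, beta = m * common, (1.0 - m) * common

# Conjugate update with the current control group's (hypothetical) counts: 1 of 20
k_ctrl, n_ctrl = 1, 20
post_alpha = alpha + k_ctrl
post_beta = beta + (n_ctrl - k_ctrl)
post_mean = post_alpha / (post_alpha + post_beta)  # shrunken background incidence
```

Because the prior carries the weight of the historical sources, a small current control group is shrunk toward the pooled background rate rather than taken at face value, which is the point of the hierarchical construction.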
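The incremental-training trigger in the final step can be sketched as a simple two-condition check; the function name and the threshold/period values are illustrative assumptions, since the method specifies only that retraining starts when accumulated high-quality samples reach a preset threshold or a preset period elapses.

```python
def should_start_incremental_training(queue_size: int,
                                      sample_threshold: int,
                                      days_since_last_training: int,
                                      period_days: int) -> bool:
    """Return True when either trigger condition is met: enough verified
    samples have accumulated in the retraining candidate queue, or the
    preset retraining period has elapsed."""
    return (queue_size >= sample_threshold
            or days_since_last_training >= period_days)

# example: 120 expert-verified samples queued against a threshold of 100
trigger = should_start_incremental_training(120, 100, 10, 30)
```

When the trigger fires, the model is fine-tuned on the queued samples and redeployed only after passing performance verification.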
Further, the preprocessing specifically includes: performing structure-aware color normalization and main histological-structure identification on the H&E-stained image, and selecting a corresponding standardized staining vector according to the identified histological-structure category to perform color deconvolution and reconstruction. Further, the preprocessing also includes: non-linear registration of the paired H&E-stained image with the Ki-67 immunohistochemically stained image, the non-linear registration including local elastic-deformation correction based on a B-spline free-form deformation model to spatially align the Ki-67 immunohistochemically stained image with the H&E-stained image. Further, the multi-task deep learning model comprises a dual-stream encoder, a cross-modal attention fusion gate, and a multi-task prediction head; the dual-stream encoder comprises a morphological-feature stream for processing H&E-stained images and an auxiliary-feature stream for processing Ki-67 immunohistochemically stained images; the cross-modal attention fusion gate dynamically fuses the dual-stream features at a plurality of network levels of the morphological and auxiliary features; the multi-task prediction head comprises at least a main classification head outputting image-region classification probabilities, a regression head outputting a mitotic-density heat map, and a scoring head outputting a capsule-structure score. Further, the cross-modal attention fusion gate dynamically fuses the double-