
CN-122029565-A - Training method of medical image semantic segmentation model

CN 122029565 A

Abstract

The present invention relates to a method of training a segmentation neural network to segment an anatomical structure in a medical image. The segmentation neural network (30) undergoes supervised training (107) based on supervised training data (25), the supervised training data (25) comprising segmented real medical images (11, 12) and the segmentation masks (21, 22) associated with them, as well as synthetic images (13) generated from artificial segmentation masks (23). In parallel, a teacher network (30'), which is initially an identical clone of the segmentation neural network, is updated using an exponential-moving-average method. The teacher network generates (110) pseudo-segmentation masks (24) from semi-supervised training data (26), the semi-supervised training data (26) comprising non-segmented real medical images (14), each non-segmented real medical image (14) being associated with a weak annotation. The non-segmented real medical images and the pseudo-segmentation masks associated with them are added (111) to the supervised training data to continue training the segmentation neural network.
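The exponential-moving-average teacher update described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the smoothing factor `alpha` and the list-of-arrays weight representation are assumptions not specified in the text.

```python
import numpy as np

def ema_update(teacher_weights, student_weights, alpha=0.99):
    """Update the teacher's weights w_T as an exponential moving average of
    its previous weights w_Tprec and the student's updated weights w_S:
        w_T = alpha * w_Tprec + (1 - alpha) * w_S
    `alpha` is a hypothetical smoothing factor (not given in the source)."""
    return [alpha * w_t + (1.0 - alpha) * w_s
            for w_t, w_s in zip(teacher_weights, student_weights)]

# Toy usage: one weight tensor per "layer".
teacher = [np.array([1.0, 1.0])]
student = [np.array([0.0, 2.0])]
teacher = ema_update(teacher, student, alpha=0.9)  # -> [array([0.9, 1.1])]
```

With a high `alpha`, the teacher changes slowly and smooths out noise in the student's gradient-descent updates, which is what makes its pseudo-segmentation masks usable as training targets.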

Inventors

  • M. Girard
  • L. Blondel
  • B. Nahum
  • F. Badano

Assignees

  • Quantum Surgical

Dates

Publication Date
2026-05-12
Application Date
2024-07-11
Priority Date
2023-08-11

Claims (14)

  1. A method (100) for training a segmentation neural network (30) to segment an anatomical structure on a medical image representing a target anatomical structure (41) of a patient, the method (100) comprising the steps of: - acquiring (101) segmented real medical images (11, 12), each real medical image being associated with a segmentation mask (21, 22), the real medical images comprising: majority real medical images (11), associated with majority segmentation masks (21), representing the target anatomical structure (41) of a patient without an abnormality, and minority real medical images (12), associated with minority segmentation masks (22), representing the target anatomical structure (41) of a patient with an abnormality (42); - training (102) a generative neural network (31) to generate a synthetic image from a segmentation mask; - obtaining (103) artificial segmentation masks (23), each artificial segmentation mask (23) being obtained by combining a majority segmentation mask (21) with a segmentation (44) of an abnormality from a minority segmentation mask (22); - acquiring (104) synthetic images (13) from the artificial segmentation masks (23) using the trained generative neural network (31); - acquiring (105) non-segmented real medical images (14), wherein each non-segmented real medical image represents the target anatomical structure (41) of a patient and is associated with a weak annotation; - cloning (106) the segmentation neural network (30) identically to form a teacher network (30'). The method further comprises a plurality of iterations of the steps of: - performing supervised training (107) of the segmentation neural network (30) based on supervised training data (25), the supervised training data (25) comprising the segmented real medical images (11, 12) and the segmentation masks (21, 22) associated with them, and the synthetic images (13) and the artificial segmentation masks (23) associated with them; - updating (108) the weights w_S of the segmentation neural network (30) using a gradient descent method; - updating (109) the weights w_T of the teacher network (30') using an exponential moving average calculated as a function of the previous weights w_Tprec of the teacher network (30') and the updated weights w_S of the segmentation neural network (30); - generating (110) pseudo-segmentation masks (24) corresponding to predictions of the teacher network (30'), the predictions of the teacher network (30') being obtained from semi-supervised training data (26), the semi-supervised training data (26) containing the non-segmented real medical images (14) and the weak annotations associated with them; - adding (111) the generated pseudo-segmentation masks (24), together with the non-segmented real medical images (14) used to generate (110) these pseudo-segmentation masks (24), to the supervised training data (25).
  2. The method (100) of claim 1, comprising, for each iteration: - calculating a supervised cost function C_S representing the dissimilarity between a prediction of the segmentation neural network (30), made on an image belonging to the supervised training data (25), and the segmentation mask associated with that image; - calculating a semi-supervised cost function C_SS representing the dissimilarity between the predictions of the teacher network (30') and the predictions of the segmentation neural network (30) on images belonging to the semi-supervised training data (26); - calculating a total cost function C from the supervised cost function C_S and the semi-supervised cost function C_SS; - updating (108) the weights w_S of the segmentation neural network (30) using a gradient descent method performed as a function of the total cost function.
  3. The method (100) of claim 2, wherein the predictions of the teacher network (30') on images belonging to the semi-supervised training data (26) are enhanced by the weak annotations associated with those images, the enhanced prediction being a weighted combination of a raw prediction of the teacher network (30') and the weak annotation, in which the weak annotation contributes according to a contribution rate of the weak labels.
  4. The method (100) according to claim 3, wherein the contribution rate of the weak labels is in the range 0.45-0.55.
  5. The method (100) according to any one of claims 2-4, wherein each weak annotation comprises an approximate position of the anatomical structure to be segmented, the approximate position being represented as one or more points of the anatomical structure to be segmented, a geometric shape covering an interior region of the anatomical structure to be segmented, or a geometric shape surrounding or covering the anatomical structure to be segmented.
  6. The method (100) according to any one of claims 2-5, wherein the total cost function C is calculated as a weighted sum of the supervised cost function C_S and the semi-supervised cost function C_SS.
  7. The method (100) of claim 6, wherein, in the total cost function, the semi-supervised cost function C_SS is weighted by a coefficient in the range 0.05-0.15.
  8. The method (100) according to any one of claims 2-7, wherein the supervised cost function C_S is calculated as a linear combination of a real cost function C_Re and a synthetic cost function C_Syn, the real cost function C_Re being calculated based on the segmented real medical images (11, 12) and the segmentation masks (21, 22) associated with them, and the synthetic cost function C_Syn being calculated based on the synthetic images (13) and the artificial segmentation masks (23) associated with them.
  9. The method (100) of claim 8, wherein the total cost function C combines the semi-supervised cost function C_SS, weighted by a contribution rate, the synthetic cost function C_Syn, weighted by a contribution rate, and a plausibility score of the synthetic image (13) calculated using a discriminator neural network (32), the discriminator neural network (32) and the generative neural network (31) together forming a generative adversarial network pair.
  10. The method (100) of claim 9, wherein the contribution rate of the semi-supervised cost function C_SS is in the range 0.35-0.45, and the contribution rate of the synthetic cost function C_Syn is in the range 0.05-0.15.
  11. The method (100) according to any one of claims 1-10, wherein the step of obtaining (103) the artificial segmentation masks (23) comprises transforming the segmentation (44) of an abnormality from the minority segmentation mask.
  12. The method (100) of claim 11, wherein the transformation of the segmentation (44) of an abnormality corresponds to a rotation, an enlargement, a reduction, a deformation and/or a displacement of the segmentation (44) of the abnormality.
  13. A method of automatically segmenting a medical image, comprising: - training the segmentation neural network (30) according to any one of claims 1-12; - segmenting the medical image using the trained segmentation neural network (30).
  14. A computer-readable storage medium comprising the segmentation neural network (30) trained according to any one of claims 1-12.
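The cost structure of claims 6-8 can be sketched as follows. The exact formulas appear only as figures in the source, so the convex-combination form and the coefficient values used here are illustrative assumptions, not the claimed expressions.

```python
def supervised_cost(c_re, c_syn, mu=0.1):
    """Linear combination of the real cost C_Re and synthetic cost C_Syn
    (claim 8). The weight `mu` is a hypothetical coefficient."""
    return (1.0 - mu) * c_re + mu * c_syn

def total_cost(c_s, c_ss, lam=0.1):
    """Weighted sum of the supervised cost C_S and semi-supervised cost C_SS
    (claim 6); claim 7 places the C_SS weight in the range 0.05-0.15."""
    return (1.0 - lam) * c_s + lam * c_ss

# Toy scalar costs, just to show how the pieces combine.
c_s = supervised_cost(c_re=0.8, c_syn=0.4, mu=0.1)   # 0.76
c = total_cost(c_s, c_ss=0.5, lam=0.1)               # 0.734
```

Keeping the semi-supervised weight small is consistent with the claimed ranges: the teacher's pseudo-masks regularize training without dominating the supervised signal from expert-segmented and synthetic images.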

Description

Training method of medical image semantic segmentation model

Technical Field

The application relates to the field of semantic segmentation of medical images. In particular, a method for training a segmentation neural network, and a computer-readable storage medium containing a segmentation neural network trained by the method, are presented.

Prior Art

Machine learning algorithms for semantic segmentation of medical images are typically implemented by artificial neural networks. In particular, it is known to train convolutional neural networks based on a U-Net type model to perform semantic segmentation of images. Semantic segmentation consists in labeling each pixel of an image with a class corresponding to the image content. In the medical imaging field, a class may represent a specific anatomical structure, such as a target anatomical structure (which may be an organ like a liver, lung or kidney, or another anatomical structure like a bone, a blood vessel, etc.), an abnormality within the target anatomical structure (such as a cyst, a tumor or an ablation zone), or a surgical artifact (such as a portion of a medical instrument).

Training a segmentation neural network requires a large amount of training data to obtain good predictive quality. In addition, insufficient samples of a class in the training data can lead to significant bias in predictions for that class. This problem is particularly acute in the medical field. Indeed, in order to train the neural network in a supervised manner, the training data must contain a large number of segmented medical images. Each medical image must be segmented manually by a medical expert, or possibly semi-automatically. Such segmentation of medical images is particularly costly in terms of time and expertise.
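The per-pixel labeling described above can be illustrated with a minimal sketch. The class names, array shapes, and the argmax decision rule are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

# Hypothetical class indices for a liver-imaging example.
CLASSES = {0: "background", 1: "liver", 2: "tumor"}

def segment(logits):
    """Per-pixel semantic segmentation: assign each pixel the class with the
    highest score. `logits` has shape (H, W, n_classes); the returned mask
    has shape (H, W) and holds one class label per pixel."""
    return np.argmax(logits, axis=-1)

logits = np.zeros((2, 2, 3))
logits[0, 0, 1] = 5.0          # pixel (0, 0) strongly scored as "liver"
logits[1, 1, 2] = 3.0          # pixel (1, 1) strongly scored as "tumor"
mask = segment(logits)         # 2x2 array of class labels
```

A segmentation mask in this sense is exactly what a medical expert produces by hand, which is why supervised training data is so expensive to assemble.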
Collecting a large number of medical images showing rare anatomical abnormalities is even more difficult (due to the rarity of the disease, patient confidentiality, the effort and expense required to perform medical imaging procedures, etc.). The document entitled "Deep semi-supervised segmentation with weight-averaged consistency targets" by Christian S. Perone et al. describes a method for semi-supervised training of a segmentation neural network according to the "mean teacher" method. Several prior-art solutions are based on a relatively similar approach, namely replicating the segmentation neural network to be trained to form a teacher network. Notably, patent applications WO2021/140426A1, US2022/0292689A1, US2021/0407656A1 and US2022/0358658A1 describe different methods of training a neural network for segmenting medical images. However, these schemes suffer from several drawbacks, including overfitting of the segmentation algorithm on the supervised training data, sometimes inadequate segmentation performance, learning bias for the segmentation of rare classes, and/or inadequate involvement of medical expertise in the learning process of the segmentation neural network. Patent application WO2022/238640A1 describes, in particular, a method for generating synthetic images showing rare anatomical anomalies using a generative adversarial network.

Disclosure of Invention

The object of the present application is to propose a solution that overcomes all or part of the drawbacks of the prior art, in particular those disclosed above. To this end, according to a first aspect, a method is proposed for training a segmentation neural network to segment an anatomical structure on a medical image representing a target anatomical structure of a patient.
The method comprises the following steps: - acquiring segmented real medical images, each associated with a segmentation mask, the real medical images comprising: o majority real medical images, associated with majority segmentation masks, representing the target anatomical structure of a patient without an abnormality; o minority real medical images, associated with minority segmentation masks, representing the target anatomical structure of a patient with an abnormality; - training a generative neural network to generate a synthetic image from a segmentation mask; - obtaining artificial segmentation masks, each artificial segmentation mask being obtained by combining a majority segmentation mask with a segmentation of an abnormality from a minority segmentation mask; - acquiring synthetic images from the artificial segmentation masks using the trained generative neural network; - acquiring non-segmented real medical images, wherein each non-segmented real medical image represents the target anatomical structure of a patient and is associated with a weak annotation; - cloning the segmentation neural network identically to form a teacher network. The method further comprises a plurality of iterations of the steps of: performing supervised training of the segmentation neural network based on supervised training data, the supervised training data comprising the segmented real medical images and the segmentation masks associated with them, and a compo