EP-4567715-B1 - GENERATING SYNTHETIC REPRESENTATIONS
Inventors
- MOHAMMADI, Seyed Sadegh
- Montalt Tordera, Javier
- Prokop, Jakub
Dates
- Publication Date: 2026-05-13
- Application Date: 2024-12-02
Claims (18)
- A computer-implemented method comprising: (a) providing a pre-trained conditional generative model (CGMt), wherein the pre-trained conditional generative model (CGMt) was trained to reconstruct a plurality of reference representations (RR1, RR2, RR3, RR4, RR5, RR6) of an examination area of a plurality of examination objects, (b) providing a native representation (R0) of the examination area of a new examination object, wherein the native representation (R0) represents the examination area of the new examination object without contrast agent, (c) providing a contrast-enhanced representation (R1) of the examination area of the new examination object, wherein the contrast-enhanced representation (R1) represents the examination area of the new examination object after administration of an amount of a contrast agent, (d) generating one or more embeddings (E1, E2, EC) at least partially based on the native representation (R0) and/or the contrast-enhanced representation (R1), (e) providing starting data (SD), (f) generating a synthetic representation (SR) of the examination area of the new examination object based on the starting data (SD) using the pre-trained conditional generative model (CGMt) and using the one or more embeddings (E1, E2, EC) as condition, (g) generating a transformed synthetic representation (SRT) based on the synthetic representation (SR), wherein the transformed synthetic representation (SRT) is a contrast-enhanced representation of the examination area of the new examination object in which the contrast-enhancement is reduced compared to the synthetic representation (SR), (h) quantifying deviations between the transformed synthetic representation (SRT) and the contrast-enhanced representation (R1) of the examination area of the new examination object, (i) reducing the deviations by modifying the starting data (SD) and/or the one or more embeddings (E1, E2, EC) and/or parameters of the pre-trained conditional generative model (CGMt), (j) repeating steps (f) to (i) one or more times, (k) outputting and/or storing the synthetic representation (SR) of the examination area of the new examination object and/or a synthetic image of the examination area of the new examination object generated therefrom, and/or transmitting the synthetic representation (SR) and/or the synthetic image to a separate computer system.
- The computer-implemented method of claim 1, wherein the pre-trained conditional generative model (CGMt) was pre-trained on training data (TD), wherein the training data (TD) comprised, for each examination object of the plurality of examination objects, one or more reference representations (RR1, RR2, RR3, RR4, RR5, RR6) of the examination area of the examination object, wherein pre-training comprised, for each examination object of the plurality of examination objects: - generating one or more embeddings (E1, E2, EC) based on the one or more reference representations (RR1, RR2, RR3, RR4, RR5, RR6) of the examination area of the examination object, - inputting one or more reference representations (RR1, RR2, RR3, RR4, RR5, RR6) of the examination area of the examination object into the conditional generative model (CGM), and reconstructing at least one of the one or more reference representations (RR1, RR2, RR3, RR4, RR5, RR6) by the conditional generative model (CGM) using the one or more embeddings (E1, E2, EC) as condition, - receiving a reconstructed reference representation (RR1R, RR2R, RR3R, RR4R, RR5R, RR6R) as an output of the conditional generative model (CGM), - determining deviations between the reconstructed reference representation (RR1R, RR2R, RR3R, RR4R, RR5R, RR6R) and one of the one or more reference representations (RR1, RR2, RR3, RR4, RR5, RR6), - reducing the deviations by modifying parameters of the conditional generative model (CGM).
- The computer-implemented method of claim 1 or 2, wherein each examination object is a human, wherein the examination area is or comprises a liver, a kidney, a heart, a lung, a brain, a stomach, a bladder, a prostate, an intestine, an eye, a breast, a thyroid, a pancreas, or a uterus, or a part thereof.
- The computer-implemented method of any one of claims 1 to 3, wherein each representation is a computed tomography representation, an X-ray representation, a magnetic resonance imaging representation, an ultrasound representation, or a positron emission tomography representation.
- The computer-implemented method of any one of claims 1 to 4, wherein each representation is a representation in real space, in frequency space or in projection space.
- The computer-implemented method of any one of claims 1 to 5, wherein generating the transformed synthetic representation (SRT) comprises: reducing the contrast-enhancement of the synthetic representation (SR).
- The computer-implemented method of any one of claims 1 to 6, wherein generating the transformed synthetic representation (SRT) comprises: reducing the contrast-enhancement of the synthetic representation (SR), so that the contrast-enhancement of the transformed synthetic representation (SRT) corresponds to the contrast-enhancement of the contrast-enhanced representation (R1).
- The computer-implemented method of any one of claims 1 to 7, wherein generating the transformed synthetic representation (SRT) comprises: reducing the contrast-enhancement of the synthetic representation (SR), thereby generating a transformed synthetic representation that represents the examination area of the new examination object after administration of the first amount of contrast agent.
- The computer-implemented method of any one of claims 1 to 8, wherein generating the transformed synthetic representation (SRT) comprises: - generating a difference representation (DR), wherein generating the difference representation (DR) comprises subtracting the native representation (R0) from the synthetic representation (SR), - generating an attenuated difference representation (aDR), wherein generating the attenuated difference representation (aDR) comprises multiplying the difference representation (DR) with an attenuation factor, - adding the attenuated difference representation (aDR) to the native representation (R0) or subtracting the attenuated difference representation (aDR) from the synthetic representation (SR).
- The computer-implemented method of claim 9, wherein generating the transformed synthetic representation (SRT) further comprises: multiplying the difference representation (DR) or the attenuated difference representation (aDR) in frequency space by a frequency-dependent weighting function.
- The computer-implemented method of any one of claims 1 to 10, wherein steps (f) to (i) are repeated until a stop criterion is reached, wherein the stop criterion comprises: a predefined maximum number of steps has been performed, the deviations between the transformed synthetic representation (SRT) and the contrast-enhanced representation (R1) can no longer be reduced by modifying the starting data (SD) and/or the one or more embeddings (E1, E2, EC) and/or the model parameters, a predefined minimum of a loss function (LF) is reached, and/or an extreme value of another performance value is reached.
- The computer-implemented method of any one of claims 1 to 11, wherein the pre-trained conditional generative model (CGMt) is or comprises a diffusion model (DMt) or a portion thereof.
- The computer-implemented method of any one of claims 1 to 12, wherein the one or more embeddings (E1, E2, EC) are generated using one or more encoders (E), wherein the one or more encoders (E) comprise one or more transformers and/or convolutional neural networks.
- A computer system comprising: a processing unit (20); and a memory (50) storing a computer program (60) configured to perform, when executed by the processing unit (20), an operation, the operation comprising: (a) providing a pre-trained conditional generative model (CGMt), wherein the pre-trained conditional generative model (CGMt) was trained to reconstruct a plurality of reference representations (RR1, RR2, RR3, RR4, RR5, RR6) of an examination area of a plurality of examination objects, (b) providing a native representation (R0) of the examination area of a new examination object, wherein the native representation (R0) represents the examination area of the new examination object without contrast agent, (c) providing a contrast-enhanced representation (R1) of the examination area of the new examination object, wherein the contrast-enhanced representation (R1) represents the examination area of the new examination object after administration of an amount of a contrast agent, (d) generating one or more embeddings (E1, E2, EC) at least partially based on the native representation (R0) and/or the contrast-enhanced representation (R1), (e) providing starting data (SD), (f) generating a synthetic representation (SR) of the examination area of the new examination object based on the starting data (SD) using the pre-trained conditional generative model (CGMt) and using the one or more embeddings (E1, E2, EC) as condition, (g) generating a transformed synthetic representation (SRT) based on the synthetic representation (SR), wherein the transformed synthetic representation (SRT) is a contrast-enhanced representation of the examination area of the new examination object in which the contrast-enhancement is reduced compared to the synthetic representation (SR), (h) quantifying deviations between the transformed synthetic representation (SRT) and the contrast-enhanced representation (R1) of the examination area of the new examination object, (i) reducing the deviations by modifying the starting data (SD) and/or the one or more embeddings (E1, E2, EC) and/or parameters of the pre-trained conditional generative model (CGMt), (j) repeating steps (f) to (i) one or more times, (k) outputting and/or storing the synthetic representation (SR) of the examination area of the new examination object and/or a synthetic image of the examination area of the new examination object generated therefrom, and/or transmitting the synthetic representation (SR) and/or the synthetic image to a separate computer system.
- A non-transitory computer readable storage medium having stored thereon software instructions that, when executed by a processing unit (20) of a computer system (1), cause the computer system (1) to perform the following steps: (a) providing a pre-trained conditional generative model (CGMt), wherein the pre-trained conditional generative model (CGMt) was trained to reconstruct a plurality of reference representations (RR1, RR2, RR3, RR4, RR5, RR6) of an examination area of a plurality of examination objects, (b) providing a native representation (R0) of the examination area of a new examination object, wherein the native representation (R0) represents the examination area of the new examination object without contrast agent, (c) providing a contrast-enhanced representation (R1) of the examination area of the new examination object, wherein the contrast-enhanced representation (R1) represents the examination area of the new examination object after administration of an amount of a contrast agent, (d) generating one or more embeddings (E1, E2, EC) at least partially based on the native representation (R0) and/or the contrast-enhanced representation (R1), (e) providing starting data (SD), (f) generating a synthetic representation (SR) of the examination area of the new examination object based on the starting data (SD) using the pre-trained conditional generative model (CGMt) and using the one or more embeddings (E1, E2, EC) as condition, (g) generating a transformed synthetic representation (SRT) based on the synthetic representation (SR), wherein the transformed synthetic representation (SRT) is a contrast-enhanced representation of the examination area of the new examination object in which the contrast-enhancement is reduced compared to the synthetic representation (SR), (h) quantifying deviations between the transformed synthetic representation (SRT) and the contrast-enhanced representation (R1) of the examination area of the new examination object, (i) reducing the deviations by modifying the starting data (SD) and/or the one or more embeddings (E1, E2, EC) and/or parameters of the pre-trained conditional generative model (CGMt), (j) repeating steps (f) to (i) one or more times, (k) outputting and/or storing the synthetic representation (SR) of the examination area of the new examination object and/or a synthetic image of the examination area of the new examination object generated therefrom, and/or transmitting the synthetic representation (SR) and/or the synthetic image to a separate computer system.
- Use of a contrast agent in an examination of an examination area of a new examination object, the examination comprising: (a) providing a pre-trained conditional generative model (CGMt), wherein the pre-trained conditional generative model (CGMt) was trained to reconstruct a plurality of reference representations (RR1, RR2, RR3, RR4, RR5, RR6) of an examination area of a plurality of examination objects, (b) generating a native representation (R0) of the examination area of the new examination object, wherein the native representation (R0) represents the examination area of the new examination object without contrast agent, (c) generating a contrast-enhanced representation (R1) of the examination area of the new examination object, wherein the contrast-enhanced representation (R1) represents the examination area of the new examination object after administration of an amount of the contrast agent, (d) generating one or more embeddings (E1, E2, EC) at least partially based on the native representation (R0) and/or the contrast-enhanced representation (R1), (e) providing starting data (SD), (f) generating a synthetic representation (SR) of the examination area of the new examination object based on the starting data (SD) using the pre-trained conditional generative model (CGMt) and using the one or more embeddings (E1, E2, EC) as condition, (g) generating a transformed synthetic representation (SRT) based on the synthetic representation (SR), wherein the transformed synthetic representation (SRT) is a contrast-enhanced representation of the examination area of the new examination object in which the contrast-enhancement is reduced compared to the synthetic representation (SR), (h) quantifying deviations between the transformed synthetic representation (SRT) and the contrast-enhanced representation (R1) of the examination area of the new examination object, (i) reducing the deviations by modifying the starting data (SD) and/or the one or more embeddings (E1, E2, EC) and/or parameters of the pre-trained conditional generative model (CGMt), (j) repeating steps (f) to (i) one or more times, (k) outputting and/or storing the synthetic representation (SR) of the examination area of the new examination object and/or a synthetic image of the examination area of the new examination object generated therefrom, and/or transmitting the synthetic representation (SR) and/or the synthetic image to a separate computer system.
- Use of claim 16, wherein the contrast agent is or comprises: gadolinium 2,2',2"-(10-{1-carboxy-2-[2-(4-ethoxyphenyl)ethoxy]ethyl}-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate, gadolinium 2,2',2"-{10-[1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate, gadolinium 2,2',2"-{10-[(1R)-1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate, gadolinium (2S,2'S,2"S)-2,2',2"-{10-[(1S)-1-carboxy-4-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}butyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}tris(3-hydroxypropanoate), gadolinium 2,2',2"-{10-[(1S)-4-(4-butoxyphenyl)-1-carboxybutyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate, gadolinium 2,2',2"-{(2S)-10-(carboxymethyl)-2-[4-(2-ethoxyethoxy)benzyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate, gadolinium 2,2',2"-[10-(carboxymethyl)-2-(4-ethoxybenzyl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl]triacetate, gadolinium(III) 5,8-bis(carboxylatomethyl)-2-[2-(methylamino)-2-oxoethyl]-10-oxo-2,5,8,11-tetraazadodecane-1-carboxylate hydrate, gadolinium(III) 2-[4-(2-hydroxypropyl)-7,10-bis(2-oxido-2-oxoethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetate, gadolinium(III) 2,2',2"-(10-((2R,3S)-1,3,4-trihydroxybutan-2-yl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate, gadolinium(III) 2-[4,7,10-tris(carboxymethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetic acid, gadolinium(III) ethoxybenzyldiethylenetriaminepentaacetic acid, gadolinium(III) 2-[3,9-bis[1-carboxylato-4-(2,3-dihydroxypropylamino)-4-oxobutyl]-3,6,9,15-tetrazabicyclo[9.3.1]pentadeca-1(15),11,13-trien-6-yl]-5-(2,3-dihydroxypropylamino)-5-oxopentanoate, dihydrogen [(±)-4-carboxy-5,8,11-tris(carboxymethyl)-1-phenyl-2-oxa-5,8,11-triazatridecan-13-oato(5-)]gadolinate(2-), tetragadolinium [4,10-bis(carboxylatomethyl)-7-{3,6,12,15-tetraoxo-16-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]-9,9-bis({[({2-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]propanoyl}amino)acetyl]amino}methyl)-4,7,11,14-tetraazaheptadecan-2-yl}-1,4,7,10-tetraazacyclododecan-1-yl]acetate, a Gd3+ complex of a compound of the formula (I) where Ar is a group selected from where # is the linkage to X, X is a group selected from CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#, where * is the linkage to Ar and # is the linkage to the acetic acid residue, R1, R2 and R3 are each independently a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3, R4 is a group selected from C2-C4 alkoxy, (H3C-CH2)-O-(CH2)2-O-, (H3C-CH2)-O-(CH2)2-O-(CH2)2-O- and (H3C-CH2)-O-(CH2)2-O-(CH2)2-O-(CH2)2-O-, R5 is a hydrogen atom, and R6 is a hydrogen atom, or a stereoisomer, tautomer, hydrate, solvate or salt thereof, or a mixture thereof, or a Gd3+ complex of a compound of the formula (II) where Ar is a group selected from where # is the linkage to X, X is a group selected from CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#, where * is the linkage to Ar and # is the linkage to the acetic acid residue, R7 is a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3; R8 is a group selected from C2-C4 alkoxy, (H3C-CH2O)-(CH2)2-O-, (H3C-CH2O)-(CH2)2-O-(CH2)2-O- and (H3C-CH2O)-(CH2)2-O-(CH2)2-O-(CH2)2-O-; R9 and R10 independently represent a hydrogen atom; or a stereoisomer, tautomer, hydrate, solvate or salt thereof, or a mixture thereof.
- A kit comprising a contrast agent and computer-readable program code that, when executed by a processing unit (20) of a computer system (1), causes the computer system (1) to execute the following steps: (a) providing a pre-trained conditional generative model (CGMt), wherein the pre-trained conditional generative model (CGMt) was trained to reconstruct a plurality of reference representations (RR1, RR2, RR3, RR4, RR5, RR6) of an examination area of a plurality of examination objects, (b) providing a native representation of the examination area of a new examination object, wherein the native representation represents the examination area of the new examination object without contrast agent, (c) providing a contrast-enhanced representation (R1) of the examination area of the new examination object, wherein the contrast-enhanced representation (R1) represents the examination area of the new examination object after administration of an amount of the contrast agent, (d) generating one or more embeddings (E1, E2, EC) at least partially based on the native representation and/or the contrast-enhanced representation (R1), (e) providing starting data (SD), (f) generating a synthetic representation (SR) of the examination area of the new examination object based on the starting data (SD) using the pre-trained conditional generative model (CGMt) and using the one or more embeddings (E1, E2, EC) as condition, (g) generating a transformed synthetic representation (SRT) based on the synthetic representation (SR), wherein the transformed synthetic representation (SRT) is a contrast-enhanced representation of the examination area of the new examination object in which the contrast-enhancement is reduced compared to the synthetic representation (SR), (h) quantifying deviations between the transformed synthetic representation (SRT) and the contrast-enhanced representation (R1) of the examination area of the new examination object, (i) reducing the deviations by modifying the starting data (SD) and/or the one or more embeddings (E1, E2, EC) and/or parameters of the pre-trained conditional generative model (CGMt), (j) repeating steps (f) to (i) one or more times, (k) outputting and/or storing the synthetic representation (SR) of the examination area of the new examination object and/or a synthetic image of the examination area of the new examination object generated therefrom, and/or transmitting the synthetic representation (SR) and/or the synthetic image to a separate computer system.
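The pre-training loop of claim 2 (reconstruct a reference representation under an embedding condition, quantify deviations, modify model parameters to reduce them) can be illustrated with a deliberately tiny stand-in model. The linear map, the squared-error deviation measure and the learning rate are illustrative assumptions; the claim does not fix the architecture of the conditional generative model.

```python
import numpy as np

def pretrain_step(W, ref, embed, lr=0.05):
    """One toy pre-training step in the spirit of claim 2: a linear map W
    (a stand-in for the conditional generative model CGM) reconstructs a
    reference representation from its embedding, deviations are determined
    as squared error, and W is modified to reduce them."""
    recon = W @ embed                    # reconstructed reference representation
    err = recon - ref                    # deviations from the reference representation
    grad = np.outer(err, embed)          # gradient of 0.5 * ||err||^2 with respect to W
    return W - lr * grad, float(0.5 * err @ err)

# Illustrative use: repeated steps drive the reconstruction deviations down.
W = np.zeros((2, 2))
ref, embed = np.array([1.0, 0.0]), np.array([1.0, 0.0])
for _ in range(200):
    W, loss = pretrain_step(W, ref, embed)
```

In the patent this step runs over every examination object in the training data; here a single reference/embedding pair suffices to show the parameter update.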
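Claims 9 and 10 spell out a concrete arithmetic for step (g). A minimal NumPy sketch, assuming real-valued array representations and a scalar attenuation factor (the function name and default values are illustrative, not from the patent):

```python
import numpy as np

def transform_synthetic(sr, r0, attenuation=0.1, freq_weight=None):
    """Reduce the contrast-enhancement of a synthetic representation SR
    (claims 9 and 10): isolate the enhancement as a difference
    representation, attenuate it, optionally re-weight it in frequency
    space, and add it back to the native representation R0."""
    dr = sr - r0                          # difference representation (DR)
    adr = attenuation * dr                # attenuated difference representation (aDR)
    if freq_weight is not None:
        # claim 10: multiply in frequency space by a frequency-dependent weighting function
        adr = np.fft.ifftn(np.fft.fftn(adr) * freq_weight).real
    return r0 + adr                       # transformed synthetic representation (SRT)
```

Claim 9 alternatively subtracts the attenuated difference from SR rather than adding it to R0; that route corresponds to an effective attenuation of 1 - attenuation.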
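The inner loop of claim 1, steps (f) to (j), together with the stop criteria of claim 11, can be sketched as a generic derivative-free refinement of the starting data SD. Every callable here (`generate`, `transform`, `propose`) is a placeholder assumption standing in for the patent's components, and only SD is varied, although step (i) may equally modify the embeddings or the model parameters:

```python
import numpy as np

def refine(generate, transform, r1, sd, propose, n_steps=200, tol=1e-6):
    """Toy sketch of steps (f)-(j): repeatedly modify the starting data SD
    so that the transformed synthetic representation approaches the measured
    contrast-enhanced representation R1."""
    def deviation(s):
        sr = generate(s)                          # step (f): synthetic representation
        sr_t = transform(sr)                      # step (g): transformed synthetic representation
        return float(np.mean((sr_t - r1) ** 2))   # step (h): quantify deviations

    best = deviation(sd)
    for _ in range(n_steps):                      # claim 11: predefined maximum number of steps
        candidate = propose(sd)                   # step (i): modify the starting data
        d = deviation(candidate)
        if d < best:
            sd, best = candidate, d
        if best < tol:                            # claim 11: predefined minimum of the loss reached
            break
    return sd, best
```

The accept-if-better update is only one possible realisation of "reducing the deviations"; gradient-based updates through a differentiable generative model would serve the same role.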
Description
FIELD OF THE DISCLOSURE

Systems, methods, and computer programs disclosed herein relate to generating synthetic representations, such as synthetic radiologic images.

BACKGROUND

Artificial intelligence is increasingly finding its way into medicine. Machine learning models are being used not only to recognize signs of disease in medical images of the human or animal body (see, for example, WO2018202541A1, WO2020229152A1), but also increasingly to generate synthetic (artificial) medical images. For example, WO2019/074938A1 and WO2022184297A1 describe methods for generating a synthetic radiological image showing an examination area of an examination object after application of a standard amount of a contrast agent, although only a smaller amount of contrast agent than the standard amount was applied. The standard amount is the amount recommended by the manufacturer and/or distributor of the contrast agent and/or the amount approved by a regulatory authority and/or the amount listed in a package insert for the contrast agent. The methods described in WO2019/074938A1 and WO2022184297A1 can therefore be used to reduce the amount of contrast agent. The machine learning models disclosed in the cited publications are or include convolutional neural networks. Such machine learning models can be difficult to train, and they often require extensive tuning of hyperparameters; such models can be unstable and sometimes produce images that are not realistic or do not match the training data. Overfitting is a frequently observed problem (see, e.g., P. Thanapol et al.: Reducing Overfitting and Improving Generalization in Training Convolutional Neural Network (CNN) under Limited Sample Sizes in Image Recognition, 2020 5th International Conference on Information Technology (InCIT), pp. 300-305, doi: 10.1109/InCIT50588.2020.9310787).

SUMMARY

These problems are addressed by the subject matter of the independent claims of the present disclosure.
Exemplary embodiments are defined in the dependent claims, the description, and the drawings.

In a first aspect, the present disclosure relates to a computer-implemented method comprising the steps: (a) providing a pre-trained conditional generative model, wherein the pre-trained conditional generative model was trained to reconstruct a plurality of reference representations of an examination area of a plurality of examination objects, (b) providing a native representation of the examination area of a new examination object, wherein the native representation represents the examination area of the new examination object without contrast agent, (c) providing a contrast-enhanced representation of the examination area of the new examination object, wherein the contrast-enhanced representation represents the examination area of the new examination object after administration of an amount of a contrast agent, (d) generating one or more embeddings at least partially based on the native representation and/or the contrast-enhanced representation, (e) providing starting data, (f) generating a synthetic representation of the examination area of the new examination object based on the starting data using the pre-trained conditional generative model and using the one or more embeddings as condition, (g) generating a transformed synthetic representation based on the synthetic representation, wherein the transformed synthetic representation is a contrast-enhanced representation of the examination area of the new examination object in which the contrast-enhancement is reduced compared to the synthetic representation, (h) quantifying deviations between the transformed synthetic representation and the contrast-enhanced representation of the examination area of the new examination object, (i) reducing the deviations by modifying the starting data and/or the one or more embeddings and/or parameters of the pre-trained conditional generative model, (j) repeating steps (f) to (i) one or more times, (k) outputting and/or storing the synthetic representation of the examination area of the new examination object and/or a synthetic image of the examination area of the new examination object generated therefrom, and/or transmitting the synthetic representation and/or the synthetic image to a separate computer system.

In another aspect, the present disclosure provides a computer system comprising: a processing unit; and a memory storing an application program configured to perform, when executed by the processing unit, an operation, the operation comprising: (a) providing a pre-trained conditional generative model, wherein the pre-trained conditional generative model was trained to reconstruct a plurality of reference representations of an examination area of a plurality of examination objects, (b) providing a native representation of the examination area of a new examination object, wherein the native representation represents the examination area of the new examination object without contrast agent, (c) providing a contrast-enhan