EP-4567716-B1 - GENERATING SYNTHETIC REPRESENTATIONS
Inventors
- MOHAMMADI, Seyed Sadegh
- Montalt Tordera, Javier
- Prokop, Jakub
Dates
- Publication Date: 2026-05-13
- Application Date: 2024-12-02
Claims (16)
- A computer-implemented method of training a generative machine learning model (MLM), the method comprising:
  - providing the generative machine learning model (MLM),
  - providing training data (TD), wherein the training data (TD) comprises, for each reference object of a plurality of reference objects, a first reference representation (RR1) representing an examination area of the reference object without contrast agent, and a second reference representation (RR2) representing the examination area of the reference object after application of a first amount of a contrast agent,
  - training the generative machine learning model (MLM), wherein training the generative machine learning model (MLM) comprises, for each reference object of the plurality of reference objects:
    • causing the generative machine learning model (MLM) to generate a first synthetic representation (SR1) of the examination area of the reference object based on the first reference representation (RR1) and/or the second reference representation (RR2),
    • generating a second synthetic representation (SR2) of the examination area of the reference object, wherein generating the second synthetic representation (SR2) comprises: subtracting the first reference representation (RR1) from the first synthetic representation (SR1),
    • generating a third synthetic representation (SR3) of the examination area of the reference object, wherein generating the third synthetic representation (SR3) comprises: reducing the contrast of the second synthetic representation (SR2),
    • generating a fourth synthetic representation (SR4) of the examination area of the reference object, wherein generating the fourth synthetic representation (SR4) comprises: subtracting the third synthetic representation (SR3) from the first synthetic representation (SR1) or adding the third synthetic representation (SR3) to the first reference representation (RR1),
    • determining deviations between the fourth synthetic representation (SR4) and the second reference representation (RR2),
    • reducing the deviations by modifying parameters of the generative machine learning model (MLM),
  - storing the trained generative machine learning model (MLMt) and/or transmitting the trained generative machine learning model (MLMt) to another computer system and/or using the trained generative machine learning model (MLMt) to generate a synthetic representation (SR) of the examination area of an examination object.
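The per-reference-object training computation of claim 1 can be sketched in a few lines. The following is an illustrative numpy sketch, not the patent's implementation: the `generate` callable stands in for the generative machine learning model, `alpha` is an assumed attenuation factor, and the mean-squared error is an assumed choice of deviation measure.

```python
import numpy as np

def training_step(rr1, rr2, generate, alpha=0.5):
    """One evaluation of the claimed training scheme (illustrative only).

    rr1: first reference representation (native, without contrast agent)
    rr2: second reference representation (after the first amount of contrast agent)
    generate: stand-in for the generative machine learning model
    alpha: assumed attenuation factor with 0 < alpha < 1
    """
    sr1 = generate(rr1, rr2)      # SR1: first synthetic representation
    sr2 = sr1 - rr1               # SR2: isolate the synthetic contrast signal
    sr3 = alpha * sr2             # SR3: reduce its contrast (linear attenuation)
    sr4 = sr1 - sr3               # SR4 (equivalently: rr1 + (1 - alpha) * sr2)
    return float(np.mean((sr4 - rr2) ** 2))  # deviation, here an MSE stand-in
```

In an actual training loop this deviation would be reduced by modifying the model's parameters, e.g. via gradient descent.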
- A computer-implemented method of generating a synthetic representation (SR) using a trained generative machine learning model (MLMt), the method comprising:
  - providing the trained generative machine learning model (MLMt),
    • wherein the trained generative machine learning model (MLMt) was trained on training data (TD),
    • wherein the training data (TD) comprised, for each reference object of a plurality of reference objects, a first reference representation (RR1) representing an examination area of the reference object without contrast agent, and a second reference representation (RR2) representing the examination area of the reference object after application of a first amount of a contrast agent,
    • wherein training the generative machine learning model (MLM) comprised, for each reference object of the plurality of reference objects:
      ∘ causing the generative machine learning model (MLM) to generate a first synthetic representation (SR1) of the examination area of the reference object based on the first reference representation (RR1) and/or the second reference representation (RR2),
      ∘ generating a second synthetic representation (SR2) of the examination area of the reference object, wherein generating the second synthetic representation (SR2) comprises: subtracting the first reference representation (RR1) from the first synthetic representation (SR1),
      ∘ generating a third synthetic representation (SR3) of the examination area of the reference object, wherein generating the third synthetic representation (SR3) comprises: reducing the contrast of the second synthetic representation (SR2),
      ∘ generating a fourth synthetic representation (SR4) of the examination area of the reference object, wherein generating the fourth synthetic representation (SR4) comprises: subtracting the third synthetic representation (SR3) from the first synthetic representation (SR1) or adding the third synthetic representation (SR3) to the first reference representation (RR1),
      ∘ determining deviations between the fourth synthetic representation (SR4) and the second reference representation (RR2),
      ∘ reducing the deviations by modifying parameters of the generative machine learning model (MLM),
  - providing a first representation (R1) and/or a second representation (R2) of the examination area of an examination object, wherein the first representation (R1) represents the examination area of the examination object without contrast agent, and the second representation (R2) represents the examination area of the examination object after application of the first amount of the contrast agent,
  - causing the trained generative machine learning model (MLMt) to generate a synthetic representation (SR) of the examination area of the examination object based on the first representation (R1) and/or the second representation (R2),
  - outputting the synthetic representation (SR) of the examination area of the examination object and/or storing the synthetic representation (SR) of the examination area of the examination object in a data storage and/or transmitting the synthetic representation (SR) of the examination area of the examination object to a separate computer system.
- The method of claim 1 or 2, wherein each reference object is a living being, e.g. a mammal, e.g. a human, and the examination object is a living being, e.g. a mammal, e.g. a human.
- The method of any one of claims 1 to 3, wherein the examination area is or comprises a liver, kidney, heart, lung, brain, stomach, bladder, prostate, intestine, breast, thyroid, pancreas, uterus or a part thereof of a mammal, e.g. of a human.
- The method of any one of claims 1 to 4, wherein each representation is a radiologic representation.
- The method of any one of claims 1 to 5, wherein the synthetic representation (SR) of the examination area of the examination object represents the examination area of the examination object after application of a second amount of the contrast agent, wherein the second amount is larger than the first amount.
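A small worked example may clarify why this training scheme yields a representation of a larger amount of contrast agent: if the deviations are driven to zero, then RR1 + (1 − α)·(SR1 − RR1) ≈ RR2, so the model's output SR1 carries the contrast enhancement of RR2 scaled by 1/(1 − α). The grey values and α = 0.5 below are illustrative assumptions, not values taken from the claims.

```python
import numpy as np

alpha = 0.5                             # assumed attenuation factor
rr1 = np.array([100.0, 100.0])          # native grey values (illustrative)
rr2 = np.array([120.0, 140.0])          # after the first amount of contrast agent

# Output a perfectly trained model would have to produce under this loss:
# the contrast enhancement (rr2 - rr1) is amplified by 1 / (1 - alpha).
sr1 = rr1 + (rr2 - rr1) / (1 - alpha)

# The training-time reconstruction SR4 = SR1 - alpha * (SR1 - RR1)
# then matches RR2, as the deviation term demands.
sr4 = sr1 - alpha * (sr1 - rr1)
assert np.allclose(sr4, rr2)
```

With α = 0.5 the enhancement is doubled, i.e. SR1 behaves like an image acquired with twice the applied amount of contrast agent.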
- The method of any one of claims 1 to 6, wherein reducing the contrast of the second synthetic representation (SR2) comprises: linear or non-linear attenuation of grey values or color values of image elements of the second synthetic representation (SR2).
- The method of any one of claims 1 to 7, wherein reducing the contrast of the second synthetic representation (SR2) comprises: multiplying grey values or color values of image elements of the second synthetic representation (SR2) by an attenuation factor, wherein the attenuation factor is greater than zero and smaller than 1.
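The attenuation described in the preceding claim amounts to a single elementwise multiplication. The sketch below is illustrative; the function name and grey values are assumptions, not part of the claims.

```python
import numpy as np

def attenuate(sr2, factor):
    """Reduce contrast by multiplying grey/color values by a factor in (0, 1)."""
    if not 0.0 < factor < 1.0:
        raise ValueError("attenuation factor must be greater than 0 and smaller than 1")
    return factor * np.asarray(sr2, dtype=float)

# Hypothetical SR2 grey values, attenuated by an assumed factor of 0.25:
sr3 = attenuate([0.0, 40.0, 80.0], 0.25)  # -> [0., 10., 20.]
```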
- The method of any one of claims 1 to 8, wherein the generative machine learning model (MLM) is or comprises one or more of the following: artificial neural network, convolutional neural network, variational autoencoder, generative adversarial network, transformer, diffusion network.
- The method of any one of claims 1 to 10, wherein the examination area is a human liver or comprises a human liver or is part of a human liver, and the contrast agent is a hepatobiliary contrast agent, e.g. a hepatobiliary MRI contrast agent.
- The method of any one of claims 1 to 11, wherein the generation of the first synthetic representation (SR1) of the examination area of the reference object is based on the first reference representation (RR1) and the second reference representation (RR2), and the generation of the synthetic representation (SR) of the examination area of the examination object is based on the first representation (R1) and the second representation (R2).
- A computer system (1) comprising: a processing unit (20); and a memory (50) storing a computer program (60) configured to perform, when executed by the processing unit (20), an operation, the operation comprising:
  - providing the trained generative machine learning model (MLMt),
    • wherein the trained generative machine learning model (MLMt) was trained on training data (TD),
    • wherein the training data (TD) comprised, for each reference object of a plurality of reference objects, a first reference representation (RR1) representing an examination area of the reference object without contrast agent, and a second reference representation (RR2) representing the examination area of the reference object after application of a first amount of a contrast agent,
    • wherein training the generative machine learning model (MLM) comprised, for each reference object of the plurality of reference objects:
      ∘ causing the generative machine learning model (MLM) to generate a first synthetic representation (SR1) of the examination area of the reference object based on the first reference representation (RR1) and/or the second reference representation (RR2),
      ∘ generating a second synthetic representation (SR2) of the examination area of the reference object, wherein generating the second synthetic representation (SR2) comprises: subtracting the first reference representation (RR1) from the first synthetic representation (SR1),
      ∘ generating a third synthetic representation (SR3) of the examination area of the reference object, wherein generating the third synthetic representation (SR3) comprises: reducing the contrast of the second synthetic representation (SR2),
      ∘ generating a fourth synthetic representation (SR4) of the examination area of the reference object, wherein generating the fourth synthetic representation (SR4) comprises: subtracting the third synthetic representation (SR3) from the first synthetic representation (SR1) or adding the third synthetic representation (SR3) to the first reference representation (RR1),
      ∘ determining deviations between the fourth synthetic representation (SR4) and the second reference representation (RR2),
      ∘ reducing the deviations by modifying parameters of the generative machine learning model (MLM),
  - providing a first representation (R1) and/or a second representation (R2) of the examination area of an examination object, wherein the first representation (R1) represents the examination area of the examination object without contrast agent, and the second representation (R2) represents the examination area of the examination object after application of the first amount of the contrast agent,
  - causing the trained generative machine learning model (MLMt) to generate a synthetic representation (SR) of the examination area of the examination object based on the first representation (R1) and/or the second representation (R2),
  - outputting the synthetic representation (SR) of the examination area of the examination object and/or storing the synthetic representation (SR) of the examination area of the examination object in a data storage and/or transmitting the synthetic representation (SR) of the examination area of the examination object to a separate computer system.
- A non-transitory computer readable storage medium having stored thereon a computer program that, when executed by a processing unit (20) of a computer system (1), causes the computer system (1) to perform the following steps:
  - providing the trained generative machine learning model (MLMt),
    • wherein the trained generative machine learning model (MLMt) was trained on training data (TD),
    • wherein the training data (TD) comprised, for each reference object of a plurality of reference objects, a first reference representation (RR1) representing an examination area of the reference object without contrast agent, and a second reference representation (RR2) representing the examination area of the reference object after application of a first amount of a contrast agent,
    • wherein training the generative machine learning model (MLM) comprised, for each reference object of the plurality of reference objects:
      ∘ causing the generative machine learning model (MLM) to generate a first synthetic representation (SR1) of the examination area of the reference object based on the first reference representation (RR1) and/or the second reference representation (RR2),
      ∘ generating a second synthetic representation (SR2) of the examination area of the reference object, wherein generating the second synthetic representation (SR2) comprises: subtracting the first reference representation (RR1) from the first synthetic representation (SR1),
      ∘ generating a third synthetic representation (SR3) of the examination area of the reference object, wherein generating the third synthetic representation (SR3) comprises: reducing the contrast of the second synthetic representation (SR2),
      ∘ generating a fourth synthetic representation (SR4) of the examination area of the reference object, wherein generating the fourth synthetic representation (SR4) comprises: subtracting the third synthetic representation (SR3) from the first synthetic representation (SR1) or adding the third synthetic representation (SR3) to the first reference representation (RR1),
      ∘ determining deviations between the fourth synthetic representation (SR4) and the second reference representation (RR2),
      ∘ reducing the deviations by modifying parameters of the generative machine learning model (MLM),
  - providing a first representation (R1) and/or a second representation (R2) of the examination area of an examination object, wherein the first representation (R1) represents the examination area of the examination object without contrast agent, and the second representation (R2) represents the examination area of the examination object after application of the first amount of the contrast agent,
  - causing the trained generative machine learning model (MLMt) to generate a synthetic representation (SR) of the examination area of the examination object based on the first representation (R1) and/or the second representation (R2),
  - outputting the synthetic representation (SR) of the examination area of the examination object and/or storing the synthetic representation (SR) of the examination area of the examination object in a data storage and/or transmitting the synthetic representation (SR) of the examination area of the examination object to a separate computer system.
- Use of a contrast agent in an examination of an examination area of an examination object, the examination comprising:
  - providing a trained generative machine learning model (MLMt),
    • wherein the trained generative machine learning model (MLMt) was trained on training data (TD),
    • wherein the training data (TD) comprised, for each reference object of a plurality of reference objects, a first reference representation (RR1) representing an examination area of the reference object without contrast agent, and a second reference representation (RR2) representing the examination area of the reference object after application of an amount of the contrast agent,
    • wherein training the generative machine learning model (MLM) comprised, for each reference object of the plurality of reference objects:
      ∘ causing the generative machine learning model (MLM) to generate a first synthetic representation (SR1) of the examination area of the reference object based on the first reference representation (RR1) and/or the second reference representation (RR2),
      ∘ generating a second synthetic representation (SR2) of the examination area of the reference object, wherein generating the second synthetic representation (SR2) comprises: subtracting the first reference representation (RR1) from the first synthetic representation (SR1),
      ∘ generating a third synthetic representation (SR3) of the examination area of the reference object, wherein generating the third synthetic representation (SR3) comprises: reducing the contrast of the second synthetic representation (SR2),
      ∘ generating a fourth synthetic representation (SR4) of the examination area of the reference object, wherein generating the fourth synthetic representation (SR4) comprises: subtracting the third synthetic representation (SR3) from the first synthetic representation (SR1) or adding the third synthetic representation (SR3) to the first reference representation (RR1),
      ∘ determining deviations between the fourth synthetic representation (SR4) and the second reference representation (RR2),
      ∘ reducing the deviations by modifying parameters of the generative machine learning model (MLM),
  - generating a first representation (R1) and/or a second representation (R2) of the examination area of an examination object, wherein the first representation (R1) represents the examination area of the examination object without contrast agent, and the second representation (R2) represents the examination area of the examination object after application of the amount of the contrast agent,
  - causing the trained generative machine learning model (MLMt) to generate a synthetic representation (SR) of the examination area of the examination object based on the first representation (R1) and/or the second representation (R2),
  - outputting the synthetic representation (SR) of the examination area of the examination object and/or storing the synthetic representation (SR) of the examination area of the examination object in a data storage and/or transmitting the synthetic representation (SR) of the examination area of the examination object to a separate computer system.
- Use of the contrast agent of claim 14, wherein the contrast agent comprises
  - a Gd3+ complex of a compound of the formula (I), where Ar is a group selected from [structural formulae not reproduced], where # is the linkage to X, X is a group selected from CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#, where * is the linkage to Ar and # is the linkage to the acetic acid residue, R1, R2 and R3 are each independently a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3, R4 is a group selected from C2-C4 alkoxy, (H3C-CH2)-O-(CH2)2-O-, (H3C-CH2)-O-(CH2)2-O-(CH2)2-O- and (H3C-CH2)-O-(CH2)2-O-(CH2)2-O-(CH2)2-O-, R5 is a hydrogen atom, and R6 is a hydrogen atom, or a stereoisomer, tautomer, hydrate, solvate or salt thereof, or a mixture thereof, or
  - a Gd3+ complex of a compound of the formula (II), where Ar is a group selected from [structural formulae not reproduced], where # is the linkage to X, X is a group selected from CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#, where * is the linkage to Ar and # is the linkage to the acetic acid residue, R7 is a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3; R8 is a group selected from C2-C4 alkoxy, (H3C-CH2O)-(CH2)2-O-, (H3C-CH2O)-(CH2)2-O-(CH2)2-O- and (H3C-CH2O)-(CH2)2-O-(CH2)2-O-(CH2)2-O-; R9 and R10 independently represent a hydrogen atom; or a stereoisomer, tautomer, hydrate, solvate or salt thereof, or a mixture thereof,
  or the contrast agent comprises one of the following substances:
  - gadolinium(III) 2-[4,7,10-tris(carboxymethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetic acid,
  - gadolinium(III) ethoxybenzyldiethylenetriaminepentaacetic acid,
  - gadolinium(III) 2-[3,9-bis[1-carboxylato-4-(2,3-dihydroxypropylamino)-4-oxobutyl]-3,6,9,15-tetrazabicyclo[9.3.1]pentadeca-1(15),11,13-trien-6-yl]-5-(2,3-dihydroxypropylamino)-5-oxopentanoate,
  - dihydrogen [(±)-4-carboxy-5,8,11-tris(carboxymethyl)-1-phenyl-2-oxa-5,8,11-triazatridecan-13-oato(5-)]gadolinate(2-),
  - tetragadolinium [4,10-bis(carboxylatomethyl)-7-{3,6,12,15-tetraoxo-16-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]-9,9-bis({[({2-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]propanoyl}amino)acetyl]amino}methyl)-4,7,11,14-tetraazaheptadecan-2-yl}-1,4,7,10-tetraazacyclododecan-1-yl]acetate,
  - 2,2',2"-(10-{1-carboxy-2-[2-(4-ethoxyphenyl)ethoxy]ethyl}-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate,
  - gadolinium 2,2',2"-{10-[1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate,
  - gadolinium 2,2',2"-{10-[(1R)-1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate,
  - gadolinium (2S,2'S,2"S)-2,2',2"-{10-[(1S)-1-carboxy-4-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}butyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}tris(3-hydroxypropanoate),
  - gadolinium 2,2',2"-{10-[(1S)-4-(4-butoxyphenyl)-1-carboxybutyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate,
  - gadolinium(III) 5,8-bis(carboxylatomethyl)-2-[2-(methylamino)-2-oxoethyl]-10-oxo-2,5,8,11-tetraazadodecane-1-carboxylate hydrate,
  - gadolinium(III) 2-[4-(2-hydroxypropyl)-7,10-bis(2-oxido-2-oxoethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetate,
  - gadolinium(III) 2,2',2"-(10-((2R,3S)-1,3,4-trihydroxybutan-2-yl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate,
  - gadolinium 2,2',2"-{(2S)-10-(carboxymethyl)-2-[4-(2-ethoxyethoxy)benzyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate,
  - gadolinium 2,2',2"-[10-(carboxymethyl)-2-(4-ethoxybenzyl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl]triacetate.
- Kit comprising a computer program according to claim 13 and a contrast agent, wherein the contrast agent comprises
  - a Gd3+ complex of a compound of the formula (I), where Ar is a group selected from [structural formulae not reproduced], where # is the linkage to X, X is a group selected from CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#, where * is the linkage to Ar and # is the linkage to the acetic acid residue, R1, R2 and R3 are each independently a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3, R4 is a group selected from C2-C4 alkoxy, (H3C-CH2)-O-(CH2)2-O-, (H3C-CH2)-O-(CH2)2-O-(CH2)2-O- and (H3C-CH2)-O-(CH2)2-O-(CH2)2-O-(CH2)2-O-, R5 is a hydrogen atom, and R6 is a hydrogen atom, or a stereoisomer, tautomer, hydrate, solvate or salt thereof, or a mixture thereof, or
  - a Gd3+ complex of a compound of the formula (II), where Ar is a group selected from [structural formulae not reproduced], where # is the linkage to X, X is a group selected from CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#, where * is the linkage to Ar and # is the linkage to the acetic acid residue, R7 is a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3; R8 is a group selected from C2-C4 alkoxy, (H3C-CH2O)-(CH2)2-O-, (H3C-CH2O)-(CH2)2-O-(CH2)2-O- and (H3C-CH2O)-(CH2)2-O-(CH2)2-O-(CH2)2-O-; R9 and R10 independently represent a hydrogen atom; or a stereoisomer, tautomer, hydrate, solvate or salt thereof, or a mixture thereof,
  or the contrast agent comprises one of the following substances:
  - gadolinium(III) 2-[4,7,10-tris(carboxymethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetic acid,
  - gadolinium(III) ethoxybenzyldiethylenetriaminepentaacetic acid,
  - gadolinium(III) 2-[3,9-bis[1-carboxylato-4-(2,3-dihydroxypropylamino)-4-oxobutyl]-3,6,9,15-tetrazabicyclo[9.3.1]pentadeca-1(15),11,13-trien-6-yl]-5-(2,3-dihydroxypropylamino)-5-oxopentanoate,
  - dihydrogen [(±)-4-carboxy-5,8,11-tris(carboxymethyl)-1-phenyl-2-oxa-5,8,11-triazatridecan-13-oato(5-)]gadolinate(2-),
  - tetragadolinium [4,10-bis(carboxylatomethyl)-7-{3,6,12,15-tetraoxo-16-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]-9,9-bis({[({2-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]propanoyl}amino)acetyl]amino}methyl)-4,7,11,14-tetraazaheptadecan-2-yl}-1,4,7,10-tetraazacyclododecan-1-yl]acetate,
  - 2,2',2"-(10-{1-carboxy-2-[2-(4-ethoxyphenyl)ethoxy]ethyl}-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate,
  - gadolinium 2,2',2"-{10-[1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate,
  - gadolinium 2,2',2"-{10-[(1R)-1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate,
  - gadolinium (2S,2'S,2"S)-2,2',2"-{10-[(1S)-1-carboxy-4-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}butyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}tris(3-hydroxypropanoate),
  - gadolinium 2,2',2"-{10-[(1S)-4-(4-butoxyphenyl)-1-carboxybutyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate,
  - gadolinium(III) 5,8-bis(carboxylatomethyl)-2-[2-(methylamino)-2-oxoethyl]-10-oxo-2,5,8,11-tetraazadodecane-1-carboxylate hydrate,
  - gadolinium(III) 2-[4-(2-hydroxypropyl)-7,10-bis(2-oxido-2-oxoethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetate,
  - gadolinium(III) 2,2',2"-(10-((2R,3S)-1,3,4-trihydroxybutan-2-yl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate,
  - gadolinium 2,2',2"-{(2S)-10-(carboxymethyl)-2-[4-(2-ethoxyethoxy)benzyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate,
  - gadolinium 2,2',2"-[10-(carboxymethyl)-2-(4-ethoxybenzyl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl]triacetate.
Description
FIELD OF THE DISCLOSURE

Systems, methods, and computer programs disclosed herein relate to generating synthetic contrast-enhanced radiologic images.

BACKGROUND

WO2019/074938A1 and WO2022/184297A1 describe methods for generating a synthetic contrast-enhanced radiological image showing an examination area of an examination object after application of a standard amount of a contrast agent, although only a smaller amount of contrast agent than the standard amount was applied. The standard amount is the amount recommended by the manufacturer and/or distributor of the contrast agent and/or the amount approved by a regulatory authority and/or the amount listed in a package insert for the contrast agent. The methods described in WO2019/074938A1 and WO2022/184297A1 can therefore be used to reduce the amount of contrast agent.

The methods described in WO2019/074938A1 and WO2022/184297A1 use machine learning models that have been trained on training data. For each examination object of a plurality of examination objects, the training data comprises, as input data, a native radiological image and a radiological image after application of an amount of the contrast agent that is smaller than the standard amount, and, as target data, a radiological image after application of the standard amount of the contrast agent. The training procedure cannot be carried out if, for example, the target data is not available. In order to generate a synthetic radiological image that represents an examination area of an examination object after application of a larger than standard amount of contrast agent, corresponding target data would have to be available, i.e. a larger than standard amount of contrast agent would have to be administered to examination objects.

SUMMARY

These problems are addressed by the subject matter of the independent claims of the present disclosure. Exemplary embodiments are defined in the dependent claims, the description, and the drawings.
In a first aspect, the present disclosure relates to a computer-implemented method comprising:
- providing a generative machine learning model,
- providing training data, wherein the training data comprises, for each reference object of a plurality of reference objects, a first reference representation representing an examination area of the reference object without contrast agent, and a second reference representation representing the examination area of the reference object after application of an amount of a contrast agent,
- training the generative machine learning model, wherein training the generative machine learning model comprises, for each reference object of the plurality of reference objects:
  • causing the generative machine learning model to generate a first synthetic representation of the examination area of the reference object based on the first reference representation and/or the second reference representation,
  • generating a second synthetic representation of the examination area of the reference object, wherein generating the second synthetic representation comprises: subtracting the first reference representation from the first synthetic representation,
  • generating a third synthetic representation of the examination area of the reference object, wherein generating the third synthetic representation comprises: reducing the contrast of the second synthetic representation,
  • generating a fourth synthetic representation of the examination area of the reference object, wherein generating the fourth synthetic representation comprises: subtracting the third synthetic representation from the first synthetic representation or adding the third synthetic representation to the first reference representation,
  • determining deviations between the fourth synthetic representation and the second reference representation,
  • reducing the deviations by modifying parameters of the generative machine learning model,
- storing the trained generative machine learning model and/or transmitting the trained generative machine learning model to another computer system and/or using the trained generative machine learning model to generate a synthetic representation of the examination area of an examination object.

In another aspect, the present disclosure relates to a computer-implemented method comprising:
- providing a trained generative machine learning model,
  • wherein the trained generative machine learning model was trained on training data,
  • wherein the training data comprised, for each reference object of a plurality of reference objects, a first reference representation representing an examination area of the reference object without contrast agent, and a second reference representation representing the examination area of the reference object after application of an amount of a contrast agent,
  • wherein training the generative machine learning model comprised, for each reference object of the plurality of reference objects:
    ∘ causing the generative machine learning model to generate a first synthetic representat