EP-4581641-B1 - GENERATION OF SYNTHETIC RADIOLOGICAL RECORDINGS
Inventors
- LENGA, Matthias
- BALTRUSCHAT, Ivo Matteo
- KREIS, Felix Karl
Dates
- Publication Date: 2026-05-06
- Application Date: 2023-08-23
Claims (16)
- Computer-implemented method for generating a synthetic radiological image, comprising:
- providing a trained machine-learning model (MLMt),
∘ the trained machine-learning model (MLMt) having been trained by means of training data (TD) to generate, on the basis of at least one input representation (R1, R2) of an examination region of an examination object, a synthetic representation (SR) of the examination region of the examination object,
∘ the training data (TD) comprising, for each examination object of a multiplicity of examination objects, i) at least one input representation (R1, R2) of the examination region of the examination object, ii) a target representation (TR) of the examination region of the examination object and iii) a transformed target representation (TRT),
▪ the at least one input representation (R1, R2) representing the examination region of the respective examination object in a first period of time before or after administration of a contrast agent,
▪ the target representation (TR) representing the examination region of the respective examination object in a second period of time after the administration of the contrast agent, the second period of time following the first period of time,
▪ the transformed target representation (TRT) representing at least part of the examination region of the respective examination object in frequency space, if the target representation (TR) represents the examination region of the respective examination object in real space, or in real space, if the target representation (TR) represents the examination region of the respective examination object in frequency space,
∘ the training of the machine-learning model (MLMt) comprising reducing differences i) between at least part of the synthetic representation (SR) and at least part of the target representation (TR) and ii) between at least part of a transformed synthetic representation (SRT) and at least part of the transformed target representation (TRT),
- receiving at least one input representation (R1*, R2*) of the examination region of a new examination object, the at least one input representation (R1*, R2*) of the examination region of the new examination object representing the examination region in a first period of time before and/or after administration of a contrast agent,
- inputting the at least one input representation (R1*, R2*) of the examination region of the new examination object into the trained machine-learning model (MLMt),
- receiving a synthetic representation (SR*) of the examination region of the new examination object from the machine-learning model (MLMt),
- outputting and/or storing the synthetic representation (SR*) of the examination region of the new examination object and/or transmitting the synthetic representation (SR*) of the examination region of the new examination object to a separate computer system.
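The training criterion recited in claim 1 combines a real-space comparison with a frequency-space comparison. The following is a minimal sketch of such a dual-domain loss; the centred 2D FFT as the real-space/frequency-space transform and the mean squared error as the measure of difference are illustrative assumptions, since the claim leaves both open:

```python
import numpy as np

def to_frequency_space(image):
    """Transform a real-space representation into frequency space
    via a centred 2D Fourier transform (illustrative choice)."""
    return np.fft.fftshift(np.fft.fft2(image))

def dual_domain_loss(synthetic, target):
    """Quantify differences i) between the synthetic representation (SR)
    and the target representation (TR) in real space and ii) between
    their frequency-space transforms (SRT, TRT), here as a sum of
    mean squared errors."""
    real_term = np.mean(np.abs(synthetic - target) ** 2)
    sr_t = to_frequency_space(synthetic)
    tr_t = to_frequency_space(target)
    freq_term = np.mean(np.abs(sr_t - tr_t) ** 2)
    return real_term + freq_term
```

Reducing this combined quantity by modifying the model parameters penalizes both pixel-wise deviations and deviations in the spectral content of the synthetic representation.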
- Method according to Claim 1, wherein the training of the machine-learning model (MLMt) comprises:
- receiving and/or providing the training data (TD), the training data (TD) comprising a set of input data and target data for each examination object of the multiplicity of examination objects,
∘ each set comprising the at least one input representation (R1, R2) of the examination region of the examination object as input data and the target representation (TR) of the examination region of the examination object and the transformed target representation (TRT) as target data,
∘ the at least one input representation (R1, R2) representing the examination region in the first period of time before and/or after the administration of the contrast agent and the target representation (TR) representing the examination region in the second period of time after the administration of the contrast agent,
∘ the transformed target representation (TRT) representing at least part of the examination region of the examination object
▪ in frequency space, if the target representation (TR) represents the examination region of the examination object in real space, or
▪ in real space, if the target representation (TR) represents the examination region of the examination object in frequency space,
- training a machine-learning model (MLM), the machine-learning model (MLM) being configured to generate, on the basis of at least one input representation (R1, R2) of an examination region of an examination object and model parameters (MP), a synthetic representation (SR) of the examination region of the examination object, wherein the training comprises for each examination object of the multiplicity of examination objects:
∘ feeding the at least one input representation (R1, R2) to the machine-learning model (MLM),
∘ receiving a synthetic representation (SR) of the examination region of the examination object from the machine-learning model (MLM),
∘ generating and/or receiving a transformed synthetic representation (SRT) on the basis of the synthetic representation (SR) and/or in relation to the synthetic representation (SR), the transformed synthetic representation (SRT) representing at least part of the examination region of the examination object
▪ in frequency space, if the synthetic representation (SR) represents the examination region of the examination object in real space, or
▪ in real space, if the synthetic representation (SR) represents the examination region of the examination object in frequency space,
∘ quantifying the differences i) between at least part of the synthetic representation (SR) and at least part of the target representation (TR) and ii) between at least part of the transformed synthetic representation (SRT) and at least part of the transformed target representation (TRT) by means of a loss function (L),
∘ reducing the differences by modifying model parameters (MP),
- outputting and/or storing the trained machine-learning model (MLMt) and/or the model parameters (MP) and/or transmitting the trained machine-learning model (MLMt) and/or the model parameters (MP) to a separate computer system.
- Method according to Claim 2, wherein the receiving and/or providing of training data (TD) comprises:
- generating a partial transformed target representation (TRT,P), the partial transformed target representation (TRT,P) being reduced to one part or multiple parts of the transformed target representation (TRT),
wherein the generating and/or receiving of a transformed synthetic representation (SRT) on the basis of and/or in relation to the synthetic representation (SR) comprises:
- generating a partial transformed synthetic representation (SRT,P), the partial transformed synthetic representation (SRT,P) being reduced to one part or multiple parts of the transformed synthetic representation (SRT),
wherein the quantifying of the differences between the transformed synthetic representation (SRT) and the transformed target representation (TRT) comprises:
- quantifying the differences between the partial transformed synthetic representation (SRT,P) and the partial transformed target representation (TRT,P).
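Reducing the transformed representations to "one part or multiple parts" can be realized, for example, by masking a frequency band of the spectrum and quantifying the differences only inside that band. A sketch under the assumption of a centred 2D FFT and an annular (radial) band mask; the mask shape and the error measure are illustrative choices not fixed by the claim:

```python
import numpy as np

def radial_frequency_mask(shape, r_min, r_max):
    """Boolean mask selecting an annular frequency band of a centred
    2D spectrum; r_min/r_max are fractions of the Nyquist radius."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return (r >= r_min) & (r < r_max)

def partial_frequency_loss(synthetic, target, r_min, r_max):
    """Reduce the transformed representations SRT and TRT to one part
    (a frequency band) and quantify the differences only there."""
    sr_t = np.fft.fftshift(np.fft.fft2(synthetic))
    tr_t = np.fft.fftshift(np.fft.fft2(target))
    mask = radial_frequency_mask(synthetic.shape, r_min, r_max)
    return np.mean(np.abs(sr_t[mask] - tr_t[mask]) ** 2)
```

Several such masks can be combined to weight multiple parts of the spectrum differently within one loss function.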
- Method according to any of Claims 1 to 3, wherein the training comprises for each examination object of the multiplicity of examination objects:
∘ feeding the at least one input representation (R1, R2) to the machine-learning model (MLM),
∘ receiving the synthetic representation (SR) and a first transformed synthetic representation (SRT) of the examination region of the examination object from the machine-learning model (MLM), the first transformed synthetic representation (SRT) representing at least part of the examination region of the examination object
▪ in frequency space, if the synthetic representation (SR) represents the examination region of the examination object in real space, or
▪ in real space, if the synthetic representation (SR) represents the examination region of the examination object in frequency space,
∘ generating a second transformed synthetic representation (SRT#) on the basis of the synthetic representation (SR) by means of a transform (T), the second transformed synthetic representation (SRT#) representing at least part of the examination region of the examination object
▪ in frequency space, if the synthetic representation (SR) represents the examination region of the examination object in real space, or
▪ in real space, if the synthetic representation (SR) represents the examination region of the examination object in frequency space,
- quantifying the differences i) between at least part of the synthetic representation (SR) and at least part of the target representation (TR), ii) between at least part of the first transformed synthetic representation (SRT) and at least part of the transformed target representation (TRT) and iii) between at least part of the first transformed synthetic representation (SRT) and at least part of the second transformed synthetic representation (SRT#) by means of a loss function (L),
- reducing the differences by modifying model parameters (MP).
- Method according to any of Claims 1 to 4, wherein the machine-learning model (MLM) undergoing training comprises a first machine-learning model (MLM1) and a second machine-learning model (MLM2), wherein the first machine-learning model (MLM1) is configured to generate, on the basis of at least one input representation (R1, R2) and model parameters (MP1), a synthetic representation (SR) of the examination region of the examination object, wherein the second machine-learning model (MLM2) is configured to reconstruct, on the basis of the synthetic representation (SR) of the examination region of the examination object and model parameters (MP2), at least one input representation (R1, R2), wherein the training comprises for each examination object of the multiplicity of examination objects:
∘ generating a transformed input representation (R2) on the basis of at least one input representation (R1) by means of a transform (T), the transformed input representation (R2) representing at least part of the examination region of the examination object
▪ in frequency space, if the at least one input representation (R1) represents the examination region of the examination object in real space, or
▪ in real space, if the input representation (R1) represents the examination region of the examination object in frequency space,
∘ feeding the at least one input representation (R1) and/or the transformed input representation (R2) to the first machine-learning model (MLM1),
∘ receiving a synthetic representation (SR) of the examination region of the examination object from the first machine-learning model (MLM1),
∘ generating and/or receiving a transformed synthetic representation (SRT) on the basis of and/or in relation to the synthetic representation (SR), the transformed synthetic representation (SRT) representing at least part of the examination region of the examination object
▪ in frequency space, if the synthetic representation (SR) represents the examination region of the examination object in real space, or
▪ in real space, if the synthetic representation (SR) represents the examination region of the examination object in frequency space,
∘ feeding the synthetic representation (SR) and/or the transformed synthetic representation (SRT) to the second machine-learning model (MLM2),
∘ receiving a predicted input representation (R1#) from the second machine-learning model (MLM2),
∘ generating and/or receiving a transformed predicted input representation (R2#) on the basis of and/or in relation to the predicted input representation (R1#), the transformed predicted input representation (R2#) representing at least part of the examination region of the examination object
▪ in frequency space, if the predicted input representation (R1#) represents the examination region of the examination object in real space, or
▪ in real space, if the predicted input representation (R1#) represents the examination region of the examination object in frequency space,
∘ quantifying the differences i) between at least part of the synthetic representation (SR) and at least part of the target representation (TR), ii) between at least part of the transformed synthetic representation (SRT) and at least part of the transformed target representation (TRT), iii) between at least part of the input representation (R1) and at least part of the predicted input representation (R1#) and iv) between at least part of the transformed input representation (R2) and at least part of the transformed predicted input representation (R2#) by means of a loss function (L),
∘ reducing the differences by modifying model parameters (MP).
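The two-model scheme above amounts to a four-term loss: MLM1 maps the input R1 to the synthetic representation SR, MLM2 maps SR back to a predicted input R1#, and differences are quantified in real space and in frequency space for both directions. A minimal sketch with the two models passed in as callables; the FFT as the transform (T) and the mean squared error are again illustrative assumptions:

```python
import numpy as np

def fft2c(x):
    """Centred 2D FFT used here as the transform (T)."""
    return np.fft.fftshift(np.fft.fft2(x))

def mse(a, b):
    return np.mean(np.abs(a - b) ** 2)

def cycle_loss(mlm1, mlm2, r1, tr):
    """Four-term loss: i) SR vs TR, ii) SRT vs TRT,
    iii) R1# vs R1, iv) R2# vs R2."""
    r2 = fft2c(r1)          # transformed input representation (R2)
    sr = mlm1(r1)           # synthetic representation (SR)
    sr_t = fft2c(sr)        # transformed synthetic representation (SRT)
    tr_t = fft2c(tr)        # transformed target representation (TRT)
    r1_hat = mlm2(sr)       # predicted input representation (R1#)
    r2_hat = fft2c(r1_hat)  # transformed predicted input representation (R2#)
    return (mse(sr, tr) + mse(sr_t, tr_t)
            + mse(r1_hat, r1) + mse(r2_hat, r2))
```

The last two terms act as a reconstruction (cycle) constraint: the synthetic representation must retain enough information for MLM2 to recover the input.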
- Method according to any of Claims 1 to 5, wherein the examination object is a mammal, preferably a human.
- Method according to any of Claims 1 to 6, wherein the examination region is or includes a liver, brain, heart, kidney, lung, stomach, intestine, pancreas, thyroid gland, prostate or breast of a human.
- Method according to any of Claims 1 to 7, wherein the examination region is a liver or part of a liver of a human.
- Method according to any of Claims 1 to 8, wherein each input representation of the at least one input representation (R1, R2, R1*, R2*) is a representation of the examination region in real space, the target representation (TR) is a representation of the examination region in real space, and the synthetic representation (SR, SR*) is a representation of the examination region in real space.
- Method according to any of Claims 3 to 9, wherein the partial transformed synthetic representation (SRT,P) represents the examination region in frequency space, the partial transformed synthetic representation (SRT,P) being reduced to a frequency range of the transformed synthetic representation (SRT), contrast information being encoded in the frequency range.
- Method according to any of Claims 3 to 9, wherein the partial transformed synthetic representation (SRT,P) represents the examination region in frequency space, the partial transformed synthetic representation (SRT,P) being reduced to a frequency range of the transformed synthetic representation (SRT), information about fine structures being encoded in the frequency range.
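The two preceding claims distinguish two frequency ranges of the transformed representation: low frequencies, in which contrast information is encoded, and high frequencies, in which fine structures (edges, detail) are encoded. A sketch of corresponding band masks for a centred 2D spectrum; the radial cutoff values are illustrative assumptions, not values taken from the claims:

```python
import numpy as np

def centred_radii(shape):
    """Radius of each frequency bin of a centred 2D spectrum,
    as a fraction of the Nyquist radius."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    return np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)

def contrast_band_mask(shape, cutoff=0.1):
    """Low-frequency band: overall contrast information."""
    return centred_radii(shape) < cutoff

def fine_structure_band_mask(shape, cutoff=0.5):
    """High-frequency band: fine structures."""
    return centred_radii(shape) >= cutoff
```

Restricting the frequency-space loss term to one of these masks emphasizes either faithful contrast reproduction or faithful reproduction of fine detail.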
- Computer system (10) for generating a synthetic radiological image, comprising
• a receiving unit (11),
• a control and calculation unit (12) and
• an output unit (13),
- wherein the control and calculation unit (12) is configured to provide a trained machine-learning model (MLMt),
∘ the trained machine-learning model (MLMt) having been trained by means of training data (TD) to generate, on the basis of at least one input representation (R1, R2) of an examination region of an examination object, a synthetic representation (SR) of the examination region of the examination object,
∘ the training data (TD) comprising, for each examination object of a multiplicity of examination objects, i) at least one input representation (R1, R2) of the examination region of the examination object, ii) a target representation (TR) of the examination region of the examination object and iii) a transformed target representation (TRT),
▪ the at least one input representation (R1, R2) representing the examination region of the respective examination object in a first period of time before or after administration of a contrast agent,
▪ the target representation (TR) representing the examination region of the respective examination object in a second period of time after the administration of the contrast agent, the second period of time following the first period of time,
▪ the transformed target representation (TRT) representing at least part of the examination region of the respective examination object in frequency space, if the target representation (TR) represents the examination region of the respective examination object in real space, or in real space, if the target representation (TR) represents the examination region of the respective examination object in frequency space,
∘ the training of the machine-learning model (MLMt) comprising reducing differences i) between at least part of the synthetic representation (SR) and at least part of the target representation (TR) and ii) between at least part of a transformed synthetic representation (SRT) and at least part of the transformed target representation (TRT),
- wherein the control and calculation unit (12) is configured to cause the receiving unit (11) to receive at least one input representation (R1*, R2*) of an examination region of a new examination object, the at least one input representation (R1*, R2*) of the examination region of the new examination object representing the examination region in a first period of time before and/or after administration of a contrast agent,
- wherein the control and calculation unit (12) is configured to input the at least one input representation (R1*, R2*) of the examination region of the new examination object into the trained machine-learning model (MLMt),
- wherein the control and calculation unit (12) is configured to receive from the machine-learning model (MLMt) a synthetic representation (SR*) of the examination region of the new examination object,
- wherein the control and calculation unit (12) is configured to cause the output unit (13) to output the synthetic representation (SR*) of the examination region of the new examination object and/or to store it and/or to transmit it to a separate computer system.
- Computer program product for generating a synthetic radiological image, comprising a computer program (40) that can be loaded into a working memory (22) of a computer system (1), where it causes the computer system (1) to execute the following steps:
- providing a trained machine-learning model (MLMt),
∘ the trained machine-learning model (MLMt) having been trained by means of training data (TD) to generate, on the basis of at least one input representation (R1, R2) of an examination region of an examination object, a synthetic representation (SR) of the examination region of the examination object,
∘ the training data (TD) comprising, for each examination object of a multiplicity of examination objects, i) at least one input representation (R1, R2) of the examination region of the examination object, ii) a target representation (TR) of the examination region of the examination object and iii) a transformed target representation (TRT),
▪ the at least one input representation (R1, R2) representing the examination region of the respective examination object in a first period of time before or after administration of a contrast agent,
▪ the target representation (TR) representing the examination region of the respective examination object in a second period of time after the administration of the contrast agent, the second period of time following the first period of time,
▪ the transformed target representation (TRT) representing at least part of the examination region of the respective examination object in frequency space, if the target representation (TR) represents the examination region of the respective examination object in real space, or in real space, if the target representation (TR) represents the examination region of the respective examination object in frequency space,
∘ the training of the machine-learning model (MLMt) comprising reducing differences i) between at least part of the synthetic representation (SR) and at least part of the target representation (TR) and ii) between at least part of a transformed synthetic representation (SRT) and at least part of the transformed target representation (TRT),
- receiving at least one input representation (R1*, R2*) of the examination region of a new examination object, the at least one input representation (R1*, R2*) of the examination region of the new examination object representing the examination region in a first period of time before and/or after administration of a contrast agent,
- inputting the at least one input representation (R1*, R2*) of the examination region of the new examination object into the trained machine-learning model (MLMt),
- receiving a synthetic representation (SR*) of the examination region of the new examination object from the machine-learning model (MLMt),
- outputting and/or storing the synthetic representation (SR*) of the examination region of the new examination object and/or transmitting the synthetic representation (SR*) of the examination region of the new examination object to a separate computer system.
- Use of a contrast agent in a radiological examination method, the radiological examination method comprising:
- providing a trained machine-learning model (MLMt),
∘ the trained machine-learning model (MLMt) having been trained by means of training data (TD) to generate, on the basis of at least one input representation (R1, R2) of an examination region of an examination object, a synthetic representation (SR) of the examination region of the examination object,
∘ the training data (TD) comprising, for each examination object of a multiplicity of examination objects, i) at least one input representation (R1, R2) of the examination region of the examination object, ii) a target representation (TR) of the examination region of the examination object and iii) a transformed target representation (TRT),
▪ the at least one input representation (R1, R2) representing the examination region of the respective examination object in a first period of time before or after administration of the contrast agent,
▪ the target representation (TR) representing the examination region of the respective examination object in a second period of time after the administration of the contrast agent, the second period of time following the first period of time,
▪ the transformed target representation (TRT) representing at least part of the examination region of the respective examination object in frequency space, if the target representation (TR) represents the examination region of the respective examination object in real space, or in real space, if the target representation (TR) represents the examination region of the respective examination object in frequency space,
∘ the training of the machine-learning model (MLMt) comprising reducing differences i) between at least part of the synthetic representation (SR) and at least part of the target representation (TR) and ii) between at least part of a transformed synthetic representation (SRT) and at least part of the transformed target representation (TRT),
- receiving at least one input representation (R1*, R2*) of the examination region of a new examination object, the at least one input representation (R1*, R2*) of the examination region of the new examination object representing the examination region in a first period of time before and/or after administration of a contrast agent,
- inputting the at least one input representation (R1*, R2*) of the examination region of the new examination object into the trained machine-learning model (MLMt),
- receiving a synthetic representation (SR*) of the examination region of the new examination object from the machine-learning model (MLMt),
- outputting and/or storing the synthetic representation (SR*) of the examination region of the new examination object and/or transmitting the synthetic representation (SR*) of the examination region of the new examination object to a separate computer system.
- Kit comprising a contrast agent and a computer program product comprising a computer program (40) that can be loaded into a working memory (22) of a computer system (1), where it causes the computer system (1) to execute the following steps:
- providing a trained machine-learning model (MLMt),
∘ the trained machine-learning model (MLMt) having been trained by means of training data (TD) to generate, on the basis of at least one input representation (R1, R2) of an examination region of an examination object, a synthetic representation (SR) of the examination region of the examination object,
∘ the training data (TD) comprising, for each examination object of a multiplicity of examination objects, i) at least one input representation (R1, R2) of the examination region of the examination object, ii) a target representation (TR) of the examination region of the examination object and iii) a transformed target representation (TRT),
▪ the at least one input representation (R1, R2) representing the examination region of the respective examination object in a first period of time before or after administration of the contrast agent,
▪ the target representation (TR) representing the examination region of the respective examination object in a second period of time after the administration of the contrast agent, the second period of time following the first period of time,
▪ the transformed target representation (TRT) representing at least part of the examination region of the respective examination object in frequency space, if the target representation (TR) represents the examination region of the respective examination object in real space, or in real space, if the target representation (TR) represents the examination region of the respective examination object in frequency space,
∘ the training of the machine-learning model (MLMt) comprising reducing differences i) between at least part of the synthetic representation (SR) and at least part of the target representation (TR) and ii) between at least part of a transformed synthetic representation (SRT) and at least part of the transformed target representation (TRT),
- receiving at least one input representation (R1*, R2*) of the examination region of a new examination object, the at least one input representation (R1*, R2*) of the examination region of the new examination object representing the examination region in a first period of time before and/or after administration of a contrast agent,
- inputting the at least one input representation (R1*, R2*) of the examination region of the new examination object into the trained machine-learning model (MLMt),
- receiving a synthetic representation (SR*) of the examination region of the new examination object from the machine-learning model (MLMt),
- outputting and/or storing the synthetic representation (SR*) of the examination region of the new examination object and/or transmitting the synthetic representation (SR*) of the examination region of the new examination object to a separate computer system.
- Kit according to Claim 15, wherein the contrast agent is or comprises one or more contrast agents selected from the following list:
gadoxetate disodium,
gadolinium(III) 2-[4,7,10-tris(carboxymethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetic acid,
gadolinium(III) ethoxybenzyldiethylenetriaminepentaacetic acid,
gadolinium(III) 2-[3,9-bis[1-carboxylato-4-(2,3-dihydroxypropylamino)-4-oxobutyl]-3,6,9,15-tetrazabicyclo[9.3.1]pentadeca-1(15),11,13-trien-6-yl]-5-(2,3-dihydroxypropylamino)-5-oxopentanoate,
dihydrogen [(±)-4-carboxy-5,8,11-tris(carboxymethyl)-1-phenyl-2-oxa-5,8,11-triazatridecan-13-oato(5-)]gadolinate(2-),
tetragadolinium [4,10-bis(carboxylatomethyl)-7-{3,6,12,15-tetraoxo-16-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]-9,9-bis({[({2-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]propanoyl}amino)acetyl]amino}methyl)-4,7,11,14-tetraazaheptadecan-2-yl}-1,4,7,10-tetraazacyclododecan-1-yl]acetate,
gadolinium 2,2',2"-(10-{1-carboxy-2-[2-(4-ethoxyphenyl)ethoxy]ethyl}-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate,
gadolinium 2,2',2"-{10-[1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate,
gadolinium 2,2',2"-{10-[(1R)-1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate,
gadolinium (2S,2'S,2"S)-2,2',2"-{10-[(1S)-1-carboxy-4-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}butyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}tris(3-hydroxypropanoate),
gadolinium 2,2',2"-{10-[(1S)-4-(4-butoxyphenyl)-1-carboxybutyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate,
gadolinium-2,2',2"-{(2S)-10-(carboxymethyl)-2-[4-(2-ethoxyethoxy)benzyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate,
gadolinium-2,2',2"-[10-(carboxymethyl)-2-(4-ethoxybenzyl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl]triacetate,
gadolinium(III) 5,8-bis(carboxylatomethyl)-2-[2-(methylamino)-2-oxoethyl]-10-oxo-2,5,8,11-tetraazadodecane-1-carboxylate hydrate,
gadolinium(III) 2-[4-(2-hydroxypropyl)-7,10-bis(2-oxido-2-oxoethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetate,
gadolinium(III) 2,2',2"-(10-((2R,3S)-1,3,4-trihydroxybutan-2-yl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate,
a Gd3+ complex of a compound of the formula (I), where Ar is a group selected from [structure not reproduced], where # is the linkage to X, X is a group selected from CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#, where * is the linkage to Ar and # is the linkage to the acetic acid residue, R1, R2 and R3 are each independently a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3, R4 is a group selected from C2-C4 alkoxy, (H3C-CH2)-O-(CH2)2-O-, (H3C-CH2)-O-(CH2)2-O-(CH2)2-O- and (H3C-CH2)-O-(CH2)2-O-(CH2)2-O-(CH2)2-O-, R5 is a hydrogen atom, and R6 is a hydrogen atom, or a stereoisomer, tautomer, hydrate, solvate or salt thereof, or a mixture thereof,
a Gd3+ complex of a compound of the formula (II), where Ar is a group selected from [structure not reproduced], where # is the linkage to X, X is a group selected from CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#, where * is the linkage to Ar and # is the linkage to the acetic acid residue, R7 is a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3; R8 is a group selected from C2-C4 alkoxy, (H3C-CH2O)-(CH2)2-O-, (H3C-CH2O)-(CH2)2-O-(CH2)2-O- and (H3C-CH2O)-(CH2)2-O-(CH2)2-O-(CH2)2-O-; R9 and R10 are each independently a hydrogen atom; or a stereoisomer, tautomer, hydrate, solvate or salt thereof, or a mixture thereof.
Description
TECHNICAL FIELD
The present disclosure relates to the technical field of radiology, in particular to the support of radiologists in radiological examinations using methods of artificial intelligence. The present invention is concerned with the training of a machine-learning model and the use of the trained model to predict a synthetic representation of an examination region of an examination object.

INTRODUCTION
WO2021/197996A1 discloses a method for generating synthetic radiological representations of an examination region of an examination object. On the basis of measured radiological images of an examination region that show blood vessels in the examination region with contrast intensity decreasing over time, the method generates synthetic radiological images of the examination region that show blood vessels with constant contrast intensity. US 10997716B2 discloses a method for diagnostic imaging with a reduced contrast agent dose. The method uses a deep-learning network trained with contrast-free and low-contrast images as input to the deep-learning network and full-contrast images as ground-truth reference images. The trained deep-learning network is then used to predict a synthetic full-contrast image from the acquired contrast-free and low-contrast images. EP3875979A1 discloses a method, a device, a system and a computer program product for determining an optimized point in time for starting the recording and acquisition of an MRI image after the administration of a contrast agent. The temporal tracking of processes within the body of a human or an animal by means of imaging methods plays an important role, inter alia, in the diagnosis and/or therapy of diseases.
One example is the detection and differential diagnosis of focal liver lesions by means of dynamic contrast-enhanced magnetic resonance imaging (MRI) with a hepatobiliary contrast agent. A hepatobiliary contrast agent such as Primovist® can be used to detect tumours in the liver. Healthy liver tissue is supplied with blood primarily via the portal vein (vena portae), whereas the hepatic artery (arteria hepatica) supplies most primary tumours. Accordingly, after intravenous bolus injection of a contrast agent, a time delay can be observed between the signal enhancement of the healthy liver parenchyma and that of the tumour. In addition to malignant tumours, benign lesions such as cysts, haemangiomas and focal nodular hyperplasia (FNH) are frequently found in the liver. For proper therapy planning, these must be differentiated from the malignant tumours. Primovist® can be used for the detection of benign and malignant focal liver lesions. By means of T1-weighted MRI, it provides information on the character of these lesions. The differentiation exploits the different blood supply of liver and tumour and the time course of the contrast enhancement. During the wash-in phase of the contrast enhancement achieved with Primovist®, typical perfusion patterns are observed that provide information for the characterization of the lesions. The depiction of the vascularization helps to characterize the lesion types and to determine the spatial relationship between tumour and blood vessels. In T1-weighted MRI images, Primovist® leads, 10-20 minutes after injection (in the hepatobiliary phase), to a marked signal enhancement in the healthy liver parenchyma, while lesions that contain no or only few hepatocytes, e.g. metastases or moderately to poorly differentiated hepatocellular carcinomas (HCCs), appear as darker areas.
Temporal tracking of the distribution of the contrast agent thus offers a good means of detecting and differentially diagnosing focal liver lesions; however, the examination extends over a comparatively long period of time. Over this period, movements of the patient should largely be avoided in order to minimize motion artefacts in the MRI images. The prolonged restriction of movement can be uncomfortable for the patient. The published application WO2021/052896A1 proposes not generating one or more MRI images during the hepatobiliary phase by measurement, but instead calculating (predicting) them on the basis of MRI images from one or more preceding phases, in order to shorten the patient's time in the MRI scanner. In the approach described in WO2021/052896A1, a machine-learning model is trained to, on the basis of MRI images of an examination region before and/or immediately after the administration of a contrast agent, predict an MRI