US-20260127741-A1 - System and Method for Generating Synthetic SWI obtained from T1 Weighted MRI Scans for Characterizing Brain Disease
Abstract
A system for generating a simulated SWI image of a human brain based upon a single non-contrast (SNC) magnetic resonance (MR) image thereof, comprising: an input image module storing an SNC MR image; a pre-processing module converting the SNC MR image into a standard format from which an AI model can extract and classify features; a simulated SWI-generating model compartment receiving the SNC MR image and generating a corresponding simulated SWI image; a deep learning platform operating the AI model; a training module receiving and communicating training data to the deep learning platform, whereby the AI model may be adjusted to optimize generation of the simulated SWI image; a testing module communicating with the training module and the deep learning platform to receive testing data, adapted to validate the simulated SWI image against pre-trained performance criteria; and an output storage compartment receiving and storing the simulated SWI image.
Inventors
- Kannie CHAN
- Abdul-Mojeed Olabisi ILYAS
- Jianpan HUANG
- Jamal Firmat BANZI
Assignees
- Hong Kong Centre for Cerebro-Cardiovascular Health Engineering Limited
Dates
- Publication Date
- 20260507
- Application Date
- 20250403
Claims (20)
- 1 . A system for generating a simulated SWI image of a human brain based upon a single non-contrast (SNC) magnetic resonance (MR) image of the human brain, the system comprising: an input image module to store the SNC MR images; a pre-processing module for receiving the SNC MR image, wherein the pre-processing module is adapted to prepare and convert the SNC MR image into a standard format for an artificial intelligence (AI) model to extract and classify features of the SNC MR image; a simulated SWI-generating model compartment for receiving the SNC MR images in the standard format and generating a simulated SWI image corresponding to each SNC image; a deep learning platform for operating the AI model, wherein the AI model is a connected neural network; a training module for receiving and communicating training data to the deep learning platform, whereby tunable parameters of the AI model may be adjusted to optimize generation of the simulated SWI image; a testing module for communicating with the training module and the deep learning platform to receive testing data, wherein the testing module is adapted to validate the simulated SWI image against pre-trained performance criteria; and an output storage compartment for receiving and storing the simulated SWI image.
- 2 . A method for generating simulated SWI images using an acquired single-contrast MR image (T1-w MR image) by a system having at least a processor and a memory therein to execute instructions of an artificial intelligence engine configured with a UNet model stored within the memory of the system; wherein the UNet model comprises: an encoder having a plurality of layer blocks, each of the layer blocks of the encoder comprising one or more convolutional layers, each of the convolutional layers associated with an activation layer, and a down-sampling layer; a decoder having a plurality of layer blocks, each of the layer blocks of the decoder comprising one up-sampling layer and one or more convolutional layers, each of the convolutional layers associated with an activation layer; and a skip connection for associating one of the layer blocks of the encoder with one of the layer blocks of the decoder at a corresponding multiscale resolution level; wherein the encoder is adapted to extract features from the T1-w MR image for the decoder to combine outputs from the encoder and extracted image features at multiscale resolution levels through the skip connection to generate the simulated SWI images.
- 3 . The method of claim 2 , wherein the encoder and the decoder are adapted to perform cross-sequence translation from a T1-w image to an SWI image, the translation consisting of 19 convolutional layers.
- 4 . The method of claim 3 , wherein the encoder is adapted to receive images comprising three dimensions and one or more color channels, wherein one or more layer blocks of the encoder comprise a repeated implementation of two 3×3 convolution layers with a stride of 2 voxels over five layer blocks, and wherein a layer block of the encoder that immediately precedes the decoder comprises a single convolution layer.
- 5 . The method of claim 4 , wherein the activation layer is adapted to conduct a linear rectification function by one or more rectified linear units (ReLU).
- 6 . The method of claim 5 , wherein the down-sampling layer comprises a 2×2×2 max-pooling operation with a stride of 2 voxels, wherein each of the convolutional layers is adapted to process input data with a number of convolutional filters.
- 7 . The method of claim 6 , wherein the max-pooling operation after an activation layer reduces a spatial size of an image feature map by a factor of 2, and the number of convolutional filters doubles, from 16 in a first block to 1024 in a last block, such that the UNet model is permitted to learn hierarchical relationships over a sizeable receptive field of the SNC MR image.
- 8 . The method of claim 7 , wherein the up-sampling layer of the decoder is adapted to perform nearest-neighbor interpolation to increase image size by a factor of 2 through each layer block within the decoder.
- 9 . The method of claim 8 , wherein one or more convolution layers within the decoder use random initialization and unequal kernel sizes.
- 10 . The method of claim 9 , wherein the skip connection is adapted to copy and concatenate features generated from one of the layer blocks of the encoder to one of the layer blocks of the decoder at a corresponding multiscale resolution level, such that both high- and low-level features from the encoder are utilized as additional inputs in the decoder to provide effective and stable image representation.
- 11 . The method of claim 10 , wherein the output layer comprises a single output convolutional layer followed by an output activation layer, wherein the single output convolutional layer is a 1×1 convolutional layer with a stride of 1, and the output activation layer is adapted to conduct hyperbolic tangent (tanh) operations.
- 12 . The method of claim 11 , wherein the system further comprises a diagnostic model adapted to classify an abnormality in the simulated SWI images for characterization of a brain disease.
- 13 . A method for generating simulated SWI images of a human brain based upon SNC MR images of the human brain without injection of a contrast agent into the body, the method comprising the steps of: collecting SNC images; inputting the SNC images into a training compartment; collecting a corresponding SWI image for each subject in the SNC images; storing the SWI images in the training compartment; inputting the SNC images and the corresponding SWI images into an AI model; training the AI model to generate a simulated SWI image based upon the SNC images input into the AI model and the corresponding SWI images as target outputs; and testing the simulated SWI images against the corresponding SWI images previously input into the AI model and optimizing the AI model input.
- 14 . The method of claim 13 , wherein the SNC MR image is a T1-weighted image.
- 15 . The method of claim 14 , wherein the SNC MR image is acquired from an MRI scanner of any make, including GE, Siemens, and Philips scanners.
- 16 . The method of claim 15 , wherein the SNC MR image can be acquired from an MRI scanner of any field strength, including 1.5 T, 3 T, and 7 T.
- 17 . The method of claim 16 , wherein the training step for the AI model to generate the simulated SWI images comprises the step of applying deep learning techniques.
- 18 . The method of claim 17 , wherein the steps of testing the simulated SWI images against the corresponding SWI images previously input into the AI model and optimizing the AI model input comprise the step of applying the deep learning techniques.
- 19 . The method of claim 18 , further comprising the steps of: acquiring an SNC MR image of a person on an MRI scanner; registering the SNC MR images to a standard non-contrast image template; transferring the registered SNC MR images to a storage compartment; inputting the SNC MR images into a trained AI model; generating simulated SWI images corresponding to the SNC MR input images; and viewing the images using pre-existing software.
- 20 . The method of claim 19 , wherein the step of training the AI model utilizes an Adam stochastic optimization algorithm with a learning rate of 0.002, applied to minimize a mean-squared error (MSE) loss function in a stepwise fashion, updating progressively at every training step until the AI model reaches convergence.
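
The encoder/decoder geometry recited in claims 4 through 11 can be sketched numerically. The snippet below is an illustration only, not the claimed implementation: the input volume size, the toy arrays, and the seven-block count are assumptions introduced for the example (note that doubling from 16 filters reaches 1024 only after six doublings, so the sketch iterates seven blocks rather than the five recited in claim 4).

```python
import numpy as np

# Encoder schedule (claims 6-7): each block doubles the filter count while a
# 2x2x2 max-pool with stride 2 halves the spatial size of the feature map.
def encoder_schedule(base_filters=16, top_filters=1024, input_size=128):
    blocks, filters, size = [], base_filters, input_size
    while filters <= top_filters:
        blocks.append({"filters": filters, "spatial_size": size})
        filters *= 2
        size //= 2
    return blocks

# Decoder up-sampling (claim 8): nearest-neighbor interpolation doubles the
# size of a 3-D feature volume along every axis.
def nearest_neighbor_upsample(volume):
    return volume.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)

# Output layer (claim 11): a 1x1 convolution with stride 1 is a weighted sum
# over channels at each voxel; tanh bounds the simulated SWI output to (-1, 1).
def output_layer(features, weights):
    # features: (C, D, H, W); weights: (C,)
    return np.tanh(np.tensordot(weights, features, axes=1))

if __name__ == "__main__":
    sched = encoder_schedule()
    print([b["filters"] for b in sched])         # 16, 32, ..., 1024
    print(nearest_neighbor_upsample(np.zeros((4, 4, 4))).shape)  # (8, 8, 8)
```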
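
The optimizer of claim 20 can likewise be illustrated. This is a minimal, generic Adam update minimizing an MSE loss at the claimed learning rate of 0.002; the one-parameter toy model and the synthetic data are assumptions made purely for illustration, not the patented training procedure.

```python
import numpy as np

# One Adam update step (claim 20): bias-corrected first and second moment
# estimates scale the gradient; the learning rate 0.002 comes from the claim.
def adam_step(w, grad, state, lr=0.002, b1=0.9, b2=0.999, eps=1e-8):
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

def mse(pred, target):
    return np.mean((pred - target) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.standard_normal(64)   # stand-in for a real SWI slice
    x = rng.standard_normal(64)        # stand-in for a T1-w input
    w = np.zeros(1)                    # single tunable parameter
    state = {"t": 0, "m": np.zeros(1), "v": np.zeros(1)}
    losses = []
    for _ in range(500):
        pred = w * x
        grad = np.array([np.mean(2 * (pred - target) * x)])
        w = adam_step(w, grad, state)
        losses.append(mse(pred, target))
    # the MSE loss is reduced stepwise across training iterations
    assert losses[-1] < losses[0]
```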
Description
TECHNICAL FIELD
This invention relates to a method and system for generating synthetic SWI obtained from T1-weighted MRI scans for characterizing brain diseases, and in particular Parkinson's-related disease.
BACKGROUND
Magnetic resonance imaging (MRI) is a non-invasive technique used to visualize internal body structures, leveraging the relaxation properties of water protons in a magnetic field. MRI permits safe repeated scans with no known harm when used within well-defined technical constraints. MRI images can be created with contrast reflecting proton density, T1 and T2 relaxation times, tissue susceptibility variations, diffusion, temperature, fields of motion, biomechanical properties, tissue perfusion, electrical currents, oxygen levels, spectra of key biochemical species, etc. Among these sequences, T1-weighted imaging (T1-w) is widely employed in clinical settings to identify morphological changes in the brain, but it is unable to reveal specific pathologies that need a more nuanced imaging technique. Susceptibility weighted imaging (SWI) uses magnitude and filtered-phase information, both separately and in combination with each other, to create new sources of endogenic contrast enhancement. SWI shows high sensitivity for deoxygenated blood, hemosiderin, ferritin, and calcium. This makes SWI valuable for diagnosing neurological disorders such as ageing, multiple sclerosis, stroke, cerebral hyperperfusion, traumatic brain injury, cerebral vascular malformations, intracranial artery stenosis and moyamoya disease, cerebral microbleeds, primary central nervous system vasculitis, mycotic aneurysm, brain tumour, and neurodegenerative disorders, such as Alzheimer's and Parkinson's diseases. SWI plays a crucial role in detecting Parkinson's disease through the visualization of the swallow tail sign (STS). This sign is attributed to the presence of Nigrosome-1, the largest cluster of dopaminergic neurons, located in the dorsolateral substantia nigra.
However, the acquisition of SWI has been reported as time-consuming, with susceptibility artifacts potentially causing significant signal loss and reducing diagnostic accuracy. While technological advancements have helped streamline the SWI acquisition process, these artifacts remain a challenge, leading to substantial signal cancellation and a loss of anatomical detail. Additionally, SWI can be complex, often extending diagnostic procedures and introducing artifacts that may further compromise accuracy. Consequently, its availability may be limited compared to routine T1-weighted (T1w) MRI scans. Recent advancements in machine learning, particularly the emergence of convolutional neural networks (CNNs), have shown significant potential in various medical image analysis tasks, including lesion detection, brain tumour segmentation, medical image super resolution, intra- and inter-modality medical image synthesis, and automatic vessel extraction methods. For example, a Convolutional Neural Network (CNN)-based automated diagnostic system was employed to classify Parkinson's disease (PD) and healthy controls (HC). The Parkinson's Progression Markers Initiative (PPMI) was used as benchmark T2-weighted Magnetic Resonance Imaging (MRI) data for both PD and HC. Mid-brain slices from 500 T2-weighted MRI scans were selected and aligned using an image registration technique. The performance of the proposed method was evaluated based on accuracy, sensitivity, specificity, and the Area Under the Curve (AUC). However, diagnosing the disease in its early stages can be challenging. The largest cluster of dopaminergic neurons, located in the dorsolateral substantia nigra (SN), specifically Nigrosome-1, is particularly affected in PD. On 3D susceptibility-weighted imaging (SWI) sequences, this region appears as a hyperintense structure against the otherwise hypointense SN, creating the characteristic “swallow tail sign” (STS). 
Hence, researchers have explored other image-to-image translation approaches to generate synthesized PD, T2, T2-FLAIR, and MRA scans from different single-image or multi-input modalities. Some studies have utilized computationally expensive generative adversarial networks (GANs) in 2D and 3D modes, while others have employed different variants of the U-Net, a deep-learning architecture. Some approaches utilize multi-input modalities. However, there is limited availability of validation data from diseased patients, and extended time is required to acquire the necessary input images. In contrast, single-input modalities, such as T1-weighted (T1w) imaging, remain the preferred option due to their consistent availability in public MRI datasets. Given the clinical significance of SWI and the time-sensitive nature of diagnosing neurodegenerative diseases, it is essential to minimize both practical and clinical delays to establish an efficient SWI acquisition pathway. Optimizing the diagnostic process enables healthcare professionals to ensure timely and accurate differen