Contrast-enhanced MRI Synthesis Using 3D High-Resolution ConvNets
- URL: http://arxiv.org/abs/2104.01592v1
- Date: Sun, 4 Apr 2021 11:54:15 GMT
- Title: Contrast-enhanced MRI Synthesis Using 3D High-Resolution ConvNets
- Authors: Chao Chen, Catalina Raymond, Bill Speier, Xinyu Jin, Timothy F.
Cloughesy, Dieter Enzmann, Benjamin M. Ellingson, Corey W. Arnold
- Abstract summary: Gadolinium-based contrast agents (GBCAs) have been widely used to better visualize disease in brain magnetic resonance imaging (MRI).
For brain tumor patients, the standard of care includes repeated MRI with gadolinium-based contrast for disease monitoring, increasing the risk of gadolinium deposition.
We present a deep learning-based approach for contrast-enhanced T1 synthesis in brain tumor patients.
- Score: 7.892005877717236
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gadolinium-based contrast agents (GBCAs) have been widely used to better
visualize disease in brain magnetic resonance imaging (MRI). However,
gadolinium deposition within the brain and body has raised safety concerns
about the use of GBCAs. Therefore, the development of novel approaches that can
decrease or even eliminate GBCA exposure while providing similar contrast
information would be of significant use clinically. For brain tumor patients,
the standard of care includes repeated MRI with gadolinium-based contrast for
disease monitoring, increasing the risk of gadolinium deposition. In this work,
we present a deep learning-based approach for contrast-enhanced T1 synthesis in
brain tumor patients. A 3D high-resolution fully convolutional network (FCN),
which maintains high resolution information through processing and aggregates
multi-scale information in parallel, is designed to map pre-contrast MRI
sequences to contrast-enhanced MRI sequences. Specifically, three pre-contrast
MRI sequences, T1, T2 and apparent diffusion coefficient map (ADC), are
utilized as inputs and the post-contrast T1 sequences are utilized as target
output. To alleviate the data imbalance between normal tissue and tumor
regions, we introduce a local loss that increases the contribution of tumor
regions, leading to better enhancement results on tumors. Extensive
quantitative and visual assessments are performed, with our proposed model
achieving a PSNR of 28.24 dB in the brain and 21.2 dB in tumor regions. Our
results suggest the potential of substituting GBCAs with synthetic contrast
images generated via deep learning.
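No code accompanies this abstract, so below is a minimal PyTorch sketch of the described setup, under stated assumptions: the channel width, block count, normalization choice, and the loss weight `lam` are illustrative, and the names `HighResFCN3D`, `synthesis_loss`, and `psnr` are hypothetical. Only the input/output contract (T1, T2, ADC in; contrast-enhanced T1 out), the idea of processing at full resolution while aggregating multi-scale information in parallel, and the global-plus-local loss come from the abstract.

```python
# Hedged sketch of the abstract's setup; not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HighResBlock3D(nn.Module):
    """Residual 3D conv block that keeps full spatial resolution (stride 1)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return x + self.conv(x)

class HighResFCN3D(nn.Module):
    """Maps three pre-contrast inputs (T1, T2, ADC) to one synthetic CE-T1.
    A full-resolution stream runs in parallel with a downsampled stream,
    and the two are fused so multi-scale context is aggregated without
    losing high-resolution detail."""
    def __init__(self, width=32, n_blocks=4):  # width/depth are assumptions
        super().__init__()
        self.stem = nn.Conv3d(3, width, kernel_size=3, padding=1)
        self.high = nn.Sequential(*[HighResBlock3D(width) for _ in range(n_blocks)])
        self.low = nn.Sequential(*[HighResBlock3D(width) for _ in range(n_blocks)])
        self.head = nn.Conv3d(2 * width, 1, kernel_size=1)

    def forward(self, x):  # x: (B, 3, D, H, W); channels = T1, T2, ADC
        f = self.stem(x)
        hi = self.high(f)                              # full resolution
        lo = self.low(F.avg_pool3d(f, kernel_size=2))  # half resolution
        lo = F.interpolate(lo, size=hi.shape[2:], mode="trilinear",
                           align_corners=False)        # back to full size
        return self.head(torch.cat([hi, lo], dim=1))

def synthesis_loss(pred, target, tumor_mask, lam=1.0):
    """Global L1 over the whole volume plus a local L1 restricted to a binary
    float tumor mask, so small tumor regions are not swamped by normal tissue."""
    global_l1 = F.l1_loss(pred, target)
    masked = (pred - target).abs() * tumor_mask
    local_l1 = masked.sum() / tumor_mask.sum().clamp(min=1.0)
    return global_l1 + lam * local_l1

def psnr(pred, target, data_range=1.0):
    """PSNR in dB, the metric the abstract reports (28.24 dB brain, 21.2 dB tumor)."""
    mse = F.mse_loss(pred, target)
    return 10.0 * torch.log10(data_range ** 2 / mse)
```

The parallel full- and half-resolution streams are a deliberately simplified stand-in for the paper's multi-scale aggregation; a faithful reproduction would follow the architecture described in the paper itself.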
Related papers
- Two-Stage Approach for Brain MR Image Synthesis: 2D Image Synthesis and 3D Refinement [1.5683566370372715]
It is crucial to synthesize the missing MR images that reflect the unique characteristics of the absent modality with precise tumor representation.
We propose a two-stage approach that first synthesizes MR images from 2D slices using a novel intensity encoding method and then refines the synthesized MRI.
arXiv Detail & Related papers (2024-10-14T08:21:08Z)
- Conditional Generative Models for Contrast-Enhanced Synthesis of T1w and T1 Maps in Brain MRI [1.6124737486286778]
We study the potential of generative models, more precisely conditional diffusion and flow matching, for virtual enhancement.
We examine the performance of T1 scans from quantitative MRI versus T1-weighted scans.
Across models, we observe better segmentations with T1 scans than with T1-weighted scans.
arXiv Detail & Related papers (2024-10-11T15:11:24Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are very common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- Pre- to Post-Contrast Breast MRI Synthesis for Enhanced Tumour Segmentation [0.9722528000969453]
This study explores the feasibility of producing synthetic contrast enhancements by translating pre-contrast T1-weighted fat-saturated breast MRI to the corresponding first DCE-MRI sequence using a generative adversarial network (GAN).
We assess the generated DCE-MRI data using quantitative image quality metrics and apply them to the downstream task of 3D breast tumour segmentation.
Our results highlight the potential of post-contrast DCE-MRI synthesis in enhancing the robustness of breast tumour segmentation models via data augmentation.
arXiv Detail & Related papers (2023-11-17T21:48:41Z)
- Synthesis of Contrast-Enhanced Breast MRI Using Multi-b-Value DWI-based Hierarchical Fusion Network with Attention Mechanism [15.453470023481932]
Contrast-enhanced MRI (CE-MRI) provides superior differentiation between tumors and invaded healthy tissue.
The use of gadolinium-based contrast agents (GBCA) to obtain CE-MRI may be associated with nephrogenic systemic fibrosis and may lead to bioaccumulation in the brain.
To reduce the use of contrast agents, diffusion-weighted imaging (DWI) is emerging as a key imaging technique.
arXiv Detail & Related papers (2023-07-03T09:46:12Z)
- Faithful Synthesis of Low-dose Contrast-enhanced Brain MRI Scans using Noise-preserving Conditional GANs [102.47542231659521]
Gadolinium-based contrast agents (GBCA) are indispensable in Magnetic Resonance Imaging (MRI) for diagnosing various diseases.
GBCAs are expensive and may accumulate in patients with potential side effects.
It is unclear to which extent the GBCA dose can be reduced while preserving the diagnostic value.
arXiv Detail & Related papers (2023-06-26T13:19:37Z)
- View-Disentangled Transformer for Brain Lesion Detection [50.4918615815066]
We propose a novel view-disentangled transformer to enhance the extraction of MRI features for more accurate tumour detection.
First, the proposed transformer harvests long-range correlation among different positions in a 3D brain scan.
Second, the transformer models a stack of slice features as multiple 2D views and enhances these features view by view.
Third, we deploy the proposed transformer module in a transformer backbone, which can effectively detect the 2D regions surrounding brain lesions.
arXiv Detail & Related papers (2022-09-20T11:58:23Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative algorithm into this model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and clinically significant task for rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z)
- Scale-Space Autoencoders for Unsupervised Anomaly Segmentation in Brain MRI [47.26574993639482]
We show improved anomaly segmentation performance and the general capability to obtain much more crisp reconstructions of input data at native resolution.
The modeling of the Laplacian pyramid further enables the delineation and aggregation of lesions at multiple scales (a generic pyramid sketch follows this list).
arXiv Detail & Related papers (2020-06-23T09:20:42Z)
- High Tissue Contrast MRI Synthesis Using Multi-Stage Attention-GAN for Glioma Segmentation [25.408175460840802]
This paper demonstrates the potential benefits of image-to-image translation techniques to generate synthetic high tissue contrast (HTC) images.
We adopt a new cycle generative adversarial network (CycleGAN) with an attention mechanism to increase the contrast within underlying tissues.
We show the application of our method for synthesizing HTC images on brain MR scans, including glioma tumors.
arXiv Detail & Related papers (2020-06-09T03:21:30Z)
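For readers unfamiliar with the scale-space idea behind the Scale-Space Autoencoders entry above, here is a generic Laplacian pyramid sketch in PyTorch. It is not that paper's implementation: the function names and the pooling/interpolation choices are assumptions for illustration. In an anomaly-segmentation setting, an autoencoder would reconstruct each band, and per-band residuals would be aggregated into an anomaly map.

```python
# Generic Laplacian pyramid sketch; illustrative only, not the cited paper's code.
import torch
import torch.nn.functional as F

def laplacian_pyramid(x, levels=3):
    """Decompose an image batch (B, C, H, W) into band-pass detail levels
    plus a low-resolution residual."""
    bands = []
    current = x
    for _ in range(levels):
        down = F.avg_pool2d(current, kernel_size=2)
        up = F.interpolate(down, size=current.shape[2:], mode="bilinear",
                           align_corners=False)
        bands.append(current - up)   # band-pass detail at this scale
        current = down
    bands.append(current)            # low-pass residual
    return bands

def reconstruct(bands):
    """Invert the pyramid: upsample the residual and add details back in.
    Exact, because the same upsampling is used in both directions."""
    current = bands[-1]
    for band in reversed(bands[:-1]):
        current = F.interpolate(current, size=band.shape[2:], mode="bilinear",
                                align_corners=False) + band
    return current
```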