Converting T1-weighted MRI from 3T to 7T quality using deep learning
- URL: http://arxiv.org/abs/2507.13782v1
- Date: Fri, 18 Jul 2025 09:54:59 GMT
- Title: Converting T1-weighted MRI from 3T to 7T quality using deep learning
- Authors: Malo Gicquel, Ruoyi Zhao, Anika Wuestefeld, Nicola Spotorno, Olof Strandberg, Kalle Åström, Yu Xiao, Laura EM Wisse, Danielle van Westen, Rik Ossenkoppele, Niklas Mattsson-Carlgren, David Berron, Oskar Hansson, Gabrielle Flood, Jacob Vogel
- Abstract summary: Ultra-high resolution 7 tesla (7T) magnetic resonance imaging (MRI) provides detailed anatomical views. We present an advanced deep learning model for synthesizing 7T brain MRI from 3T brain MRI. Our models outperformed two additional state-of-the-art 3T-to-7T models in image-based evaluation metrics.
- Score: 7.220190703291239
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultra-high resolution 7 tesla (7T) magnetic resonance imaging (MRI) provides detailed anatomical views, offering better signal-to-noise ratio, resolution and tissue contrast than 3T MRI, though at the cost of accessibility. We present an advanced deep learning model for synthesizing 7T brain MRI from 3T brain MRI. Paired 7T and 3T T1-weighted images were acquired from 172 participants (124 cognitively unimpaired, 48 impaired) from the Swedish BioFINDER-2 study. To synthesize 7T MRI from 3T images, we trained two models: a specialized U-Net and a U-Net integrated with a generative adversarial network (GAN U-Net). Our models outperformed two additional state-of-the-art 3T-to-7T models in image-based evaluation metrics. Four blinded MRI professionals judged our synthetic 7T images as comparable in detail to real 7T images, and as superior in subjective visual quality to the real 7T images, apparently due to a reduction of artifacts. Importantly, automated segmentations of the amygdalae in synthetic GAN U-Net 7T images were more similar to manually segmented amygdalae (n=20) than automated segmentations from the 3T images that were used to synthesize the 7T images. Finally, synthetic 7T images showed similar performance to real 3T images in downstream prediction of cognitive status using MRI derivatives (n=3,168). In all, we show that synthetic T1-weighted brain images approaching 7T quality can be generated from 3T images, which may improve image quality and segmentation without compromising performance in downstream tasks. Future directions, possible clinical use cases, and limitations are discussed.
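The abstract does not spell out architectural details, so the following is a minimal sketch of what a "GAN U-Net" for paired 3T-to-7T synthesis could look like in PyTorch: a small 3D U-Net generator trained against a 3D patch discriminator with a combined L1 and adversarial objective. The class names, network depth, channel counts, and the loss weight `lam` are illustrative assumptions, not the authors' reported configuration.

```python
# Minimal sketch of a "GAN U-Net" for 3T-to-7T T1-weighted synthesis (PyTorch).
# Depth, channel counts, and loss weights are illustrative assumptions,
# not the configuration reported in the paper.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3x3 convolutions with instance norm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1),
        nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class UNet3DGenerator(nn.Module):
    """3D U-Net mapping a 3T T1-weighted volume to a synthetic 7T volume."""
    def __init__(self, base=16):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.bott = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

class PatchDiscriminator(nn.Module):
    """3D patch discriminator judging local realism of (synthetic) 7T volumes."""
    def __init__(self, base=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(base * 2, 1, 4, padding=1),  # per-patch real/fake logits
        )

    def forward(self, x):
        return self.net(x)

def training_step(gen, disc, g_opt, d_opt, x3t, y7t, lam=100.0):
    """One adversarial step: voxel-wise L1 plus a GAN loss (weighting is an assumption)."""
    bce = nn.BCEWithLogitsLoss()
    fake = gen(x3t)

    # Discriminator: real 7T patches -> 1, synthetic 7T patches -> 0.
    d_opt.zero_grad()
    d_real = disc(y7t)
    d_fake = disc(fake.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    d_opt.step()

    # Generator: fool the discriminator while staying close to the real 7T target.
    g_opt.zero_grad()
    d_fake = disc(fake)
    g_loss = bce(d_fake, torch.ones_like(d_fake)) + lam * nn.functional.l1_loss(fake, y7t)
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

In practice, the paired 3T and 7T volumes would presumably need co-registration and intensity normalization before such a model is trained; the sketch omits data handling and the evaluation steps described above (image-based metrics, blinded ratings, and segmentation comparisons).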
Related papers
- Schrödinger Diffusion Driven Signal Recovery in 3T BOLD fMRI Using Unmatched 7T Observations [1.8091533096543726]
We introduce a new computational approach designed to enhance the quality of 3T BOLD fMRI acquisitions. We employ a lightweight, unsupervised Schrödinger Bridge framework to infer a high-SNR, high-resolution counterpart of the 3T data. Our findings suggest that it is feasible to computationally approximate 7T-level quality from standard 3T acquisitions.
arXiv Detail & Related papers (2025-04-01T17:41:24Z)
- Towards General Text-guided Image Synthesis for Customized Multimodal Brain MRI Generation [51.28453192441364]
Multimodal brain magnetic resonance (MR) imaging is indispensable in neuroscience and neurology.
Current MR image synthesis approaches are typically trained on independent datasets for specific tasks.
We present TUMSyn, a Text-guided Universal MR image Synthesis model, which can flexibly generate brain MR images.
arXiv Detail & Related papers (2024-09-25T11:14:47Z)
- Reconstructing Retinal Visual Images from 3T fMRI Data Enhanced by Unsupervised Learning [2.1597860906272803]
We propose a novel framework that generates enhanced 3T fMRI data through an unsupervised Generative Adversarial Network (GAN).
In this paper, we demonstrate the reconstruction capabilities of the enhanced 3T fMRI data, highlighting its proficiency in generating superior input visual images.
arXiv Detail & Related papers (2024-04-07T23:31:37Z)
- 7T MRI Synthesization from 3T Acquisitions [1.1549572298362787]
Supervised deep learning techniques can be used to generate synthetic 7T MRIs from 3T MRI inputs.
In this paper, we introduce multiple novel 7T synthesization algorithms based on custom-designed variants of the V-Net convolutional neural network.
arXiv Detail & Related papers (2024-03-13T22:06:44Z)
- wmh_seg: Transformer based U-Net for Robust and Automatic White Matter Hyperintensity Segmentation across 1.5T, 3T and 7T [1.583327010995414]
White matter hyperintensity (WMH) remains the top imaging biomarker for neurodegenerative diseases.
Recent deep learning models exhibit promise in WMH segmentation but still face challenges.
We introduce wmh_seg, a novel deep learning model leveraging a transformer-based encoder from SegFormer.
arXiv Detail & Related papers (2024-02-20T03:57:16Z)
- Transferring Ultrahigh-Field Representations for Intensity-Guided Brain Segmentation of Low-Field Magnetic Resonance Imaging [51.92395928517429]
The use of 7T MRI is limited by its high cost and lower accessibility compared to low-field (LF) MRI.
This study proposes a deep-learning framework that fuses the input LF magnetic resonance feature representations with the inferred 7T-like feature representations for brain image segmentation tasks.
arXiv Detail & Related papers (2024-02-13T12:21:06Z)
- Spatial and Modal Optimal Transport for Fast Cross-Modal MRI Reconstruction [54.19448988321891]
We propose an end-to-end deep learning framework that utilizes T1-weighted images (T1WIs) as an auxiliary modality to expedite T2WI acquisitions.
We employ Optimal Transport (OT) to synthesize T2WIs by aligning T1WIs and performing cross-modal synthesis (a generic Sinkhorn sketch of this kind of alignment follows the related-papers list below).
We prove that the reconstructed T2WIs and the synthetic T2WIs become closer on the T2 image manifold as iterations increase.
arXiv Detail & Related papers (2023-05-04T12:20:51Z)
- Video4MRI: An Empirical Study on Brain Magnetic Resonance Image Analytics with CNN-based Video Classification Frameworks [60.42012344842292]
3D CNN-based models dominate the field of magnetic resonance image (MRI) analytics.
In this paper, four datasets of Alzheimer's and Parkinson's disease recognition are utilized in experiments.
In terms of efficiency, the video framework outperforms 3D-CNN models by 5%-11% while using 50%-66% fewer trainable parameters.
arXiv Detail & Related papers (2023-02-24T15:26:31Z)
- Motion correction in MRI using deep learning and a novel hybrid loss function [11.424624100447332]
A deep learning method (MC-Net) was developed to suppress motion artifacts in brain magnetic resonance imaging (MRI).
MC-Net was derived from a UNet combined with a two-stage multi-loss function.
arXiv Detail & Related papers (2022-10-19T14:40:41Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches according to the gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
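The Spatial and Modal Optimal Transport entry above rests on aligning T1-derived and T2-derived representations via optimal transport. As referenced there, the following is a generic entropic OT (Sinkhorn) sketch that illustrates the alignment idea only; it is not that paper's algorithm, and the feature dimensionality, cost function, and regularization strength are assumptions.

```python
# Generic Sinkhorn sketch of entropic optimal transport between two feature sets,
# illustrating the "alignment" idea behind OT-based cross-modal synthesis.
# Cost choice, epsilon, and feature shapes are illustrative assumptions.
import torch

def sinkhorn(x_feats, y_feats, epsilon=0.05, n_iters=200):
    """Return an (n, m) transport plan coupling source and target features.

    x_feats: (n, d) features from the source modality (e.g. T1-weighted patches)
    y_feats: (m, d) features from the target modality (e.g. T2-weighted patches)
    """
    # Squared Euclidean cost between every pair of feature vectors.
    cost = torch.cdist(x_feats, y_feats, p=2) ** 2
    cost = cost / cost.max()  # normalize so epsilon is scale-free

    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n)   # uniform source marginal
    nu = torch.full((m,), 1.0 / m)   # uniform target marginal

    K = torch.exp(-cost / epsilon)   # Gibbs kernel
    u = torch.ones(n)
    v = torch.ones(m)
    for _ in range(n_iters):
        # Alternating marginal projections (Sinkhorn-Knopp iterations).
        u = mu / (K @ v)
        v = nu / (K.T @ u)
    return torch.diag(u) @ K @ torch.diag(v)

if __name__ == "__main__":
    torch.manual_seed(0)
    t1 = torch.randn(64, 32)   # hypothetical T1-derived patch features
    t2 = torch.randn(48, 32)   # hypothetical T2-derived patch features
    plan = sinkhorn(t1, t2)
    # Rows of the plan give soft correspondences from T1 patches to T2 patches;
    # a barycentric projection is one simple way to map features across modalities.
    mapped = (plan / plan.sum(dim=1, keepdim=True)) @ t2
    print(plan.shape, mapped.shape)  # torch.Size([64, 48]) torch.Size([64, 32])
```

Entropic regularization keeps the iterations simple and differentiable, which is why Sinkhorn-style solvers are the usual choice when OT is embedded inside a deep learning pipeline.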