7T MRI Synthesization from 3T Acquisitions
- URL: http://arxiv.org/abs/2403.08979v2
- Date: Mon, 8 Jul 2024 21:07:33 GMT
- Title: 7T MRI Synthesization from 3T Acquisitions
- Authors: Qiming Cui, Duygu Tosun, Pratik Mukherjee, Reza Abbasi-Asl
- Abstract summary: Supervised deep learning techniques can be used to generate synthetic 7T MRIs from 3T MRI inputs.
In this paper, we introduce multiple novel 7T synthesization algorithms based on custom-designed variants of the V-Net convolutional neural network.
- Score: 1.1549572298362787
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised deep learning techniques can be used to generate synthetic 7T MRIs from 3T MRI inputs. This image enhancement process leverages the advantages of ultra-high-field MRI to improve the signal-to-noise and contrast-to-noise ratios of 3T acquisitions. In this paper, we introduce multiple novel 7T synthesization algorithms based on custom-designed variants of the V-Net convolutional neural network. We demonstrate that the V-Net based model has superior performance in enhancing both single-site and multi-site MRI datasets compared to the existing benchmark model. When trained on 3T-7T MRI pairs from 8 subjects with mild Traumatic Brain Injury (TBI), our model achieves state-of-the-art 7T synthesization performance. Compared to previous works, synthetic 7T images generated from our pipeline also display superior enhancement of pathological tissue. Additionally, we implement and test a data augmentation scheme for training models that are robust to variations in the input distribution. This allows synthetic 7T models to accommodate intra-scanner and inter-scanner variability in multisite datasets. On a harmonized dataset consisting of 18 3T-7T MRI pairs from two institutions, including both healthy subjects and those with mild TBI, our model maintains its performance and can generalize to 3T MRI inputs with lower resolution. Our findings demonstrate the promise of V-Net based models for MRI enhancement and offer a preliminary probe into improving the generalizability of synthetic 7T models with data augmentation.
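The data augmentation scheme described in the abstract, which trains models robust to intra-scanner and inter-scanner variation, can be illustrated with a minimal NumPy sketch. The function name and transform parameters below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def augment_3t_volume(vol, rng):
    """Toy augmentation of a 3T MRI volume (illustrative only).

    Simulates scanner variability via global intensity scaling,
    additive Gaussian noise, and occasional resolution reduction.
    """
    # Global intensity scaling (scanner gain differences)
    vol = vol * rng.uniform(0.9, 1.1)
    # Additive Gaussian noise (SNR variation across acquisitions)
    vol = vol + rng.normal(0.0, 0.02 * vol.std(), size=vol.shape)
    # Simulated lower-resolution input: downsample, then
    # nearest-neighbour upsample back to the original grid
    if rng.random() < 0.5:
        low = vol[::2, ::2, ::2]
        vol = np.repeat(np.repeat(np.repeat(low, 2, 0), 2, 1), 2, 2)
    return vol
```

Applied on the fly during training, transforms like these expose the network to a wider input distribution than the raw 3T scans alone, which is the general idea behind the robustness claim in the abstract.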
Related papers
- Triad: Vision Foundation Model for 3D Magnetic Resonance Imaging [3.7942449131350413]
We propose Triad, a vision foundation model for 3D MRI.
Triad adopts a widely used autoencoder architecture to learn robust representations from 131,170 3D MRI volumes.
We evaluate Triad across three tasks, namely, organ/tumor segmentation, organ/cancer classification, and medical image registration.
arXiv Detail & Related papers (2025-02-19T19:31:52Z)
- Distillation-Driven Diffusion Model for Multi-Scale MRI Super-Resolution: Make 1.5T MRI Great Again [8.193689534916988]
7T MRI provides significantly enhanced spatial resolution, enabling finer visualization of anatomical structures.
A Super-Resolution (SR) model is proposed to generate 7T-like MRI from standard 1.5T MRI scans.
The student model refines the 7T SR task in a stepwise manner, leveraging feature maps from the inference phase of the teacher model as guidance.
arXiv Detail & Related papers (2025-01-30T20:21:11Z)
- Residual Vision Transformer (ResViT) Based Self-Supervised Learning Model for Brain Tumor Classification [0.08192907805418585]
Self-supervised learning models provide data-efficient and remarkable solutions to limited dataset problems.
This paper introduces a generative SSL model for brain tumor classification in two stages.
The proposed model attains the highest accuracy, achieving 90.56% on the BraTS dataset with the T1 sequence, 98.53% on the Figshare dataset, and 98.47% on the Kaggle brain tumor dataset.
arXiv Detail & Related papers (2024-11-19T21:42:57Z)
- Guided Synthesis of Labeled Brain MRI Data Using Latent Diffusion Models for Segmentation of Enlarged Ventricles [0.4188114563181614]
Deep learning models in medical contexts face challenges like data scarcity, inhomogeneity, and privacy concerns.
This study focuses on improving ventricular segmentation in brain MRI images using synthetic data.
arXiv Detail & Related papers (2024-11-02T19:44:10Z)
- Reconstructing Retinal Visual Images from 3T fMRI Data Enhanced by Unsupervised Learning [2.1597860906272803]
We propose a novel framework that generates enhanced 3T fMRI data through an unsupervised Generative Adversarial Network (GAN).
In this paper, we demonstrate the reconstruction capabilities of the enhanced 3T fMRI data, highlighting its proficiency in generating superior input visual images.
arXiv Detail & Related papers (2024-04-07T23:31:37Z)
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z)
- Transferring Ultrahigh-Field Representations for Intensity-Guided Brain Segmentation of Low-Field Magnetic Resonance Imaging [51.92395928517429]
The use of 7T MRI is limited by its high cost and lower accessibility compared to low-field (LF) MRI.
This study proposes a deep-learning framework that fuses the input LF magnetic resonance feature representations with the inferred 7T-like feature representations for brain image segmentation tasks.
arXiv Detail & Related papers (2024-02-13T12:21:06Z)
- Spatial and Modal Optimal Transport for Fast Cross-Modal MRI Reconstruction [54.19448988321891]
We propose an end-to-end deep learning framework that utilizes T1-weighted images (T1WIs) as auxiliary modalities to expedite T2WIs' acquisitions.
We employ Optimal Transport (OT) to synthesize T2WIs by aligning T1WIs and performing cross-modal synthesis.
We prove that the reconstructed T2WIs and the synthetic T2WIs become closer on the T2 image manifold as iterations increase.
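As a rough intuition for OT-based intensity alignment: in one dimension, the optimal transport map between two distributions reduces to quantile (CDF) matching. The sketch below is a simplified stand-in for that idea, not the paper's actual spatial-and-modal OT algorithm:

```python
import numpy as np

def ot_histogram_match(source, target):
    """Map source intensities onto the target distribution via
    quantile (CDF) matching -- the 1-D optimal transport map.

    `source` and `target` are arrays of intensities; the result has
    the shape of `source` but the intensity distribution of `target`.
    """
    src_sorted = np.sort(source.ravel())
    # Rank of each source voxel within its own distribution
    ranks = np.searchsorted(src_sorted, source.ravel())
    ranks = ranks.clip(0, src_sorted.size - 1)
    quantiles = ranks / (src_sorted.size - 1)
    # Read off the corresponding quantiles of the target distribution
    mapped = np.quantile(np.sort(target.ravel()), quantiles)
    return mapped.reshape(source.shape)
```

Each source value is replaced by the target value at the same quantile, which is exactly the monotone (optimal) transport plan in 1-D; higher-dimensional OT, as used in the paper, requires solving a genuine transport problem instead.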
arXiv Detail & Related papers (2023-05-04T12:20:51Z)
- From Sound Representation to Model Robustness [82.21746840893658]
We investigate the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
Averaged over various experiments on three environmental sound datasets, we found the ResNet-18 model outperforms other deep learning architectures.
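A minimal example of the kind of spectrogram front-end such studies compare, using a hand-rolled STFT in NumPy; the window, FFT size, and hop length below are generic choices, not the paper's settings:

```python
import numpy as np

def log_spectrogram(signal, n_fft=256, hop=128):
    """Magnitude log-spectrogram of a 1-D signal via a simple STFT.

    Returns an array of shape (n_fft // 2 + 1, num_frames), i.e.
    frequency bins by time frames, as fed to a CNN classifier.
    """
    window = np.hanning(n_fft)
    # Slide a Hann-windowed frame across the signal with the given hop
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    # Real FFT of each frame; keep log-compressed magnitudes
    stft = np.fft.rfft(np.stack(frames), axis=1)
    return np.log1p(np.abs(stft)).T
```

The resulting 2-D array can be treated as a single-channel image, which is how spectrogram inputs reach architectures like the ResNet-18 mentioned above.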
arXiv Detail & Related papers (2020-07-27T17:30:49Z)
- Towards a Competitive End-to-End Speech Recognition for CHiME-6 Dinner Party Transcription [73.66530509749305]
In this paper, we argue that, even in difficult cases, some end-to-end approaches show performance close to the hybrid baseline.
We experimentally compare and analyze CTC-Attention versus RNN-Transducer approaches along with RNN versus Transformer architectures.
Our best end-to-end model, based on RNN-Transducer together with improved beam search, comes within 3.8% absolute WER of the LF-MMI TDNN-F CHiME-6 Challenge baseline.
arXiv Detail & Related papers (2020-04-22T19:08:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.