7T MRI Synthesization from 3T Acquisitions
- URL: http://arxiv.org/abs/2403.08979v2
- Date: Mon, 8 Jul 2024 21:07:33 GMT
- Title: 7T MRI Synthesization from 3T Acquisitions
- Authors: Qiming Cui, Duygu Tosun, Pratik Mukherjee, Reza Abbasi-Asl
- Abstract summary: Supervised deep learning techniques can be used to generate synthetic 7T MRIs from 3T MRI inputs.
In this paper, we introduce multiple novel 7T synthesization algorithms based on custom-designed variants of the V-Net convolutional neural network.
- Score: 1.1549572298362787
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised deep learning techniques can be used to generate synthetic 7T MRIs from 3T MRI inputs. This image enhancement process leverages the advantages of ultra-high-field MRI to improve the signal-to-noise and contrast-to-noise ratios of 3T acquisitions. In this paper, we introduce multiple novel 7T synthesization algorithms based on custom-designed variants of the V-Net convolutional neural network. We demonstrate that the V-Net based model has superior performance in enhancing both single-site and multi-site MRI datasets compared to the existing benchmark model. When trained on 3T-7T MRI pairs from 8 subjects with mild Traumatic Brain Injury (TBI), our model achieves state-of-the-art 7T synthesization performance. Compared to previous works, synthetic 7T images generated from our pipeline also display superior enhancement of pathological tissue. Additionally, we implement and test a data augmentation scheme for training models that are robust to variations in the input distribution. This allows synthetic 7T models to accommodate intra-scanner and inter-scanner variability in multisite datasets. On a harmonized dataset consisting of 18 3T-7T MRI pairs from two institutions, including both healthy subjects and those with mild TBI, our model maintains its performance and can generalize to 3T MRI inputs with lower resolution. Our findings demonstrate the promise of V-Net based models for MRI enhancement and offer a preliminary probe into improving the generalizability of synthetic 7T models with data augmentation.
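To make the pipeline described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a V-Net-style 3D encoder-decoder for 3T-to-7T synthesis, paired with a simple intensity augmentation meant to mimic intra-/inter-scanner variability. The names (`VNetLike`, `scanner_variation_augment`), layer counts, channel widths, normalization choices, and L1 loss are illustrative assumptions, not the authors' released architecture or training code.

```python
# Sketch only: a V-Net-flavored 3D residual encoder-decoder for 3T -> 7T synthesis.
# All design choices here are placeholders assumed for illustration.
import torch
import torch.nn as nn

class ResBlock3d(nn.Module):
    """Two 3D convolutions with a residual connection (V-Net-style)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.InstanceNorm3d(ch), nn.PReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.InstanceNorm3d(ch),
        )
        self.act = nn.PReLU()

    def forward(self, x):
        return self.act(self.body(x) + x)

class VNetLike(nn.Module):
    """Single-channel 3T volume in, single-channel synthetic 7T volume out.
    Input patch sides must be divisible by 4 (two stride-2 stages)."""
    def __init__(self, base=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(1, base, 3, padding=1), ResBlock3d(base))
        self.down1 = nn.Conv3d(base, base * 2, 2, stride=2)       # halve resolution
        self.enc2 = ResBlock3d(base * 2)
        self.down2 = nn.Conv3d(base * 2, base * 4, 2, stride=2)
        self.bottleneck = ResBlock3d(base * 4)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = ResBlock3d(base * 4)                          # after skip concat
        self.up1 = nn.ConvTranspose3d(base * 4, base, 2, stride=2)
        self.dec1 = ResBlock3d(base * 2)
        self.head = nn.Conv3d(base * 2, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down1(e1))
        b = self.bottleneck(self.down2(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return x + self.head(d1)  # predict the 7T - 3T residual

def scanner_variation_augment(x, gamma_range=(0.8, 1.25), noise_std=0.02):
    """Hypothetical augmentation mimicking scanner variability:
    random gamma (contrast) jitter plus additive Gaussian noise."""
    gamma = torch.empty(1).uniform_(*gamma_range).item()
    x = x.clamp(min=0) ** gamma
    return x + noise_std * torch.randn_like(x)

# Usage sketch with toy paired 3T/7T patches normalized to [0, 1]:
model = VNetLike()
x3t = torch.rand(1, 1, 64, 64, 64)
y7t = torch.rand(1, 1, 64, 64, 64)
loss = nn.functional.l1_loss(model(scanner_variation_augment(x3t)), y7t)
loss.backward()
```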
Related papers
- UAVTwin: Neural Digital Twins for UAVs using Gaussian Splatting [57.63613048492219]
We present UAVTwin, a method for creating digital twins from real-world environments and facilitating data augmentation for training downstream models embedded in unmanned aerial vehicles (UAVs).
This is achieved by integrating 3D Gaussian Splatting (3DGS) for reconstructing backgrounds along with controllable synthetic human models that display diverse appearances and actions in multiple poses.
arXiv Detail & Related papers (2025-04-02T22:17:30Z) - Enhancing 3T BOLD fMRI SNR using Unpaired 7T Data with Schrödinger Bridge Diffusion [1.8091533096543726]
Most research relies on 3T MRI systems, which offer lower spatial and temporal resolution and lower SNR.
We propose a novel framework that aligns 7T and 3T fMRI data from different subjects and datasets.
We then apply an unpaired Brain Disk Schrödinger Bridge diffusion model to enhance the SNR of the 3T data.
arXiv Detail & Related papers (2025-04-01T17:41:24Z) - SNRAware: Improved Deep Learning MRI Denoising with SNR Unit Training and G-factor Map Augmentation [4.45678909876146]
This retrospective study trained 14 different transformer and convolutional models with two backbone architectures on a large dataset of 2,885,236 images from 96,605 cardiac retro-gated cine complex series acquired at 3T.
The proposed training scheme, termed SNRAware, leverages knowledge of the MRI reconstruction process to improve denoising performance by simulating large, high quality, and diverse synthetic datasets, and providing quantitative information about the noise distribution to the model.
arXiv Detail & Related papers (2025-03-23T18:16:36Z) - Residual Vision Transformer (ResViT) Based Self-Supervised Learning Model for Brain Tumor Classification [0.08192907805418585]
Self-supervised learning models provide data-efficient and remarkable solutions to limited dataset problems.
This paper introduces a generative SSL model for brain tumor classification in two stages.
The proposed model attains the highest accuracy, achieving 90.56% on the BraTS dataset with the T1 sequence, 98.53% on the Figshare dataset, and 98.47% on the Kaggle brain tumor dataset.
arXiv Detail & Related papers (2024-11-19T21:42:57Z) - Guided Synthesis of Labeled Brain MRI Data Using Latent Diffusion Models for Segmentation of Enlarged Ventricles [0.4188114563181614]
Deep learning models in medical contexts face challenges like data scarcity, inhomogeneity, and privacy concerns.
This study focuses on improving ventricular segmentation in brain MRI images using synthetic data.
arXiv Detail & Related papers (2024-11-02T19:44:10Z) - Reconstructing Retinal Visual Images from 3T fMRI Data Enhanced by Unsupervised Learning [2.1597860906272803]
We propose a novel framework that generates enhanced 3T fMRI data through an unsupervised Generative Adversarial Network (GAN).
In this paper, we demonstrate the reconstruction capabilities of the enhanced 3T fMRI data, highlighting its proficiency in generating superior input visual images.
arXiv Detail & Related papers (2024-04-07T23:31:37Z) - NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z) - SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z) - Transferring Ultrahigh-Field Representations for Intensity-Guided Brain Segmentation of Low-Field Magnetic Resonance Imaging [51.92395928517429]
The use of 7T MRI is limited by its high cost and lower accessibility compared to low-field (LF) MRI.
This study proposes a deep-learning framework that fuses the input LF magnetic resonance feature representations with the inferred 7T-like feature representations for brain image segmentation tasks.
arXiv Detail & Related papers (2024-02-13T12:21:06Z) - Spatial and Modal Optimal Transport for Fast Cross-Modal MRI Reconstruction [54.19448988321891]
We propose an end-to-end deep learning framework that utilizes T1-weighted images (T1WIs) as auxiliary modalities to expedite T2WIs' acquisitions.
We employ Optimal Transport (OT) to synthesize T2WIs by aligning T1WIs and performing cross-modal synthesis.
We prove that the reconstructed T2WIs and the synthetic T2WIs move closer on the T2 image manifold as the iterations increase (a generic entropic-OT sketch is included after this list).
arXiv Detail & Related papers (2023-05-04T12:20:51Z) - Multiscale Metamorphic VAE for 3D Brain MRI Synthesis [5.060516201839319]
Generative modeling of 3D brain MRIs presents difficulties in achieving high visual fidelity while ensuring sufficient coverage of the data distribution.
In this work, we propose to address this challenge with composable, multiscale morphological transformations in a variational autoencoder framework.
We show substantial performance improvements in FID while retaining comparable, or superior, reconstruction quality compared to prior work based on VAEs and generative adversarial networks (GANs).
arXiv Detail & Related papers (2023-01-09T09:15:30Z) - A Comparative Study on 1.5T-3T MRI Conversion through Deep Neural Network Models [0.0]
We explore the capabilities of a number of deep neural network models in generating whole-brain 3T-like MR images from 1.5T MRIs.
To the best of our knowledge, this study is the first work to evaluate multiple deep learning solutions for whole-brain MRI conversion.
arXiv Detail & Related papers (2022-10-12T16:14:42Z) - From Sound Representation to Model Robustness [82.21746840893658]
We investigate the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
Averaged over various experiments on three environmental sound datasets, we found the ResNet-18 model outperforms other deep learning architectures.
arXiv Detail & Related papers (2020-07-27T17:30:49Z) - Towards a Competitive End-to-End Speech Recognition for CHiME-6 Dinner Party Transcription [73.66530509749305]
In this paper, we argue that, even in difficult cases, some end-to-end approaches show performance close to the hybrid baseline.
We experimentally compare and analyze CTC-Attention versus RNN-Transducer approaches along with RNN versus Transformer architectures.
Our best end-to-end model, based on the RNN-Transducer together with improved beam search, reaches quality only 3.8% absolute WER worse than the LF-MMI TDNN-F CHiME-6 Challenge baseline.
arXiv Detail & Related papers (2020-04-22T19:08:33Z)
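As referenced in the Spatial and Modal Optimal Transport entry above, the following is a generic, self-contained sketch of entropically regularized optimal transport (Sinkhorn iterations) between two toy intensity histograms. It only illustrates the basic OT machinery; it is not that paper's spatial/modal OT formulation, and the histograms, cost matrix, and regularization strength are illustrative assumptions.

```python
# Generic entropic OT (Sinkhorn) between two toy 1-D intensity histograms.
import numpy as np

def sinkhorn(a, b, cost, eps=0.05, n_iter=200):
    """Return an entropically regularized transport plan between histograms a and b."""
    K = np.exp(-cost / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # scale columns toward marginal b
        u = a / (K @ v)                  # scale rows toward marginal a
    return u[:, None] * K * v[None, :]   # plan P with row sums a, column sums ~b

# Toy example: align a T1-like histogram to a T2-like histogram.
bins = np.linspace(0.0, 1.0, 32)
a = np.exp(-((bins - 0.3) ** 2) / 0.02); a /= a.sum()
b = np.exp(-((bins - 0.6) ** 2) / 0.05); b /= b.sum()
cost = (bins[:, None] - bins[None, :]) ** 2
plan = sinkhorn(a, b, cost)
print(plan.sum(), np.allclose(plan.sum(axis=1), a, atol=1e-6))
```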
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.