MRI Cross-Modal Synthesis: A Comparative Study of Generative Models for T1-to-T2 Reconstruction
- URL: http://arxiv.org/abs/2602.07068v1
- Date: Thu, 05 Feb 2026 19:07:58 GMT
- Title: MRI Cross-Modal Synthesis: A Comparative Study of Generative Models for T1-to-T2 Reconstruction
- Authors: Ali Alqutayfi, Sadam Al-Azani,
- Abstract summary: Cross-modal MRI synthesis involves generating images from one acquisition protocol using another. This paper presents a comparison of three state-of-the-art generative models for T1-to-T2 MRI reconstruction: Pix2Pix GAN, CycleGAN, and Variational Autoencoder (VAE).
- Score: 0.42970700836450487
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: MRI cross-modal synthesis involves generating images from one acquisition protocol using another, offering considerable clinical value by reducing scan time while maintaining diagnostic information. This paper presents a comprehensive comparison of three state-of-the-art generative models for T1-to-T2 MRI reconstruction: Pix2Pix GAN, CycleGAN, and Variational Autoencoder (VAE). Using the BraTS 2020 dataset (11,439 training and 2,000 testing slices), we evaluate these models based on established metrics including Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM). Our experiments demonstrate that all models can successfully synthesize T2 images from T1 inputs, with CycleGAN achieving the highest PSNR (32.28 dB) and SSIM (0.9008), while Pix2Pix GAN provides the lowest MSE (0.005846). The VAE, though showing lower quantitative performance (MSE: 0.006949, PSNR: 24.95 dB, SSIM: 0.6573), offers advantages in latent space representation and sampling capabilities. This comparative study provides valuable insights for researchers and clinicians selecting appropriate generative models for MRI synthesis applications based on their specific requirements and data constraints.
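The evaluation metrics named in the abstract can be illustrated in a few lines of plain Python. This is a minimal sketch on toy intensity lists, not the paper's evaluation code; the function names and toy values are invented for the example, and SSIM is omitted because it additionally requires windowed luminance/contrast/structure statistics.

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length intensity sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    err = mse(a, b)
    if err == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / err)

# Toy example: a "ground-truth" T2 slice and a synthetic one, flattened
# to 1-D lists of intensities normalized to [0, 1].
truth = [0.0, 0.25, 0.5, 0.75, 1.0]
synth = [0.0, 0.20, 0.55, 0.70, 1.0]

print(round(mse(truth, synth), 6))   # 0.0015
print(round(psnr(truth, synth), 2))  # 28.24
```

In practice these metrics are computed per slice (or per volume) and averaged over the test set, which is how the per-model figures quoted above are typically reported.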
Related papers
- Pattern-Aware Diffusion Synthesis of fMRI/dMRI with Tissue and Microstructural Refinement [34.55493442995441]
We propose PDS, a pattern-aware dual-modal 3D diffusion framework for cross-modality learning. We also introduce a tissue refinement network integrated with an efficient microstructure refinement to maintain structural fidelity and fine details. PDS achieves state-of-the-art results, with PSNR/SSIM scores of 29.83 dB/90.84% for fMRI synthesis and 30.00 dB/77.55% for dMRI synthesis.
arXiv Detail & Related papers (2025-11-07T03:51:00Z) - An Efficient 3D Latent Diffusion Model for T1-contrast Enhanced MRI Generation [7.487974687364868]
Gadolinium-based contrast agents (GBCAs) are commonly employed with T1w MRI to enhance lesion visualization but are restricted in patients at risk of nephrogenic systemic fibrosis. This study develops a 3D deep-learning framework to generate T1-contrast enhanced images (T1C) from pre-contrast multiparametric MRI.
arXiv Detail & Related papers (2025-09-29T02:22:55Z) - PanoDiff-SR: Synthesizing Dental Panoramic Radiographs using Diffusion and Super-resolution [60.970656010712275]
We propose a combination of diffusion-based generation (PanoDiff) and Super-Resolution (SR) for generating synthetic dental panoramic radiographs (PRs). The former generates a low-resolution (LR) seed of a PR, which is then processed by the SR model to yield a high-resolution (HR) PR of size 1024 × 512. For SR, we propose a state-of-the-art transformer that learns local-global relationships, resulting in sharper edges and textures.
arXiv Detail & Related papers (2025-07-12T09:52:10Z) - SNRAware: Improved Deep Learning MRI Denoising with SNR Unit Training and G-factor Map Augmentation [4.45678909876146]
This retrospective study trained 14 different transformer and convolutional models with two backbone architectures on a large dataset of 2,885,236 images from 96,605 cardiac retro-gated cine complex series acquired at 3T. The proposed training scheme, termed SNRAware, leverages knowledge of the MRI reconstruction process to improve denoising performance by simulating large, high-quality, and diverse synthetic datasets, and providing quantitative information about the noise distribution to the model.
arXiv Detail & Related papers (2025-03-23T18:16:36Z) - A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
We propose a unified MRI reconstruction model robust to various measurement undersampling patterns and image resolutions. Our model improves SSIM by 11% and PSNR by 4 dB over a state-of-the-art CNN (End-to-End VarNet), with 600× faster inference than diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z) - Self-Supervised Adversarial Diffusion Models for Fast MRI Reconstruction [1.167578793004766]
We propose a self-supervised deep learning compressed sensing MRI method based on adversarial diffusion models.
The method was evaluated on a single-coil brain axial post-contrast T1-weighted (T1-w) dataset of 1,376 images from 50 patients.
It was compared with the ReconFormer Transformer and SS-MRI, assessing performance using normalized mean squared error (NMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM).
arXiv Detail & Related papers (2024-06-21T21:22:17Z) - Generalizable synthetic MRI with physics-informed convolutional networks [57.628770497971246]
We develop a physics-informed deep learning-based method to synthesize multiple brain magnetic resonance imaging (MRI) contrasts from a single five-minute acquisition.
We investigate its ability to generalize to arbitrary contrasts to accelerate neuroimaging protocols.
arXiv Detail & Related papers (2023-05-21T21:16:20Z) - Cycle-guided Denoising Diffusion Probability Model for 3D Cross-modality MRI Synthesis [1.9632065069564202]
We propose a Cycle-guided Denoising Diffusion Probability Model (CG-DDPM) for cross-modality MRI synthesis.
Two DDPMs condition each other to generate synthetic images from two different MRI pulse sequences.
Two DDPMs exchange random latent noise in the reverse processes, which helps to regularize both DDPMs and generate matching images in two modalities.
arXiv Detail & Related papers (2023-04-28T18:28:54Z) - Fast T2w/FLAIR MRI Acquisition by Optimal Sampling of Information Complementary to Pre-acquired T1w MRI [52.656075914042155]
We propose an iterative framework to optimize the under-sampling pattern for MRI acquisition of another modality.
We have demonstrated superior performance of our learned under-sampling patterns on a public dataset.
arXiv Detail & Related papers (2021-11-11T04:04:48Z) - Confidence-guided Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR Images in Patients with Post-treatment Malignant Gliomas [65.64363834322333]
Confidence Guided SAMR (CG-SAMR) synthesizes data from lesion information to multi-modal anatomic sequences.
A confidence-guidance module steers the synthesis based on a confidence measure of the intermediate results.
Experiments on real clinical data demonstrate that the proposed model can perform better than state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-08-06T20:20:22Z) - Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR Images using a GAN [59.60954255038335]
The proposed framework consists of a stretch-out up-sampling module, a brain atlas encoder, a segmentation consistency module, and multi-scale label-wise discriminators.
Experiments on real clinical data demonstrate that the proposed model can perform significantly better than the state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-06-26T02:50:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all summaries) and is not responsible for any consequences of its use.