CRUNet-MR-Univ: A Foundation Model for Diverse Cardiac MRI Reconstruction
- URL: http://arxiv.org/abs/2601.04428v1
- Date: Wed, 07 Jan 2026 22:23:56 GMT
- Title: CRUNet-MR-Univ: A Foundation Model for Diverse Cardiac MRI Reconstruction
- Authors: Donghang Lyu, Marius Staring, Hildo Lamb, Mariya Doneva,
- Abstract summary: Deep learning has attracted increasing interest in the field of cardiac MRI reconstruction. CMR scans exhibit wide variability in image contrast, sampling patterns, vendors, anatomical structures, and scanner types. Most existing models are designed to handle only a single or narrow subset of these variations, leading to performance degradation when faced with distribution shifts.
- Score: 2.695455737934403
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, deep learning has attracted increasing attention in the field of Cardiac MRI (CMR) reconstruction due to its superior performance over traditional methods, particularly in handling higher acceleration factors, highlighting its potential for real-world clinical applications. However, current deep learning methods remain limited in generalizability. CMR scans exhibit wide variability in image contrast, sampling patterns, scanner vendors, anatomical structures, and disease types. Most existing models are designed to handle only a single or narrow subset of these variations, leading to performance degradation when faced with distribution shifts. Therefore, it is beneficial to develop a unified model capable of generalizing across diverse CMR scenarios. To this end, we propose CRUNet-MR-Univ, a foundation model that leverages spatio-temporal correlations and prompt-based priors to effectively handle the full diversity of CMR scans. Our approach consistently outperforms baseline methods across a wide range of settings, highlighting its effectiveness and promise.
Related papers
- Enabling Ultra-Fast Cardiovascular Imaging Across Heterogeneous Clinical Environments with a Generalist Foundation Model and Multimodal Database [64.65360708629485]
MMCMR-427K is the largest and most comprehensive multimodal cardiovascular magnetic resonance k-space database. CardioMM is a reconstruction foundation model capable of adapting to heterogeneous fast CMR imaging scenarios. CardioMM unifies semantic contextual understanding with physics-informed data consistency to deliver robust reconstructions.
arXiv Detail & Related papers (2025-12-25T12:47:50Z) - Mixture of Ranks with Degradation-Aware Routing for One-Step Real-World Image Super-Resolution [76.66229730098759]
In real-world image super-resolution (Real-ISR), existing approaches mainly rely on fine-tuning pre-trained diffusion models. We propose a Mixture-of-Ranks (MoR) architecture for single-step image super-resolution. We introduce a fine-grained expert partitioning strategy that treats each rank in LoRA as an independent expert.
arXiv Detail & Related papers (2025-11-20T04:11:44Z) - UPCMR: A Universal Prompt-guided Model for Random Sampling Cardiac MRI Reconstruction [1.2773749417703923]
We introduce UPCMR, a universal unrolled model designed for cardiac magnetic resonance imaging reconstruction. It incorporates two kinds of learnable prompts, an undersampling-specific prompt and a spatial-specific prompt, and integrates them with a UNet structure in each block. Through an effective training strategy, it substantially enhances reconstructed image quality across all random sampling scenarios.
arXiv Detail & Related papers (2025-02-18T07:44:35Z) - MedMAP: Promoting Incomplete Multi-modal Brain Tumor Segmentation with Alignment [20.358300924109162]
In clinical practice, certain modalities of MRI may be missing, which presents a more difficult scenario.
Knowledge Distillation, Domain Adaptation, and Shared Latent Space methods have emerged as promising strategies.
We propose a novel paradigm that aligns latent features of the involved modalities to a well-defined distribution anchor as a substitute for the pre-trained model.
arXiv Detail & Related papers (2024-08-18T13:16:30Z) - Joint Edge Optimization Deep Unfolding Network for Accelerated MRI Reconstruction [3.9681863841849623]
We build a joint edge optimization model that not only incorporates individual regularizers specific to both the MR image and the edges, but also enforces a co-regularizer to effectively establish a stronger correlation between them.
Specifically, the edge information is defined through a non-edge probability map to guide the image reconstruction during the optimization process.
Meanwhile, the regularizers pertaining to images and edges are incorporated into a deep unfolding network to automatically learn their respective prior information.
arXiv Detail & Related papers (2024-05-09T05:51:33Z) - Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z) - Generalized Deep Learning-based Proximal Gradient Descent for MR Reconstruction [3.128676265663467]
Data consistency with the physical forward model is crucial in inverse problems, especially in MR image reconstruction.
A deep learning-based proximal gradient descent method was proposed that uses a network as a regularization term independent of the forward model.
This regularizer is pre-trained once, applied to different MR acquisition settings, and was compared to conventional L1 regularization, showing a 3 dB improvement in peak signal-to-noise ratio.
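The proximal gradient scheme described above alternates a data-consistency gradient step with a proximal (regularization) step. A minimal NumPy sketch is given below, with classical soft-thresholding standing in for the paper's pre-trained network regularizer; the function names, single-coil Cartesian setup, and parameter values are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the L1 norm; the paper replaces this
    # classical step with a one-time pre-trained network prior.
    mag = np.abs(x)
    return x * np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12)

def pgd_recon(y, mask, n_iters=50, step=1.0, lam=1e-3):
    """Proximal gradient descent for single-coil Cartesian MRI.

    y    : undersampled k-space (zeros at unsampled positions)
    mask : binary sampling mask, same shape as y
    """
    x = np.fft.ifft2(y)  # zero-filled initial estimate
    for _ in range(n_iters):
        # gradient step on the data-consistency term ||M F x - y||^2
        resid = mask * np.fft.fft2(x) - y
        x = x - step * np.fft.ifft2(resid)
        # proximal step: this is where a learned regularizer would act
        x = soft_threshold(x, lam)
    return x
```

Because the regularizer is decoupled from the forward model (`mask` and the FFT), the same proximal step can be reused across acquisition settings by changing only the sampling mask.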
arXiv Detail & Related papers (2022-11-30T10:31:06Z) - Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold the iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z) - Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain and restore image details in the image domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z) - Multi-institutional Collaborations for Improving Deep Learning-based Magnetic Resonance Image Reconstruction Using Federated Learning [62.17532253489087]
Deep learning methods have been shown to produce superior performance on MR image reconstruction.
These methods require large amounts of data, which are difficult to collect and share due to the high cost of acquisition and medical data privacy regulations.
We propose a federated learning (FL) based solution in which we take advantage of the MR data available at different institutions while preserving patients' privacy.
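The federated learning idea above can be sketched with federated averaging (FedAvg): each institution trains locally on its own data and only model weights are shared and averaged. The sketch below uses a linear least-squares model as a stand-in for a reconstruction network; client data, hyperparameters, and function names are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    # One client's local gradient descent on a linear least-squares
    # model (a stand-in for the MR reconstruction network trained
    # at each institution); raw data X, y never leave the client.
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(clients, w0, rounds=30):
    """Federated averaging: only weights travel to the server,
    where they are averaged proportionally to each client's
    dataset size."""
    w = w0
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    for _ in range(rounds):
        local = [local_update(w, X, y) for X, y in clients]
        w = sum(s * lw for s, lw in zip(sizes, local)) / sizes.sum()
    return w
```

This preserves privacy in the sense that patient images stay on-site; in practice, FL systems add further protections (e.g. secure aggregation) beyond this sketch.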
arXiv Detail & Related papers (2021-03-03T03:04:40Z) - Robust Image Reconstruction with Misaligned Structural Information [0.27074235008521236]
We propose a variational framework which jointly performs reconstruction and registration.
Our approach is the first to achieve this for different modalities and outperforms established approaches in the accuracy of both reconstruction and registration.
arXiv Detail & Related papers (2020-04-01T17:21:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.