MDPG: Multi-domain Diffusion Prior Guidance for MRI Reconstruction
- URL: http://arxiv.org/abs/2506.23701v1
- Date: Mon, 30 Jun 2025 10:25:08 GMT
- Title: MDPG: Multi-domain Diffusion Prior Guidance for MRI Reconstruction
- Authors: Lingtong Zhang, Mengdie Song, Xiaohan Hao, Huayu Mai, Bensheng Qiu,
- Abstract summary: We propose Multi-domain Diffusion Prior Guidance (MDPG) to enhance data consistency in MRI reconstruction tasks. Specifically, we first construct a Visual-Mamba-based backbone, which enables efficient encoding and reconstruction of under-sampled images. A novel Latent Guided Attention (LGA) is proposed for efficient fusion in multi-level latent domains.
- Score: 0.4893345190925178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Magnetic Resonance Imaging (MRI) reconstruction is essential in medical diagnostics. As the latest generative models, diffusion models (DMs) have struggled to produce high-fidelity images due to their stochastic nature in image domains. Latent diffusion models (LDMs) yield both compact and detailed prior knowledge in latent domains, which could effectively guide the model towards more effective learning of the original data distribution. Inspired by this, we propose Multi-domain Diffusion Prior Guidance (MDPG) provided by pre-trained LDMs to enhance data consistency in MRI reconstruction tasks. Specifically, we first construct a Visual-Mamba-based backbone, which enables efficient encoding and reconstruction of under-sampled images. Then pre-trained LDMs are integrated to provide conditional priors in both latent and image domains. A novel Latent Guided Attention (LGA) is proposed for efficient fusion in multi-level latent domains. Simultaneously, to effectively utilize a prior in both the k-space and image domain, under-sampled images are fused with generated full-sampled images by the Dual-domain Fusion Branch (DFB) for self-adaption guidance. Lastly, to further enhance the data consistency, we propose a k-space regularization strategy based on the non-auto-calibration signal (NACS) set. Extensive experiments on two public MRI datasets fully demonstrate the effectiveness of the proposed methodology. The code is available at https://github.com/Zolento/MDPG.
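The "data consistency" that MDPG's k-space regularization aims to strengthen is a standard ingredient of MRI reconstruction: wherever k-space was actually measured, the reconstruction should agree with the measurement. A minimal sketch of a hard data-consistency step is shown below; this is an illustration of the general concept, not the paper's NACS-based strategy, and the function name `data_consistency` is chosen here for clarity.

```python
import numpy as np

def data_consistency(x_recon, k_measured, mask):
    """Hard data consistency: replace the model's k-space values with the
    measured ones at sampled locations, then return to the image domain.

    x_recon    : complex image estimate from the reconstruction model
    k_measured : measured (under-sampled) k-space, zeros where unsampled
    mask       : boolean sampling mask, True where k-space was acquired
    """
    k_recon = np.fft.fft2(x_recon)              # image -> k-space
    k_dc = np.where(mask, k_measured, k_recon)  # keep measured samples
    return np.fft.ifft2(k_dc)                   # k-space -> image
```

With a fully sampled mask this step simply returns the measured image; with a partial mask it projects the model output onto the set of images consistent with the acquired data.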
Related papers
- Self-Consistent Nested Diffusion Bridge for Accelerated MRI Reconstruction [22.589087990596887]
We focus on the underexplored task of magnitude-image-based MRI reconstruction. Recent advancements in diffusion models, particularly denoising diffusion probabilistic models, have demonstrated strong capabilities in modeling image priors. We propose a novel Self-Consistent Nested Diffusion Bridge (SC-NDB) framework that models accelerated MRI reconstruction.
arXiv Detail & Related papers (2024-12-13T09:35:34Z) - LDPM: Towards undersampled MRI reconstruction with MR-VAE and Latent Diffusion Prior [4.499605583818247]
Some works attempted to solve MRI reconstruction with diffusion models, but these methods operate directly in pixel space. Latent diffusion models, pre-trained on natural images with rich visual priors, are expected to solve the high computational cost problem in MRI reconstruction. A novel Latent Diffusion Prior-based undersampled MRI reconstruction (LDPM) method is proposed.
arXiv Detail & Related papers (2024-11-05T09:51:59Z) - Cross-conditioned Diffusion Model for Medical Image to Image Translation [22.020931436223204]
We introduce a Cross-conditioned Diffusion Model (CDM) for medical image-to-image translation.
First, we propose a Modality-specific Representation Model (MRM) to model the distribution of target modalities.
Then, we design a Modality-decoupled Diffusion Network (MDN) to efficiently and effectively learn the distribution from MRM.
arXiv Detail & Related papers (2024-09-13T02:48:56Z) - Diffuse-UDA: Addressing Unsupervised Domain Adaptation in Medical Image Segmentation with Appearance and Structure Aligned Diffusion Models [31.006056670998852]
The scarcity and complexity of voxel-level annotations in 3D medical imaging present significant challenges.
This disparity affects the fairness of artificial intelligence algorithms in healthcare.
We introduce Diffuse-UDA, a novel method leveraging diffusion models to tackle Unsupervised Domain Adaptation (UDA) in medical image segmentation.
arXiv Detail & Related papers (2024-08-12T08:21:04Z) - DP-MDM: Detail-Preserving MR Reconstruction via Multiple Diffusion Models [7.601874398726257]
We propose a comprehensive detail-preserving reconstruction method using multiple diffusion models.
The framework effectively represents multi-scale sampled data, taking into account the sparsity of the inverted pyramid architecture.
The proposed method was evaluated by conducting experiments on clinical and public datasets.
arXiv Detail & Related papers (2024-05-09T13:37:18Z) - NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z) - MRPD: Undersampled MRI reconstruction by prompting a large latent diffusion model [18.46762698682188]
We propose a novel framework for undersampled MRI Reconstruction by Prompting a large latent Diffusion model (MRPD).
For unsupervised reconstruction, MRSampler guides LLDM with a random-phase-modulated hard-to-soft control.
Experiments on FastMRI and IXI show that MRPD is the only model that supports both MRI database-free and database-available scenarios.
arXiv Detail & Related papers (2024-02-16T11:54:34Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to Stable Diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z) - Adaptive Diffusion Priors for Accelerated MRI Reconstruction [0.9895793818721335]
Deep MRI reconstruction is commonly performed with conditional models that de-alias undersampled acquisitions to recover images consistent with fully-sampled data.
Unconditional models instead learn generative image priors decoupled from the operator to improve reliability against domain shifts related to the imaging operator.
Here we propose the first adaptive diffusion prior for MRI reconstruction, AdaDiff, to improve performance and reliability against domain shifts.
arXiv Detail & Related papers (2022-07-12T22:45:08Z) - Federated Learning of Generative Image Priors for MRI Reconstruction [5.3963856146595095]
Multi-institutional efforts can facilitate training of deep MRI reconstruction models, albeit privacy risks arise during cross-site sharing of imaging data.
We introduce a novel method for MRI reconstruction based on Federated learning of Generative IMage Priors (FedGIMP).
FedGIMP leverages a two-stage approach: cross-site learning of a generative MRI prior, and subject-specific injection of the imaging operator.
arXiv Detail & Related papers (2022-02-08T22:17:57Z) - Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain and the image detail in the image domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z) - Multi-institutional Collaborations for Improving Deep Learning-based Magnetic Resonance Image Reconstruction Using Federated Learning [62.17532253489087]
Deep learning methods have been shown to produce superior performance on MR image reconstruction.
These methods require large amounts of data which is difficult to collect and share due to the high cost of acquisition and medical data privacy regulations.
We propose a federated learning (FL) based solution in which we take advantage of the MR data available at different institutions while preserving patients' privacy.
arXiv Detail & Related papers (2021-03-03T03:04:40Z) - Multifold Acceleration of Diffusion MRI via Slice-Interleaved Diffusion Encoding (SIDE) [50.65891535040752]
We propose a diffusion encoding scheme, called Slice-Interleaved Diffusion Encoding (SIDE), that interleaves each diffusion-weighted (DW) image volume with slices encoded with different diffusion gradients.
We also present a method based on deep learning for effective reconstruction of DW images from the highly slice-undersampled data.
arXiv Detail & Related papers (2020-02-25T14:48:17Z)
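The accelerated-MRI setting shared by most of the papers above — acquire only a subset of k-space, then reconstruct an image from it — can be sketched with a simple 1-D Cartesian under-sampling simulation. This is an illustrative toy (a fastMRI-style random-column mask with a fully sampled center band); the function name `undersample` and the parameter defaults are chosen here for the example, not taken from any of the papers.

```python
import numpy as np

def undersample(image, accel=4, center_frac=0.08, seed=0):
    """Simulate 1-D Cartesian under-sampling of a 2-D image's k-space.

    Keeps a fully sampled low-frequency band (center_frac of columns)
    plus random columns so that roughly 1/accel of k-space is acquired.
    Returns the under-sampled k-space, the mask, and the zero-filled
    reconstruction (the usual starting point for a learned model).
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    mask_cols = rng.random(w) < (1.0 / accel)    # random column selection
    n_center = max(1, int(center_frac * w))
    lo = (w - n_center) // 2
    mask_cols[lo:lo + n_center] = True           # always keep the center band
    mask = np.broadcast_to(mask_cols, (h, w))
    k_full = np.fft.fftshift(np.fft.fft2(image))
    k_under = k_full * mask                      # zero out unacquired columns
    zero_filled = np.fft.ifft2(np.fft.ifftshift(k_under))
    return k_under, mask, zero_filled
```

The zero-filled reconstruction exhibits the aliasing artifacts that the diffusion-prior and deep-unfolding methods listed above are designed to remove.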
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.