Sub-DM: Subspace Diffusion Model with Orthogonal Decomposition for MRI Reconstruction
- URL: http://arxiv.org/abs/2411.03758v1
- Date: Wed, 06 Nov 2024 08:33:07 GMT
- Title: Sub-DM: Subspace Diffusion Model with Orthogonal Decomposition for MRI Reconstruction
- Authors: Yu Guan, Qinrong Cai, Wei Li, Qiuyun Fan, Dong Liang, Qiegen Liu
- Abstract summary: Sub-DM is a subspace diffusion model that restricts the diffusion process via projections onto a subspace as the k-space data distribution evolves toward noise.
It circumvents the inference challenges posed by the complex and high-dimensional characteristics of k-space data.
It allows the diffusion processes in different spaces to refine models through a mutual feedback mechanism, enabling the learning of an accurate prior even when dealing with complex k-space data.
- Score: 13.418240070456987
- License:
- Abstract: Diffusion model-based approaches have recently achieved remarkable success in MRI reconstruction, but their integration into clinical routine remains challenging due to time-consuming convergence. This is particularly notable when a conventional diffusion process is applied directly to k-space data without considering the inherent properties of k-space sampling, limiting k-space learning efficiency and image reconstruction quality. To tackle these challenges, we introduce a subspace diffusion model with orthogonal decomposition, a method (referred to as Sub-DM) that restricts the diffusion process via projections onto a subspace as the k-space data distribution evolves toward noise. In particular, the subspace diffusion model circumvents the inference challenges posed by the complex and high-dimensional characteristics of k-space data, and the highly compact subspace ensures that the diffusion process requires only a few simple iterations to produce accurate prior information. Furthermore, the orthogonal decomposition strategy based on the wavelet transform mitigates information loss when the vanilla diffusion process is migrated to the subspace. Because the strategy is approximately reversible, the entire process can be inverted, which allows the diffusion processes in the different spaces to refine each other through a mutual feedback mechanism and enables the learning of an accurate prior even for complex k-space data. Comprehensive experiments on different datasets clearly demonstrate the superiority of Sub-DM over state-of-the-art methods in terms of reconstruction speed and quality.
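A minimal sketch may make the orthogonal decomposition idea in the abstract concrete: complex k-space is split into sub-bands with an orthogonal wavelet, the compact coarse sub-band plays the role of the subspace that hosts the diffusion process, and the decomposition is (approximately) reversible. The wavelet choice, array size, and function names below are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): orthogonal wavelet
# decomposition of complex k-space into a compact subspace and back.
import numpy as np
import pywt

def to_subspace(kspace, wavelet="haar"):
    """Decompose complex k-space with an orthogonal wavelet.

    Real and imaginary channels are transformed separately; the coarse
    (LL) sub-band acts as the compact subspace in which the diffusion
    process would run, while the detail sub-bands are kept so the
    decomposition can be inverted later.
    """
    coeffs_re = pywt.dwt2(kspace.real, wavelet)
    coeffs_im = pywt.dwt2(kspace.imag, wavelet)
    return coeffs_re, coeffs_im

def from_subspace(coeffs_re, coeffs_im, wavelet="haar"):
    """Invert the decomposition back to complex k-space."""
    return pywt.idwt2(coeffs_re, wavelet) + 1j * pywt.idwt2(coeffs_im, wavelet)

# Round-trip check on random "k-space": the orthogonal transform is
# reversible up to numerical precision.
k = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
k_rec = from_subspace(*to_subspace(k))
assert np.allclose(k, k_rec, atol=1e-10)
```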
Related papers
- Diffusion State-Guided Projected Gradient for Inverse Problems [82.24625224110099]
We propose Diffusion State-Guided Projected Gradient (DiffStateGrad) for inverse problems.
DiffStateGrad projects the measurement gradient onto a subspace that is a low-rank approximation of an intermediate state of the diffusion process.
We highlight that DiffStateGrad improves the robustness of diffusion models in terms of the choice of measurement guidance step size and noise.
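As a rough illustration of this projection step (a sketch under assumptions: the rank, array shapes, and names below are invented for the example, not taken from the paper), the measurement gradient can be projected onto the column and row subspaces of a truncated SVD of an intermediate diffusion state:

```python
# Illustrative low-rank projection of a measurement gradient
# (not the authors' code; rank and shapes are assumptions).
import numpy as np

def project_gradient(grad, state, rank=16):
    """Project `grad` onto a rank-`rank` approximation of `state`."""
    U, _, Vt = np.linalg.svd(state, full_matrices=False)
    Ur, Vr = U[:, :rank], Vt[:rank, :]
    # Column-space and row-space projection: P_U @ grad @ P_V
    return Ur @ (Ur.T @ grad @ Vr.T) @ Vr

grad = np.random.randn(128, 128)   # placeholder measurement gradient
state = np.random.randn(128, 128)  # placeholder intermediate diffusion state
grad_proj = project_gradient(grad, state)
```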
arXiv Detail & Related papers (2024-10-04T14:26:54Z)
- Trivialized Momentum Facilitates Diffusion Generative Modeling on Lie Groups [37.78638937228254]
This article demonstrates how a technique called 'trivialization' can transfer the effectiveness of diffusion models in Euclidean spaces to Lie groups.
A momentum variable is algorithmically introduced to help transport the position variable between the data distribution and a fixed, easy-to-sample distribution.
The resulting method achieves state-of-the-art performance on protein and RNA torsion angle generation and sophisticated torus datasets.
arXiv Detail & Related papers (2024-05-25T23:53:07Z)
- Mitigating Data Consistency Induced Discrepancy in Cascaded Diffusion Models for Sparse-view CT Reconstruction [4.227116189483428]
This study introduces a novel Cascaded Diffusion with Discrepancy Mitigation framework.
It combines low-quality image generation in latent space with high-quality image generation in pixel space.
It minimizes computational costs by moving some inference steps from pixel space to latent space.
arXiv Detail & Related papers (2024-03-14T12:58:28Z)
- Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z)
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- SPIRiT-Diffusion: Self-Consistency Driven Diffusion Model for Accelerated MRI [14.545736786515837]
We introduce SPIRiT-Diffusion, a diffusion model for k-space inspired by the iterative self-consistent SPIRiT method.
We evaluate the proposed SPIRiT-Diffusion method using a 3D joint intracranial and carotid vessel wall imaging dataset.
arXiv Detail & Related papers (2023-04-11T08:43:52Z)
- DDS2M: Self-Supervised Denoising Diffusion Spatio-Spectral Model for Hyperspectral Image Restoration [103.79030498369319]
A self-supervised diffusion model for hyperspectral image restoration is proposed.
DDS2M shows a stronger ability to generalize than existing diffusion-based methods.
Experiments on HSI denoising, noisy HSI completion, and super-resolution on a variety of HSIs demonstrate DDS2M's superiority over existing task-specific state-of-the-art methods.
arXiv Detail & Related papers (2023-03-12T14:57:04Z)
- Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse Problems [64.29491112653905]
We propose a novel and efficient diffusion sampling strategy that synergistically combines diffusion sampling with Krylov subspace methods.
Specifically, we prove that if the tangent space at a sample denoised by Tweedie's formula forms a Krylov subspace, then CG initialized with the denoised data keeps the data-consistency update within that tangent space.
Our proposed method achieves an inference time more than 80 times faster than the previous state-of-the-art method.
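A hedged sketch of the two ingredients named in this summary, a Tweedie denoised estimate and a few conjugate-gradient (CG) data-consistency iterations applied to it; the operator A, measurements y, noise level sigma_t, and the score are placeholders, and this is not the authors' decomposed sampler.

```python
# Illustrative only: Tweedie denoising followed by CG on the normal
# equations A^T A x = A^T y, initialized at the denoised sample.
import numpy as np

def tweedie_denoise(x_t, score, sigma_t):
    """Tweedie's formula: E[x_0 | x_t] = x_t + sigma_t^2 * score(x_t)."""
    return x_t + (sigma_t ** 2) * score

def cg_data_consistency(x0_hat, A, y, n_iter=5):
    """A few CG iterations for data consistency, starting from x0_hat."""
    x = x0_hat.copy()
    r = A.T @ (y - A @ x)   # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy usage with a random linear forward operator.
A = np.random.randn(64, 128)
y = A @ np.random.randn(128)
x_t, score = np.random.randn(128), np.random.randn(128)
x0_hat = tweedie_denoise(x_t, score, sigma_t=0.5)
x_dc = cg_data_consistency(x0_hat, A, y)
```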
arXiv Detail & Related papers (2023-03-10T07:42:49Z)
- Dimensionality-Varying Diffusion Process [52.52681373641533]
Diffusion models learn to reverse a signal destruction process to generate new data.
We make a theoretical generalization of the forward diffusion process via signal decomposition.
We show that our strategy facilitates high-resolution image synthesis and improves the FID of a diffusion model trained on FFHQ at $1024\times1024$ resolution from 52.40 to 10.46.
arXiv Detail & Related papers (2022-11-29T09:05:55Z)
- High-Frequency Space Diffusion Models for Accelerated MRI [7.985113617260289]
Diffusion models with continuous stochastic differential equations (SDEs) have shown superior performance in image generation.
We propose a novel SDE tailored specifically for magnetic resonance (MR) reconstruction with the diffusion process in high-frequency space.
This approach ensures determinism in the fully sampled low-frequency regions and accelerates the sampling procedure of reverse diffusion.
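The high-frequency restriction can be sketched as follows (illustrative only; the circular mask, its radius, and the function name are assumptions): noise is added only outside the fully sampled low-frequency centre of k-space, so that region stays deterministic during the forward process.

```python
# Illustrative forward-diffusion step restricted to high-frequency k-space
# (not the authors' SDE; mask shape and radius are assumptions).
import numpy as np

def hf_forward_step(kspace, sigma, low_freq_radius=16):
    """Add complex Gaussian noise only outside a low-frequency disc."""
    h, w = kspace.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    low_freq = (yy - cy) ** 2 + (xx - cx) ** 2 <= low_freq_radius ** 2
    noise = sigma * (np.random.randn(h, w) + 1j * np.random.randn(h, w))
    noise[low_freq] = 0.0   # keep the fully sampled centre deterministic
    return kspace + noise
```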
arXiv Detail & Related papers (2022-08-10T14:04:20Z)