Adaptive Diffusion Priors for Accelerated MRI Reconstruction
- URL: http://arxiv.org/abs/2207.05876v3
- Date: Sun, 17 Sep 2023 18:44:44 GMT
- Title: Adaptive Diffusion Priors for Accelerated MRI Reconstruction
- Authors: Alper G\"ung\"or, Salman UH Dar, \c{S}aban \"Ozt\"urk, Yilmaz Korkmaz,
Gokberk Elmas, Muzaffer \"Ozbey, Tolga \c{C}ukur
- Abstract summary: Deep MRI reconstruction is commonly performed with conditional models that de-alias undersampled acquisitions to recover images consistent with fully-sampled data.
Unconditional models instead learn generative image priors decoupled from the operator to improve reliability against domain shifts related to the imaging operator.
Here we propose the first adaptive diffusion prior for MRI reconstruction, AdaDiff, to improve performance and reliability against domain shifts.
- Score: 0.9895793818721335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep MRI reconstruction is commonly performed with conditional models that
de-alias undersampled acquisitions to recover images consistent with
fully-sampled data. Since conditional models are trained with knowledge of the
imaging operator, they can show poor generalization across variable operators.
Unconditional models instead learn generative image priors decoupled from the
operator to improve reliability against domain shifts related to the imaging
operator. Recent diffusion models are particularly promising given their high
sample fidelity. Nevertheless, inference with a static image prior can perform
suboptimally. Here we propose the first adaptive diffusion prior for MRI
reconstruction, AdaDiff, to improve performance and reliability against domain
shifts. AdaDiff leverages an efficient diffusion prior trained via adversarial
mapping over large reverse diffusion steps. A two-phase reconstruction is
executed following training: a rapid-diffusion phase that produces an initial
reconstruction with the trained prior, and an adaptation phase that further
refines the result by updating the prior to minimize data-consistency loss.
Demonstrations on multi-contrast brain MRI clearly indicate that AdaDiff
outperforms competing conditional and unconditional methods under domain
shifts, and achieves superior or on-par within-domain performance.
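To make the two-phase inference concrete, the PyTorch-style sketch below shows one plausible reading of the procedure described in the abstract. It is a minimal sketch under stated assumptions, not the authors' implementation: the prior interface prior(x, t), the sampling mask, the acquired k-space y, and all step counts and learning rates are hypothetical.

```python
# Hypothetical sketch (not the authors' code) of a two-phase reconstruction
# with an adaptive diffusion prior:
#   phase 1: a few large reverse-diffusion steps with the frozen prior,
#            interleaved with hard data consistency in k-space;
#   phase 2: fine-tune the prior's weights on the test subject to minimize
#            a data-consistency loss.
import torch


def dc_loss(x, y, mask):
    """Squared error between the k-space of image x and the acquired samples y."""
    k = torch.fft.fft2(x, norm="ortho")
    return (((k - y) * mask).abs() ** 2).mean()


def reconstruct(prior, y, mask, diffusion_steps=4, adapt_steps=100, lr=1e-4):
    # Phase 1: rapid diffusion with the pretrained, frozen prior.
    x = torch.randn(y.shape, dtype=torch.float32)      # start from noise
    with torch.no_grad():
        for t in reversed(range(diffusion_steps)):
            x = prior(x, t)                             # one large reverse step
            k = torch.fft.fft2(x, norm="ortho")         # hard data consistency:
            k = torch.where(mask.bool(), y, k)          # keep measured samples
            x = torch.fft.ifft2(k, norm="ortho").real

    # Phase 2: adapt the prior to this subject via the data-consistency loss.
    optimizer = torch.optim.Adam(prior.parameters(), lr=lr)
    for _ in range(adapt_steps):
        optimizer.zero_grad()
        x_hat = prior(x, 0)                             # refine current estimate
        loss = dc_loss(x_hat, y, mask)
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        return prior(x, 0)
```

In the abstract's terms, the first loop corresponds to the rapid-diffusion phase and the second to the adaptation phase; how data consistency is enforced inside the reverse steps is an assumption of this sketch.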
Related papers
- pcaGAN: Improving Posterior-Sampling cGANs via Principal Component Regularization [11.393603788068777]
In ill-posed imaging inverse problems, there can exist many hypotheses that fit both the observed measurements and prior knowledge of the true image.
We propose a fast and accurate posterior-sampling conditional generative adversarial network (cGAN) that, through a novel form of regularization, aims for correctness in the posterior mean.
arXiv Detail & Related papers (2024-11-01T14:09:28Z)
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have dominated the field of large, generative image models.
We propose an algorithm for fast constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z)
- Model Inversion Attacks Through Target-Specific Conditional Diffusion Models [54.69008212790426]
Model inversion attacks (MIAs) aim to reconstruct private images from a target classifier's training set, thereby raising privacy concerns in AI applications.
Previous GAN-based MIAs tend to suffer from inferior generative fidelity due to GAN's inherent flaws and biased optimization within latent space.
We propose Diffusion-based Model Inversion (Diff-MI) attacks to alleviate these issues.
arXiv Detail & Related papers (2024-07-16T06:38:49Z)
- Self-Consistent Recursive Diffusion Bridge for Medical Image Translation [6.850683267295248]
Denoising diffusion models (DDMs) have gained recent traction in medical image translation given improved training stability over adversarial models.
We propose a novel self-consistent recursive diffusion bridge (SelfRDB) for improved performance in medical image translation.
Comprehensive analyses in multi-contrast MRI and MRI-CT translation indicate that SelfRDB offers superior performance against competing methods.
arXiv Detail & Related papers (2024-05-10T19:39:55Z)
- BlindDiff: Empowering Degradation Modelling in Diffusion Models for Blind Image Super-Resolution [52.47005445345593]
BlindDiff is a diffusion-model-based blind super-resolution method that tackles blind degradation settings in single-image super-resolution (SISR).
BlindDiff seamlessly integrates the MAP-based optimization into DMs.
Experiments on both synthetic and real-world datasets show that BlindDiff achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-03-15T11:21:34Z)
- JoReS-Diff: Joint Retinex and Semantic Priors in Diffusion Model for Low-light Image Enhancement [69.6035373784027]
Low-light image enhancement (LLIE) has achieved promising performance by employing conditional diffusion models.
Previous methods may neglect the importance of a sufficiently formulated task-specific conditioning strategy.
We propose JoReS-Diff, a novel approach that incorporates Retinex- and semantic-based priors as the additional pre-processing condition.
arXiv Detail & Related papers (2023-12-20T08:05:57Z)
- Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Medical Image Reconstruction [75.91471250967703]
We introduce a novel sampling framework called Steerable Conditional Diffusion.
This framework adapts the diffusion model, concurrently with image reconstruction, based solely on the information provided by the available measurement.
We achieve substantial enhancements in out-of-distribution performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z)
- DiffusionAD: Norm-guided One-step Denoising Diffusion for Anomaly Detection [89.49600182243306]
We reformulate the reconstruction process using a diffusion model into a noise-to-norm paradigm.
We propose a rapid one-step denoising paradigm, significantly faster than the traditional iterative denoising in diffusion models.
The segmentation sub-network predicts pixel-level anomaly scores using the input image and its anomaly-free restoration.
arXiv Detail & Related papers (2023-03-15T16:14:06Z)
- Stable Deep MRI Reconstruction using Generative Priors [13.400444194036101]
We propose a novel deep neural network-based regularizer which is trained in a generative setting on reference magnitude images only.
The results demonstrate competitive performance, on par with state-of-the-art end-to-end deep learning methods.
arXiv Detail & Related papers (2022-10-25T08:34:29Z)
- Federated Learning of Generative Image Priors for MRI Reconstruction [5.3963856146595095]
Multi-institutional efforts can facilitate training of deep MRI reconstruction models, albeit privacy risks arise during cross-site sharing of imaging data.
We introduce a novel method for MRI reconstruction based on Federated learning of Generative IMage Priors (FedGIMP).
FedGIMP leverages a two-stage approach: cross-site learning of a generative MRI prior, and subject-specific injection of the imaging operator.
arXiv Detail & Related papers (2022-02-08T22:17:57Z)
- Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers [0.0]
We introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER).
A zero-shot reconstruction is performed on undersampled test data, with inference carried out by optimizing the network parameters (see the sketch after this list).
Experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against several state-of-the-art unsupervised methods.
arXiv Detail & Related papers (2021-05-15T02:01:21Z)
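As referenced in the SLATER entry above, the sketch below illustrates how zero-shot inference with an unconditional generative prior can be set up: a latent code and the generator's weights are optimized on a single undersampled test scan to minimize a k-space data-consistency loss. This is a hedged illustration rather than SLATER's implementation; the generator interface generator(z), the mask, the latent dimension, and the optimizer settings are assumptions.

```python
# Hypothetical zero-shot reconstruction sketch (not the SLATER code):
# optimize a latent code and the generator's weights on one undersampled
# test scan so the generated image agrees with the acquired k-space.
import torch


def zero_shot_reconstruct(generator, y, mask, latent_dim=256, steps=500, lr=1e-3):
    z = torch.randn(1, latent_dim, requires_grad=True)   # subject-specific latent
    optimizer = torch.optim.Adam([z] + list(generator.parameters()), lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        x = generator(z)                                  # candidate image
        k = torch.fft.fft2(x, norm="ortho")
        loss = (((k - y) * mask).abs() ** 2).mean()       # data-consistency loss
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        return generator(z)
```

Because no fully-sampled data from the test domain are used, the same loop applies unchanged when the imaging operator shifts, which is the reliability argument made by the unconditional-prior methods listed above.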