Fast and Stable Diffusion Inverse Solver with History Gradient Update
- URL: http://arxiv.org/abs/2307.12070v2
- Date: Mon, 11 Mar 2024 09:11:44 GMT
- Title: Fast and Stable Diffusion Inverse Solver with History Gradient Update
- Authors: Linchao He, Hongyu Yan, Mengting Luo, Hongjie Wu, Kunming Luo, Wang
Wang, Wenchao Du, Hu Chen, Hongyu Yang, Yi Zhang, Jiancheng Lv
- Abstract summary: We introduce the incorporation of historical gradients into this optimization process, termed History Gradient Update (HGU).
Experimental results demonstrate that, compared to previous sampling algorithms, sampling algorithms with HGU achieve state-of-the-art results in medical image reconstruction.
- Score: 28.13197297970759
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion models have recently been recognised as efficient inverse problem
solvers due to their ability to produce high-quality reconstruction results
without relying on pairwise data training. Existing diffusion-based solvers
utilize a Gradient Descent strategy to obtain an optimal sample solution. However,
these solvers only calculate the current gradient and do not exploit any
history information from the sampling process, resulting in unstable
optimization and suboptimal solutions. To address this issue, we
propose to utilize the history information of the diffusion-based inverse
solvers. In this paper, we first prove that, in previous work, using the
gradient descent method to optimize the data fidelity term is convergent.
Building on this, we introduce the incorporation of historical gradients into
this optimization process, termed History Gradient Update (HGU). We also
provide theoretical evidence that HGU ensures the convergence of the entire
algorithm. It's worth noting that HGU is applicable to both pixel-based and
latent-based diffusion model solvers. Experimental results demonstrate that,
compared to previous sampling algorithms, sampling algorithms with HGU achieve
state-of-the-art results in medical image reconstruction, surpassing even
supervised learning methods. Additionally, it achieves competitive results on
natural images.
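The abstract describes HGU only at a high level, so the snippet below is a minimal sketch of the general idea rather than the authors' algorithm: a DPS-style data-fidelity gradient step augmented with an exponential moving average of past gradients (the "history" term). The forward operator `A`, the denoised-estimate function `x0_from_xt`, and the hyperparameters `lr` and `beta` are placeholders, not values from the paper.

```python
import torch

def fidelity_grad(x_t, y, A, x0_from_xt):
    """Gradient of the data-fidelity term ||y - A(x0_hat)||^2 w.r.t. x_t,
    where x0_hat is the denoised estimate predicted from the current x_t."""
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = x0_from_xt(x_t)                     # denoised estimate at this step
    loss = torch.sum((y - A(x0_hat)) ** 2)
    return torch.autograd.grad(loss, x_t)[0]

def hgu_step(x_t, y, A, x0_from_xt, history, lr=1.0, beta=0.9):
    """One data-consistency update that mixes the current gradient with an
    exponential moving average of past gradients (a history-style term)."""
    g = fidelity_grad(x_t, y, A, x0_from_xt)
    history = beta * history + (1.0 - beta) * g  # accumulate gradient history
    x_t = x_t - lr * history                     # step along the smoothed direction
    return x_t, history
```

In a full sampler this update would be interleaved with the usual reverse-diffusion denoising step at each timestep, with `history` initialized to zeros; the exact form of the history update and its convergence analysis are given in the paper, not here.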
Related papers
- Diffusion State-Guided Projected Gradient for Inverse Problems [82.24625224110099]
We propose Diffusion State-Guided Projected Gradient (DiffStateGrad) for inverse problems.
DiffStateGrad projects the measurement gradient onto a subspace that is a low-rank approximation of an intermediate state of the diffusion process.
We highlight that DiffStateGrad improves the robustness of diffusion models to the choice of measurement-guidance step size and to noise.
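The construction details (which intermediate state is decomposed, how the rank is chosen, and how image channels are handled) are in the DiffStateGrad paper; the snippet below is only a rough single-channel illustration of "project the measurement gradient onto the top singular subspace of an intermediate diffusion state", with `rank` picked arbitrarily.

```python
import torch

def project_grad_onto_state_subspace(grad, state, rank=16):
    # Low-rank basis from an intermediate diffusion state (treated as a 2-D matrix).
    U, S, Vh = torch.linalg.svd(state, full_matrices=False)
    U_r, V_r = U[:, :rank], Vh[:rank, :].T
    # Project the measurement gradient onto the span of the top-r singular vectors.
    return U_r @ (U_r.T @ grad @ V_r) @ V_r.T
```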
arXiv Detail & Related papers (2024-10-04T14:26:54Z)
- Gaussian is All You Need: A Unified Framework for Solving Inverse Problems via Diffusion Posterior Sampling [16.683393726483978]
Diffusion models can generate a variety of high-quality images by modeling complex data distributions.
Most of the existing diffusion-based methods integrate data consistency steps within the diffusion reverse sampling process.
We show that the existing approximations are either insufficient or computationally inefficient.
arXiv Detail & Related papers (2024-09-13T15:20:03Z)
- Improving Diffusion Inverse Problem Solving with Decoupled Noise Annealing [84.97865583302244]
We propose a new method called Decoupled Annealing Posterior Sampling (DAPS) that relies on a novel noise annealing process.
DAPS significantly improves sample quality and stability across multiple image restoration tasks.
For example, we achieve a PSNR of 30.72dB on the FFHQ 256 dataset for phase retrieval, which is an improvement of 9.12dB compared to existing methods.
arXiv Detail & Related papers (2024-07-01T17:59:23Z)
- Deep Data Consistency: a Fast and Robust Diffusion Model-based Solver for Inverse Problems [0.0]
We propose Deep Data Consistency (DDC) to update the data consistency step with a deep learning model when solving inverse problems with diffusion models.
In comparison with state-of-the-art methods on linear and non-linear tasks, DDC demonstrates outstanding performance on both similarity and realness metrics.
arXiv Detail & Related papers (2024-05-17T12:54:43Z)
- Deep Equilibrium Diffusion Restoration with Parallel Sampling [120.15039525209106]
Diffusion model-based image restoration (IR) aims to use diffusion models to recover high-quality (HQ) images from degraded images, achieving promising performance.
Most existing methods need long serial sampling chains to restore HQ images step-by-step, resulting in expensive sampling time and high computation costs.
In this work, we rethink diffusion model-based IR from a different perspective, i.e., as a deep equilibrium (DEQ) fixed-point system, called DeqIR.
arXiv Detail & Related papers (2023-11-20T08:27:56Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- Improving Diffusion Models for Inverse Problems using Manifold Constraints [55.91148172752894]
We show that current solvers throw the sample path off the data manifold, and hence the error accumulates.
To address this, we propose an additional correction term inspired by the manifold constraint.
We show that our method is superior to the previous methods both theoretically and empirically.
arXiv Detail & Related papers (2022-06-02T09:06:10Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach that learns discriminative shrinkage functions to implicitly model the data and regularization terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- Score-based diffusion models for accelerated MRI [35.3148116010546]
We introduce a way to sample data from a conditional distribution given the measurements, such that the model can be readily used for solving inverse problems in imaging.
Our model requires magnitude images only for training, and yet is able to reconstruct complex-valued data, and even extends to parallel imaging.
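The summary does not spell out how the measurements enter the sampling loop; one common ingredient in score-based MRI reconstruction, sketched below as a generic illustration rather than this paper's exact procedure, is a hard data-consistency step that re-imposes the acquired k-space samples on the current estimate between score updates. `mask` (the 0/1 sampling pattern) and `y_under` (the measured, zero-filled k-space) are placeholder names.

```python
import torch

def kspace_data_consistency(x, y_under, mask):
    """Keep measured k-space values and fill unmeasured ones from the current estimate."""
    k = torch.fft.fft2(x)                  # current estimate in k-space
    k = mask * y_under + (1.0 - mask) * k  # mask: 1 where k-space was acquired
    return torch.fft.ifft2(k)              # back to image space
```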
arXiv Detail & Related papers (2021-10-08T08:42:03Z)
- Learning regularization and intensity-gradient-based fidelity for single image super resolution [0.0]
We study the image degradation process and establish a degradation model in both intensity and gradient space.
A comprehensive data consistency constraint is established for the reconstruction.
The proposed fidelity term and designed regularization term are embedded into the regularization framework.
arXiv Detail & Related papers (2020-03-24T07:03:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.