An Iteration-Free Fixed-Point Estimator for Diffusion Inversion
- URL: http://arxiv.org/abs/2512.08547v1
- Date: Tue, 09 Dec 2025 12:44:51 GMT
- Title: An Iteration-Free Fixed-Point Estimator for Diffusion Inversion
- Authors: Yifei Chen, Kaiyu Song, Yan Pan, Jianxing Yu, Jian Yin, Hanjiang Lai
- Abstract summary: We propose an iteration-free fixed-point estimator for diffusion inversion. We evaluate reconstruction performance on two text-image datasets, NOCAPS and MS-COCO.
- Score: 26.669535386141778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion inversion aims to recover the initial noise corresponding to a given image such that this noise can reconstruct the original image through the denoising diffusion process. The key component of diffusion inversion is to minimize errors at each inversion step, thereby mitigating cumulative inaccuracies. Recently, fixed-point iteration has emerged as a widely adopted approach to minimize reconstruction errors at each inversion step. However, it suffers from high computational costs due to its iterative nature and the complexity of hyperparameter selection. To address these issues, we propose an iteration-free fixed-point estimator for diffusion inversion. First, we derive an explicit expression of the fixed point from an ideal inversion step. Unfortunately, it inherently contains an unknown data prediction error. Building upon this, we introduce the error approximation, which uses the calculable error from the previous inversion step to approximate the unknown error at the current inversion step. This yields a calculable, approximate expression for the fixed point, which is an unbiased estimator characterized by low variance, as shown by our theoretical analysis. We evaluate reconstruction performance on two text-image datasets, NOCAPS and MS-COCO. Compared to DDIM inversion and other inversion methods based on the fixed-point iteration, our method achieves consistent and superior performance in reconstruction tasks without additional iterations or training.
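The fixed-point view described in the abstract can be made concrete with a small numerical sketch. The snippet below is an illustrative toy, not the authors' code: the noise predictor `toy_eps`, the schedule values, and all function names are assumptions. It shows the implicit DDIM inversion equation and the standard fixed-point iteration that prior methods use to solve it, which costs one extra network evaluation per pass — the cost the proposed iteration-free estimator avoids.

```python
import numpy as np

def toy_eps(x, t):
    # Stand-in for a trained noise-prediction network eps_theta(x, t);
    # any smooth, contractive function suffices to illustrate the fixed point.
    return 0.1 * x + 0.01 * t

def ddim_inv(x_t, a_t, a_next, eps):
    # One DDIM inversion step x_t -> x_{t+1} for a given noise estimate eps:
    # x_{t+1} = sqrt(a_{t+1}/a_t) * x_t
    #           + (sqrt(1 - a_{t+1}) - sqrt(a_{t+1} * (1 - a_t) / a_t)) * eps
    return (np.sqrt(a_next / a_t) * x_t
            + (np.sqrt(1 - a_next) - np.sqrt(a_next * (1 - a_t) / a_t)) * eps)

def naive_step(x_t, t_next, a_t, a_next):
    # Plain DDIM inversion: approximate eps(x_{t+1}) by eps(x_t), which
    # introduces the per-step error the abstract discusses.
    return ddim_inv(x_t, a_t, a_next, toy_eps(x_t, t_next))

def fixed_point_step(x_t, t_next, a_t, a_next, n_iters=5):
    # Fixed-point iteration (the baseline the paper avoids): repeatedly
    # re-evaluate eps at the current estimate of x_{t+1} until the implicit
    # equation x_{t+1} = ddim_inv(x_t, ..., eps(x_{t+1})) is self-consistent.
    x_next = naive_step(x_t, t_next, a_t, a_next)
    for _ in range(n_iters):
        x_next = ddim_inv(x_t, a_t, a_next, toy_eps(x_next, t_next))
    return x_next

def residual(x_t, x_next, t_next, a_t, a_next):
    # How far x_next is from satisfying the implicit inversion equation.
    return np.abs(x_next - ddim_inv(x_t, a_t, a_next,
                                    toy_eps(x_next, t_next))).max()
```

On this toy signal the fixed-point residual drops by many orders of magnitude relative to the naive step, but each iteration requires another call to the noise predictor. The paper's contribution, as the abstract describes it, is to approximate the unknown data-prediction error at the current step with the calculable error from the previous step, reaching a comparably accurate fixed point in closed form without extra iterations.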
Related papers
- POLARIS: Projection-Orthogonal Least Squares for Robust and Adaptive Inversion in Diffusion Models [9.779983925649857]
The Inversion-Denoising Paradigm, which is based on diffusion models, excels in diverse image editing and restoration tasks. We revisit its mechanism and reveal a critical, overlooked factor in reconstruction degradation: the approximate noise error. We introduce Projection-Orthogonal Least Squares for Robust and Adaptive Inversion (POLARIS), which reformulates inversion from an error-compensation problem into an error-origin problem.
arXiv Detail & Related papers (2025-11-29T07:35:20Z) - Enhancing Diffusion Models for Inverse Problems with Covariance-Aware Posterior Sampling [3.866047645663101]
In computer vision, for example, tasks such as inpainting, deblurring, and super-resolution can be effectively modeled as inverse problems. DDPMs are shown to provide a promising solution to noisy linear inverse problems without the need for additional task-specific training.
arXiv Detail & Related papers (2024-12-28T06:17:44Z) - Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
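The idea in this entry — error feedback combined with a normalized update — can be sketched in a few lines. Everything below is a generic single-worker illustration under assumptions (a top-1 compressor and a quadratic objective), not the paper's algorithm.

```python
import numpy as np

def top1(v):
    # Extreme sparsifying compressor: keep only the largest-magnitude entry.
    out = np.zeros_like(v)
    i = int(np.argmax(np.abs(v)))
    out[i] = v[i]
    return out

def normalized_error_feedback(grad_fn, x0, lr=0.1, steps=300):
    # Error feedback: add the previously discarded compression error back into
    # the gradient before compressing, then take a *normalized* step in the
    # compressed direction.
    x = x0.astype(float).copy()
    e = np.zeros_like(x)              # accumulated compression error
    for _ in range(steps):
        g = grad_fn(x) + e            # correct the gradient with stored error
        c = top1(g)                   # transmit only the compressed message
        e = g - c                     # remember what the compressor discarded
        x = x - lr * c / (np.linalg.norm(c) + 1e-12)  # normalized update
    return x
```

On the quadratic f(x) = ||x||^2 / 2 this drives the iterate into an O(lr) neighborhood of the minimizer; the normalization is what permits the larger step sizes the entry refers to under (L0, L1)-smoothness.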
arXiv Detail & Related papers (2024-10-22T10:19:27Z) - Score-Based Variational Inference for Inverse Problems [19.848238197979157]
In applications where the posterior mean is preferred, one has to generate multiple samples from the posterior, which is time-consuming.
We establish a framework termed reverse mean propagation (RMP) that targets the posterior mean directly.
We develop an algorithm that optimizes the reverse KL divergence via natural gradient descent using score functions and propagates the mean at each reverse step.
arXiv Detail & Related papers (2024-10-08T02:55:16Z) - A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem in which the task is to recover an unknown signal from magnitude-only measurements corrupted by outliers. Our proposed oracle avoids the need for computationally expensive spectral methods, using a simple gradient step while handling outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Amortized Posterior Sampling with Diffusion Prior Distillation [55.03585818289934]
Amortized Posterior Sampling is a novel variational inference approach for efficient posterior sampling in inverse problems. Our method trains a conditional flow model to minimize the divergence between the variational distribution and the posterior distribution implicitly defined by the diffusion model. Unlike existing methods, our approach is unsupervised, requires no paired training data, and is applicable to both Euclidean and non-Euclidean domains.
arXiv Detail & Related papers (2024-07-25T09:53:12Z) - Improving Diffusion Inverse Problem Solving with Decoupled Noise Annealing [84.97865583302244]
We propose a new method called Decoupled Annealing Posterior Sampling (DAPS). DAPS relies on a novel noise annealing process. We demonstrate that DAPS significantly improves sample quality and stability across multiple image restoration tasks.
arXiv Detail & Related papers (2024-07-01T17:59:23Z) - Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for each specific problem.
Inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation.
arXiv Detail & Related papers (2024-02-03T13:35:39Z) - Adaptive operator learning for infinite-dimensional Bayesian inverse problems [7.716833952167609]
We develop an adaptive operator learning framework that can reduce modeling error gradually by forcing the surrogate to be accurate in local areas.
We present a rigorous convergence guarantee in the linear case using the UKI framework.
The numerical results show that our method can significantly reduce computational costs while maintaining inversion accuracy.
arXiv Detail & Related papers (2023-10-27T01:50:33Z) - Refining Amortized Posterior Approximations using Gradient-Based Summary Statistics [0.9176056742068814]
We present an iterative framework to improve the amortized approximations of posterior distributions in the context of inverse problems.
We validate our method in a controlled setting by applying it to a stylized problem, and observe improved posterior approximations with each iteration.
arXiv Detail & Related papers (2023-05-15T15:47:19Z) - Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.