POLARIS: Projection-Orthogonal Least Squares for Robust and Adaptive Inversion in Diffusion Models
- URL: http://arxiv.org/abs/2512.00369v1
- Date: Sat, 29 Nov 2025 07:35:20 GMT
- Title: POLARIS: Projection-Orthogonal Least Squares for Robust and Adaptive Inversion in Diffusion Models
- Authors: Wenshuo Chen, Haosen Li, Shaofeng Liang, Lei Wang, Haozhe Jia, Kaishen Yuan, Jieming Wu, Bowen Tian, Yutao Yue
- Abstract summary: The Inversion-Denoising Paradigm, which is based on diffusion models, excels in diverse image editing and restoration tasks. We revisit its mechanism and reveal a critical, overlooked factor in reconstruction degradation: the approximate noise error. We introduce Projection-Orthogonal Least Squares for Robust and Adaptive Inversion (POLARIS), which reformulates inversion from an error-compensation problem into an error-origin problem.
- Score: 9.779983925649857
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The Inversion-Denoising Paradigm, which is based on diffusion models, excels in diverse image editing and restoration tasks. We revisit its mechanism and reveal a critical, overlooked factor in reconstruction degradation: the approximate noise error. This error stems from approximating the noise at step t with the prediction at step t-1, resulting in severe error accumulation throughout the inversion process. We introduce Projection-Orthogonal Least Squares for Robust and Adaptive Inversion (POLARIS), which reformulates inversion from an error-compensation problem into an error-origin problem. Rather than optimizing embeddings or latent codes to offset accumulated drift, POLARIS treats the guidance scale ω as a step-wise variable and derives a mathematically grounded formula to minimize inversion error at each step. Remarkably, POLARIS improves inversion latent quality with just one line of code. With negligible performance overhead, it substantially mitigates noise approximation errors and consistently improves the accuracy of downstream tasks.
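The abstract's step-wise least-squares view of the guidance scale can be made concrete. Below is a minimal sketch under classifier-free guidance, assuming unconditional and conditional noise predictions plus some per-step reference noise `eps_ref` that the guided prediction should reproduce; the function name, tensor shapes, and the choice of reference are assumptions of this sketch, since the paper's derivation is not given here.

```python
import torch

def polaris_omega(eps_uncond, eps_cond, eps_ref):
    """Step-wise guidance scale via orthogonal least-squares projection.

    Solves  w* = argmin_w || eps_uncond + w*(eps_cond - eps_uncond) - eps_ref ||^2,
    whose closed form is the projection coefficient of the residual onto the
    guidance direction. `eps_ref` (the noise the guided prediction should
    match at this step) is an assumption of this sketch, not the paper's
    exact target.
    """
    d = (eps_cond - eps_uncond).flatten(1)   # guidance direction, per sample
    r = (eps_ref - eps_uncond).flatten(1)    # residual the scale must explain
    # The "one line of code": a per-sample least-squares coefficient.
    return (r * d).sum(1) / d.pow(2).sum(1).clamp_min(1e-12)
```

In a DDIM-style inversion loop, this per-step scalar would replace the fixed guidance scale ω otherwise applied at every step.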
Related papers
- Plug-and-Play Fidelity Optimization for Diffusion Transformer Acceleration via Cumulative Error Minimization [26.687056294842083]
Caching-based methods achieve training-free acceleration but suffer from considerable computational error. Existing methods typically incorporate error-correction strategies such as pruning or prediction to mitigate it. We propose CEM, a novel fidelity-optimization plugin for existing error-correction methods based on cumulative error minimization.
arXiv Detail & Related papers (2025-12-29T07:36:36Z)
- An Iteration-Free Fixed-Point Estimator for Diffusion Inversion [26.669535386141778]
We propose an iteration-free fixed-point estimator for diffusion inversion. We evaluate reconstruction performance on two text-image datasets, NOCAPS and MS-COCO.
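For context, exact inversion of a DDIM step is an implicit equation in the unknown next latent, conventionally solved by fixed-point iteration; the paper above replaces that loop with a single estimate. A minimal sketch of the iterative baseline, where `f` stands for the one-step inversion map (an assumption of this sketch):

```python
def fixed_point_inversion_step(x_t, f, num_iters=3):
    # Exact DDIM inversion solves z = f(z), where f evaluates the noise
    # predictor at the unknown next latent z. Picard iteration is the
    # standard baseline that an iteration-free estimator avoids.
    z = x_t                      # initialize at the current latent
    for _ in range(num_iters):   # fixed-point (Picard) iterations
        z = f(z)
    return z
```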
arXiv Detail & Related papers (2025-12-09T12:44:51Z)
- The Hidden Cost of Approximation in Online Mirror Descent [56.99972253009168]
Online mirror descent (OMD) is a fundamental algorithmic paradigm that underlies many algorithms in optimization, machine learning, and sequential decision-making. In this work we initiate a systematic study into inexact OMD, and uncover an intricate relation between regularizer smoothness and robustness to approximation errors.
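For reference, the exact OMD update with regularizer ψ, Bregman divergence D_ψ, gradient g_t, and step size η is shown below in standard notation (not the paper's exact formulation); the inexact setting replaces the argmin or the mirror-map evaluations with approximations, and the snippet's point is that how these errors compound depends on the smoothness of ψ.

```latex
x_{t+1} \;=\; \arg\min_{x \in \mathcal{X}} \Big\{ \eta \,\langle g_t, x \rangle + D_{\psi}(x, x_t) \Big\},
\qquad
D_{\psi}(x, y) \;=\; \psi(x) - \psi(y) - \langle \nabla \psi(y),\, x - y \rangle .
```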
arXiv Detail & Related papers (2025-11-27T10:09:07Z)
- Noise is All You Need: Solving Linear Inverse Problems by Noise Combination Sampling with Diffusion Models [7.219077740523681]
We propose a novel method that synthesizes an optimal noise vector from a noise subspace to approximate the measurement score. Our method can be applied to a wide range of inverse problems, including image compression.
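The "optimal noise vector from a noise subspace" idea can be sketched as a plain least-squares fit: choose coefficients for a small bank of noise samples so that their combination best matches a target direction. The names and the unconstrained least-squares objective below are assumptions of this sketch, not the paper's exact construction.

```python
import torch

def combine_noise(noise_bank, target_score):
    # noise_bank: (k, ...) stack of k noise samples spanning the subspace;
    # target_score: (...) direction (e.g. a measurement score) to approximate.
    N = noise_bank.flatten(1).T              # (dim, k) basis matrix
    s = target_score.flatten().unsqueeze(1)  # (dim, 1) target vector
    c = torch.linalg.lstsq(N, s).solution    # (k, 1) least-squares coefficients
    return (N @ c).view_as(target_score)     # best approximation in the span
```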
arXiv Detail & Related papers (2025-10-24T07:46:23Z)
- Diffusion Models for Solving Inverse Problems via Posterior Sampling with Piecewise Guidance [52.705112811734566]
A novel diffusion-based framework is introduced for solving inverse problems using a piecewise guidance scheme. The proposed method is problem-agnostic and readily adaptable to a variety of inverse problems. The framework achieves a 25% reduction in inference time for inpainting with both random and center masks, and 23% and 24% reductions for 4× and 8× super-resolution tasks.
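In its generic form, "piecewise guidance" means a guidance strength that changes across diffusion time. A toy illustration only; the switch point and weights below are placeholders, not the paper's schedule:

```python
def piecewise_guidance_weight(t, t_switch=0.5, w_early=1.0, w_late=0.3):
    # t in [0, 1] is normalized diffusion time (1 = pure noise). Early steps
    # get one guidance strength and late steps another; the paper's scheme
    # may use more pieces or different switching criteria.
    return w_early if t >= t_switch else w_late
```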
arXiv Detail & Related papers (2025-07-22T19:35:14Z)
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
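One plausible form of the normalized update, written in standard error-feedback notation with compressor $\mathcal{C}$, error memory $e_t$, and step size $\gamma$ (the paper's exact algorithm may differ):

```latex
m_t = e_t + \nabla f(x_t), \qquad
x_{t+1} = x_t - \gamma \,\frac{\mathcal{C}(m_t)}{\|\mathcal{C}(m_t)\|}, \qquad
e_{t+1} = m_t - \mathcal{C}(m_t).
```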
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- Adaptive operator learning for infinite-dimensional Bayesian inverse problems [7.716833952167609]
We develop an adaptive operator learning framework that can reduce modeling error gradually by forcing the surrogate to be accurate in local areas.
We present a rigorous convergence guarantee in the linear case using the UKI framework.
The numerical results show that our method can significantly reduce computational costs while maintaining inversion accuracy.
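In standard notation, the setting is a Bayesian inverse problem in which the expensive forward map $\mathcal{G}$ is replaced by a learned surrogate $\mathcal{G}_{\psi}$; the adaptivity described above amounts to retraining $\mathcal{G}_{\psi}$ so the modeling error stays small near the current estimates rather than globally. A generic statement of the setup, not the paper's exact formulation:

```latex
y = \mathcal{G}(u) + \eta, \quad \eta \sim \mathcal{N}(0, \Gamma), \qquad
\|\mathcal{G}(u) - \mathcal{G}_{\psi}(u)\| \text{ kept small locally around the current estimate of } u .
```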
arXiv Detail & Related papers (2023-10-27T01:50:33Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
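The "Laplace" in the name refers to the classical Laplace approximation: fit a Gaussian to the true posterior at a mode, with covariance taken from the curvature of the log joint. In standard form (the paper builds its amortized variant on this recipe):

```latex
q(z \mid x) = \mathcal{N}\!\left(z;\, \hat{z},\, \Sigma\right), \qquad
\hat{z} = \arg\max_{z} \log p(x, z), \qquad
\Sigma = \left( -\nabla_z^2 \log p(x, z)\big|_{z=\hat{z}} \right)^{-1}.
```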
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- Diffusion Posterior Sampling for General Noisy Inverse Problems [50.873313752797124]
We extend diffusion solvers to handle noisy (non)linear inverse problems via approximation of the posterior sampling.
Our method demonstrates that diffusion models can incorporate various measurement noise statistics.
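The core mechanism of this line of work is well known: estimate the clean sample from the current latent, then nudge the reverse step along the gradient of the measurement residual. A minimal sketch, where `x0_hat_fn` (a Tweedie-style posterior-mean estimate) and `forward_op` (the measurement operator A) are assumed callables:

```python
import torch

def dps_correction(x_t, x0_hat_fn, forward_op, y, zeta=1.0):
    # Differentiate the measurement residual ||y - A(x0_hat(x_t))|| with
    # respect to the current latent; the result is subtracted from the usual
    # reverse-diffusion update. The step size zeta is a tunable assumption.
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = x0_hat_fn(x_t)                        # estimate of the clean sample
    residual = torch.linalg.vector_norm(y - forward_op(x0_hat))
    grad = torch.autograd.grad(residual, x_t)[0]
    return zeta * grad                             # subtract from x_{t-1}
```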
arXiv Detail & Related papers (2022-09-29T11:12:27Z)
- Learned convex regularizers for inverse problems [3.294199808987679]
We propose to learn a data-adaptive input-convex neural network (ICNN) as a regularizer for inverse problems.
We prove the existence of a sub-gradient-based algorithm that leads to a monotonically decreasing error in the parameter space with iterations.
We show that the proposed convex regularizer is at least competitive with and sometimes superior to state-of-the-art data-driven techniques for inverse problems.
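An input-convex neural network is convex in its input because the weights acting on previous activations are constrained to be non-negative and the activations are convex and non-decreasing. A minimal single-layer sketch following generic ICNN conventions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ICNNLayer(nn.Module):
    """z_{k+1} = softplus(W_z z_k + W_x x + b), with W_z kept non-negative.

    Non-negative W_z plus a convex, non-decreasing activation preserves
    convexity of the layer output (and hence the network) in the input x.
    """
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.Wz = nn.Parameter(0.1 * torch.rand(hid_dim, hid_dim))  # clamped >= 0
        self.Wx = nn.Linear(in_dim, hid_dim)                         # unconstrained

    def forward(self, z, x):
        return nn.functional.softplus(z @ self.Wz.clamp(min=0).T + self.Wx(x))
```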
arXiv Detail & Related papers (2020-08-06T18:58:35Z)
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can address the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
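The underlying estimator is the classical control variate: learn $g_\theta \approx f$ with a tractable integral $G_\theta$, and correct it with a Monte Carlo estimate of the difference, so the variance-minimizing objective is the variance of the corrected term $(f - g_\theta)/p$. In standard form (the paper's composite loss modifies this for stable online training):

```latex
F = \int f(x)\,\mathrm{d}x
\;\approx\;
G_{\theta} + \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i) - g_{\theta}(x_i)}{p(x_i)},
\qquad
G_{\theta} = \int g_{\theta}(x)\,\mathrm{d}x, \quad x_i \sim p .
```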
arXiv Detail & Related papers (2020-06-02T11:17:55Z)