Regularization by Texts for Latent Diffusion Inverse Solvers
- URL: http://arxiv.org/abs/2311.15658v3
- Date: Tue, 11 Mar 2025 08:04:26 GMT
- Title: Regularization by Texts for Latent Diffusion Inverse Solvers
- Authors: Jeongsol Kim, Geon Yeong Park, Hyungjin Chung, Jong Chul Ye
- Abstract summary: We introduce a novel latent diffusion inverse solver, regularization by text (TReg), inspired by the human ability to resolve visual ambiguities through perceptual biases. Our experimental results demonstrate that TReg effectively mitigates ambiguity in inverse problems, improving both accuracy and efficiency.
- Score: 55.97917698941313
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recent development of diffusion models has led to significant progress in solving inverse problems by leveraging these models as powerful generative priors. However, challenges persist due to the ill-posed nature of such problems, often arising from ambiguities in measurements or intrinsic system symmetries. To address this, here we introduce a novel latent diffusion inverse solver, regularization by text (TReg), inspired by the human ability to resolve visual ambiguities through perceptual biases. TReg integrates textual descriptions of preconceptions about the solution during reverse diffusion sampling, dynamically reinforcing these descriptions through null-text optimization, which we refer to as adaptive negation. Our comprehensive experimental results demonstrate that TReg effectively mitigates ambiguity in inverse problems, improving both accuracy and efficiency.
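The adaptive-negation idea can be illustrated with a toy classifier-free-guidance step in which a null-text embedding is optimized during sampling. Everything below is a minimal numpy sketch under assumed names, not the authors' implementation: `denoiser` is a stand-in linear model for a conditional diffusion U-Net, and the gap-widening objective is one plausible reading of how optimizing the null embedding can reinforce the text prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser(z, emb):
    """Hypothetical linear 'denoiser' standing in for a conditional
    diffusion U-Net: its noise prediction shifts with the embedding."""
    return z - 0.1 * emb

def cfg(z, text_emb, null_emb, w=5.0):
    """Classifier-free guidance: combine conditional and null predictions."""
    e_c = denoiser(z, text_emb)
    e_n = denoiser(z, null_emb)
    return e_n + w * (e_c - e_n)

def adapt_null(z, text_emb, null_emb, lr=0.5, steps=20):
    """Toy 'adaptive negation': gradient-ascend the null embedding on
    0.5 * ||e_n - e_c||^2, widening the gap between the null and
    text-conditioned predictions and thus strengthening the guidance
    direction used by cfg()."""
    e_c = denoiser(z, text_emb)
    for _ in range(steps):
        e_n = denoiser(z, null_emb)
        # For the linear denoiser above, d(e_n)/d(null_emb) = -0.1 * I,
        # so the gradient of the gap objective w.r.t. null_emb is:
        grad = -0.1 * (e_n - e_c)
        null_emb = null_emb + lr * grad  # ascent step
    return null_emb

z = rng.normal(size=4)          # toy latent
text_emb = rng.normal(size=4)   # toy text embedding
null_emb = np.zeros(4)          # initial null-text embedding

gap_before = np.linalg.norm(denoiser(z, text_emb) - denoiser(z, null_emb))
null_opt = adapt_null(z, text_emb, null_emb)
gap_after = np.linalg.norm(denoiser(z, text_emb) - denoiser(z, null_opt))
```

In a real latent diffusion sampler the same idea would update the null-text embedding between denoising steps, so the guidance term increasingly pushes the sample toward the textual preconception.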
Related papers
- ReGuLaR: Variational Latent Reasoning Guided by Rendered Chain-of-Thought [49.203970812338916]
Explicit reasoning chains introduce substantial computational redundancy. Recent latent reasoning methods attempt to mitigate this by compressing reasoning processes into latent space. We propose Rendered CoT-Guided variational Latent Reasoning (ReGuLaR).
arXiv Detail & Related papers (2026-01-30T17:08:06Z) - DAPS++: Rethinking Diffusion Inverse Problems with Decoupled Posterior Annealing [5.215481191227242]
We introduce DAPS++, which allows the likelihood term to guide inference more directly while maintaining numerical stability. DAPS++ achieves high computational efficiency and robust reconstruction performance across diverse image restoration tasks.
arXiv Detail & Related papers (2025-11-21T08:28:36Z) - Align & Invert: Solving Inverse Problems with Diffusion and Flow-based Models via Representational Alignment [13.028121107802127]
In inverse problems, pretrained generative models are employed as priors. We propose applying representation alignment (REPA) between diffusion or flow-based models and a pretrained self-supervised visual encoder. We show that aligning model representations with approximate target features can substantially enhance reconstruction fidelity and perceptual realism.
arXiv Detail & Related papers (2025-11-21T00:37:04Z) - Diffusion models for inverse problems [57.87606622211111]
We review the various approaches that have been proposed over the years. We cover extensions to more challenging settings, including blind cases, high-dimensional data, and problems under data scarcity and distribution mismatch.
arXiv Detail & Related papers (2025-08-04T01:26:06Z) - EquiReg: Equivariance Regularized Diffusion for Inverse Problems [67.01847869495558]
We propose EquiReg diffusion, a framework for regularizing posterior sampling in diffusion-based inverse problem solvers. When applied to a variety of solvers, EquiReg outperforms state-of-the-art diffusion models in both linear and nonlinear image restoration tasks.
arXiv Detail & Related papers (2025-05-29T01:25:43Z) - Improving Diffusion-based Inverse Algorithms under Few-Step Constraint via Learnable Linear Extrapolation [22.6710110305133]
Learnable Linear Extrapolation (LLE) is a lightweight approach that universally enhances the performance of any diffusion-based inverse algorithm.
Our experiments demonstrate consistent improvements of the proposed LLE method across multiple algorithms and tasks.
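The extrapolation idea behind LLE can be sketched on a toy problem: combine the last two iterates of any iterative solver with a learned coefficient. The solver, target, and closed-form fit below are all illustrative assumptions, not the paper's method, which learns the coefficients over full diffusion trajectories.

```python
import numpy as np

rng = np.random.default_rng(3)
target = rng.normal(size=4)  # toy ground truth the solver converges toward

def solver_step(x):
    """One step of a hypothetical slow iterative solver."""
    return x + 0.1 * (target - x)

def extrapolate(x_prev, x_curr, alpha):
    """Linear extrapolation from the last two iterates."""
    return x_curr + alpha * (x_curr - x_prev)

x_prev = np.zeros(4)
x_curr = solver_step(x_prev)

# Fit alpha in closed form to minimize ||extrapolate(...) - target||^2;
# in LLE such coefficients are learned rather than fit against the truth.
d = x_curr - x_prev
alpha = float(d @ (target - x_curr) / (d @ d))
x_ext = extrapolate(x_prev, x_curr, alpha)
```

On this linear toy the fitted extrapolation recovers the target exactly in one shot, which is the intuition for why a lightweight learned combination of iterates can accelerate a slow solver.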
arXiv Detail & Related papers (2025-03-13T07:00:27Z) - MAP-based Problem-Agnostic diffusion model for Inverse Problems [8.161067848524976]
We propose a problem-agnostic diffusion model called the maximum a posteriori (MAP)-based guided term estimation method for inverse problems.
This innovation allows us to better capture the intrinsic properties of the data, leading to improved performance.
arXiv Detail & Related papers (2025-01-25T08:30:15Z) - G2D2: Gradient-guided Discrete Diffusion for image inverse problem solving [55.185588994883226]
This paper presents a novel method for addressing linear inverse problems by leveraging image-generation models based on discrete diffusion as priors.
To the best of our knowledge, this is the first approach to use discrete diffusion model-based priors for solving image inverse problems.
arXiv Detail & Related papers (2024-10-09T06:18:25Z) - Diffusion Prior-Based Amortized Variational Inference for Noisy Inverse Problems [12.482127049881026]
We propose a novel approach to solve inverse problems with a diffusion prior from an amortized variational inference perspective.
Our amortized inference learns a function that directly maps measurements to the implicit posterior distributions of corresponding clean data, enabling a single-step posterior sampling even for unseen measurements.
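The single-step measurement-to-posterior map can be made concrete in the linear-Gaussian case, where the amortized function has a closed form. This numpy sketch assumes a known forward operator `A` and a standard-normal prior; in the paper's setting a learned network plays the role of `amortized_posterior_mean` with a diffusion prior.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear measurement model y = A x + noise, standard-normal prior on x.
d, m, sigma = 4, 3, 0.1
A = rng.normal(size=(m, d))

def amortized_posterior_mean(y):
    """Single-step map from measurement to posterior mean:
    (A^T A / sigma^2 + I)^{-1} A^T y / sigma^2."""
    P = A.T @ A / sigma**2 + np.eye(d)  # posterior precision
    return np.linalg.solve(P, A.T @ y / sigma**2)

x_true = rng.normal(size=d)
y = A @ x_true + sigma * rng.normal(size=m)
x_hat = amortized_posterior_mean(y)  # one forward map, no iteration
```

The point of amortization is that this map is evaluated once per measurement, with no per-measurement optimization or iterative sampling.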
arXiv Detail & Related papers (2024-07-23T02:14:18Z) - Stability and Generalizability in SDE Diffusion Models with Measure-Preserving Dynamics [11.919291977879801]
Inverse problems describe the process of estimating the causal factors from a set of measurements or data.
Diffusion models have shown promise as potent generative tools for solving inverse problems.
arXiv Detail & Related papers (2024-06-19T15:55:12Z) - ODE-DPS: ODE-based Diffusion Posterior Sampling for Inverse Problems in Partial Differential Equation [1.8356973269166506]
We introduce a novel unsupervised inversion methodology tailored for solving inverse problems arising from PDEs.
Our approach operates within the Bayesian inversion framework, treating the task of solving the posterior distribution as a conditional generation process.
To enhance the accuracy of inversion results, we propose an ODE-based Diffusion inversion algorithm.
arXiv Detail & Related papers (2024-04-21T00:57:13Z) - Debiasing Text-to-Image Diffusion Models [84.46750441518697]
Learning-based Text-to-Image (TTI) models have revolutionized the way visual content is generated in various domains.
Recent research has shown that nonnegligible social bias exists in current state-of-the-art TTI systems.
arXiv Detail & Related papers (2024-02-22T14:33:23Z) - Text Diffusion with Reinforced Conditioning [92.17397504834825]
This paper thoroughly analyzes text diffusion models and uncovers two significant limitations: degradation of self-conditioning during training and misalignment between training and sampling.
Motivated by our findings, we propose a novel Text Diffusion model called TREC, which mitigates the degradation with Reinforced Conditioning and the misalignment by Time-Aware Variance Scaling.
arXiv Detail & Related papers (2024-02-19T09:24:02Z) - Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems.
Inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation.
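The role of a posterior covariance in guiding a diffusion solver can be sketched with a Kalman-style correction of a denoised estimate under a linear measurement model. The operator, noise level, and isotropic covariance below are illustrative assumptions; the paper's contribution is choosing the covariance more principledly (via maximum likelihood), not this particular update.

```python
import numpy as np

rng = np.random.default_rng(2)

d, m, sigma = 4, 2, 0.05
A = rng.normal(size=(m, d))  # toy linear measurement operator

def covariance_guided_update(x0, C, y):
    """Kalman-style correction of a denoised estimate x0 with assumed
    posterior covariance C under the model y = A x + noise."""
    S = A @ C @ A.T + sigma**2 * np.eye(m)  # innovation covariance
    K = C @ A.T @ np.linalg.solve(S, np.eye(m))  # gain
    return x0 + K @ (y - A @ x0)

x_true = rng.normal(size=d)
y = A @ x_true + sigma * rng.normal(size=m)
x0 = x_true + 0.3 * rng.normal(size=d)  # imperfect denoised estimate
C = 0.09 * np.eye(d)                    # assumed isotropic covariance
x1 = covariance_guided_update(x0, C, y)
```

A better-calibrated `C` changes how strongly each direction of the estimate is pulled toward the measurement, which is where the choice of covariance matters.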
arXiv Detail & Related papers (2024-02-03T13:35:39Z) - Convex Latent-Optimized Adversarial Regularizers for Imaging Inverse Problems [8.33626757808923]
We introduce Convex Latent-Optimized Adversarial Regularizers (CLEAR), a novel and interpretable data-driven paradigm.
CLEAR represents a fusion of deep learning (DL) and variational regularization.
Our method consistently outperforms conventional data-driven techniques and traditional regularization approaches.
arXiv Detail & Related papers (2023-09-17T12:06:04Z) - Solving Inverse Problems with Latent Diffusion Models via Hard Data Consistency [7.671153315762146]
Training diffusion models in the pixel space is both data-intensive and computationally demanding.
Latent diffusion models, which operate in a much lower-dimensional space, offer a solution to these challenges.
We propose ReSample, an algorithm that can solve general inverse problems with pre-trained latent diffusion models.
arXiv Detail & Related papers (2023-07-16T18:42:01Z) - A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.