Training-free Linear Image Inverses via Flows
- URL: http://arxiv.org/abs/2310.04432v2
- Date: Sun, 10 Mar 2024 22:01:18 GMT
- Title: Training-free Linear Image Inverses via Flows
- Authors: Ashwini Pokle, Matthew J. Muckley, Ricky T. Q. Chen, Brian Karrer
- Abstract summary: We propose a training-free method for solving linear inverse problems by using pretrained flow models.
Our approach requires no problem-specific tuning across an extensive suite of noisy linear inverse problems on high-dimensional datasets.
- Score: 17.291903204982326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Solving inverse problems without any training involves using a pretrained
generative model and making appropriate modifications to the generation process
to avoid finetuning of the generative model. While recent methods have explored
the use of diffusion models, they still require the manual tuning of many
hyperparameters for different inverse problems. In this work, we propose a
training-free method for solving linear inverse problems by using pretrained
flow models, leveraging the simplicity and efficiency of Flow Matching models,
using theoretically-justified weighting schemes, and thereby significantly
reducing the amount of manual tuning. In particular, we draw inspiration from
two main sources: adapting prior gradient correction methods to the flow
regime, and employing a solver scheme based on conditional Optimal Transport paths. As
pretrained diffusion models are widely accessible, we also show how to
practically adapt diffusion models for our method. Empirically, our approach
requires no problem-specific tuning across an extensive suite of noisy linear
inverse problems on high-dimensional datasets, ImageNet-64/128 and AFHQ-256,
and we observe that our flow-based method for solving inverse problems improves
upon closely-related diffusion-based methods in most settings.
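The core loop described in the abstract — integrate a pretrained flow from noise while correcting each step with a measurement gradient — can be sketched as below. This is a minimal toy illustration, not the paper's algorithm: a hand-written conditional-OT velocity field stands in for a pretrained Flow Matching model, and the guidance term is a plain squared-error gradient rather than the paper's theoretically-justified weighting scheme.

```python
import numpy as np

# Toy stand-in for a pretrained Flow Matching velocity field v(x, t):
# it transports samples along a straight conditional-OT path toward a
# fixed "data" point. A real application would use a trained network.
def velocity(x, t):
    target = np.ones_like(x)
    return (target - x) / (1.0 - t)  # conditional-OT velocity toward target

def solve_linear_inverse(y, A, n_steps=100, guidance=1.0):
    """Euler-integrate the flow from noise, adding a gradient correction
    that pulls the trajectory toward the measurements y = A x."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(A.shape[1])           # start from noise at t = 0
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + dt * velocity(x, t)               # unconditional flow step
        residual = y - A @ x                      # measurement mismatch
        x = x + guidance * dt * (A.T @ residual)  # grad of -||y - Ax||^2 / 2
    return x

A = np.eye(4)[:2]       # observe the first 2 of 4 coordinates
y = A @ np.ones(4)      # noiseless measurements for the demo
x_hat = solve_linear_inverse(y, A)
print(np.round(x_hat, 2))
```

With the toy field, the trajectory lands on the target and the recovered coordinates match the measurements; the point of the sketch is the interleaving of flow steps and gradient corrections, which is the structure the paper adapts from diffusion-based guidance methods.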
Related papers
- Diffusion State-Guided Projected Gradient for Inverse Problems [82.24625224110099]
We propose Diffusion State-Guided Projected Gradient (DiffStateGrad) for inverse problems.
DiffStateGrad projects the measurement gradient onto a subspace that is a low-rank approximation of an intermediate state of the diffusion process.
We highlight that DiffStateGrad improves the robustness of diffusion models in terms of the choice of measurement guidance step size and noise.
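The projection idea can be sketched in a few lines of linear algebra: take an SVD of the intermediate state and keep only the components of the measurement gradient lying in its top singular subspaces. This is our own generic rendering of the idea, under the assumption that state and gradient are 2-D arrays of the same shape; the names and details are not from the paper's code.

```python
import numpy as np

def project_gradient(state, grad, rank):
    """Project `grad` onto the subspace spanned by the top-`rank`
    singular vectors of the intermediate `state`."""
    U, _, Vt = np.linalg.svd(state, full_matrices=False)
    U_r, V_r = U[:, :rank], Vt[:rank, :].T
    # Keep only the components of grad in the low-rank column/row
    # spaces of the state; everything else is discarded.
    return U_r @ (U_r.T @ grad @ V_r) @ V_r.T

rng = np.random.default_rng(0)
state = rng.standard_normal((8, 8))
grad = rng.standard_normal((8, 8))
g = project_gradient(state, grad, rank=3)
print(np.linalg.matrix_rank(g))  # at most 3
```

Restricting the gradient this way discards directions outside the state's dominant subspace, which is one plausible reading of why the method is less sensitive to the guidance step size.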
arXiv Detail & Related papers (2024-10-04T14:26:54Z)
- Ensemble Kalman Diffusion Guidance: A Derivative-free Method for Inverse Problems [21.95946380639509]
In inverse problems, it is increasingly popular to use pre-trained diffusion models as plug-and-play priors.
Most existing methods rely on privileged information such as derivative, pseudo-inverse, or full knowledge about the forward model.
We propose Ensemble Kalman Diffusion Guidance (EnKG) for diffusion models, a derivative-free approach that can solve inverse problems by only accessing forward model evaluations and a pre-trained diffusion model prior.
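The derivative-free mechanism rests on a classic ensemble-Kalman update: the gradient is replaced by an empirical cross-covariance between particles and their forward-model outputs, so only black-box evaluations of the forward model are needed. The sketch below is a generic ensemble Kalman inversion step in that spirit, not EnKG itself (which couples such updates with a diffusion prior).

```python
import numpy as np

def forward(x):
    # Black-box forward model: observe the first two coordinates.
    # Only evaluations are used below -- never its derivatives.
    return x[:2]

def enkf_update(ensemble, y, obs_noise=1e-2):
    preds = np.array([forward(x) for x in ensemble])  # (N, m) outputs
    X = ensemble - ensemble.mean(axis=0)              # centered states
    Y = preds - preds.mean(axis=0)                    # centered predictions
    n = len(ensemble)
    C_xy = X.T @ Y / (n - 1)                          # state/output cross-covariance
    C_yy = Y.T @ Y / (n - 1) + obs_noise * np.eye(Y.shape[1])
    gain = C_xy @ np.linalg.inv(C_yy)                 # Kalman-style gain
    return ensemble + (y - preds) @ gain.T            # derivative-free update

rng = np.random.default_rng(0)
ensemble = rng.standard_normal((64, 4))
y = np.array([1.0, -1.0])
for _ in range(20):
    ensemble = enkf_update(ensemble, y)
mean = ensemble.mean(axis=0)
print(np.round(mean[:2], 2))
```

After a few iterations the ensemble mean matches the measurements on the observed coordinates, with the gain estimated purely from samples.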
arXiv Detail & Related papers (2024-09-30T10:36:41Z)
- CoSIGN: Few-Step Guidance of ConSIstency Model to Solve General INverse Problems [3.3969056208620128]
We propose to push the boundary of inference steps to 1-2 NFEs while still maintaining high reconstruction quality.
Our method achieves new state-of-the-art in diffusion-based inverse problem solving.
arXiv Detail & Related papers (2024-07-17T15:57:50Z)
- Fast Samplers for Inverse Problems in Iterative Refinement Models [19.099632445326826]
We propose a plug-and-play framework for constructing efficient samplers for inverse problems.
Our method can generate high-quality samples in as few as 5 conditional sampling steps and outperforms competing baselines requiring 20-1000 steps.
arXiv Detail & Related papers (2024-05-27T21:50:16Z)
- Learning Diffusion Priors from Observations by Expectation Maximization [6.224769485481242]
We present a novel method based on the expectation-maximization algorithm for training diffusion models from incomplete and noisy observations only.
As part of our method, we propose and motivate an improved posterior sampling scheme for unconditional diffusion models.
arXiv Detail & Related papers (2024-05-22T15:04:06Z)
- Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems.
Inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation.
arXiv Detail & Related papers (2024-02-03T13:35:39Z)
- Unsupervised Discovery of Interpretable Directions in h-space of Pre-trained Diffusion Models [63.1637853118899]
We propose the first unsupervised and learning-based method to identify interpretable directions in h-space of pre-trained diffusion models.
We employ a shift control module that works on h-space of pre-trained diffusion models to manipulate a sample into a shifted version of itself.
By jointly optimizing them, the model will spontaneously discover disentangled and interpretable directions.
arXiv Detail & Related papers (2023-10-15T18:44:30Z)
- Prompt-tuning latent diffusion models for inverse problems [72.13952857287794]
We propose a new method for solving imaging inverse problems using text-to-image latent diffusion models as general priors.
Our method, called P2L, outperforms both image- and latent-diffusion model-based inverse problem solvers on a variety of tasks, such as super-resolution, deblurring, and inpainting.
arXiv Detail & Related papers (2023-10-02T11:31:48Z)
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- JPEG Artifact Correction using Denoising Diffusion Restoration Models [110.1244240726802]
We build upon Denoising Diffusion Restoration Models (DDRM) and propose a method for solving some non-linear inverse problems.
We leverage the pseudo-inverse operator used in DDRM and generalize this concept for other measurement operators.
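A pseudo-inverse data-consistency step of the kind DDRM builds on can be written in one line: add back the pseudo-inverse of the residual, which overwrites the measured component of the estimate while leaving the null space of the operator untouched. This is generic linear algebra illustrating the concept, not the paper's exact update rule.

```python
import numpy as np

def pinv_consistency(x, y, A):
    """Enforce y = A x exactly via the Moore-Penrose pseudo-inverse,
    changing x only within the row space of A."""
    return x + np.linalg.pinv(A) @ (y - A @ x)

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])   # observe two of three coordinates
x = np.array([0.2, -0.5, 3.0])
y = np.array([1.0, 1.0])
x_dc = pinv_consistency(x, y, A)
print(x_dc)  # → [1. 1. 3.]: measured coords match y, third coord unchanged
```

Generalizing this step to other (including some non-linear) measurement operators is the direction the paper pursues.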
arXiv Detail & Related papers (2022-09-23T23:47:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.