Score-Guided Intermediate Layer Optimization: Fast Langevin Mixing for
Inverse Problems
- URL: http://arxiv.org/abs/2206.09104v1
- Date: Sat, 18 Jun 2022 03:47:37 GMT
- Title: Score-Guided Intermediate Layer Optimization: Fast Langevin Mixing for
Inverse Problems
- Authors: Giannis Daras, Yuval Dagan, Alexandros G. Dimakis, Constantinos Daskalakis
- Abstract summary: We prove fast mixing and characterize the stationary distribution of the Langevin Algorithm for inverting random weighted DNN generators.
We propose to do posterior sampling in the latent space of a pre-trained generative model.
- Score: 97.64313409741614
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We prove fast mixing and characterize the stationary distribution of the
Langevin Algorithm for inverting random weighted DNN generators. This result
extends the work of Hand and Voroninski from efficient inversion to efficient
posterior sampling. In practice, to allow for increased expressivity, we
propose to do posterior sampling in the latent space of a pre-trained
generative model. To achieve that, we train a score-based model in the latent
space of a StyleGAN-2 and we use it to solve inverse problems. Our framework,
Score-Guided Intermediate Layer Optimization (SGILO), extends prior work by
replacing the sparsity regularization with a generative prior in the
intermediate layer. Experimentally, we obtain significant improvements over the
previous state-of-the-art, especially in the low measurement regime.
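As a rough illustration of SGILO's core loop, here is a minimal sketch of score-guided Langevin posterior sampling in an intermediate latent space. The generator half `g`, score network `score_model`, and measurement operator `A` are hypothetical placeholders under a Gaussian measurement-noise assumption, not the paper's actual code:

```python
import torch

def sgilo_langevin(z, g, score_model, A, y, n_steps=500, step=1e-4, sigma=0.05):
    """Langevin posterior sampling over an intermediate latent z.

    g           -- maps z through the remaining generator layers to an image
    score_model -- learned score of the latent prior, approximating grad log p(z)
    A, y        -- measurement operator and observations, y = A(x) + noise
    """
    for _ in range(n_steps):
        z = z.detach().requires_grad_(True)
        # Data fidelity: grad_z log p(y | z) under Gaussian measurement noise.
        residual = A(g(z)) - y
        log_lik = -residual.pow(2).sum() / (2 * sigma**2)
        grad_lik = torch.autograd.grad(log_lik, z)[0]
        with torch.no_grad():
            # Prior term: the learned score replaces ILO's sparsity regularization.
            grad_prior = score_model(z)
            z = z + step * (grad_lik + grad_prior) \
                + (2 * step) ** 0.5 * torch.randn_like(z)
    return z.detach()
```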
Related papers
- Posterior sampling via Langevin dynamics based on generative priors [31.84543941736757]
Posterior sampling in high-dimensional spaces using generative models holds significant promise for various applications.
Existing methods require restarting the entire generative process for each new sample, making the procedure computationally expensive.
We propose efficient posterior sampling by simulating Langevin dynamics in the noise space of a pre-trained generative model.
arXiv Detail & Related papers (2024-10-02T22:57:47Z)
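A minimal sketch of the idea summarized above: Langevin dynamics run in the noise space of a fixed pre-trained generator, so the generative process is never restarted per sample. Under a standard Gaussian prior the prior score is simply -z; all names are illustrative:

```python
import torch

def noise_space_langevin(z, g, A, y, n_steps=500, step=1e-4, sigma=0.05):
    """Posterior sampling by Langevin dynamics over the generator input noise z.
    The prior on z is standard Gaussian, so grad log p(z) = -z; the pretrained
    generator g is reused at every step rather than being rerun from scratch."""
    for _ in range(n_steps):
        z = z.detach().requires_grad_(True)
        residual = A(g(z)) - y
        log_lik = -residual.pow(2).sum() / (2 * sigma**2)
        grad_lik = torch.autograd.grad(log_lik, z)[0]
        with torch.no_grad():
            z = z + step * (grad_lik - z) \
                + (2 * step) ** 0.5 * torch.randn_like(z)
    return z.detach()
```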
- Covariance-Adaptive Sequential Black-box Optimization for Diffusion Targeted Generation [60.41803046775034]
We show how to perform user-preferred targeted generation via diffusion models using only black-box target scores supplied by users.
Experiments on both numerical test problems and target-guided 3D-molecule generation tasks show the superior performance of our method in achieving better target scores.
arXiv Detail & Related papers (2024-02-03T13:35:39Z)
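The summary gives few algorithmic details, so the following is only a generic covariance-adaptive black-box loop in the cross-entropy-method style, not the paper's algorithm: a Gaussian search distribution is refit to the highest-scoring samples using nothing but black-box score queries:

```python
import numpy as np

def covariance_adaptive_search(score_fn, dim, iters=50, pop=64, elite_frac=0.25):
    """Generic CEM-style covariance-adaptive loop (illustrative, not the paper's
    method): maximize a black-box score by refitting a Gaussian to elite samples."""
    mean, cov = np.zeros(dim), np.eye(dim)
    n_elite = int(pop * elite_frac)
    for _ in range(iters):
        samples = np.random.multivariate_normal(mean, cov, size=pop)
        scores = np.array([score_fn(s) for s in samples])  # black-box queries only
        elite = samples[np.argsort(scores)[-n_elite:]]     # keep top scorers
        mean = elite.mean(axis=0)
        cov = np.cov(elite, rowvar=False) + 1e-6 * np.eye(dim)  # regularize
    return mean
```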
- Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for each specific problem.
We propose to improve these methods by using a more principled covariance determined by maximum likelihood estimation.
arXiv Detail & Related papers (2024-02-03T13:35:39Z)
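A hedged sketch of the general recipe for the special case of an inpainting (masking) operator: the diffusion guidance gradient weights measurement residuals by an approximate per-pixel posterior variance instead of the usual fixed noise level. All names here are illustrative:

```python
import torch

def guided_score(x_t, denoiser, mask, y, sigma_y, post_var):
    """Gradient of an approximate log-likelihood log p(y | x_t) for a masking
    operator. post_var is a per-pixel posterior variance of x0 given x_t; the
    identity-covariance baseline corresponds to post_var = 0."""
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = denoiser(x_t)                 # Tweedie estimate of E[x0 | x_t]
    residual = y - mask * x0_hat
    # p(y | x_t) ~ Gaussian with variance post_var + sigma_y^2 on observed
    # pixels, rather than the usual fixed sigma_y^2.
    var = mask * post_var + sigma_y**2
    log_lik = -(mask * residual.pow(2) / var).sum() / 2
    return torch.autograd.grad(log_lik, x_t)[0]
```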
- Improving sample efficiency of high dimensional Bayesian optimization with MCMC [7.241485121318798]
We propose a new method based on Markov Chain Monte Carlo to efficiently sample from an approximated posterior.
We show experimentally that both the Metropolis-Hastings and the Langevin Dynamics version of our algorithm outperform state-of-the-art methods in high-dimensional sequential optimization and reinforcement learning benchmarks.
arXiv Detail & Related papers (2024-01-05T05:56:42Z)
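For reference, the Metropolis-Hastings variant mentioned above amounts to a textbook random-walk sampler over the approximated posterior; `log_post` is a placeholder for that approximation:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps=1000, prop_std=0.1):
    """Random-walk Metropolis-Hastings over an (approximate) log-posterior, used
    to draw candidate points instead of optimizing an acquisition function."""
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n_steps):
        prop = x + prop_std * np.random.randn(*x.shape)
        lp_prop = log_post(prop)
        if np.log(np.random.rand()) < lp_prop - lp:  # accept w.p. min(1, ratio)
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)
```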
- Generative Modeling with Phase Stochastic Bridges [49.4474628881673]
Diffusion models (DMs) represent state-of-the-art generative models for continuous inputs.
We introduce a novel generative modeling framework grounded in phase space dynamics.
Our framework demonstrates the capability to generate realistic data points at an early stage of dynamics propagation.
arXiv Detail & Related papers (2023-10-11T18:38:28Z)
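The paper's bridge construction is more involved than anything shown here; as a classical reference point for what "phase space dynamics" means, this is plain underdamped Langevin dynamics, whose state is a (position, velocity) pair:

```python
import numpy as np

def underdamped_langevin(grad_log_p, x0, steps=1000, dt=1e-2, gamma=1.0):
    """Sampling with phase-space dynamics: the state is (position x, velocity v).
    Classical underdamped Langevin discretization (unit mass), shown only as
    background; this is not the paper's bridge construction."""
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v += dt * (grad_log_p(x) - gamma * v) \
             + np.sqrt(2 * gamma * dt) * np.random.randn(*x.shape)
        x += dt * v
    return x
```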
- Reweighted Interacting Langevin Diffusions: an Accelerated Sampling Method for Optimization [28.25662317591378]
We propose a new technique to accelerate sampling methods for solving difficult optimization problems.
Our method builds on the connection between posterior-distribution sampling and optimization with Langevin dynamics.
arXiv Detail & Related papers (2023-01-30T03:48:20Z)
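A loose illustration of the sampling-meets-optimization idea, not the paper's exact scheme: an ensemble of Langevin particles is periodically reweighted and resampled toward its low-loss members:

```python
import numpy as np

def reweighted_langevin(loss_fn, grad_fn, particles, steps=200, step=1e-3, beta=5.0):
    """Illustrative only: parallel Langevin particles with periodic resampling
    weighted toward low loss; the paper's reweighting and interaction terms
    differ in their details. particles has shape (n, dim)."""
    n = particles.shape[0]
    for t in range(1, steps + 1):
        grads = np.stack([grad_fn(p) for p in particles])
        particles = (particles - step * grads
                     + np.sqrt(2 * step / beta) * np.random.randn(*particles.shape))
        if t % 20 == 0:  # interaction: resample toward promising particles
            losses = np.array([loss_fn(p) for p in particles])
            w = np.exp(-beta * (losses - losses.min()))
            idx = np.random.choice(n, size=n, p=w / w.sum())
            particles = particles[idx]
    return min(particles, key=loss_fn)
```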
- Regularized Training of Intermediate Layers for Generative Models for Inverse Problems [9.577509224534323]
We introduce a principle that if a generative model is intended for inversion using an algorithm based on optimization of intermediate layers, it should be trained in a way that regularizes those intermediate layers.
We instantiate this principle for two notable recent inversion algorithms: Intermediate Layer Optimization and the Multi-Code GAN prior.
For both of these inversion algorithms, we introduce a new regularized GAN training algorithm and demonstrate that the learned generative model results in lower reconstruction errors across a wide range of undersampling ratios.
arXiv Detail & Related papers (2022-03-08T20:30:49Z)
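One plausible instantiation of the stated principle, written as a hypothetical generator loss: split the generator as g2(g1(z)) and penalize the intermediate activation during training so that later optimization over it stays well-behaved. The penalty and weighting are illustrative assumptions:

```python
import torch

def generator_loss_with_layer_reg(g1, g2, z, discriminator, reg_weight=0.1):
    """Hypothetical regularized GAN generator loss: the generator is split as
    g2(g1(z)) and the intermediate activation w = g1(z) is penalized so that
    inversion methods optimizing over w later stay in a well-behaved region."""
    w = g1(z)                               # intermediate layer output
    fake = g2(w)
    adv_loss = -discriminator(fake).mean()  # simple adversarial term (illustrative)
    reg = w.pow(2).mean()                   # keep w near its prior scale
    return adv_loss + reg_weight * reg
```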
- Intermediate Layer Optimization for Inverse Problems using Deep Generative Models [86.29330440222199]
ILO is a novel optimization algorithm for solving inverse problems with deep generative models.
We empirically show that our approach outperforms state-of-the-art methods introduced in StyleGAN-2 and PULSE for a wide range of inverse problems.
arXiv Detail & Related papers (2021-02-15T06:52:22Z)
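A compact sketch of the two-stage ILO idea: first optimize the input latent, then optimize the intermediate representation w = g1(z) directly while projecting it back to a ball around its starting point. The radius, step counts, and optimizer are illustrative choices:

```python
import torch

def intermediate_layer_optimization(g1, g2, A, y, z, steps=(100, 100), lr=0.05,
                                    radius=1.0):
    """Two-stage inversion sketch: stage 1 optimizes the input latent z; stage 2
    optimizes the intermediate representation w = g1(z), projected to a ball
    around its initial value to stay near the generator's range."""
    z = z.detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps[0]):                  # stage 1: latent space
        opt.zero_grad()
        loss = (A(g2(g1(z))) - y).pow(2).sum()
        loss.backward()
        opt.step()
    w0 = g1(z).detach()
    w = w0.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps[1]):                  # stage 2: intermediate layer
        opt.zero_grad()
        loss = (A(g2(w)) - y).pow(2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():                  # project back to the ball
            delta = w - w0
            norm = delta.norm()
            if norm > radius:
                w.copy_(w0 + delta * (radius / norm))
    return g2(w.detach())
```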
- Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping [69.9674326582747]
We propose a new accelerated first-order method called clipped-SSTM for smooth convex optimization with heavy-tailed distributed noise in gradients.
We prove new complexity bounds that outperform state-of-the-art results in this case.
We derive the first non-trivial high-probability complexity bounds for SGD with clipping without a light-tails assumption on the noise.
arXiv Detail & Related papers (2020-05-21T17:05:27Z)
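The clipping operator at the heart of these results is easy to state; clipped-SSTM adds Nesterov-style acceleration on top, which this minimal sketch omits:

```python
import numpy as np

def clipped_sgd(grad_fn, x, steps=1000, lr=0.01, clip=1.0):
    """SGD with gradient clipping for heavy-tailed stochastic gradients: any
    gradient whose norm exceeds the threshold is rescaled onto the ball."""
    x = np.array(x, dtype=float)
    for _ in range(steps):
        g = grad_fn(x)                    # stochastic, possibly heavy-tailed
        norm = np.linalg.norm(g)
        if norm > clip:
            g = g * (clip / norm)
        x = x - lr * g
    return x
```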
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.