When and How Can Deep Generative Models be Inverted?
- URL: http://arxiv.org/abs/2006.15555v1
- Date: Sun, 28 Jun 2020 09:37:52 GMT
- Title: When and How Can Deep Generative Models be Inverted?
- Authors: Aviad Aberdam, Dror Simon, Michael Elad
- Abstract summary: Deep generative models (GANs and VAEs) have been developed quite extensively in recent years.
We define conditions that are applicable to any inversion algorithm (gradient descent, deep encoder, etc.) under which such generative models are invertible.
We show that our method outperforms gradient descent when inverting such generators, both for clean and corrupted signals.
- Score: 28.83334026125828
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep generative models (e.g. GANs and VAEs) have been developed quite
extensively in recent years. Lately, there has been an increased interest in
the inversion of such a model, i.e. given a (possibly corrupted) signal, we
wish to recover the latent vector that generated it. Building upon sparse
representation theory, we define conditions that are applicable to any
inversion algorithm (gradient descent, deep encoder, etc.), under which such
generative models are invertible with a unique solution. Importantly, the
proposed analysis is applicable to any trained model, and does not depend on
Gaussian i.i.d. weights. Furthermore, we introduce two layer-wise inversion
pursuit algorithms for trained generative networks of arbitrary depth, and
accompany these with recovery guarantees. Finally, we validate our theoretical
results numerically and show that our method outperforms gradient descent when
inverting such generators, both for clean and corrupted signals.
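The inversion task described above can be made concrete with a short sketch: given an observation x and a fixed trained generator G, search by gradient descent for a latent z minimizing ||G(z) - x||^2. The toy generator below is a hypothetical stand-in (a small random MLP), not the authors' model, and the paper's layer-wise pursuit algorithms are not reproduced; this shows only the gradient-descent baseline they compare against.

```python
import torch

# Toy stand-in generator: a small random two-layer MLP (hypothetical; the
# paper's analysis applies to trained generators of arbitrary depth).
torch.manual_seed(0)
latent_dim, hidden_dim, signal_dim = 16, 64, 256
G = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, hidden_dim),
    torch.nn.ReLU(),
    torch.nn.Linear(hidden_dim, signal_dim),
)
for p in G.parameters():
    p.requires_grad_(False)  # the generator stays fixed; only z is optimized

# Observation produced by an unknown latent vector.
z_true = torch.randn(latent_dim)
x = G(z_true)

# Gradient-descent inversion: minimize ||G(z) - x||^2 over the latent z.
z = torch.randn(latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.sum((G(z) - x) ** 2)
    loss.backward()
    opt.step()
print(f"final reconstruction error: {loss.item():.3e}")
```

Because the objective is non-convex in z, this baseline can stall in poor local minima; the paper's invertibility conditions and layer-wise algorithms target exactly this failure mode.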
Related papers
- Outlier Detection Using Generative Models with Theoretical Performance Guarantees [11.985270449383272]
We establish theoretical recovery guarantees for reconstruction of signals using generative models in the presence of outliers.
Our results apply to both linear generator networks and nonlinear generator networks with an arbitrary number of layers.
arXiv Detail & Related papers (2023-10-16T01:25:34Z)
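As a rough illustration of recovery under outliers, one common device is to replace the squared-error data term with an l1 term, which caps the influence of a few grossly corrupted entries. This is a generic sketch under that assumption, not the paper's algorithm or its guarantees; the generator `G` and the corrupted observation `x_corrupt` are assumed given (see the inversion sketch above).

```python
import torch

# Hedged sketch: l1 data fidelity as a simple outlier-resistant alternative to
# the squared loss. Not the paper's guaranteed algorithm; G and x_corrupt are
# assumed to exist.
def robust_invert(G, x_corrupt, latent_dim, steps=2000, lr=1e-2):
    z = torch.randn(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.sum(torch.abs(G(z) - x_corrupt))  # l1 resists outliers
        loss.backward()
        opt.step()
    return z.detach()
```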
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
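A minimal sketch of the Laplace idea in the latent space, assuming a Gaussian observation model x ~ N(G(z), sigma^2 I) and a standard normal prior: find the MAP latent by gradient descent, then approximate the posterior by a Gaussian whose covariance is the inverse Hessian of the negative log joint at the MAP. This illustrates the approximation only, not the VLAE training procedure.

```python
import torch

# Hedged sketch of a latent-space Laplace approximation, assuming a Gaussian
# observation model x ~ N(G(z), sigma^2 I) and prior z ~ N(0, I).
def laplace_posterior(G, x, latent_dim, sigma=0.1, steps=500, lr=1e-2):
    def neg_log_joint(z):
        return (torch.sum((G(z) - x) ** 2) / (2 * sigma ** 2)
                + 0.5 * torch.sum(z ** 2))

    # Step 1: find the MAP latent by gradient descent.
    z = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        neg_log_joint(z).backward()
        opt.step()
    z_map = z.detach()

    # Step 2: Gaussian posterior N(z_map, H^{-1}), with H the Hessian of the
    # negative log joint at the MAP.
    H = torch.autograd.functional.hessian(neg_log_joint, z_map)
    cov = torch.linalg.inv(H + 1e-6 * torch.eye(latent_dim))
    return z_map, cov
```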
- Differentiable Gaussianization Layers for Inverse Problems Regularized by Deep Generative Models [5.439020425819001]
We show that latent tensors of deep generative models can drift away from the desired high-dimensional standard Gaussian distribution during inversion.
Our approach achieves state-of-the-art performance in terms of accuracy and consistency.
arXiv Detail & Related papers (2021-12-07T17:53:09Z)
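The observation motivates keeping the latent near the typical set of N(0, I), where samples concentrate at norm ~sqrt(d). The one-liner below is a crude, hypothetical stand-in that only corrects the norm; the paper's differentiable Gaussianization layers go further.

```python
import torch

# Crude, hypothetical stand-in: samples of N(0, I) in d dimensions concentrate
# on the shell of radius sqrt(d), so rescaling the latent to that radius keeps
# it near the typical set during inversion. The paper's differentiable
# Gaussianization layers correct more than the norm.
def project_to_typical_shell(z: torch.Tensor) -> torch.Tensor:
    d = z.numel()
    return z * (d ** 0.5) / (z.norm() + 1e-12)
```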
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis, known as likelihood-free inference (LFI), to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
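For intuition, the simplest likelihood-free inference scheme is rejection ABC: sample parameters from the prior, push them through the forward model, and keep those whose simulated data land within a tolerance of the observation. The sketch below assumes generic `forward_model` and `prior_sample` callables; the paper uses modern neural LFI methods rather than plain rejection sampling.

```python
import numpy as np

# Hedged rejection-ABC sketch; `forward_model` and `prior_sample` are assumed
# callables and `eps` an illustrative tolerance.
def rejection_abc(forward_model, prior_sample, x_obs, n_draws=10000, eps=0.5):
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()               # draw parameters from the prior
        x_sim = forward_model(theta)         # simulate data (no likelihood!)
        if np.linalg.norm(x_sim - x_obs) < eps:
            accepted.append(theta)           # keep draws that match the data
    return np.array(accepted)                # approximate posterior samples
```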
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recover [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward-model mis-specification.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network outperforms state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
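Algorithm unfolding turns the iterations of a classical solver into the layers of a trainable network. The sketch below unrolls ISTA-style soft-thresholding steps for y = A s + noise in the spirit of LISTA; it illustrates unfolding in general and is not the REST architecture or its robustness mechanism.

```python
import torch

# Generic LISTA-style unrolling sketch for sparse recovery from y = A s + n:
# each layer is one learned soft-thresholding iteration.
class UnrolledISTA(torch.nn.Module):
    def __init__(self, m, n, n_layers=10):
        super().__init__()
        self.W1 = torch.nn.Parameter(0.1 * torch.randn(n, m))  # measurement path
        self.W2 = torch.nn.Parameter(0.1 * torch.randn(n, n))  # state path
        self.theta = torch.nn.Parameter(0.1 * torch.ones(n_layers))  # thresholds
        self.n_layers = n_layers

    def forward(self, y):
        s = torch.zeros(self.W1.shape[0])
        for k in range(self.n_layers):
            pre = self.W1 @ y + self.W2 @ s
            # Soft-thresholding: the proximal operator of the l1 norm.
            s = torch.sign(pre) * torch.clamp(pre.abs() - self.theta[k], min=0)
        return s  # sparse-code estimate; train end-to-end on (y, s) pairs
```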
- Hidden Convexity of Wasserstein GANs: Interpretable Generative Models with Closed-Form Solutions [31.952858521063277]
We analyze the impact of Wasserstein GANs with two-layer neural network discriminators through the lens of convex duality.
We further demonstrate the power of different discriminator activation functions.
arXiv Detail & Related papers (2021-07-12T18:33:49Z)
- The Effects of Invertibility on the Representational Complexity of Encoders in Variational Autoencoders [16.27499951949733]
We show that if the generative map is "strongly invertible" (in a sense we suitably formalize), the inferential model need not be much more complex.
Importantly, we do not require the generative model to be layerwise invertible.
We provide theoretical support for the empirical wisdom that learning deep generative models is harder when data lies on a low-dimensional manifold.
arXiv Detail & Related papers (2021-07-09T19:53:29Z)
- Provable Compressed Sensing with Generative Priors via Langevin Dynamics [43.59745920150787]
We introduce the use of stochastic gradient Langevin dynamics (SGLD) for compressed sensing with a generative prior.
Under mild assumptions on the generative model, we prove the convergence of SGLD to the true signal.
arXiv Detail & Related papers (2021-02-25T02:35:14Z)
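A minimal sketch of the Langevin iteration for observations y = A G(z) + noise: each update is a gradient step on the negative log posterior plus injected Gaussian noise of variance 2*eta, so the iterates sample rather than merely descend. `A`, `G`, and the hyperparameters below are illustrative assumptions, not the paper's settings.

```python
import torch

# Hedged Langevin sketch for observations y = A G(z) + noise, with a standard
# normal prior on z. A, G, and the step sizes are illustrative assumptions.
def langevin_invert(G, A, y, latent_dim, steps=5000, eta=1e-4):
    z = torch.randn(latent_dim, requires_grad=True)
    for _ in range(steps):
        loss = torch.sum((A @ G(z) - y) ** 2) + 0.5 * torch.sum(z ** 2)
        (grad,) = torch.autograd.grad(loss, z)
        with torch.no_grad():
            # Gradient step plus noise: this is sampling, not pure descent.
            z += -eta * grad + (2 * eta) ** 0.5 * torch.randn_like(z)
    return z.detach()
```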
- Intermediate Layer Optimization for Inverse Problems using Deep Generative Models [86.29330440222199]
ILO is a novel optimization algorithm for solving inverse problems with deep generative models.
We empirically show that our approach outperforms state-of-the-art methods introduced in StyleGAN-2 and PULSE for a wide range of inverse problems.
arXiv Detail & Related papers (2021-02-15T06:52:22Z)
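The idea can be sketched by splitting a generator into assumed `head` and `tail` halves: first invert over the input latent, then re-optimize the intermediate activation directly, which enlarges the search space. ILO additionally constrains the intermediate representation to stay close to the range of the earlier layers; that projection is omitted in this simplified sketch.

```python
import torch

# Simplified two-stage sketch with assumed `head`/`tail` halves of a generator.
# ILO's projection back toward the range of the earlier layers is omitted.
def ilo_invert(head, tail, x, latent_dim, steps=1000, lr=1e-2):
    z = torch.randn(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):                       # stage 1: optimize z
        opt.zero_grad()
        torch.sum((tail(head(z)) - x) ** 2).backward()
        opt.step()
    h = head(z).detach().requires_grad_(True)    # stage 2: optimize h directly
    opt = torch.optim.Adam([h], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        torch.sum((tail(h) - x) ** 2).backward()
        opt.step()
    return h.detach()
```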
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
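Directly maximizing an expected reward over discrete samples is typically done with the score-function (REINFORCE) estimator: weight each sample's log-probability by its centered reward. The sketch below assumes a hypothetical `model.sample()` interface returning a sequence and its log-probability; the reward function and mean baseline are likewise illustrative.

```python
import torch

# Hedged score-function (REINFORCE) sketch. `model.sample()` is a hypothetical
# interface returning (sequence, log_prob as a tensor).
def reinforce_step(model, reward_fn, optimizer, batch_size=32):
    optimizer.zero_grad()
    rewards, log_probs = [], []
    for _ in range(batch_size):
        seq, log_prob = model.sample()
        rewards.append(reward_fn(seq))
        log_probs.append(log_prob)
    rewards = torch.tensor(rewards, dtype=torch.float32)
    advantages = rewards - rewards.mean()   # mean baseline reduces variance
    loss = -(torch.stack(log_probs) * advantages).mean()
    loss.backward()
    optimizer.step()
    return rewards.mean().item()
```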
- Robust Compressed Sensing using Generative Models [98.64228459705859]
In this paper we propose an algorithm inspired by the Median-of-Means (MOM) estimator.
Our algorithm guarantees recovery for heavy-tailed data, even in the presence of outliers.
arXiv Detail & Related papers (2020-06-16T19:07:41Z)
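The Median-of-Means estimator behind the algorithm is easy to state: partition the terms into disjoint blocks, average within each block, and report the median of the block means, so a few wild values cannot drag the estimate far. A minimal sketch, not the paper's full recovery procedure:

```python
import numpy as np

# Minimal Median-of-Means sketch: a few corrupted or heavy-tailed values can
# ruin a plain mean but move the MOM estimate only slightly.
def median_of_means(values: np.ndarray, n_blocks: int = 10) -> float:
    rng = np.random.default_rng(0)
    blocks = np.array_split(values[rng.permutation(len(values))], n_blocks)
    return float(np.median([block.mean() for block in blocks]))
```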
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.