Solving Inverse Problems with Conditional-GAN Prior via Fast
Network-Projected Gradient Descent
- URL: http://arxiv.org/abs/2109.01105v1
- Date: Thu, 2 Sep 2021 17:28:05 GMT
- Title: Solving Inverse Problems with Conditional-GAN Prior via Fast
Network-Projected Gradient Descent
- Authors: Muhammad Fadli Damara, Gregor Kornhardt, Peter Jung
- Abstract summary: In this work we investigate a network-based projected gradient descent (NPGD) algorithm for measurement-conditional generative models.
We show that the combination of a measurement-conditional model with NPGD works well in recovering the compressed signal while achieving similar, or in some cases even better, performance with much faster reconstruction.
- Score: 11.247580943940918
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The projected gradient descent (PGD) method has been shown to be effective in
recovering compressed signals described in a data-driven way by a generative
model, i.e., a generator which has learned the data distribution. Further
reconstruction improvements for such inverse problems can be achieved by
conditioning the generator on the measurement. The boundary equilibrium
generative adversarial network (BEGAN) implements an equilibrium-based loss
function and an auto-encoding discriminator to better balance the performance
of the generator and the discriminator. In this work we investigate a
network-based projected gradient descent (NPGD) algorithm for
measurement-conditional generative models to solve the inverse problem much
faster than regular PGD. We combine NPGD with conditional GAN/BEGAN to
evaluate their effectiveness in solving compressed sensing type problems. Our
experiments on the MNIST and CelebA datasets show that the combination of the
measurement-conditional model with NPGD works well in recovering the compressed
signal, achieving similar, or in some cases even better, performance together
with much faster reconstruction. The reconstruction speed-up achieved in our
experiments is up to a factor of 140-175.
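For background, the equilibrium mechanism that BEGAN adds on top of a standard GAN can be summarized as follows. This is the published, unconditional BEGAN objective (Berthelot et al., 2017) written in our own notation, not the paper's measurement-conditional variant:

```latex
% Auto-encoding discriminator D scored by its reconstruction error:
\mathcal{L}(v) = \lVert v - D(v) \rVert_1
% Discriminator and generator objectives, balanced by a control variable k_t:
\mathcal{L}_D = \mathcal{L}(x) - k_t \,\mathcal{L}(G(z_D)), \qquad
\mathcal{L}_G = \mathcal{L}(G(z_G))
% Proportional control keeping \mathcal{L}(G(z)) near \gamma\,\mathcal{L}(x):
k_{t+1} = k_t + \lambda_k \bigl( \gamma \,\mathcal{L}(x) - \mathcal{L}(G(z_G)) \bigr)
```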
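Below is a minimal sketch of the NPGD loop the abstract describes, assuming a linear measurement model y = Ax. The conditional generator G, the learned projection network E, and all shapes and step sizes are illustrative stand-ins (simple linear maps), not the authors' trained models:

```python
# A sketch of network-projected gradient descent (NPGD) for compressed
# sensing with a measurement-conditional generative prior. The generator G
# and projection network E are hypothetical linear stand-ins.
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 784, 100, 20                      # signal, measurement, latent dims
A = rng.normal(size=(m, n)) / np.sqrt(m)    # Gaussian measurement matrix

W_g = rng.normal(size=(n, k)) * 0.1         # stand-in generator weights
W_e = np.linalg.pinv(W_g)                   # stand-in projector weights

def G(z, y):
    """Measurement-conditional generator; this stand-in ignores y."""
    return W_g @ z

def E(x, y):
    """Learned projection back to the latent space; stand-in ignores y."""
    return W_e @ x

def npgd(y, A, steps=30, eta=0.5):
    """Gradient step on ||y - A x||^2, then project onto the generator's
    range with a single forward pass x <- G(E(., y), y) instead of the
    inner latent-space optimization used by regular PGD."""
    x = G(E(A.T @ y, y), y)                 # start in the generator's range
    for _ in range(steps):
        w = x + eta * A.T @ (y - A @ x)     # data-fidelity gradient step
        x = G(E(w, y), y)                   # fast network projection
    return x

x_true = G(rng.normal(size=k), None)        # a signal the prior can represent
y = A @ x_true                              # compressed measurements (m << n)
x_hat = npgd(y, A)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Regular PGD would replace the x = G(E(w, y), y) step with an inner optimization argmin_z ||G(z, y) - w|| at every iteration; amortizing that projection into a single forward pass through E is what yields the reported speed-up.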
Related papers
- GE-AdvGAN: Improving the transferability of adversarial samples by gradient editing-based adversarial generative model [69.71629949747884]
Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied for generating various types of data.
In this work, we propose a novel algorithm named GE-AdvGAN to enhance the transferability of adversarial samples.
arXiv Detail & Related papers (2024-01-11T16:43:16Z)
- Outlier Detection Using Generative Models with Theoretical Performance Guarantees [11.985270449383272]
We establish theoretical recovery guarantees for reconstruction of signals using generative models in the presence of outliers.
Our results are applicable to both linear generator neural networks and the nonlinear generator neural networks with an arbitrary number of layers.
arXiv Detail & Related papers (2023-10-16T01:25:34Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Signal Recovery with Non-Expansive Generative Network Priors [1.52292571922932]
We study compressive sensing with a deep generative network prior.
We prove that a signal in the range of a Gaussian generative network can be recovered from a few linear measurements.
arXiv Detail & Related papers (2022-04-24T18:47:32Z)
- The efficacy and generalizability of conditional GANs for posterior inference in physics-based inverse problems [0.4588028371034407]
We train conditional Wasserstein generative adversarial networks to effectively sample from the posterior of physics-based Bayesian inference problems.
We show that the generator can learn inverse maps that are local in nature, which in turn promotes generalizability when testing on out-of-distribution samples.
arXiv Detail & Related papers (2022-02-15T22:57:05Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAIN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Demonstrating the Evolution of GANs through t-SNE [0.4588028371034407]
Evolutionary algorithms, such as COEGAN, were recently proposed as a solution to improve GAN training.
In this work, we propose an evaluation method based on t-distributed Stochastic Neighbour Embedding (t-SNE) to assess the progress of GANs.
A metric based on the resulting t-SNE maps and the Jaccard index is proposed to represent the model quality.
arXiv Detail & Related papers (2021-01-31T20:07:08Z)
- Self Sparse Generative Adversarial Networks [73.590634413751]
Generative Adversarial Networks (GANs) are an unsupervised generative model that learns the data distribution through adversarial training.
We propose a Self Sparse Generative Adversarial Network (Self-Sparse GAN) that reduces the parameter space and alleviates the zero gradient problem.
arXiv Detail & Related papers (2021-01-26T04:49:12Z)
- When and How Can Deep Generative Models be Inverted? [28.83334026125828]
Deep generative models (GANs and VAEs) have been developed quite extensively in recent years.
We define conditions that are applicable to any inversion algorithm (gradient descent, deep encoder, etc.) under which such generative models are invertible.
We show that our method outperforms gradient descent when inverting such generators, both for clean and corrupted signals.
arXiv Detail & Related papers (2020-06-28T09:37:52Z)
- Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate the benefits of significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.