Concurrent Density Estimation with Wasserstein Autoencoders: Some
Statistical Insights
- URL: http://arxiv.org/abs/2312.06591v1
- Date: Mon, 11 Dec 2023 18:27:25 GMT
- Title: Concurrent Density Estimation with Wasserstein Autoencoders: Some
Statistical Insights
- Authors: Anish Chakrabarty, Arkaprabha Basu, Swagatam Das
- Abstract summary: Wasserstein Autoencoders (WAEs) have been a pioneering force in the realm of deep generative models.
Our work is an attempt to offer a theoretical understanding of the machinery behind WAEs.
- Score: 20.894503281724052
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Variational Autoencoders (VAEs) have been a pioneering force in the realm of
deep generative models. Among its many progeny, Wasserstein Autoencoders
(WAEs) stand out in particular due to their dual offering of heightened
generative quality and a strong theoretical backbone. WAEs consist of an
encoding and a decoding network forming a bottleneck, with the prime objective
of generating new samples resembling the ones they were trained on. In the
process, they aim to achieve a target latent representation of the encoded
data. Our work is an attempt to offer a theoretical understanding of the
machinery behind WAEs. From a statistical viewpoint, we pose the problem as
concurrent density estimation tasks based on neural network-induced
transformations. This allows us to establish deterministic upper bounds on the
realized errors WAEs commit. We also analyze the propagation of these
stochastic errors in the presence of adversaries. As a result, both the large
sample properties of the reconstructed distribution and the resilience of WAE
models are explored.
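For readers who want the object of study in concrete form, here is a minimal sketch of the standard WAE objective in its MMD variant: a reconstruction cost plus a penalty matching the distribution of encoded latents (Q_Z) to the target prior (P_Z). The architecture, kernel, and hyperparameters below are illustrative placeholders, not the networks the paper analyzes.

```python
# Minimal WAE-MMD sketch (illustrative placeholders throughout).
import torch
import torch.nn as nn

def imq_kernel(a, b, c=1.0):
    # Inverse multiquadratic kernel k(x, y) = c / (c + ||x - y||^2).
    return c / (c + torch.cdist(a, b).pow(2))

def mmd(z_q, z_p):
    # Biased (V-statistic) MMD^2 estimate between latents and prior draws.
    return (imq_kernel(z_q, z_q).mean() + imq_kernel(z_p, z_p).mean()
            - 2.0 * imq_kernel(z_q, z_p).mean())

d_x, d_z, lam = 784, 8, 10.0
enc = nn.Sequential(nn.Linear(d_x, 128), nn.ReLU(), nn.Linear(128, d_z))
dec = nn.Sequential(nn.Linear(d_z, 128), nn.ReLU(), nn.Linear(128, d_x))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.rand(64, d_x)                        # stand-in data batch
z = enc(x)
recon = (dec(z) - x).pow(2).sum(dim=1).mean()  # reconstruction cost
penalty = mmd(z, torch.randn_like(z))          # push Q_Z toward the prior P_Z
loss = recon + lam * penalty
opt.zero_grad()
loss.backward()
opt.step()
```

The two terms mirror the concurrent estimation tasks the paper studies: how well the decoder reconstructs, and how close the encoded law sits to the target latent distribution.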
Related papers
- A Statistical Analysis of Wasserstein Autoencoders for Intrinsically
Low-dimensional Data [38.964624328622]
We show that Wasserstein Autoencoders (WAEs) can learn the data distributions when the network architectures are properly chosen, and that the convergence rates of the expected excess risk in the number of samples are independent of the high feature dimension.
arXiv Detail & Related papers (2024-02-24T04:13:40Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Do Bayesian Variational Autoencoders Know What They Don't Know? [0.6091702876917279]
The problem of detecting Out-of-Distribution (OoD) inputs is of paramount importance for Deep Neural Networks.
It has been previously shown that even Deep Generative Models that allow estimating the density of the inputs may not be reliable.
This paper investigates three approaches to inference: Markov chain Monte Carlo, Bayes by Backpropagation, and Stochastic Weight Averaging-Gaussian (SWAG).
arXiv Detail & Related papers (2022-12-29T11:48:01Z)
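As a hedged aside on the last of the three inference schemes above, a diagonal form of Stochastic Weight Averaging-Gaussian can be sketched from running weight moments alone; this is a generic SWAG-diagonal fragment, not the cited paper's exact configuration.

```python
# Generic SWAG-diagonal sketch (not the cited paper's exact setup).
import torch

@torch.no_grad()
def swag_collect(params, mean, sq_mean, n):
    # Fold the current weights into running first and second moments.
    w = torch.cat([p.flatten() for p in params])
    return (n * mean + w) / (n + 1), (n * sq_mean + w * w) / (n + 1), n + 1

@torch.no_grad()
def swag_sample(mean, sq_mean):
    # Draw one weight vector from N(mean, diag(var)).
    var = (sq_mean - mean * mean).clamp_min(1e-8)
    return mean + var.sqrt() * torch.randn_like(mean)
```

In use, one collects moments over the tail of training and averages predictive densities over several sampled weight vectors; an input scored as low-density under this average is flagged as OoD.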
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
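The following is only a toy rendition of counter-example-guided refinement, not the paper's meta-algorithm: it bounds a sigmoid-derived quantity g(x) = sigmoid(x) * (1 - sigmoid(x)) with deliberately loose interval arithmetic and bisects any box that raises a spurious alarm.

```python
# Toy abstraction-refinement loop for a sigmoid-based property (illustration
# of the general idea only; the paper's meta-algorithm is far more general).
import math

def sig(x):
    return 1.0 / (1.0 + math.exp(-x))

def abstract_max(lo, hi):
    # Sound upper bound for g(x) = sig(x) * (1 - sig(x)) on [lo, hi]:
    # bound each factor separately (sig is monotone), then take the largest
    # endpoint product. Loose, because the factors are treated independently.
    a_lo, a_hi = sig(lo), sig(hi)
    b_lo, b_hi = 1.0 - a_hi, 1.0 - a_lo
    return max(a_lo * b_lo, a_lo * b_hi, a_hi * b_lo, a_hi * b_hi)

def verify(lo, hi, bound):
    # Check that g(x) <= bound for all x in [lo, hi].
    boxes = [(lo, hi)]
    while boxes:
        l, h = boxes.pop()
        if abstract_max(l, h) <= bound:
            continue                         # abstraction proves this box
        m = 0.5 * (l + h)
        if sig(m) * (1.0 - sig(m)) > bound:
            return False, m                  # genuine counterexample
        if h - l <= 1e-6:
            continue                         # toy tolerance cut-off
        boxes += [(l, m), (m, h)]            # spurious alarm: refine the box
    return True, None

print(verify(-4.0, 4.0, 0.3))                # (True, None): property holds
```

The counterexample check at the box midpoint is what distinguishes a genuine violation from slack introduced by the abstraction, which is the core of the CEGAR loop.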
- Defending Variational Autoencoders from Adversarial Attacks with MCMC [74.36233246536459]
Variational autoencoders (VAEs) are deep generative models used in various domains.
As previous work has shown, one can easily fool VAEs into producing unexpected latent representations and reconstructions for a visually slightly modified input.
Here, we examine several objective functions for constructing adversarial attacks, suggest metrics to assess model robustness, and propose a solution.
arXiv Detail & Related papers (2022-03-18T13:25:18Z)
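The defence purifies latent codes with MCMC at test time; below is a simplified, hedged sketch that uses unadjusted Langevin dynamics (the paper itself uses Hamiltonian Monte Carlo) to drift a possibly attacked code toward higher posterior density under an assumed Gaussian observation model.

```python
# Hedged sketch of MCMC purification of a latent code (Langevin stand-in for
# the HMC sampler used in the cited paper; the decoder model is an assumption).
import torch

def langevin_refine(x, z0, decoder, steps=20, eps=1e-2):
    # Start from the encoder's (possibly attacked) code z0 and take noisy
    # gradient steps on log p(z) + log p(x | z).
    z = z0.detach().clone().requires_grad_(True)
    for _ in range(steps):
        log_p = (-0.5 * (z ** 2).sum()                   # N(0, I) prior
                 - 0.5 * ((decoder(z) - x) ** 2).sum())  # Gaussian likelihood
        (grad,) = torch.autograd.grad(log_p, z)
        with torch.no_grad():
            z += 0.5 * eps * grad + eps ** 0.5 * torch.randn_like(z)
    return z.detach()
```

Decoding the refined code rather than the raw encoder output is what restores the reconstruction after an attack.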
- Inferential Wasserstein Generative Adversarial Networks [9.859829604054127]
We introduce a novel inferential Wasserstein GAN (iWGAN) model, which is a principled framework to fuse auto-encoders and WGANs.
The iWGAN greatly mitigates the symptom of mode collapse, speeds up the convergence, and is able to provide a measurement of quality check for each individual sample.
arXiv Detail & Related papers (2021-09-13T00:43:21Z)
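A rough sketch of the fusion idea follows; this is our illustrative reading with placeholder networks, not the authors' primal-dual objective, and a practical critic would also need a Lipschitz constraint (weight clipping or a gradient penalty).

```python
# Illustrative fusion of an autoencoder with a WGAN critic (placeholder
# networks; not the iWGAN paper's exact objective).
import torch
import torch.nn as nn

d_x, d_z = 784, 16
E = nn.Sequential(nn.Linear(d_x, 128), nn.ReLU(), nn.Linear(128, d_z))
G = nn.Sequential(nn.Linear(d_z, 128), nn.ReLU(), nn.Linear(128, d_x))
f = nn.Sequential(nn.Linear(d_x, 128), nn.ReLU(), nn.Linear(128, 1))  # critic

x = torch.rand(64, d_x)        # stand-in data batch
z = torch.randn(64, d_z)       # prior draws

# Critic ascends the Kantorovich-Rubinstein objective.
critic_loss = f(G(z)).mean() - f(x).mean()

# Encoder/generator descend reconstruction plus the critic's W1 estimate.
recon = (G(E(x)) - x).pow(2).sum(dim=1).mean()
gen_loss = recon + f(x).mean() - f(G(z)).mean()
```

The per-sample quality check mentioned in the summary can then be read off critic values for individual samples, in the spirit of a duality gap.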
- P-WAE: Generalized Patch-Wasserstein Autoencoder for Anomaly Screening [17.24628770042803]
We propose a novel Patch-wise Wasserstein AutoEncoder (P-WAE) architecture to alleviate the challenges of anomaly screening.
In particular, a patch-wise variational inference model coupled with jigsaw-puzzle solving is designed.
Comprehensive experiments, conducted on the MVTec AD dataset, demonstrate the superior performance of our proposed method.
arXiv Detail & Related papers (2021-08-09T05:31:45Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour for the learned representations, as well as the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
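As a hedged illustration of one way to operationalize self-consistency (a simplified stand-in, not the paper's formulation), one can re-encode the reconstruction and penalize drift in latent space:

```python
# Simplified self-consistency penalty (a stand-in for the paper's notion):
# the code of a reconstruction should match the code of the original input.
import torch

def self_consistency_loss(x, enc, dec):
    z = enc(x)                  # code of the input
    z_hat = enc(dec(z))         # code of the reconstruction
    return ((z_hat - z) ** 2).sum(dim=1).mean()
```

Adding such a term to the usual training objective is one way to encourage the insensitivity to small input perturbations reported above.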
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity-inducing adversarial loss for learning latent variables, thereby obtaining the diversity in output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
- Adversarial Attack on Deep Product Quantization Network for Image Retrieval [74.85736968193879]
Deep product quantization network (DPQN) has recently received much attention in fast image retrieval tasks.
Recent studies show that deep neural networks (DNNs) are vulnerable to input with small and maliciously designed perturbations.
We propose product quantization adversarial generation (PQ-AG) to generate adversarial examples for product quantization based retrieval systems.
arXiv Detail & Related papers (2020-02-26T09:25:58Z)
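For context, plain product quantization, the structure that DPQN makes differentiable, fits in a few lines; the codebooks below are random placeholders where a real system would use k-means.

```python
# Plain product quantization sketch (random codebooks as placeholders).
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 8, 2, 4                            # dim, sub-spaces, codewords each
books = rng.normal(size=(m, k, d // m))      # per-sub-space codebooks

def pq_encode(x):
    # Snap each sub-vector to its nearest codeword; store only the indices.
    parts = x.reshape(m, d // m)
    return np.array([((books[j] - parts[j]) ** 2).sum(axis=1).argmin()
                     for j in range(m)])

def pq_decode(codes):
    # Coarse reconstruction used for fast distance estimation in retrieval.
    return np.concatenate([books[j][codes[j]] for j in range(m)])

x = rng.normal(size=d)
x_hat = pq_decode(pq_encode(x))
```

Because retrieval distances are computed against these snapped codes, a perturbation that flips a few codeword assignments can derail the ranking, which is, roughly, the vulnerability PQ-AG targets.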
- Regularized Autoencoders via Relaxed Injective Probability Flow [35.39933775720789]
Invertible flow-based generative models are an effective method for learning to generate samples, while allowing for tractable likelihood computation and inference.
We propose a generative model based on probability flows that does away with the bijectivity requirement on the model and only assumes injectivity.
arXiv Detail & Related papers (2020-02-20T18:22:46Z)
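When the map is injective rather than bijective, the change of variables swaps |det J| for det(JᵀJ)^{1/2}. A minimal sketch for a linear injective map, with all quantities illustrative:

```python
# Density on the image of an injective linear map g(z) = A z, A in R^{d x k}
# with full column rank: log p_X(A z) = log p_Z(z) - 0.5 * logdet(A^T A).
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 2
A = rng.normal(size=(d, k))          # injective with probability one

def log_density_on_image(z):
    log_pz = -0.5 * (z @ z) - 0.5 * k * np.log(2.0 * np.pi)  # N(0, I) prior
    sign, logdet = np.linalg.slogdet(A.T @ A)
    return log_pz - 0.5 * logdet

print(log_density_on_image(rng.normal(size=k)))
```

Relaxing bijectivity to injectivity keeps this likelihood tractable while freeing the model from dimension-preserving architectures.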
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.