Laplacian Autoencoders for Learning Stochastic Representations
- URL: http://arxiv.org/abs/2206.15078v2
- Date: Sun, 3 Jul 2022 09:26:19 GMT
- Title: Laplacian Autoencoders for Learning Stochastic Representations
- Authors: Marco Miani and Frederik Warburg and Pablo Moreno-Muñoz and Nicki
Skafte Detlefsen and Søren Hauberg
- Abstract summary: We present a Bayesian autoencoder for unsupervised representation learning, which is trained using a novel variational lower-bound of the autoencoder evidence.
We show that our Laplacian autoencoder estimates well-calibrated uncertainties in both latent and output space.
- Score: 0.6999740786886537
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Established methods for unsupervised representation learning, such
as variational autoencoders, produce either no or poorly calibrated uncertainty
estimates, making it difficult to evaluate whether learned representations are
stable and reliable. In this work, we present a Bayesian autoencoder for unsupervised
representation learning, which is trained using a novel variational lower-bound
of the autoencoder evidence. This is maximized using Monte Carlo EM with a
variational distribution that takes the shape of a Laplace approximation. We
develop a new Hessian approximation that scales linearly with data size
allowing us to model high-dimensional data. Empirically, we show that our
Laplacian autoencoder estimates well-calibrated uncertainties in both latent
and output space. We demonstrate that this results in improved performance
across a multitude of downstream tasks.
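The recipe the abstract describes (a Laplace approximation fitted around a trained autoencoder, with a Hessian surrogate whose cost is linear in the number of data points) can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's method: it uses a linear autoencoder, a diagonal squared-gradient (Fisher-style) Hessian surrogate, and plain weight sampling in place of the paper's variational lower-bound and Monte Carlo EM.

```python
# Toy sketch: diagonal Laplace approximation around a MAP-trained linear
# autoencoder. All sizes, learning rates, and the Hessian surrogate are
# illustrative choices, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))           # data, N x d
W = rng.normal(scale=0.1, size=(2, 4))  # encoder, k x d
V = rng.normal(scale=0.1, size=(4, 2))  # decoder, d x k

# 1) MAP training: gradient descent on the mean reconstruction loss.
lr = 0.05
for _ in range(500):
    Z = X @ W.T             # latent codes,  N x k
    R = Z @ V.T - X         # residuals,     N x d
    gV = 2 * R.T @ Z / len(X)
    gW = 2 * (R @ V).T @ X / len(X)
    V -= lr * gV
    W -= lr * gW

# 2) Diagonal Hessian surrogate: prior precision plus per-sample squared
#    gradients. A single pass over the data -> cost linear in N.
prior = 1.0
hV = np.full_like(V, prior)
hW = np.full_like(W, prior)
for x in X:
    z = W @ x
    r = V @ z - x
    hV += (2 * np.outer(r, z)) ** 2
    hW += (2 * np.outer(V.T @ r, x)) ** 2

# 3) Posterior N(theta_MAP, H^{-1}) with diagonal covariance: sample
#    weights, decode, and read off output-space uncertainty as the
#    standard deviation over reconstructions.
recons = []
for _ in range(50):
    Vs = V + rng.normal(size=V.shape) / np.sqrt(hV)
    Ws = W + rng.normal(size=W.shape) / np.sqrt(hW)
    recons.append(X @ Ws.T @ Vs.T)
pred_std = np.std(recons, axis=0)       # per-feature predictive std, N x d
print(pred_std.shape)                   # (256, 4)
```

The same three-step shape (MAP fit, curvature estimate, posterior sampling) is what gives the method calibrated uncertainty in both latent and output space; the paper's contribution lies in the specific lower bound and the scalable Hessian approximation.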
Related papers
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z) - Enhancing Multiple Reliability Measures via Nuisance-extended
Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many of the predictive signals in the data can instead come from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z) - Fully Bayesian Autoencoders with Latent Sparse Gaussian Processes [23.682509357305406]
Autoencoders and their variants are among the most widely used models in representation learning and generative modeling.
We propose a novel Sparse Gaussian Process Bayesian Autoencoder model in which we impose fully sparse Gaussian Process priors on the latent space of a Bayesian Autoencoder.
arXiv Detail & Related papers (2023-02-09T09:57:51Z) - Propagating Variational Model Uncertainty for Bioacoustic Call Label
Smoothing [15.929064190849665]
We focus on using the predictive uncertainty signal calculated by Bayesian neural networks to guide learning in the self-same task the model is being trained on.
Not opting for costly Monte Carlo sampling of weights, we propagate the approximate hidden variance in an end-to-end manner.
We show that, through the explicit usage of the uncertainty in the loss calculation, the variational model is led to improved predictive and calibration performance.
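The idea of propagating an approximate variance end-to-end, rather than Monte Carlo sampling weights, can be sketched for a single linear layer, where the moments propagate in closed form. This toy is an assumption: the paper works with Bayesian neural networks and a full training loop, whereas here only one layer and a Gaussian-NLL-style attenuated loss are shown.

```python
# Toy sketch: closed-form variance propagation through a linear layer,
# then an uncertainty-attenuated loss. Sizes and weights are illustrative.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))
x_mean = rng.normal(size=5)
x_var = np.full(5, 0.1)        # approximate per-feature input variance

# For y = W x with independent input features:
#   E[y] = W mu,   Var[y] = (W**2) sigma^2   (elementwise square of W)
y_mean = W @ x_mean
y_var = (W ** 2) @ x_var

# Gaussian NLL-style loss: a large predicted variance down-weights the
# squared error, which is how the uncertainty enters the loss explicitly.
target = rng.normal(size=3)
nll = 0.5 * np.sum((target - y_mean) ** 2 / y_var + np.log(y_var))
```

Stacking such moment-matching layers is what makes the propagation end-to-end and avoids the cost of sampling weights at every step.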
arXiv Detail & Related papers (2022-10-19T13:04:26Z) - On the Regularization of Autoencoders [14.46779433267854]
We show that the unsupervised setting by itself induces strong additional regularization, i.e., a severe reduction in the model-capacity of the learned autoencoder.
We derive that a deep nonlinear autoencoder cannot fit the training data more accurately than a linear autoencoder does if both models have the same dimensionality in their last layer.
We demonstrate that this linear approximation is accurate across all model ranks in our experiments on three well-known data sets.
arXiv Detail & Related papers (2021-10-21T18:28:25Z) - Training on Test Data with Bayesian Adaptation for Covariate Shift [96.3250517412545]
Deep neural networks often make inaccurate predictions with unreliable uncertainty estimates.
We derive a Bayesian model that provides a well-defined relationship between unlabeled inputs under distributional shift and model parameters.
We show that our method improves both accuracy and uncertainty estimation.
arXiv Detail & Related papers (2021-09-27T01:09:08Z) - Consistency Regularization for Variational Auto-Encoders [14.423556966548544]
Variational auto-encoders (VAEs) are a powerful approach to unsupervised learning.
We propose a regularization method to enforce consistency in VAEs.
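A consistency regularizer of this kind can be sketched as a penalty that pulls the latent codes of an input and a slightly perturbed copy together. The encoder, perturbation, and weighting below are illustrative assumptions, not the paper's exact objective, which is formulated for the full VAE ELBO.

```python
# Toy sketch: a consistency penalty on an encoder's latent codes.
# The deterministic tanh encoder and the weight lam are illustrative.
import numpy as np

rng = np.random.default_rng(2)
W_enc = rng.normal(size=(2, 4))

def encode(x):
    return np.tanh(W_enc @ x)   # toy stand-in for a VAE encoder mean

x = rng.normal(size=4)
x_perturbed = x + 0.01 * rng.normal(size=4)

# Consistency term, to be added to the usual reconstruction/ELBO loss:
# nearby inputs should receive nearby latent codes.
lam = 1.0
consistency = lam * np.sum((encode(x) - encode(x_perturbed)) ** 2)
```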
arXiv Detail & Related papers (2021-05-31T10:26:32Z) - Autoencoding Variational Autoencoder [56.05008520271406]
A VAE's encoder does not, in general, consistently encode samples generated from its own decoder. We study the implications of this behaviour for the learned representations, and the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the outcome of a transformation is predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.