AAVAE: Augmentation-Augmented Variational Autoencoders
- URL: http://arxiv.org/abs/2107.12329v1
- Date: Mon, 26 Jul 2021 17:04:30 GMT
- Title: AAVAE: Augmentation-Augmented Variational Autoencoders
- Authors: William Falcon, Ananya Harsh Jha, Teddy Koker and Kyunghyun Cho
- Abstract summary: We introduce augmentation-augmented variational autoencoders (AAVAE), a third approach to self-supervised learning based on autoencoding.
We empirically evaluate the proposed AAVAE on image classification, similar to how recent contrastive and non-contrastive learning algorithms have been evaluated.
- Score: 43.73699420145321
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent methods for self-supervised learning can be grouped into two
paradigms: contrastive and non-contrastive approaches. Their success can
largely be attributed to data augmentation pipelines which generate multiple
views of a single input that preserve the underlying semantics. In this work,
we introduce augmentation-augmented variational autoencoders (AAVAE), a third
approach to self-supervised learning based on autoencoding. We derive AAVAE
starting from the conventional variational autoencoder (VAE), by replacing the
KL divergence regularization, which is agnostic to the input domain, with data
augmentations that explicitly encourage the internal representations to encode
domain-specific invariances and equivariances. We empirically evaluate the
proposed AAVAE on image classification, similar to how recent contrastive and
non-contrastive learning algorithms have been evaluated. Our experiments
confirm the effectiveness of data augmentation as a replacement for KL
divergence regularization. The AAVAE outperforms the VAE by 30% on CIFAR-10 and
40% on STL-10. The results for AAVAE are largely comparable to the
state-of-the-art for self-supervised learning.
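A minimal sketch of the training step this abstract describes is given below: the encoder sees an augmented view of the input, the decoder reconstructs the original (un-augmented) input, and no KL term is added. The architecture, augmentation pipeline, and reconstruction loss here are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms

# Illustrative augmentation pipeline; the paper relies on domain-specific
# augmentations of this kind to induce invariances in the representation.
augment = transforms.Compose([
    transforms.RandomResizedCrop(32, scale=(0.5, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
])

class AAVAESketch(nn.Module):
    # A stochastic encoder/decoder pair trained without KL regularization.
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
        self.fc_mu = nn.Linear(512, latent_dim)
        self.fc_logvar = nn.Linear(512, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, 3 * 32 * 32))

    def forward(self, x_aug):
        h = self.encoder(x_aug)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterized sample
        return self.decoder(z).view_as(x_aug)

def aavae_loss(model, x):
    # Encode an augmented view but reconstruct the ORIGINAL input;
    # unlike a VAE, no KL(q(z|x) || p(z)) term is added to the loss.
    x_aug = torch.stack([augment(img) for img in x])
    return F.mse_loss(model(x_aug), x)

In a full training loop this loss replaces the usual ELBO; per the abstract, the augmentation pipeline takes over the regularizing role of the KL divergence.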
Related papers
- Denoising Diffusion Autoencoders are Unified Self-supervised Learners [58.194184241363175]
This paper shows that the networks in diffusion models, namely denoising diffusion autoencoders (DDAE), are unified self-supervised learners.
DDAE has already learned strongly linear-separable representations within its intermediate layers without auxiliary encoders.
Our diffusion-based approach achieves 95.9% and 50.0% linear evaluation accuracies on CIFAR-10 and Tiny-ImageNet, respectively.
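Linear evaluation here means freezing the pretrained denoising network and fitting a linear classifier on activations taken from one of its intermediate layers. The sketch below shows that generic protocol; the feature extractor and layer choice are assumptions rather than details from the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe(extract_features, train_batches, test_batches):
    # extract_features(x) is assumed to return intermediate-layer activations
    # of the frozen denoising network as a NumPy array of shape (batch, dim).
    X_train = np.concatenate([extract_features(x) for x, _ in train_batches])
    y_train = np.concatenate([y for _, y in train_batches])
    X_test = np.concatenate([extract_features(x) for x, _ in test_batches])
    y_test = np.concatenate([y for _, y in test_batches])
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf.score(X_test, y_test)  # linear evaluation accuracy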
arXiv Detail & Related papers (2023-03-17T04:20:47Z)
- Covariate-informed Representation Learning with Samplewise Optimal Identifiable Variational Autoencoders [15.254297587065595]
The recently proposed identifiable variational autoencoder (iVAE) provides a promising approach for learning latent independent components of the data.
We develop a new approach, covariate-informed identifiable VAE (CI-iVAE).
In doing so, the objective function enforces the inverse relation, and the learned representation contains more information about the observations.
arXiv Detail & Related papers (2022-02-09T00:18:33Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
The Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Boosting the Generalization Capability in Cross-Domain Few-shot Learning via Noise-enhanced Supervised Autoencoder [23.860842627883187]
We teach the model to capture broader variations of the feature distributions with a novel noise-enhanced supervised autoencoder (NSAE).
NSAE trains the model by jointly reconstructing inputs and predicting the labels of inputs as well as their reconstructed pairs.
We also take advantage of the NSAE structure and propose a two-step fine-tuning procedure that achieves better adaptation and improves classification performance in the target domain.
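A hedged sketch of the joint objective described above, reconstructing noise-perturbed inputs while classifying both the inputs and their reconstructions, is given below; the noise model, module interfaces, and loss weights are illustrative assumptions.

import torch
import torch.nn.functional as F

def nsae_loss(encoder, decoder, classifier, x, y, noise_std=0.1, alpha=1.0):
    # Noise enhancement (assumed form): perturb inputs before encoding.
    x_noisy = x + noise_std * torch.randn_like(x)
    x_hat = decoder(encoder(x_noisy))
    recon = F.mse_loss(x_hat, x)                                 # reconstruct inputs
    cls_input = F.cross_entropy(classifier(encoder(x)), y)      # predict labels of inputs
    cls_recon = F.cross_entropy(classifier(encoder(x_hat)), y)  # ... and of their reconstructed pairs
    return recon + alpha * (cls_input + cls_recon)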
arXiv Detail & Related papers (2021-08-11T04:45:56Z)
- Consistency Regularization for Variational Auto-Encoders [14.423556966548544]
Variational auto-encoders (VAEs) are a powerful approach to unsupervised learning.
We propose a regularization method to enforce consistency in VAEs.
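The summary does not spell out the form of the regularizer; one common way to enforce such consistency, sketched below under that assumption, is to penalize the divergence between the approximate posteriors of an input and of a semantics-preserving transformation of it.

import torch

def consistency_penalty(encode, x, x_aug):
    # encode(x) is assumed to return the Gaussian posterior parameters (mu, logvar).
    mu1, logvar1 = encode(x)
    mu2, logvar2 = encode(x_aug)
    var1, var2 = logvar1.exp(), logvar2.exp()
    # KL( N(mu1, var1) || N(mu2, var2) ), summed over latent dimensions
    kl = 0.5 * ((var1 + (mu1 - mu2) ** 2) / var2 + logvar2 - logvar1 - 1).sum(dim=-1)
    return kl.mean()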
arXiv Detail & Related papers (2021-05-31T10:26:32Z)
- Hierarchical Variational Autoencoder for Visual Counterfactuals [79.86967775454316]
Conditional Variational Autoencoders (VAEs) are gathering significant attention as an Explainable Artificial Intelligence (XAI) tool.
In this paper we show how relaxing the effect of the posterior leads to successful counterfactuals.
We introduce VAEX, a hierarchical VAE designed for this approach that can visually audit a classifier in applications.
arXiv Detail & Related papers (2021-02-01T14:07:11Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour for the learned representations, as well as the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
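Self-consistency, as used above, asks that re-encoding the decoder's output recover the code that produced it. The sketch below illustrates one such penalty; the exact divergence and sampling scheme are assumptions rather than the paper's formulation.

import torch
import torch.nn.functional as F

def self_consistency_penalty(encode, decode, x):
    # encode(x) is assumed to return (mu, logvar) of q(z|x); decode(z) returns x_hat.
    mu, logvar = encode(x)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # sample from q(z|x)
    x_hat = decode(z)
    mu_hat, _ = encode(x_hat)      # re-encode the decoder output
    return F.mse_loss(mu_hat, z)   # encourage encode(decode(z)) to recover z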
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- Dual Adversarial Auto-Encoders for Clustering [152.84443014554745]
We propose the Dual Adversarial Auto-encoder (Dual-AAE) for unsupervised clustering.
By performing variational inference on the objective function of Dual-AAE, we derive a new reconstruction loss which can be optimized by training a pair of Auto-encoders.
Experiments on four benchmarks show that Dual-AAE achieves superior performance over state-of-the-art clustering methods.
arXiv Detail & Related papers (2020-08-23T13:16:34Z)