How do Variational Autoencoders Learn? Insights from Representational Similarity
- URL: http://arxiv.org/abs/2205.08399v1
- Date: Tue, 17 May 2022 14:31:57 GMT
- Title: How do Variational Autoencoders Learn? Insights from Representational Similarity
- Authors: Lisa Bonheme and Marek Grzes
- Abstract summary: We study the internal behaviour of Variational Autoencoders (VAEs) using representational similarity techniques.
Using the CKA and Procrustes similarities, we found that the encoders' representations are learned long before the decoders'.
- Score: 2.969705152497174
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The ability of Variational Autoencoders (VAEs) to learn disentangled
representations has made them popular for practical applications. However,
their behaviour is not yet fully understood. For example, the questions of when
they provide disentangled representations and when they suffer from posterior
collapse are still areas of active research. Despite this, there are no
layerwise comparisons of the representations learned by VAEs, which would
further our understanding of these models. In this paper, we thus look into the
internal behaviour of VAEs using representational similarity techniques.
Specifically, using the CKA and Procrustes similarities, we found that the
encoders' representations are learned long before the decoders', and this
behaviour is independent of hyperparameters, learning objectives, and datasets.
Moreover, the encoders' representations up to the mean and variance layers are
similar across hyperparameters and learning objectives.
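Both similarity measures used in the paper are standard tools from the representational-similarity literature. As a minimal sketch (assuming the linear variant of CKA and the common orthogonal-Procrustes formulation, where similarity is the nuclear norm of the cross-product of centred, unit-Frobenius-norm matrices; the paper's exact implementation details are not given here), the two measures can be computed as:

```python
import numpy as np

def _center(x):
    """Subtract each column's mean (centre the features)."""
    return x - x.mean(axis=0, keepdims=True)

def linear_cka(x, y):
    """Linear CKA between two representation matrices of shape
    (n_examples, n_features); 1.0 means identical up to an
    orthogonal transform and isotropic scaling."""
    x, y = _center(x), _center(y)
    num = np.linalg.norm(x.T @ y, "fro") ** 2
    den = np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro")
    return num / den

def procrustes_similarity(x, y):
    """Orthogonal Procrustes similarity: nuclear norm of X^T Y after
    centring and rescaling each matrix to unit Frobenius norm."""
    x, y = _center(x), _center(y)
    x = x / np.linalg.norm(x)
    y = y / np.linalg.norm(y)
    return np.linalg.norm(x.T @ y, ord="nuc")
```

Applied layerwise, these functions compare the activations of a given encoder or decoder layer at different training checkpoints; both scores are invariant to orthogonal transformations of the representations, which is what makes them suitable for comparing layers across runs.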
Related papers
- How good are variational autoencoders at transfer learning? [2.969705152497174]
We use Centred Kernel Alignment to evaluate the similarity of VAEs trained on different datasets.
We discuss the implications for selecting which components of a VAE to retrain and propose a method to visually assess whether transfer learning is likely to help on classification tasks.
arXiv Detail & Related papers (2023-04-21T06:32:32Z)
- What Are You Token About? Dense Retrieval as Distributions Over the Vocabulary [68.77983831618685]
We propose to interpret the vector representations produced by dual encoders by projecting them into the model's vocabulary space.
We show that the resulting projections contain rich semantic information, and draw connection between them and sparse retrieval.
arXiv Detail & Related papers (2022-12-20T16:03:25Z)
- Improving VAE-based Representation Learning [26.47244578124654]
We study what properties are required for good representations and how different VAE structure choices could affect the learned properties.
We show that by using a decoder that prefers to learn local features, the remaining global features can be well captured by the latent.
arXiv Detail & Related papers (2022-05-28T23:00:18Z)
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z)
- Consistency Regularization for Variational Auto-Encoders [14.423556966548544]
Variational auto-encoders (VAEs) are a powerful approach to unsupervised learning.
We propose a regularization method to enforce consistency in VAEs.
arXiv Detail & Related papers (2021-05-31T10:26:32Z)
- Diffusion-Based Representation Learning [65.55681678004038]
We augment the denoising score matching framework to enable representation learning without any supervised signal.
The introduced diffusion-based representation learning relies on a new formulation of the denoising score matching objective.
Using the same approach, we propose to learn an infinite-dimensional latent code that achieves improvements of state-of-the-art models on semi-supervised image classification.
arXiv Detail & Related papers (2021-05-29T09:26:02Z)
- Hierarchical Variational Autoencoder for Visual Counterfactuals [79.86967775454316]
Conditional Variational Autoencoders (VAEs) are gathering significant attention as an Explainable Artificial Intelligence (XAI) tool.
In this paper we show how relaxing the effect of the posterior leads to successful counterfactuals.
We introduce VAEX, a hierarchical VAE designed for this approach that can visually audit a classifier in applications.
arXiv Detail & Related papers (2021-02-01T14:07:11Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome is predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.