How good are variational autoencoders at transfer learning?
- URL: http://arxiv.org/abs/2304.10767v1
- Date: Fri, 21 Apr 2023 06:32:32 GMT
- Title: How good are variational autoencoders at transfer learning?
- Authors: Lisa Bonheme, Marek Grzes
- Abstract summary: We use Centred Kernel Alignment to evaluate the similarity of VAEs trained on different datasets.
We discuss the implications for selecting which components of a VAE to retrain and propose a method to visually assess whether transfer learning is likely to help on classification tasks.
- Score: 2.969705152497174
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Variational autoencoders (VAEs) are used for transfer learning across various
research domains such as music generation or medical image analysis. However,
there is no principled way to assess before transfer which components to
retrain or whether transfer learning is likely to help on a target task. We
propose to explore this question through the lens of representational
similarity. Specifically, using Centred Kernel Alignment (CKA) to evaluate the
similarity of VAEs trained on different datasets, we show that encoders'
representations are generic while decoders' are dataset-specific. Based on these insights, we
discuss the implications for selecting which components of a VAE to retrain and
propose a method to visually assess whether transfer learning is likely to help
on classification tasks.
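The similarity measure at the core of the paper, linear CKA, compares two sets of activations for the same inputs. As a minimal sketch (not the authors' code; `X` and `Y` are assumed to be n-examples-by-features activation matrices), it can be computed as:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centred Kernel Alignment between two representation matrices.

    X, Y: arrays of shape (n_examples, n_features); the two layers may
    have different feature dimensions but must share the same examples.
    """
    # Centre each feature (column) so CKA is invariant to mean shifts
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F), in [0, 1]
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den
```

A value near 1 indicates highly similar representations (CKA is invariant to orthogonal transformations and isotropic scaling), which is how generic encoder layers show up across datasets.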
Related papers
- With a Little Help from your own Past: Prototypical Memory Networks for
Image Captioning [47.96387857237473]
We devise a network which can perform attention over activations obtained while processing other training samples.
Our memory models the distribution of past keys and values through the definition of prototype vectors.
We demonstrate that our proposal increases the performance of an encoder-decoder Transformer by 3.7 CIDEr points, both when training with cross-entropy only and when fine-tuning with self-critical sequence training.
arXiv Detail & Related papers (2023-08-23T18:53:00Z) - Explaining Cross-Domain Recognition with Interpretable Deep Classifier [100.63114424262234]
The Interpretable Deep Classifier (IDC) learns the nearest source samples of a target sample as evidence on which the classifier bases its decision.
Our IDC leads to a more explainable model with almost no accuracy degradation and effectively calibrates classification for optimal reject options.
arXiv Detail & Related papers (2022-11-15T15:58:56Z) - How do Variational Autoencoders Learn? Insights from Representational
Similarity [2.969705152497174]
We study the internal behaviour of Variational Autoencoders (VAEs) using representational similarity techniques.
Using the CKA and Procrustes similarities, we found that the encoders' representations are learned long before the decoders'.
arXiv Detail & Related papers (2022-05-17T14:31:57Z) - Visual Transformer for Task-aware Active Learning [49.903358393660724]
We present a novel pipeline for pool-based Active Learning.
Our method exploits accessible unlabelled examples during training to estimate their correlation with the labelled examples.
The Visual Transformer models non-local visual concept dependencies between labelled and unlabelled examples.
arXiv Detail & Related papers (2021-06-07T17:13:59Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour for the learned representations, as well as the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome is predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z) - Simple and Effective VAE Training with Calibrated Decoders [123.08908889310258]
Variational autoencoders (VAEs) provide an effective and simple method for modeling complex distributions.
We study the impact of calibrated decoders, which learn the uncertainty of the decoding distribution.
We propose a simple but novel modification to the commonly used Gaussian decoder, which computes the prediction variance analytically.
arXiv Detail & Related papers (2020-06-23T17:57:47Z)
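On the last entry: for a Gaussian decoder with a shared scalar variance, the maximum-likelihood variance has a closed form (it equals the mean squared reconstruction error), which is the essence of computing the prediction variance analytically. A minimal sketch of that idea (hypothetical helper, not the paper's code; `x` and `x_hat` are assumed flattened data and reconstruction arrays):

```python
import numpy as np

def gaussian_nll_analytic_sigma(x, x_hat, eps=1e-8):
    """Gaussian decoder NLL with the variance set to its closed-form optimum.

    For a shared scalar variance, the maximum-likelihood sigma^2 is the mean
    squared reconstruction error; substituting it back into the Gaussian
    negative log-likelihood gives 0.5 * d * (log(2*pi*sigma^2) + 1)
    for d output dimensions.
    """
    sigma2 = np.mean((x - x_hat) ** 2) + eps  # eps guards against log(0)
    d = x.size
    return 0.5 * d * (np.log(2.0 * np.pi * sigma2) + 1.0)
```

Because sigma^2 is no longer a fixed hyperparameter, the reconstruction and KL terms of the VAE objective are balanced automatically rather than via a hand-tuned weight.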
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.