A manifold learning perspective on representation learning: Learning
decoder and representations without an encoder
- URL: http://arxiv.org/abs/2108.13910v1
- Date: Tue, 31 Aug 2021 15:08:50 GMT
- Title: A manifold learning perspective on representation learning: Learning
decoder and representations without an encoder
- Authors: Viktoria Schuster and Anders Krogh
- Abstract summary: Autoencoders are commonly used in representation learning.
Inspired by manifold learning, we show that the decoder can be trained on its own by learning the representations of the training samples.
Our approach of training the decoder alone facilitates representation learning even on small data sets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autoencoders are commonly used in representation learning. They consist of an
encoder and a decoder, which provide a straightforward way to map
$n$-dimensional data in input space to a lower $m$-dimensional representation
space and back. The decoder itself defines an $m$-dimensional manifold in input
space. Inspired by manifold learning, we show that the decoder can be trained
on its own by learning the representations of the training samples along with
the decoder weights using gradient descent. A sum-of-squares loss then
corresponds to optimizing the manifold to have the smallest Euclidean distance
to the training samples, and similarly for other loss functions. We derive
expressions for the number of samples needed to specify the encoder and decoder
and show that the decoder generally requires far fewer training samples to be
well-specified than the encoder. We discuss the training of autoencoders from
this perspective and relate it to previous work in the field that uses noisy
training examples and other types of regularization. On the natural image data
sets MNIST and CIFAR10, we demonstrate that the decoder is much better suited
to learn a low-dimensional representation, especially when trained on small
data sets. Using simulated gene regulatory data, we further show that the
decoder alone leads to better generalization and meaningful representations.
Our approach of training the decoder alone facilitates representation learning
even on small data sets and can lead to improved training of autoencoders. We
hope that the simple analyses presented will also contribute to an improved
conceptual understanding of representation learning.
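The decoder-only training described in the abstract amounts to jointly minimizing a reconstruction loss over the decoder weights and one free $m$-dimensional representation per training sample, i.e. $\min_{\theta, \{z_i\}} \sum_i \lVert x_i - f_\theta(z_i)\rVert^2$. Below is a minimal, hedged PyTorch sketch of that idea; the layer sizes, optimizer, learning rate, and placeholder data are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn as nn

# Minimal decoder-only training sketch (all sizes and data are illustrative
# assumptions, not values from the paper).
n_samples, n_dim, m_dim = 1000, 784, 16     # samples, input dims n, latent dims m
x = torch.rand(n_samples, n_dim)            # placeholder training data

# Decoder f_theta: maps m-dimensional representations back to input space and
# thereby defines an m-dimensional manifold there.
decoder = nn.Sequential(
    nn.Linear(m_dim, 128),
    nn.ReLU(),
    nn.Linear(128, n_dim),
)

# One learnable m-dimensional representation z_i per training sample;
# no encoder network is used anywhere.
z = nn.Parameter(0.01 * torch.randn(n_samples, m_dim))

# Representations and decoder weights are optimized jointly by gradient descent.
optimizer = torch.optim.Adam(list(decoder.parameters()) + [z], lr=1e-3)

for step in range(200):
    optimizer.zero_grad()
    x_hat = decoder(z)
    # Sum-of-squares loss: sum_i ||x_i - f_theta(z_i)||^2, i.e. the squared
    # Euclidean distance between each sample and its decoded point.
    loss = ((x_hat - x) ** 2).sum()
    loss.backward()
    optimizer.step()
```

Because the representations are fitted directly by gradient descent, the $m$-dimensional manifold defined by the decoder is pulled toward the training samples, which is the manifold-learning reading of the sum-of-squares loss given in the abstract.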
Related papers
- Regress Before Construct: Regress Autoencoder for Point Cloud
Self-supervised Learning [18.10704604275133]
Masked Autoencoders (MAE) have demonstrated promising performance in self-supervised learning for 2D and 3D computer vision.
We propose Point Regress AutoEncoder (Point-RAE), a new scheme for regressive autoencoders for point cloud self-supervised learning.
Our approach is efficient during pre-training and generalizes well on various downstream tasks.
arXiv Detail & Related papers (2023-09-25T17:23:33Z) - Geometric Autoencoders -- What You See is What You Decode [12.139222986297263]
We propose a differential geometric perspective on the decoder, leading to insightful diagnostics for an embedding's distortion, and a new regularizer mitigating such distortion.
Our "Geometric Autoencoder" avoids stretching the embedding spuriously, so that the visualization captures the data structure more faithfully.
arXiv Detail & Related papers (2023-06-30T13:24:31Z) - Transfer Learning for Segmentation Problems: Choose the Right Encoder
and Skip the Decoder [0.0]
It is common practice to reuse models initially trained on different data to increase downstream task performance.
In this work, we investigate the impact of transfer learning for segmentation problems, i.e. pixel-wise classification problems.
We find that transfer learning the decoder does not help downstream segmentation tasks, while transfer learning the encoder is truly beneficial.
arXiv Detail & Related papers (2022-07-29T07:02:05Z) - KRNet: Towards Efficient Knowledge Replay [50.315451023983805]
A knowledge replay technique has been widely used in many tasks such as continual learning and continuous domain adaptation.
We propose a novel and efficient knowledge recording network (KRNet) which directly maps an arbitrary sample identity number to the corresponding datum.
Our KRNet requires significantly less storage cost for the latent codes and can be trained without the encoder sub-network.
arXiv Detail & Related papers (2022-05-23T08:34:17Z) - Toward a Geometrical Understanding of Self-supervised Contrastive
Learning [55.83778629498769]
Self-supervised learning (SSL) is one of the premier techniques to create data representations that are actionable for transfer learning in the absence of human annotations.
Mainstream SSL techniques rely on a specific deep neural network architecture with two cascaded neural networks: the encoder and the projector.
In this paper, we investigate how the strength of the data augmentation policies affects the data embedding.
arXiv Detail & Related papers (2022-05-13T23:24:48Z) - Small Lesion Segmentation in Brain MRIs with Subpixel Embedding [105.1223735549524]
We present a method to segment MRI scans of the human brain into ischemic stroke lesion and normal tissues.
We propose a neural network architecture in the form of a standard encoder-decoder where predictions are guided by a spatial expansion embedding network.
arXiv Detail & Related papers (2021-09-18T00:21:17Z) - EncoderMI: Membership Inference against Pre-trained Encoders in
Contrastive Learning [27.54202989524394]
We propose EncoderMI, the first membership inference method against image encoders pre-trained by contrastive learning.
We evaluate EncoderMI on image encoders pre-trained on multiple datasets by ourselves as well as the Contrastive Language-Image Pre-training (CLIP) image encoder, which is pre-trained on 400 million (image, text) pairs collected from the Internet and released by OpenAI.
arXiv Detail & Related papers (2021-08-25T03:00:45Z) - Dynamic Neural Representational Decoders for High-Resolution Semantic
Segmentation [98.05643473345474]
We propose a novel decoder, termed the dynamic neural representational decoder (NRD).
Since each location of the encoder's output corresponds to a local patch of the semantic labels, we represent these local patches of labels with compact neural networks.
This neural representation enables our decoder to leverage the smoothness prior in the semantic label space, and thus makes our decoder more efficient.
arXiv Detail & Related papers (2021-07-30T04:50:56Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour, namely that a VAE need not consistently re-encode samples generated by its own decoder, on the learned representations, and also the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - BinPlay: A Binary Latent Autoencoder for Generative Replay Continual
Learning [11.367079056418957]
We introduce a binary latent space autoencoder architecture to rehearse training samples for the continual learning of neural networks.
BinPlay is able to compute the binary embeddings of rehearsed samples on the fly without the need to keep them in memory.
arXiv Detail & Related papers (2020-11-25T08:50:58Z) - Simple and Effective VAE Training with Calibrated Decoders [123.08908889310258]
Variational autoencoders (VAEs) provide an effective and simple method for modeling complex distributions.
We study the impact of calibrated decoders, which learn the uncertainty of the decoding distribution.
We propose a simple but novel modification to the commonly used Gaussian decoder, which computes the prediction variance analytically.
arXiv Detail & Related papers (2020-06-23T17:57:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.