A Generic Self-Supervised Framework of Learning Invariant Discriminative
Features
- URL: http://arxiv.org/abs/2202.06914v1
- Date: Mon, 14 Feb 2022 18:09:43 GMT
- Title: A Generic Self-Supervised Framework of Learning Invariant Discriminative
Features
- Authors: Foivos Ntelemis, Yaochu Jin, Spencer A. Thomas
- Abstract summary: This paper proposes a generic SSL framework based on a constrained self-labelling assignment process.
The proposed training strategy outperforms a majority of state-of-the-art representation learning methods based on AE structures.
- Score: 9.614694312155798
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning (SSL) has become a popular method for generating
invariant representations without the need for human annotations. Nonetheless,
the desired invariant representation is achieved by utilising prior online
transformation functions on the input data. As a result, each SSL framework is
customised for a particular data type, e.g., visual data, and further
modifications are required if it is used for other dataset types. On the other
hand, autoencoder (AE), which is a generic and widely applicable framework,
mainly focuses on dimension reduction and is not suited for learning invariant
representation. This paper proposes a generic SSL framework based on a
constrained self-labelling assignment process that prevents degenerate
solutions. Specifically, the prior transformation functions are replaced with a
self-transformation mechanism, derived through an unsupervised training process
of adversarial training, for imposing invariant representations. Via the
self-transformation mechanism, pairs of augmented instances can be generated
from the same input data. Finally, a training objective based on contrastive
learning is designed by leveraging both the self-labelling assignment and the
self-transformation mechanism. Despite the fact that the self-transformation
process is very generic, the proposed training strategy outperforms a majority
of state-of-the-art representation learning methods based on AE structures. To
validate the performance of our method, we conduct experiments on four types of
data, namely visual, audio, text, and mass spectrometry data, and compare
methods in terms of four quantitative metrics. Our comparison results indicate
that the proposed method demonstrates robustness and successfully identifies
patterns within the datasets.
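The abstract describes a contrastive training objective applied to pairs of augmented instances produced by the self-transformation mechanism. As an illustration only, the standard InfoNCE-style contrastive term over such view pairs can be sketched as follows; the function name, NumPy implementation, and temperature value are assumptions for this sketch and are not taken from the paper:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Illustrative InfoNCE-style contrastive loss over paired views.

    z1, z2: (n, d) embeddings of two augmented views of the same n inputs,
    e.g. views produced by a learned self-transformation mechanism.
    Positive pairs are (z1[i], z2[i]); all other rows act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2n, d) stacked views
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize -> cosine sims
    sim = z @ z.T / temperature                       # (2n, 2n) similarity matrix
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # index of the positive partner for each row: row i pairs with row i+n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of the positive entry under a softmax over each row
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Pulling matched views together (low loss when z1 and z2 align row-by-row) while pushing all other instances apart is the generic mechanism; the paper additionally constrains a self-labelling assignment to prevent degenerate solutions, which this sketch does not model.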
Related papers
- Unsupervised Representation Learning from Sparse Transformation Analysis [79.94858534887801]
We propose to learn representations from sequence data by factorizing the transformations of the latent variables into sparse components.
Input data are first encoded as distributions of latent activations and subsequently transformed using a probability flow model.
arXiv Detail & Related papers (2024-10-07T23:53:25Z) - A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z) - Object Representations as Fixed Points: Training Iterative Refinement
Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
arXiv Detail & Related papers (2022-07-02T10:00:35Z) - Equivariant Contrastive Learning [20.369942206674576]
State-of-the-art self-supervised learning (SSL) pre-training produces semantically good representations.
We extend popular SSL methods to a more general framework which we name Equivariant Self-Supervised Learning (E-SSL).
We demonstrate E-SSL's effectiveness empirically on several popular computer vision benchmarks.
arXiv Detail & Related papers (2021-10-28T17:21:33Z) - Towards a Unified View of Parameter-Efficient Transfer Learning [108.94786930869473]
Fine-tuning large pre-trained language models on downstream tasks has become the de facto learning paradigm in NLP.
Recent work has proposed a variety of parameter-efficient transfer learning methods that only fine-tune a small number of (extra) parameters to attain strong performance.
We break down the design of state-of-the-art parameter-efficient transfer learning methods and present a unified framework that establishes connections between them.
arXiv Detail & Related papers (2021-10-08T20:22:26Z) - Model-agnostic and Scalable Counterfactual Explanations via
Reinforcement Learning [0.5729426778193398]
We propose a deep reinforcement learning approach that transforms the optimization procedure into an end-to-end learnable process.
Our experiments on real-world data show that our method is model-agnostic, relying only on feedback from model predictions.
arXiv Detail & Related papers (2021-06-04T16:54:36Z) - Self-supervised Detransformation Autoencoder for Representation Learning
in Open Set Recognition [0.0]
We propose a self-supervision method, Detransformation Autoencoder (DTAE) for the Open set recognition problem.
Our proposed self-supervision method achieves significant gains in detecting the unknown class and classifying the known classes.
arXiv Detail & Related papers (2021-05-28T02:45:57Z) - Exploring Complementary Strengths of Invariant and Equivariant
Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
arXiv Detail & Related papers (2021-03-01T21:14:33Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - BasisVAE: Translation-invariant feature-level clustering with
Variational Autoencoders [9.51828574518325]
Variational Autoencoders (VAEs) provide a flexible and scalable framework for non-linear dimensionality reduction.
We show how a collapsed variational inference scheme leads to scalable and efficient inference for BasisVAE.
arXiv Detail & Related papers (2020-03-06T23:10:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.