Towards Unsupervised Representation Learning: Learning, Evaluating and Transferring Visual Representations
- URL: http://arxiv.org/abs/2312.00101v1
- Date: Thu, 30 Nov 2023 15:57:55 GMT
- Title: Towards Unsupervised Representation Learning: Learning, Evaluating and Transferring Visual Representations
- Authors: Bonifaz Stuhr
- Abstract summary: We contribute to the field of unsupervised (visual) representation learning from three perspectives.
We design unsupervised, backpropagation-free Convolutional Self-Organizing Neural Networks (CSNNs) that learn convolutional kernels and masks with self-organization- and Hebbian-based learning rules.
We build upon the widely used (non-)linear evaluation protocol to define pretext- and target-objective-independent metrics.
We contribute CARLANE, the first 3-way sim-to-real domain adaptation benchmark for 2D lane detection, and a method based on prototypical self-supervised learning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised representation learning aims at finding methods that learn
representations from data without annotation-based signals. Abstaining from
annotations not only leads to economic benefits but may - and to some extent
already does - result in advantages regarding the representation's structure,
robustness, and generalizability to different tasks. In the long run,
unsupervised methods are expected to surpass their supervised counterparts due
to the reduction of human intervention and the inherently more general setup
that does not bias the optimization towards an objective originating from
specific annotation-based signals. While major advantages of unsupervised
representation learning have been recently observed in natural language
processing, supervised methods still dominate in vision domains for most tasks.
In this dissertation, we contribute to the field of unsupervised (visual)
representation learning from three perspectives: (i) Learning representations:
We design unsupervised, backpropagation-free Convolutional Self-Organizing
Neural Networks (CSNNs) that utilize self-organization- and Hebbian-based
learning rules to learn convolutional kernels and masks to achieve deeper
backpropagation-free models. (ii) Evaluating representations: We build upon the
widely used (non-)linear evaluation protocol to define pretext- and
target-objective-independent metrics for measuring and investigating the
objective function mismatch between various unsupervised pretext tasks and
target tasks. (iii) Transferring representations: We contribute CARLANE, the
first 3-way sim-to-real domain adaptation benchmark for 2D lane detection, and
a method based on prototypical self-supervised learning. Finally, we contribute
a content-consistent unpaired image-to-image translation method that utilizes
masks, global and local discriminators, and similarity sampling to mitigate
content inconsistencies.
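As a rough illustration of perspective (i), the sketch below shows a winner-take-all, SOM/Hebbian-style update for learning convolutional kernels from image patches without backpropagation. This is a minimal toy version of the competitive-learning idea behind CSNNs, not the dissertation's actual architecture; the function name, hyperparameters, and patch-sampling scheme are illustrative assumptions.

```python
import numpy as np

def learn_csnn_kernels(images, n_kernels=16, k=3, lr=0.05, epochs=1, seed=0):
    """Toy backpropagation-free kernel learning (hypothetical sketch).

    images: array of shape (N, H, W), single-channel for simplicity.
    Returns kernels of shape (n_kernels, k, k), unit-normalized.
    """
    rng = np.random.default_rng(seed)
    kernels = rng.normal(size=(n_kernels, k, k))
    kernels /= np.linalg.norm(kernels.reshape(n_kernels, -1), axis=1)[:, None, None]

    for _ in range(epochs):
        for img in images:
            h, w = img.shape
            for i in range(0, h - k + 1, k):          # non-overlapping patches for brevity
                for j in range(0, w - k + 1, k):
                    patch = img[i:i + k, j:j + k]
                    norm = np.linalg.norm(patch)
                    if norm < 1e-8:                   # skip near-empty patches
                        continue
                    patch = patch / norm
                    # Competition: the kernel most similar to the patch wins.
                    sims = np.tensordot(kernels, patch, axes=([1, 2], [0, 1]))
                    win = int(np.argmax(sims))
                    # Hebbian/SOM-style update: pull the winner toward the input.
                    kernels[win] += lr * (patch - kernels[win])
                    kernels[win] /= np.linalg.norm(kernels[win])
    return kernels
```

Perspective (ii) builds on the (non-)linear evaluation protocol: freeze the learned representation, fit a simple classifier on top, and report target-task performance. A generic scikit-learn version of the linear probe (again a sketch of the common evaluation backbone, not the dissertation's mismatch metrics themselves) looks like this:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def linear_probe_accuracy(train_feats, train_labels, test_feats, test_labels):
    """Fit a linear classifier on frozen features and report test accuracy."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_feats, train_labels)
    return accuracy_score(test_labels, clf.predict(test_feats))
```

The dissertation's pretext- and target-objective-independent metrics are defined on top of such probe scores; the sketch only shows the protocol they extend.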
Related papers
- Disentangled and Self-Explainable Node Representation Learning [1.4002424249260854]
We introduce DiSeNE, a framework that generates self-explainable embeddings in an unsupervised manner.
Our method employs disentangled representation learning to produce dimension-wise interpretable embeddings.
We formalize novel desiderata for disentangled and interpretable embeddings, which drive our new objective functions.
arXiv Detail & Related papers (2024-10-28T13:58:52Z)
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Structural Adversarial Objectives for Self-Supervised Representation Learning [19.471586646254373]
We propose objectives that task the discriminator for self-supervised representation learning via additional structural modeling responsibilities.
In combination with an efficient smoothness regularizer imposed on the network, these objectives guide the discriminator to learn to extract informative representations.
Experiments demonstrate that equipping GANs with our self-supervised objectives suffices to produce discriminators which, evaluated in terms of representation learning, compete with networks trained by contrastive learning approaches.
arXiv Detail & Related papers (2023-09-30T12:27:53Z)
- A Study of Forward-Forward Algorithm for Self-Supervised Learning [65.268245109828]
We study the performance of forward-forward vs. backpropagation for self-supervised representation learning.
Our main finding is that while the forward-forward algorithm performs comparably to backpropagation during (self-supervised) training, transfer performance lags significantly behind in all studied settings.
arXiv Detail & Related papers (2023-09-21T10:14:53Z)
- Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z)
- Explaining, Evaluating and Enhancing Neural Networks' Learned Representations [2.1485350418225244]
We show how explainability can be an aid, rather than an obstacle, towards better and more efficient representations.
We employ such attributions to define two novel scores for evaluating the informativeness and the disentanglement of latent embeddings.
We show that adopting our proposed scores as constraints during the training of a representation learning task improves the downstream performance of the model.
arXiv Detail & Related papers (2022-02-18T19:00:01Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
- Can Semantic Labels Assist Self-Supervised Visual Representation Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z)
- Self-Supervised Learning Across Domains [33.86614301708017]
We propose applying self-supervised auxiliary tasks to the problem of object recognition across domains.
Our model learns the semantic labels in a supervised fashion, and broadens its understanding of the data by learning from self-supervised signals on the same images.
This secondary task helps the network to focus on object shapes, learning concepts like spatial orientation and part correlation, while acting as a regularizer for the classification task.
arXiv Detail & Related papers (2020-07-24T06:19:53Z)
- Self-supervised Learning from a Multi-view Perspective [121.63655399591681]
We show that self-supervised representations can extract task-relevant information and discard task-irrelevant information.
Our theoretical framework paves the way to a larger space of self-supervised learning objective design.
arXiv Detail & Related papers (2020-06-10T00:21:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.