Leveraging Hidden Structure in Self-Supervised Learning
- URL: http://arxiv.org/abs/2106.16060v1
- Date: Wed, 30 Jun 2021 13:35:36 GMT
- Title: Leveraging Hidden Structure in Self-Supervised Learning
- Authors: Emanuele Sansone
- Abstract summary: We propose a principled framework based on a mutual information objective, which integrates self-supervised and structure learning.
Preliminary experiments on CIFAR-10 show that the proposed framework achieves higher generalization performance in downstream classification tasks.
- Score: 2.385916960125935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work considers the problem of learning structured representations from raw images using self-supervised learning. We propose a principled framework based on a mutual information objective, which integrates self-supervised and structure learning. Furthermore, we devise a post-hoc procedure to interpret the meaning of the learnt representations. Preliminary experiments on CIFAR-10 show that the proposed framework achieves higher generalization performance in downstream classification tasks and provides more interpretable representations compared to the ones learnt through traditional self-supervised learning.
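The abstract states the objective only at a high level. As a minimal, purely illustrative sketch (not the paper's actual formulation), the snippet below pairs an InfoNCE-style lower bound on the mutual information between two augmented views with a simple cluster-assignment term standing in for the structure-learning component; both the InfoNCE estimator and the cluster head are assumptions.

```python
# Purely illustrative sketch of a mutual-information-plus-structure objective.
# ASSUMPTIONS: InfoNCE as the MI lower bound and an entropy-based clustering
# term are stand-ins; the paper only states a mutual information objective
# integrating self-supervised and structure learning.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE lower bound on I(z1; z2) for a batch of paired views."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature           # (B, B) cosine similarities
    targets = torch.arange(z1.size(0))           # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

def structure_term(cluster_logits):
    """Confident per-sample assignments, balanced cluster usage (illustrative)."""
    p = cluster_logits.softmax(dim=1)                                      # (B, K)
    sample_entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()      # minimize
    marginal = p.mean(dim=0)
    marginal_entropy = -(marginal * marginal.clamp_min(1e-8).log()).sum()  # maximize
    return sample_entropy - marginal_entropy

# Toy usage: random tensors stand in for encoder outputs of two augmented views.
B, D, K = 32, 128, 10
z1, z2, cluster_logits = torch.randn(B, D), torch.randn(B, D), torch.randn(B, K)
loss = info_nce(z1, z2) + structure_term(cluster_logits)
```

In practice z1 and z2 would be encoder outputs for two augmentations of the same image, and cluster_logits would come from a small head on top of the encoder.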
Related papers
- Decorrelation-based Self-Supervised Visual Representation Learning for Writer Identification [10.55096104577668]
We explore the decorrelation-based paradigm of self-supervised learning and apply it to learning disentangled stroke features for writer identification (a minimal sketch of a decorrelation objective appears after this list).
We show that the proposed framework outperforms the contemporary self-supervised learning framework on the writer identification benchmark.
To the best of our knowledge, this work is the first of its kind to apply self-supervised learning for learning representations for writer verification tasks.
arXiv Detail & Related papers (2024-10-02T11:43:58Z)
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Structural Adversarial Objectives for Self-Supervised Representation Learning [19.471586646254373]
We propose objectives that task the discriminator with additional structural modeling responsibilities for self-supervised representation learning.
In combination with an efficient smoothness regularizer imposed on the network, these objectives guide the discriminator to learn to extract informative representations.
Experiments demonstrate that equipping GANs with our self-supervised objectives suffices to produce discriminators which, evaluated in terms of representation learning, compete with networks trained by contrastive learning approaches.
arXiv Detail & Related papers (2023-09-30T12:27:53Z)
- Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z)
- Homomorphic Self-Supervised Learning [1.0742675209112622]
We introduce a general framework we call Homomorphic Self-Supervised Learning.
We show how it can subsume the use of input augmentations, provided the feature extractor is augmentation-homomorphic.
arXiv Detail & Related papers (2022-11-15T16:32:36Z)
- An Empirical Investigation of Representation Learning for Imitation [76.48784376425911]
Recent work in vision, reinforcement learning, and NLP has shown that auxiliary representation learning objectives can reduce the need for large amounts of expensive, task-specific data.
We propose a modular framework for constructing representation learning algorithms, then use our framework to evaluate the utility of representation learning for imitation.
arXiv Detail & Related papers (2022-05-16T11:23:42Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
- Learning from Extrinsic and Intrinsic Supervisions for Domain Generalization [95.73898853032865]
We present a new domain generalization framework that learns how to generalize across domains simultaneously from extrinsic and intrinsic supervision.
We demonstrate the effectiveness of our approach on two standard object recognition benchmarks.
arXiv Detail & Related papers (2020-07-18T03:12:24Z)
- Self-supervised Learning from a Multi-view Perspective [121.63655399591681]
We show that self-supervised representations can extract task-relevant information and discard task-irrelevant information.
Our theoretical framework paves the way to a larger space of self-supervised learning objective design.
arXiv Detail & Related papers (2020-06-10T00:21:35Z)
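As referenced in the decorrelation-based writer-identification entry above, the following is a minimal Barlow Twins-style sketch of a decorrelation objective. It is an assumed, generic instance of that paradigm, not necessarily the loss used in that paper.

```python
# Illustrative decorrelation-based SSL loss (Barlow Twins-style): drive the
# cross-correlation matrix of two views' embeddings toward the identity matrix.
# ASSUMPTION: the exact loss of the writer-identification paper is not given
# in its summary; this is a generic example of the decorrelation paradigm.
import torch

def decorrelation_loss(z1, z2, off_diag_weight=5e-3, eps=1e-6):
    B, D = z1.shape
    z1 = (z1 - z1.mean(dim=0)) / (z1.std(dim=0) + eps)  # standardize each feature
    z2 = (z2 - z2.mean(dim=0)) / (z2.std(dim=0) + eps)
    c = (z1.t() @ z2) / B                                # (D, D) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()       # invariance term
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy reduction
    return on_diag + off_diag_weight * off_diag

# Toy usage: random tensors stand in for embeddings of two augmented views.
z1, z2 = torch.randn(64, 256), torch.randn(64, 256)
loss = decorrelation_loss(z1, z2)
```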
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.