Improving Self-Supervised Learning by Characterizing Idealized
Representations
- URL: http://arxiv.org/abs/2209.06235v1
- Date: Tue, 13 Sep 2022 18:01:03 GMT
- Title: Improving Self-Supervised Learning by Characterizing Idealized
Representations
- Authors: Yann Dubois and Tatsunori Hashimoto and Stefano Ermon and Percy Liang
- Abstract summary: We prove necessary and sufficient conditions under which, for any task invariant to given data augmentations, desired probes trained on the representation attain perfect accuracy.
For contrastive learning, our framework prescribes simple but significant improvements to previous methods.
For non-contrastive learning, we use our framework to derive a simple and novel objective.
- Score: 155.1457170539049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the empirical successes of self-supervised learning (SSL) methods, it
is unclear what characteristics of their representations lead to high
downstream accuracies. In this work, we characterize properties that SSL
representations should ideally satisfy. Specifically, we prove necessary and
sufficient conditions such that for any task invariant to given data
augmentations, desired probes (e.g., linear or MLP) trained on that
representation attain perfect accuracy. These requirements lead to a unifying
conceptual framework for improving existing SSL methods and deriving new ones.
For contrastive learning, our framework prescribes simple but significant
improvements to previous methods such as using asymmetric projection heads. For
non-contrastive learning, we use our framework to derive a simple and novel
objective. Our resulting SSL algorithms outperform baselines on standard
benchmarks, including SwAV+multicrops on linear probing of ImageNet.
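To make the contrastive prescription concrete, below is a minimal sketch of an InfoNCE-style objective with asymmetric projection heads, i.e., heads of different capacity on the two views. The toy backbone, head depths, dimensions, and temperature are illustrative assumptions, not the paper's exact architecture.

```python
# Illustrative sketch only: asymmetric projection heads in an InfoNCE-style
# contrastive objective. Head depths, dimensions, and the temperature are
# assumptions for demonstration, not the paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymmetricContrastive(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int = 512, proj_dim: int = 128):
        super().__init__()
        self.backbone = backbone
        # Asymmetry: the two views are projected by heads of different capacity.
        self.head_a = nn.Linear(feat_dim, proj_dim)                 # shallow head
        self.head_b = nn.Sequential(nn.Linear(feat_dim, feat_dim),  # deeper head
                                    nn.ReLU(),
                                    nn.Linear(feat_dim, proj_dim))

    def forward(self, view_a: torch.Tensor, view_b: torch.Tensor,
                temperature: float = 0.1) -> torch.Tensor:
        za = F.normalize(self.head_a(self.backbone(view_a)), dim=-1)
        zb = F.normalize(self.head_b(self.backbone(view_b)), dim=-1)
        logits = za @ zb.t() / temperature      # pairwise similarities
        targets = torch.arange(za.size(0))      # positives on the diagonal
        return F.cross_entropy(logits, targets)

# Usage with a toy backbone on random "image" tensors:
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
model = AsymmetricContrastive(backbone)
x_a, x_b = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
loss = model(x_a, x_b)
```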
Related papers
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve language model alignment across different task scenarios.
We implement UAL in a simple fashion -- adaptively setting the label smoothing value of training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
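As a rough illustration of adaptive label smoothing, the sketch below scales a per-sample smoothing value by an uncertainty estimate; the normalized-entropy proxy and the linear schedule are assumptions for demonstration, not the UAL paper's exact rule.

```python
# Sketch: per-sample adaptive label smoothing driven by an uncertainty
# estimate (here, normalized predictive entropy). The schedule mapping
# uncertainty to a smoothing value is an assumption for illustration.
import torch
import torch.nn.functional as F

def uncertainty_aware_loss(logits: torch.Tensor, targets: torch.Tensor,
                           max_smooth: float = 0.2) -> torch.Tensor:
    num_classes = logits.size(-1)
    probs = F.softmax(logits, dim=-1)
    # Normalized entropy in [0, 1] as a per-sample uncertainty proxy.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
    uncertainty = entropy / torch.log(torch.tensor(float(num_classes)))
    smooth = (max_smooth * uncertainty).detach()      # per-sample epsilon
    one_hot = F.one_hot(targets, num_classes).float()
    soft = one_hot * (1 - smooth.unsqueeze(-1)) + smooth.unsqueeze(-1) / num_classes
    return -(soft * F.log_softmax(logits, dim=-1)).sum(-1).mean()

loss = uncertainty_aware_loss(torch.randn(4, 10), torch.tensor([1, 3, 0, 7]))
```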
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
- Reinforcement Learning-Guided Semi-Supervised Learning [20.599506122857328]
We propose a novel Reinforcement Learning Guided SSL method, RLGSSL, that formulates SSL as a one-armed bandit problem.
RLGSSL incorporates a carefully designed reward function that balances the use of labeled and unlabeled data to enhance generalization performance.
We demonstrate the effectiveness of RLGSSL through extensive experiments on several benchmark datasets and show that our approach achieves consistent superior performance compared to state-of-the-art SSL methods.
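The abstract gives no implementation details, so the following is only a generic bandit-style sketch: an epsilon-greedy choice among candidate weights for the unlabeled loss, updated by a stand-in reward. The candidate arms, the reward signal, and the update rule are all assumptions, not the paper's exact one-armed formulation.

```python
# Sketch: a bandit-style update of the weight that mixes labeled and
# unlabeled losses. The reward (a stand-in for held-out accuracy gain)
# and the update rule are illustrative assumptions.
import random

def choose_weight(q_values: dict, eps: float = 0.1) -> float:
    # Epsilon-greedy over a small set of candidate mixing weights ("arms").
    if random.random() < eps:
        return random.choice(list(q_values))
    return max(q_values, key=q_values.get)

q_values = {0.25: 0.0, 0.5: 0.0, 0.75: 0.0}  # candidate unlabeled-loss weights
for _ in range(100):
    w = choose_weight(q_values)
    # total_loss = labeled_loss + w * unlabeled_loss   (training step goes here)
    reward = random.random()                     # stand-in reward signal
    q_values[w] += 0.1 * (reward - q_values[w])  # running-average value update
```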
arXiv Detail & Related papers (2024-05-02T21:52:24Z)
- Progressive Feature Adjustment for Semi-supervised Learning from Pretrained Models [39.42802115580677]
Semi-supervised learning (SSL) can leverage both labeled and unlabeled data to build a predictive model.
Recent literature suggests that naively applying state-of-the-art SSL with a pretrained model fails to unleash the full potential of training data.
We propose to use pseudo-labels from the unlabeled data to update the feature extractor in a way that is less sensitive to incorrect labels.
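A minimal sketch of one way to realize this idea, assuming confidence-thresholded pseudo-labels and an optimizer that updates only the feature extractor; the threshold and the toy architecture are illustrative, not the paper's exact method.

```python
# Sketch: confidence-filtered pseudo-labels used to update the feature
# extractor. The threshold and the extractor-only optimizer are assumptions
# meant to convey "less sensitive to incorrect labels".
import torch
import torch.nn as nn
import torch.nn.functional as F

extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
classifier = nn.Linear(128, 10)
optimizer = torch.optim.SGD(extractor.parameters(), lr=1e-2)  # extractor only

def pseudo_label_step(unlabeled: torch.Tensor, threshold: float = 0.95):
    with torch.no_grad():
        probs = F.softmax(classifier(extractor(unlabeled)), dim=-1)
        conf, pseudo = probs.max(dim=-1)
        keep = conf >= threshold          # drop low-confidence pseudo-labels
    if keep.any():
        logits = classifier(extractor(unlabeled[keep]))
        loss = F.cross_entropy(logits, pseudo[keep])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

pseudo_label_step(torch.randn(16, 3, 32, 32))
```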
arXiv Detail & Related papers (2023-09-09T01:57:14Z)
- Rethinking Evaluation Protocols of Visual Representations Learned via Self-supervised Learning [1.0499611180329804]
Evaluation protocols are used to assess the quality of visual representations learned via self-supervised learning (SSL).
Existing SSL methods have shown good performance under those evaluation protocols, yet performance can be sensitive to the protocol settings.
We try to figure out the cause of this performance sensitivity by conducting extensive experiments with state-of-the-art SSL methods.
arXiv Detail & Related papers (2023-04-07T03:03:19Z)
- Benchmark for Uncertainty & Robustness in Self-Supervised Learning [0.0]
Self-Supervised Learning is crucial for real-world applications, especially in data-hungry domains such as healthcare and self-driving cars.
In this paper, we explore variants of SSL methods, including Jigsaw Puzzles, Context Prediction, Rotation, and Geometric Transformation Prediction for vision, as well as BERT and GPT for language tasks.
Our goal is to create a benchmark with outputs from experiments, providing a starting point for new SSL methods in Reliable Machine Learning.
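As one concrete example of the pretext tasks listed above, here is a minimal rotation-prediction sketch; the toy model and random data are assumptions for demonstration only.

```python
# Sketch: the rotation-prediction pretext task. A model is trained to
# classify which of four rotations was applied; the architecture and
# data here are toy assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_batch(images: torch.Tensor):
    # Apply 0/90/180/270-degree rotations; the rotation index is the label.
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, k=int(k), dims=(-2, -1))
                           for img, k in zip(images, labels)])
    return rotated, labels

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
                      nn.Linear(256, 4))       # 4-way rotation classifier
images = torch.randn(8, 3, 32, 32)
rotated, labels = rotate_batch(images)
loss = F.cross_entropy(model(rotated), labels)
```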
arXiv Detail & Related papers (2022-12-23T15:46:23Z)
- Understanding and Improving the Role of Projection Head in Self-Supervised Learning [77.59320917894043]
Self-supervised learning (SSL) aims to produce useful feature representations without access to human-labeled data annotations.
Current contrastive learning approaches append a parametrized projection head to the end of some backbone network to optimize the InfoNCE objective.
This raises a fundamental question: Why is a learnable projection head required if we are to discard it after training?
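The pattern in question, sketched minimally below: a projection head is appended to the backbone for the InfoNCE objective during training and then discarded, so the downstream probe sees backbone features. The toy backbone and dimensions are illustrative assumptions.

```python
# Sketch: train backbone + projection head with InfoNCE, then discard the
# head for downstream evaluation. Dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
projection_head = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))

def info_nce(x1: torch.Tensor, x2: torch.Tensor, temperature: float = 0.1):
    z1 = F.normalize(projection_head(backbone(x1)), dim=-1)
    z2 = F.normalize(projection_head(backbone(x2)), dim=-1)
    logits = z1 @ z2.t() / temperature
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

# Training optimizes backbone + head; evaluation discards the head:
loss = info_nce(torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32))
features = backbone(torch.randn(8, 3, 32, 32))  # what the linear probe sees
```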
arXiv Detail & Related papers (2022-12-22T05:42:54Z)
- Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
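A minimal sketch of one way to combine the two objectives, assuming a shared encoder with an invariant branch and an equivariant branch, and rotations as the geometric transformation; the losses and branch sizes are illustrative, not the paper's exact mechanism.

```python
# Sketch: a shared encoder with one branch trained to be invariant to a
# geometric transform and another trained to be equivariant (the applied
# transform is predictable). Branch sizes and rotations are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
head_inv = nn.Linear(128, 64)      # invariant branch
head_eq = nn.Linear(128, 64)       # equivariant branch
rot_classifier = nn.Linear(64, 4)  # predicts the applied rotation

def inv_eq_loss(images: torch.Tensor) -> torch.Tensor:
    k = int(torch.randint(0, 4, (1,)))
    rotated = torch.rot90(images, k=k, dims=(-2, -1))
    f, f_rot = encoder(images), encoder(rotated)
    inv = F.mse_loss(head_inv(f_rot), head_inv(f))  # match across views
    targets = torch.full((images.size(0),), k, dtype=torch.long)
    eq = F.cross_entropy(rot_classifier(head_eq(f_rot)), targets)  # recover transform
    return inv + eq

loss = inv_eq_loss(torch.randn(8, 3, 32, 32))
```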
arXiv Detail & Related papers (2021-03-01T21:14:33Z)
- Self-Supervised Learning of Graph Neural Networks: A Unified Review [50.71341657322391]
Self-supervised learning is emerging as a new paradigm for making use of large numbers of unlabeled samples.
We provide a unified review of different ways of training graph neural networks (GNNs) using SSL.
Our treatment of SSL methods for GNNs sheds light on the similarities and differences of various methods, setting the stage for developing new methods and algorithms.
arXiv Detail & Related papers (2021-02-22T03:43:45Z)
- On Data-Augmentation and Consistency-Based Semi-Supervised Learning [77.57285768500225]
Recently proposed consistency-based Semi-Supervised Learning (SSL) methods have advanced the state of the art in several SSL tasks.
Despite these advances, the understanding of these methods is still relatively limited.
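For context, a minimal sketch of the consistency-regularization recipe these methods share: a supervised loss on labeled data plus a penalty on prediction changes under input perturbation. The Gaussian-noise "augmentation" and the fixed weight are simplifying assumptions.

```python
# Sketch: consistency regularization — supervised loss on labeled data plus
# a term penalizing prediction changes across two perturbed views of the
# same unlabeled inputs. The noise model and weighting are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

def consistency_loss(labeled, labels, unlabeled, weight: float = 1.0):
    sup = F.cross_entropy(model(labeled), labels)
    # Two noisy views of the same unlabeled batch (stand-in for augmentation).
    p1 = F.softmax(model(unlabeled + 0.1 * torch.randn_like(unlabeled)), dim=-1)
    p2 = F.softmax(model(unlabeled + 0.1 * torch.randn_like(unlabeled)), dim=-1)
    return sup + weight * F.mse_loss(p1, p2)

loss = consistency_loss(torch.randn(4, 3, 32, 32), torch.tensor([0, 1, 2, 3]),
                        torch.randn(8, 3, 32, 32))
```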
arXiv Detail & Related papers (2021-01-18T10:12:31Z)