Improving Transferability of Representations via Augmentation-Aware Self-Supervision
- URL: http://arxiv.org/abs/2111.09613v1
- Date: Thu, 18 Nov 2021 10:43:50 GMT
- Title: Improving Transferability of Representations via Augmentation-Aware Self-Supervision
- Authors: Hankook Lee, Kibok Lee, Kimin Lee, Honglak Lee, Jinwoo Shin
- Abstract summary: AugSelf is an auxiliary self-supervised loss that learns the difference of augmentation parameters between two randomly augmented samples.
Our intuition is that AugSelf encourages preserving augmentation-aware information in the learned representations, which could be beneficial for their transferability.
AugSelf can easily be incorporated into recent state-of-the-art representation learning methods with a negligible additional training cost.
- Score: 117.15012005163322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent unsupervised representation learning methods have been shown to be
effective in a range of vision tasks by learning representations invariant to
data augmentations such as random cropping and color jittering. However, such
invariance could be harmful to downstream tasks if they rely on the
characteristics of the data augmentations, e.g., if they are location- or color-sensitive.
This is not an issue just for unsupervised learning; we found that this occurs
even in supervised learning because it also learns to predict the same label
for all augmented samples of an instance. To avoid such failures and obtain
more generalizable representations, we propose optimizing an auxiliary
self-supervised loss, coined AugSelf, that learns the difference of
augmentation parameters (e.g., cropping positions, color adjustment
intensities) between two randomly augmented samples. Our intuition is that
AugSelf encourages preserving augmentation-aware information in the learned
representations, which could be beneficial for their transferability.
Furthermore, AugSelf can easily be incorporated into recent state-of-the-art
representation learning methods with a negligible additional training cost.
Extensive experiments demonstrate that our simple idea consistently improves
the transferability of representations learned by supervised and unsupervised
methods in various transfer learning scenarios. The code is available at
https://github.com/hankook/AugSelf.
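As a rough illustration of the idea, the sketch below shows how an AugSelf-style auxiliary loss could be attached to a standard two-view pipeline: a small MLP head takes the backbone features of both views and regresses the difference of their augmentation parameters (e.g., crop position and color-jitter intensities encoded as a vector per view). This is a minimal sketch under those assumptions, not the authors' implementation; all names (AugSelfHead, augself_loss, lam) are illustrative, and the repository linked above is the reference.
```python
# Illustrative sketch of an AugSelf-style auxiliary loss (not the official code;
# see https://github.com/hankook/AugSelf for the authors' implementation).
import torch
import torch.nn as nn

class AugSelfHead(nn.Module):
    """Predicts the difference of augmentation parameters between two views
    from the concatenation of their backbone features."""
    def __init__(self, feat_dim: int, aug_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, aug_dim),
        )

    def forward(self, h1: torch.Tensor, h2: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([h1, h2], dim=1))

def augself_loss(head: AugSelfHead,
                 h1: torch.Tensor, h2: torch.Tensor,
                 aug1: torch.Tensor, aug2: torch.Tensor) -> torch.Tensor:
    """Regress the (aug1 - aug2) difference of recorded augmentation
    parameters, e.g. cropping positions and color-jitter intensities."""
    pred = head(h1, h2)
    return nn.functional.mse_loss(pred, aug1 - aug2)

# Hypothetical usage inside a two-view training step:
#   h1, h2 = backbone(x1), backbone(x2)   # features of the two augmented views
#   loss = base_ssl_loss(h1, h2) + lam * augself_loss(head, h1, h2, aug1, aug2)
# where `lam` weights the auxiliary objective and aug1/aug2 are the recorded
# augmentation parameter vectors for each view.
```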
Related papers
- Steerable Equivariant Representation Learning [36.138305341173414]
In this paper, we propose a method of learning representations that are instead equivariant to data augmentations.
We demonstrate that our resulting steerable and equivariant representations lead to better performance on transfer learning and robustness.
arXiv Detail & Related papers (2023-02-22T12:42:45Z)
- CIPER: Combining Invariant and Equivariant Representations Using Contrastive and Predictive Learning [6.117084972237769]
We introduce Contrastive Invariant and Predictive Equivariant Representation learning (CIPER).
CIPER comprises both invariant and equivariant learning objectives using one shared encoder and two different output heads on top of the encoder.
We evaluate our method on static image tasks and time-augmented image datasets.
arXiv Detail & Related papers (2023-02-05T07:50:46Z)
- EquiMod: An Equivariance Module to Improve Self-Supervised Learning [77.34726150561087]
Self-supervised visual representation methods are closing the gap with supervised learning performance.
These methods rely on maximizing the similarity between embeddings of related synthetic inputs created through data augmentations.
We introduce EquiMod, a generic equivariance module that structures the learned latent space.
arXiv Detail & Related papers (2022-11-02T16:25:54Z)
- Is Self-Supervised Learning More Robust Than Supervised Learning? [29.129681691651637]
Self-supervised contrastive learning is a powerful tool to learn visual representation without labels.
We conduct a series of robustness tests to quantify the behavioral differences between contrastive learning and supervised learning.
Under pre-training corruptions, we find contrastive learning vulnerable to patch shuffling and pixel intensity change, yet less sensitive to dataset-level distribution change.
arXiv Detail & Related papers (2022-06-10T17:58:00Z)
- Learning Instance-Specific Augmentations by Capturing Local Invariances [62.70897571389785]
InstaAug is a method for automatically learning input-specific augmentations from data.
We empirically demonstrate that InstaAug learns meaningful input-dependent augmentations for a wide range of transformation classes.
arXiv Detail & Related papers (2022-05-31T18:38:06Z)
- Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks [79.13089902898848]
Self-supervised learning is a powerful paradigm for representation learning on unlabelled images.
We show that different tasks in computer vision require features to encode different (in)variances.
arXiv Detail & Related papers (2021-11-22T18:16:35Z)
- What Should Not Be Contrastive in Contrastive Learning [110.14159883496859]
We introduce a contrastive learning framework which does not require prior knowledge of specific, task-dependent invariances.
Our model learns to capture varying and invariant factors for visual representations by constructing separate embedding spaces.
We use a multi-head network with a shared backbone that captures information across each augmentation and alone outperforms all baselines on downstream tasks.
arXiv Detail & Related papers (2020-08-13T03:02:32Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)