How Well Do Self-Supervised Models Transfer?
- URL: http://arxiv.org/abs/2011.13377v2
- Date: Mon, 29 Mar 2021 13:20:03 GMT
- Title: How Well Do Self-Supervised Models Transfer?
- Authors: Linus Ericsson, Henry Gouk and Timothy M. Hospedales
- Abstract summary: We evaluate the transfer performance of 13 top self-supervised models on 40 downstream tasks.
We find ImageNet Top-1 accuracy to be highly correlated with transfer to many-shot recognition.
No single self-supervised method dominates overall, suggesting that universal pre-training is still unsolved.
- Score: 92.16372657233394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised visual representation learning has seen huge progress
recently, but no large scale evaluation has compared the many models now
available. We evaluate the transfer performance of 13 top self-supervised
models on 40 downstream tasks, including many-shot and few-shot recognition,
object detection, and dense prediction. We compare their performance to a
supervised baseline and show that on most tasks the best self-supervised models
outperform supervision, confirming the recently observed trend in the
literature. We find ImageNet Top-1 accuracy to be highly correlated with
transfer to many-shot recognition, but increasingly less so for few-shot,
object detection and dense prediction. No single self-supervised method
dominates overall, suggesting that universal pre-training is still unsolved.
Our analysis of features suggests that top self-supervised learners fail to
preserve colour information as well as supervised alternatives, but tend to
induce better classifier calibration, and less attentive overfitting than
supervised learners.
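The correlation analysis described above can be illustrated with a short sketch. This is not the paper's code; the model scores below are made up for demonstration, and the paper's actual analysis covers 13 models and 40 tasks. The idea is simply to rank models by ImageNet Top-1 accuracy and by downstream transfer accuracy, then measure how well the two rankings agree (Spearman's rho):

```python
# Illustrative sketch (not the paper's code): rank correlation between
# ImageNet Top-1 accuracy and downstream transfer accuracy.
# All numbers below are hypothetical.

def ranks(values):
    """Assign 1-based average ranks to values, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j to cover a group of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-model scores: ImageNet Top-1 vs. many-shot transfer
imagenet_top1 = [71.3, 73.2, 69.0, 75.5, 74.1]
transfer_acc  = [80.1, 82.0, 78.5, 84.2, 83.0]

rho = spearman(imagenet_top1, transfer_acc)
print(f"Spearman rho = {rho:.2f}")
```

A rho near 1 corresponds to the paper's finding for many-shot recognition (ImageNet accuracy is a good predictor), while lower values correspond to its finding for few-shot, detection, and dense prediction, where the rankings diverge.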
Related papers
- How Close are Other Computer Vision Tasks to Deepfake Detection? [42.79190870582115]
We present a new measurement, "model separability," for assessing a model's raw capacity to separate data in an unsupervised manner.
Our analysis shows that pre-trained face recognition models are more closely related to deepfake detection than other models.
We found that self-supervised models deliver the best results, but there is a risk of overfitting.
arXiv Detail & Related papers (2023-10-02T06:32:35Z)
- Self-Supervised Multi-Object Tracking For Autonomous Driving From Consistency Across Timescales [53.55369862746357]
Self-supervised multi-object trackers have tremendous potential as they enable learning from raw domain-specific data.
However, their re-identification accuracy still falls short compared to their supervised counterparts.
We propose a training objective that enables self-supervised learning of re-identification features from multiple sequential frames.
arXiv Detail & Related papers (2023-04-25T20:47:29Z)
- Diverse Imagenet Models Transfer Better [10.6046072921331]
We show that high diversity in the features learnt by a model, jointly with ImageNet accuracy, promotes transferability.
We propose a method that combines self-supervised and supervised pretraining to generate models with both high diversity and high accuracy.
arXiv Detail & Related papers (2022-04-19T21:26:58Z)
- Task-Agnostic Robust Representation Learning [31.818269301504564]
We study the problem of robust representation learning with unlabeled data in a task-agnostic manner.
We derive an upper bound on the adversarial loss of a prediction model on any downstream task, using its loss on the clean data and a robustness regularizer.
Our method achieves preferable adversarial performance compared to relevant baselines.
arXiv Detail & Related papers (2022-03-15T02:05:11Z)
- Revisiting Weakly Supervised Pre-Training of Visual Perception Models [27.95816470075203]
Large-scale weakly supervised pre-training can outperform fully supervised approaches.
This paper revisits weakly-supervised pre-training of models using hashtag supervision.
Our results provide a compelling argument for the use of weakly supervised learning in the development of visual recognition systems.
arXiv Detail & Related papers (2022-01-20T18:55:06Z)
- SLIP: Self-supervision meets Language-Image Pre-training [79.53764315471543]
We study whether self-supervised learning can aid in the use of language supervision for visual representation learning.
We introduce SLIP, a multi-task learning framework for combining self-supervised learning and CLIP pre-training.
We find that SLIP enjoys the best of both worlds: better performance than either self-supervision or language supervision alone.
arXiv Detail & Related papers (2021-12-23T18:07:13Z)
- On visual self-supervision and its effect on model robustness [9.313899406300644]
Self-supervision can indeed improve model robustness; however, it turns out the devil is in the details.
Although self-supervised pre-training benefits adversarial training, we observe no benefit in model robustness or accuracy if self-supervision is incorporated into adversarial training itself.
arXiv Detail & Related papers (2021-12-08T16:22:02Z)
- Self-Supervised Models are Continual Learners [79.70541692930108]
We show that self-supervised loss functions can be seamlessly converted into distillation mechanisms for Continual Learning.
We devise a framework for Continual self-supervised visual representation Learning that significantly improves the quality of the learned representations.
arXiv Detail & Related papers (2021-12-08T10:39:13Z)
- Can Semantic Labels Assist Self-Supervised Visual Representation Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z)
- Do Adversarially Robust ImageNet Models Transfer Better? [102.09335596483695]
Adversarially robust models often perform better than their standard-trained counterparts when used for transfer learning.
Our results are consistent with (and in fact, add to) recent hypotheses stating that robustness leads to improved feature representations.
arXiv Detail & Related papers (2020-07-16T17:42:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.