Improve Unsupervised Pretraining for Few-label Transfer
- URL: http://arxiv.org/abs/2107.12369v1
- Date: Mon, 26 Jul 2021 17:59:56 GMT
- Title: Improve Unsupervised Pretraining for Few-label Transfer
- Authors: Suichan Li and Dongdong Chen and Yinpeng Chen and Lu Yuan and Lei Zhang and Qi Chu and Bin Liu and Nenghai Yu
- Abstract summary: In this paper, we find this conclusion may not hold when the target dataset has very few labeled samples for finetuning.
We propose a new progressive few-label transfer algorithm for real applications.
- Score: 80.58625921631506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised pretraining has achieved great success and many recent works
have shown unsupervised pretraining can achieve comparable or even slightly
better transfer performance than supervised pretraining on downstream target
datasets. But in this paper, we find this conclusion may not hold when the
target dataset has very few labeled samples for finetuning, i.e., few-label
transfer. We analyze the possible reason from the clustering perspective: 1)
The clustering quality of target samples is of great importance to few-label
transfer; 2) Though contrastive learning is essential for learning how to
cluster, its clustering quality is still inferior to that of supervised
pretraining due to the lack of label supervision. Based on this analysis, we
make an interesting discovery: simply involving some unlabeled target-domain
data in the unsupervised pretraining can improve the clustering quality,
subsequently reducing the transfer
performance gap with supervised pretraining. This finding also motivates us to
propose a new progressive few-label transfer algorithm for real applications,
which aims to maximize the transfer performance under a limited annotation
budget. To support our analysis and proposed method, we conduct extensive
experiments on nine different target datasets. Experimental results show our
proposed method can significantly boost the few-label transfer performance of
unsupervised pretraining.
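The abstract describes the method only at a high level: mix some unlabeled target-domain data into the unsupervised pretraining to improve clustering quality, then spend a limited annotation budget progressively. As a rough, hypothetical illustration of the budgeted selection step (one plausible reading, not the authors' implementation; the use of k-means and all function names are assumptions), a clustering-based picker could look like this:

```python
# Hypothetical sketch: spend a small annotation budget on cluster
# representatives of the unlabeled target set. The features would come from
# an encoder unsupervised-pretrained with some target-domain data mixed in;
# here random vectors stand in for them. Not the authors' code.
import numpy as np
from sklearn.cluster import KMeans


def select_for_annotation(features: np.ndarray, budget: int) -> list[int]:
    """Return indices of `budget` representative samples to send for labeling."""
    km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(features)
    picks = []
    for c, centre in enumerate(km.cluster_centers_):
        members = np.where(km.labels_ == c)[0]
        if len(members) == 0:
            continue
        # the member closest to the centroid represents its cluster
        dists = np.linalg.norm(features[members] - centre, axis=1)
        picks.append(int(members[np.argmin(dists)]))
    return picks


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(500, 128)).astype(np.float32)  # stand-in features
    chosen = select_for_annotation(feats, budget=20)
    print(f"request labels for {len(chosen)} samples, e.g.", chosen[:5])
```

In a progressive setting this selection would presumably be repeated over several rounds, finetuning the model between rounds so that later selections benefit from improved features; that outer loop is omitted here.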
Related papers
- Unsupervised Transfer Learning via Adversarial Contrastive Training [3.227277661633986]
We propose a novel unsupervised transfer learning approach using adversarial contrastive training (ACT).
Our experimental results demonstrate outstanding classification accuracy with both fine-tuned linear probe and K-NN protocol across various datasets.
arXiv Detail & Related papers (2024-08-16T05:11:52Z)
- Efficient Transferability Assessment for Selection of Pre-trained Detectors [63.21514888618542]
This paper studies the efficient transferability assessment of pre-trained object detectors.
We build up a detector transferability benchmark which contains a large and diverse zoo of pre-trained detectors.
Experimental results demonstrate that our method outperforms other state-of-the-art approaches in assessing transferability.
arXiv Detail & Related papers (2024-03-14T14:23:23Z)
- Weakly Supervised Video Anomaly Detection Based on Cross-Batch Clustering Guidance [39.43891080713327]
Weakly supervised video anomaly detection (WSVAD) is a challenging task since only video-level labels are available for training.
We propose a novel WSVAD method based on cross-batch clustering guidance.
arXiv Detail & Related papers (2022-12-16T14:38:30Z)
- Self-Distillation for Further Pre-training of Transformers [83.84227016847096]
We propose self-distillation as a regularization for a further pre-training stage.
We empirically validate the efficacy of self-distillation on a variety of benchmark datasets for image and text classification tasks.
arXiv Detail & Related papers (2022-09-30T02:25:12Z)
- SURF: Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning [168.89470249446023]
We present SURF, a semi-supervised reward learning framework that utilizes a large amount of unlabeled samples with data augmentation.
In order to leverage unlabeled samples for reward learning, we infer pseudo-labels of the unlabeled samples based on the confidence of the preference predictor (a generic sketch of this thresholding idea follows the list below).
Our experiments demonstrate that our approach significantly improves the feedback-efficiency of the preference-based method on a variety of locomotion and robotic manipulation tasks.
arXiv Detail & Related papers (2022-03-18T16:50:38Z)
- Revisiting the Transferability of Supervised Pretraining: an MLP Perspective [78.51258076624046]
Recent progress on unsupervised pretraining methods shows superior transfer performance to their supervised counterparts.
This paper sheds new light on understanding the transferability gap between unsupervised and supervised pretraining from a multilayer perceptron (MLP) perspective.
We reveal that the projector is also a key factor behind the better transferability of unsupervised pretraining methods compared with supervised pretraining methods.
arXiv Detail & Related papers (2021-12-01T13:47:30Z)
- Are Fewer Labels Possible for Few-shot Learning? [81.89996465197392]
Few-shot learning is challenging due to its very limited data and labels.
Recent studies in big transfer (BiT) show that few-shot learning can greatly benefit from pretraining on a large-scale labeled dataset in a different domain.
We propose eigen-finetuning to enable fewer-shot learning by leveraging the co-evolution of clustering and eigen-samples during finetuning.
arXiv Detail & Related papers (2020-12-10T18:59:29Z)
- Self-Supervised Prototypical Transfer Learning for Few-Shot Classification [11.96734018295146]
Self-supervised transfer learning approach ProtoTransfer outperforms state-of-the-art unsupervised meta-learning methods on few-shot tasks.
In few-shot experiments with domain shift, our approach even has comparable performance to supervised methods, but requires orders of magnitude fewer labels.
arXiv Detail & Related papers (2020-06-19T19:00:11Z)
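The SURF entry above mentions inferring pseudo-labels from the confidence of the preference predictor. As a generic, hypothetical sketch of that confidence-thresholding idea (the threshold value and all names are assumptions, not taken from the SURF paper):

```python
# Hypothetical sketch of confidence-thresholded pseudo-labeling: keep only
# predictions whose maximum class probability clears a threshold, and use
# those as training targets for the unlabeled samples. Not SURF's code.
import numpy as np


def pseudo_label(probs: np.ndarray, threshold: float = 0.95):
    """Return a keep-mask and hard labels for confident predictions only."""
    confidence = probs.max(axis=1)
    keep = confidence >= threshold
    labels = probs.argmax(axis=1)
    return keep, labels


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(8, 2))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    keep, labels = pseudo_label(probs, threshold=0.7)
    print("kept", int(keep.sum()), "of", len(probs), "samples; labels:", labels[keep])
```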
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.