Rethinking supervised pre-training for better downstream transferring
- URL: http://arxiv.org/abs/2110.06014v1
- Date: Tue, 12 Oct 2021 13:57:38 GMT
- Title: Rethinking supervised pre-training for better downstream transferring
- Authors: Yutong Feng, Jianwen Jiang, Mingqian Tang, Rong Jin, Yue Gao
- Abstract summary: We propose a new supervised pre-training method based on Leave-One-Out K-Nearest-Neighbor, or LOOK.
It relieves the problem of overfitting upstream tasks by only requiring each image to share its class label with most of its k nearest neighbors.
We developed an efficient implementation of the proposed method that scales well to large datasets.
- Score: 46.09030708111374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The pretrain-finetune paradigm has shown outstanding performance in many applications of deep learning, where a model is pre-trained on a large upstream dataset (e.g. ImageNet) and then fine-tuned on different downstream tasks. Although the pre-training stage is in most cases conducted with supervised methods, recent works on self-supervised pre-training have shown strong transferability and even outperform supervised pre-training on multiple downstream tasks. It thus remains an open question how to make supervised pre-trained models generalize better to downstream tasks. In this paper, we argue that the weaker transferability of existing supervised pre-training methods arises from neglecting valuable intra-class semantic differences: these methods tend to push images from the same class close to each other despite the large diversity in their visual content, a problem we refer to as "overfitting of upstream tasks". To alleviate this problem, we propose a new supervised pre-training method based on Leave-One-Out K-Nearest-Neighbor, or LOOK for short. It relieves the overfitting of upstream tasks by only requiring each image to share its class label with most of its k nearest neighbors, thus allowing each class to exhibit a multi-mode distribution and consequently preserving part of the intra-class differences for better transfer to downstream tasks. We developed an efficient implementation of the proposed method that scales well to large datasets. Experimental studies on multiple downstream tasks show that LOOK outperforms other state-of-the-art methods for supervised and self-supervised pre-training.
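Since the abstract describes the LOOK objective only at a high level, the following is a minimal, hedged sketch (PyTorch-style Python) of how a leave-one-out kNN loss with a feature memory bank could look; the function name, the soft neighbor weighting, and the hyperparameters k and tau are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: one plausible form of a leave-one-out kNN (LOOK-style) objective.
# The exact loss in the paper may differ; the memory-bank setup, temperature, and k
# are illustrative choices, not values taken from the paper.
import torch
import torch.nn.functional as F

def look_style_loss(embeddings, labels, bank_feats, bank_labels, k=16, tau=0.1):
    """Soft leave-one-out kNN loss.

    embeddings : (B, D) features of the current batch
    labels     : (B,)   class ids for the batch
    bank_feats : (N, D) memory bank of features from earlier batches
    bank_labels: (N,)   class ids for the bank entries
    """
    q = F.normalize(embeddings, dim=1)
    m = F.normalize(bank_feats, dim=1)
    sim = q @ m.t() / tau                      # (B, N) cosine similarities / temperature

    # "Leave-one-out": the bank holds features from *past* iterations, so each query
    # is compared only against other samples, not against its own current embedding.
    topk_sim, topk_idx = sim.topk(k, dim=1)    # (B, k) most similar bank entries
    topk_labels = bank_labels[topk_idx]        # (B, k) their class labels

    # Probability mass that the k nearest neighbors assign to the query's own class.
    weights = F.softmax(topk_sim, dim=1)                      # (B, k)
    same = (topk_labels == labels.unsqueeze(1)).float()       # (B, k)
    p_correct = (weights * same).sum(dim=1).clamp_min(1e-8)   # (B,)

    # Only the local neighborhood must agree with the query's label, so distinct
    # modes of the same class can stay apart in feature space.
    return -p_correct.log().mean()

# Illustrative usage (shapes assumed): loss = look_style_loss(feats, y, bank, bank_y)
```

In practice, a loss of this kind is typically scaled to large datasets by keeping the memory bank on GPU or using an approximate nearest-neighbor index, which is consistent with the abstract's claim of an efficient, scalable implementation.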
Related papers
- Task-Robust Pre-Training for Worst-Case Downstream Adaptation [62.05108162160981]
Pre-training has achieved remarkable success when transferred to downstream tasks.
This paper considers pre-training a model that guarantees a uniformly good performance over the downstream tasks.
arXiv Detail & Related papers (2023-06-21T07:43:23Z)
- On the Trade-off of Intra-/Inter-class Diversity for Supervised Pre-training [72.8087629914444]
We study the impact of the trade-off between the intra-class diversity (the number of samples per class) and the inter-class diversity (the number of classes) of a supervised pre-training dataset.
With the size of the pre-training dataset fixed, the best downstream performance comes with a balance on the intra-/inter-class diversity.
arXiv Detail & Related papers (2023-05-20T16:23:50Z)
- Multi-Level Contrastive Learning for Dense Prediction Task [59.591755258395594]
We present Multi-Level Contrastive Learning for Dense Prediction Task (MCL), an efficient self-supervised method for learning region-level feature representation for dense prediction tasks.
Our method is motivated by the three key factors in detection: localization, scale consistency and recognition.
Our method consistently outperforms the recent state-of-the-art methods on various datasets with significant margins.
arXiv Detail & Related papers (2023-04-04T17:59:04Z)
- Effective Adaptation in Multi-Task Co-Training for Unified Autonomous Driving [103.745551954983]
In this paper, we investigate the transfer performance of various types of self-supervised methods, including MoCo and SimCLR, on three downstream tasks.
We find that their performance is sub-optimal or even lags far behind the single-task baseline.
We propose a simple yet effective pretrain-adapt-finetune paradigm for general multi-task training.
arXiv Detail & Related papers (2022-09-19T12:15:31Z)
- Task-Customized Self-Supervised Pre-training with Scalable Dynamic Routing [76.78772372631623]
A common practice for self-supervised pre-training is to use as much data as possible.
For a specific downstream task, however, including irrelevant data in pre-training may degrade the downstream performance.
It is burdensome and infeasible to use different downstream-task-customized datasets in pre-training for different tasks.
arXiv Detail & Related papers (2022-05-26T10:49:43Z)
- Deep Ensembles for Low-Data Transfer Learning [21.578470914935938]
We study different ways of creating ensembles from pre-trained models.
We show that the nature of pre-training itself is a performant source of diversity.
We propose a practical algorithm that efficiently identifies a subset of pre-trained models for any downstream dataset.
arXiv Detail & Related papers (2020-10-14T07:59:00Z)
- Self-Supervised Prototypical Transfer Learning for Few-Shot Classification [11.96734018295146]
The self-supervised transfer learning approach ProtoTransfer outperforms state-of-the-art unsupervised meta-learning methods on few-shot tasks.
In few-shot experiments with domain shift, our approach even has comparable performance to supervised methods, but requires orders of magnitude fewer labels.
arXiv Detail & Related papers (2020-06-19T19:00:11Z)