Understanding Cross-Domain Few-Shot Learning: An Experimental Study
- URL: http://arxiv.org/abs/2202.01339v1
- Date: Tue, 1 Feb 2022 12:35:25 GMT
- Title: Understanding Cross-Domain Few-Shot Learning: An Experimental Study
- Authors: Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song,
Se-Young Yun
- Abstract summary: Cross-domain few-shot learning has drawn increasing attention for handling large differences between the source and target domains.
Recent works have considered exploiting small-scale unlabeled data from the target domain during the pre-training stage.
This data enables self-supervised pre-training on the target domain, in addition to supervised pre-training on the source domain.
- Score: 17.81177649496765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-domain few-shot learning has drawn increasing attention for handling
large differences between the source and target domains--an important concern
in real-world scenarios. To overcome these large differences, recent works have
considered exploiting small-scale unlabeled data from the target domain during
the pre-training stage. This data enables self-supervised pre-training on the
target domain, in addition to supervised pre-training on the source domain. In
this paper, we empirically investigate scenarios under which it is advantageous
to use each pre-training scheme, based on domain similarity and few-shot
difficulty: performance gain of self-supervised pre-training over supervised
pre-training increases when domain similarity is smaller or few-shot difficulty
is lower. We further design two pre-training schemes, mixed-supervised and
two-stage learning, that improve performance. In this light, we present seven
findings for CD-FSL which are supported by extensive experiments and analyses
on three source and eight target benchmark datasets with varying levels of
domain similarity and few-shot difficulty. Our code is available at
https://anonymous.4open.science/r/understandingCDFSL.
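The abstract contrasts supervised pre-training on the labeled source domain with self-supervised pre-training on small-scale unlabeled target data, and adds mixed-supervised and two-stage schemes on top of them. The sketch below shows one plausible shape of these objectives: a cross-entropy term on source labels, a SimCLR-style contrastive term (NT-Xent) on two augmented views of target images, a mixed-supervised step that sums the two with a weight gamma, and a two-stage routine that runs them sequentially. The tiny MLP encoder, the NT-Xent choice of self-supervised loss, the weight gamma, and all function names are illustrative assumptions rather than the authors' implementation; the linked repository contains the actual code.

```python
import torch
import torch.nn.functional as F
from torch import nn


class Encoder(nn.Module):
    """Toy backbone with a supervised head (source classes) and a projection head (contrastive loss).
    A small MLP stands in for the backbones typically used; this is an illustrative assumption."""
    def __init__(self, in_dim=3 * 32 * 32, feat_dim=128, num_source_classes=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim)
        )
        self.classifier = nn.Linear(feat_dim, num_source_classes)  # supervised pre-training head
        self.projector = nn.Linear(feat_dim, feat_dim)              # self-supervised projection head

    def forward(self, x):
        return self.backbone(x)


def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss on two augmented views (assumed form of the self-supervised term)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])         # positive = the other view
    return F.cross_entropy(sim, targets)


def supervised_loss(model, src_x, src_y):
    """Supervised pre-training term: cross-entropy on labeled source images."""
    return F.cross_entropy(model.classifier(model(src_x)), src_y)


def self_supervised_loss(model, tgt_v1, tgt_v2):
    """Self-supervised pre-training term: contrastive loss on unlabeled target views."""
    return nt_xent(model.projector(model(tgt_v1)), model.projector(model(tgt_v2)))


def mixed_supervised_loss(model, src_x, src_y, tgt_v1, tgt_v2, gamma=0.5):
    """Mixed-supervised scheme (sketch): jointly optimize both terms, weighted by a hypothetical gamma."""
    return supervised_loss(model, src_x, src_y) + gamma * self_supervised_loss(model, tgt_v1, tgt_v2)


def two_stage_pretrain(model, source_batches, target_batches, lr=0.01):
    """Two-stage scheme (sketch): supervised pre-training on source, then self-supervised on target."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for src_x, src_y in source_batches:        # stage 1: source supervision
        opt.zero_grad(); supervised_loss(model, src_x, src_y).backward(); opt.step()
    for tgt_v1, tgt_v2 in target_batches:      # stage 2: target self-supervision
        opt.zero_grad(); self_supervised_loss(model, tgt_v1, tgt_v2).backward(); opt.step()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = Encoder()
    # Synthetic stand-ins for a labeled source batch and two augmented views of an unlabeled target batch.
    src_x, src_y = torch.randn(8, 3, 32, 32), torch.randint(0, 64, (8,))
    tgt_v1, tgt_v2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
    print("mixed-supervised loss:", mixed_supervised_loss(model, src_x, src_y, tgt_v1, tgt_v2).item())
    two_stage_pretrain(model, [(src_x, src_y)], [(tgt_v1, tgt_v2)])
```

The two-stage variant above reuses a single optimizer across both stages purely for brevity; in practice each stage would have its own schedule, and the relative benefit of the self-supervised term is what the paper relates to domain similarity and few-shot difficulty.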
Related papers
- In-Domain Self-Supervised Learning Improves Remote Sensing Image Scene
Classification [5.323049242720532]
Self-supervised learning has emerged as a promising approach for remote sensing image classification.
We present a study of different self-supervised pre-training strategies and evaluate their effect across 14 downstream datasets.
arXiv Detail & Related papers (2023-07-04T10:57:52Z)
- Deep Learning for Cross-Domain Few-Shot Visual Recognition: A Survey [33.00835033658241]
Few-shot learning enables models to perform target tasks with very few labeled examples, but its performance degrades when the source and target tasks come from different domains.
To overcome this limitation, cross-domain few-shot learning has gained attention.
This paper presents the first comprehensive review of cross-domain few-shot learning.
arXiv Detail & Related papers (2023-03-15T12:18:16Z)
- Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles at the feature level (an illustrative sketch of patch-level style mixing appears after this list).
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z)
- Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training [67.71228426496013]
We show that using target domain data during pre-training leads to large performance improvements across a variety of setups.
We find that pre-training on multiple domains improves performance generalization on domains not seen during training.
arXiv Detail & Related papers (2021-04-02T12:53:15Z)
- Self-training for Few-shot Transfer Across Extreme Task Differences [46.07212902030414]
Most few-shot learning techniques are pre-trained on a large, labeled "base dataset".
In problem domains where such large labeled datasets are not available for pre-training, one must resort to pre-training in a different "source" problem domain.
Traditional few-shot and transfer learning techniques fail in the presence of such extreme differences between the source and target tasks.
arXiv Detail & Related papers (2020-10-15T13:23:59Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- Physically-Constrained Transfer Learning through Shared Abundance Space for Hyperspectral Image Classification [14.840925517957258]
We propose a new transfer learning scheme to bridge the gap between the source and target domains.
The proposed method is referred to as physically-constrained transfer learning through shared abundance space.
arXiv Detail & Related papers (2020-08-19T17:41:37Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much less annotated data in the target domain than in the source domain.
Our parser benefits from a two-stage coarse-to-fine framework, which provides different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
- Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
- Few-Shot Learning as Domain Adaptation: Algorithm and Analysis [120.75020271706978]
Few-shot learning uses prior knowledge learned from the seen classes to recognize the unseen classes.
The distribution shift caused by this difference between seen and unseen classes can be considered a special case of domain shift.
We propose a prototypical domain adaptation network with attention (DAPNA) to explicitly tackle such a domain shift problem in a meta-learning framework.
arXiv Detail & Related papers (2020-02-06T01:04:53Z)
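The Cross-Patch Style Swap entry above only names the idea of diversifying samples with patch styles at the feature level. As a purely illustrative guess at what patch-level style mixing can look like, the sketch below swaps per-patch channel statistics (mean and standard deviation) between samples in a batch; the patch size, the statistic-swap formulation, and the function name are assumptions for illustration, not the authors' CPSS operation.

```python
import torch


def swap_patch_styles(feat: torch.Tensor, patch: int = 4, eps: float = 1e-5) -> torch.Tensor:
    """Illustrative patch-level style mixing (not the paper's CPSS): exchange per-patch channel
    statistics (mean/std) between samples. feat: (B, C, H, W), with H and W divisible by `patch`."""
    b, c, h, w = feat.shape
    # Reinterpret the feature map as non-overlapping patches: (B, C, H/p, p, W/p, p).
    patches = feat.view(b, c, h // patch, patch, w // patch, patch)
    mean = patches.mean(dim=(3, 5), keepdim=True)
    std = patches.std(dim=(3, 5), keepdim=True) + eps
    normalized = (patches - mean) / std
    perm = torch.randperm(b)                      # borrow each patch's style from another random sample
    restyled = normalized * std[perm] + mean[perm]
    return restyled.view(b, c, h, w)


if __name__ == "__main__":
    x = torch.randn(2, 8, 16, 16)
    print(swap_patch_styles(x).shape)  # torch.Size([2, 8, 16, 16]): same shape, mixed patch styles
```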
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.