Cross-domain few-shot learning with unlabelled data
- URL: http://arxiv.org/abs/2101.07899v1
- Date: Tue, 19 Jan 2021 23:41:57 GMT
- Title: Cross-domain few-shot learning with unlabelled data
- Authors: Fupin Yao
- Abstract summary: Few-shot learning aims to solve the data scarcity problem.
We propose a new setting in which some unlabelled data from the target domain is provided.
We come up with a self-supervised learning method to fully utilize the knowledge in the labelled training set and the unlabelled set.
- Score: 1.2183405753834562
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot learning aims to solve the data scarcity problem. However, if there is a domain shift between the test set and the training set, the performance of few-shot learners drops sharply. This setting is called cross-domain few-shot learning, and it is very challenging because the target domain is unseen during training. We therefore propose a new setting in which some unlabelled data from the target domain is provided, which can bridge the gap between the source domain and the target domain. A benchmark for this setting is constructed using DomainNet \cite{peng2018moment}. We come up with a self-supervised learning method to fully utilize the knowledge in the labelled training set and the unlabelled set. Extensive experiments show that our method outperforms several baseline methods by a large margin. We also carefully design an episodic training pipeline which yields a significant performance boost.
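To make the described training recipe concrete, below is a minimal sketch of one episodic training step that combines a few-shot loss on labelled source episodes with a self-supervised loss on unlabelled target images. The abstract names neither the few-shot learner nor the pretext task, so the prototypical-network loss, the rotation-prediction task, and every identifier and hyperparameter here are illustrative assumptions, not the paper's actual method.

```python
# Sketch only: ProtoNet episode loss + rotation pretext task are assumed,
# not taken from the paper.
import torch
import torch.nn.functional as F

def proto_loss(encoder, support_x, support_y, query_x, query_y, n_way):
    """Prototypical-network-style episode loss (assumed few-shot learner)."""
    z_s = encoder(support_x)                       # [n_way * k_shot, d]
    z_q = encoder(query_x)                         # [n_query, d]
    # Class prototype = mean embedding of that class's support samples.
    protos = torch.stack([z_s[support_y == c].mean(0) for c in range(n_way)])
    # Negative squared Euclidean distance serves as the class logits.
    logits = -torch.cdist(z_q, protos) ** 2
    return F.cross_entropy(logits, query_y)

def rotation_loss(encoder, rot_head, unlabelled_x):
    """Rotation prediction on unlabelled target images (assumed pretext task)."""
    xs, ys = [], []
    for k in range(4):                             # 0 / 90 / 180 / 270 degrees
        xs.append(torch.rot90(unlabelled_x, k, dims=(2, 3)))
        ys.append(torch.full((unlabelled_x.size(0),), k, dtype=torch.long))
    return F.cross_entropy(rot_head(encoder(torch.cat(xs))), torch.cat(ys))

def training_step(encoder, rot_head, episode, unlabelled_x, opt, lam=0.5):
    """One step: few-shot loss on a labelled source episode plus a
    self-supervised loss on an unlabelled target batch."""
    support_x, support_y, query_x, query_y, n_way = episode
    loss = (proto_loss(encoder, support_x, support_y, query_x, query_y, n_way)
            + lam * rotation_loss(encoder, rot_head, unlabelled_x))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Here `rot_head` is an assumed linear head mapping embeddings to the four rotation classes, and `lam` weights the auxiliary loss; both are placeholders for whatever the paper actually uses.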
Related papers
- CDFSL-V: Cross-Domain Few-Shot Learning for Videos [58.37446811360741]
Few-shot video action recognition is an effective approach to recognizing new categories with only a few labeled examples.
Existing methods in video action recognition rely on large labeled datasets from the same domain.
We propose a novel cross-domain few-shot video action recognition method that leverages self-supervised learning and curriculum learning.
arXiv Detail & Related papers (2023-09-07T19:44:27Z)
- Adversarial Feature Augmentation for Cross-domain Few-shot Classification [2.68796389443975]
We propose a novel adversarial feature augmentation (AFA) method to bridge the domain gap in few-shot learning.
The proposed method is a plug-and-play module that can be easily integrated into existing few-shot learning methods.
arXiv Detail & Related papers (2022-08-23T15:10:22Z)
- Domain Adaptive Semantic Segmentation without Source Data [50.18389578589789]
We investigate domain adaptive semantic segmentation without source data, which assumes that the model is pre-trained on the source domain.
We propose an effective framework for this challenging problem with two components: positive learning and negative learning.
Our framework can be easily implemented and incorporated with other methods to further enhance the performance.
arXiv Detail & Related papers (2021-10-13T04:12:27Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- OVANet: One-vs-All Network for Universal Domain Adaptation [78.86047802107025]
Existing methods manually set a threshold to reject unknown samples based on validation or a pre-defined ratio of unknown samples.
We propose a method to learn the threshold using source samples and to adapt it to the target domain.
Our idea is that the minimum inter-class distance in the source domain should be a good threshold for deciding whether a target sample is known or unknown.
arXiv Detail & Related papers (2021-04-07T18:36:31Z)
- Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training [67.71228426496013]
We show that using target domain data during pre-training leads to large performance improvements across a variety of setups.
We find that pre-training on multiple domains improves performance generalization on domains not seen during training.
arXiv Detail & Related papers (2021-04-02T12:53:15Z)
- Self-training for Few-shot Transfer Across Extreme Task Differences [46.07212902030414]
Most few-shot learning techniques are pre-trained on a large, labeled "base dataset".
In problem domains where such large labeled datasets are not available for pre-training, one must resort to pre-training in a different "source" problem domain.
Traditional few-shot and transfer learning techniques fail in the presence of such extreme differences between the source and target tasks (a minimal self-training sketch follows this list).
arXiv Detail & Related papers (2020-10-15T13:23:59Z)
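Several entries above rest on the same self-training idea: predict on unlabelled target data, keep only confident predictions as pseudo-labels, and fine-tune on them. The following is a minimal sketch under assumed details; the plain softmax classifier, the fixed confidence threshold, and the single-round loop are illustrative, not any of these papers' exact recipes.

```python
# Sketch only: fixed-threshold pseudo-labelling; the threshold value and
# model interface are assumptions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_label(model, unlabelled_x, threshold=0.9):
    """Keep only target samples whose top predicted probability clears
    the (assumed) confidence threshold."""
    probs = F.softmax(model(unlabelled_x), dim=1)
    conf, labels = probs.max(dim=1)
    keep = conf >= threshold
    return unlabelled_x[keep], labels[keep]

def self_training_round(model, opt, unlabelled_x):
    """Generate pseudo-labels, then fine-tune the model on them."""
    model.eval()
    x, y = pseudo_label(model, unlabelled_x)
    if len(x) == 0:                  # nothing confident enough yet
        return None
    model.train()
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In practice this round is repeated, often with a growing threshold schedule or an ensemble of past predictions (as in the temporal-ensemble pseudo-labelling mentioned above) to keep the pseudo-labels reliable.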
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.