Dynamic Distillation Network for Cross-Domain Few-Shot Recognition with
Unlabeled Data
- URL: http://arxiv.org/abs/2106.07807v1
- Date: Mon, 14 Jun 2021 23:44:34 GMT
- Title: Dynamic Distillation Network for Cross-Domain Few-Shot Recognition with
Unlabeled Data
- Authors: Ashraful Islam, Chun-Fu Chen, Rameswar Panda, Leonid Karlinsky,
Rogerio Feris, Richard J. Radke
- Abstract summary: We tackle the problem of cross-domain few-shot recognition with unlabeled target data.
STARTUP was the first method to tackle this problem using self-training.
We propose a simple dynamic distillation-based approach that exploits unlabeled images from the novel/base dataset.
- Score: 21.348965677980104
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most existing works in few-shot learning rely on meta-learning the network on
a large base dataset which is typically from the same domain as the target
dataset. We tackle the problem of cross-domain few-shot learning where there is
a large shift between the base and target domain. The problem of cross-domain
few-shot recognition with unlabeled target data is largely unaddressed in the
literature. STARTUP was the first method to tackle this problem using
self-training. However, it uses a fixed teacher pretrained on a labeled base
dataset to create soft labels for the unlabeled target samples. As the base
dataset and the unlabeled dataset come from different domains, projecting the
target images into the class-domain of the base dataset with a fixed pretrained
model might be sub-optimal. We propose a simple dynamic distillation-based
approach that exploits unlabeled images from the novel/base dataset. We impose
consistency regularization by computing predictions on weakly augmented
versions of the unlabeled images with a teacher network and matching them with
the student network's predictions on strongly augmented versions of the same
images. The parameters of the teacher network are updated as an exponential
moving average of the parameters of the student network. We show that the
proposed network learns a representation that can be easily adapted to the
target domain even though it has not been trained on target-specific classes
during the pretraining phase. Our model outperforms the current
state-of-the-art method by 4.4% for 1-shot and 3.6% for 5-shot classification
on the BSCD-FSL benchmark, and also shows competitive performance on the
traditional in-domain few-shot learning task. Our
code will be available at: https://github.com/asrafulashiq/dynamic-cdfsl.
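The training loop described in the abstract can be summarized as: a teacher network produces soft labels on weakly augmented unlabeled target images, a student network is trained to match those labels on strongly augmented views of the same images (plus a supervised loss on the labeled base data), and the teacher's weights track the student's as an exponential moving average. The PyTorch sketch below is a minimal illustration of that loop under stated assumptions; the tiny backbone, the KL-divergence form of the consistency loss, the loss weight, the EMA momentum of 0.999, and the helper names training_step/ema_update are placeholders, not the authors' released implementation (see the repository linked above for that).

```python
import copy

import torch
import torch.nn.functional as F

# Hypothetical tiny backbone + linear head; the paper uses a larger encoder.
student = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # the teacher is never updated by gradients

optimizer = torch.optim.SGD(student.parameters(), lr=0.01, momentum=0.9)
EMA_MOMENTUM = 0.999  # assumed value, not taken from the paper


@torch.no_grad()
def ema_update(teacher, student, m):
    # teacher <- m * teacher + (1 - m) * student  (exponential moving average)
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(m).add_(s_p, alpha=1.0 - m)


def training_step(base_x, base_y, weak_x, strong_x, unlabeled_weight=1.0):
    """One illustrative step of dynamic-distillation-style training.

    base_x, base_y   -- labeled batch from the base dataset
    weak_x, strong_x -- weakly / strongly augmented views of the same unlabeled
                        target-domain images
    """
    # Supervised cross-entropy on the labeled base data.
    sup_loss = F.cross_entropy(student(base_x), base_y)

    # Teacher produces soft labels from the weak views (no gradients).
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(weak_x), dim=-1)

    # Student is trained to match the teacher on the strong views (consistency).
    student_log_probs = F.log_softmax(student(strong_x), dim=-1)
    consistency_loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

    loss = sup_loss + unlabeled_weight * consistency_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # The "dynamic" part: unlike STARTUP's fixed pretrained teacher, the teacher
    # here tracks the student as an exponential moving average after every step.
    ema_update(teacher, student, EMA_MOMENTUM)
    return loss.item()
```

The final ema_update call is what distinguishes this from STARTUP's fixed teacher: the soft labels for the target-domain images keep improving as the student adapts, rather than being frozen at the base-dataset pretraining stage.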
Related papers
- Cross-Level Distillation and Feature Denoising for Cross-Domain Few-Shot
Classification [49.36348058247138]
We tackle the problem of cross-domain few-shot classification by making a small proportion of unlabeled images in the target domain accessible in the training stage.
We meticulously design a cross-level knowledge distillation method, which can strengthen the ability of the model to extract more discriminative features in the target dataset.
Our approach can surpass the previous state-of-the-art method, Dynamic-Distillation, by 5.44% on 1-shot and 1.37% on 5-shot classification tasks.
arXiv Detail & Related papers (2023-11-04T12:28:04Z)
- CDFSL-V: Cross-Domain Few-Shot Learning for Videos [58.37446811360741]
Few-shot video action recognition is an effective approach to recognizing new categories with only a few labeled examples.
Existing methods in video action recognition rely on large labeled datasets from the same domain.
We propose a novel cross-domain few-shot video action recognition method that leverages self-supervised learning and curriculum learning.
arXiv Detail & Related papers (2023-09-07T19:44:27Z)
- Focus on Your Target: A Dual Teacher-Student Framework for
Domain-adaptive Semantic Segmentation [210.46684938698485]
We study unsupervised domain adaptation (UDA) for semantic segmentation.
We find that, by decreasing/increasing the proportion of training samples from the target domain, the 'learning ability' is strengthened/weakened.
We propose a novel dual teacher-student (DTS) framework and equip it with a bidirectional learning strategy.
arXiv Detail & Related papers (2023-03-16T05:04:10Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- OVANet: One-vs-All Network for Universal Domain Adaptation [78.86047802107025]
Existing methods manually set a threshold to reject unknown samples based on validation or a pre-defined ratio of unknown samples.
We propose a method to learn the threshold using source samples and to adapt it to the target domain.
Our idea is that a minimum inter-class distance in the source domain should be a good threshold to decide between known and unknown in the target (see the short sketch after this list).
arXiv Detail & Related papers (2021-04-07T18:36:31Z)
- SB-MTL: Score-based Meta Transfer-Learning for Cross-Domain Few-Shot
Learning [3.6398662687367973]
We present a novel, flexible and effective method to address the Cross-Domain Few-Shot Learning problem.
Our method combines transfer-learning and meta-learning by using a MAML-optimized feature encoder and a score-based Graph Neural Network.
We observe significant improvements in accuracy across the 5-, 20- and 50-shot settings and on the four target domains.
arXiv Detail & Related papers (2020-12-03T09:29:35Z)
- Teacher-Student Consistency For Multi-Source Domain Adaptation [28.576613317253035]
In Multi-Source Domain Adaptation (MSDA), models are trained on samples from multiple source domains and used for inference on a different target domain.
We propose Multi-source Student Teacher (MUST), a novel procedure designed to alleviate these issues.
arXiv Detail & Related papers (2020-10-20T06:17:40Z)
- Self-training for Few-shot Transfer Across Extreme Task Differences [46.07212902030414]
Most few-shot learning techniques are pre-trained on a large, labeled "base dataset".
In problem domains where such large labeled datasets are not available for pre-training, one must resort to pre-training in a different "source" problem domain.
Traditional few-shot and transfer learning techniques fail in the presence of such extreme differences between the source and target tasks.
arXiv Detail & Related papers (2020-10-15T13:23:59Z)
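For the OVANet entry above, the quoted intuition, that the minimum inter-class distance in the source domain can serve as the threshold separating known from unknown target samples, can be made concrete with a short sketch. The prototype-mean features, the Euclidean distance, and the helper names min_interclass_distance/is_unknown below are illustrative assumptions; OVANet itself learns this boundary with one-vs-all classifiers rather than computing an explicit distance threshold.

```python
import torch


def min_interclass_distance(source_feats, source_labels):
    """Smallest pairwise Euclidean distance between source class prototypes.

    Only an illustration of the quoted intuition; OVANet learns the
    known-vs-unknown boundary with one-vs-all classifiers instead of
    computing an explicit threshold like this.
    """
    classes = source_labels.unique()
    protos = torch.stack([source_feats[source_labels == c].mean(dim=0) for c in classes])
    dists = torch.cdist(protos, protos)      # pairwise prototype distances
    dists.fill_diagonal_(float("inf"))       # ignore self-distances
    return dists.min()


def is_unknown(target_feat, source_feats, source_labels):
    """Flag a target feature as 'unknown' if it lies farther from every source
    prototype than the smallest distance separating two source classes."""
    classes = source_labels.unique()
    protos = torch.stack([source_feats[source_labels == c].mean(dim=0) for c in classes])
    threshold = min_interclass_distance(source_feats, source_labels)
    return torch.cdist(target_feat.unsqueeze(0), protos).min() > threshold
```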