SB-MTL: Score-based Meta Transfer-Learning for Cross-Domain Few-Shot
Learning
- URL: http://arxiv.org/abs/2012.01784v1
- Date: Thu, 3 Dec 2020 09:29:35 GMT
- Title: SB-MTL: Score-based Meta Transfer-Learning for Cross-Domain Few-Shot
Learning
- Authors: John Cai, Bill Cai, Sheng Mei Shen
- Abstract summary: We present a novel, flexible and effective method to address the Cross-Domain Few-Shot Learning problem.
Our method combines transfer-learning and meta-learning by using a MAML-optimized feature encoder and a score-based Graph Neural Network.
We observe significant improvements in accuracy across the 5-, 20- and 50-shot settings and on all four target domains.
- Score: 3.6398662687367973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While many deep learning methods have seen significant success in tackling
the problem of domain adaptation and few-shot learning separately, far fewer
methods are able to jointly tackle both problems in Cross-Domain Few-Shot
Learning (CD-FSL). This problem is exacerbated under sharp domain shifts that
typify common computer vision applications. In this paper, we present a novel,
flexible and effective method to address the CD-FSL problem. Our method, called
Score-based Meta Transfer-Learning (SB-MTL), combines transfer-learning and
meta-learning by using a MAML-optimized feature encoder and a score-based Graph
Neural Network. First, we use a feature encoder with specific layers designed
to be fine-tuned, and we apply a first-order MAML algorithm to find good
initializations for these layers. Second, instead of directly taking the classification scores
after fine-tuning, we interpret the scores as coordinates by mapping the
pre-softmax classification scores onto a metric space. Subsequently, we apply a
Graph Neural Network to propagate label information from the support set to the
query set in our score-based metric space. We test our model on the Broader
Study of Cross-Domain Few-Shot Learning (BSCD-FSL) benchmark, which includes a
range of target domains with highly varying dissimilarity to the miniImagenet
source domain. We observe significant improvements in accuracy across the 5-,
20- and 50-shot settings and on all four target domains. In terms of average accuracy, our
model outperforms previous transfer-learning methods by 5.93% and previous
meta-learning methods by 14.28%.
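To make the two stages concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a first-order MAML-style inner loop that fine-tunes designated layers on the support set, followed by a GNN that propagates support labels through a graph built in the pre-softmax score space. All names (encoder, classifier, gnn, the episode tensors) are illustrative assumptions, not the authors' released code.

import torch
import torch.nn.functional as F

def fomaml_adapt(encoder, classifier, support_x, support_y,
                 inner_steps=5, inner_lr=1e-2):
    # First-order inner loop: adapt only the designated layers (here, the
    # classifier head) on the support set with plain SGD steps, without the
    # second-order terms of full MAML.
    feats = encoder(support_x).detach()
    opt = torch.optim.SGD(classifier.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        loss = F.cross_entropy(classifier(feats), support_y)
        opt.zero_grad()
        loss.backward()
        opt.step()

def score_space_gnn(encoder, classifier, gnn, support_x, support_y,
                    query_x, n_way):
    # Treat pre-softmax scores as coordinates in a metric space and let a
    # GNN propagate label information from support nodes to query nodes.
    with torch.no_grad():
        s = classifier(encoder(support_x))   # (n_support, n_way) scores
        q = classifier(encoder(query_x))     # (n_query,  n_way) scores
    coords = torch.cat([s, q], dim=0)
    labels = torch.cat([F.one_hot(support_y, n_way).float(),
                        torch.full((q.size(0), n_way), 1.0 / n_way)], dim=0)
    nodes = torch.cat([coords, labels], dim=1)                  # node features
    adj = torch.softmax(-torch.cdist(coords, coords), dim=-1)   # score-space graph
    return gnn(nodes, adj)[s.size(0):]                          # refined query predictions

Here the gnn module is assumed to take node features and a dense adjacency matrix and return per-node class logits; in the paper this role is played by the score-based Graph Neural Network.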
Related papers
- EMPL: A novel Efficient Meta Prompt Learning Framework for Few-shot Unsupervised Domain Adaptation [22.586094394391747]
We propose a novel Efficient Meta Prompt Learning Framework for FS-UDA.
Within this framework, we use a pre-trained CLIP model as the base feature-learning model.
Our method improves on state-of-the-art methods by at least 15.4% on 5-way 1-shot and 8.7% on 5-way 5-shot.
arXiv Detail & Related papers (2024-07-04T17:13:06Z)
- Cross-Level Distillation and Feature Denoising for Cross-Domain Few-Shot Classification [49.36348058247138]
We tackle the problem of cross-domain few-shot classification by making a small proportion of unlabeled images in the target domain accessible in the training stage.
We meticulously design a cross-level knowledge distillation method, which strengthens the model's ability to extract more discriminative features in the target dataset (a sketch of such a loss follows below).
Our approach can surpass the previous state-of-the-art method, Dynamic-Distillation, by 5.44% on 1-shot and 1.37% on 5-shot classification tasks.
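As a rough illustration of what a cross-level distillation term can look like, here is a generic sketch; it assumes each student block's features are already projected and flattened to match the next-deeper teacher block, and it is not necessarily the authors' exact loss.

import torch.nn.functional as F

def cross_level_kd_loss(student_feats, teacher_feats):
    # Cross-level pairing: student block i is supervised by the next-deeper
    # teacher block i+1. Each entry is assumed to be (batch, dim) with
    # matching dims across the pair.
    loss = 0.0
    for s, t in zip(student_feats[:-1], teacher_feats[1:]):
        s = F.normalize(s, dim=1)
        t = F.normalize(t, dim=1)
        loss = loss + (1.0 - (s * t.detach()).sum(dim=1)).mean()  # cosine distance
    return loss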
arXiv Detail & Related papers (2023-11-04T12:28:04Z)
- Cross-Domain Cross-Set Few-Shot Learning via Learning Compact and Aligned Representations [74.90423071048458]
Few-shot learning aims to recognize novel queries with only a few support samples.
We consider the domain shift problem in FSL and aim to address the domain gap between the support set and the query set.
We propose a novel approach, namely stabPA, to learn prototypical compact and cross-domain aligned representations.
arXiv Detail & Related papers (2022-07-16T03:40:38Z)
- Adapting the Mean Teacher for keypoint-based lung registration under geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47%, even matching the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z)
- Auto-Transfer: Learning to Route Transferrable Representations [77.30427535329571]
We propose a novel adversarial multi-armed bandit approach which automatically learns to route source representations to appropriate target representations.
We observe accuracy improvements upwards of 5% compared with state-of-the-art knowledge transfer methods (see the sketch below).
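For intuition, such routing can be framed as an adversarial bandit: each arm is a candidate source representation for a given target layer, and the reward reflects how much routing it helps the target task. A minimal sketch of EXP3, the classic adversarial-bandit algorithm, follows; the class and reward definitions are assumptions, not the paper's implementation.

import math
import random

class Exp3Router:
    def __init__(self, n_arms, gamma=0.1):
        self.n, self.gamma = n_arms, gamma
        self.weights = [1.0] * n_arms

    def probs(self):
        # Mix the weight distribution with uniform exploration.
        total = sum(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / self.n
                for w in self.weights]

    def choose(self):
        # Sample which source representation to route this step.
        return random.choices(range(self.n), weights=self.probs())[0]

    def update(self, arm, reward):
        # Standard EXP3 update with an importance-weighted reward in [0, 1].
        p = self.probs()[arm]
        self.weights[arm] *= math.exp(self.gamma * reward / (p * self.n))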
arXiv Detail & Related papers (2022-02-02T13:09:27Z)
- Dynamic Distillation Network for Cross-Domain Few-Shot Recognition with Unlabeled Data [21.348965677980104]
We tackle the problem of cross-domain few-shot recognition with unlabeled target data.
STARTUP was the first method to tackle this problem using self-training.
We propose a simple dynamic distillation-based approach to leverage unlabeled images from the novel/base dataset.
arXiv Detail & Related papers (2021-06-14T23:44:34Z)
- DAMSL: Domain Agnostic Meta Score-based Learning [3.6398662687367973]
Domain Agnostic Meta Score-based Learning is a novel, versatile and highly effective solution for cross-domain few-shot learning.
We identify key problems in previous methods: meta-learning methods over-fit to the source domain, while transfer-learning methods under-utilize the structure of the support set.
We show that our method overcomes the limitations of previous meta-learning and transfer-learning methods to deliver substantial improvements in accuracy across both smaller and larger domain shifts.
arXiv Detail & Related papers (2021-06-06T06:08:05Z)
- Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation [78.28390172958643]
We identify two key aspects that can help alleviate multiple domain shifts in multi-target domain adaptation (MTDA).
We propose Curriculum Graph Co-Teaching (CGCT) that uses a dual classifier head, with one of them being a graph convolutional network (GCN) which aggregates features from similar samples across the domains.
When the domain labels are available, we propose Domain-aware Curriculum Learning (DCL), a sequential adaptation strategy that first adapts on the easier target domains, followed by the harder ones.
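A hedged sketch of the dual-head idea: an ordinary linear head alongside a one-layer GCN head that aggregates features over a similarity graph built across the mixed-domain batch. The module names and the similarity graph are illustrative assumptions, not the CGCT code.

import torch
import torch.nn as nn

class DualHead(nn.Module):
    def __init__(self, dim, n_classes):
        super().__init__()
        self.mlp_head = nn.Linear(dim, n_classes)   # plain classifier head
        self.gcn_proj = nn.Linear(dim, dim)         # one GCN layer
        self.gcn_head = nn.Linear(dim, n_classes)   # GCN classifier head

    def forward(self, feats):
        # Similarity graph over the batch: rows softmax-normalized so each
        # sample aggregates features from its most similar neighbors.
        sim = torch.softmax(feats @ feats.t() / feats.size(1) ** 0.5, dim=-1)
        agg = torch.relu(self.gcn_proj(sim @ feats))
        return self.mlp_head(feats), self.gcn_head(agg)

In a co-teaching setup, each head's confident predictions can then supply pseudo-labels to train the other head.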
arXiv Detail & Related papers (2021-04-01T23:41:41Z)
- A Transductive Multi-Head Model for Cross-Domain Few-Shot Learning [72.30054522048553]
We present a new method, Transductive Multi-Head Few-Shot learning (TMHFS), to address the Cross-Domain Few-Shot Learning challenge.
The proposed method greatly outperforms the strong fine-tuning baseline on four different target domains.
arXiv Detail & Related papers (2020-06-08T02:39:59Z)
- Cross-Domain Few-Shot Learning with Meta Fine-Tuning [8.062394790518297]
We tackle the new Cross-Domain Few-Shot Learning benchmark proposed by the CVPR 2020 Challenge.
We build upon state-of-the-art methods in domain adaptation and few-shot learning to create a system that can be trained to perform both tasks.
arXiv Detail & Related papers (2020-05-21T09:55:26Z)