Adaptive Parametric Prototype Learning for Cross-Domain Few-Shot
Classification
- URL: http://arxiv.org/abs/2309.01342v1
- Date: Mon, 4 Sep 2023 03:58:50 GMT
- Title: Adaptive Parametric Prototype Learning for Cross-Domain Few-Shot
Classification
- Authors: Marzi Heidari, Abdullah Alchihabi, Qing En, Yuhong Guo
- Abstract summary: We develop a novel Adaptive Parametric Prototype Learning (APPL) method under the meta-learning convention for cross-domain few-shot classification.
APPL yields superior performance to many state-of-the-art cross-domain few-shot learning methods.
- Score: 23.82751179819225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-domain few-shot classification poses a much more challenging problem
than its in-domain counterpart due to the existence of domain shifts between
the training and test tasks. In this paper, we develop a novel Adaptive
Parametric Prototype Learning (APPL) method under the meta-learning convention
for cross-domain few-shot classification. Different from existing prototypical
few-shot methods that use the averages of support instances to calculate the
class prototypes, we propose to learn class prototypes from the concatenated
features of the support set in a parametric fashion and meta-learn the model by
enforcing prototype-based regularization on the query set. In addition, we
fine-tune the model in the target domain in a transductive manner using a
weighted-moving-average self-training approach on the query instances. We
conduct experiments on multiple cross-domain few-shot benchmark datasets. The
empirical results demonstrate that APPL outperforms many state-of-the-art
cross-domain few-shot learning methods.
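Since no code accompanies this summary, the following is a minimal PyTorch sketch of the two ideas the abstract describes: a parametric prototype head that maps the concatenated support features of each class to its prototype, and a weighted-moving-average pseudo-label update for transductive self-training on the query set. The module names, dimensions, and momentum value are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParametricPrototypeHead(nn.Module):
    """Learn class prototypes from the concatenated support features of each
    class, instead of averaging them as in vanilla prototypical networks."""

    def __init__(self, feat_dim: int, k_shot: int):
        super().__init__()
        self.proto_net = nn.Sequential(
            nn.Linear(feat_dim * k_shot, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, support_feats: torch.Tensor) -> torch.Tensor:
        # support_feats: (n_way, k_shot, feat_dim) -> prototypes: (n_way, feat_dim)
        n_way, k_shot, feat_dim = support_feats.shape
        return self.proto_net(support_feats.reshape(n_way, k_shot * feat_dim))

def prototype_logits(query_feats: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    # Negative squared Euclidean distance to each prototype, prototypical-net style.
    return -torch.cdist(query_feats, prototypes) ** 2

@torch.no_grad()
def wma_pseudo_labels(old_probs: torch.Tensor, new_logits: torch.Tensor,
                      momentum: float = 0.9) -> torch.Tensor:
    """Weighted-moving-average self-training on query instances: smooth the
    pseudo-label distribution across fine-tuning steps before using it as a
    transductive training target (momentum value is an assumption)."""
    return momentum * old_probs + (1.0 - momentum) * F.softmax(new_logits, dim=-1)
```

During target-domain fine-tuning, one would repeatedly score the query set, refresh the smoothed pseudo-labels with wma_pseudo_labels, and minimize a cross-entropy loss against them.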
Related papers
- Leveraging Normalization Layer in Adapters With Progressive Learning and
Adaptive Distillation for Cross-Domain Few-Shot Learning [27.757318834190443]
Cross-domain few-shot learning presents a formidable challenge, as models must be trained on base classes and tested on novel classes from various domains with only a few samples at hand.
We introduce a novel generic framework that leverages normalization layers in adapters with Progressive Learning and Adaptive Distillation (ProLAD).
We deploy two strategies: progressive training of the two adapters and an adaptive distillation technique guided by features from the model equipped with only the adapter that lacks a normalization layer (a rough sketch follows this entry).
arXiv Detail & Related papers (2023-12-18T15:02:14Z)
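The summary above only names the ingredients, so here is a rough, hypothetical sketch of the two adapter variants (with and without a normalization layer) and a distillation term that aligns the normalized branch with the norm-free one; the bottleneck shape, loss form, and per-sample weighting are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Residual bottleneck adapter; the normalization layer is optional,
    mirroring ProLAD's two adapter variants (architecture assumed)."""

    def __init__(self, dim: int, use_norm: bool):
        super().__init__()
        self.norm = nn.LayerNorm(dim) if use_norm else nn.Identity()
        self.down = nn.Linear(dim, dim // 4)
        self.up = nn.Linear(dim // 4, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(F.relu(self.down(self.norm(x))))

def adaptive_distillation(feats_norm_branch: torch.Tensor,
                          feats_plain_branch: torch.Tensor,
                          sample_weights: torch.Tensor) -> torch.Tensor:
    """Distill toward the branch whose adapter has no normalization layer;
    the MSE form and the weighting rule are assumptions."""
    target = feats_plain_branch.detach()  # teacher features, not updated
    per_sample = F.mse_loss(feats_norm_branch, target, reduction="none").mean(dim=-1)
    return (sample_weights * per_sample).mean()
```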
- Multi-level Relation Learning for Cross-domain Few-shot Hyperspectral Image Classification [8.78907921615878]
Cross-domain few-shot hyperspectral image classification focuses on learning prior knowledge from a large number of labeled samples in source domains.
This paper proposes to learn sample relations at different levels and incorporate them into the model learning process.
arXiv Detail & Related papers (2023-11-02T13:06:03Z)
- Dual Adaptive Representation Alignment for Cross-domain Few-shot Learning [58.837146720228226]
Few-shot learning aims to recognize novel queries with limited support samples by learning from base knowledge.
Recent progress in this setting assumes that the base knowledge and novel query samples are distributed in the same domains.
We propose to address the cross-domain few-shot learning problem where only extremely few samples are available in target domains.
arXiv Detail & Related papers (2023-06-18T09:52:16Z)
- Semi-supervised Domain Adaptation via Prototype-based Multi-level Learning [4.232614032390374]
In semi-supervised domain adaptation (SSDA), a few labeled target samples of each class help the model to transfer knowledge representation from the fully labeled source domain to the target domain.
We propose a Prototype-based Multi-level Learning (ProML) framework to better tap the potential of labeled target samples.
arXiv Detail & Related papers (2023-05-04T10:09:30Z)
- Learning Instance-Specific Adaptation for Cross-Domain Segmentation [79.61787982393238]
We propose a test-time adaptation method for cross-domain image segmentation.
Given a new unseen instance at test time, we adapt a pre-trained model by conducting instance-specific BatchNorm calibration (a sketch follows this entry).
arXiv Detail & Related papers (2022-03-30T17:59:45Z)
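The paper's exact calibration rule is not given in this summary; below is a generic, hypothetical sketch of instance-specific BatchNorm calibration: run the single test instance through the network, collect per-layer activation statistics, and blend them into the stored running estimates. The blending scheme and the value of alpha are assumptions.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def calibrate_batchnorm(model: nn.Module, instance: torch.Tensor,
                        alpha: float = 0.1) -> None:
    """Blend each BatchNorm layer's running statistics with statistics
    computed on a single test instance (alpha is an assumed mixing weight)."""
    stats, hooks = {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            x = inputs[0]
            dims = [0] + list(range(2, x.dim()))  # reduce over batch and spatial dims
            stats[name] = (x.mean(dim=dims), x.var(dim=dims, unbiased=False))
        return hook

    bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
    for name, module in model.named_modules():
        if isinstance(module, bn_types):
            hooks.append(module.register_forward_hook(make_hook(name)))
    model.eval()
    model(instance)  # one forward pass just to gather per-layer statistics
    for h in hooks:
        h.remove()
    for name, module in model.named_modules():
        if name in stats:
            mean, var = stats[name]
            module.running_mean.mul_(1 - alpha).add_(alpha * mean)
            module.running_var.mul_(1 - alpha).add_(alpha * var)
```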
- Contrastive Prototype Learning with Augmented Embeddings for Few-Shot Learning [58.2091760793799]
We propose a novel contrastive prototype learning with augmented embeddings (CPLAE) model.
With a class prototype as an anchor, CPL aims to pull the query samples of the same class closer and push those of different classes further away (a toy version of this loss follows this entry).
Extensive experiments on several benchmarks demonstrate that the proposed CPLAE achieves new state-of-the-art performance.
arXiv Detail & Related papers (2021-01-23T13:22:44Z)
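As a toy illustration of a prototype-anchored contrastive objective in the spirit of CPL (the cosine-similarity form and temperature are assumptions, not the paper's exact loss):

```python
import torch
import torch.nn.functional as F

def contrastive_prototype_loss(query_feats: torch.Tensor,
                               query_labels: torch.Tensor,
                               prototypes: torch.Tensor,
                               tau: float = 0.1) -> torch.Tensor:
    """Each query is attracted to its own class prototype (the anchor) and
    repelled from the other prototypes via a softmax over similarities."""
    q = F.normalize(query_feats, dim=-1)  # (num_query, feat_dim)
    p = F.normalize(prototypes, dim=-1)   # (n_way, feat_dim)
    logits = q @ p.t() / tau              # temperature-scaled cosine similarity
    return F.cross_entropy(logits, query_labels)
```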
- Class-Incremental Domain Adaptation [56.72064953133832]
We introduce a practical Domain Adaptation (DA) paradigm called Class-Incremental Domain Adaptation (CIDA).
Existing DA methods tackle domain-shift but are unsuitable for learning novel target-domain classes.
Our approach yields superior performance compared to both DA and CI methods in the CIDA paradigm.
arXiv Detail & Related papers (2020-08-04T07:55:03Z)
- Cross-domain Detection via Graph-induced Prototype Alignment [114.8952035552862]
We propose a Graph-induced Prototype Alignment (GPA) framework to seek category-level domain alignment.
In addition, to alleviate the negative effect of class imbalance on domain adaptation, we design a Class-reweighted Contrastive Loss (sketched after this entry).
Our approach outperforms existing methods by a remarkable margin.
arXiv Detail & Related papers (2020-03-28T17:46:55Z)
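The class-reweighting idea can be illustrated with a hypothetical prototype-level contrastive term: same-class prototypes from the two domains are pulled together, different-class pairs are pushed apart beyond a margin, and each class is weighted inversely to its frequency. The margin, the weighting rule, and the loss form are all assumptions rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def class_reweighted_contrastive(src_protos: torch.Tensor,
                                 tgt_protos: torch.Tensor,
                                 class_counts: torch.Tensor,
                                 margin: float = 1.0) -> torch.Tensor:
    """Row i of src_protos / tgt_protos is class i's prototype in the source /
    target domain; class_counts holds per-class sample counts."""
    n = src_protos.size(0)
    weights = class_counts.sum() / class_counts.float()  # rarer class -> larger weight
    weights = weights / weights.sum()
    dist = torch.cdist(src_protos, tgt_protos)           # (n, n) pairwise distances
    pull = dist.diagonal()                               # align same-class pairs
    off_diag = dist[~torch.eye(n, dtype=torch.bool, device=dist.device)]
    push = F.relu(margin - off_diag).reshape(n, n - 1).mean(dim=1)
    return (weights * (pull + push)).sum()
```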
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.