Revisiting Mid-Level Patterns for Cross-Domain Few-Shot Recognition
- URL: http://arxiv.org/abs/2008.03128v4
- Date: Mon, 1 Nov 2021 03:18:25 GMT
- Title: Revisiting Mid-Level Patterns for Cross-Domain Few-Shot Recognition
- Authors: Yixiong Zou, Shanghang Zhang, JianPeng Yu, Yonghong Tian, José M. F. Moura
- Abstract summary: Cross-domain few-shot learning is proposed to transfer knowledge from general-domain base classes to special-domain novel classes.
In this paper, we study a challenging subset of CDFSL where the novel classes are in distant domains from base classes.
We propose a residual-prediction task to encourage mid-level features to learn discriminative information of each sample.
- Score: 31.81367604846625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing few-shot learning (FSL) methods usually assume base classes and
novel classes are from the same domain (in-domain setting). However, in
practice, it may be infeasible to collect sufficient training samples for some
special domains to construct base classes. To solve this problem, cross-domain
FSL (CDFSL) is proposed very recently to transfer knowledge from general-domain
base classes to special-domain novel classes. Existing CDFSL works mostly focus
on transferring between near domains, and rarely consider transferring between
distant domains. The latter is of practical need, since any novel class could
appear in real-world applications, and is even more challenging. In this paper,
we study a challenging subset of CDFSL where the novel classes are in distant
domains from base classes, by revisiting mid-level features, which are more
transferable yet under-explored in mainstream FSL work. To boost the
discriminability of mid-level features, we propose a residual-prediction task
to encourage mid-level features to learn discriminative information of each
sample. Notably, such a mechanism also benefits in-domain FSL and near-domain
CDFSL. Therefore, we provide two types of features, for cross-domain and
in-domain FSL respectively, under the same training framework. Experiments
under both settings on six public datasets, including two challenging medical
datasets, validate our rationale and demonstrate state-of-the-art
performance. Code will be released.
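The few-shot evaluation protocol underlying this line of work — classifying novel-class queries by their nearest class prototype, computed over features taken from an intermediate (mid-level) layer of the backbone — can be sketched as follows. This is a generic illustration under assumed toy data and feature shapes, not the authors' released code.

```python
import numpy as np

def prototypes(support_feats, support_labels, n_classes):
    """Average each class's support features into a single prototype vector."""
    return np.stack([support_feats[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def nearest_prototype(query_feats, protos):
    """Assign each query to the class of its nearest prototype (Euclidean)."""
    dists = np.linalg.norm(query_feats[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way 1-shot episode with 4-dim "mid-level" features (hypothetical data).
support = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0]])
labels = np.array([0, 1])
queries = np.array([[0.9, 0.1, 0.0, 0.0],
                    [0.1, 0.8, 0.0, 0.0]])

protos = prototypes(support, labels, n_classes=2)
print(nearest_prototype(queries, protos))  # -> [0 1]
```

In the cross-domain setting, the argument in the abstract is that swapping the final-layer features for mid-level ones in this same protocol transfers better to distant domains.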
Related papers
- ME-D2N: Multi-Expert Domain Decompositional Network for Cross-Domain
Few-Shot Learning [95.78635058475439]
Cross-Domain Few-Shot Learning aims at addressing the Few-Shot Learning problem across different domains.
This paper technically contributes a novel Multi-Expert Domain Decompositional Network (ME-D2N)
We present a novel domain decomposition module that learns to decompose the student model into two domain-related sub-parts.
arXiv Detail & Related papers (2022-10-11T09:24:47Z)
- Cross-Domain Cross-Set Few-Shot Learning via Learning Compact and
Aligned Representations [74.90423071048458]
Few-shot learning aims to recognize novel queries with only a few support samples.
We consider the domain shift problem in FSL and aim to address the domain gap between the support set and the query set.
We propose a novel approach, namely stabPA, to learn prototypical compact and cross-domain aligned representations.
arXiv Detail & Related papers (2022-07-16T03:40:38Z)
- Few-Shot Object Detection in Unseen Domains [4.36080478413575]
Few-shot object detection (FSOD) has thrived in recent years to learn novel object classes with limited data.
We propose various data augmentation techniques on the few shots of novel classes to account for all possible domain-specific information.
Our experiments on the T-LESS dataset show that the proposed approach succeeds in alleviating the domain gap considerably.
arXiv Detail & Related papers (2022-04-11T13:16:41Z)
- A Strong Baseline for Semi-Supervised Incremental Few-Shot Learning [54.617688468341704]
Few-shot learning aims to learn models that generalize to novel classes with limited training samples.
We propose a novel paradigm containing two parts: (1) a well-designed meta-training algorithm for mitigating ambiguity between base and novel classes caused by unreliable pseudo labels and (2) a model adaptation mechanism to learn discriminative features for novel classes while preserving base knowledge using few labeled and all the unlabeled data.
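The pseudo-label reliability concern in part (1) can be sketched generically: keep an unlabeled sample only when the model's softmax confidence clears a threshold, so unreliable pseudo labels do not blur the boundary between base and novel classes. A minimal, hypothetical illustration (the logits, threshold, and function names are assumptions, not this paper's algorithm):

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for numerical stability."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def filter_pseudo_labels(logits, threshold=0.8):
    """Return (indices, pseudo labels) of samples whose top probability >= threshold."""
    probs = softmax(logits)
    keep = np.where(probs.max(axis=1) >= threshold)[0]
    return keep, probs[keep].argmax(axis=1)

# Hypothetical logits for 3 unlabeled samples over 2 classes.
logits = np.array([[4.0, 0.0],   # confident -> pseudo label 0
                   [0.1, 0.2],   # ambiguous -> dropped
                   [0.0, 3.0]])  # confident -> pseudo label 1

idx, plabels = filter_pseudo_labels(logits)
print(idx.tolist(), plabels.tolist())  # -> [0, 2] [0, 1]
```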
arXiv Detail & Related papers (2021-10-21T13:25:52Z)
- Prototypical Cross-domain Self-supervised Learning for Few-shot
Unsupervised Domain Adaptation [91.58443042554903]
We propose an end-to-end Prototypical Cross-domain Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA)
PCS not only performs cross-domain low-level feature alignment, but it also encodes and aligns semantic structures in the shared embedding space across domains.
Compared with state-of-the-art methods, PCS improves the mean classification accuracy over different domain pairs on FUDA by 10.5%, 3.5%, 9.0%, and 13.2% on Office, Office-Home, VisDA-2017, and DomainNet, respectively.
arXiv Detail & Related papers (2021-03-31T02:07:42Z)
- Domain-Adaptive Few-Shot Learning [124.51420562201407]
We propose a novel domain-adversarial prototypical network (DAPN) model for domain-adaptive few-shot learning.
Our solution is to explicitly enhance the source/target per-class separation before domain-adaptive feature embedding learning.
arXiv Detail & Related papers (2020-03-19T08:31:14Z)
- Few-Shot Learning as Domain Adaptation: Algorithm and Analysis [120.75020271706978]
Few-shot learning uses prior knowledge learned from the seen classes to recognize the unseen classes.
This class-difference-caused distribution shift can be considered as a special case of domain shift.
We propose a prototypical domain adaptation network with attention (DAPNA) to explicitly tackle such a domain shift problem in a meta-learning framework.
arXiv Detail & Related papers (2020-02-06T01:04:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.