Cooperative Bi-path Metric for Few-shot Learning
- URL: http://arxiv.org/abs/2008.04031v1
- Date: Mon, 10 Aug 2020 11:28:52 GMT
- Title: Cooperative Bi-path Metric for Few-shot Learning
- Authors: Zeyuan Wang, Yifan Zhao, Jia Li, Yonghong Tian
- Abstract summary: We make two contributions to investigate the few-shot classification problem.
We report a simple and effective baseline trained on base classes with standard supervised learning.
We propose a cooperative bi-path metric for classification, which leverages the correlations between base classes and novel classes to further improve the accuracy.
- Score: 50.98891758059389
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given base classes with sufficient labeled samples, the target of few-shot
classification is to recognize unlabeled samples of novel classes with only a
few labeled samples. Most existing methods focus only on the relationship
between the labeled and unlabeled samples of novel classes and thus do not
make full use of the information within the base classes. In this paper, we make
two contributions to investigate the few-shot classification problem. First, we
report a simple and effective baseline trained on base classes with standard
supervised learning, which achieves results comparable to the
state of the art. Second, based on the baseline, we propose a cooperative
bi-path metric for classification, which leverages the correlations between
base classes and novel classes to further improve the accuracy. Experiments on
two widely used benchmarks show that our method is a simple and effective
framework, and a new state of the art is established in the few-shot
classification field.
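The abstract describes a metric with two cooperating paths: a direct comparison between a query and the novel-class prototypes, and an indirect comparison that routes through the base classes to exploit base-novel correlations. The sketch below is a minimal, hypothetical NumPy illustration of that idea, not the authors' exact formulation; the function names, the softmax weighting over base prototypes, and the mixing coefficient `alpha` are all assumptions for illustration.

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of a and rows of b.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def softmax(x):
    # Row-wise softmax, numerically stabilized.
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def bi_path_scores(query, novel_protos, base_protos, alpha=0.5):
    """Hypothetical two-path classification score.

    Path 1 compares the query directly to novel-class prototypes.
    Path 2 describes the query and each novel prototype by their
    similarity distributions over the base-class prototypes, then
    compares those distributions, so base-novel correlations
    contribute to the final score.
    """
    # Path 1: direct similarity of each query to each novel prototype.
    direct = cosine_sim(query, novel_protos)                 # (Q, N)
    # Path 2: base-class "signatures" of queries and novel prototypes.
    q_base = softmax(cosine_sim(query, base_protos))         # (Q, B)
    n_base = softmax(cosine_sim(novel_protos, base_protos))  # (N, B)
    indirect = q_base @ n_base.T                             # (Q, N)
    # Cooperative combination of the two paths.
    return alpha * direct + (1 - alpha) * indirect
```

A query lying close to one novel prototype should score highest for that class under both paths, since its base-class signature will also resemble that prototype's signature.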
Related papers
- Liberating Seen Classes: Boosting Few-Shot and Zero-Shot Text Classification via Anchor Generation and Classification Reframing [38.84431954053434]
Few-shot and zero-shot text classification aim to recognize samples from novel classes with limited labeled samples or no labeled samples at all.
We propose a simple and effective strategy for few-shot and zero-shot text classification.
arXiv Detail & Related papers (2024-05-06T15:38:32Z) - Knowledge Distillation from Single to Multi Labels: an Empirical Study [14.12487391004319]
We introduce a novel distillation method based on Class Activation Maps (CAMs)
Our findings indicate that the logit-based method is not well-suited for multi-label classification.
We propose that a suitable dark knowledge should incorporate class-wise information and be highly correlated with the final classification results.
arXiv Detail & Related papers (2023-03-15T04:39:01Z) - Multi-Faceted Distillation of Base-Novel Commonality for Few-shot Object
Detection [58.48995335728938]
We learn three types of class-agnostic commonalities between base and novel classes explicitly.
Our method can be readily integrated into most existing fine-tuning-based methods and consistently improves performance by a large margin.
arXiv Detail & Related papers (2022-07-22T16:46:51Z) - Class-incremental Novel Class Discovery [76.35226130521758]
We study the new task of class-incremental Novel Class Discovery (class-iNCD)
We propose a novel approach for class-iNCD which prevents forgetting of past information about the base classes.
Our experiments, conducted on three common benchmarks, demonstrate that our method significantly outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2022-07-18T13:49:27Z) - A Simple Approach to Adversarial Robustness in Few-shot Image
Classification [20.889464448762176]
We show that a simple transfer-learning based approach can be used to train adversarially robust few-shot classifiers.
We also present a method for the novel classification task, based on calibrating the centroid of each few-shot category towards the base classes.
arXiv Detail & Related papers (2022-04-11T22:46:41Z) - Bridging Non Co-occurrence with Unlabeled In-the-wild Data for
Incremental Object Detection [56.22467011292147]
Several incremental learning methods are proposed to mitigate catastrophic forgetting for object detection.
Despite the effectiveness, these methods require co-occurrence of the unlabeled base classes in the training data of the novel classes.
We propose the use of unlabeled in-the-wild data to bridge the non co-occurrence caused by the missing base classes during the training of additional novel classes.
arXiv Detail & Related papers (2021-10-28T10:57:25Z) - ECKPN: Explicit Class Knowledge Propagation Network for Transductive
Few-shot Learning [53.09923823663554]
Class-level knowledge can be easily learned by humans from just a handful of samples.
We propose an Explicit Class Knowledge Propagation Network (ECKPN) to address this problem.
We conduct extensive experiments on four few-shot classification benchmarks, and the experimental results show that the proposed ECKPN significantly outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-06-16T02:29:43Z) - Few-shot Learning for Multi-label Intent Detection [59.66787898744991]
State-of-the-art work estimates label-instance relevance scores and uses a threshold to select multiple associated intent labels.
Experiments on two datasets show that the proposed model significantly outperforms strong baselines in both one-shot and five-shot settings.
arXiv Detail & Related papers (2020-10-11T14:42:18Z) - Shot in the Dark: Few-Shot Learning with No Base-Class Labels [32.96824710484196]
We show that off-the-shelf self-supervised learning outperforms transductive few-shot methods by 3.9% for 5-shot accuracy on miniImageNet.
This motivates us to examine more carefully the role of features learned through self-supervision in few-shot learning.
arXiv Detail & Related papers (2020-10-06T02:05:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.