Few-shot Open-set Recognition Using Background as Unknowns
- URL: http://arxiv.org/abs/2207.09059v1
- Date: Tue, 19 Jul 2022 04:19:29 GMT
- Title: Few-shot Open-set Recognition Using Background as Unknowns
- Authors: Nan Song, Chi Zhang, Guosheng Lin
- Abstract summary: Few-shot open-set recognition aims to classify both seen and novel images given only limited training data of seen classes.
Our proposed method not only outperforms multiple baselines but also sets new state-of-the-art results on three popular benchmarks.
- Score: 58.04165813493666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot open-set recognition aims to classify both seen and novel images
given only limited training data of seen classes. The challenge of this task is
that the model is required not only to learn a discriminative classifier to
classify the pre-defined classes with few training data but also to reject
inputs from unseen classes that never appear at training time. In this paper,
we propose to solve the problem from two novel aspects. First, instead of
learning the decision boundaries between seen classes, as is done in standard
closed-set classification, we reserve space for unseen classes, such that images
located in these areas are recognized as the unseen classes. Second, to
effectively learn such decision boundaries, we propose to utilize the
background features from seen classes. As these background regions do not
significantly contribute to the decision of closed-set classification, it is
natural to use them as the pseudo unseen classes for classifier learning. Our
extensive experiments show that our proposed method not only outperforms
multiple baselines but also sets new state-of-the-art results on three popular
benchmarks, namely tieredImageNet, miniImageNet, and Caltech-UCSD
Birds-200-2011 (CUB).
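The idea of treating background regions as pseudo-unknowns can be illustrated with a short, hedged sketch. The snippet below is not the authors' implementation: it assumes a pre-trained feature extractor has already produced foreground features for the few-shot support set and features cropped from background regions, and it simply adds one extra prototype for the reserved unseen class.

    # Minimal sketch (not the paper's exact method): a prototype classifier that
    # reserves an extra "unknown" class and fills it with background-region
    # features used as pseudo-unknowns. Feature extraction is assumed elsewhere.
    import numpy as np

    def build_prototypes(support_feats, support_labels, background_feats):
        # support_feats: (N, D) foreground features of the K seen classes
        # support_labels: (N,) integer labels in [0, K)
        # background_feats: (M, D) background-region features, mapped to class K
        classes = np.unique(support_labels)
        protos = [support_feats[support_labels == c].mean(axis=0) for c in classes]
        protos.append(background_feats.mean(axis=0))  # reserved "unknown" prototype
        return np.stack(protos)                       # (K + 1, D)

    def classify(query_feats, prototypes):
        # cosine similarity to each prototype; a prediction of K means "unseen"
        q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
        p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
        return (q @ p.T).argmax(axis=1)

The paper learns such decision boundaries rather than using a fixed nearest-prototype rule; the sketch only shows where background features enter as a pseudo unseen class.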
Related papers
- Generalization Bounds for Few-Shot Transfer Learning with Pretrained
Classifiers [26.844410679685424]
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes.
We show that the few-shot error of the learned feature map on new classes is small in the case of class-feature-variability collapse.
arXiv Detail & Related papers (2022-12-23T18:46:05Z) - Evidential Deep Learning for Class-Incremental Semantic Segmentation [15.563703446465823]
Class-Incremental Learning is a challenging problem in machine learning that aims to extend previously trained neural networks with new classes.
In this paper, we address the problem of how to model unlabeled classes while avoiding spurious feature clustering of future uncorrelated classes.
Our method factorizes the problem into a separate foreground class probability, calculated by the expected value of the Dirichlet distribution, and an unknown class (background) probability corresponding to the uncertainty of the estimate.
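The Dirichlet factorization mentioned in this summary follows the usual evidential-learning recipe, sketched below under standard subjective-logic assumptions (per-class evidence e_k, parameters alpha_k = e_k + 1, strength S = sum of alpha_k); the paper's specific loss terms are not reproduced here.

    # Hedged sketch of the standard evidential (Dirichlet) decomposition:
    # expected class probabilities alpha / S, plus an uncertainty mass K / S
    # that plays the role of the unknown/background probability.
    import numpy as np

    def dirichlet_split(evidence):
        # evidence: (K,) non-negative per-class evidence from the network
        alpha = evidence + 1.0          # Dirichlet parameters
        S = alpha.sum()                 # Dirichlet strength
        p_foreground = alpha / S        # expected class probabilities
        u_unknown = len(alpha) / S      # uncertainty mass -> unknown class
        return p_foreground, u_unknown

    p, u = dirichlet_split(np.array([4.0, 0.5, 0.1]))
    print(p, u)  # confident on class 0, with a moderate unknown mass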
arXiv Detail & Related papers (2022-12-06T10:13:30Z) - Learning What Not to Segment: A New Perspective on Few-Shot Segmentation [63.910211095033596]
Few-shot segmentation (FSS) has been developed extensively in recent years.
This paper proposes a fresh and straightforward insight to alleviate the problem.
In light of the unique nature of the proposed approach, we also extend it to a more realistic but challenging setting.
arXiv Detail & Related papers (2022-03-15T03:08:27Z) - Generalized Category Discovery [148.32255950504182]
We consider a highly general image recognition setting wherein, given a labelled and unlabelled set of images, the task is to categorize all images in the unlabelled set.
Here, the unlabelled images may come from labelled classes or from novel ones.
We first establish strong baselines by taking state-of-the-art algorithms from novel category discovery and adapting them for this task.
We then introduce a simple yet effective semi-supervised $k$-means method to cluster the unlabelled data into seen and unseen classes.
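A hedged sketch of what a semi-supervised k-means of this kind can look like: labelled points stay anchored to their class centroid, while unlabelled points may join either a seen-class centroid or one of the extra centroids reserved for novel classes. Function and argument names are illustrative, not taken from the paper.

    # Illustrative semi-supervised k-means, not the paper's implementation.
    import numpy as np

    def ss_kmeans(x_lab, y_lab, x_unlab, n_novel, n_iters=50, seed=0):
        rng = np.random.default_rng(seed)
        n_seen = int(y_lab.max()) + 1
        # seen-class centroids come from the labels; novel centroids start
        # from randomly chosen unlabelled points
        centroids = np.stack(
            [x_lab[y_lab == c].mean(axis=0) for c in range(n_seen)]
            + list(x_unlab[rng.choice(len(x_unlab), n_novel, replace=False)])
        )
        for _ in range(n_iters):
            d = ((x_unlab[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
            assign = d.argmin(axis=1)                 # unlabelled points choose freely
            for c in range(len(centroids)):
                members = [x_unlab[assign == c]]
                if c < n_seen:
                    members.append(x_lab[y_lab == c])  # labelled points stay put
                members = np.concatenate(members, axis=0)
                if len(members) > 0:
                    centroids[c] = members.mean(axis=0)
        return assign, centroids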
arXiv Detail & Related papers (2022-01-07T18:58:35Z) - Bridging Non Co-occurrence with Unlabeled In-the-wild Data for
Incremental Object Detection [56.22467011292147]
Several incremental learning methods have been proposed to mitigate catastrophic forgetting for object detection.
Despite their effectiveness, these methods require co-occurrence of the unlabeled base classes in the training data of the novel classes.
We propose the use of unlabeled in-the-wild data to bridge the non-co-occurrence caused by the missing base classes during the training of additional novel classes.
arXiv Detail & Related papers (2021-10-28T10:57:25Z) - A Closer Look at Few-Shot Video Classification: A New Baseline and
Benchmark [33.86872697028233]
We present an in-depth study on few-shot video classification by making three contributions.
First, we perform a consistent comparative study on the existing metric-based methods to figure out their limitations in representation learning.
Second, we discover that there is a high correlation between the novel action classes and the ImageNet object classes, which is problematic in the few-shot recognition setting.
Third, we present a new benchmark with more base data to facilitate future few-shot video classification without pre-training.
arXiv Detail & Related papers (2021-10-24T06:01:46Z) - Rectifying the Shortcut Learning of Background: Shared Object
Concentration for Few-Shot Image Recognition [101.59989523028264]
Few-Shot image classification aims to utilize pretrained knowledge learned from a large-scale dataset to tackle a series of downstream classification tasks.
We propose COSOC, a novel Few-Shot Learning framework, to automatically figure out foreground objects at both the pretraining and evaluation stages.
arXiv Detail & Related papers (2021-07-16T07:46:41Z) - Few-Shot Open-Set Recognition using Meta-Learning [72.15940446408824]
The problem of open-set recognition is considered.
A new oPen sEt mEta LEaRning (PEELER) algorithm is introduced.
arXiv Detail & Related papers (2020-05-27T23:49:26Z)