Few-Shot Learning with Geometric Constraints
- URL: http://arxiv.org/abs/2003.09151v1
- Date: Fri, 20 Mar 2020 08:50:32 GMT
- Title: Few-Shot Learning with Geometric Constraints
- Authors: Hong-Gyu Jung and Seong-Whan Lee
- Abstract summary: We consider the problem of few-shot learning for classification.
We propose two geometric constraints to fine-tune the network with a few training examples.
- Score: 25.22980274856574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this article, we consider the problem of few-shot learning for
classification. We assume a network trained for base categories with a large
number of training examples, and we aim to add novel categories to it that have
only a few, e.g., one or five, training examples. This is a challenging
scenario because: 1) high performance is required in both the base and novel
categories; and 2) training the network for the new categories with a few
training examples can contaminate the feature space trained well for the base
categories. To address these challenges, we propose two geometric constraints
to fine-tune the network with a few training examples. The first constraint
enables features of the novel categories to cluster near the category weights,
and the second maintains the weights of the novel categories far from the
weights of the base categories. By applying the proposed constraints, we
extract discriminative features for the novel categories while preserving the
feature space learned for the base categories. Using public data sets for
few-shot learning that are subsets of ImageNet, we demonstrate that the
proposed method outperforms prevalent methods by a large margin.
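The two constraints described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the use of cosine similarity, and the hinge on the separation term are assumptions made for illustration only. The first loss pulls each novel-category feature toward its category weight vector; the second pushes the novel-category weights away from the base-category weights.

```python
import numpy as np

def clustering_loss(features, labels, novel_weights):
    """First constraint (sketch): pull each novel-class feature toward
    its own class weight, measured as mean cosine distance."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = novel_weights / np.linalg.norm(novel_weights, axis=1, keepdims=True)
    # cosine similarity between each feature and its class weight
    sims = np.sum(f * w[labels], axis=1)
    return float(np.mean(1.0 - sims))

def separation_loss(novel_weights, base_weights):
    """Second constraint (sketch): keep novel-class weights far from
    base-class weights by penalizing positive cosine similarity."""
    w = novel_weights / np.linalg.norm(novel_weights, axis=1, keepdims=True)
    b = base_weights / np.linalg.norm(base_weights, axis=1, keepdims=True)
    sims = w @ b.T  # pairwise novel-vs-base weight similarities
    return float(np.mean(np.maximum(sims, 0.0)))
```

In a fine-tuning loop these two terms would be added (with weighting coefficients) to the classification loss on the few novel-category examples; the exact formulation and weighting used in the paper may differ.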
Related papers
- Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration [67.69532794049445]
We find that existing methods tend to misclassify samples of new classes into base classes, which leads to poor performance on the new classes.
We propose a simple yet effective Training-frEE calibratioN (TEEN) strategy to enhance the discriminability of new classes.
arXiv Detail & Related papers (2023-12-08T18:24:08Z)
- Learn to Categorize or Categorize to Learn? Self-Coding for Generalized Category Discovery [49.1865089933055]
We propose a novel, efficient and self-supervised method capable of discovering previously unknown categories at test time.
A salient feature of our approach is the assignment of minimum length category codes to individual data instances.
Experimental evaluations, bolstered by state-of-the-art benchmark comparisons, testify to the efficacy of our solution.
arXiv Detail & Related papers (2023-10-30T17:45:32Z)
- Fine-grained Category Discovery under Coarse-grained supervision with Hierarchical Weighted Self-contrastive Learning [37.6512548064269]
We investigate a new practical scenario called Fine-grained Category Discovery under Coarse-grained supervision (FCDC)
FCDC aims to discover fine-grained categories using only coarse-grained labeled data, adapting models to categories of a different granularity than the known ones while significantly reducing labeling cost.
We propose a hierarchical weighted self-contrastive network by building a novel weighted self-contrastive module and combining it with supervised learning in a hierarchical manner.
arXiv Detail & Related papers (2022-10-14T12:06:23Z)
- Automatically Discovering Novel Visual Categories with Self-supervised Prototype Learning [68.63910949916209]
This paper tackles the problem of novel category discovery (NCD), which aims to discriminate unknown categories in large-scale image collections.
We propose a novel adaptive prototype learning method consisting of two main stages: prototypical representation learning and prototypical self-training.
We conduct extensive experiments on four benchmark datasets and demonstrate the effectiveness and robustness of the proposed method with state-of-the-art performance.
arXiv Detail & Related papers (2022-08-01T16:34:33Z)
- Few-shot Open-set Recognition Using Background as Unknowns [58.04165813493666]
Few-shot open-set recognition aims to classify both seen and novel images given only limited training data of seen classes.
Our proposed method not only outperforms multiple baselines but also sets new results on three popular benchmarks.
arXiv Detail & Related papers (2022-07-19T04:19:29Z)
- Class-incremental Novel Class Discovery [76.35226130521758]
We study the new task of class-incremental Novel Class Discovery (class-iNCD)
We propose a novel approach for class-iNCD which prevents forgetting of past information about the base classes.
Our experiments, conducted on three common benchmarks, demonstrate that our method significantly outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2022-07-18T13:49:27Z)
- A Simple Approach to Adversarial Robustness in Few-shot Image Classification [20.889464448762176]
We show that a simple transfer-learning based approach can be used to train adversarially robust few-shot classifiers.
We also present a method for the novel-class classification task based on calibrating the centroid of each few-shot category toward the base classes.
arXiv Detail & Related papers (2022-04-11T22:46:41Z)
- Weak Novel Categories without Tears: A Survey on Weak-Shot Learning [10.668094663201385]
It is time-consuming and labor-intensive to collect abundant fully-annotated training data for all categories.
Weak-shot learning can also be treated as weakly supervised learning with auxiliary fully supervised categories.
arXiv Detail & Related papers (2021-10-06T11:04:36Z)
- Fine-grained Angular Contrastive Learning with Coarse Labels [72.80126601230447]
We introduce a novel 'Angular normalization' module that allows supervised and self-supervised contrastive pre-training to be combined effectively.
This work will help to pave the way for future research on this new, challenging, and very practical topic of C2FS classification.
arXiv Detail & Related papers (2020-12-07T08:09:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.