Fine-grained Angular Contrastive Learning with Coarse Labels
- URL: http://arxiv.org/abs/2012.03515v1
- Date: Mon, 7 Dec 2020 08:09:02 GMT
- Title: Fine-grained Angular Contrastive Learning with Coarse Labels
- Authors: Guy Bukchin, Eli Schwartz, Kate Saenko, Ori Shahar, Rogerio Feris,
Raja Giryes, Leonid Karlinsky
- Abstract summary: We introduce a novel 'Angular normalization' module that makes it possible to effectively combine supervised and self-supervised contrastive pre-training.
This work will help to pave the way for future research on this new, challenging, and very practical topic of C2FS classification.
- Score: 72.80126601230447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot learning methods offer pre-training techniques optimized for easier
later adaptation of the model to new classes (unseen during training) using one
or a few examples. This adaptivity to unseen classes is especially important
for many practical applications where the pre-trained label space cannot remain
fixed for effective use and the model needs to be "specialized" to support new
categories on the fly. One particularly interesting scenario, essentially
overlooked by the few-shot literature, is Coarse-to-Fine Few-Shot (C2FS), where
the training classes (e.g. animals) are of much 'coarser granularity' than the
target (test) classes (e.g. breeds). A very practical example of C2FS is when
the target classes are sub-classes of the training classes. Intuitively, it is
especially challenging as (both regular and few-shot) supervised pre-training
tends to learn to ignore intra-class variability which is essential for
separating sub-classes. In this paper, we introduce a novel 'Angular
normalization' module that allows supervised and self-supervised contrastive
pre-training to be combined effectively for the proposed C2FS task,
demonstrating significant gains in a broad study over multiple baselines and
datasets. We hope that this work will help to pave the way for future research
on this new, challenging, and very practical topic of C2FS classification.
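The abstract leaves the internals of the 'Angular normalization' module unspecified, so the following is only a minimal illustrative sketch, not the paper's actual method. It assumes 'angular normalization' amounts to L2-normalizing embeddings onto the unit hypersphere, and it combines a standard supervised contrastive (InfoNCE-style) loss over the coarse labels with a self-supervised instance-discrimination loss. All function names, the `lam` weighting, and the temperature `tau` are assumptions for illustration:

```python
# Minimal sketch (assumed details, not the paper's exact module): combine a
# supervised contrastive loss over coarse labels with a self-supervised
# instance-level contrastive loss, both on L2-normalized ("angular") embeddings.
import torch
import torch.nn.functional as F

def angular_normalize(z: torch.Tensor) -> torch.Tensor:
    # Project embeddings onto the unit hypersphere so dot products are cosines.
    return F.normalize(z, dim=-1)

def info_nce(z: torch.Tensor, pos_mask: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    # Generic InfoNCE-style loss: for each anchor row, entries of pos_mask
    # that are True (other than the anchor itself) are treated as positives.
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    logits = (z @ z.t() / tau).masked_fill(self_mask, float('-inf'))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos = (pos_mask & ~self_mask).float()
    # Zero the -inf diagonal before the masked sum to keep the product finite.
    safe_log_prob = log_prob.masked_fill(self_mask, 0.0)
    counts = pos.sum(dim=1)
    loss = -(safe_log_prob * pos).sum(dim=1) / counts.clamp(min=1)
    return loss[counts > 0].mean()

def c2fs_pretrain_loss(z1, z2, coarse_labels, lam=0.5, tau=0.1):
    # z1, z2: embeddings of two augmented views of the same batch (n, d);
    # coarse_labels: the available coarse class labels (n,).
    z = angular_normalize(torch.cat([z1, z2], dim=0))
    labels = torch.cat([coarse_labels, coarse_labels], dim=0)
    sup_mask = labels.unsqueeze(0) == labels.unsqueeze(1)    # same coarse class
    idx = torch.arange(z.size(0), device=z.device) % z1.size(0)
    self_sup_mask = idx.unsqueeze(0) == idx.unsqueeze(1)     # same source image
    # lam trades off coarse supervision against instance discrimination.
    return lam * info_nce(z, sup_mask, tau) + (1 - lam) * info_nce(z, self_sup_mask, tau)
```

The intuition: the coarse-label term groups same-class samples, while the instance-level term preserves the intra-class variability that the abstract argues is essential for separating sub-classes.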
Related papers
- Enhancing Visual Continual Learning with Language-Guided Supervision [76.38481740848434]
Continual learning aims to empower models to learn new tasks without forgetting previously acquired knowledge.
We argue that the scarce semantic information conveyed by the one-hot labels hampers the effective knowledge transfer across tasks.
Specifically, we use PLMs to generate semantic targets for each class, which are frozen and serve as supervision signals.
arXiv Detail & Related papers (2024-03-24T12:41:58Z)
- Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration [67.69532794049445]
We find that existing methods tend to misclassify samples of new classes as base classes, which leads to poor performance on the new classes.
We propose a simple yet effective Training-frEE calibratioN (TEEN) strategy to enhance the discriminability of new classes.
arXiv Detail & Related papers (2023-12-08T18:24:08Z)
- RanPAC: Random Projections and Pre-trained Models for Continual Learning [59.07316955610658]
Continual learning (CL) aims to learn different tasks (such as classification) in a non-stationary data stream without forgetting old ones.
We propose a concise and effective approach for CL with pre-trained models.
arXiv Detail & Related papers (2023-07-05T12:49:02Z)
- Sylph: A Hypernetwork Framework for Incremental Few-shot Object Detection [8.492340530784697]
We show that finetune-free iFSD can be highly effective when a large number of base categories with abundant data are available for meta-training.
We benchmark our model on both COCO and LVIS, reporting up to 17% AP on the long-tail rare classes of LVIS.
arXiv Detail & Related papers (2022-03-25T20:39:00Z)
- Learning What Not to Segment: A New Perspective on Few-Shot Segmentation [63.910211095033596]
Few-shot segmentation (FSS) has recently been extensively developed.
This paper proposes a fresh and straightforward insight to alleviate the problem.
In light of the unique nature of the proposed approach, we also extend it to a more realistic but challenging setting.
arXiv Detail & Related papers (2022-03-15T03:08:27Z)
- Subspace Regularizers for Few-Shot Class Incremental Learning [26.372024890126408]
We present a new family of subspace regularization schemes that encourage weight vectors for new classes to lie close to the subspace spanned by the weights of existing classes (see the sketch after this list).
Our results show that simple geometric regularization of class representations offers an effective tool for continual learning.
arXiv Detail & Related papers (2021-10-13T22:19:53Z)
- Prior Guided Feature Enrichment Network for Few-Shot Segmentation [64.91560451900125]
State-of-the-art semantic segmentation methods require sufficient labeled data to achieve good results.
Few-shot segmentation is proposed to tackle this problem by learning a model that quickly adapts to new classes with a few labeled support samples.
These frameworks still face reduced generalization to unseen classes due to inappropriate use of high-level semantic information.
arXiv Detail & Related papers (2020-08-04T10:41:32Z)
- Incremental Few-Shot Object Detection for Robotics [15.082365880914896]
The Class-Incremental Few-Shot Object Detection (CI-FSOD) framework enables a deep object detection network to perform effective continual learning from just a few samples.
Our framework is simple yet effective and outperforms the previous SOTA by a significant margin of 2.4 AP points.
arXiv Detail & Related papers (2020-05-06T08:05:08Z)
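As referenced above, here is a hypothetical sketch of the subspace-regularization idea from "Subspace Regularizers for Few-Shot Class Incremental Learning": penalize the distance between each new-class weight vector and its projection onto the subspace spanned by the existing class weights. The QR-based projection and all names here are my illustration, not the paper's exact formulation:

```python
# Hypothetical sketch (illustrative, not the paper's exact scheme): penalize
# the distance of each new-class weight vector from the subspace spanned by
# the existing (base) class weight vectors.
import torch

def subspace_regularizer(w_new: torch.Tensor, w_base: torch.Tensor) -> torch.Tensor:
    # w_new: (k, d) trainable weights for new classes.
    # w_base: (m, d) frozen weights of existing classes.
    q, _ = torch.linalg.qr(w_base.t())   # (d, m): orthonormal basis of span(w_base)
    proj = (w_new @ q) @ q.t()           # projection of each new weight onto the span
    return ((w_new - proj) ** 2).sum(dim=1).mean()

# Usage: add the penalty to the classification loss, e.g.
#   loss = cross_entropy + beta * subspace_regularizer(w_new, w_base)
```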