How does the degree of novelty impacts semi-supervised representation
learning for novel class retrieval?
- URL: http://arxiv.org/abs/2208.08217v1
- Date: Wed, 17 Aug 2022 10:49:10 GMT
- Title: How does the degree of novelty impacts semi-supervised representation
learning for novel class retrieval?
- Authors: Quentin Leroy, Olivier Buisson, Alexis Joly
- Abstract summary: Supervised representation learning with deep networks tends to overfit the training classes.
We propose an original evaluation methodology that varies the degree of novelty of novel classes.
We find that a vanilla supervised representation falls short on the retrieval of novel classes, even more so when the semantic gap is larger.
- Score: 0.5672132510411463
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Supervised representation learning with deep networks tends to overfit the
training classes and the generalization to novel classes is a challenging
question. It is common to evaluate a learned embedding on held-out images of
the same training classes. In real applications however, data comes from new
sources and novel classes are likely to arise. We hypothesize that
incorporating unlabelled images of novel classes in the training set in a
semi-supervised fashion would be beneficial for the efficient retrieval of
novel-class images compared to a vanilla supervised representation. To verify
this hypothesis in a comprehensive way, we propose an original evaluation
methodology that varies the degree of novelty of novel classes by partitioning
the dataset category-wise either randomly or semantically, i.e. by minimizing
the shared semantics between base and novel classes. This evaluation procedure
allows us to train a representation blind to any novel-class labels and to evaluate
the frozen representation on the retrieval of base or novel classes. We find
that a vanilla supervised representation falls short on the retrieval of novel
classes, even more so when the semantic gap is larger. Semi-supervised
algorithms partially bridge this performance gap, but there is still much room
for improvement.
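To make the protocol concrete, the following is a minimal sketch of a category-wise base/novel split followed by a frozen-embedding retrieval evaluation, in the spirit of the abstract above. It is not the authors' code: the greedy semantic_class_split heuristic, the toy class-similarity matrix, and the random stand-in embedding are all illustrative assumptions.

```python
# Minimal sketch of the evaluation protocol described above (assumptions, not
# the authors' code): partition classes into base/novel either randomly or by
# a greedy "semantic" grouping, then score a frozen embedding on leave-one-out
# retrieval (mAP) separately for base and novel classes.
import numpy as np

rng = np.random.default_rng(0)

def random_class_split(n_classes, n_novel):
    """Category-wise random partition into base and novel classes."""
    novel = rng.choice(n_classes, size=n_novel, replace=False).tolist()
    base = [c for c in range(n_classes) if c not in novel]
    return base, novel

def semantic_class_split(similarity, n_novel):
    """Greedy stand-in for the paper's semantic partition: grow a novel set of
    mutually similar classes so that base and novel share little semantics."""
    novel = [int(np.argmin(similarity.sum(axis=1)))]     # most "isolated" seed class
    while len(novel) < n_novel:
        cand = [c for c in range(similarity.shape[0]) if c not in novel]
        novel.append(max(cand, key=lambda c: similarity[c, novel].mean()))
    base = [c for c in range(similarity.shape[0]) if c not in novel]
    return base, novel

def retrieval_map(embeddings, labels):
    """Mean average precision of leave-one-out cosine-similarity retrieval."""
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = x @ x.T
    aps = []
    for i in range(len(labels)):
        order = np.argsort(-sims[i])
        order = order[order != i]                        # drop the query itself
        rel = (labels[order] == labels[i]).astype(float)
        prec_at_k = np.cumsum(rel) / (np.arange(len(rel)) + 1)
        aps.append((prec_at_k * rel).sum() / rel.sum())
    return float(np.mean(aps))

# Toy setup: 20 classes, 30 images each, a frozen 64-d embedding (here random;
# in the paper it would come from the supervised or semi-supervised network).
n_classes, per_class, dim = 20, 30, 64
similarity = rng.random((n_classes, n_classes))
similarity = (similarity + similarity.T) / 2             # symmetric toy semantics

base, novel = semantic_class_split(similarity, n_novel=5)
labels = np.repeat(np.arange(n_classes), per_class)
embeddings = rng.normal(size=(len(labels), dim))

for name, group in (("base", base), ("novel", novel)):
    mask = np.isin(labels, group)
    print(f"{name} retrieval mAP: {retrieval_map(embeddings[mask], labels[mask]):.3f}")
```

In the paper's setting, the embedding would come from a network trained on base-class labels (optionally with unlabelled novel-class images), and base versus novel mAP would be compared across the random and semantic splits.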
Related papers
- Semantic Enhanced Few-shot Object Detection [37.715912401900745]
We propose a fine-tuning based FSOD framework that utilizes semantic embeddings for better detection.
Our method allows each novel class to construct a compact feature space without being confused with similar base classes.
arXiv Detail & Related papers (2024-06-19T12:40:55Z)
- ProxyDet: Synthesizing Proxy Novel Classes via Classwise Mixup for Open-Vocabulary Object Detection [7.122652901894367]
Open-vocabulary object detection (OVOD) aims to recognize novel objects whose categories are not included in the training set.
We present a novel, yet simple technique that helps generalization on the overall distribution of novel classes.
arXiv Detail & Related papers (2023-12-12T13:45:56Z)
- PromptCAL: Contrastive Affinity Learning via Auxiliary Prompts for Generalized Novel Category Discovery [39.03732147384566]
The Generalized Novel Category Discovery (GNCD) setting aims to categorize unlabeled training data coming from known and novel classes.
We propose a Contrastive Affinity Learning method with auxiliary visual Prompts, dubbed PromptCAL, to address this challenging problem.
Our approach discovers reliable pairwise sample affinities to learn better semantic clustering of both known and novel classes for the class token and visual prompts.
arXiv Detail & Related papers (2022-12-11T20:06:14Z)
- Activating the Discriminability of Novel Classes for Few-shot Segmentation [48.542627940781095]
We propose to activate the discriminability of novel classes explicitly in both the feature encoding stage and the prediction stage for segmentation.
In the prediction stage for segmentation, we learn a Self-Refined Online Foreground-Background classifier (SROFB), which is able to refine itself using the high-confidence pixels of the query image.
arXiv Detail & Related papers (2022-12-02T12:22:36Z)
- Novel Class Discovery without Forgetting [72.52222295216062]
We identify and formulate a new, pragmatic problem setting of NCDwF: Novel Class Discovery without Forgetting.
We propose a machine learning model to incrementally discover novel categories of instances from unlabeled data.
We introduce experimental protocols based on CIFAR-10, CIFAR-100 and ImageNet-1000 to measure the trade-off between knowledge retention and novel class discovery.
arXiv Detail & Related papers (2022-07-21T17:54:36Z)
- Bridging Non Co-occurrence with Unlabeled In-the-wild Data for Incremental Object Detection [56.22467011292147]
Several incremental learning methods have been proposed to mitigate catastrophic forgetting for object detection.
Despite their effectiveness, these methods require co-occurrence of the unlabeled base classes in the training data of the novel classes.
We propose the use of unlabeled in-the-wild data to bridge the non-co-occurrence caused by the missing base classes during the training of additional novel classes.
arXiv Detail & Related papers (2021-10-28T10:57:25Z)
- Revisiting Deep Local Descriptor for Improved Few-Shot Classification [56.74552164206737]
We show how one can improve the quality of embeddings by leveraging Dense Classification and Attentive Pooling.
We suggest pooling feature maps with attentive pooling instead of the widely used global average pooling (GAP) to prepare embeddings for few-shot classification (see the pooling sketch after this list).
arXiv Detail & Related papers (2021-03-30T00:48:28Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on the learned representations (see the two-stage sketch after this list).
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- Generalized Few-Shot Video Classification with Video Retrieval and Feature Generation [132.82884193921535]
We argue that previous methods underestimate the importance of video feature learning and propose a two-stage approach.
We show that this simple baseline approach outperforms prior few-shot video classification methods by over 20 points on existing benchmarks.
We present two novel approaches that yield further improvement.
arXiv Detail & Related papers (2020-07-09T13:05:32Z)
- Sharing Matters for Generalization in Deep Metric Learning [22.243744691711452]
This work investigates how to learn characteristics that separate classes without the need for annotations or training data.
Because our approach is formulated as a novel triplet sampling strategy, it can be easily applied on top of recent ranking loss frameworks.
arXiv Detail & Related papers (2020-04-12T10:21:15Z)
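As referenced in the "Revisiting Deep Local Descriptor" entry above, here is a generic sketch of attentive pooling as a drop-in replacement for global average pooling. The scoring vector w and the toy feature map are assumptions for illustration, not that paper's actual attention design.

```python
# Generic attentive pooling vs. global average pooling (illustrative only).
import numpy as np

def global_average_pool(feat):
    """feat: (H, W, C) feature map -> (C,) embedding via GAP."""
    return feat.reshape(-1, feat.shape[-1]).mean(axis=0)

def attentive_pool(feat, w):
    """Softmax-weighted average over spatial locations, scored by w: (C,)."""
    flat = feat.reshape(-1, feat.shape[-1])            # (H*W, C)
    scores = flat @ w                                  # attention logits per location
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return (weights[:, None] * flat).sum(axis=0)       # weighted average -> (C,)

rng = np.random.default_rng(0)
feat = rng.normal(size=(7, 7, 128))                    # toy convolutional feature map
w = rng.normal(size=128)                               # assumed learnable scoring vector
print(global_average_pool(feat).shape, attentive_pool(feat, w).shape)
```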
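The two-stage recipe in the "Learning and Evaluating Representations for Deep One-class Classification" entry can be sketched the same way, with a placeholder encoder (a random projection standing in for a self-supervised representation) and an off-the-shelf one-class SVM; none of these choices are claimed to match that paper's implementation.

```python
# Two-stage one-class classification sketch (illustrative assumptions only).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
inliers = rng.normal(loc=0.0, size=(500, 32))   # one-class training data
outliers = rng.normal(loc=3.0, size=(50, 32))   # unseen anomalies for testing

# Stage 1: representation learning (placeholder for a self-supervised encoder).
projection = rng.normal(size=(32, 16))

def embed(x):
    return x @ projection

# Stage 2: one-class classifier fitted on the frozen embeddings.
clf = OneClassSVM(nu=0.1, gamma="scale").fit(embed(inliers))
print("inlier acceptance:", (clf.predict(embed(inliers)) == 1).mean())
print("outlier rejection:", (clf.predict(embed(outliers)) == -1).mean())
```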