Revisiting Deep Local Descriptor for Improved Few-Shot Classification
- URL: http://arxiv.org/abs/2103.16009v1
- Date: Tue, 30 Mar 2021 00:48:28 GMT
- Title: Revisiting Deep Local Descriptor for Improved Few-Shot Classification
- Authors: Jun He, Richang Hong, Xueliang Liu, Mingliang Xu and Meng Wang
- Abstract summary: We show how one can improve the quality of embeddings by leveraging Dense Classification and Attentive Pooling.
We suggest pooling feature maps with attentive pooling instead of the widely used global average pooling (GAP) to prepare embeddings for few-shot classification.
- Score: 56.74552164206737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot classification studies the problem of quickly adapting a deep
learner to understanding novel classes based on few support images. In this
context, recent research efforts have been aimed at designing more and more
complex classifiers that measure similarities between query and support images,
but have left the importance of feature embeddings largely unexplored. We show that
relying on a sophisticated classifier is not necessary and that a simple classifier
applied directly to improved feature embeddings can outperform state-of-the-art
methods. To this end, we present a new method named \textbf{DCAP} in which we
investigate how one can improve the quality of embeddings by leveraging
\textbf{D}ense \textbf{C}lassification and \textbf{A}ttentive \textbf{P}ooling.
Specifically, we propose to first pre-train a learner on base classes with abundant
samples to solve a dense classification problem, and then fine-tune the learner on a
set of randomly sampled few-shot tasks to adapt it to the few-shot (i.e., test-time)
scenario. During meta-finetuning, we pool feature maps with attentive pooling
instead of the widely used global average pooling (GAP) to prepare embeddings for
few-shot classification.
Attentive pooling learns to reweight local descriptors, explaining what the
learner is looking for as evidence for decision making. Experiments on two
benchmark datasets show the proposed method to be superior in multiple few-shot
settings while being simpler and more explainable. Code is available at:
\url{https://github.com/Ukeyboard/dcap/}.
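The abstract contrasts attentive pooling with GAP but the listing contains no code; below is a minimal PyTorch-style sketch of what attentive pooling over local descriptors could look like. The module name, the 1x1-convolution scoring head, and the tensor shapes are assumptions for illustration, not the authors' released implementation (see the repository above for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentivePooling(nn.Module):
    """Pool a feature map into one embedding by reweighting local descriptors.

    Hypothetical sketch: a 1x1 convolution scores each spatial location, the
    scores are softmax-normalized over the h*w positions, and the embedding is
    the score-weighted sum of local descriptors (instead of their unweighted
    mean, as in GAP).
    """

    def __init__(self, in_channels: int):
        super().__init__()
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)  # one score per location

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (batch, c, h, w) feature map of local descriptors
        b, c, h, w = feat.shape
        weights = self.score(feat).view(b, 1, h * w)   # (b, 1, h*w) location scores
        weights = F.softmax(weights, dim=-1)           # attention over locations
        descriptors = feat.view(b, c, h * w)           # (b, c, h*w) local descriptors
        return (descriptors * weights).sum(dim=-1)     # (b, c) weighted sum

# Usage: replace GAP, i.e. feat.mean(dim=(2, 3)), with the attentive variant.
feat = torch.randn(4, 640, 5, 5)           # assumed feature-map shape for illustration
pool = AttentivePooling(in_channels=640)
embedding = pool(feat)                      # (4, 640) embeddings for few-shot classification
```

The attention weights also form a per-location saliency map over the input, which is what supports the abstract's claim that the pooling explains which local evidence the learner relies on.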
Related papers
- Semantic Enhanced Few-shot Object Detection [37.715912401900745]
We propose a fine-tuning based FSOD framework that utilizes semantic embeddings for better detection.
Our method allows each novel class to construct a compact feature space without being confused with similar base classes.
arXiv Detail & Related papers (2024-06-19T12:40:55Z)
- Rethinking Few-shot 3D Point Cloud Semantic Segmentation [62.80639841429669]
This paper revisits few-shot 3D point cloud semantic segmentation (FS-PCS)
We focus on two significant issues in the state-of-the-art: foreground leakage and sparse point distribution.
To address these issues, we introduce a standardized FS-PCS setting, upon which a new benchmark is built.
arXiv Detail & Related papers (2024-03-01T15:14:47Z)
- Few-shot Image Classification based on Gradual Machine Learning [6.935034849731568]
Few-shot image classification aims to accurately classify unlabeled images using only a few labeled samples.
We propose a novel approach based on the non-i.i.d. paradigm of gradual machine learning (GML).
We show that the proposed approach can improve the SOTA performance by 1-5% in terms of accuracy.
arXiv Detail & Related papers (2023-07-28T12:30:41Z)
- Fast Hierarchical Learning for Few-Shot Object Detection [57.024072600597464]
Transfer learning approaches have recently achieved promising results on the few-shot detection task.
These approaches suffer from the "catastrophic forgetting" issue due to fine-tuning of the base detector.
We tackle the aforementioned issues in this work.
arXiv Detail & Related papers (2022-10-10T20:31:19Z)
- A Simple Approach to Adversarial Robustness in Few-shot Image Classification [20.889464448762176]
We show that a simple transfer-learning based approach can be used to train adversarially robust few-shot classifiers.
We also present a method for the novel classification task based on calibrating the centroid of the few-shot category towards the base classes.
arXiv Detail & Related papers (2022-04-11T22:46:41Z)
- Class-Incremental Learning with Strong Pre-trained Models [97.84755144148535]
Class-incremental learning (CIL) has been widely studied under the setting of starting from a small number of classes (base classes)
We explore an understudied real-world setting of CIL that starts with a strong model pre-trained on a large number of base classes.
Our proposed method is robust and generalizes to all analyzed CIL settings.
arXiv Detail & Related papers (2022-04-07T17:58:07Z)
- A Closer Look at Few-Shot Video Classification: A New Baseline and Benchmark [33.86872697028233]
We present an in-depth study on few-shot video classification by making three contributions.
First, we perform a consistent comparative study on the existing metric-based methods to figure out their limitations in representation learning.
Second, we discover that there is a high correlation between the novel action class and the ImageNet object class, which is problematic in the few-shot recognition setting.
Third, we present a new benchmark with more base data to facilitate future few-shot video classification without pre-training.
arXiv Detail & Related papers (2021-10-24T06:01:46Z)
- Contrastive Prototype Learning with Augmented Embeddings for Few-Shot Learning [58.2091760793799]
We propose a novel contrastive prototype learning with augmented embeddings (CPLAE) model.
With a class prototype as an anchor, CPL aims to pull query samples of the same class closer and push those of different classes further away (see the sketch after this list).
Extensive experiments on several benchmarks demonstrate that our proposed CPLAE achieves new state-of-the-art.
arXiv Detail & Related papers (2021-01-23T13:22:44Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
- Weakly-supervised Object Localization for Few-shot Learning and Fine-grained Few-shot Learning [0.5156484100374058]
Few-shot learning aims to learn novel visual categories from very few samples.
We propose a Self-Attention Based Complementary Module (SAC Module) to perform weakly-supervised object localization.
We also produce the activated masks for selecting discriminative deep descriptors for few-shot classification.
arXiv Detail & Related papers (2020-03-02T14:07:05Z)
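The CPLAE entry in the list above describes a contrastive objective with class prototypes as anchors: same-class queries are pulled toward their prototype and other-class queries are pushed away. Below is a minimal sketch of such a prototype-anchored contrastive loss; the function name, shapes, cosine similarity, and temperature are assumptions for illustration, not the CPLAE authors' implementation, and the augmented-embedding component is omitted.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(query_emb, query_labels,
                               support_emb, support_labels,
                               temperature=0.1):
    """Prototype-anchored contrastive loss (illustrative sketch).

    Each class prototype is the mean of its support embeddings; cross-entropy
    over query-to-prototype similarities pulls queries toward their own class
    prototype and pushes them away from the other (negative) prototypes.
    """
    classes = torch.unique(support_labels)
    # (n_classes, dim) prototypes as per-class means of the support set
    prototypes = torch.stack(
        [support_emb[support_labels == c].mean(dim=0) for c in classes])

    # Cosine similarities between every query and every prototype, temperature-scaled
    sims = F.normalize(query_emb, dim=-1) @ F.normalize(prototypes, dim=-1).T
    sims = sims / temperature

    # Map each query label to its prototype index and apply cross-entropy
    targets = torch.stack(
        [(classes == y).nonzero(as_tuple=True)[0][0] for y in query_labels])
    return F.cross_entropy(sims, targets)

# Example: a 5-way task with 5 supports and 15 queries per class, 640-d embeddings
support_emb = torch.randn(25, 640)
support_labels = torch.arange(5).repeat_interleave(5)
query_emb = torch.randn(75, 640)
query_labels = torch.arange(5).repeat_interleave(15)
loss = prototype_contrastive_loss(query_emb, query_labels, support_emb, support_labels)
```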