CIM: Class-Irrelevant Mapping for Few-Shot Classification
- URL: http://arxiv.org/abs/2109.02840v1
- Date: Tue, 7 Sep 2021 03:26:24 GMT
- Title: CIM: Class-Irrelevant Mapping for Few-Shot Classification
- Authors: Shuai Shao and Lei Xing and Yixin Chen and Yan-Jiang Wang and Bao-Di Liu and Yicong Zhou
- Abstract summary: Few-shot classification (FSC) is one of the most actively studied problems of recent years.
How to appraise a pre-trained FEM is a crucial question for the FSC community.
We propose a simple, flexible method dubbed Class-Irrelevant Mapping (CIM).
- Score: 58.02773394658623
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot classification (FSC) is one of the most actively studied problems
of recent years. The general setting consists of two phases: (1) pre-train a
feature extraction model (FEM) on base data (which has large amounts of labeled
samples); (2) use the FEM to extract features from novel data (which has few
labeled samples and categories entirely disjoint from the base data), then
classify them with a to-be-designed classifier. The adaptability of the
pre-trained FEM to novel data determines the quality of the novel features and
thereby the final classification performance. For this reason, how to appraise
a pre-trained FEM is a crucial question for the FSC community. Traditional
Class Activation Mapping (CAM) based methods would seem to achieve this by
overlaying weighted feature maps. However, due to the particularities of FSC
(e.g., there is no backpropagation when using the pre-trained FEM to extract
novel features), we cannot activate the feature map with the novel classes. To
address this challenge, we propose a simple, flexible method dubbed
Class-Irrelevant Mapping (CIM). Specifically, we first draw on dictionary
learning theory and view the channels of the feature map as the bases of a
dictionary. We then use the feature map to fit the feature vector of an image,
yielding the corresponding channel weights. Finally, we overlay the weighted
feature map as a visualization to appraise the ability of the pre-trained FEM
on novel data. For fair use of CIM in evaluating different models, we also
propose a new measurement index, called Feature Localization Accuracy (FLA).
In experiments, we first compare CIM with CAM on regular tasks and achieve
outstanding performance. We then use CIM to appraise several classical FSC
frameworks, independently of their classification results, and discuss the findings.
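The pipeline described in the abstract — treat the flattened channels of a feature map as dictionary atoms, fit a target feature vector to obtain channel weights, then overlay the weighted channels as a heatmap — can be sketched as follows. This is a minimal illustrative reading, not the paper's exact implementation: the choice of target vector and the fitting procedure (here, plain ridge regression, with the channel-averaged map standing in for the image's feature vector) are assumptions of this sketch.

```python
import numpy as np

def cim_heatmap(feature_map, target, lam=1e-3):
    """Sketch of a Class-Irrelevant Mapping style heatmap.

    feature_map: (C, H, W) activations from a pre-trained FEM.
    target: length H*W vector to be fitted by the channel "dictionary"
            (the paper's exact choice of feature vector may differ).
    Returns the channel weights and an (H, W) heatmap in [0, 1].
    """
    C, H, W = feature_map.shape
    # Dictionary D: each column is one flattened channel (an "atom").
    D = feature_map.reshape(C, H * W).T
    # Ridge regression for channel weights w: min ||D w - target||^2 + lam ||w||^2
    w = np.linalg.solve(D.T @ D + lam * np.eye(C), D.T @ target)
    # Overlay: weighted sum of channels -> (H, W) map.
    heatmap = np.tensordot(w, feature_map, axes=1)
    # Normalize for visualization.
    heatmap -= heatmap.min()
    if heatmap.max() > 0:
        heatmap /= heatmap.max()
    return w, heatmap

# Example usage with a random feature map; in practice feature_map would
# come from the last convolutional block of the pre-trained FEM.
fm = np.random.rand(8, 4, 5)
weights, hm = cim_heatmap(fm, fm.mean(axis=0).ravel())
```

Because the fit requires no labels and no backpropagation, this kind of mapping can be applied to novel-class images directly, which is the property the abstract emphasizes over CAM.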
Related papers
- Learning Prompt with Distribution-Based Feature Replay for Few-Shot Class-Incremental Learning [56.29097276129473]
We propose a simple yet effective framework, named Learning Prompt with Distribution-based Feature Replay (LP-DiF)
To prevent the learnable prompt from forgetting old knowledge in the new session, we propose a pseudo-feature replay approach.
When progressing to a new session, pseudo-features are sampled from old-class distributions combined with training images of the current session to optimize the prompt.
arXiv Detail & Related papers (2024-01-03T07:59:17Z) - Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need [84.3507610522086]
Class-incremental learning (CIL) aims to adapt to emerging new classes without forgetting old ones.
Recent pre-training has achieved substantial progress, making vast pre-trained models (PTMs) accessible for CIL.
We argue that the core factors in CIL are adaptivity for model updating and generalizability for knowledge transferring.
arXiv Detail & Related papers (2023-03-13T17:59:02Z) - Prediction Calibration for Generalized Few-shot Semantic Segmentation [101.69940565204816]
Generalized Few-shot Semantic Segmentation (GFSS) aims to segment each image pixel into either base classes with abundant training examples or novel classes with only a handful (e.g., 1-5) of training images per class.
We build a cross-attention module that guides the classifier's final prediction using the fused multi-level features.
Our PCN outperforms the state-of-the-art alternatives by large margins.
arXiv Detail & Related papers (2022-10-15T13:30:12Z) - CAD: Co-Adapting Discriminative Features for Improved Few-Shot Classification [11.894289991529496]
Few-shot classification is a challenging problem that aims to learn a model that can adapt to unseen classes given a few labeled samples.
Recent approaches pre-train a feature extractor, and then fine-tune for episodic meta-learning.
We propose a strategy to cross-attend and re-weight discriminative features for few-shot classification.
arXiv Detail & Related papers (2022-03-25T06:14:51Z) - PLATINUM: Semi-Supervised Model Agnostic Meta-Learning using Submodular Mutual Information [3.1845305066053347]
Few-shot classification (FSC) requires training models using a few (typically one to five) data points per class.
We propose PLATINUM, a novel semi-supervised model agnostic meta-learning framework that uses the submodular mutual information (SMI) functions to boost the performance of FSC.
arXiv Detail & Related papers (2022-01-30T22:07:17Z) - Rank4Class: A Ranking Formulation for Multiclass Classification [26.47229268790206]
Multiclass classification (MCC) is a fundamental machine learning problem.
We show that it is easy to boost MCC performance with a novel formulation through the lens of ranking.
arXiv Detail & Related papers (2021-12-17T19:22:37Z) - Exploring Category-correlated Feature for Few-shot Image Classification [27.13708881431794]
We present a simple yet effective feature rectification method that exploits the category correlation between novel and base classes as prior knowledge.
The proposed approach consistently obtains considerable performance gains on three widely used benchmarks.
arXiv Detail & Related papers (2021-12-14T08:25:24Z) - Explanation-Guided Training for Cross-Domain Few-Shot Classification [96.12873073444091]
Cross-domain few-shot classification task (CD-FSC) combines few-shot classification with the requirement to generalize across domains represented by datasets.
We introduce a novel training approach for existing FSC models.
We show that explanation-guided training effectively improves the model generalization.
arXiv Detail & Related papers (2020-07-17T07:28:08Z) - Fine-Grained Visual Classification with Efficient End-to-end Localization [49.9887676289364]
We present an efficient localization module that can be fused with a classification network in an end-to-end setup.
We evaluate the new model on the three benchmark datasets CUB200-2011, Stanford Cars and FGVC-Aircraft.
arXiv Detail & Related papers (2020-05-11T14:07:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.