Class-Specific Channel Attention for Few-Shot Learning
- URL: http://arxiv.org/abs/2209.01332v1
- Date: Sat, 3 Sep 2022 05:54:20 GMT
- Title: Class-Specific Channel Attention for Few-Shot Learning
- Authors: Ying-Yu Chen, Jun-Wei Hsieh, Ming-Ching Chang
- Abstract summary: Few-Shot Learning has attracted growing attention in computer vision due to its capability to train models without the need for excessive data.
Conventional transfer-based solutions that aim to transfer knowledge learned from large labeled training sets to target testing sets are limited.
We propose the Class-Specific Channel Attention (CSCA) module, which learns to highlight the discriminative channels of each class by assigning each class its own CSCA weight vector.
- Score: 16.019616787091202
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-Shot Learning (FSL) has attracted growing attention in computer vision
due to its capability to train models without the need for excessive data.
FSL is challenging because the training and testing categories (the base vs.
novel sets) can differ widely. Conventional transfer-based solutions
that aim to transfer knowledge learned from large labeled training sets to
target testing sets are limited, as critical adverse impacts of the shift in
task distribution are not adequately addressed. In this paper, we extend the
solution of transfer-based methods by incorporating the concepts of
metric learning and channel attention. To better exploit the feature
representations extracted by the feature backbone, we propose the Class-Specific
Channel Attention (CSCA) module, which learns to highlight the discriminative
channels of each class by assigning each class its own CSCA weight vector. Unlike
general attention modules designed to learn global-class features, the CSCA
module aims to learn local, class-specific features with highly efficient
computation. We evaluate the performance of the CSCA module on standard
benchmarks including miniImageNet, tieredImageNet, CIFAR-FS, and CUB-200-2011.
Experiments are performed in inductive and in/cross-domain settings. We achieve
new state-of-the-art results.
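Since the abstract describes the mechanism only in prose, the following is a minimal, hedged PyTorch sketch of the per-class channel-attention idea: one learnable weight vector per class re-weights the backbone's channels before a metric (cosine) comparison with that class's prototype. The class name CSCAHead, the sigmoid gating, and the cosine metric are illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch of class-specific channel attention: one learnable
# weight vector per class re-weights feature channels before a cosine
# comparison with that class's prototype. Names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CSCAHead(nn.Module):
    def __init__(self, n_classes: int, n_channels: int):
        super().__init__()
        # One channel-attention vector per class.
        self.class_weights = nn.Parameter(torch.ones(n_classes, n_channels))

    def forward(self, query_feat: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
        """query_feat: (B, C) pooled features; prototypes: (K, C) class means.
        Returns (B, K) similarity logits."""
        attn = torch.sigmoid(self.class_weights)           # (K, C), gates in (0, 1)
        q = query_feat.unsqueeze(1) * attn.unsqueeze(0)    # (B, K, C)
        p = prototypes * attn                              # (K, C)
        # Cosine similarity per class after class-specific re-weighting.
        return F.cosine_similarity(q, p.unsqueeze(0), dim=-1)

# Toy 5-way episode with 64-channel pooled features.
head = CSCAHead(n_classes=5, n_channels=64)
logits = head(torch.randn(8, 64), torch.randn(5, 64))
print(logits.shape)  # torch.Size([8, 5])
```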
Related papers
- Memory-guided Network with Uncertainty-based Feature Augmentation for Few-shot Semantic Segmentation [12.653336728447654]
We propose a class-shared memory (CSM) module consisting of a set of learnable memory vectors.
These memory vectors learn elemental object patterns from base classes during training whilst re-encoding query features during both training and inference.
We integrate CSM and UFA (Uncertainty-based Feature Augmentation) into representative FSS works, with experimental results on the widely-used PASCAL-5$^i$ and COCO-20$^i$ datasets.
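To make the memory mechanism concrete, here is a minimal sketch of re-encoding query features against a bank of learnable memory vectors via soft attention; ClassSharedMemory and the residual read-out are assumptions for illustration, not the paper's exact CSM design.
```python
# Illustrative sketch: re-encode query features by attending over a set
# of learnable memory vectors (assumed design, not the paper's CSM).
import torch
import torch.nn as nn

class ClassSharedMemory(nn.Module):
    def __init__(self, n_slots: int, dim: int):
        super().__init__()
        # Learnable memory vectors shared across classes.
        self.memory = nn.Parameter(torch.randn(n_slots, dim) * 0.02)

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        """query: (B, N, D) token features -> re-encoded (B, N, D)."""
        attn = torch.softmax(query @ self.memory.t(), dim=-1)  # (B, N, S)
        read = attn @ self.memory                              # (B, N, D)
        return query + read  # residual re-encoding with memory read-out

mem = ClassSharedMemory(n_slots=16, dim=256)
print(mem(torch.randn(2, 100, 256)).shape)  # torch.Size([2, 100, 256])
```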
arXiv Detail & Related papers (2024-06-01T19:53:25Z)
- Efficient Prompt Tuning of Large Vision-Language Model for Fine-Grained Ship Classification [62.425462136772666]
Fine-grained ship classification in remote sensing (RS-FGSC) poses a significant challenge due to the high similarity between classes and the limited availability of labeled data.
Recent advancements in large pre-trained Vision-Language Models (VLMs) have demonstrated impressive capabilities in few-shot or zero-shot learning.
This study delves into harnessing the potential of VLMs to enhance classification accuracy for unseen ship categories.
arXiv Detail & Related papers (2024-03-13T05:48:58Z)
- SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models [71.78800549517298]
Continual learning (CL) ability is vital for deploying large language models (LLMs) in the dynamic world.
Existing methods devise a learning module to acquire task-specific knowledge with parameter-efficient tuning (PET) blocks, and a selection module to pick out the corresponding block for the testing input.
We propose a novel Shared Attention Framework (SAPT) to align the PET learning and selection via the Shared Attentive Learning & Selection module.
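As a hedged illustration of attention-aligned selection over a pool of PET blocks (low-rank adapters here), the sketch below shares one set of keys between learning and selection; it is a generic construction, not the SAPT architecture.
```python
# Generic sketch: shared keys drive soft selection over a pool of
# parameter-efficient tuning (PET) blocks. Illustrative only; not SAPT.
import torch
import torch.nn as nn

class AttentivePETPool(nn.Module):
    def __init__(self, n_tasks: int, dim: int):
        super().__init__()
        # One lightweight adapter (PET block) per task.
        self.adapters = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim // 8), nn.ReLU(), nn.Linear(dim // 8, dim))
            for _ in range(n_tasks)
        )
        # Keys shared by both PET learning and selection.
        self.keys = nn.Parameter(torch.randn(n_tasks, dim) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: (B, D) representation -> adapted (B, D)."""
        weights = torch.softmax(x @ self.keys.t(), dim=-1)         # (B, T)
        outs = torch.stack([a(x) for a in self.adapters], dim=1)   # (B, T, D)
        return x + (weights.unsqueeze(-1) * outs).sum(dim=1)

pool = AttentivePETPool(n_tasks=4, dim=128)
print(pool(torch.randn(2, 128)).shape)  # torch.Size([2, 128])
```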
arXiv Detail & Related papers (2024-01-16T11:45:03Z)
- DiGeo: Discriminative Geometry-Aware Learning for Generalized Few-Shot Object Detection [39.937724871284665]
Generalized few-shot object detection aims to achieve precise detection on both base classes with abundant annotations and novel classes with limited training data.
Existing approaches enhance few-shot generalization at the expense of base-class performance.
We propose a new training framework, DiGeo, to learn Geometry-aware features of inter-class separation and intra-class compactness.
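The separation/compactness objective can be made concrete with a generic pair of losses: pull features toward their class centers and push normalized centers apart by a margin. This is an assumed formulation for illustration, not DiGeo's actual objective.
```python
# Generic losses for intra-class compactness and inter-class separation
# (assumed margin formulation, not DiGeo's actual objective).
import torch
import torch.nn.functional as F

def compactness_loss(feats, labels, centers):
    """Pull each feature toward its own class center. feats: (B, D)."""
    return F.mse_loss(feats, centers[labels])

def separation_loss(centers, margin: float = 0.5):
    """Penalize pairs of class centers closer than `margin` in cosine space."""
    c = F.normalize(centers, dim=-1)
    sim = c @ c.t()                                               # (K, K)
    off_diag = ~torch.eye(len(c), dtype=torch.bool, device=c.device)
    return F.relu(sim[off_diag] - (1.0 - margin)).mean()

feats, labels = torch.randn(32, 64), torch.randint(0, 10, (32,))
centers = torch.randn(10, 64, requires_grad=True)
loss = compactness_loss(feats, labels, centers) + separation_loss(centers)
print(loss.item())
```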
arXiv Detail & Related papers (2023-03-16T22:37:09Z)
- Mitigating Forgetting in Online Continual Learning via Contrasting Semantically Distinct Augmentations [22.289830907729705]
Online continual learning (OCL) aims to enable model learning from a non-stationary data stream to continuously acquire new knowledge as well as retain the learnt one.
The main challenge comes from the "catastrophic forgetting" issue -- the inability to retain previously learnt knowledge while acquiring new knowledge.
arXiv Detail & Related papers (2022-11-10T05:29:43Z)
- Learning What Not to Segment: A New Perspective on Few-Shot Segmentation [63.910211095033596]
Recently, few-shot segmentation (FSS) has been extensively developed.
This paper proposes a fresh and straightforward insight to alleviate the problem.
In light of the unique nature of the proposed approach, we also extend it to a more realistic but challenging setting.
arXiv Detail & Related papers (2022-03-15T03:08:27Z)
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updated with new class data, they suffer from catastrophic forgetting: the model cannot clearly discern old-class data from new-class data.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)
- Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distribution.
First, we present a Class Activation Map Calibration (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
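As one common instance of a normalized classifier, the cosine classifier below L2-normalizes both features and class weights so head and tail classes compete with equal weight norms; this is an illustrative sketch of the idea, not the paper's exact module.
```python
# Sketch of a normalized ("cosine") classifier, a common way to de-bias
# classifier weight norms under long-tailed label distributions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedClassifier(nn.Module):
    def __init__(self, dim: int, n_classes: int, scale: float = 16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, dim) * 0.02)
        self.scale = scale  # temperature on the cosine logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize features and weights so every class has unit norm.
        return self.scale * F.linear(F.normalize(x, dim=-1),
                                     F.normalize(self.weight, dim=-1))

clf = NormalizedClassifier(dim=128, n_classes=1000)
print(clf(torch.randn(4, 128)).shape)  # torch.Size([4, 1000])
```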
arXiv Detail & Related papers (2021-08-29T05:45:03Z)
- Fine-grained Angular Contrastive Learning with Coarse Labels [72.80126601230447]
We introduce a novel 'Angular normalization' module that makes it possible to effectively combine supervised and self-supervised contrastive pre-training.
This work will help to pave the way for future research on this new, challenging, and very practical topic of C2FS classification.
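A hedged sketch of combining supervised and self-supervised contrastive terms on L2-normalized ('angular') features follows; the weighting and loss forms are generic assumptions, not the paper's Angular normalization module.
```python
# Generic combination of supervised and self-supervised contrastive
# losses on L2-normalized features (assumed formulation, not the paper's).
import torch
import torch.nn.functional as F

def combined_contrastive(z1, z2, labels, tau=0.1, alpha=0.5):
    """z1, z2: (B, D) two views of a batch; labels: (B,) coarse labels."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                        # (B, B) cosine logits
    # Self-supervised term: each sample should match its own other view.
    inst = F.cross_entropy(logits, torch.arange(len(z1)))
    # Supervised term: all same-label samples count as positives.
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    logp = F.log_softmax(logits, dim=1)
    sup = -(logp * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()
    return alpha * sup + (1 - alpha) * inst

loss = combined_contrastive(torch.randn(8, 64), torch.randn(8, 64),
                            torch.randint(0, 3, (8,)))
print(loss.item())
```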
arXiv Detail & Related papers (2020-12-07T08:09:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.