Powering Finetuning in Few-shot Learning: Domain-Agnostic Feature
Adaptation with Rectified Class Prototypes
- URL: http://arxiv.org/abs/2204.03749v1
- Date: Thu, 7 Apr 2022 21:29:12 GMT
- Title: Powering Finetuning in Few-shot Learning: Domain-Agnostic Feature
Adaptation with Rectified Class Prototypes
- Authors: Ran Tao, Han Zhang, Yutong Zheng, Marios Savvides
- Abstract summary: Finetuning is designed to focus on reducing biases in novel-class feature distributions.
By powering finetuning with DCM and SS, we achieve state-of-the-art results on Meta-Dataset.
- Score: 32.622613524622075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent works, a deep network trained on the meta-training set
serves as a strong baseline in few-shot learning. In this paper, we go a step
further and refine novel-class features by finetuning the trained network. The
finetuning is designed to reduce biases in novel-class feature distributions,
which we define in two aspects: class-agnostic and class-specific biases.
Class-agnostic bias is the distribution shift introduced by domain difference,
which we propose a Distribution Calibration Module (DCM) to reduce. DCM has
the desirable properties of eliminating domain difference and enabling fast
feature adaptation during optimization.
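No code accompanies this page; the following is a minimal sketch of what
distribution calibration in this spirit can look like, assuming a simple
centering-and-normalization recipe. The function name `calibrate_features`
and the exact recipe are hypothetical, not the paper's DCM.

```python
import numpy as np

def calibrate_features(feats, eps=1e-6):
    """Hypothetical distribution calibration (not the paper's exact DCM).

    feats: (n, d) array of novel-class features from the support set.
    Removes the estimated domain-level offset by centering on the
    support-set mean, then L2-normalizes each feature.
    """
    mu = feats.mean(axis=0, keepdims=True)   # domain-level mean estimate
    centered = feats - mu                    # reduce class-agnostic bias
    norms = np.linalg.norm(centered, axis=1, keepdims=True)
    return centered / (norms + eps)
```

Centering with a statistic estimated from the support set itself is what
makes such a module domain-agnostic: no source-domain statistics are needed
at adaptation time.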
Class-specific bias is the biased estimate of a class distribution made from
only a few samples per novel class, which we propose Selected Sampling (SS) to
reduce. Without inferring the actual class distribution, SS runs sampling from
proposal distributions centered around the support-set samples. By powering
finetuning with DCM and SS, we achieve state-of-the-art results on
Meta-Dataset, with consistent performance boosts over its ten datasets from
different domains. We believe this simple yet effective method demonstrates
its potential for practical few-shot applications.
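Likewise, here is a minimal sketch of Selected-Sampling-style prototype
rectification, assuming Gaussian proposal distributions around each support
sample and a confidence-based selection rule; `selected_sampling`, `scale`,
and `threshold` are hypothetical names and choices, not the paper's exact
procedure.

```python
import numpy as np

def selected_sampling(support, classify, label, n_draws=100,
                      scale=0.1, threshold=0.9):
    """Hypothetical Selected-Sampling-style prototype rectification.

    support : (k, d) array of support features for one novel class.
    classify: callable mapping a (d,) feature to class probabilities.
    label   : index of this class in the classifier's output.
    """
    accepted = [x for x in support]  # always keep the real support samples
    for x in support:
        # Gaussian proposal distribution centered on the support sample.
        candidates = x + scale * np.random.randn(n_draws, x.shape[0])
        for c in candidates:
            # Keep only samples the classifier confidently assigns to the
            # intended class (a simple stand-in selection rule).
            if classify(c)[label] >= threshold:
                accepted.append(c)
    # Rectified prototype: mean over support plus accepted samples.
    return np.mean(accepted, axis=0)
```

Drawing proposals only around real support samples sidesteps estimating the
full class distribution from a handful of examples, which is exactly the
class-specific bias the abstract describes.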
Related papers
- Step-wise Distribution Alignment Guided Style Prompt Tuning for Source-free Cross-domain Few-shot Learning [53.60934432718044]
Cross-domain few-shot learning methods face challenges with large-scale pre-trained models due to inaccessible source data and training strategies.
This paper introduces Step-wise Distribution Alignment Guided Style Prompt Tuning (StepSPT), which implicitly narrows domain gaps through prediction distribution optimization.
arXiv Detail & Related papers (2024-11-15T09:34:07Z)
- Enhanced Online Test-time Adaptation with Feature-Weight Cosine Alignment [7.991720491452191]
Online Test-Time Adaptation (OTTA) has emerged as an effective strategy to handle distributional shifts.
This paper introduces a novel cosine alignment optimization approach with a dual-objective loss function.
Our method outperforms state-of-the-art techniques and sets a new benchmark in multiple datasets.
arXiv Detail & Related papers (2024-05-12T05:57:37Z)
- Rethinking Few-shot 3D Point Cloud Semantic Segmentation [62.80639841429669]
This paper revisits few-shot 3D point cloud semantic segmentation (FS-PCS).
We focus on two significant issues in the state-of-the-art: foreground leakage and sparse point distribution.
To address these issues, we introduce a standardized FS-PCS setting, upon which a new benchmark is built.
arXiv Detail & Related papers (2024-03-01T15:14:47Z)
- GRSDet: Learning to Generate Local Reverse Samples for Few-shot Object Detection [15.998148904793426]
Few-shot object detection (FSOD) aims to detect objects using only a few training samples from novel classes.
Most existing methods adopt a transfer-learning strategy to construct the novel-class distribution.
We propose generating local reverse samples (LRSamples) in Prototype Reference Frames to adaptively adjust the center position and boundary range of the novel class distribution.
arXiv Detail & Related papers (2023-12-27T13:36:29Z)
- ActiveDC: Distribution Calibration for Active Finetuning [36.64444238742072]
We propose a new method, ActiveDC, for active finetuning tasks.
We calibrate the distribution of the selected samples by exploiting implicit category information in the unlabeled pool.
The results indicate that ActiveDC consistently outperforms the baseline performance in all image classification tasks.
arXiv Detail & Related papers (2023-11-13T14:35:18Z)
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose BACG, a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory-bank-based variant, Fast-BACG, which greatly shortens the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- P3DC-Shot: Prior-Driven Discrete Data Calibration for Nearest-Neighbor Few-Shot Classification [6.61282019235397]
P3DC-Shot is an improved nearest-neighbor based few-shot classification method empowered by prior-driven data calibration.
We treat the prototypes representing each base class as priors and calibrate each support sample based on its similarity to different base prototypes (a generic calibration sketch in this spirit appears after this list).
arXiv Detail & Related papers (2023-01-02T16:26:16Z)
- Proposal Distribution Calibration for Few-Shot Object Detection [65.19808035019031]
In few-shot object detection (FSOD), the two-step training paradigm is widely adopted to mitigate the severe sample imbalance.
Unfortunately, the extreme data scarcity aggravates the proposal distribution bias, hindering the RoI head from evolving toward novel classes.
We introduce a simple yet effective proposal distribution calibration (PDC) approach to neatly enhance the localization and classification abilities of the RoI head.
arXiv Detail & Related papers (2022-12-15T05:09:11Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) handles distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose ILA-DA, an instance-affinity-based criterion for source-to-target transfer during adaptation.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
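Several papers in this list (P3DC-Shot, ActiveDC, Proposal Distribution
Calibration) calibrate few-sample estimates using priors. As a generic,
hypothetical illustration of the prior-driven calibration described in the
P3DC-Shot entry above, the sketch below pulls a support feature toward a
similarity-weighted combination of base-class prototypes; `tau` and `lam`
are assumed hyperparameters, not values from any of these papers.

```python
import numpy as np

def prior_calibrate(x, base_prototypes, tau=0.1, lam=0.5):
    """Hypothetical prior-driven calibration of a support feature.

    x               : (d,) support feature to calibrate.
    base_prototypes : (B, d) prototypes of base classes, used as priors.
    tau             : softmax temperature over similarities (assumed).
    lam             : mixing weight between sample and prior (assumed).
    """
    # Cosine similarity between the support feature and each base prototype.
    x_n = x / (np.linalg.norm(x) + 1e-6)
    p_n = base_prototypes / (np.linalg.norm(base_prototypes, axis=1,
                                            keepdims=True) + 1e-6)
    sims = p_n @ x_n
    # Similarity-weighted combination of base prototypes acts as the prior.
    weights = np.exp(sims / tau)
    weights /= weights.sum()
    prior = weights @ base_prototypes
    # Pull the support feature toward its prior.
    return lam * x + (1.0 - lam) * prior
```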
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.