Alleviating the Sample Selection Bias in Few-shot Learning by Removing Projection to the Centroid
- URL: http://arxiv.org/abs/2210.16834v1
- Date: Sun, 30 Oct 2022 13:03:13 GMT
- Title: Alleviating the Sample Selection Bias in Few-shot Learning by Removing Projection to the Centroid
- Authors: Jing Xu, Xu Luo, Xinglin Pan, Wenjie Pei, Yanan Li, Zenglin Xu
- Abstract summary: Task Centroid Projection Removing (TCPR) is applied directly to all image features in a given task.
Our method effectively prevents features from being too close to the task centroid.
It can reliably improve classification accuracy across various feature extractors, training algorithms and datasets.
- Score: 22.918659185060523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot learning (FSL) aims to generalize vision models to
unseen tasks without sufficient annotations. Despite the emergence of a number
of few-shot learning methods, the sample selection bias problem, i.e., the
sensitivity to the limited amount of support data, has not been well
understood. In this paper, we find that this problem usually occurs when the
positions of support samples are in the vicinity of task centroid -- the mean
of all class centroids in the task. This motivates us to propose an extremely
simple feature transformation to alleviate this problem, dubbed Task Centroid
Projection Removing (TCPR). TCPR is applied directly to all image features in a
given task, aiming at removing the dimension of features along the direction of
the task centroid. While the exact task centroid cannot be accurately obtained
from limited data, we estimate it using base features that are each similar to
one of the support features. Our method effectively prevents features from
being too close to the task centroid. Extensive experiments over ten datasets
from different domains show that TCPR can reliably improve classification
accuracy across various feature extractors, training algorithms and datasets.
The code has been made available at https://github.com/KikimorMay/FSL-TCBR.
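The transformation described above can be sketched in a few lines: given an estimate of the task centroid, every feature's component along the centroid direction is subtracted. The sketch below is illustrative only; for simplicity it estimates the centroid as the mean of the support class centroids, whereas the paper estimates it from base features similar to the support features (see the repository linked above for the authors' implementation).

```python
import numpy as np

def remove_centroid_projection(features, centroid):
    """Remove each feature's component along the task-centroid direction.

    features: (n, d) array of image features for the task.
    centroid: (d,) estimate of the task centroid.
    """
    u = centroid / np.linalg.norm(centroid)  # unit vector along the centroid
    # Subtract the projection of every feature onto the centroid direction.
    return features - np.outer(features @ u, u)

# Toy usage with random features: 5 classes, 2 shots each, 5-dim features.
rng = np.random.default_rng(0)
support = rng.normal(size=(10, 5))
labels = np.repeat(np.arange(5), 2)
class_centroids = np.stack([support[labels == c].mean(0) for c in range(5)])
task_centroid = class_centroids.mean(0)  # simplified centroid estimate
transformed = remove_centroid_projection(support, task_centroid)
# After the transform, all features are orthogonal to the centroid direction.
assert np.allclose(transformed @ task_centroid, 0.0, atol=1e-8)
```

Since the removed component is a rank-one projection, the transform keeps all other feature dimensions intact while guaranteeing no feature lies near the centroid direction.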
Related papers
- Dense Center-Direction Regression for Object Counting and Localization with Point Supervision [1.9526430269580954]
We propose a novel approach termed CeDiRNet for point-supervised learning.
It uses a dense regression of directions pointing towards the nearest object centers.
We show that it outperforms the existing state-of-the-art methods.
arXiv Detail & Related papers (2024-08-26T17:49:27Z)
- Weakly-Supervised Cross-Domain Segmentation of Electron Microscopy with Sparse Point Annotation [1.124958340749622]
We introduce a multitask learning framework to leverage correlations among the counting, detection, and segmentation tasks.
We develop a cross-position cut-and-paste for label augmentation and an entropy-based pseudo-label selection.
The proposed model is capable of significantly outperforming UDA methods and produces comparable performance as the supervised counterpart.
arXiv Detail & Related papers (2024-03-31T12:22:23Z)
- Few and Fewer: Learning Better from Few Examples Using Fewer Base Classes [12.742052888217543]
Fine-tuning is often ineffective for few-shot learning, since the target dataset contains only a handful of examples.
This paper investigates whether better features for the target dataset can be obtained by training on fewer base classes.
To our knowledge, this is the first demonstration that fine-tuning on a subset of carefully selected base classes can significantly improve few-shot learning.
arXiv Detail & Related papers (2024-01-29T01:52:49Z)
- A Weighted K-Center Algorithm for Data Subset Selection [70.49696246526199]
Subset selection is a fundamental problem that can play a key role in identifying smaller portions of the training data.
We develop a novel factor 3-approximation algorithm to compute subsets based on the weighted sum of both k-center and uncertainty sampling objective functions.
arXiv Detail & Related papers (2023-12-17T04:41:07Z)
- Less is More: On the Feature Redundancy of Pretrained Models When Transferring to Few-shot Tasks [120.23328563831704]
Transferring a pretrained model to a downstream task can be as easy as conducting linear probing with target data.
We show that, for linear probing, the pretrained features can be extremely redundant when the downstream data is scarce.
arXiv Detail & Related papers (2023-10-05T19:00:49Z)
- DETA: Denoised Task Adaptation for Few-Shot Learning [135.96805271128645]
Test-time task adaptation in few-shot learning aims to adapt a pre-trained task-agnostic model to capture task-specific knowledge.
With only a handful of samples available, the adverse effect of either the image noise (a.k.a. X-noise) or the label noise (a.k.a. Y-noise) from support samples can be severely amplified.
We propose DEnoised Task Adaptation (DETA), a first unified image- and label-denoising framework that can be applied on top of existing task adaptation approaches.
arXiv Detail & Related papers (2023-03-11T05:23:20Z)
- Multi-task Bias-Variance Trade-off Through Functional Constraints [102.64082402388192]
Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that ties domain-specific solutions to a central function.
arXiv Detail & Related papers (2022-10-27T16:06:47Z)
- Centroids Matching: an efficient Continual Learning approach operating in the embedding space [15.705568893476947]
Catastrophic forgetting (CF) occurs when a neural network loses the information previously learned while training on a set of samples from a different distribution.
We propose a novel regularization method called Centroids Matching, that fights CF by operating in the feature space produced by the neural network.
arXiv Detail & Related papers (2022-08-03T13:17:16Z)
- Learning Stable Classifiers by Transferring Unstable Features [59.06169363181417]
We study transfer learning in the presence of spurious correlations.
We experimentally demonstrate that directly transferring the stable feature extractor learned on the source task may not eliminate these biases for the target task.
We hypothesize that the unstable features in the source task and those in the target task are directly related.
arXiv Detail & Related papers (2021-06-15T02:41:12Z)
- FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking [92.48078680697311]
Multi-object tracking (MOT) is an important problem in computer vision.
We present a simple yet effective approach termed as FairMOT based on the anchor-free object detection architecture CenterNet.
The approach achieves high accuracy for both detection and tracking.
arXiv Detail & Related papers (2020-04-04T08:18:00Z)
- Task-Adaptive Clustering for Semi-Supervised Few-Shot Classification [23.913195015484696]
Few-shot learning aims to handle previously unseen tasks using only a small amount of new training data.
In preparing (or meta-training) a few-shot learner, however, a massive amount of labeled data is necessary.
In this work, we propose a few-shot learner that can work well under the semi-supervised setting where a large portion of training data is unlabeled.
arXiv Detail & Related papers (2020-03-18T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.