P3DC-Shot: Prior-Driven Discrete Data Calibration for Nearest-Neighbor
Few-Shot Classification
- URL: http://arxiv.org/abs/2301.00740v1
- Date: Mon, 2 Jan 2023 16:26:16 GMT
- Title: P3DC-Shot: Prior-Driven Discrete Data Calibration for Nearest-Neighbor
Few-Shot Classification
- Authors: Shuangmei Wang, Rui Ma, Tieru Wu, Yang Cao
- Abstract summary: P3DC-Shot is an improved nearest-neighbor based few-shot classification method empowered by prior-driven data calibration.
We treat the prototypes representing each base class as priors and calibrate each support data based on its similarity to different base prototypes.
- Score: 6.61282019235397
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nearest-Neighbor (NN) classification has been proven as a simple and
effective approach for few-shot learning. The query data can be classified
efficiently by finding the nearest support class based on features extracted by
pretrained deep models. However, NN-based methods are sensitive to the data
distribution and may produce false predictions if the samples in the support set
happen to lie around the distribution boundary of different classes. To solve
this issue, we present P3DC-Shot, an improved nearest-neighbor based few-shot
classification method empowered by prior-driven data calibration. Inspired by
the distribution calibration technique which utilizes the distribution or
statistics of the base classes to calibrate the data for few-shot tasks, we
propose a novel discrete data calibration operation which is more suitable for
NN-based few-shot classification. Specifically, we treat the prototypes
representing each base class as priors and calibrate each support data based on
its similarity to different base prototypes. Then, we perform NN classification
using these discretely calibrated support data. Results from extensive
experiments on various datasets show our efficient non-learning based method
can outperform or be at least comparable to SOTA methods that need additional
learning steps.
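The calibration described in the abstract can be sketched as follows. This is an illustrative re-implementation of the stated idea (treat base-class prototypes as priors, shift each support feature toward its most similar prototypes, then run NN classification), not the authors' exact formulation: the names and the `top_k`, `alpha`, and `tau` hyperparameters are assumptions.

```python
import numpy as np

def calibrate_support(support, base_prototypes, top_k=2, alpha=0.5, tau=10.0):
    """Shift each support feature toward its most similar base prototypes.

    Illustrative sketch of prior-driven discrete calibration: only the
    top_k most similar prototypes contribute (the "discrete" selection),
    weighted by a softmax over cosine similarities.
    """
    # Cosine similarity between support features and base prototypes.
    s = support / np.linalg.norm(support, axis=1, keepdims=True)
    p = base_prototypes / np.linalg.norm(base_prototypes, axis=1, keepdims=True)
    sim = s @ p.T  # shape (n_support, n_base)

    calibrated = np.empty_like(support)
    for i, x in enumerate(support):
        # Keep only the k most similar prototypes (discrete selection).
        idx = np.argsort(sim[i])[-top_k:]
        w = np.exp(tau * sim[i, idx])
        w /= w.sum()
        prior = w @ base_prototypes[idx]  # weighted prior direction
        calibrated[i] = alpha * x + (1 - alpha) * prior
    return calibrated

def nn_classify(query, support, support_labels):
    """Nearest-neighbor classification by cosine similarity to support data."""
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    s = support / np.linalg.norm(support, axis=1, keepdims=True)
    return support_labels[np.argmax(q @ s.T, axis=1)]
```

In a few-shot episode, one would calibrate the support features once and then classify all queries against the calibrated set; no gradient-based learning step is involved, matching the non-learning nature of the method.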
Related papers
- A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification [51.35500308126506]
Self-supervised learning (SSL) is a machine learning approach where the data itself provides supervision, eliminating the need for external labels.
We study how classification-based evaluation protocols for SSL correlate and how well they predict downstream performance on different dataset types.
arXiv Detail & Related papers (2024-07-16T23:17:36Z)
- Adapting Conformal Prediction to Distribution Shifts Without Labels [16.478151550456804]
Conformal prediction (CP) enables machine learning models to output prediction sets with guaranteed coverage rate.
Our goal is to improve the quality of CP-generated prediction sets using only unlabeled data from the test domain.
This is achieved by two new methods called ECP and EACP, that adjust the score function in CP according to the base model's uncertainty on the unlabeled test data.
arXiv Detail & Related papers (2024-06-03T15:16:02Z)
- Rethinking Few-shot 3D Point Cloud Semantic Segmentation [62.80639841429669]
This paper revisits few-shot 3D point cloud semantic segmentation (FS-PCS)
We focus on two significant issues in the state-of-the-art: foreground leakage and sparse point distribution.
To address these issues, we introduce a standardized FS-PCS setting, upon which a new benchmark is built.
arXiv Detail & Related papers (2024-03-01T15:14:47Z)
- Adaptive Distribution Calibration for Few-Shot Learning with Hierarchical Optimal Transport [78.9167477093745]
We propose a novel distribution calibration method by learning the adaptive weight matrix between novel samples and base classes.
Experimental results on standard benchmarks demonstrate that our proposed plug-and-play model outperforms competing approaches.
arXiv Detail & Related papers (2022-10-09T02:32:57Z)
- Lightweight Conditional Model Extrapolation for Streaming Data under Class-Prior Shift [27.806085423595334]
We introduce LIMES, a new method for learning with non-stationary streaming data.
We learn a single set of model parameters from which a specific classifier for any specific data distribution is derived.
Experiments on a set of exemplary tasks using Twitter data show that LIMES achieves higher accuracy than alternative approaches.
arXiv Detail & Related papers (2022-06-10T15:19:52Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address this challenge by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Transformers Can Do Bayesian Inference [56.99390658880008]
We present Prior-Data Fitted Networks (PFNs)
PFNs leverage in-context learning in large-scale machine learning techniques to approximate a large set of posteriors.
We demonstrate that PFNs can near-perfectly mimic Gaussian processes and also enable efficient Bayesian inference for intractable problems.
arXiv Detail & Related papers (2021-12-20T13:07:39Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Exploring the Uncertainty Properties of Neural Networks' Implicit Priors in the Infinite-Width Limit [47.324627920761685]
We use recent theoretical advances that characterize the function-space prior of an ensemble of infinitely-wide NNs as a Gaussian process.
This gives us a better understanding of the implicit prior NNs place on function space.
We also examine the calibration of previous approaches to classification with the NNGP.
arXiv Detail & Related papers (2020-10-14T18:41:54Z) - My Health Sensor, my Classifier: Adapting a Trained Classifier to
Unlabeled End-User Data [0.5091527753265949]
In this work, we present an approach for unsupervised domain adaptation (DA) with the constraint, that the labeled source data are not directly available.
Our solution iteratively labels only high-confidence sub-regions of the target data distribution, based on the classifier's belief.
The goal is to apply the proposed approach on DA for the task of sleep apnea detection and achieve personalization based on the needs of the patient.
arXiv Detail & Related papers (2020-09-22T20:27:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.