Few-shot Open-set Recognition by Transformation Consistency
- URL: http://arxiv.org/abs/2103.01537v1
- Date: Tue, 2 Mar 2021 07:37:17 GMT
- Title: Few-shot Open-set Recognition by Transformation Consistency
- Authors: Minki Jeong, Seokeon Choi, Changick Kim
- Abstract summary: We propose a novel unknown class sample detector, named SnaTCHer, that does not require pseudo-unseen samples.
Based on transformation consistency, our method measures the difference between the transformed prototypes and a modified prototype set.
Our method recasts the unseen class distribution estimation problem as a relative feature transformation problem, independent of pseudo-unseen class samples.
- Score: 30.757030359718048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we attack a few-shot open-set recognition (FSOSR) problem,
which is a combination of few-shot learning (FSL) and open-set recognition
(OSR). It aims to quickly adapt a model to a given small set of labeled samples
while rejecting unseen class samples. Since OSR requires rich data and FSL
considers closed-set classification, existing OSR and FSL methods perform poorly
on FSOSR problems. The previous FSOSR method follows a pseudo-unseen class
sample-based approach, which collects pseudo-unseen samples from other datasets
or synthesizes samples to model unseen class representations. However, this
approach depends heavily on the composition of the pseudo samples. In this
paper, we propose a novel unknown class sample
detector, named SnaTCHer, that does not require pseudo-unseen samples. Based on
transformation consistency, our method measures the difference between the
transformed prototypes and a modified prototype set, formed by replacing the
prototype of the predicted class with the query feature.
SnaTCHer rejects samples with large differences to the transformed prototypes.
Our method recasts the unseen class distribution estimation problem as a
relative feature transformation problem, independent of pseudo-unseen class
samples. We investigate SnaTCHer with various prototype transformation methods
and observe that it consistently improves unseen class sample detection
performance without reducing closed-set classification performance.
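The rejection rule described in the abstract can be sketched in a few lines. The set transformation below (centering plus per-vector normalization) is a hypothetical stand-in for illustration only; SnaTCHer pairs this rule with learned prototype transformation methods, and the rejection threshold is an assumed free parameter to be calibrated.

```python
import numpy as np

def transform(proto_set):
    # Stand-in set transformation: center the set and L2-normalize each
    # vector. Any transformation with the consistency property (similar
    # sets map to similar outputs) could be plugged in here.
    centered = proto_set - proto_set.mean(axis=0, keepdims=True)
    norms = np.linalg.norm(centered, axis=1, keepdims=True) + 1e-8
    return centered / norms

def snatcher_distance(prototypes, query):
    # Predict the nearest class, then build the modified set by replacing
    # that class's prototype with the query feature.
    dists = np.linalg.norm(prototypes - query, axis=1)
    pred = int(np.argmin(dists))
    modified = prototypes.copy()
    modified[pred] = query
    # Difference between the transformed original and modified sets.
    d = float(np.linalg.norm(transform(prototypes) - transform(modified)))
    return pred, d

def reject_unknown(prototypes, query, threshold):
    # Large differences indicate the query does not fit its predicted
    # class, so the sample is rejected as an unseen class.
    pred, d = snatcher_distance(prototypes, query)
    return pred, d > threshold
```

A query close to its predicted prototype barely perturbs the transformed set and is accepted, while a query from an unseen class perturbs it strongly and is rejected.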
Related papers
- Few-Shot Class Incremental Learning via Robust Transformer Approach [16.590193619691416]
Few-Shot Class-Incremental Learning extends the Class Incremental Learning problem to the setting of data scarcity.
This problem remains open because recent works are built upon convolutional neural networks, which perform sub-optimally here.
Our paper presents a Robust Transformer Approach built upon the Compact Convolution Transformer.
arXiv Detail & Related papers (2024-05-08T03:35:52Z)
- Decoupled Prototype Learning for Reliable Test-Time Adaptation [50.779896759106784]
Test-time adaptation (TTA) is a task that continually adapts a pre-trained source model to the target domain during inference.
One popular approach involves fine-tuning the model with a cross-entropy loss according to estimated pseudo-labels.
This study reveals that minimizing the classification error of each sample causes the cross-entropy loss's vulnerability to label noise.
We propose a novel Decoupled Prototype Learning (DPL) method that features prototype-centric loss computation.
arXiv Detail & Related papers (2024-01-15T03:33:39Z)
- Efficient Failure Pattern Identification of Predictive Algorithms [15.02620042972929]
We propose a human-machine collaborative framework that consists of a team of human annotators and a sequential recommendation algorithm.
The results empirically demonstrate the competitive performance of our framework on multiple datasets at various signal-to-noise ratios.
arXiv Detail & Related papers (2023-06-01T14:54:42Z)
- Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We estimate intra-class variations for each class and generate adaptive synthetic samples to support hard sample mining.
Our method outperforms state-of-the-art methods on retrieval performance by 3%-6%.
arXiv Detail & Related papers (2022-11-29T14:52:38Z)
- Large-Scale Open-Set Classification Protocols for ImageNet [0.0]
Open-Set Classification (OSC) intends to adapt closed-set classification models to real-world scenarios.
We propose three open-set protocols that provide rich datasets of natural images with different levels of similarity between known and unknown classes.
We propose a new validation metric that can be employed to assess whether the training of deep learning models addresses both the classification of known samples and the rejection of unknown samples.
arXiv Detail & Related papers (2022-10-13T07:01:34Z)
- Imbalanced Classification via a Tabular Translation GAN [4.864819846886142]
We present a model based on Generative Adversarial Networks which uses additional regularization losses to map majority samples to corresponding synthetic minority samples.
We show that the proposed method improves average precision when compared to alternative re-weighting and oversampling techniques.
arXiv Detail & Related papers (2022-04-19T06:02:53Z)
- CARMS: Categorical-Antithetic-REINFORCE Multi-Sample Gradient Estimator [60.799183326613395]
We propose an unbiased estimator for categorical random variables based on multiple mutually negatively correlated (jointly antithetic) samples.
CARMS combines REINFORCE with copula based sampling to avoid duplicate samples and reduce its variance, while keeping the estimator unbiased using importance sampling.
We evaluate CARMS on several benchmark datasets on a generative modeling task, as well as a structured output prediction task, and find it to outperform competing methods including a strong self-control baseline.
arXiv Detail & Related papers (2021-10-26T20:14:30Z)
- Training on Test Data with Bayesian Adaptation for Covariate Shift [96.3250517412545]
Deep neural networks often make inaccurate predictions with unreliable uncertainty estimates.
We derive a Bayesian model that provides a well-defined relationship between unlabeled inputs under distributional shift and model parameters.
We show that our method improves both accuracy and uncertainty estimation.
arXiv Detail & Related papers (2021-09-27T01:09:08Z)
- Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
arXiv Detail & Related papers (2021-03-01T21:14:33Z)
- Multi-Scale Positive Sample Refinement for Few-Shot Object Detection [61.60255654558682]
Few-shot object detection (FSOD) helps detectors adapt to unseen classes with few training instances.
We propose a Multi-scale Positive Sample Refinement (MPSR) approach to enrich object scales in FSOD.
MPSR generates multi-scale positive samples as object pyramids and refines the prediction at various scales.
arXiv Detail & Related papers (2020-07-18T09:48:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.