The Devil is in the Wrongly-classified Samples: Towards Unified Open-set
Recognition
- URL: http://arxiv.org/abs/2302.04002v1
- Date: Wed, 8 Feb 2023 11:34:04 GMT
- Title: The Devil is in the Wrongly-classified Samples: Towards Unified Open-set
Recognition
- Authors: Jun Cen, Di Luan, Shiwei Zhang, Yixuan Pei, Yingya Zhang, Deli Zhao,
Shaojie Shen, Qifeng Chen
- Abstract summary: Open-set Recognition (OSR) aims to identify test samples whose classes are not seen during the training process.
Recently, Unified Open-set Recognition (UOSR) has been proposed to reject not only unknown samples but also known but wrongly classified samples.
- Score: 61.28722817272917
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Open-set Recognition (OSR) aims to identify test samples whose classes are
not seen during the training process. Recently, Unified Open-set Recognition
(UOSR) has been proposed to reject not only unknown samples but also known but
wrongly classified samples, which tends to be more practical in real-world
applications. The UOSR draws little attention since it is proposed, but we find
sometimes it is even more practical than OSR in the real world applications, as
evaluation results of known but wrongly classified samples are also wrong like
unknown samples. In this paper, we deeply analyze the UOSR task under different
training and evaluation settings to shed light on this promising research
direction. For this purpose, we first evaluate the UOSR performance of several OSR methods and report a significant finding: for the same method, the UOSR performance consistently surpasses the OSR performance by a large margin. We show that the reason lies in the known but wrongly classified samples, whose uncertainty distribution is extremely close to that of unknown samples rather than to that of known and correctly classified samples. Second, we analyze
how the two training settings of OSR (i.e., pre-training and outlier exposure) influence UOSR. We find that although both are beneficial for distinguishing known and correctly classified samples from unknown samples, pre-training also helps to identify known but wrongly classified samples, while outlier exposure does not. In addition to the different training settings, we also formulate a new evaluation setting for UOSR, called few-shot UOSR, in which only one or five samples per unknown class are available during evaluation to help identify unknown samples. We propose FS-KNNS for few-shot UOSR, which achieves state-of-the-art performance under all settings.
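As a concrete illustration of the unified evaluation protocol described in the abstract, the sketch below (an illustration, not the authors' released code) scores each test sample with a simple uncertainty measure and computes an AUROC in which known-but-wrongly-classified and unknown samples together form the rejection class. The uncertainty score used here (one minus the maximum softmax probability) is one common choice; the paper compares several scoring functions.

```python
# Minimal UOSR evaluation sketch (illustrative, not the authors' implementation).
# Known-but-wrongly-classified and unknown samples are pooled into a single
# "should be rejected" group; the AUROC measures how well an uncertainty score
# separates this group from known-and-correctly-classified samples.
import numpy as np
from sklearn.metrics import roc_auc_score


def uosr_auroc(probs: np.ndarray, preds: np.ndarray,
               labels: np.ndarray, is_unknown: np.ndarray) -> float:
    """probs: (N, C) softmax outputs; preds/labels: (N,) class indices;
    is_unknown: (N,) bool, True if the sample's class was unseen in training.
    Labels of unknown samples are ignored."""
    uncertainty = 1.0 - probs.max(axis=1)                 # higher = less confident
    known_and_correct = (~is_unknown) & (preds == labels)
    should_reject = ~known_and_correct                    # wrongly classified OR unknown
    return roc_auc_score(should_reject.astype(int), uncertainty)
```

The only difference from the standard OSR AUROC is that known-but-wrongly-classified samples move from the accept group to the reject group, which is why the same uncertainty score can produce noticeably different numbers under the two protocols.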
Related papers
- Open Set Recognition for Random Forest [4.266270583680947]
In real-world classification tasks, it is difficult to collect training examples that exhaust all possible classes.
We propose a novel approach to enabling open-set recognition capability for random forest.
The proposed method is validated on both synthetic and real-world datasets.
arXiv Detail & Related papers (2024-08-01T04:21:14Z)
- Exploring Diverse Representations for Open Set Recognition [51.39557024591446]
Open set recognition (OSR) requires the model to classify samples from closed-set classes while rejecting unknown samples at test time.
Currently, generative models often perform better than discriminative models in OSR.
We propose a new model, namely Multi-Expert Diverse Attention Fusion (MEDAF), that learns diverse representations in a discriminative way.
arXiv Detail & Related papers (2024-01-12T11:40:22Z)
- Entropic Open-set Active Learning [30.91182106918793]
Active Learning (AL) aims to enhance the performance of deep models by selecting the most informative samples for annotation from a pool of unlabeled data.
Despite impressive performance in closed-set settings, most AL methods fail in real-world scenarios where the unlabeled data contains unknown categories.
We propose an Entropic Open-set AL framework which leverages both known and unknown distributions effectively to select informative samples during AL rounds.
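The summary above does not spell out the selection rule, so the following is only a generic sketch of an entropy-based acquisition that discounts samples a detector flags as likely unknown; the actual Entropic Open-set AL criterion should be taken from the paper. The function name and the weighting term are assumptions for illustration.

```python
# Generic open-set-aware active learning sketch (an assumption, not the paper's
# algorithm): rank unlabeled samples by predictive entropy, but penalize samples
# that an unknown-class detector scores highly, so the annotation budget is not
# spent on samples outside the known label space.
import numpy as np


def select_for_annotation(probs: np.ndarray, unknown_score: np.ndarray,
                          budget: int, alpha: float = 1.0) -> np.ndarray:
    """probs: (N, C) softmax over known classes; unknown_score: (N,) in [0, 1]."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)   # predictive uncertainty
    acquisition = entropy - alpha * unknown_score            # informative yet likely known
    return np.argsort(-acquisition)[:budget]                 # top-`budget` sample indices
```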
arXiv Detail & Related papers (2023-12-21T18:47:12Z)
- OpenAUC: Towards AUC-Oriented Open-Set Recognition [151.5072746015253]
Traditional machine learning follows a close-set assumption that the training and test set share the same label space.
Open-Set Recognition (OSR) aims to make correct predictions on both close-set samples and open-set samples.
To fix these issues, we propose a novel metric named OpenAUC.
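The summary does not define the metric, so the sketch below shows one plausible AUC-style formulation in this spirit: a (close-set, open-set) pair counts only if the close-set sample is classified correctly and ranked as more "known" than the open-set sample. Refer to the paper for the exact OpenAUC definition; this code is an illustration only.

```python
# Illustrative AUC-style open-set metric (one plausible reading, not necessarily
# the exact OpenAUC definition): a pair (close-set sample, open-set sample)
# scores 1 only when the close-set sample is classified correctly AND its
# rejection score is lower than the open-set sample's.
import numpy as np


def pairwise_open_auc(close_correct: np.ndarray, close_reject: np.ndarray,
                      open_reject: np.ndarray) -> float:
    """close_correct: (Nc,) bool; close_reject: (Nc,); open_reject: (No,).
    Rejection scores: higher = more likely to be open-set."""
    ranked_lower = close_reject[:, None] < open_reject[None, :]   # (Nc, No) pairs
    return float((close_correct[:, None] & ranked_lower).mean())
```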
arXiv Detail & Related papers (2022-10-22T08:54:15Z)
- Reducing Training Sample Memorization in GANs by Training with Memorization Rejection [80.0916819303573]
We propose memorization rejection, a training scheme that rejects generated samples that are near-duplicates of training samples during training.
Our scheme is simple, generic and can be directly applied to any GAN architecture.
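As a rough sketch of the rejection step described above: generated samples whose nearest training neighbor is closer than a threshold are dropped from the batch before the GAN update. The distance measure, feature space, and threshold here are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of near-duplicate rejection during GAN training (illustrative only):
# drop generated samples whose nearest neighbor in the training set is closer
# than a chosen threshold, so near-memorized samples do not drive the update.
import torch


def reject_memorized(fake: torch.Tensor, train_feats: torch.Tensor,
                     threshold: float) -> torch.Tensor:
    """fake: (B, D) generated features; train_feats: (M, D) training features."""
    dists = torch.cdist(fake, train_feats)      # (B, M) pairwise Euclidean distances
    nearest = dists.min(dim=1).values           # distance to the closest training sample
    return fake[nearest > threshold]            # keep only non-near-duplicate samples
```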
arXiv Detail & Related papers (2022-10-21T20:17:50Z)
- Large-Scale Open-Set Classification Protocols for ImageNet [0.0]
Open-Set Classification (OSC) intends to adapt closed-set classification models to real-world scenarios.
We propose three open-set protocols that provide rich datasets of natural images with different levels of similarity between known and unknown classes.
We propose a new validation metric that can be employed to assess whether the training of deep learning models addresses both the classification of known samples and the rejection of unknown samples.
arXiv Detail & Related papers (2022-10-13T07:01:34Z)
- Multi-Attribute Open Set Recognition [7.012240324005977]
We introduce a novel problem setup that generalizes conventional OSR to a multi-attribute setting.
We show that baseline approaches for this setting are vulnerable to shortcuts when spurious correlations exist in the training dataset.
We provide empirical evidence showing that this behavior is consistent across different baselines on both synthetic and real-world datasets.
arXiv Detail & Related papers (2022-08-14T09:04:52Z)
- Reconstruction guided Meta-learning for Few Shot Open Set Recognition [31.49168444631114]
We propose the Reconstructing Exemplar-based Few-shot Open-set ClaSsifier (ReFOCS).
By using a novel exemplar reconstruction-based meta-learning strategy, ReFOCS streamlines few-shot open-set recognition (FSOSR).
We show ReFOCS to outperform multiple state-of-the-art methods.
arXiv Detail & Related papers (2021-07-31T23:23:35Z)
- Jo-SRC: A Contrastive Approach for Combating Noisy Labels [58.867237220886885]
We propose a noise-robust approach named Jo-SRC (Joint Sample Selection and Model Regularization based on Consistency).
Specifically, we train the network in a contrastive learning manner. Predictions from two different views of each sample are used to estimate its "likelihood" of being clean or out-of-distribution.
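A rough sketch of the two-view selection idea mentioned above follows; the divergence measures and thresholds are illustrative assumptions, not the exact Jo-SRC criteria.

```python
# Rough sketch of two-view sample partitioning (illustrative assumptions, not
# the exact Jo-SRC rules): low divergence between the averaged prediction and
# the given label suggests a clean label; high disagreement between the two
# augmented views suggests an out-of-distribution sample.
import torch
import torch.nn.functional as F


def js_divergence(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Jensen-Shannon divergence along the last dimension."""
    m = 0.5 * (p + q)
    def kl(a, b):
        return (a * (a.clamp_min(1e-12).log() - b.clamp_min(1e-12).log())).sum(-1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


def partition_batch(logits_v1: torch.Tensor, logits_v2: torch.Tensor,
                    labels_onehot: torch.Tensor,
                    tau_clean: float = 0.3, tau_ood: float = 0.3):
    """logits_v*: (B, C) outputs for two augmented views; labels_onehot: (B, C) float."""
    p1, p2 = F.softmax(logits_v1, dim=-1), F.softmax(logits_v2, dim=-1)
    clean_score = js_divergence(0.5 * (p1 + p2), labels_onehot)  # low  -> label likely clean
    ood_score = js_divergence(p1, p2)                            # high -> views disagree
    is_clean = clean_score < tau_clean
    is_ood = (~is_clean) & (ood_score > tau_ood)
    return is_clean, is_ood
```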
arXiv Detail & Related papers (2021-03-24T07:26:07Z)
- Open Set Recognition with Conditional Probabilistic Generative Models [51.40872765917125]
We propose Conditional Probabilistic Generative Models (CPGM) for open set recognition.
CPGM can not only detect unknown samples but also classify known classes by forcing different latent features to approximate conditional Gaussian distributions.
Experimental results on multiple benchmark datasets reveal that the proposed method significantly outperforms the baselines.
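As an illustration of the class-conditional Gaussian idea in the summary above: fit one Gaussian per known class over latent features, classify a test feature by the most likely class, and reject it as unknown when even the best class likelihood is low. The diagonal covariance and fixed threshold are simplifying assumptions, not CPGM's actual generative model.

```python
# Illustrative class-conditional Gaussian scoring in latent space (simplified,
# not CPGM's actual model): one diagonal-covariance Gaussian per known class;
# a test feature is rejected as unknown when its best class log-likelihood
# falls below a threshold, otherwise assigned to the most likely class.
import numpy as np
from scipy.stats import multivariate_normal


def fit_class_gaussians(feats: np.ndarray, labels: np.ndarray) -> dict:
    """feats: (N, D) latent features of training samples; labels: (N,) class indices."""
    return {c: multivariate_normal(mean=feats[labels == c].mean(axis=0),
                                   cov=np.diag(feats[labels == c].var(axis=0) + 1e-6))
            for c in np.unique(labels)}


def predict_open_set(gaussians: dict, feat: np.ndarray, log_thresh: float):
    """Return the most likely known class, or 'unknown' if its likelihood is too low."""
    scores = {c: g.logpdf(feat) for c, g in gaussians.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > log_thresh else "unknown"
```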
arXiv Detail & Related papers (2020-08-12T06:23:49Z)