OpenAUC: Towards AUC-Oriented Open-Set Recognition
- URL: http://arxiv.org/abs/2210.13458v1
- Date: Sat, 22 Oct 2022 08:54:15 GMT
- Title: OpenAUC: Towards AUC-Oriented Open-Set Recognition
- Authors: Zitai Wang, Qianqian Xu, Zhiyong Yang, Yuan He, Xiaochun Cao, Qingming
Huang
- Abstract summary: Traditional machine learning follows a close-set assumption that the training and test set share the same label space.
Open-Set Recognition (OSR) aims to make correct predictions on both close-set samples and open-set samples.
To fix these issues, we propose a novel metric named OpenAUC.
- Score: 151.5072746015253
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional machine learning follows a close-set assumption: the
training and test sets share the same label space. In many practical scenarios,
however, some test samples inevitably belong to unknown classes (the open set).
To address this issue, Open-Set Recognition (OSR), whose goal is to make correct
predictions on both close-set and open-set samples, has attracted
rising attention. In this direction, the vast majority of the literature focuses on
the pattern of open-set samples. However, how to evaluate model performance in
this challenging task is still unsolved. In this paper, a systematic analysis
reveals that most existing metrics are essentially inconsistent with the
aforementioned goal of OSR: (1) For metrics extended from close-set
classification, such as Open-set F-score, Youden's index, and Normalized
Accuracy, a poor open-set prediction can still escape a low performance score
when paired with a superior close-set prediction. (2) Novelty detection AUC, which measures
the ranking performance between close-set and open-set samples, ignores the
close-set performance. To fix these issues, we propose a novel metric named
OpenAUC. Compared with existing metrics, OpenAUC enjoys a concise pairwise
formulation that evaluates open-set performance and close-set performance in a
coupled manner. Further analysis shows that OpenAUC is free from the
aforementioned inconsistency properties. Finally, an end-to-end learning method
is proposed to minimize the OpenAUC risk, and the experimental results on
popular benchmark datasets speak to its effectiveness.
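The pairwise formulation described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes an open-set score r(x) where higher values mean "more likely open-set", and counts a (close-set, open-set) pair as correct only when the close-set sample is correctly classified AND ranked as less novel than the open-set sample. The function name and argument names are illustrative.

```python
import numpy as np

def open_auc(close_scores, close_correct, open_scores):
    """Pairwise OpenAUC-style metric (illustrative sketch).

    close_scores:  open-set scores r(x) for close-set test samples
                   (higher = judged more novel by the model)
    close_correct: bools, whether the closed-set classifier predicted
                   each close-set sample's label correctly
    open_scores:   open-set scores r(x) for open-set test samples
    """
    close_scores = np.asarray(close_scores, dtype=float)
    close_correct = np.asarray(close_correct, dtype=bool)
    open_scores = np.asarray(open_scores, dtype=float)

    total = 0.0
    for score, correct in zip(close_scores, close_correct):
        if not correct:
            continue  # a misclassified close-set sample contributes no pairs
        # a pair counts when the open-set sample is ranked strictly more novel
        total += np.sum(open_scores > score)
    return total / (len(close_scores) * len(open_scores))
```

Note how this couples the two goals: perfect novelty ranking cannot compensate for misclassified close-set samples, and perfect close-set accuracy cannot compensate for a poor ranking, which is exactly the inconsistency the abstract attributes to the earlier metrics.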
Related papers
- Large-Scale Evaluation of Open-Set Image Classification Techniques [1.1249583407496218]
Open-Set Classification (OSC) algorithms aim to maximize both closed and open-set recognition capabilities.
Recent studies showed the utility of such algorithms on small-scale data sets, but limited experimentation makes it difficult to assess their performances in real-world problems.
arXiv Detail & Related papers (2024-06-13T13:43:01Z)
- Open-Set Recognition in the Age of Vision-Language Models [9.306738687897889]
We investigate whether vision-language models (VLMs) for open-vocabulary perception are inherently open-set models because they are trained on internet-scale datasets.
We find they introduce closed-set assumptions via their finite query set, making them vulnerable to open-set conditions.
We show that naively increasing the size of the query set to contain more and more classes does not mitigate this problem; instead, it diminishes both task performance and open-set performance.
arXiv Detail & Related papers (2024-03-25T08:14:22Z)
- Open-Set Facial Expression Recognition [42.62439125553367]
Facial expression recognition (FER) models are typically trained on datasets with a fixed number of seven basic classes.
Recent research points out that there are far more expressions than the basic ones.
We propose the open-set FER task for the first time.
arXiv Detail & Related papers (2024-01-23T05:57:50Z)
- M-Tuning: Prompt Tuning with Mitigated Label Bias in Open-Set Scenarios [103.6153593636399]
We propose a vision-language prompt tuning method with mitigated label bias (M-Tuning)
It introduces open words from WordNet to extend the prompt texts beyond closed-set label words, so that prompts are tuned in a simulated open-set scenario.
Our method achieves the best performance on datasets with various scales, and extensive ablation studies also validate its effectiveness.
arXiv Detail & Related papers (2023-03-09T09:05:47Z)
- Open-Set Likelihood Maximization for Few-Shot Learning [36.97433312193586]
We tackle the Few-Shot Open-Set Recognition (FSOSR) problem, i.e. classifying instances among a set of classes for which we only have a few labeled samples.
We explore the popular transductive setting, which leverages the unlabelled query instances at inference.
Motivated by the observation that existing transductive methods perform poorly in open-set scenarios, we propose a generalization of the maximum likelihood principle.
arXiv Detail & Related papers (2023-01-20T01:56:19Z)
- Reconstruction guided Meta-learning for Few Shot Open Set Recognition [31.49168444631114]
We propose Reconstructing Exemplar-based Few-shot Open-set ClaSsifier (ReFOCS)
By using a novel exemplar reconstruction-based meta-learning strategy, ReFOCS streamlines FSOSR.
We show ReFOCS to outperform multiple state-of-the-art methods.
arXiv Detail & Related papers (2021-07-31T23:23:35Z)
- OpenMatch: Open-set Consistency Regularization for Semi-supervised Learning with Outliers [71.08167292329028]
We propose a novel Open-set Semi-Supervised Learning (OSSL) approach called OpenMatch.
OpenMatch unifies FixMatch with novelty detection based on one-vs-all (OVA) classifiers.
It achieves state-of-the-art performance on three datasets, and even outperforms a fully supervised model in detecting outliers unseen in unlabeled data on CIFAR10.
arXiv Detail & Related papers (2021-05-28T23:57:15Z)
- WSSOD: A New Pipeline for Weakly- and Semi-Supervised Object Detection [75.80075054706079]
We propose a weakly- and semi-supervised object detection framework (WSSOD)
An agent detector is first trained on a joint dataset and then used to predict pseudo bounding boxes on weakly-annotated images.
The proposed framework demonstrates remarkable performance on the PASCAL-VOC and MSCOCO benchmarks, achieving performance comparable to that obtained in fully-supervised settings.
arXiv Detail & Related papers (2021-05-21T11:58:50Z)
- OpenGAN: Open-Set Recognition via Open Data Generation [76.00714592984552]
Real-world machine learning systems need to analyze novel testing data that differs from the training data.
Two conceptually elegant ideas for open-set discrimination are: 1) discriminatively learning an open-vs-closed binary discriminator, and 2) learning the closed-set data distribution with a GAN in an unsupervised manner.
We propose OpenGAN, which addresses the limitation of each approach by combining them with several technical insights.
arXiv Detail & Related papers (2021-04-07T06:19:24Z)
- Few-Shot Open-Set Recognition using Meta-Learning [72.15940446408824]
The problem of open-set recognition is considered.
A new oPen sEt mEta LEaRning (PEELER) algorithm is introduced.
arXiv Detail & Related papers (2020-05-27T23:49:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.