Unified Classification and Rejection: A One-versus-All Framework
- URL: http://arxiv.org/abs/2311.13355v1
- Date: Wed, 22 Nov 2023 12:47:12 GMT
- Title: Unified Classification and Rejection: A One-versus-All Framework
- Authors: Zhen Cheng, Xu-Yao Zhang, Cheng-Lin Liu
- Abstract summary: We propose a unified framework for building open set classifiers that handle both classification and OOD rejection.
By decomposing the $ K $-class problem into $ K $ one-versus-all (OVA) binary classification tasks, we show that combining the scores of OVA classifiers can give $ (K + 1) $-class posterior probabilities.
We implement the OVA framework and hybrid training strategy on the recently proposed convolutional prototype network.
- Score: 53.47637947615391
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Classifying patterns of known classes and rejecting ambiguous and novel
(also called out-of-distribution, OOD) inputs are both involved in open-world pattern
recognition. Deep neural network models usually excel in closed-set
classification while performing poorly at rejecting OOD inputs. To tackle this
problem, numerous methods have been designed to perform open set recognition
(OSR) or OOD rejection/detection tasks. Previous methods mostly take
post-training score transformation or hybrid models to ensure low scores on OOD
inputs while separating known classes. In this paper, we build a unified
framework for open set classifiers that handles both classification and
OOD rejection. We formulate open set recognition with $K$ known classes as a
$(K + 1)$-class classification problem, with the model trained on known-class
samples only. By decomposing the $K$-class problem into $K$ one-versus-all
(OVA) binary classification tasks and binding some parameters, we show that
combining the scores of the OVA classifiers yields $(K + 1)$-class posterior
probabilities, enabling classification and OOD rejection in a unified
framework. To maintain the closed-set classification accuracy of the
OVA-trained classifier, we propose a hybrid training strategy combining OVA loss
and multi-class cross-entropy loss. We implement the OVA framework and hybrid
training strategy on the recently proposed convolutional prototype network.
Experiments on popular OSR and OOD detection datasets demonstrate that the
proposed framework, using a single multi-class classifier, yields competitive
performance in closed-set classification, OOD detection, and misclassification
detection.
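The core idea above can be sketched in code. Below is a minimal illustration of how $K$ one-versus-all sigmoid scores might be combined into $(K + 1)$-class posteriors, and of a hybrid loss mixing OVA binary cross-entropy with multi-class cross-entropy. The combination rule (class $k$ accepts while all others reject; the extra class corresponds to every OVA classifier rejecting) and the weighting parameter `lam` are plausible assumptions for illustration, not the paper's exact parameter binding:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ova_to_posteriors(logits):
    """Combine K one-versus-all logits into (K+1)-class posteriors.

    Sketch of one plausible combination (not necessarily the paper's
    exact parameter binding): class k wins when its OVA classifier
    accepts the input and all others reject it; the extra (K+1)-th
    outcome -- every OVA classifier rejecting -- models OOD inputs.
    """
    s = [sigmoid(z) for z in logits]
    K = len(s)
    known = [s[k] * math.prod(1.0 - s[j] for j in range(K) if j != k)
             for k in range(K)]
    novel = math.prod(1.0 - sk for sk in s)  # all classifiers say "not me"
    scores = known + [novel]
    total = sum(scores)
    return [v / total for v in scores]

def hybrid_loss(logits, label, lam=0.5):
    """Hybrid objective: lam * (mean OVA binary cross-entropy)
    + (1 - lam) * multi-class cross-entropy.
    `lam` is a hypothetical weighting knob, not a value from the paper."""
    eps = 1e-12
    s = [sigmoid(z) for z in logits]
    ova = -sum(math.log((s[k] if k == label else 1.0 - s[k]) + eps)
               for k in range(len(s))) / len(s)
    m = max(logits)  # shift for numerical stability
    exps = [math.exp(z - m) for z in logits]
    ce = -math.log(exps[label] / sum(exps) + eps)
    return lam * ova + (1.0 - lam) * ce
```

With confident known-class logits, the posterior mass concentrates on that class; when every OVA classifier rejects, the mass shifts to the $(K + 1)$-th slot, so a single output vector serves both classification and OOD rejection.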
Related papers
- CBR - Boosting Adaptive Classification By Retrieval of Encrypted Network Traffic with Out-of-distribution [9.693391036125908]
One of the common approaches is using Machine learning or Deep Learning-based solutions on a fixed number of classes.
One of the solutions for handling unknown classes is to retrain the model, however, retraining models every time they become obsolete is both resource and time-consuming.
In this paper, we introduce Adaptive Classification By Retrieval CBR, a novel approach for encrypted network traffic classification.
arXiv Detail & Related papers (2024-03-17T13:14:09Z) - Multi-Classification using One-versus-One Deep Learning Strategy with
Joint Probability Estimates [0.0]
Numerical experiments in different applications show that the proposed model achieves generally higher classification accuracy than other state-of-the-art models.
arXiv Detail & Related papers (2023-06-16T07:54:15Z) - Learning Classifiers of Prototypes and Reciprocal Points for Universal
Domain Adaptation [79.62038105814658]
Universal Domain Adaptation aims to transfer knowledge between datasets by handling two shifts: domain-shift and category-shift.
The main challenge is correctly distinguishing unknown target samples while adapting the distribution of known-class knowledge from source to target.
Most existing methods approach this problem by first training on the target-adapted known classes and then relying on a single threshold to distinguish unknown target samples.
arXiv Detail & Related papers (2022-12-16T09:01:57Z) - Partial and Asymmetric Contrastive Learning for Out-of-Distribution
Detection in Long-Tailed Recognition [80.07843757970923]
We show that existing OOD detection methods suffer from significant performance degradation when the training set is long-tail distributed.
We propose Partial and Asymmetric Supervised Contrastive Learning (PASCL), which explicitly encourages the model to distinguish between tail-class in-distribution samples and OOD samples.
Our method outperforms the previous state-of-the-art method by $1.29\%$, $1.45\%$, and $0.69\%$ in anomaly detection false positive rate (FPR) and by $3.24\%$, $4.06\%$, and $7.89\%$ in in-distribution
arXiv Detail & Related papers (2022-07-04T01:53:07Z) - Exemplar-free Class Incremental Learning via Discriminative and
Comparable One-class Classifiers [12.121885324463388]
We propose a new framework, named Discriminative and Comparable One-class classifiers for Incremental Learning (DisCOIL)
DisCOIL follows the basic principle of POC, but it adopts variational auto-encoders (VAE) instead of other well-established one-class classifiers (e.g. deep SVDD)
With this advantage, DisCOIL trains a new-class VAE in contrast with the old-class VAEs, which forces the new-class VAE to reconstruct better for new-class samples but worse for the old-class pseudo samples, thus enhancing the
arXiv Detail & Related papers (2022-01-05T07:16:34Z) - $k$Folden: $k$-Fold Ensemble for Out-Of-Distribution Detection [31.10536251430344]
Out-of-Distribution (OOD) detection is an important problem in natural language processing (NLP)
We propose a framework $k$Folden, which mimics the behaviors of OOD detection during training without the use of any external data.
arXiv Detail & Related papers (2021-08-29T01:52:11Z) - No Fear of Heterogeneity: Classifier Calibration for Federated Learning
with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z) - Binary Classification from Multiple Unlabeled Datasets via Surrogate Set
Classification [94.55805516167369]
We propose a new approach for binary classification from $m$ U-sets for $m \ge 2$.
Our key idea is to consider an auxiliary classification task called surrogate set classification (SSC)
arXiv Detail & Related papers (2021-02-01T07:36:38Z) - Learning and Evaluating Representations for Deep One-class
Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.