Reject Illegal Inputs with Generative Classifier Derived from Any
Discriminative Classifier
- URL: http://arxiv.org/abs/2001.00483v1
- Date: Thu, 2 Jan 2020 15:11:58 GMT
- Title: Reject Illegal Inputs with Generative Classifier Derived from Any
Discriminative Classifier
- Authors: Xin Wang
- Abstract summary: Supervised Deep Infomax (SDIM) is a scalable end-to-end
framework for learning generative classifiers. We propose a modification of
SDIM termed SDIM-logit.
- Score: 7.33811357166334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative classifiers have shown promise in detecting illegal
inputs, including adversarial examples and out-of-distribution samples.
Supervised Deep Infomax (SDIM) is a scalable end-to-end framework for
learning generative classifiers. In this paper, we propose a modification
of SDIM termed SDIM-logit. Instead of training a generative classifier from
scratch, SDIM-logit first takes as input the logits produced by any given
discriminative classifier and generates logit representations; a generative
classifier is then derived by imposing statistical constraints on these
logit representations. SDIM-logit inherits the performance of the
discriminative classifier without loss, adds only a negligible number of
parameters, and can be trained efficiently with the base classifier kept
fixed. We perform classification with rejection, where test samples whose
class conditionals fall below pre-chosen thresholds are rejected without
prediction. Experiments on illegal inputs, including adversarial examples,
samples with common corruptions, and out-of-distribution (OOD) samples,
show that, when allowed to reject a portion of test samples, SDIM-logit
significantly improves performance on the remaining test sets.
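
The abstract sketches a pipeline: freeze a discriminative classifier, map its logits to representations, impose class conditionals on those representations, and reject any test sample whose predicted-class conditional falls below a pre-chosen threshold. A minimal PyTorch sketch of that pipeline follows; the module name, the diagonal-Gaussian class conditionals, and all hyperparameters are illustrative assumptions, not the paper's exact design.

```python
import math
import torch
import torch.nn as nn

class LogitRejectionHead(nn.Module):
    """Hypothetical head over a frozen base classifier's logits: an encoder
    produces logit representations, and one diagonal Gaussian per class
    serves as the class conditional used for rejection."""

    def __init__(self, num_classes: int, rep_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(  # logits -> logit representation
            nn.Linear(num_classes, 64), nn.ReLU(), nn.Linear(64, rep_dim))
        self.mu = nn.Parameter(torch.randn(num_classes, rep_dim))
        self.log_sigma = nn.Parameter(torch.zeros(num_classes, rep_dim))

    def class_log_conditionals(self, logits: torch.Tensor) -> torch.Tensor:
        z = self.encoder(logits).unsqueeze(1)      # (B, 1, D)
        var = (2 * self.log_sigma).exp()           # (C, D)
        log_p = -0.5 * ((z - self.mu) ** 2 / var
                        + 2 * self.log_sigma + math.log(2 * math.pi))
        return log_p.sum(dim=-1)                   # (B, C): log p(z | y)

    @torch.no_grad()
    def predict_with_rejection(self, logits, thresholds):
        log_p = self.class_log_conditionals(logits)
        score, pred = log_p.max(dim=1)
        reject = score < thresholds[pred]  # below class threshold -> reject
        return pred, reject
```

Under this reading, the per-class thresholds would be set from training data, for example as a low percentile of each class's conditionals, so that rejection trades a fixed fraction of clean accuracy for robustness on the rest.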
Related papers
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Using Reed-Muller Codes for Classification with Rejection and Recovery [0.0]
Reed-Muller Aggregation Networks (RMAggNet)
This paper shows that RMAggNet can minimise incorrectness while maintaining good correctness over multiple adversarial attacks at different perturbation budgets.
It provides an alternative classification-with-rejection method which can reduce the amount of additional processing in situations where a small number of incorrect classifications are permissible.
arXiv Detail & Related papers (2023-09-12T16:20:20Z)
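
The RMAggNet entry above turns classification with rejection into a decoding problem. A minimal NumPy sketch of that decode-or-reject step is below; the toy codebook stands in for the Reed-Muller codewords the paper actually uses, and the ensemble of binary classifiers is assumed to have already produced the bit string.

```python
import numpy as np

# Each class is assigned a binary codeword and an ensemble of binary
# classifiers predicts one bit each; decode to the nearest codeword and
# reject when even the nearest one is outside the correctable radius t.
def decode_with_rejection(bits, codebook, t):
    dists = (codebook != bits).sum(axis=1)     # Hamming distance per class
    best = int(dists.argmin())
    return best if dists[best] <= t else None  # None signals rejection

codebook = np.array([[0, 0, 0, 0],  # toy codewords, one row per class
                     [0, 1, 1, 1],
                     [1, 0, 1, 0]])
print(decode_with_rejection(np.array([0, 1, 1, 0]), codebook, t=1))  # 1
print(decode_with_rejection(np.array([1, 1, 0, 1]), codebook, t=1))  # None
```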
- Parametric Classification for Generalized Category Discovery: A Baseline Study [70.73212959385387]
Generalized Category Discovery (GCD) aims to discover novel categories in unlabelled datasets using knowledge learned from labelled samples.
We investigate the failure of parametric classifiers, verify the effectiveness of previous design choices when high-quality supervision is available, and identify unreliable pseudo-labels as a key problem.
We propose a simple yet effective parametric classification method that benefits from entropy regularisation, achieves state-of-the-art performance on multiple GCD benchmarks and shows strong robustness to unknown class numbers.
arXiv Detail & Related papers (2022-11-21T18:47:11Z)
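
The GCD entry above credits entropy regularisation for much of the gain. The sketch below shows one common form of that regulariser under stated assumptions: supervised cross-entropy on labelled samples plus a term that maximises the entropy of the batch-mean prediction to discourage collapse onto a few classes. The function name and the weight lam are illustrative, not the paper's exact objective.

```python
import torch.nn.functional as F

def gcd_loss(logits_lab, targets, logits_all, lam: float = 1.0):
    ce = F.cross_entropy(logits_lab, targets)          # labelled samples only
    mean_prob = F.softmax(logits_all, dim=1).mean(0)   # batch-mean prediction
    mean_entropy = -(mean_prob * mean_prob.clamp_min(1e-8).log()).sum()
    return ce - lam * mean_entropy  # subtracting maximises the mean entropy
```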
- Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification [86.32752788233913]
In classification problems, the Bayes error can be used as a criterion to evaluate classifiers with state-of-the-art performance.
We propose a simple and direct Bayes error estimator, where we just take the mean of the labels that show the uncertainty of the classes.
Our flexible approach enables us to perform Bayes error estimation even for weakly supervised data.
arXiv Detail & Related papers (2022-02-01T13:22:26Z)
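
The estimator described in the entry above is direct enough to state in a few lines: with soft labels approximating the binary class posterior, the Bayes error is estimated as the mean per-sample uncertainty. A sketch under that reading:

```python
import numpy as np

# Soft labels c_i approximate P(y = 1 | x_i); the Bayes error of binary
# classification is then estimated by the mean of min(c_i, 1 - c_i).
def bayes_error_estimate(soft_labels):
    return float(np.minimum(soft_labels, 1.0 - soft_labels).mean())

print(bayes_error_estimate(np.array([0.9, 0.8, 0.55, 0.1])))  # 0.2125
```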
- When in Doubt: Improving Classification Performance with Alternating Normalization [57.39356691967766]
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification.
CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution.
We empirically demonstrate its effectiveness across a diverse set of classification tasks.
arXiv Detail & Related papers (2021-09-28T02:55:42Z)
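
The CAN entry above re-adjusts a test example's predicted distribution using the distributions of confidently classified examples. The sketch below is a generic Sinkhorn-style variant of that idea rather than the paper's exact procedure; P stacks anchor distributions with the test row last, and prior is an assumed class-prior vector.

```python
import numpy as np

def alternating_normalization(P, prior, iters: int = 3):
    A = P.astype(float).copy()
    for _ in range(iters):
        A = A * (prior / A.sum(axis=0))       # columns match the class prior
        A = A / A.sum(axis=1, keepdims=True)  # rows stay valid distributions
    return A[-1]  # re-adjusted distribution of the test example (last row)
```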
- Does Adversarial Oversampling Help us? [10.210871872870737]
We propose a three-player adversarial game-based end-to-end method to handle class imbalance in datasets.
Rather than adversarial minority oversampling, we propose an adversarial oversampling (AO) and a data-space oversampling (DO) approach.
The effectiveness of our proposed method has been validated with high-dimensional, highly imbalanced and large-scale multi-class datasets.
arXiv Detail & Related papers (2021-08-20T05:43:17Z)
- ATRO: Adversarial Training with a Rejection Option [10.36668157679368]
This paper proposes a classification framework with a rejection option to mitigate the performance deterioration caused by adversarial examples.
Applying the adversarial training objective to both a classifier and a rejection function simultaneously, we can choose to abstain from classification when the model has insufficient confidence to classify a test data point.
arXiv Detail & Related papers (2020-10-24T14:05:03Z)
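
The ATRO entry above couples a classifier with a rejection function under one adversarial objective. The sketch below is a heavily simplified stand-in, not the paper's calibrated surrogate loss: it crafts an FGSM perturbation, then adds a smoothed 0-1-c rejection risk; model, rejector, eps, and cost are all illustrative.

```python
import torch
import torch.nn.functional as F

def atro_step(model, rejector, x, y, eps=8 / 255, cost=0.3):
    # Craft an FGSM adversarial example against the current classifier.
    x = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]
    x_adv = (x + eps * grad.sign()).detach()
    logits = model(x_adv)
    accept = torch.sigmoid(rejector(x_adv).squeeze(-1))  # P(accept input)
    wrong = (logits.argmax(dim=1) != y).float()
    # Smoothed 0-1-c risk: pay 1 for accepted mistakes, `cost` for rejecting.
    risk = wrong * accept + cost * (1.0 - accept)
    return F.cross_entropy(logits, y) + risk.mean()
```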
- Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set, which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
- GIM: Gaussian Isolation Machines [40.7916016364212]
In many cases, neural network classifiers are exposed to input data that lies outside their training distribution.
We present a novel hybrid (generative-discriminative) classifier aimed at solving the problem arising when OOD data is encountered.
The proposed GIM's novelty lies in its discriminative performance and generative capabilities, a combination of characteristics not usually seen in a single classifier.
arXiv Detail & Related papers (2020-02-06T09:51:47Z)
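
The GIM entry above combines discriminative features with generative scoring. A minimal sketch of that hybrid idea follows: fit one diagonal Gaussian per class on feature vectors (for example, penultimate-layer activations), classify by the most likely class, and flag inputs as out-of-distribution when even the best class likelihood is low. The class name and diagonal covariances are simplifying assumptions, not the paper's exact model.

```python
import numpy as np

class GaussianIsolation:
    """Hypothetical per-class Gaussian scorer over classifier features."""

    def fit(self, feats, labels):
        self.classes = np.unique(labels)
        self.mu = np.stack([feats[labels == c].mean(0) for c in self.classes])
        self.var = np.stack([feats[labels == c].var(0) + 1e-6
                             for c in self.classes])
        return self

    def log_likelihood(self, feats):  # (N, C) per-class log density
        diff = feats[:, None, :] - self.mu[None]
        return -0.5 * (diff ** 2 / self.var
                       + np.log(2 * np.pi * self.var)).sum(-1)

    def predict(self, feats, threshold):
        ll = self.log_likelihood(feats)
        pred = self.classes[ll.argmax(axis=1)]
        is_ood = ll.max(axis=1) < threshold  # low likelihood -> flag as OOD
        return pred, is_ood
```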