Generative Robust Classification
- URL: http://arxiv.org/abs/2212.07283v1
- Date: Wed, 14 Dec 2022 15:33:11 GMT
- Title: Generative Robust Classification
- Authors: Xuwang Yin
- Abstract summary: Training adversarially robust discriminative (i.e., softmax) classifiers has been the dominant approach to robust classification.
We investigate using adversarial training (AT)-based generative models.
We find it straightforward to apply advanced data augmentation to achieve better robustness in our approach.
- Score: 3.4773470589069477
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training adversarially robust discriminative (i.e., softmax) classifiers has
been the dominant approach to robust classification. Building on recent work on
adversarial training (AT)-based generative models, we investigate using AT to
learn unnormalized class-conditional density models and then performing
generative robust classification. Our result shows that, under the condition of
similar model capacities, the generative robust classifier achieves comparable
performance to a baseline softmax robust classifier when the test data is clean
or when the test perturbation is of limited size, and much better performance
when the test perturbation size exceeds the training perturbation size. The
generative classifier is also able to generate samples or counterfactuals that
more closely resemble the training data, suggesting that the generative
classifier can better capture the class-conditional distributions. In contrast
to standard discriminative adversarial training where advanced data
augmentation techniques are only effective when combined with weight averaging,
we find it straightforward to apply advanced data augmentation to achieve
better robustness in our approach. Our result suggests that the generative
classifier is a competitive alternative to robust classification, especially
for problems with a limited number of classes.
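The recipe in the abstract can be sketched in a few lines: learn one unnormalized class-conditional log-density per class, then classify by Bayes' rule with equal priors. The toy below (all class means and inputs are hypothetical, not from the paper) uses fixed-variance Gaussians, so the unknown normalizing constants are identical across classes and cancel in the argmax, which is exactly why unnormalized models suffice here.

```python
import math

# Hypothetical stand-in for an AT-trained unnormalized density model:
# one fixed-variance Gaussian "energy" per class. Equal (unknown)
# normalizers cancel when we take the argmax across classes.
class_means = {0: (-2.0, 0.0), 1: (2.0, 0.0)}

def log_unnormalized_density(x, y):
    mx, my = class_means[y]
    return -0.5 * ((x[0] - mx) ** 2 + (x[1] - my) ** 2)

def generative_classify(x):
    # Bayes' rule with equal priors: argmax_y log p(x|y)
    return max(class_means, key=lambda y: log_unnormalized_density(x, y))

print(generative_classify((-1.5, 0.3)))   # -> 0
print(generative_classify((2.2, -0.1)))   # -> 1
```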
Related papers
- TWINS: A Fine-Tuning Framework for Improved Transferability of
Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, the Two-WIng NormliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
arXiv Detail & Related papers (2023-03-20T14:12:55Z)
- Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker [57.49330031751386]
We find achievable information-theoretic lower bounds on loss in the presence of a test-time attacker for multi-class classifiers on any discrete dataset.
We provide a general framework for finding the optimal 0-1 loss that revolves around the construction of a conflict hypergraph from the data and adversarial constraints.
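As an illustrative simplification of the conflict-hypergraph idea (pairwise conflicts only, points on the real line, an L∞ attacker with budget eps; all data hypothetical), two differently labelled points conflict when the attacker can drive both to the same input. No classifier can be correct on both endpoints of a conflicting pair, so any matching in the conflict graph lower-bounds the achievable 0-1 loss:

```python
# Two points with different labels conflict if their eps-balls overlap,
# i.e. |x1 - x2| <= 2*eps in 1-D: the attacker can send both to the
# same point, so at least one of the pair must be misclassified.
def conflict_edges(points, labels, eps):
    n = len(points)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if labels[i] != labels[j] and abs(points[i] - points[j]) <= 2 * eps]

def matching_lower_bound(points, labels, eps):
    # Greedy (maximal) matching: every matched pair forces >= 1 error,
    # so matched_pairs / n is a valid lower bound on the 0-1 loss.
    used, matched = set(), 0
    for i, j in conflict_edges(points, labels, eps):
        if i not in used and j not in used:
            used.update((i, j))
            matched += 1
    return matched / len(points)

pts, labs = [0.0, 0.5, 3.0, 3.4], [0, 1, 0, 1]
print(matching_lower_bound(pts, labs, eps=0.3))   # -> 0.5
```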
arXiv Detail & Related papers (2023-02-21T15:17:13Z)
- Anomaly Detection using Ensemble Classification and Evidence Theory [62.997667081978825]
We present a novel approach for anomaly detection using ensemble classification and evidence theory.
A pool selection strategy is presented to build a solid ensemble classifier.
Prediction uncertainty, quantified with evidence theory, is then used to detect anomalies.
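Evidence theory typically enters through Dempster's rule of combination, which fuses the ensemble members' belief masses while keeping explicit mass on the uncertain set. A minimal sketch (the frame of discernment and the mass values are hypothetical, not from the paper):

```python
# Dempster's rule over mass functions, represented as dicts mapping
# frozenset hypotheses to masses. Mass on the full frame {normal, anomaly}
# encodes an ensemble member's uncertainty.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict  # renormalize by the non-conflicting mass
    return {s: v / k for s, v in combined.items()}

A = frozenset({"normal"})
AB = frozenset({"normal", "anomaly"})   # "don't know"
m1 = {A: 0.6, AB: 0.4}   # ensemble member 1: leans normal, some uncertainty
m2 = {A: 0.5, AB: 0.5}   # ensemble member 2
fused = dempster_combine(m1, m2)
print(round(fused[A], 2))   # -> 0.8
```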
arXiv Detail & Related papers (2022-12-23T00:50:41Z)
- Parametric Classification for Generalized Category Discovery: A Baseline Study [70.73212959385387]
Generalized Category Discovery (GCD) aims to discover novel categories in unlabelled datasets using knowledge learned from labelled samples.
We investigate the failure of parametric classifiers, verify the effectiveness of previous design choices when high-quality supervision is available, and identify unreliable pseudo-labels as a key problem.
We propose a simple yet effective parametric classification method that benefits from entropy regularisation, achieves state-of-the-art performance on multiple GCD benchmarks and shows strong robustness to unknown class numbers.
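One common form of entropy regularisation in this setting maximises the entropy of the classifier's *mean* prediction over a batch, discouraging collapse onto a few (known) classes. A sketch of such an objective; the combination weight and exact form are assumptions for illustration, not the paper's verbatim loss:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def mean_entropy(batch_logits):
    # Entropy of the batch-averaged prediction; maximal (log k) when the
    # classifier spreads its predictions uniformly over the k classes.
    probs = [softmax(z) for z in batch_logits]
    k = len(probs[0])
    mean_p = [sum(p[c] for p in probs) / len(probs) for c in range(k)]
    return -sum(p * math.log(p) for p in mean_p if p > 0)

def regularised_loss(ce_loss, batch_logits, weight=0.1):
    # Hypothetical objective: minimising it *increases* the mean-entropy
    # term, penalising collapse onto the labelled classes.
    return ce_loss - weight * mean_entropy(batch_logits)

print(round(mean_entropy([[0.0, 0.0], [0.0, 0.0]]), 4))   # -> 0.6931
```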
arXiv Detail & Related papers (2022-11-21T18:47:11Z)
- The Impact of Using Regression Models to Build Defect Classifiers [13.840006058766766]
It is common practice to discretize continuous defect counts into defective and non-defective classes.
We compare the performance and interpretation of defect classifiers trained directly on discretized classes with those derived from regression models fit on raw defect counts.
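The regression-based alternative can be sketched as fit-then-threshold: train a regressor on the raw defect counts and discretize only its predictions. A toy least-squares version; the feature, counts, and cut-off are all hypothetical:

```python
# Regression-then-threshold defect "classifier": fit least squares on
# raw defect counts, then flag a module as defective when the predicted
# count exceeds a cut-off, instead of discretising before training.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

loc = [10, 20, 30, 40]    # hypothetical module-size feature
defects = [0, 0, 1, 3]    # raw defect counts (not discretised)
slope, intercept = fit_line(loc, defects)

def classify(x, cutoff=0.5):
    return int(slope * x + intercept > cutoff)

print(classify(15), classify(45))   # -> 0 1
```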
arXiv Detail & Related papers (2022-02-12T22:12:55Z)
- Score-Based Generative Classifiers [9.063815952852783]
Generative models have been used as adversarially robust classifiers on simple datasets such as MNIST.
Previous results have suggested a trade-off between the likelihood of the data and classification accuracy.
We show that score-based generative models are closing the gap in classification accuracy compared to standard discriminative models.
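However the class-conditional log-likelihoods are obtained (for score-based models, typically via the probability-flow ODE, which is beyond a short sketch), classification reduces to Bayes' rule. A numerically stable version that also folds in class priors; all numbers are hypothetical:

```python
import math

def posterior(log_likelihoods, log_priors):
    # Bayes' rule: p(y|x) proportional to exp(log p(x|y) + log p(y)),
    # computed with the max-subtraction trick for numerical stability.
    scores = [ll + lp for ll, lp in zip(log_likelihoods, log_priors)]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Equal likelihoods and equal priors give a uniform posterior.
print(posterior([-3.0, -3.0], [math.log(0.5)] * 2))   # -> [0.5, 0.5]
```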
arXiv Detail & Related papers (2021-10-01T15:05:33Z)
- Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z)
- Dynamic Decision Boundary for One-class Classifiers applied to non-uniformly Sampled Data [0.9569316316728905]
Non-uniformly sampled data is a typical issue in pattern recognition.
In this paper, we propose a one-class classifier based on the minimum spanning tree with a dynamic decision boundary.
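A simplified stand-in for this idea: build the minimum spanning tree of the training points and accept a test point when it lies within a radius derived from the MST edge lengths. The single static radius below (the longest edge, scaled) is an assumption for illustration; the paper's boundary is dynamic:

```python
def mst_edge_lengths(points):
    # Prim's algorithm on 1-D points with Euclidean distance.
    in_tree = {0}
    lengths = []
    while len(in_tree) < len(points):
        d, j = min((abs(points[i] - points[k]), k)
                   for i in in_tree
                   for k in range(len(points)) if k not in in_tree)
        in_tree.add(j)
        lengths.append(d)
    return lengths

def accepts(points, x, scale=1.0):
    # Hypothetical boundary: accept x if it is closer to some training
    # point than `scale` times the longest MST edge.
    radius = scale * max(mst_edge_lengths(points))
    return min(abs(x - p) for p in points) <= radius

train = [0.0, 0.4, 1.0, 1.3]    # longest MST edge: 0.6
print(accepts(train, 1.5))   # -> True  (0.2 from 1.3)
print(accepts(train, 5.0))   # -> False (3.7 from 1.3)
```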
arXiv Detail & Related papers (2020-04-05T18:29:36Z) - Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.