The Interplay between Distribution Parameters and the
Accuracy-Robustness Tradeoff in Classification
- URL: http://arxiv.org/abs/2107.00247v1
- Date: Thu, 1 Jul 2021 06:57:50 GMT
- Title: The Interplay between Distribution Parameters and the
Accuracy-Robustness Tradeoff in Classification
- Authors: Alireza Mousavi Hosseini, Amir Mohammad Abouei, Mohammad Hossein
Rohban
- Abstract summary: Adversarial training tends to result in models that are less accurate on natural (unperturbed) examples compared to standard models.
This can be attributed to either an algorithmic shortcoming or a fundamental property of the training data distribution.
In this work, we focus on the latter case under a binary Gaussian mixture classification problem.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial training tends to result in models that are less accurate on
natural (unperturbed) examples compared to standard models. This can be
attributed to either an algorithmic shortcoming or a fundamental property of
the training data distribution, which admits different solutions for optimal
standard and adversarial classifiers. In this work, we focus on the latter case
under a binary Gaussian mixture classification problem. Unlike earlier work, we
aim to derive the natural accuracy gap between the optimal Bayes and
adversarial classifiers, and study the effect of different distributional
parameters, namely separation between class centroids, class proportions, and
the covariance matrix, on the derived gap. We show that under certain
conditions, the natural error of the optimal adversarial classifier, as well as
the gap, are locally minimized when classes are balanced, contradicting the
performance of the Bayes classifier where perfect balance induces the worst
accuracy. Moreover, we show that with an $\ell_\infty$ bounded perturbation and
an adversarial budget of $\epsilon$, this gap is $\Theta(\epsilon^2)$ for the
worst-case parameters, which for suitably small $\epsilon$ indicates the
theoretical possibility of achieving robust classifiers with near-perfect
accuracy, which is rarely reflected in practical algorithms.
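To make the setting concrete, below is a minimal numerical sketch (not the paper's derivation) of the natural-accuracy gap for a binary Gaussian mixture with identity covariance, linear classifiers, and an $\ell_\infty$ adversary of budget $\epsilon$; the closed-form error expression and the choice of optimizer are illustrative assumptions.

```python
# Minimal sketch (not the paper's derivation): natural-accuracy gap between
# the Bayes-optimal and the robust-optimal *linear* classifier on a binary
# Gaussian mixture. Assumptions: identity covariance, classes N(+mu, I) and
# N(-mu, I) with priors (p, 1-p), and an l_inf adversary of budget eps.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
d, eps, p = 5, 0.3, 0.7               # dimension, adversarial budget, P(y=+1)
mu = rng.normal(size=d) / np.sqrt(d)  # class centroids are +mu and -mu

def error(params, eps):
    """Closed-form 0-1 error of sign(w @ x + b) when x | y ~ N(y*mu, I) and
    the adversary may move x by up to eps per coordinate; the worst-case
    perturbation costs eps * ||w||_1 of margin."""
    w, b = params[:-1], params[-1]
    nw = np.linalg.norm(w) + 1e-12
    shift = eps * np.abs(w).sum()
    err_pos = norm.cdf((shift - w @ mu - b) / nw)   # class +1 misclassified
    err_neg = norm.cdf((shift - w @ mu + b) / nw)   # class -1 misclassified
    return p * err_pos + (1 - p) * err_neg

# Bayes-optimal linear rule for this mixture: w = 2*mu, b = log(p / (1 - p)).
bayes = np.append(2 * mu, np.log(p / (1 - p)))
bayes_nat = error(bayes, 0.0)

# Robust-optimal linear rule: minimize the robust error numerically.
res = minimize(error, bayes, args=(eps,), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-10, "fatol": 1e-12})
robust_nat = error(res.x, 0.0)        # natural error of the robust classifier

print(f"Bayes natural error : {bayes_nat:.4f}")
print(f"robust natural error: {robust_nat:.4f}")
print(f"natural accuracy gap: {robust_nat - bayes_nat:.6f}")
```

Shrinking `eps` should shrink the printed gap roughly quadratically, and sweeping `p` over (0, 1) probes the class-balance behavior described in the abstract.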
Related papers
- Adaptive $k$-nearest neighbor classifier based on the local estimation of the shape operator [49.87315310656657]
We introduce a new adaptive $k$-nearest neighbours ($kK$-NN) algorithm that explores the local curvature at a sample to adaptively define the neighborhood size.
Results on many real-world datasets indicate that the new $kK$-NN algorithm yields superior balanced accuracy compared to the established $k$-NN method.
arXiv Detail & Related papers (2024-09-08T13:08:45Z)
- Understanding the Impact of Adversarial Robustness on Accuracy Disparity [18.643495650734398]
We decompose the impact of adversarial robustness into two parts: an inherent effect that degrades the standard accuracy on all classes due to the robustness constraint, and another effect caused by the class imbalance ratio.
Our results suggest that the implications may extend to nonlinear models over real-world datasets.
arXiv Detail & Related papers (2022-11-28T20:46:51Z)
- Robustness and Accuracy Could Be Reconcilable by (Proper) Definition [109.62614226793833]
The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
We find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance.
The proposed SCORE (self-consistent robust error) facilitates, by definition, the reconciliation between robustness and accuracy, while still handling the worst-case uncertainty.
arXiv Detail & Related papers (2022-02-21T10:36:09Z)
- Predict then Interpolate: A Simple Algorithm to Learn Stable Classifiers [59.06169363181417]
Predict then Interpolate (PI) is an algorithm for learning correlations that are stable across environments.
We prove that by interpolating the distributions of the correct predictions and the wrong predictions, we can uncover an oracle distribution where the unstable correlation vanishes.
arXiv Detail & Related papers (2021-05-26T15:37:48Z)
- Robust Classification Under $\ell_0$ Attack for the Gaussian Mixture Model [39.414875342234204]
We develop a novel classification algorithm called FilTrun that has two main modules: Filtration and Truncation.
We discuss several examples that illustrate interesting behaviors, such as a phase transition in the adversary's budget that determines whether the effect of the adversarial perturbation can be fully neutralized.
arXiv Detail & Related papers (2021-04-05T23:31:25Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Classifier-independent Lower-Bounds for Adversarial Robustness [13.247278149124757]
We theoretically analyse the limits of robustness to test-time adversarial and noisy examples in classification.
We use optimal transport theory to derive variational formulae for the Bayes-optimal error a classifier can make on a given classification problem.
We derive explicit lower-bounds on the Bayes-optimal error in the case of the popular distance-based attacks.
arXiv Detail & Related papers (2020-06-17T16:46:39Z)
- Provable tradeoffs in adversarially robust classification [96.48180210364893]
We develop and leverage new tools, including recent breakthroughs from probability theory on robust isoperimetry.
Our results reveal fundamental tradeoffs between standard and robust accuracy that grow when data is imbalanced.
arXiv Detail & Related papers (2020-06-09T09:58:19Z)
- Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
arXiv Detail & Related papers (2020-06-07T06:57:43Z)
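To illustrate the mechanism named in the last entry, here is a rough PyTorch sketch of noise-consistency regularization; the number of noise draws `m`, the weight `lam`, the noise level `sigma`, and the exact KL form are illustrative choices, not the paper's objective.

```python
# Rough sketch of consistency regularization for a smoothed classifier:
# average cross-entropy over Gaussian-noised copies of each input, plus a
# penalty that pushes the per-noise predictions to agree with one another.
# Hyperparameters (sigma, m, lam) are illustrative, not the paper's values.
import torch
import torch.nn.functional as F

def consistency_loss(model, x, y, sigma=0.25, m=2, lam=10.0):
    logits = [model(x + sigma * torch.randn_like(x)) for _ in range(m)]
    ce = sum(F.cross_entropy(l, y) for l in logits) / m
    probs = [F.softmax(l, dim=1) for l in logits]
    mean_p = torch.stack(probs).mean(dim=0)
    # KL(q || mean_p) averaged over the m noise draws: small when the
    # classifier predicts the same distribution under every noise sample.
    kl = sum(F.kl_div(mean_p.clamp_min(1e-12).log(), q, reduction="batchmean")
             for q in probs) / m
    return ce + lam * kl
```

Used in place of plain cross-entropy in a training loop, the KL term is the consistency regularizer the entry refers to: it stabilizes predictions under the smoothing noise at some cost in clean accuracy.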
This list is automatically generated from the titles and abstracts of the papers on this site.