Understanding the Impact of Adversarial Robustness on Accuracy Disparity
- URL: http://arxiv.org/abs/2211.15762v2
- Date: Sun, 28 May 2023 05:14:28 GMT
- Title: Understanding the Impact of Adversarial Robustness on Accuracy Disparity
- Authors: Yuzheng Hu, Fan Wu, Hongyang Zhang, Han Zhao
- Abstract summary: We decompose the impact of adversarial robustness into two parts: an inherent effect that degrades the standard accuracy on all classes due to the robustness constraint, and another effect, caused by the class imbalance ratio, that increases accuracy disparity.
Our results suggest that the implications may extend to nonlinear models over real-world datasets.
- Score: 18.643495650734398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While it has long been empirically observed that adversarial robustness may
be at odds with standard accuracy and may have further disparate impacts on
different classes, it remains an open question to what extent such observations
hold and how the class imbalance plays a role within. In this paper, we attempt
to understand this question of accuracy disparity by taking a closer look at
linear classifiers under a Gaussian mixture model. We decompose the impact of
adversarial robustness into two parts: an inherent effect that will degrade the
standard accuracy on all classes due to the robustness constraint, and the
other caused by the class imbalance ratio, which will increase the accuracy
disparity compared to standard training. Furthermore, we also show that such
effects extend beyond the Gaussian mixture model, by generalizing our data
model to the general family of stable distributions. More specifically, we
demonstrate that while the constraint of adversarial robustness consistently
degrades the standard accuracy in the balanced class setting, the class
imbalance ratio plays a fundamentally different role in accuracy disparity
compared to the Gaussian case, due to the heavy tail of the stable
distribution. We additionally perform experiments on both synthetic and
real-world datasets to corroborate our theoretical findings. Our empirical
results also suggest that the implications may extend to nonlinear models over
real-world datasets. Our code is publicly available on GitHub at
https://github.com/Accuracy-Disparity/AT-on-AD.
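As a rough, self-contained illustration of the setting described above (a hypothetical toy simulation, not the paper's exact construction or proofs; the names `train_linear` and `per_class_acc` and all parameter values are assumptions for this sketch), one can train a linear classifier on an imbalanced binary Gaussian mixture with and without an l2-robust margin penalty and compare per-class accuracies:

```python
import numpy as np

# Hypothetical setup: binary Gaussian mixture in d dimensions with
# class imbalance rho (fraction of the majority class +1).
rng = np.random.default_rng(0)
d, n, rho = 10, 5000, 0.8
mu = np.ones(d) / np.sqrt(d)               # class means at +mu / -mu
y = np.where(rng.random(n) < rho, 1, -1)
X = y[:, None] * mu + rng.normal(size=(n, d))

def train_linear(X, y, eps, lr=0.1, steps=2000):
    """Logistic regression via gradient descent. For a linear model the
    worst-case l2 perturbation of radius eps yields the robust margin
    y*(w.x + b) - eps*||w||, which is what eps > 0 penalizes here."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        nw = np.linalg.norm(w)
        margin = y * (X @ w + b) - eps * nw
        p = 1.0 / (1.0 + np.exp(margin))   # = sigmoid(-margin)
        gw = (-(p * y)[:, None] * X).mean(axis=0)
        if eps > 0 and nw > 0:
            gw += p.mean() * eps * w / nw  # gradient of the -eps*||w|| term
        w -= lr * gw
        b -= lr * (-(p * y)).mean()
    return w, b

def per_class_acc(w, b, X, y):
    pred = np.sign(X @ w + b)
    return {c: (pred[y == c] == c).mean() for c in (1, -1)}

for eps in (0.0, 0.5):
    w, b = train_linear(X, y, eps)
    acc = per_class_acc(w, b, X, y)
    gap = abs(acc[1] - acc[-1])
    print(f"eps={eps}: acc(+1)={acc[1]:.3f}, acc(-1)={acc[-1]:.3f}, gap={gap:.3f}")
```

Under this toy setup one would expect the robust model (eps > 0) to lose some standard accuracy overall and, when rho is far from 0.5, to widen the per-class gap, mirroring the two-part decomposition in the abstract; the exact numbers depend on d, rho, and eps.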
Related papers
- Uncertainty-guided Boundary Learning for Imbalanced Social Event
Detection [64.4350027428928]
We propose a novel uncertainty-guided class imbalance learning framework for imbalanced social event detection tasks.
Our model significantly improves social event representation and classification tasks in almost all classes, especially those uncertain ones.
arXiv Detail & Related papers (2023-10-30T03:32:04Z)
- Robust Linear Regression: Phase-Transitions and Precise Tradeoffs for General Norms [29.936005822346054]
We investigate the impact of test-time adversarial attacks on linear regression models.
We determine the optimal level of robustness that any model can reach while maintaining a given level of standard predictive performance (accuracy)
We obtain a precise characterization which distinguishes between regimes where robustness is achievable without hurting standard accuracy and regimes where a tradeoff might be unavoidable.
arXiv Detail & Related papers (2023-08-01T13:55:45Z)
- On the Importance of Feature Separability in Predicting Out-Of-Distribution Error [25.995311155942016]
We propose a dataset-level score based upon feature dispersion to estimate the test accuracy under distribution shift.
Our method is inspired by desirable properties of features in representation learning: high inter-class dispersion and high intra-class compactness.
arXiv Detail & Related papers (2023-03-27T09:52:59Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Confidence and Dispersity Speak: Characterising Prediction Matrix for Unsupervised Accuracy Estimation [51.809741427975105]
This work aims to assess how well a model performs under distribution shifts without using labels.
We use the nuclear norm that has been shown to be effective in characterizing both properties.
We show that the nuclear norm is more accurate and robust than existing methods for accuracy estimation.
arXiv Detail & Related papers (2023-02-02T13:30:48Z)
- Robustness and Accuracy Could Be Reconcilable by (Proper) Definition [109.62614226793833]
The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
We find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance.
The proposed SCORE (self-consistent robust error) facilitates the reconciliation between robustness and accuracy, while still handling the worst-case uncertainty.
arXiv Detail & Related papers (2022-02-21T10:36:09Z)
- The Interplay between Distribution Parameters and the Accuracy-Robustness Tradeoff in Classification [0.0]
Adversarial training tends to result in models that are less accurate on natural (unperturbed) examples compared to standard models.
This can be attributed to either an algorithmic shortcoming or a fundamental property of the training data distribution.
In this work, we focus on the latter case under a binary Gaussian mixture classification problem.
arXiv Detail & Related papers (2021-07-01T06:57:50Z)
- Don't Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification [58.03725169462616]
We show theoretically that over-parametrization is not the only reason for over-confidence.
We prove that logistic regression is inherently over-confident, in the realizable, under-parametrized setting.
Perhaps surprisingly, we also show that over-confidence is not always the case.
arXiv Detail & Related papers (2021-02-15T21:38:09Z)
- Precise Statistical Analysis of Classification Accuracies for Adversarial Training [43.25761725062367]
A variety of recent adversarial training procedures have been proposed to remedy the vulnerability of models to adversarial perturbations.
We derive a precise characterization of the standard and robust accuracy for a class of minimax adversarially trained models.
arXiv Detail & Related papers (2020-10-21T18:00:53Z)
- Provable tradeoffs in adversarially robust classification [96.48180210364893]
We develop and leverage new tools, including recent breakthroughs from probability theory on robust isoperimetry.
Our results reveal fundamental tradeoffs between standard and robust accuracy that grow when data is imbalanced.
arXiv Detail & Related papers (2020-06-09T09:58:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.