Robust Linear Regression: Phase-Transitions and Precise Tradeoffs for General Norms
- URL: http://arxiv.org/abs/2308.00556v1
- Date: Tue, 1 Aug 2023 13:55:45 GMT
- Title: Robust Linear Regression: Phase-Transitions and Precise Tradeoffs for General Norms
- Authors: Elvis Dohmatob, Meyer Scetbon
- Abstract summary: We investigate the impact of test-time adversarial attacks on linear regression models.
We determine the optimal level of robustness that any model can reach while maintaining a given level of standard predictive performance (accuracy).
We obtain a precise characterization which distinguishes between regimes where robustness is achievable without hurting standard accuracy and regimes where a tradeoff might be unavoidable.
- Score: 29.936005822346054
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we investigate the impact of test-time adversarial attacks on
linear regression models and determine the optimal level of robustness that any
model can reach while maintaining a given level of standard predictive
performance (accuracy). Through quantitative estimates, we uncover fundamental
tradeoffs between adversarial robustness and accuracy in different regimes. We
obtain a precise characterization which distinguishes between regimes where
robustness is achievable without hurting standard accuracy and regimes where a
tradeoff might be unavoidable. Our findings are empirically confirmed with
simple experiments that represent a variety of settings. This work applies to
feature covariance matrices and attack norms of any nature, and extends beyond
previous works in this area.
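For intuition on the robustness--accuracy quantities the abstract discusses, here is a minimal sketch for a linear model under ℓ∞-bounded test-time attacks. It uses the standard closed form for the worst-case squared error of a linear predictor, sup over ‖δ‖∞ ≤ ε of (w·(x+δ) − y)² = (|w·x − y| + ε‖w‖₁)², where ℓ1 is the dual norm of ℓ∞; this is a generic textbook identity, not the paper's general-norm characterization, and all variable names are illustrative.

```python
import numpy as np

def adversarial_sq_error(w, X, y, eps):
    """Worst-case squared error under l-inf attacks of radius eps.

    For a linear model, the attacker's optimal perturbation inflates
    each absolute residual by eps times the dual (l1) norm of w.
    """
    resid = np.abs(X @ w - y)
    return np.mean((resid + eps * np.sum(np.abs(w))) ** 2)

rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)                  # hypothetical ground-truth weights
y = X @ w_star + 0.1 * rng.normal(size=n)

w = np.linalg.lstsq(X, y, rcond=None)[0]     # ordinary least squares fit
std_risk = np.mean((X @ w - y) ** 2)         # standard (clean) squared error
adv_risk = adversarial_sq_error(w, X, y, eps=0.1)
# adv_risk >= std_risk always, since the attack can only inflate residuals.
```

Sweeping `eps` and re-solving under a robustness constraint is one way to trace the kind of robustness--accuracy tradeoff curve the paper characterizes precisely.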
Related papers
- A Fundamental Accuracy--Robustness Trade-off in Regression and Classification [0.0]
We derive a fundamental trade-off between standard and adversarial risk in a general situation.
As a concrete example, we evaluate the trade-off in regression with derived ridge functions under mild regularity conditions.
arXiv Detail & Related papers (2024-11-06T22:03:53Z)
- Selective Learning: Towards Robust Calibration with Dynamic Regularization [79.92633587914659]
Miscalibration in deep learning refers to a discrepancy between a model's predicted confidence and its actual performance.
We introduce Dynamic Regularization (DReg), which aims to learn what should be learned during training, thereby circumventing the confidence-adjustment trade-off.
arXiv Detail & Related papers (2024-02-13T11:25:20Z)
- Understanding the Impact of Adversarial Robustness on Accuracy Disparity [18.643495650734398]
We decompose the impact of adversarial robustness into two parts: an inherent effect that degrades standard accuracy on all classes due to the robustness constraint, and a second effect caused by the class imbalance ratio.
Our results suggest that the implications may extend to nonlinear models over real-world datasets.
arXiv Detail & Related papers (2022-11-28T20:46:51Z)
- Robustness and Accuracy Could Be Reconcilable by (Proper) Definition [109.62614226793833]
The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
We find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance.
The proposed SCORE (self-consistent robust error) facilitates the reconciliation between robustness and accuracy, while still handling the worst-case uncertainty.
arXiv Detail & Related papers (2022-02-21T10:36:09Z)
- Adversarial robustness for latent models: Revisiting the robust-standard accuracies tradeoff [12.386462516398472]
Adversarial training is often observed to reduce the standard test accuracy.
In this paper, we argue that this tradeoff is mitigated when the data enjoys a low-dimensional structure.
We show that as the ratio of the manifold dimension to the ambient dimension decreases, one can obtain models that are nearly optimal with respect to both the standard and robust accuracy measures.
arXiv Detail & Related papers (2021-10-22T17:58:27Z)
- Squared $\ell_2$ Norm as Consistency Loss for Leveraging Augmented Data to Learn Robust and Invariant Representations [76.85274970052762]
Regularizing distance between embeddings/representations of original samples and augmented counterparts is a popular technique for improving robustness of neural networks.
In this paper, we explore these various regularization choices, seeking to provide a general understanding of how we should regularize the embeddings.
We show that the generic approach we identified (squared $\ell_2$ regularized augmentation) outperforms several recent methods, each specially designed for one task.
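The consistency-regularization idea described in this entry can be sketched in a few lines: penalize the squared ℓ2 distance between embeddings of original and augmented inputs, then add that penalty to the task loss. The linear encoder, batch shapes, and weight `lam` below are purely illustrative assumptions.

```python
import numpy as np

def sq_l2_consistency(z_orig, z_aug):
    """Mean squared l2 distance between paired embeddings."""
    return np.mean(np.sum((z_orig - z_aug) ** 2, axis=1))

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))                   # toy linear encoder (hypothetical)
x = rng.normal(size=(32, 8))                  # a batch of inputs
x_aug = x + 0.05 * rng.normal(size=x.shape)   # e.g. additive-noise augmentation

penalty = sq_l2_consistency(x @ W, x_aug @ W)
# In training, the objective would be: task_loss + lam * penalty,
# with lam a tunable regularization weight.
```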
arXiv Detail & Related papers (2020-11-25T22:40:09Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Precise Statistical Analysis of Classification Accuracies for Adversarial Training [43.25761725062367]
A variety of recent adversarial training procedures have been proposed to remedy the vulnerability of models to adversarial perturbations.
We derive a precise characterization of the standard and robust accuracy for a class of minimax adversarially trained models.
arXiv Detail & Related papers (2020-10-21T18:00:53Z)
- Revisiting Ensembles in an Adversarial Context: Improving Natural Accuracy [5.482532589225552]
There is still a significant gap in natural accuracy between robust and non-robust models.
We consider a number of ensemble methods designed to mitigate this performance difference.
We consider two schemes, one that combines predictions from several randomly robust models, and the other that fuses features from robust and standard models.
arXiv Detail & Related papers (2020-02-26T15:45:58Z)
- Understanding and Mitigating the Tradeoff Between Robustness and Accuracy [88.51943635427709]
Adversarial training augments the training set with perturbations to improve the robust error.
We show that the standard error could increase even when the augmented perturbations have noiseless observations from the optimal linear predictor.
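The perturbation-augmentation scheme this entry refers to can be sketched for a linear model with ℓ∞-bounded perturbations: perturb each training input in the direction that inflates its residual, then refit on the union of clean and perturbed data. This is a generic illustration under assumed names and settings, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, eps = 200, 5, 0.3
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)                   # hypothetical ground-truth weights
y = X @ w_star + 0.5 * rng.normal(size=n)

w = np.linalg.lstsq(X, y, rcond=None)[0]      # standard least-squares fit

# Worst-case l-inf perturbation for a linear model: move each input
# along sign(w), oriented to increase that sample's residual.
resid_sign = np.sign(X @ w - y)
X_adv = X + eps * resid_sign[:, None] * np.sign(w)[None, :]

# Refit on clean plus adversarially perturbed data.
X_aug = np.vstack([X, X_adv])
y_aug = np.concatenate([y, y])
w_robust = np.linalg.lstsq(X_aug, y_aug, rcond=None)[0]

# Compare standard (clean) test error of the two fits.
X_test = rng.normal(size=(1000, d))
y_test = X_test @ w_star
err_std = np.mean((X_test @ w - y_test) ** 2)
err_robust = np.mean((X_test @ w_robust - y_test) ** 2)
```

Comparing `err_std` and `err_robust` across values of `eps` illustrates the paper's point: augmenting with perturbations can raise the standard error of the fitted predictor.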
arXiv Detail & Related papers (2020-02-25T08:03:01Z)
- GenDICE: Generalized Offline Estimation of Stationary Values [108.17309783125398]
We show that effective estimation can still be achieved in important applications.
Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions.
The resulting algorithm, GenDICE, is straightforward and effective.
arXiv Detail & Related papers (2020-02-21T00:27:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.