Robustness and Accuracy Could Be Reconcilable by (Proper) Definition
- URL: http://arxiv.org/abs/2202.10103v1
- Date: Mon, 21 Feb 2022 10:36:09 GMT
- Title: Robustness and Accuracy Could Be Reconcilable by (Proper) Definition
- Authors: Tianyu Pang, Min Lin, Xiao Yang, Jun Zhu, Shuicheng Yan
- Abstract summary: The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
We find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance.
Advocating local equivariance instead leads to a self-consistent robust error (SCORE), which by definition facilitates the reconciliation between robustness and accuracy while still handling the worst-case uncertainty.
- Score: 109.62614226793833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The trade-off between robustness and accuracy has been widely studied in the
adversarial literature. Although still controversial, the prevailing view is
that this trade-off is inherent, either empirically or theoretically. Thus, we
dig for the origin of this trade-off in adversarial training and find that it
may stem from the improperly defined robust error, which imposes an inductive
bias of local invariance -- an overcorrection towards smoothness. Given this,
we advocate employing local equivariance to describe the ideal behavior of a
robust model, leading to a self-consistent robust error named SCORE. By
definition, SCORE facilitates the reconciliation between robustness and
accuracy, while still handling the worst-case uncertainty via robust
optimization. By simply substituting KL divergence with variants of distance
metrics, SCORE can be efficiently minimized. Empirically, our models achieve
top-rank performance on RobustBench under AutoAttack. Besides, SCORE provides
instructive insights for explaining the overfitting phenomenon and semantic
input gradients observed on robust models.
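The abstract's recipe is concrete enough to sketch: keep a TRADES-style robust objective, but swap the KL-divergence term for a distance metric between predictions. Below is a minimal PyTorch sketch of that substitution, assuming a squared L2 distance over softmax outputs and pairing the perturbed prediction with the clean prediction; the function name, hyperparameters, and that pairing are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def score_like_loss(model, x, y, eps=8 / 255, step=2 / 255, n_steps=10, beta=6.0):
    """TRADES-style loss with the KL term replaced by a squared L2 distance."""
    model.eval()
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)
    # Inner maximization: PGD on the L2 distance between perturbed and
    # clean predictions, projected onto the L-infinity ball of radius eps.
    x_adv = x + 0.001 * torch.randn_like(x)
    for _ in range(n_steps):
        x_adv = x_adv.detach().requires_grad_(True)
        dist = ((F.softmax(model(x_adv), dim=1) - p_clean) ** 2).sum(dim=1).mean()
        grad = torch.autograd.grad(dist, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    model.train()
    # Outer minimization: clean cross-entropy plus the weighted distance term.
    logits = model(x)
    robust_term = ((F.softmax(model(x_adv), dim=1)
                    - F.softmax(logits, dim=1)) ** 2).sum(dim=1).mean()
    return F.cross_entropy(logits, y) + beta * robust_term
```

One intuition, following the abstract, is that a bounded distance term avoids the overcorrection towards smoothness that the KL-based robust error imposes.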
Related papers
- Generalized Gaussian Temporal Difference Error for Uncertainty-aware Reinforcement Learning [0.19418036471925312]
We introduce a novel framework for generalized Gaussian error modeling in deep reinforcement learning.
Our framework enhances the flexibility of error distribution modeling by incorporating an additional higher-order moment, particularly kurtosis.
arXiv Detail & Related papers (2024-08-05T08:12:25Z)
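The entry above hinges on modeling TD errors with a generalized Gaussian (generalized normal) distribution whose shape parameter controls kurtosis. A minimal PyTorch sketch of the corresponding negative log-likelihood follows, assuming the standard generalized-normal density; the paper's exact parameterization and how the moments are learned may differ.

```python
import torch

def generalized_gaussian_nll(td_error, alpha, beta):
    """NLL of x ~ GGD(0, alpha, beta), with density
    p(x) = beta / (2 * alpha * Gamma(1 / beta)) * exp(-(|x| / alpha) ** beta).
    beta = 2 recovers a Gaussian; beta < 2 yields heavier tails (higher kurtosis)."""
    log_norm = torch.log(2 * alpha) + torch.lgamma(1.0 / beta) - torch.log(beta)
    return log_norm + (td_error.abs() / alpha) ** beta

# Example: penalize TD errors under a heavy-tailed error model.
loss = generalized_gaussian_nll(torch.randn(32),
                                alpha=torch.tensor(1.0),
                                beta=torch.tensor(1.5)).mean()
```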
- Regulating Model Reliance on Non-Robust Features by Smoothing Input Marginal Density [93.32594873253534]
Trustworthy machine learning requires meticulous regulation of model reliance on non-robust features.
We propose a framework to delineate and regulate such features by attributing model predictions to the input.
arXiv Detail & Related papers (2024-07-05T09:16:56Z)
- Extreme Miscalibration and the Illusion of Adversarial Robustness [66.29268991629085]
Adversarial Training is often used to increase model robustness.
We show that this observed gain in robustness is an illusion of robustness (IOR).
We urge the NLP community to incorporate test-time temperature scaling into their robustness evaluations (sketched below).
arXiv Detail & Related papers (2024-02-27T13:49:12Z)
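The practical suggestion in the entry above is easy to sketch: evaluate robustness against a temperature-scaled copy of the model, so that gradient-based attacks are not blunted by extreme miscalibration. The wrapper below is a minimal sketch; the class name and temperature value are illustrative.

```python
import torch.nn as nn

class TemperatureScaled(nn.Module):
    """Wraps a classifier and divides its logits by a fixed temperature."""

    def __init__(self, model: nn.Module, temperature: float = 10.0):
        super().__init__()
        self.model = model
        self.temperature = temperature

    def forward(self, x):
        # T > 1 softens overconfident outputs, restoring usable gradients
        # for the attack and exposing illusory robustness.
        return self.model(x) / self.temperature

# Run PGD/AutoAttack against TemperatureScaled(model) rather than the raw model.
```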
- Selective Learning: Towards Robust Calibration with Dynamic Regularization [79.92633587914659]
Miscalibration in deep learning refers to a discrepancy between a model's predicted confidence and its actual performance.
We introduce Dynamic Regularization (DReg), which aims to learn what should be learned during training, thereby circumventing the confidence-adjustment trade-off.
arXiv Detail & Related papers (2024-02-13T11:25:20Z)
- Distributional Shift-Aware Off-Policy Interval Estimation: A Unified Error Quantification Framework [8.572441599469597]
We study high-confidence off-policy evaluation in the context of infinite-horizon Markov decision processes.
The objective is to establish a confidence interval (CI) for the target policy value using only offline data pre-collected from unknown behavior policies.
We show that our algorithm is sample-efficient, error-robust, and provably convergent even in non-linear function approximation settings.
arXiv Detail & Related papers (2023-09-23T06:35:44Z)
- Understanding the Impact of Adversarial Robustness on Accuracy Disparity [18.643495650734398]
We decompose the impact of adversarial robustness into two parts: an inherent effect that degrades standard accuracy on all classes due to the robustness constraint, and another caused by the class imbalance ratio.
Our results suggest that the implications may extend to nonlinear models over real-world datasets.
arXiv Detail & Related papers (2022-11-28T20:46:51Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Differentially Private Adversarial Robustness Through Randomized Perturbations [16.187650541902283]
Deep neural networks are provably sensitive to small perturbations on correctly classified examples, which lead to erroneous predictions.
In this paper, we study adversarial robustness through randomized perturbations.
Our approach uses a novel density-based mechanism based on truncated Gumbel noise.
arXiv Detail & Related papers (2020-09-27T00:58:32Z)
- Improving Calibration through the Relationship with Adversarial Robustness [19.384119330332446]
We study the connection between adversarial robustness and calibration.
We propose Adversarial Robustness based Adaptive Labeling (AR-AdaLS); a hypothetical sketch follows this entry.
We find that our method, which takes the adversarial robustness of the in-distribution data into consideration, leads to better calibration of the model even under distributional shifts.
arXiv Detail & Related papers (2020-06-29T20:56:33Z)
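One plausible reading of AR-AdaLS, sketched below, is label smoothing whose strength adapts to a per-example adversarial-robustness signal. Everything here is an assumption for illustration: the robustness signal (whether an attack flips the prediction), the two smoothing levels, and the function name are hypothetical, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def adaptive_smoothing_loss(logits, adv_logits, y, n_classes,
                            eps_robust=0.05, eps_fragile=0.2):
    """Cross-entropy with heavier label smoothing on adversarially fragile examples."""
    flipped = adv_logits.argmax(dim=1) != logits.argmax(dim=1)
    eps = torch.where(flipped,
                      torch.tensor(eps_fragile, device=logits.device),
                      torch.tensor(eps_robust, device=logits.device))
    one_hot = F.one_hot(y, n_classes).float()
    # Standard label smoothing, but with a per-example smoothing rate.
    soft = one_hot * (1 - eps).unsqueeze(1) + (eps / n_classes).unsqueeze(1)
    return -(soft * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```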
- Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
arXiv Detail & Related papers (2020-06-07T06:57:43Z)
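The mechanism named in the last entry, regularizing prediction consistency over noise, can be sketched directly: draw several Gaussian-noisy copies of each input and penalize disagreement among their predictions. The divergence choice and weighting below are illustrative; the paper's exact objective may differ.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x, y, sigma=0.25, m=2, lam=1.0):
    """Cross-entropy on noisy copies plus a consistency penalty among them."""
    log_probs = [F.log_softmax(model(x + sigma * torch.randn_like(x)), dim=1)
                 for _ in range(m)]
    # Standard training signal for a smoothed classifier.
    ce = sum(F.nll_loss(lp, y) for lp in log_probs) / m
    # Consistency: KL divergence from the mean prediction to each noisy copy.
    mean_p = torch.stack([lp.exp() for lp in log_probs]).mean(dim=0)
    cons = sum(F.kl_div(lp, mean_p, reduction="batchmean") for lp in log_probs) / m
    return ce + lam * cons
```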
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.