Boosting Barely Robust Learners: A New Perspective on Adversarial
Robustness
- URL: http://arxiv.org/abs/2202.05920v1
- Date: Fri, 11 Feb 2022 22:07:36 GMT
- Title: Boosting Barely Robust Learners: A New Perspective on Adversarial
Robustness
- Authors: Avrim Blum, Omar Montasser, Greg Shakhnarovich, Hongyang Zhang
- Abstract summary: Barely robust learning algorithms learn predictors that are adversarially robust only on a small fraction of the data distribution.
Our proposed notion of barely robust learning requires robustness with respect to a "larger" perturbation set.
- Score: 30.301460075475344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an oracle-efficient algorithm for boosting the adversarial
robustness of barely robust learners. Barely robust learning algorithms learn
predictors that are adversarially robust only on a small fraction $\beta \ll 1$
of the data distribution. Our proposed notion of barely robust learning
requires robustness with respect to a "larger" perturbation set. We show that
this is necessary for strongly robust learning and that weaker relaxations are not
sufficient for strongly robust learning. Our results reveal a qualitative and
quantitative equivalence between two seemingly unrelated problems: strongly
robust learning and barely robust learning.
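The boosting idea can be illustrated with a toy sketch. Everything below is an illustrative assumption (the certificate interface, the cascade, all function names), not the paper's actual oracle-efficient algorithm: a learner that is certified robust only near a random anchor point is called repeatedly on the points it has not yet covered, and the resulting predictors are chained into a cascade.

```python
import random

def barely_robust_learner(sample):
    """Toy "barely robust" learner on the unit interval: its predictor is
    certified robust only near a randomly chosen anchor point, i.e. on a
    small fraction (beta) of the inputs."""
    anchor_x, anchor_y = random.choice(sample)
    radius = 0.1  # certified-robust region covers only ~beta of [0, 1]

    def predict(x):
        certified = abs(x - anchor_x) <= radius  # robustness "certificate"
        return anchor_y, certified

    return predict

def boost_robustness(sample, rounds=50):
    """Boosting sketch: repeatedly call the barely robust learner on the
    points it has not yet robustly covered, and chain the predictors into
    a cascade that defers to the first certified prediction."""
    cascade, remaining = [], list(sample)
    for _ in range(rounds):
        if not remaining:
            break
        h = barely_robust_learner(remaining)
        cascade.append(h)
        # keep only the points no cascade member certifies yet
        remaining = [(x, y) for x, y in remaining if not h(x)[1]]

    def predict(x):
        for h in cascade:  # first predictor willing to certify wins
            label, certified = h(x)
            if certified:
                return label
        return cascade[0](x)[0]  # uncovered point: fall back arbitrarily

    return predict
```

Each round covers at least the anchor itself, so the uncovered set shrinks until every training point is certified by some cascade member, mirroring how barely robust coverage is amplified into strong coverage.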
Related papers
- Maintaining Adversarial Robustness in Continuous Learning [11.208958315147918]
Adversarial robustness is essential for the security and reliability of machine learning systems.
This vulnerability can be addressed by fostering a novel capability for neural networks, termed continual robust learning.
arXiv Detail & Related papers (2024-02-17T05:14:47Z)
- On the Onset of Robust Overfitting in Adversarial Training [66.27055915739331]
Adversarial Training (AT) is a widely-used algorithm for building robust neural networks.
AT suffers from the issue of robust overfitting, the fundamental mechanism of which remains unclear.
arXiv Detail & Related papers (2023-10-01T07:57:03Z)
- Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
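KL-regularized reweighting of this flavor has a well-known closed form: maximizing the weighted loss minus a KL penalty to the uniform distribution gives weights exponential in the per-instance loss. This is an illustrative stand-in (function and parameter names are assumptions, not the paper's exact objective):

```python
import math

def kl_regularized_weights(losses, lam=1.0):
    """Maximizer of sum_i w_i * l_i - lam * KL(w || uniform) over the
    probability simplex: w_i is proportional to exp(l_i / lam), so harder
    instances (larger loss) receive exponentially larger weight."""
    scores = [math.exp(l / lam) for l in losses]
    total = sum(scores)
    return [s / total for s in scores]
```

Smaller `lam` concentrates weight on the hardest instances; large `lam` recovers near-uniform weights.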
arXiv Detail & Related papers (2023-08-01T06:16:18Z)
- Theoretical Foundations of Adversarially Robust Learning [7.589246500826111]
Current machine learning systems have been shown to be brittle against adversarial examples.
In this thesis, we explore what robustness properties we can hope to guarantee against adversarial examples.
arXiv Detail & Related papers (2023-06-13T12:20:55Z)
- A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve the new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z)
- Towards Robust Dataset Learning [90.2590325441068]
We propose a principled tri-level optimization formulation of the robust dataset learning problem.
Under an abstraction model that characterizes robust vs. non-robust features, the proposed method provably learns a robust dataset.
arXiv Detail & Related papers (2022-11-19T17:06:10Z)
- Adversarial Robustness under Long-Tailed Distribution [93.50792075460336]
Adversarial robustness has attracted extensive studies recently by revealing the vulnerability and intrinsic characteristics of deep networks.
In this work we investigate the adversarial vulnerability as well as defense under long-tailed distributions.
We propose a clean yet effective framework, RoBal, which consists of two dedicated modules: a scale-invariant module and a data re-balancing module.
arXiv Detail & Related papers (2021-04-06T17:53:08Z)
- Decoder-free Robustness Disentanglement without (Additional) Supervision [42.066771710455754]
Our proposed Adversarial Asymmetric Training (AAT) algorithm can reliably disentangle robust and non-robust representations without additional supervision on robustness.
Empirical results show that our method not only successfully preserves accuracy by combining the two representations, but also achieves much better disentanglement than previous work.
arXiv Detail & Related papers (2020-07-02T19:51:40Z)
- Provably Robust Metric Learning [98.50580215125142]
We show that existing metric learning algorithms can result in metrics that are less robust than the Euclidean distance.
We propose a novel metric learning algorithm to find a Mahalanobis distance that is robust against adversarial perturbations.
Experimental results show that the proposed metric learning algorithm improves both certified robust errors and empirical robust errors.
arXiv Detail & Related papers (2020-06-12T09:17:08Z)
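The Mahalanobis family that such metric learning searches over can be written down directly. A minimal pure-Python sketch (the function name is an illustrative assumption):

```python
import math

def mahalanobis(x, y, M):
    """d_M(x, y) = sqrt((x - y)^T M (x - y)) for a positive semidefinite
    matrix M. M = I recovers the Euclidean distance; learning M amounts to
    reshaping which perturbation directions the metric is sensitive to."""
    d = [a - b for a, b in zip(x, y)]
    Md = [sum(M[i][j] * d[j] for j in range(len(d))) for i in range(len(d))]
    return math.sqrt(sum(di * mi for di, mi in zip(d, Md)))
```

With the identity matrix this reduces to the ordinary Euclidean distance; scaling a diagonal entry of `M` stretches the metric along that coordinate, which is the degree of freedom a robust metric learner exploits.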
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.