Boosting Barely Robust Learners: A New Perspective on Adversarial
Robustness
- URL: http://arxiv.org/abs/2202.05920v1
- Date: Fri, 11 Feb 2022 22:07:36 GMT
- Title: Boosting Barely Robust Learners: A New Perspective on Adversarial
Robustness
- Authors: Avrim Blum, Omar Montasser, Greg Shakhnarovich, Hongyang Zhang
- Abstract summary: Barely robust learning algorithms learn predictors that are adversarially robust only on a small fraction of the data distribution.
Our proposed notion of barely robust learning requires robustness with respect to a "larger" perturbation set.
- Score: 30.301460075475344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an oracle-efficient algorithm for boosting the adversarial
robustness of barely robust learners. Barely robust learning algorithms learn
predictors that are adversarially robust only on a small fraction $\beta \ll 1$
of the data distribution. Our proposed notion of barely robust learning
requires robustness with respect to a "larger" perturbation set, which we show
is necessary for strongly robust learning; weaker relaxations do not
suffice. Our results reveal a qualitative and
quantitative equivalence between two seemingly unrelated problems: strongly
robust learning and barely robust learning.
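The boosting idea in the abstract can be illustrated with a toy sketch: repeatedly invoke a learner that is robust only on a $\beta$-fraction of the (still uncovered) domain, and cascade the resulting predictors until the whole domain is covered. This is a hypothetical illustration only, not the paper's oracle-efficient algorithm; the interval model, the `barely_robust_learner` function, and the coverage check are all invented placeholders.

```python
# Toy domain: 100 evenly spaced points in [0, 1). A "predictor" here is an
# interval (lo, hi) on which it is assumed to be adversarially robust; a
# barely robust learner covers only a beta-fraction of the domain per call.
# All of this is an illustrative assumption, not the paper's construction.

def barely_robust_learner(uncovered):
    """Return a predictor robust on a small slice of the uncovered region."""
    lo = min(uncovered)      # focus on the leftmost uncovered slice
    beta = 0.2               # robust on only 20% of the domain per round
    return (lo, lo + beta)

def boost(grid, rounds=10):
    """Cascade barely robust predictors until every point is covered."""
    predictors = []
    for _ in range(rounds):
        uncovered = [x for x in grid
                     if not any(lo <= x < hi for lo, hi in predictors)]
        if not uncovered:
            break
        predictors.append(barely_robust_learner(uncovered))
    return predictors

grid = [i / 100 for i in range(100)]
cascade = boost(grid)
covered = all(any(lo <= x < hi for lo, hi in cascade) for x in grid)
print(len(cascade), covered)  # 5 True: roughly 1/beta rounds cover the domain
```

In this toy, about $1/\beta$ rounds suffice, loosely echoing the quantitative relationship between barely robust and strongly robust learning that the abstract alludes to.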
Related papers
- Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z) - Theoretical Foundations of Adversarially Robust Learning [7.589246500826111]
Current machine learning systems have been shown to be brittle against adversarial examples.
In this thesis, we explore what robustness properties we can hope to guarantee against adversarial examples.
arXiv Detail & Related papers (2023-06-13T12:20:55Z) - A Comprehensive Study on Robustness of Image Classification Models:
Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve the new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z) - Towards Robust Dataset Learning [90.2590325441068]
We propose a principled, tri-level optimization to formulate the robust dataset learning problem.
Under an abstraction model that characterizes robust vs. non-robust features, the proposed method provably learns a robust dataset.
arXiv Detail & Related papers (2022-11-19T17:06:10Z) - Adversarial Robustness under Long-Tailed Distribution [93.50792075460336]
Adversarial robustness has attracted extensive studies recently by revealing the vulnerability and intrinsic characteristics of deep networks.
In this work we investigate the adversarial vulnerability as well as defense under long-tailed distributions.
We propose a clean yet effective framework, RoBal, which consists of two dedicated modules for scale invariance and data re-balancing.
arXiv Detail & Related papers (2021-04-06T17:53:08Z) - Certifiably-Robust Federated Adversarial Learning via Randomized
Smoothing [16.528628447356496]
In this paper, we incorporate smoothing techniques into federated adversarial training to enable data-private distributed learning.
Our experiments show that such an advanced federated adversarial learning framework can deliver models as robust as those trained by centralized training.
arXiv Detail & Related papers (2021-03-30T02:19:45Z) - Decoder-free Robustness Disentanglement without (Additional) Supervision [42.066771710455754]
Our proposed Adversarial Asymmetric Training (AAT) algorithm can reliably disentangle robust and non-robust representations without additional supervision on robustness.
Empirical results show that our method not only preserves accuracy by combining the two representations, but also achieves much better disentanglement than previous work.
arXiv Detail & Related papers (2020-07-02T19:51:40Z) - Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z) - Provably Robust Metric Learning [98.50580215125142]
We show that existing metric learning algorithms can result in metrics that are less robust than the Euclidean distance.
We propose a novel metric learning algorithm to find a Mahalanobis distance that is robust against adversarial perturbations.
Experimental results show that the proposed metric learning algorithm improves both certified robust errors and empirical robust errors.
arXiv Detail & Related papers (2020-06-12T09:17:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.