Probabilistically Robust Learning: Balancing Average- and Worst-case
Performance
- URL: http://arxiv.org/abs/2202.01136v1
- Date: Wed, 2 Feb 2022 17:01:38 GMT
- Title: Probabilistically Robust Learning: Balancing Average- and Worst-case
Performance
- Authors: Alexander Robey and Luiz F. O. Chamon and George J. Pappas and Hamed
Hassani
- Abstract summary: We propose a framework called probabilistic robustness that bridges the gap between the accurate, yet brittle average case and the robust, yet conservative worst case.
From a theoretical point of view, this framework overcomes the trade-offs between the performance and the sample-complexity of worst-case and average-case learning.
- Score: 105.87195436925722
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many of the successes of machine learning are based on minimizing an averaged
loss function. However, it is well-known that this paradigm suffers from
robustness issues that hinder its applicability in safety-critical domains.
These issues are often addressed by training against worst-case perturbations
of data, a technique known as adversarial training. Although empirically
effective, adversarial training can be overly conservative, leading to
unfavorable trade-offs between nominal performance and robustness. To this end,
in this paper we propose a framework called probabilistic robustness that
bridges the gap between the accurate, yet brittle average case and the robust,
yet conservative worst case by enforcing robustness to most rather than to all
perturbations. From a theoretical point of view, this framework overcomes the
trade-offs between the performance and the sample-complexity of worst-case and
average-case learning. From a practical point of view, we propose a novel
algorithm based on risk-aware optimization that effectively balances average-
and worst-case performance at a considerably lower computational cost relative
to adversarial training. Our results on MNIST, CIFAR-10, and SVHN illustrate
the advantages of this framework on the spectrum from average- to worst-case
robustness.
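To make the probabilistic notion above concrete: instead of requiring a classifier to be correct under every perturbation in an eps-ball (worst case), one asks that it be correct for all but a small rho-fraction of perturbations. The sketch below Monte Carlo-estimates that fraction for a generic classifier; it illustrates only the definition, not the paper's risk-aware training algorithm, and the function names and uniform sampling scheme are illustrative assumptions.

```python
import numpy as np

def misclassification_probability(predict, x, y, eps, n_samples=1000, seed=0):
    """Monte Carlo estimate of P(predict(x + delta) != y) for delta drawn
    uniformly from the L-infinity ball of radius eps around x."""
    rng = np.random.default_rng(seed)
    deltas = rng.uniform(-eps, eps, size=(n_samples,) + x.shape)
    preds = np.array([predict(x + d) for d in deltas])
    return float(np.mean(preds != y))

def is_probabilistically_robust(predict, x, y, eps, rho, n_samples=1000, seed=0):
    """(x, y) is probabilistically robust if at most a rho-fraction of the
    perturbations in the eps-ball change the prediction away from y."""
    return misclassification_probability(predict, x, y, eps, n_samples, seed) <= rho
```

Setting rho = 0 recovers the worst-case (adversarial) requirement, while large rho approaches plain average-case accuracy, which is the spectrum the paper interpolates.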
Related papers
- Conflict-Aware Adversarial Training [29.804312958830636]
We argue that the weighted-average method does not provide the best trade-off between standard performance and adversarial robustness.
We propose a new trade-off paradigm for adversarial training with a conflict-aware factor for the convex combination of standard and adversarial loss, named Conflict-Aware Adversarial Training (CA-AT).
arXiv Detail & Related papers (2024-10-21T23:44:03Z) - Towards Fairness-Aware Adversarial Learning [13.932705960012846]
We propose a novel learning paradigm, named Fairness-Aware Adversarial Learning (FAAL).
Our method aims to find the worst distribution among different categories, and the solution is guaranteed to achieve the upper-bound performance with high probability.
In particular, FAAL can fine-tune an unfair robust model to be fair within only two epochs, without compromising the overall clean and robust accuracies.
arXiv Detail & Related papers (2024-02-27T18:01:59Z) - Perturbation-Invariant Adversarial Training for Neural Ranking Models:
Improving the Effectiveness-Robustness Trade-Off [107.35833747750446]
Adversarial examples can be crafted by adding imperceptible perturbations to legitimate documents.
This vulnerability raises significant concerns about the reliability of neural ranking models (NRMs) and hinders their widespread deployment.
In this study, we establish theoretical guarantees regarding the effectiveness-robustness trade-off in NRMs.
arXiv Detail & Related papers (2023-12-16T05:38:39Z) - Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
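The blurb above mentions importance weights obtained from a KL-divergence regularized loss. A standard closed form for such KL-regularized reweighting over the probability simplex is a softmax of the per-example losses; the sketch below shows that general construction, not necessarily the paper's exact scheme (the temperature `tau` and the uniform reference distribution are assumptions).

```python
import numpy as np

def instance_weights(losses, tau=1.0):
    """Weights maximizing sum_i w_i * loss_i - tau * KL(w || uniform)
    over the simplex; the closed-form solution is softmax(losses / tau).
    Harder examples (higher loss) receive larger weights."""
    z = np.asarray(losses, dtype=float) / tau
    z = z - z.max()          # shift for numerical stability; softmax is shift-invariant
    w = np.exp(z)
    return w / w.sum()
```

Larger `tau` pulls the weights back toward uniform (average-case training), while small `tau` concentrates weight on the hardest instances, echoing the average-vs-worst-case spectrum of the main paper.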
arXiv Detail & Related papers (2023-08-01T06:16:18Z) - WAT: Improve the Worst-class Robustness in Adversarial Training [11.872656386839436]
Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples.
Adversarial training is a popular strategy to defend against adversarial attacks.
This paper proposes a novel framework of worst-class adversarial training.
arXiv Detail & Related papers (2023-02-08T12:54:19Z) - On the Convergence and Robustness of Adversarial Training [134.25999006326916]
Adversarial training with Projected Gradient Descent (PGD) is among the most effective defenses.
We propose a dynamic training strategy to increase the convergence quality of the generated adversarial examples.
Our theoretical and empirical results show the effectiveness of the proposed method.
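For reference, PGD crafts an adversarial example by repeatedly stepping in the sign of the loss gradient and projecting back into the allowed perturbation ball. The sketch below runs the L-infinity variant against a simple logistic regression model with an analytic gradient; the model, step size `alpha`, and step count are illustrative assumptions, not the cited paper's setup.

```python
import numpy as np

def pgd_attack(w, b, x, y, eps, alpha=0.05, steps=20):
    """L-infinity PGD against a logistic model p(y=1|x) = sigmoid(w.x + b).

    Ascends the cross-entropy loss of the true label y in {0, 1},
    projecting the perturbation back into the eps-ball after each step."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
        grad = (p - y) * w                          # d(cross-entropy)/dx through the sigmoid
        x_adv = x_adv + alpha * np.sign(grad)       # signed gradient ascent step
        x_adv = x + np.clip(x_adv - x, -eps, eps)   # project onto the L-inf eps-ball
    return x_adv
```

Adversarial training then minimizes the loss on such `x_adv` instead of (or alongside) the clean `x`, which is the inner-maximization / outer-minimization loop the related papers refine.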
arXiv Detail & Related papers (2021-12-15T17:54:08Z) - Adversarial Robustness with Semi-Infinite Constrained Learning [177.42714838799924]
The vulnerability of deep learning to input perturbations has raised serious questions about its use in safety-critical domains.
We propose a hybrid Langevin Monte Carlo training approach to mitigate this issue.
We show that our approach can mitigate the trade-off between nominal performance and adversarial robustness.
arXiv Detail & Related papers (2021-10-29T13:30:42Z) - A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via
Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
arXiv Detail & Related papers (2020-12-25T20:50:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all information) and is not responsible for any consequences.