Adversarial Robustness under Long-Tailed Distribution
- URL: http://arxiv.org/abs/2104.02703v1
- Date: Tue, 6 Apr 2021 17:53:08 GMT
- Title: Adversarial Robustness under Long-Tailed Distribution
- Authors: Tong Wu, Ziwei Liu, Qingqiu Huang, Yu Wang and Dahua Lin
- Abstract summary: Adversarial robustness has recently attracted extensive study, revealing the vulnerability and intrinsic characteristics of deep networks.
In this work we investigate adversarial vulnerability and defense under long-tailed distributions.
We propose a clean yet effective framework, RoBal, which consists of two dedicated modules: a scale-invariant classifier and data re-balancing.
- Score: 93.50792075460336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial robustness has recently attracted extensive study,
revealing the vulnerability and intrinsic characteristics of deep networks. However,
existing works on adversarial robustness mainly focus on balanced datasets,
while real-world data usually exhibits a long-tailed distribution. To push
adversarial robustness towards more realistic scenarios, in this work we
investigate adversarial vulnerability and defense under long-tailed
distributions. In particular, we first reveal the negative impacts induced by
imbalanced data on both recognition performance and adversarial robustness,
uncovering the intrinsic challenges of this problem. We then perform a
systematic study on existing long-tailed recognition methods in conjunction
with the adversarial training framework. Several valuable observations are
obtained: 1) natural accuracy is relatively easy to improve, 2) unreliable
evaluation can yield spurious gains in robust accuracy, and 3) boundary errors
limit further improvements in robustness. Inspired by these observations, we
propose a clean yet effective framework, RoBal, which consists of two dedicated
modules: a scale-invariant classifier and data re-balancing via margin
engineering at the training stage and boundary adjustment during inference.
Extensive experiments demonstrate the superiority of our approach over other
state-of-the-art defense methods. To the best of our knowledge, we are the
first to tackle adversarial robustness under long-tailed distributions, which
we believe is a significant step towards real-world robustness. Our code is
available at https://github.com/wutong16/Adversarial_Long-Tail.
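The abstract names the two RoBal modules only at a high level. Below is a minimal PyTorch sketch of one plausible reading, not the authors' implementation (see the linked repository for that): a cosine classifier for scale invariance, plus class-prior logit adjustment for margin engineering at training time and boundary adjustment at inference. The scale and tau values are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Scale-invariant classifier: logits depend only on the angle between
    the feature and each class weight, never on feature or weight norms."""
    def __init__(self, feat_dim: int, num_classes: int, scale: float = 16.0):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(num_classes, feat_dim))
        self.scale = scale  # fixed temperature; illustrative value

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.scale * F.linear(F.normalize(feats, dim=1),
                                     F.normalize(self.weight, dim=1))

def prior_adjusted_logits(logits: torch.Tensor, class_counts: torch.Tensor,
                          tau: float = 1.0, training: bool = True) -> torch.Tensor:
    """Class re-balancing via the log class prior (illustrative form).
    Adding tau * log_prior during training makes rare classes harder to fit,
    so the loss must carve out larger margins for them (margin engineering);
    subtracting it at test time shifts decision boundaries in favour of
    tail classes (boundary adjustment)."""
    log_prior = torch.log(class_counts.float() / class_counts.sum())
    return logits + tau * log_prior if training else logits - tau * log_prior
```

In an adversarial training loop the adjustment would be applied to the logits before the cross-entropy loss on adversarial examples; the sketch omits the attack itself.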
Related papers
- Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285] (arXiv 2023-08-01)
We propose a novel doubly robust instance-reweighted adversarial training framework.
Our importance weights are obtained by optimizing a KL-divergence-regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
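For context on the KL-regularized reweighting mentioned above: such objectives typically admit a closed-form solution in which each example's weight is a softmax of its loss. A minimal sketch of that generic scheme follows (not necessarily the paper's exact doubly robust estimator; tau is an assumed temperature):

```python
import torch

def kl_regularized_weights(per_example_loss: torch.Tensor,
                           tau: float = 1.0) -> torch.Tensor:
    """Maximizer of <w, loss> - tau * KL(w || uniform) over the probability
    simplex: a softmax of the losses, so harder (higher-loss) examples
    receive larger importance weights."""
    return torch.softmax(per_example_loss.detach() / tau, dim=0)

# Illustrative use inside a training step:
#   losses = F.cross_entropy(model(x_adv), y, reduction="none")
#   loss = (kl_regularized_weights(losses) * losses).sum()
```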
- Generalizability of Adversarial Robustness Under Distribution Shifts [57.767152566761304] (arXiv 2022-09-29)
We take a first step towards investigating the interplay between empirical and certified adversarial robustness on the one hand and domain generalization on the other.
We train robust models on multiple domains and evaluate their accuracy and robustness on an unseen domain.
We extend our study to cover a real-world medical application, in which adversarial augmentation significantly boosts the generalization of robustness with minimal effect on clean data accuracy.
- Explicit Tradeoffs between Adversarial and Natural Distributional Robustness [48.44639585732391] (arXiv 2022-09-15)
In practice, models need both types of robustness to be reliable.
In this work, we show that explicit tradeoffs in fact exist between adversarial and natural distributional robustness.
- How many perturbations break this model? Evaluating robustness beyond adversarial accuracy [28.934863462633636] (arXiv 2022-07-08)
We introduce adversarial sparsity, which quantifies how difficult it is to find a successful perturbation given both an input point and a constraint on the direction of the perturbation.
We show that sparsity provides valuable insight into neural networks in multiple ways.
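The paper defines adversarial sparsity precisely; the snippet below is only a crude Monte-Carlo illustration of the underlying idea, checking how often a fixed-budget step in a random direction already flips the prediction (the actual measure searches within each direction constraint):

```python
import torch

@torch.no_grad()
def directional_flip_rate(model, x, y, eps: float = 0.5, n_dirs: int = 256):
    """Fraction of random L2 directions along which an eps-budget step
    changes the model's prediction; more failure directions suggest a
    less robust (less 'sparse') neighborhood. Simplified, not the
    paper's metric."""
    flips = 0
    for _ in range(n_dirs):
        u = torch.randn_like(x)
        u = u / u.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))
        pred = model((x + eps * u).clamp(0, 1)).argmax(dim=1)
        flips += (pred != y).sum().item()
    return flips / (n_dirs * x.size(0))
```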
- Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391] (arXiv 2020-10-26)
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
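The full pre-training objective combines several views; as a minimal sketch of the consistency idea, here is an InfoNCE term between an augmented view and an adversarial view of the same images (not the paper's complete loss; the temperature is an assumed default):

```python
import torch
import torch.nn.functional as F

def adv_contrastive_loss(z_aug: torch.Tensor, z_adv: torch.Tensor,
                         temperature: float = 0.5) -> torch.Tensor:
    """InfoNCE between augmented and adversarial embeddings of the same
    batch: matching rows are positives, all other rows are negatives,
    so representations must stay consistent under both perturbation types."""
    z1 = F.normalize(z_aug, dim=1)
    z2 = F.normalize(z_adv, dim=1)
    logits = z1 @ z2.t() / temperature            # (B, B) similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    # symmetrized: each view must retrieve its own counterpart
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```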
- Geometry-aware Instance-reweighted Adversarial Training [78.70024866515756] (arXiv 2020-10-05)
In adversarial machine learning, it was commonly believed that robustness and accuracy hurt each other.
We propose geometry-aware instance-reweighted adversarial training, where the weights are based on how difficult it is to attack a natural data point.
Experiments show that our proposal boosts the robustness of standard adversarial training.
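Attack difficulty is typically proxied by how quickly PGD flips an example: points that flip after few steps sit close to the decision boundary and receive larger weights. A simplified sketch follows (the paper's weighting additionally includes a tunable bias term; the constants and step sizes here are assumptions):

```python
import torch
import torch.nn.functional as F

def pgd_steps_to_flip(model, x, y, eps=8/255, step=2/255, max_steps=10):
    """Least number of PGD steps needed to change each prediction; a small
    count means the natural point lies close to the decision boundary."""
    delta = torch.zeros_like(x, requires_grad=True)
    steps = torch.full((x.size(0),), max_steps, device=x.device)
    for k in range(max_steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta.add_(step * grad.sign()).clamp_(-eps, eps)
            flipped = model(x + delta).argmax(dim=1) != y
            steps = torch.minimum(steps, torch.where(
                flipped, torch.full_like(steps, k + 1), steps))
    return steps

def geometry_weights(steps: torch.Tensor, max_steps: int = 10) -> torch.Tensor:
    # fewer steps to flip -> closer to the boundary -> larger weight
    w = (1 + torch.tanh(5 * (1 - 2 * steps.float() / max_steps))) / 2
    return w / w.sum()  # normalized over the batch
```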
- SoK: Certified Robustness for Deep Neural Networks [13.10665264010575] (arXiv 2020-09-09)
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks.
In this paper, we systematize certifiably robust approaches and related practical and theoretical implications.
We also provide the first comprehensive benchmark on existing robustness verification and training approaches on different datasets.
This list is automatically generated from the titles and abstracts of the papers on this site.