Achieving Adversarial Robustness via Sparsity
- URL: http://arxiv.org/abs/2009.05423v1
- Date: Fri, 11 Sep 2020 13:15:43 GMT
- Title: Achieving Adversarial Robustness via Sparsity
- Authors: Shufan Wang, Ningyi Liao, Liyao Xiang, Nanyang Ye, Quanshi Zhang
- Abstract summary: We prove that the sparsity of network weights is closely associated with model robustness.
We propose a novel adversarial training method called inverse weights inheritance.
- Score: 33.11581532788394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Network pruning has been known to produce compact models without much
accuracy degradation. However, how the pruning process affects a network's
robustness, and the working mechanism behind it, remain unresolved. In this work,
we theoretically prove that the sparsity of network weights is closely associated
with model robustness. Through experiments on a variety of adversarial pruning
methods, we find that weight sparsity does not hurt but rather improves robustness,
and that both weight inheritance from the lottery ticket and adversarial training
improve model robustness during network pruning. Based on these findings, we
propose a novel adversarial training method called inverse weights inheritance,
which imposes a sparse weight distribution on a large network by inheriting
weights from a small network, thereby improving the robustness of the large
network.
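A minimal PyTorch-style sketch of what such weight inheritance could look like is given below; the function name, the assumption that the small network's tensors correspond to leading slices of the large network's tensors, and the zero-initialization of the remaining entries are illustrative choices rather than the paper's exact procedure. After inheritance, the large network would then be adversarially trained as usual.

import torch
import torch.nn as nn

def inherit_weights(large_net: nn.Module, small_net: nn.Module) -> None:
    # Copy each small-network tensor into the leading slice of the matching
    # large-network tensor and zero the rest, so the large network starts from
    # a sparse weight distribution inherited from the small network.
    large_state = large_net.state_dict()
    for name, small_w in small_net.state_dict().items():
        if name not in large_state or large_state[name].dim() != small_w.dim():
            continue
        target = torch.zeros_like(large_state[name])
        target[tuple(slice(0, s) for s in small_w.shape)] = small_w
        large_state[name] = target
    large_net.load_state_dict(large_state)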
Related papers
- Robustness to distribution shifts of compressed networks for edge
devices [6.606005367624169]
It is important to investigate the robustness of compressed networks under two types of data distribution shift: domain shifts and adversarial perturbations.
In this study, we discover that compressed models are less robust to distribution shifts than their original networks.
Moreover, compact networks obtained by knowledge distillation are much more robust to distribution shifts than pruned networks.
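For context, the compact distilled networks referred to above are typically trained with the standard knowledge-distillation objective (soft teacher targets plus hard labels); the sketch below shows that generic loss with illustrative temperature and mixing values, and is not claimed to be the cited paper's exact setup.

import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft term: match the teacher's temperature-softened output distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard term: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard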
arXiv Detail & Related papers (2024-01-22T15:00:32Z) - Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
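A common closed form for KL-divergence-regularized importance weights is a softmax over per-example (adversarial) losses; the sketch below shows that generic form and should be read as an assumption about, not a reproduction of, the paper's exact objective.

import torch

def kl_regularized_weights(per_example_losses, tau=1.0):
    # Maximizer of  sum_i w_i * loss_i - tau * KL(w || uniform)  over the simplex:
    # a softmax of the losses, which up-weights the most vulnerable examples.
    return torch.softmax(per_example_losses / tau, dim=0)

# Usage: batch_loss = (kl_regularized_weights(adv_losses).detach() * adv_losses).sum()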
arXiv Detail & Related papers (2023-08-01T06:16:18Z) - Revisiting the Trade-off between Accuracy and Robustness via Weight Distribution of Filters [17.316537476091867]
Adversarial attacks have been proven to be potential threats to Deep Neural Networks (DNNs).
We propose a sample-wise dynamic network architecture named Adversarial Weight-Varied Network (AW-Net)
AW-Net adaptively adjusts the network's weights based on regulation signals generated by an adversarial router.
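As a loose analogy only, a sample-wise router that rescales a layer's output channels might look like the toy module below; the actual AW-Net architecture and its regulation signals are more elaborate, so every name and design choice here is an assumption.

import torch.nn as nn

class RoutedConv(nn.Module):
    # Toy layer whose output channels are gated by a per-sample signal from a
    # small router network (illustrative stand-in for sample-wise weight variation).
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.router = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, out_ch), nn.Sigmoid(),
        )

    def forward(self, x):
        gate = self.router(x)                      # (N, out_ch) regulation signal
        return self.conv(x) * gate[:, :, None, None]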
arXiv Detail & Related papers (2023-06-06T06:09:11Z) - Robust low-rank training via approximate orthonormal constraints [2.519906683279153]
We introduce a robust low-rank training algorithm that maintains the network's weights on the low-rank matrix manifold.
The resulting model reduces both training and inference costs while ensuring well-conditioning and thus better adversarial robustness, without compromising model accuracy.
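The flavor of such a constraint can be illustrated with a low-rank linear layer W ≈ U Vᵀ plus a soft orthonormality penalty on the factors; this generic sketch does not reproduce the paper's manifold-based training algorithm.

import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    # Linear layer factored as W ≈ U @ V.T, with an added penalty that keeps the
    # columns of U and V approximately orthonormal (well-conditioned factors).
    def __init__(self, d_in, d_out, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_out, rank) / rank ** 0.5)
        self.V = nn.Parameter(torch.randn(d_in, rank) / rank ** 0.5)

    def forward(self, x):
        return x @ self.V @ self.U.t()

    def orth_penalty(self):
        eye = torch.eye(self.U.shape[1], device=self.U.device)
        return ((self.U.t() @ self.U - eye) ** 2).sum() + \
               ((self.V.t() @ self.V - eye) ** 2).sum()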
arXiv Detail & Related papers (2023-06-02T12:22:35Z) - Understanding the effect of sparsity on neural networks robustness [32.15505923976003]
This paper examines the impact of static sparsity on the robustness of a trained network to weight perturbations, data corruption, and adversarial examples.
We show that, up to a certain sparsity achieved by increasing network width and depth while keeping the network capacity fixed, sparsified networks consistently match and often outperform their initially dense versions.
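One simple way to produce the static sparse networks that such a study compares is global magnitude pruning, e.g. via PyTorch's pruning utilities; the paper's actual sparsification protocol may differ.

import torch.nn as nn
import torch.nn.utils.prune as prune

def global_magnitude_prune(model, sparsity):
    # Zero out the `sparsity` fraction of smallest-magnitude weights across all
    # convolutional and linear layers (a standard static-sparsity baseline).
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (nn.Conv2d, nn.Linear))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=sparsity)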
arXiv Detail & Related papers (2022-06-22T08:51:40Z) - High-Robustness, Low-Transferability Fingerprinting of Neural Networks [78.2527498858308]
This paper proposes Characteristic Examples for effectively fingerprinting deep neural networks.
It features high robustness to the base model against model pruning, as well as low transferability to unassociated models.
arXiv Detail & Related papers (2021-05-14T21:48:23Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
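The joint-perturbation view can be made concrete with a rough one-step sketch that perturbs both the input and the weights along their loss gradients; this is a generic approximation of the idea, not the paper's formal definition.

import torch

def joint_perturbation_loss(model, x, y, loss_fn, eps_x=8 / 255, eps_w=1e-3):
    # Evaluate the loss after simultaneous sign-gradient perturbations of the
    # data input and of the model weights, then restore the original weights.
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    params = [p for p in model.parameters() if p.requires_grad]
    grads_w = torch.autograd.grad(loss, params, retain_graph=True)
    (grad_x,) = torch.autograd.grad(loss, x)
    x_adv = (x + eps_x * grad_x.sign()).detach()
    with torch.no_grad():
        for p, g in zip(params, grads_w):
            p.add_(eps_w * g.sign())
        perturbed = loss_fn(model(x_adv), y).item()
        for p, g in zip(params, grads_w):
            p.sub_(eps_w * g.sign())
    return perturbed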
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - Do Wider Neural Networks Really Help Adversarial Robustness? [92.8311752980399]
We show that the model robustness is closely related to the tradeoff between natural accuracy and perturbation stability.
We propose a new Width Adjusted Regularization (WAR) method that adaptively enlarges $\lambda$ on wide models.
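Given the accuracy-stability tradeoff mentioned above, the regularization being adjusted can be written TRADES-style as a natural loss plus $\lambda$ times a perturbation-stability term; in the sketch below $\lambda$ is simply scaled with a width factor, which is an assumed stand-in for WAR's actual adjustment rule.

import torch.nn.functional as F

def war_style_loss(nat_logits, adv_logits, labels, width_factor, base_lambda=6.0):
    # Natural cross-entropy plus a KL stability term between natural and
    # adversarial predictions, with lambda enlarged for wider models.
    lam = base_lambda * width_factor
    natural = F.cross_entropy(nat_logits, labels)
    stability = F.kl_div(F.log_softmax(adv_logits, dim=1),
                         F.softmax(nat_logits, dim=1),
                         reduction="batchmean")
    return natural + lam * stability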
arXiv Detail & Related papers (2020-10-03T04:46:17Z) - Rethinking Clustering for Robustness [56.14672993686335]
ClusTR is a clustering-based and adversary-free training framework to learn robust models.
ClusTR outperforms adversarially-trained networks by up to 4% under strong PGD attacks.
arXiv Detail & Related papers (2020-06-13T16:55:51Z) - Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness [97.67477497115163]
We use mode connectivity to study the adversarial robustness of deep neural networks.
Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.
Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.
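In mode-connectivity analyses, two trained networks are typically joined by a low-loss path such as a quadratic Bezier curve through a learned midpoint; the helper below merely evaluates weights at a point on such a curve (the midpoint training is omitted, and the state-dict representation is an assumption).

def bezier_weights(theta_a, theta_mid, theta_b, t):
    # Quadratic Bezier interpolation between two endpoint state_dicts through a
    # midpoint; load the result into a model to measure robust accuracy along the path.
    return {k: ((1 - t) ** 2 * theta_a[k]
                + 2 * t * (1 - t) * theta_mid[k]
                + t ** 2 * theta_b[k])
               if theta_a[k].is_floating_point() else theta_a[k]
            for k in theta_a}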
arXiv Detail & Related papers (2020-04-30T19:12:50Z) - Defense Through Diverse Directions [24.129270094757587]
We develop a novel Bayesian neural network methodology to achieve strong adversarial robustness.
We demonstrate that by encouraging the network to distribute evenly across inputs, the network becomes less susceptible to localized, brittle features.
We show empirical robustness on several benchmark datasets.
arXiv Detail & Related papers (2020-03-24T01:22:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.