Blind Adversarial Pruning: Balance Accuracy, Efficiency and Robustness
- URL: http://arxiv.org/abs/2004.05913v1
- Date: Fri, 10 Apr 2020 02:27:48 GMT
- Title: Blind Adversarial Pruning: Balance Accuracy, Efficiency and Robustness
- Authors: Haidong Xie, Lixin Qian, Xueshuang Xiang, Naijin Liu
- Abstract summary: This paper first investigates the robustness of pruned models with different compression ratios under the gradual pruning process.
We then test the performance of mixing clean data and adversarial examples into the gradual pruning process, an approach called adversarial pruning.
To better balance the AER, we propose an approach called blind adversarial pruning (BAP), which introduces the idea of blind adversarial training into the gradual pruning process.
- Score: 3.039568795810294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growing interest in attacks on and defenses of deep neural
networks, researchers are paying increasing attention to the robustness of
deploying them on devices with limited memory. Thus, unlike adversarial
training, which considers only the balance between accuracy and robustness, we
address a more meaningful and critical issue, i.e., the balance among accuracy,
efficiency, and robustness (AER). Several recent works have focused on this
issue but report differing observations, leaving the relations among AER
unclear. This paper
first investigates the robustness of pruned models with different compression
ratios under the gradual pruning process and concludes that the robustness of
the pruned model varies drastically across different pruning processes,
especially under high-strength attacks. Second, we test the performance of
mixing clean data and adversarial examples (generated with a prescribed uniform
budget) into the gradual pruning process, an approach called adversarial
pruning, and find that the pruned model's robustness is highly sensitive to the
budget. Furthermore, to better balance the AER, we propose
an approach called blind adversarial pruning (BAP), which introduces the idea
of blind adversarial training into the gradual pruning process. The main idea
is to use a cutoff-scale strategy to adaptively estimate a nonuniform budget to
modify the AEs used during pruning, thus ensuring that the strengths of the AEs
are dynamically kept within a reasonable range at each pruning step and
ultimately improving the overall AER of the pruned model. Experimental results
obtained by using BAP to prune classification models on several benchmarks
demonstrate its competitive performance: the robustness of a model pruned by
BAP is more stable across varying pruning processes, and BAP achieves a better
overall AER than adversarial pruning.
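To make the procedure concrete, below is a minimal sketch of gradual pruning interleaved with adversarial fine-tuning, assuming PyTorch. The abstract does not spell out the cutoff-scale rule, so the estimate_budget heuristic, its scales and cutoff parameters, and the one-step FGSM attack are illustrative assumptions rather than the paper's implementation.

```python
# Sketch of blind adversarial pruning (BAP): gradual magnitude pruning with
# adversarial examples whose budget is re-estimated at every step. The
# cutoff-scale rule below is an assumed placeholder, not the paper's.
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def fgsm_example(model, x, y, eps):
    """One-step adversarial example with L-infinity budget eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def estimate_budget(model, x, y, scales, cutoff=0.5):
    """Assumed cutoff-scale heuristic: use the largest budget whose AEs the
    current pruned model still classifies correctly at rate >= cutoff."""
    for eps in sorted(scales, reverse=True):
        x_adv = fgsm_example(model, x, y, eps)
        if (model(x_adv).argmax(1) == y).float().mean() >= cutoff:
            return eps
    return min(scales)

def bap_step(model, loader, optimizer, sparsity, scales):
    """One gradual-pruning step: prune, then fine-tune on a mix of clean
    data and nonuniform-budget adversarial examples."""
    for module in model.modules():
        if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
            prune.l1_unstructured(module, name="weight", amount=sparsity)
    for x, y in loader:
        eps = estimate_budget(model, x, y, scales)  # nonuniform budget
        x_adv = fgsm_example(model, x, y, eps)
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Fixing scales to a single value recovers the uniform-budget adversarial pruning that the abstract reports as highly budget-sensitive.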
Related papers
- PUMA: margin-based data pruning [51.12154122266251]
We focus on data pruning, where some training samples are removed based on their distance to the model's classification boundary (i.e., their margin).
We propose PUMA, a new data pruning strategy that computes the margin using DeepFool.
We show that PUMA can be used on top of the current state-of-the-art methodology in robustness and, unlike existing data pruning strategies, significantly improves model performance.
arXiv Detail & Related papers (2024-05-10T08:02:20Z)
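As a rough illustration of margin-based data pruning, the following sketch (again assuming PyTorch) approximates each sample's distance to the decision boundary with a simplified DeepFool-style iteration and then drops a fraction of the training set by margin. The step size, keep ratio, and the choice of which margins to keep are illustrative assumptions, not PUMA's actual procedure.

```python
# Illustrative margin-based data pruning in the spirit of PUMA. The margin
# estimate is a crude DeepFool-style iteration; parameters are assumptions.
import torch
import torch.nn.functional as F

def deepfool_margin(model, x, y, steps=10, step_size=0.01):
    """Approximate distance to the boundary: take small gradient steps
    until the prediction flips, then return the perturbation norm."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        if logits.argmax(1).item() != y.item():
            break
        grad, = torch.autograd.grad(F.cross_entropy(logits, y), x_adv)
        x_adv = (x_adv + step_size * grad / (grad.norm() + 1e-12)).detach()
    return (x_adv - x).norm().item()

def prune_dataset(model, dataset, keep_ratio=0.8):
    """Rank samples by estimated margin and keep a fraction of them
    (here the smallest margins, i.e., samples near the boundary)."""
    margins = [deepfool_margin(model, x.unsqueeze(0), torch.tensor([y]))
               for x, y in dataset]
    order = sorted(range(len(dataset)), key=margins.__getitem__)
    return [dataset[i] for i in order[:int(keep_ratio * len(dataset))]]
```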
- Towards Understanding the Robustness of Diffusion-Based Purification: A Stochastic Perspective [65.10019978876863]
Diffusion-Based Purification (DBP) has emerged as an effective defense mechanism against adversarial attacks.
In this paper, we argue that the inherent stochasticity in the DBP process is the primary driver of its robustness.
arXiv Detail & Related papers (2024-04-22T16:10:38Z)
- Towards Understanding Dual BN In Hybrid Adversarial Training [79.92394747290905]
We show that disentangling the BN statistics plays a smaller role than disentangling the affine parameters in model training.
We propose a two-task hypothesis that serves as the empirical foundation and a unified framework for Hybrid-AT improvement.
arXiv Detail & Related papers (2024-03-28T05:08:25Z)
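As context for the dual-BN entry above, here is a generic sketch of the design in PyTorch: two BatchNorm branches route clean and adversarial batches so that their normalization statistics and affine parameters are disentangled. This is the common formulation of dual BN, not the paper's exact module, and the names are assumptions.

```python
# Generic dual-BN sketch: one BatchNorm branch per input distribution.
import torch.nn as nn

class DualBatchNorm2d(nn.Module):
    def __init__(self, num_features):
        super().__init__()
        self.bn_clean = nn.BatchNorm2d(num_features)  # clean-batch branch
        self.bn_adv = nn.BatchNorm2d(num_features)    # adversarial branch

    def forward(self, x, adversarial=False):
        # Route the batch to the branch matching its distribution.
        return self.bn_adv(x) if adversarial else self.bn_clean(x)
```

Under the finding quoted above, a variant that shares the running statistics between the two branches while keeping the affine weight and bias separate would isolate the component that appears to matter most.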
- Perturbation-Invariant Adversarial Training for Neural Ranking Models: Improving the Effectiveness-Robustness Trade-Off [107.35833747750446]
Adversarial examples can be crafted by adding imperceptible perturbations to legitimate documents.
This vulnerability raises significant concerns about their reliability and hinders the widespread deployment of NRMs.
In this study, we establish theoretical guarantees regarding the effectiveness-robustness trade-off in NRMs.
arXiv Detail & Related papers (2023-12-16T05:38:39Z)
- PEARL: Preprocessing Enhanced Adversarial Robust Learning of Image Deraining for Semantic Segmentation [42.911517493220664]
We present the first attempt to improve the robustness of semantic segmentation tasks by simultaneously handling different types of degradation factors.
Our approach effectively handles both rain streaks and adversarial perturbations by transferring the robustness of the segmentation model to the image deraining model.
As opposed to the commonly used Negative Adversarial Attack (NAA), we design the Auxiliary Mirror Attack (AMA) to introduce positive information prior to the training of the PEARL framework.
arXiv Detail & Related papers (2023-05-25T04:44:17Z)
- Robustness and Accuracy Could Be Reconcilable by (Proper) Definition [109.62614226793833]
The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
We find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance.
We propose SCORE, a self-consistent robust error; by definition, SCORE facilitates the reconciliation between robustness and accuracy, while still handling the worst-case uncertainty.
arXiv Detail & Related papers (2022-02-21T10:36:09Z)
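For reference, the robust error this entry calls improperly defined is conventionally written as follows; this is the standard textbook formulation, not an equation quoted from the paper.

```latex
% Standard 0-1 robust error over an epsilon-ball: the inner maximization
% demands invariance everywhere in the ball, regardless of whether the
% true class actually changes there (the local-invariance bias).
\mathcal{R}_{\mathrm{rob}}(f)
  = \mathbb{E}_{(x,y)\sim\mathcal{D}}
    \Big[\max_{\|x'-x\|_p \le \epsilon} \mathbb{1}\big(f(x') \neq y\big)\Big]
```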
- Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm [7.662952656290564]
Various pruning approaches have been proposed to reduce the footprint requirements of Transformer-based language models.
We show for the first time that reducing the risk of overfitting can help the effectiveness of pruning under the pretrain-and-finetune paradigm.
arXiv Detail & Related papers (2021-10-15T16:42:56Z)
- Blind Adversarial Training: Balance Accuracy and Robustness [9.224557511013584]
Adversarial training (AT) aims to improve the robustness of deep learning models by mixing clean data and adversarial examples (AEs).
This paper proposes a novel AT approach named blind adversarial training (BAT) to better balance accuracy and robustness.
arXiv Detail & Related papers (2020-04-10T02:16:01Z)
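For context, here is a minimal sketch of the standard AT recipe that BAT refines, assuming PyTorch and a PGD attack with a fixed, uniform budget eps; BAT's point, per the summary above, is that a fixed budget balances accuracy and robustness poorly, so the constants here are conventional defaults rather than the paper's settings.

```python
# Baseline adversarial training: mix clean data with PGD adversarial
# examples generated under a fixed L-infinity budget eps.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent inside an L-infinity ball of radius eps."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def at_step(model, x, y, optimizer):
    """One update on an even mix of clean and adversarial examples."""
    x_adv = pgd_attack(model, x, y)
    loss = 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```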
- Adversarial Robustness on In- and Out-Distribution Improves Explainability [109.68938066821246]
RATIO is a training procedure for robustness via Adversarial Training on In- and Out-distribution.
RATIO achieves state-of-the-art $l_2$ adversarial robustness on CIFAR10 and maintains better clean accuracy.
arXiv Detail & Related papers (2020-03-20T18:57:52Z)