CSTAR: Towards Compact and STructured Deep Neural Networks with
Adversarial Robustness
- URL: http://arxiv.org/abs/2212.01957v1
- Date: Sun, 4 Dec 2022 23:59:47 GMT
- Title: CSTAR: Towards Compact and STructured Deep Neural Networks with
Adversarial Robustness
- Authors: Huy Phan, Miao Yin, Yang Sui, Bo Yuan, Saman Zonouz
- Abstract summary: CSTAR is an efficient solution that can simultaneously impose the low-rankness-based Compactness, high STructuredness and high Adversarial Robustness on the target DNN models.
Compared with the state-of-the-art robust structured pruning methods, CSTAR shows consistently better performance.
- Score: 19.69048976479834
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Model compression and model defense for deep neural networks (DNNs) have been
extensively and individually studied. Considering the co-importance of model
compactness and robustness in practical applications, several prior works have
explored improving the adversarial robustness of sparse neural networks.
However, the structured sparse models obtained by existing works suffer
severe performance degradation for both benign and robust accuracy, thereby
causing a challenging dilemma between robustness and structuredness of the
compact DNNs. To address this problem, in this paper, we propose CSTAR, an
efficient solution that can simultaneously impose the low-rankness-based
Compactness, high STructuredness and high Adversarial Robustness on the target
DNN models. By formulating the low-rankness and robustness requirements within
the same framework and globally determining the ranks, the compressed DNNs can
simultaneously achieve high compression performance and strong adversarial
robustness. Evaluations for various DNN models on different datasets
demonstrate the effectiveness of CSTAR. Compared with the state-of-the-art
robust structured pruning methods, CSTAR shows consistently better performance.
For instance, when compressing ResNet-18 on CIFAR-10, CSTAR can achieve up to
20.07% and 11.91% improvement for benign accuracy and robust accuracy,
respectively. For compressing ResNet-18 with a 16x compression ratio on ImageNet,
CSTAR can obtain 8.58% benign accuracy gain and 4.27% robust accuracy gain
compared to the existing robust structured pruning method.
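The low-rankness-based compactness above rests on factorizing weight matrices into products of smaller ones. A minimal sketch of that building block via truncated SVD (illustrative only; CSTAR's contribution is choosing the ranks globally under a robustness objective, which this sketch does not attempt):

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate W (m x n) by U_r @ V_r with the given rank.

    Replacing one dense layer with two smaller ones cuts parameters
    from m*n down to rank*(m + n), a structured compression that
    needs no sparse-matrix support at inference time.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # (m, rank), singular values folded in
    V_r = Vt[:rank, :]             # (rank, n)
    return U_r, V_r

# Toy example: a 256x512 layer compressed to rank 32.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))
U_r, V_r = low_rank_factorize(W, rank=32)
W_approx = U_r @ V_r
# Parameter count drops from 256*512 = 131072 to 32*(256+512) = 24576.
```

By the Eckart-Young theorem this truncation is the best rank-32 approximation in Frobenius norm; the question the paper addresses is which rank to assign each layer so that adversarial robustness survives the compression.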
Related papers
- CCSRP: Robust Pruning of Spiking Neural Networks through Cooperative Coevolution [2.5388345537743056]
Spiking neural networks (SNNs) have shown promise in various dynamic visual tasks, yet those ready for practical deployment often lack the compactness and robustness essential in resource-limited and safety-critical settings.
We propose CCSRP, an innovative robust pruning method for SNNs, underpinned by cooperative co-evolution.
arXiv Detail & Related papers (2024-07-18T04:28:16Z)
- Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness [47.9744734181236]
We explore the concept of Lipschitz continuity to certify the robustness of deep neural networks (DNNs) against adversarial attacks.
We propose a novel algorithm that remaps the input domain into a constrained range, reducing the Lipschitz constant and potentially enhancing robustness.
Our method achieves the best robust accuracy for CIFAR10, CIFAR100, and ImageNet datasets on the RobustBench leaderboard.
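The intuition behind input remapping can be sketched with a plain affine contraction (an assumption for illustration; the paper's remap is data-driven, and `scale` here is a placeholder knob):

```python
import numpy as np

def remap_inputs(x, scale=0.5):
    """Contract inputs in [0, 1] toward 0.5 by `scale`.

    The remap itself is scale-Lipschitz, so composing it with a network
    of Lipschitz constant L gives an end-to-end bound of scale * L
    with respect to perturbations of the original input.
    """
    return 0.5 + scale * (x - 0.5)
```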
arXiv Detail & Related papers (2024-06-28T03:10:36Z)
- Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses [82.3052187788609]
Adversarial training (AT) is considered to be one of the most reliable defenses against adversarial attacks.
Recent works show generalization improvement with adversarial samples under novel threat models.
We propose a novel threat model called Joint Space Threat Model (JSTM)
Under JSTM, we develop novel adversarial attacks and defenses.
arXiv Detail & Related papers (2021-12-12T21:08:14Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
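A "slow start, fast decay" schedule can be sketched as linear warmup followed by exponential decay (hedged: `peak_lr`, `warmup_frac`, and the decay factor are placeholder hyperparameters, not the paper's exact settings):

```python
def slow_start_fast_decay_lr(step, total_steps, peak_lr=0.01,
                             warmup_frac=0.1, decay=0.95):
    """Linear warmup ('slow start') then exponential decay ('fast decay')."""
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    return peak_lr * decay ** (step - warmup_steps)
```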
arXiv Detail & Related papers (2020-12-25T20:50:15Z)
- A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of DNNs [8.597091257152567]
We present a dynamic network rewiring (DNR) method to generate pruned deep neural network (DNN) models that are robust against adversarial attacks.
Our experiments show that DNR consistently finds compressed models with better clean and adversarial image classification performance than what is achievable through state-of-the-art alternatives.
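DNR's starting point is weight pruning; plain one-shot magnitude pruning (not DNR's dynamic rewiring, which also regrows connections during training) looks like:

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the fraction `sparsity` of smallest-magnitude weights."""
    k = int(W.size * sparsity)
    if k == 0:
        return W.copy()
    thresh = np.sort(np.abs(W), axis=None)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(W) <= thresh, 0.0, W)
```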
arXiv Detail & Related papers (2020-11-03T19:49:00Z)
- Do Wider Neural Networks Really Help Adversarial Robustness? [92.8311752980399]
We show that the model robustness is closely related to the tradeoff between natural accuracy and perturbation stability.
We propose a new Width Adjusted Regularization (WAR) method that adaptively enlarges $\lambda$ on wide models.
arXiv Detail & Related papers (2020-10-03T04:46:17Z)
- EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks [18.241639570479563]
Deep Neural Networks (DNNs) are vulnerable to adversarial attacks in which small input perturbations can produce catastrophic misclassifications.
We propose EMPIR, ensembles of quantized DNN models with different numerical precisions, as a new approach to increase robustness against adversarial attacks.
Our results indicate that EMPIR boosts the average adversarial accuracies by 42.6%, 15.2% and 10.5% for the DNN models trained on the MNIST, CIFAR-10 and ImageNet datasets respectively.
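The EMPIR idea can be sketched as quantizing copies of a model to different bit-widths and averaging their softmax outputs (the function names and the averaging rule are illustrative assumptions, not EMPIR's exact combination scheme):

```python
import numpy as np

def quantize_weights(W, bits):
    """Uniform symmetric quantization of W to the given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(W).max() / qmax
    return np.round(W / scale) * scale

def ensemble_predict(logits_list):
    """Average the members' softmax outputs and take the argmax."""
    probs = []
    for logits in logits_list:
        e = np.exp(logits - logits.max(axis=-1, keepdims=True))
        probs.append(e / e.sum(axis=-1, keepdims=True))
    return np.stack(probs).mean(axis=0).argmax(axis=-1)
```

The intuition is that an adversarial perturbation crafted against one precision transfers imperfectly to the others, so the ensemble disagrees with the attack more often than any single member.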
arXiv Detail & Related papers (2020-04-21T17:17:09Z)
- Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference [119.19779637025444]
Deep networks were recently suggested to face a trade-off between accuracy (on clean natural images) and robustness (on adversarially perturbed images).
This paper studies multi-exit networks associated with input-adaptive inference, showing their strong promise in achieving a "sweet point" in co-optimizing model accuracy, robustness and efficiency.
arXiv Detail & Related papers (2020-02-24T00:40:22Z)
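Input-adaptive multi-exit inference can be sketched as running a cascade of classifier heads and stopping at the first confident one (a hedged sketch; `threshold` and the max-softmax confidence rule are illustrative assumptions):

```python
import numpy as np

def multi_exit_infer(x, exits, threshold=0.9):
    """Return (prediction, exit index): the first exit whose max softmax
    probability clears `threshold` answers; otherwise the last exit does.
    Easy inputs leave early, saving compute on average."""
    for i, exit_fn in enumerate(exits):
        logits = exit_fn(x)
        e = np.exp(logits - logits.max())
        probs = e / e.sum()
        if probs.max() >= threshold:
            return int(probs.argmax()), i
    return int(probs.argmax()), len(exits) - 1
```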
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.