Can pruning improve certified robustness of neural networks?
- URL: http://arxiv.org/abs/2206.07311v2
- Date: Fri, 17 Jun 2022 04:18:44 GMT
- Title: Can pruning improve certified robustness of neural networks?
- Authors: Zhangheng Li, Tianlong Chen, Linyi Li, Bo Li, Zhangyang Wang
- Abstract summary: We show that neural network pruning can improve the certified robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted by up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
- Score: 106.03070538582222
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rapid development of deep learning, neural networks have grown so large
that training and inference often overwhelm hardware resources. Since neural networks are often
over-parameterized, one effective way to reduce such computational overhead is
neural network pruning, by removing redundant parameters from trained neural
networks. It has recently been observed that pruning can not only reduce
computational overhead but also improve the empirical robustness of deep neural
networks (NNs), potentially by removing spurious correlations while
preserving predictive accuracy. This paper demonstrates, for the first time,
that pruning can generally improve certified robustness for
ReLU-based NNs under the complete verification setting. Using the popular
Branch-and-Bound (BaB) framework, we find that pruning can enhance the
estimated bound tightness of certified robustness verification, by alleviating
linear relaxation and sub-domain split problems. We empirically verify our
findings with off-the-shelf pruning methods and further present a new
stability-based pruning method tailored to reducing neuron instability, which
outperforms existing pruning methods in enhancing certified robustness. Our
experiments show that by appropriately pruning an NN, its certified accuracy
can be boosted by up to 8.2% under standard training, and by up to 24.5% under
adversarial training on the CIFAR10 dataset. We additionally observe the
existence of certified lottery tickets that can match both standard and
certified robust accuracies of the original dense models across different
datasets. Our findings offer a new angle on the intriguing interaction between
sparsity and robustness, namely interpreting certified robustness through the
lens of neuron stability. Code is available at:
https://github.com/VITA-Group/CertifiedPruning.
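For intuition, here is a minimal PyTorch sketch of the neuron-stability quantity the abstract refers to. It is not taken from the paper or its codebase: the layer sizes, the 50% sparsity level, the eps = 2/255 radius, and the helper names (relu_pre_activation_bounds, count_unstable) are illustrative assumptions, and naive interval bound propagation stands in for the tighter bounding a Branch-and-Bound verifier would use. The sketch counts ReLU neurons whose pre-activation interval straddles zero before and after off-the-shelf magnitude pruning; such unstable neurons are the ones a complete verifier must linearly relax or branch on, which is why reducing their number can tighten certified bounds.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


def relu_pre_activation_bounds(layers, x, eps):
    """Propagate the L_inf box [x - eps, x + eps] through Linear/ReLU layers
    with naive interval bound propagation and return, for each ReLU layer,
    the (lower, upper) bounds of its pre-activation."""
    lb, ub = x - eps, x + eps
    bounds = []
    for layer in layers:
        if isinstance(layer, nn.Linear):
            mid, rad = (lb + ub) / 2, (ub - lb) / 2
            center = mid @ layer.weight.t() + layer.bias
            radius = rad @ layer.weight.abs().t()
            lb, ub = center - radius, center + radius
        elif isinstance(layer, nn.ReLU):
            bounds.append((lb, ub))              # pre-activation interval
            lb, ub = lb.clamp(min=0), ub.clamp(min=0)
    return bounds


def count_unstable(bounds):
    """A ReLU neuron is 'unstable' if its pre-activation interval straddles 0;
    these neurons force linear relaxation or branching during verification."""
    return sum(int(((lb < 0) & (ub > 0)).sum()) for lb, ub in bounds)


torch.manual_seed(0)
# Toy dense ReLU network on a flattened 32x32x3 input (sizes are illustrative).
net = nn.Sequential(nn.Linear(3072, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 10))

x = torch.rand(1, 3072)      # stand-in input; normally a real (normalized) image
eps = 2.0 / 255              # illustrative L_inf certification radius

print("dense  - unstable ReLUs:",
      count_unstable(relu_pre_activation_bounds(list(net), x, eps)))

# Off-the-shelf unstructured magnitude pruning (50% per-layer sparsity),
# standing in for the "off-the-shelf pruning methods" the abstract mentions.
for layer in net:
    if isinstance(layer, nn.Linear):
        prune.l1_unstructured(layer, name="weight", amount=0.5)
        prune.remove(layer, "weight")    # bake the zeros into the weight tensor

print("pruned - unstable ReLUs:",
      count_unstable(relu_pre_activation_bounds(list(net), x, eps)))
```

In the paper's setting, this count would be measured on trained (and adversarially trained) CIFAR10 models rather than a randomly initialized toy network, and the paper's stability-based pruning selects which weights to remove specifically to reduce it.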
Related papers
- Confident magnitude-based neural network pruning [0.0]
Pruning neural networks has proven to be a successful approach to increase the efficiency and reduce the memory storage of deep learning models.
We leverage recent techniques on distribution-free uncertainty quantification to provide finite-sample statistical guarantees to compress deep neural networks.
This work presents experiments in computer vision tasks to illustrate how uncertainty-aware pruning is a useful approach to deploy sparse neural networks safely.
arXiv Detail & Related papers (2024-08-08T21:29:20Z)
- CreINNs: Credal-Set Interval Neural Networks for Uncertainty Estimation in Classification Tasks [5.19656787424626]
Uncertainty estimation is increasingly attractive for improving the reliability of neural networks.
We present novel credal-set interval neural networks (CreINNs) designed for classification tasks.
arXiv Detail & Related papers (2024-01-10T10:04:49Z)
- Benign Overfitting for Two-layer ReLU Convolutional Neural Networks [60.19739010031304]
We establish algorithm-dependent risk bounds for learning two-layer ReLU convolutional neural networks with label-flipping noise.
We show that, under mild conditions, the neural network trained by gradient descent can achieve near-zero training loss and Bayes optimal test risk.
arXiv Detail & Related papers (2023-03-07T18:59:38Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- CorrectNet: Robustness Enhancement of Analog In-Memory Computing for Neural Networks by Error Suppression and Compensation [4.570841222958966]
We propose a framework to enhance the robustness of neural networks under variations and noise.
We show that inference accuracy of neural networks can be recovered from as low as 1.69% under variations and noise.
arXiv Detail & Related papers (2022-11-27T19:13:33Z)
- Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness [172.61581010141978]
Certifiable robustness is a desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios.
We propose a novel solution to strategically manipulate neurons, by "grafting" appropriate levels of linearity.
arXiv Detail & Related papers (2022-06-15T22:42:29Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Robustness against Adversarial Attacks in Neural Networks using Incremental Dissipativity [3.8673567847548114]
Adversarial examples can easily degrade the classification performance in neural networks.
This work proposes an incremental dissipativity-based robustness certificate for neural networks.
arXiv Detail & Related papers (2021-11-25T04:42:57Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills knowledge from real-valued networks into binary networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
- Improve Adversarial Robustness via Weight Penalization on Classification Layer [20.84248493946059]
Deep neural networks are vulnerable to adversarial attacks.
Recent studies show that well-designed classification parts can lead to better robustness.
We develop a novel lightweight defensive method based on weight penalization.
arXiv Detail & Related papers (2020-10-08T08:57:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.