Fast Training of Provably Robust Neural Networks by SingleProp
- URL: http://arxiv.org/abs/2102.01208v1
- Date: Mon, 1 Feb 2021 22:12:51 GMT
- Title: Fast Training of Provably Robust Neural Networks by SingleProp
- Authors: Akhilan Boopathy, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Gaoyuan
Zhang, Luca Daniel
- Abstract summary: We develop a new regularizer that is more efficient than existing certified defenses, requiring only one additional forward propagation through a network.
We demonstrate improvements in training speed and comparable certified accuracy relative to state-of-the-art certified defenses.
- Score: 71.19423596238568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works have developed several methods of defending neural networks
against adversarial attacks with certified guarantees. However, these
techniques can be computationally costly due to the use of certification during
training. We develop a new regularizer that is more efficient than
existing certified defenses, requiring only one additional forward propagation
through a network, and that can be used to train networks with similar
certified accuracy. Through experiments on MNIST and CIFAR-10, we demonstrate
improvements in training speed and comparable certified accuracy relative to
state-of-the-art certified defenses.
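To illustrate the single-extra-pass idea, the sketch below (a rough PyTorch illustration, not the paper's exact SingleProp formulation) propagates an L-infinity interval radius through the absolute values of the layer weights in one additional pass and adds the resulting output spread to the loss as a penalty.

```python
# Hedged sketch of a single-extra-pass robustness regularizer (PyTorch).
# This is NOT the exact SingleProp formula; it only illustrates the idea of
# propagating an L_inf perturbation radius through |W| in one extra pass.
import torch
import torch.nn as nn

def radius_regularizer(layers, x, eps):
    """Propagate an interval radius through Linear/ReLU layers in one extra pass."""
    r = torch.full_like(x, eps)                  # per-coordinate input radius
    for layer in layers:
        if isinstance(layer, nn.Linear):
            r = r @ layer.weight.abs().t()       # radius grows with |W|
        # ReLU can only shrink an interval, so keeping r is a sound over-approximation
    return r.sum(dim=1).mean()                   # penalize worst-case output spread

# Hypothetical usage, assuming `model` is an nn.Sequential of Linear/ReLU layers:
# loss = F.cross_entropy(model(x), y) + lam * radius_regularizer(list(model), x, eps)
```

By contrast, full bound propagation tracks both a lower and an upper bound per activation, and certification-in-the-loop defenses are costlier still; the point of a single-pass regularizer is to keep the per-batch overhead close to one forward pass.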
Related papers
- On Using Certified Training towards Empirical Robustness [40.582830117229854]
We show that a certified training algorithm can prevent catastrophic overfitting in single-step adversarial training.
We also present a novel regularizer for network over-approximations that can achieve similar effects while markedly reducing runtime.
arXiv Detail & Related papers (2024-10-02T14:56:21Z) - Towards Certified Unlearning for Deep Neural Networks [50.816473152067104]
Certified unlearning has been extensively studied in convex machine learning models.
We propose several techniques to bridge the gap between certified unlearning and deep neural networks (DNNs).
arXiv Detail & Related papers (2024-08-01T21:22:10Z) - Quantization-aware Interval Bound Propagation for Training Certifiably
Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
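For context, the sketch below shows plain interval bound propagation (the non-quantized building block that QA-IBP extends); the quantization-aware parts of the cited method are not reproduced here, and the code is a hedged illustration rather than the authors' implementation.

```python
# Hedged sketch of standard interval bound propagation (IBP) in PyTorch;
# QA-IBP adds quantization-aware handling that is omitted here.
import torch.nn as nn
import torch.nn.functional as F

def ibp_bounds(layers, x, eps):
    """Sound element-wise output bounds for inputs within an L_inf ball of radius eps."""
    lower, upper = x - eps, x + eps
    for layer in layers:
        if isinstance(layer, nn.Linear):
            mid, rad = (lower + upper) / 2, (upper - lower) / 2
            mid = F.linear(mid, layer.weight, layer.bias)
            rad = F.linear(rad, layer.weight.abs())   # radius passes through |W|
            lower, upper = mid - rad, mid + rad
        elif isinstance(layer, nn.ReLU):
            lower, upper = lower.clamp(min=0), upper.clamp(min=0)
    return lower, upper
```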
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Accelerating Certified Robustness Training via Knowledge Transfer [3.5934248574481717]
We propose a framework for reducing the computational overhead of any certifiably robust training method through knowledge transfer.
Our experiments on CIFAR-10 show that CRT speeds up certified robustness training by $8\times$ on average across three different architecture generations.
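A generic knowledge-distillation loss gives the flavor of such transfer; this is a hedged sketch under the assumption of a certifiably trained teacher providing `teacher_logits`, not the cited CRT procedure itself.

```python
# Hedged sketch: distillation-style transfer from a certifiably robust teacher
# to a cheaper student; not the cited paper's exact CRT framework.
import torch.nn.functional as F

def transfer_loss(student_logits, teacher_logits, y, T=4.0, alpha=0.9):
    hard = F.cross_entropy(student_logits, y)                  # ordinary task loss
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),  # match teacher's soft labels
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    return alpha * soft + (1 - alpha) * hard
```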
arXiv Detail & Related papers (2022-10-25T19:12:28Z) - Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
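As a point of reference, the hedged sketch below performs plain global magnitude pruning with `torch.nn.utils.prune`; the cited work studies how such sparsification interacts with certified accuracy, not this exact recipe.

```python
# Hedged sketch: global L1 (magnitude) pruning with torch.nn.utils.prune.
import torch.nn as nn
import torch.nn.utils.prune as prune

def magnitude_prune(model, amount=0.5):
    """Zero out the `amount` fraction of smallest-magnitude weights globally."""
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (nn.Linear, nn.Conv2d))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=amount)
    for module, name in params:
        prune.remove(module, name)   # bake the zeros into the weight tensors
    return model
```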
arXiv Detail & Related papers (2022-06-15T05:48:51Z) - Improving the Certified Robustness of Neural Networks via Consistency
Regularization [25.42238710803711]
A range of defense methods have been proposed to improve the robustness of neural networks on adversarial examples.
Most of these provable defense methods treat all examples equally during the training process.
In this paper, we explore the inconsistency caused by misclassified examples and add a novel consistency regularization term to make better use of them.
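One generic way to instantiate such a term is sketched below; this is a hedged illustration, not necessarily the cited paper's formulation, and `worst_case_logits` (e.g., from a bound-propagation method) is an assumed input.

```python
# Hedged sketch of a consistency regularizer focused on misclassified examples;
# `worst_case_logits` is an assumed input and the cited paper's exact term may differ.
import torch.nn.functional as F

def consistency_loss(clean_logits, worst_case_logits, y):
    mis = clean_logits.argmax(dim=1) != y          # restrict to misclassified examples
    if mis.sum() == 0:
        return clean_logits.new_zeros(())
    log_p_clean = F.log_softmax(clean_logits[mis], dim=1)
    p_worst = F.softmax(worst_case_logits[mis], dim=1)
    return F.kl_div(log_p_clean, p_worst, reduction="batchmean")
```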
arXiv Detail & Related papers (2020-12-24T05:00:50Z) - Exploring Model Robustness with Adaptive Networks and Improved
Adversarial Training [56.82000424924979]
We propose a conditional normalization module to adapt networks when conditioned on input samples.
Our adaptive networks, once adversarially trained, can outperform their non-adaptive counterparts on both clean validation accuracy and robustness.
arXiv Detail & Related papers (2020-05-30T23:23:56Z) - Regularized Training and Tight Certification for Randomized Smoothed
Classifier with Provable Robustness [15.38718018477333]
We derive a new regularized risk, in which the regularizer can adaptively encourage the accuracy and robustness of the smoothed counterpart.
We also design a new certification algorithm that can leverage the regularization effect to provide a tighter robustness lower bound that holds with high probability.
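For orientation, the hedged sketch below computes the standard plug-in certified radius of a Gaussian-smoothed classifier; it omits the confidence bounds used in practice and the tighter, regularization-aware certification proposed in the cited paper.

```python
# Hedged sketch: plug-in certified L2 radius of a Gaussian-smoothed classifier.
# Real certification bounds p_A with a confidence interval; this only illustrates
# radius = sigma * Phi^{-1}(p_A), which is valid when p_A > 0.5 (otherwise abstain).
import torch
from torch.distributions import Normal

def smoothed_prediction(model, x, sigma=0.25, n=1000, num_classes=10):
    noise = torch.randn(n, *x.shape) * sigma
    with torch.no_grad():
        preds = model(x.unsqueeze(0) + noise).argmax(dim=1)
    counts = torch.bincount(preds, minlength=num_classes).float()
    p_a = (counts.max() / n).clamp(max=1 - 1e-6)
    radius = sigma * Normal(0.0, 1.0).icdf(p_a)   # negative radius => abstain
    return counts.argmax().item(), radius.item()
```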
arXiv Detail & Related papers (2020-02-17T20:54:34Z) - Fast is better than free: Revisiting adversarial training [86.11788847990783]
We show that it is possible to train empirically robust models using a much weaker and cheaper adversary.
We identify a failure mode, referred to as "catastrophic overfitting", which may have caused previous attempts to use FGSM adversarial training to fail.
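The recipe amounts to single-step FGSM with a random start inside the perturbation ball; the hedged sketch below illustrates one training step in PyTorch, with hyperparameters `eps` and `alpha` chosen only for illustration.

```python
# Hedged sketch of FGSM adversarial training with random initialization,
# the cheap single-step recipe revisited in this paper; details are illustrative.
import torch
import torch.nn.functional as F

def fgsm_train_step(model, optimizer, x, y, eps=8/255, alpha=10/255):
    # random start inside the L_inf ball, then one signed-gradient step
    delta = (torch.rand_like(x) * 2 - 1) * eps
    delta.requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    # train on the perturbed batch
    optimizer.zero_grad()
    F.cross_entropy(model((x + delta).clamp(0, 1)), y).backward()
    optimizer.step()
```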
arXiv Detail & Related papers (2020-01-12T20:30:22Z)