Improving the Tightness of Convex Relaxation Bounds for Training
Certifiably Robust Classifiers
- URL: http://arxiv.org/abs/2002.09766v1
- Date: Sat, 22 Feb 2020 20:19:53 GMT
- Title: Improving the Tightness of Convex Relaxation Bounds for Training
Certifiably Robust Classifiers
- Authors: Chen Zhu, Renkun Ni, Ping-yeh Chiang, Hengduo Li, Furong Huang, Tom
Goldstein
- Abstract summary: Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical robustness.
We propose two regularizers that can be used to train neural networks that achieve higher certified accuracy than non-regularized baselines.
- Score: 72.56180590447835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convex relaxations are effective for training and certifying neural networks
against norm-bounded adversarial attacks, but they leave a large gap between
certifiable and empirical robustness. In principle, convex relaxation can
provide tight bounds if the solution to the relaxed problem is feasible for the
original non-convex problem. We propose two regularizers that can be used to
train neural networks that yield tighter convex relaxation bounds for
robustness. In all of our experiments, the proposed regularizers result in
higher certified accuracy than non-regularized baselines.
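The abstract's core idea, tightening a convex relaxation so that the relaxed solution becomes feasible for the original non-convex problem, can be illustrated with interval bound propagation (IBP), one of the loosest common relaxations. The sketch below is a hypothetical illustration, not the paper's actual regularizers: it computes sound output bounds for a toy two-layer ReLU network, and a simple bound-width penalty of the kind a training loop could add to its loss.

```python
import numpy as np

def interval_bounds(W, b, lo, hi):
    """Propagate elementwise interval bounds through an affine layer."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def certified_bounds(W1, b1, W2, b2, x, eps):
    """Sound bounds on the logits over an l_inf ball of radius eps around x."""
    lo, hi = x - eps, x + eps
    lo, hi = interval_bounds(W1, b1, lo, hi)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
    return interval_bounds(W2, b2, lo, hi)

# toy network with random weights
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
x = rng.normal(size=3)

lo, hi = certified_bounds(W1, b1, W2, b2, x, eps=0.1)
clean = W2 @ np.maximum(W1 @ x + b1, 0) + b2

# hypothetical tightness penalty: the total width of the relaxed bounds
tightness_penalty = float(np.sum(hi - lo))
```

By construction the clean logits always lie inside the certified interval; a regularizer that shrinks `tightness_penalty` pushes the relaxation toward the exact network behavior.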
Related papers
- Gaussian Loss Smoothing Enables Certified Training with Tight Convex Relaxations [14.061189994638667]
Training neural networks with high certified accuracy against adversarial examples remains an open challenge.
Certification methods can effectively leverage tight convex relaxations for bound computation.
In training, however, these methods can perform worse than looser relaxations.
We show that Gaussian Loss Smoothing can alleviate these issues.
arXiv Detail & Related papers (2024-03-11T18:44:36Z) - Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness [172.61581010141978]
Certifiable robustness is a desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios.
We propose a novel solution to strategically manipulate neurons, by "grafting" appropriate levels of linearity.
arXiv Detail & Related papers (2022-06-15T22:42:29Z) - Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z) - SmoothMix: Training Confidence-calibrated Smoothed Classifiers for
Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
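SmoothMix targets the certified radius of randomized smoothing. As background, a minimal sketch of the standard certified $\ell_2$ radius from Cohen et al., R = sigma * Phi^{-1}(p_A); the probability value below is hypothetical, and in practice p_A is a lower confidence bound estimated by Monte Carlo sampling of the smoothed classifier.

```python
from statistics import NormalDist

def certified_radius(p_a: float, sigma: float) -> float:
    """Certified l2 radius of a smoothed classifier:
    R = sigma * Phi^{-1}(p_a), valid only when p_a > 1/2."""
    if p_a <= 0.5:
        return 0.0  # abstain: no certificate
    return sigma * NormalDist().inv_cdf(p_a)

# hypothetical lower bound on the top-class probability under noise
r = certified_radius(p_a=0.9, sigma=0.5)
```

The formula makes the training incentive explicit: pushing `p_a` toward 1 (as confidence-calibration schemes like SmoothMix aim to do) directly enlarges the certified radius.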
arXiv Detail & Related papers (2021-11-17T18:20:59Z) - DeepSplit: Scalable Verification of Deep Neural Networks via Operator
Splitting [70.62923754433461]
Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem.
We propose a novel method that can directly solve a convex relaxation of the problem to high accuracy, by splitting it into smaller subproblems that often have analytical solutions.
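Operator splitting of the kind DeepSplit uses decomposes a large convex problem into subproblems with closed-form updates. As a generic illustration (not the paper's verification formulation), the ADMM sketch below solves a small lasso problem: the x-update is a linear solve and the z-update is an analytical soft-threshold.

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k * ||.||_1 (closed form)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """ADMM for min (1/2)||Ax - b||^2 + lam*||z||_1  s.t.  x = z."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))  # fine for this tiny problem
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))         # analytical x-update
        z = soft_threshold(x + u, lam / rho)  # analytical z-update
        u = u + x - z                         # dual variable update
    return z

A = np.eye(3)
b = np.array([1.0, -0.2, 0.05])
z = lasso_admm(A, b, lam=0.1)
```

With A = I the true minimizer is the soft-threshold of b, so the splitting iterations can be checked against a known closed-form answer.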
arXiv Detail & Related papers (2021-06-16T20:43:49Z) - A Primer on Multi-Neuron Relaxation-based Adversarial Robustness
Certification [6.71471794387473]
Adversarial examples pose a real danger when deep neural networks are deployed in the real world.
We develop a unified mathematical framework to describe relaxation-based robustness certification methods.
arXiv Detail & Related papers (2021-06-06T11:59:27Z) - Bayesian Inference with Certifiable Adversarial Robustness [25.40092314648194]
We consider adversarial training of networks through the lens of Bayesian learning.
We present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees.
Our method is the first to directly train certifiable BNNs, thus facilitating their use in safety-critical applications.
arXiv Detail & Related papers (2021-02-10T07:17:49Z) - Feature Purification: How Adversarial Training Performs Robust Deep
Learning [66.05472746340142]
We present a principle that we call Feature Purification: one cause of the existence of adversarial examples is the accumulation of certain small dense mixtures in the hidden weights during the training process of a neural network.
We present both experiments on the CIFAR-10 dataset to illustrate this principle, and a theoretical result proving that for certain natural classification tasks, training a two-layer neural network with ReLU activation using randomly initialized gradient descent indeed satisfies this principle.
arXiv Detail & Related papers (2020-05-20T16:56:08Z) - Tightened Convex Relaxations for Neural Network Robustness Certification [10.68833097448566]
We exploit the structure of ReLU networks to improve relaxation errors through a novel partition-based certification procedure.
The proposed method is proven to tighten existing linear programming relaxations, and achieves zero relaxation error as the partition is made finer.
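The partition-based tightening can be seen on a single ReLU: the standard triangle (LP) relaxation has a nonzero gap whenever the pre-activation interval straddles zero, and splitting the interval at the kink removes that gap entirely. A minimal single-neuron sketch with toy numbers, not the paper's full certification procedure:

```python
def triangle_upper(x, l, u):
    """Upper envelope of the ReLU triangle (LP) relaxation on [l, u]."""
    if u <= 0:
        return 0.0           # ReLU is identically zero on this piece
    if l >= 0:
        return x             # ReLU is exactly linear on this piece
    return u * (x - l) / (u - l)  # chord over the kink: loose

relu = lambda x: max(x, 0.0)

l, u, x = -1.0, 2.0, 0.5
gap_unsplit = triangle_upper(x, l, u) - relu(x)

# partition at the kink: each piece is exactly linear, so the
# relaxation error vanishes on the refined partition
piece_l, piece_u = (0.0, u) if x >= 0 else (l, 0.0)
gap_split = triangle_upper(x, piece_l, piece_u) - relu(x)
```

On the unsplit interval the chord overestimates ReLU(0.5) by a strictly positive amount, while on the refined piece the relaxation is exact, which is the single-neuron version of "zero relaxation error as the partition is made finer."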
arXiv Detail & Related papers (2020-04-01T16:59:21Z) - Regularized Training and Tight Certification for Randomized Smoothed
Classifier with Provable Robustness [15.38718018477333]
We derive a new regularized risk, in which the regularizer can adaptively encourage the accuracy and robustness of the smoothed counterpart.
We also design a new certification algorithm, which can leverage the regularization effect to provide a tighter robustness lower bound that holds with high probability.
arXiv Detail & Related papers (2020-02-17T20:54:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.