Tightened Convex Relaxations for Neural Network Robustness Certification
- URL: http://arxiv.org/abs/2004.00570v2
- Date: Fri, 18 Sep 2020 00:04:58 GMT
- Title: Tightened Convex Relaxations for Neural Network Robustness Certification
- Authors: Brendon G. Anderson, Ziye Ma, Jingqi Li, Somayeh Sojoudi
- Abstract summary: We exploit the structure of ReLU networks to improve relaxation errors through a novel partition-based certification procedure.
The proposed method is proven to tighten existing linear programming relaxations, and asymptotically achieves zero relaxation error as the partition is made finer.
- Score: 10.68833097448566
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider the problem of certifying the robustness of neural
networks to perturbed and adversarial input data. Such certification is
imperative for the application of neural networks in safety-critical
decision-making and control systems. Certification techniques using convex
optimization have been proposed, but they often suffer from relaxation errors
that void the certificate. Our work exploits the structure of ReLU networks to
improve relaxation errors through a novel partition-based certification
procedure. The proposed method is proven to tighten existing linear programming
relaxations, and asymptotically achieves zero relaxation error as the partition
is made finer. We develop a finite partition that attains zero relaxation error
and use the result to derive a tractable partitioning scheme that minimizes the
worst-case relaxation error. Experiments using real data show that the
partitioning procedure is able to issue robustness certificates in cases where
prior methods fail. Consequently, partition-based certification procedures are
found to provide an intuitive, effective, and theoretically justified method
for tightening existing convex relaxation techniques.
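The core idea, solving an LP relaxation on each piece of a partition of the input uncertainty set and taking the worst case over the pieces, can be illustrated in a few lines. The sketch below is not the paper's algorithm: it handles a single ReLU layer, uses the standard triangle LP relaxation via cvxpy, and splits the input box uniformly along one coordinate instead of the paper's optimized partitioning scheme; all names are illustrative.

```python
import numpy as np
import cvxpy as cp

def triangle_lp_bound(W, b, c, l, u):
    """Upper-bound the max of c^T ReLU(W x + b) over the box l <= x <= u
    using the standard 'triangle' LP relaxation of each ReLU."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    lz = Wp @ l + Wn @ u + b  # pre-activation lower bounds (interval arithmetic)
    uz = Wp @ u + Wn @ l + b  # pre-activation upper bounds
    x = cp.Variable(W.shape[1])
    z = cp.Variable(W.shape[0])
    y = cp.Variable(W.shape[0])
    cons = [z == W @ x + b, x >= l, x <= u, y >= 0, y >= z]
    for i in range(W.shape[0]):
        if uz[i] <= 0:    # neuron provably inactive on this box
            cons.append(y[i] == 0)
        elif lz[i] >= 0:  # neuron provably active on this box
            cons.append(y[i] == z[i])
        else:             # unstable neuron: relax with the upper chord
            cons.append(y[i] <= uz[i] * (z[i] - lz[i]) / (uz[i] - lz[i]))
    prob = cp.Problem(cp.Maximize(c @ y), cons)
    prob.solve()
    return prob.value

def partitioned_bound(W, b, c, l, u, coord=0, parts=4):
    """Split the box along one coordinate, bound each part, take the max;
    the result is never looser than the unpartitioned bound."""
    edges = np.linspace(l[coord], u[coord], parts + 1)
    bounds = []
    for k in range(parts):
        lk, uk = l.copy(), u.copy()
        lk[coord], uk[coord] = edges[k], edges[k + 1]
        bounds.append(triangle_lp_bound(W, b, c, lk, uk))
    return max(bounds)

rng = np.random.default_rng(0)
W, b, c = rng.normal(size=(5, 3)), rng.normal(size=5), rng.normal(size=5)
l, u = -np.ones(3), np.ones(3)
print("one-shot LP bound:   ", triangle_lp_bound(W, b, c, l, u))
print("partitioned LP bound:", partitioned_bound(W, b, c, l, u))
```

Shrinking each part narrows the pre-activation intervals, which flattens the triangle relaxation of every unstable neuron, so the partitioned bound is never looser and approaches the true worst case as the partition is refined; the paper goes further and constructs a finite partition that attains exactly zero relaxation error.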
Related papers
- Verification of Geometric Robustness of Neural Networks via Piecewise Linear Approximation and Lipschitz Optimisation [57.10353686244835]
We address the problem of verifying neural networks against geometric transformations of the input image, including rotation, scaling, shearing, and translation.
The proposed method computes provably sound piecewise linear constraints for the pixel values by using sampling and linear approximations in combination with branch-and-bound Lipschitz optimisation.
We show that our proposed implementation resolves up to 32% more verification cases than existing approaches.
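As a toy illustration of the soundness mechanism in this entry (sampling made rigorous by a Lipschitz constant), the sketch below computes a provably sound constant upper bound on a scalar function over an interval; the paper itself builds piecewise linear constraints and combines them with branch-and-bound, which this does not attempt.

```python
import numpy as np

def sound_upper_bound(f, a, b, lipschitz, n_samples=256):
    """Sound constant upper bound on f over [a, b]: sample on a grid and
    pad the sampled maximum by L*h/2 (h = grid spacing), so no peak
    between samples can exceed the bound."""
    xs = np.linspace(a, b, n_samples)
    h = (b - a) / (n_samples - 1)
    return np.max(f(xs)) + lipschitz * h / 2.0

# Toy stand-in for a pixel value as a function of a transformation parameter.
print(sound_upper_bound(np.sin, 0.0, np.pi, lipschitz=1.0))  # >= true max of 1.0
```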
arXiv Detail & Related papers (2024-08-23T15:02:09Z) - Gaussian Loss Smoothing Enables Certified Training with Tight Convex Relaxations [14.061189994638667]
Training neural networks with high certified accuracy against adversarial examples remains an open challenge.
While certification methods can effectively leverage tight convex relaxations for bound computation, in training these relaxations can perform worse than looser ones.
We show that Gaussian Loss Smoothing can alleviate these issues.
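A hypothetical sketch of the smoothing idea: replace the loss at a parameter vector with a Monte Carlo estimate of its expectation under Gaussian parameter noise. The estimator and gradient machinery in the paper differ; this only shows why smoothing can help with a badly behaved objective.

```python
import numpy as np

def gaussian_smoothed_loss(loss_fn, theta, sigma=0.1, n_samples=64, seed=0):
    """Monte Carlo estimate of E[loss(theta + eps)], eps ~ N(0, sigma^2 I):
    smoothing trades a little bias for a better-behaved objective."""
    rng = np.random.default_rng(seed)
    samples = [loss_fn(theta + rng.normal(0.0, sigma, size=theta.shape))
               for _ in range(n_samples)]
    return float(np.mean(samples))

# Toy usage: a piecewise-constant loss with zero gradient almost everywhere.
loss = lambda t: float(np.sum(t > 0))
print(gaussian_smoothed_loss(loss, np.zeros(3)))
```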
arXiv Detail & Related papers (2024-03-11T18:44:36Z) - Disparate Impact on Group Accuracy of Linearization for Private Inference [48.27026603581436]
We show that reducing the number of ReLU activations disproportionately decreases the accuracy for minority groups compared to majority groups.
We also show how a simple procedure altering the fine-tuning step for linearized models can serve as an effective mitigation strategy.
arXiv Detail & Related papers (2024-02-06T01:56:29Z) - Robust Stochastically-Descending Unrolled Networks [85.6993263983062]
Deep unrolling is an emerging learning-to-optimize method that unrolls a truncated iterative algorithm in the layers of a trainable neural network.
Convergence guarantees and generalizability of unrolled networks, however, remain open theoretical problems; we propose constraints that promote descent across the unrolled iterations.
We numerically assess unrolled architectures trained under the proposed constraints in two different applications.
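A minimal sketch of deep unrolling, assuming the simplest possible instance: K gradient-descent iterations for least squares are written out as K "layers" whose step sizes are the parameters a learning-to-optimize scheme would train. The descent constraints proposed in the paper are not modeled here.

```python
import numpy as np

def unrolled_least_squares(A, y, step_sizes):
    """Unrolled gradient descent for min_x ||A x - y||^2: each iteration
    becomes one 'layer', and the per-layer step sizes are the trainable
    parameters of the unrolled network."""
    x = np.zeros(A.shape[1])
    for alpha in step_sizes:  # one loop iteration == one layer
        x = x - alpha * (A.T @ (A @ x - y))
    return x

rng = np.random.default_rng(0)
A, y = rng.normal(size=(8, 3)), rng.normal(size=8)
x_hat = unrolled_least_squares(A, y, step_sizes=[0.05] * 10)
print(np.linalg.norm(A @ x_hat - y))
```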
arXiv Detail & Related papers (2023-12-25T18:51:23Z) - Tight Certification of Adversarially Trained Neural Networks via
Nonconvex Low-Rank Semidefinite Relaxations [12.589519278962378]
We propose a nonconvex certification technique for adversarially trained network models.
The nonconvex relaxation yields strong certifications comparable to much more expensive SDP methods while optimizing over dramatically fewer variables, comparable to LP methods.
Our experiments find that the nonconvex relaxation almost completely closes the gap towards exact certification of adversarially trained models.
arXiv Detail & Related papers (2022-11-30T18:46:00Z) - Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the certified robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z) - A Unified View of SDP-based Neural Network Verification through
Completely Positive Programming [27.742278216854714]
We develop an exact, convex formulation of verification as a completely positive program (CPP).
We provide analysis showing that our formulation is minimal -- the removal of any constraint fundamentally misrepresents the neural network computation.
arXiv Detail & Related papers (2022-03-06T19:23:09Z) - DeepSplit: Scalable Verification of Deep Neural Networks via Operator
Splitting [70.62923754433461]
Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem.
We propose a novel method that can directly solve a convex relaxation of the problem to high accuracy, by splitting it into smaller subproblems that often have analytical solutions.
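The splitting idea, decomposing a large problem into subproblems with closed-form solutions, can be shown with a toy feasibility problem: alternate between the analytical projection onto an affine set and the analytical projection onto a box. DeepSplit itself runs an operator-splitting method on the full convex relaxation of the network; this sketch, with illustrative names, shows only the underlying principle.

```python
import numpy as np

def alternating_projections(A, b, l, u, iters=200):
    """Find x with A x = b and l <= x <= u by alternating two projections
    that each have closed forms, i.e. the 'split into analytically
    solvable subproblems' idea behind operator-splitting verifiers."""
    AAt_inv = np.linalg.inv(A @ A.T)  # precomputed for the affine projection
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - A.T @ (AAt_inv @ (A @ x - b))  # project onto {x : A x = b}
        x = np.clip(x, l, u)                   # project onto the box [l, u]
    return x  # approximately feasible point if the intersection is nonempty

A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.5])
x = alternating_projections(A, b, l=np.zeros(3), u=np.ones(3))
print(x, A @ x)
```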
arXiv Detail & Related papers (2021-06-16T20:43:49Z) - Towards Optimal Branching of Linear and Semidefinite Relaxations for Neural Network Robustness Certification [10.349616734896522]
We study certifying the robustness of ReLU neural networks against adversarial input perturbations.
Taking a branch-and-bound approach, we propose partitioning the input uncertainty set and solving the relaxations on each part separately.
We show that this approach reduces relaxation error, and that the error is eliminated entirely upon performing an LP relaxation with a partition intelligently designed to exploit the nature of the ReLU activations.
arXiv Detail & Related papers (2021-01-22T19:36:40Z) - Improving the Tightness of Convex Relaxation Bounds for Training
Certifiably Robust Classifiers [72.56180590447835]
Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical robustness.
We propose two regularizers that can be used to train neural networks with higher certified accuracy than non-regularized baselines.
arXiv Detail & Related papers (2020-02-22T20:19:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.