Lagrangian Decomposition for Neural Network Verification
- URL: http://arxiv.org/abs/2002.10410v3
- Date: Wed, 17 Jun 2020 17:49:58 GMT
- Title: Lagrangian Decomposition for Neural Network Verification
- Authors: Rudy Bunel, Alessandro De Palma, Alban Desmaison, Krishnamurthy
Dvijotham, Pushmeet Kohli, Philip H.S. Torr, M. Pawan Kumar
- Abstract summary: A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
- Score: 148.0448557991349
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A fundamental component of neural network verification is the computation of
bounds on the values their outputs can take. Previous methods have either used
off-the-shelf solvers, discarding the problem structure, or relaxed the problem
even further, making the bounds unnecessarily loose. We propose a novel
approach based on Lagrangian Decomposition. Our formulation admits an efficient
supergradient ascent algorithm, as well as an improved proximal algorithm. Both
algorithms offer three advantages: (i) they yield bounds that are provably
at least as tight as previous dual algorithms relying on Lagrangian
relaxations; (ii) they are based on operations analogous to the forward and
backward passes of neural network layers and are therefore easily parallelizable,
amenable to GPU implementation and able to take advantage of the convolutional
structure of problems; and (iii) they allow for anytime stopping while still
providing valid bounds. Empirically, we show that we obtain bounds comparable
with off-the-shelf solvers in a fraction of their running time, and obtain
tighter bounds in the same time as previous dual algorithms. This results in an
overall speed-up when employing the bounds for formal verification. Code for
our algorithms is available at
https://github.com/oval-group/decomposition-plnn-bounds.
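To make the decomposition concrete, below is a minimal NumPy sketch for bounding the output of a one-hidden-layer ReLU network. It is an illustrative toy, not the authors' implementation (their full PyTorch code is in the repository above): the network sizes, the interval-arithmetic intermediate bounds, and the fixed step size are all assumptions made for the example. The pre-activation is duplicated into two copies coupled by a dual vector rho; by weak duality, the dual value is a valid bound for any rho, which is the anytime property the abstract refers to.
```python
import numpy as np

# Minimal sketch: Lagrangian decomposition bound for a one-hidden-layer
# ReLU network  f(x) = W2 @ relu(W1 @ x + b1) + b2,  with x in [l0, u0].
# We lower-bound c @ f(x). The coupling constraint z = W1 @ x + b1 (the
# pre-activation) is split across two subproblems and enforced by a dual
# vector rho; weak duality makes g(rho) a valid bound for ANY rho.

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 6, 3
W1, b1 = rng.normal(size=(n_hid, n_in)), rng.normal(size=n_hid)
W2, b2 = rng.normal(size=(n_out, n_hid)), rng.normal(size=n_out)
c = rng.normal(size=n_out)                      # output direction to bound
l0, u0 = -np.ones(n_in), np.ones(n_in)          # input box

# Intermediate bounds on the pre-activation via interval arithmetic.
l1 = np.minimum(W1, 0) @ u0 + np.maximum(W1, 0) @ l0 + b1
u1 = np.maximum(W1, 0) @ u0 + np.minimum(W1, 0) @ l0 + b1

a = W2.T @ c                                    # backward-pass-like product

def dual_bound(rho):
    """Evaluate the decomposed dual g(rho) and a supergradient."""
    # Subproblem A: min over x in [l0, u0] of rho @ (W1 @ x + b1).
    # Linear objective, so each coordinate sits at a box corner.
    w = W1.T @ rho
    x = np.where(w >= 0, l0, u0)
    z = W1 @ x + b1
    # Subproblem B: min over y in [l1, u1] of a @ relu(y) - rho @ y.
    # Separable and piecewise linear per coordinate: the minimum lies at
    # l1, u1, or the kink at 0 (when 0 is inside the interval).
    cands = np.stack([l1, u1, np.clip(0.0, l1, u1)])
    vals = a * np.maximum(cands, 0) - rho * cands
    y = cands[vals.argmin(0), np.arange(n_hid)]
    g = rho @ z + (a * np.maximum(y, 0) - rho * y).sum() + c @ b2
    return g, z - y                              # supergradient of g at rho

# Supergradient ascent on rho; every iterate already yields a valid bound,
# so the loop can be stopped at any time.
rho, step = np.zeros(n_hid), 0.05
for t in range(200):
    g, sg = dual_bound(rho)
    rho += step * sg
print("certified lower bound on c @ f(x):", dual_bound(rho)[0])
```
Note that both subproblems reduce to matrix-vector products such as `W1.T @ rho`, mirroring a network's forward/backward passes; this is what makes the method easy to batch on a GPU. The paper's proximal variant replaces the plain ascent step with proximal updates on the same decomposition.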
Related papers
- Infeasible Deterministic, Stochastic, and Variance-Reduction Algorithms for Optimization under Orthogonality Constraints [9.301728976515255]
This article provides new practical and theoretical developments for the landing algorithm.
First, the method is extended to the Stiefel manifold.
We also consider variance reduction algorithms when the cost function is an average of many functions.
arXiv Detail & Related papers (2023-03-29T07:36:54Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- Block-coordinate Frank-Wolfe algorithm and convergence analysis for semi-relaxed optimal transport problem [20.661025590877774]
We propose a fast block-coordinate Frank-Wolfe (BCFW) algorithm for a convex semi-relaxed optimal transport (OT) problem.
Numerical experiments have demonstrated that our proposed algorithms are effective for color transfer and surpass state-of-the-art algorithms.
arXiv Detail & Related papers (2022-05-27T05:54:45Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We derive lower complexity bounds for this setting and design two optimal algorithms that attain them.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Improved Branch and Bound for Neural Network Verification via Lagrangian Decomposition [161.09660864941603]
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks.
We present a novel activation-based branching strategy and a BaB framework, named Branch and Dual Network Bound (BaDNB).
BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial properties.
arXiv Detail & Related papers (2021-04-14T09:22:42Z)
- Scaling the Convex Barrier with Sparse Dual Algorithms [141.4085318878354]
We present two novel dual algorithms for tight and efficient neural network bounding.
Both methods recover the strengths of a recently introduced tighter relaxation: tightness and a linear separation oracle.
We can obtain better bounds than off-the-shelf solvers in only a fraction of their running time.
arXiv Detail & Related papers (2021-01-14T19:45:17Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)