Scaling the Convex Barrier with Sparse Dual Algorithms
- URL: http://arxiv.org/abs/2101.05844v2
- Date: Tue, 26 Jan 2021 14:36:42 GMT
- Title: Scaling the Convex Barrier with Sparse Dual Algorithms
- Authors: Alessandro De Palma, Harkirat Singh Behl, Rudy Bunel, Philip H.S.
Torr, M. Pawan Kumar
- Abstract summary: We present two novel dual algorithms for tight and efficient neural network bounding.
Both methods recover the strengths of the new relaxation: tightness and a linear separation oracle.
We can obtain better bounds than off-the-shelf solvers in only a fraction of their running time.
- Score: 141.4085318878354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tight and efficient neural network bounding is crucial to the scaling of
neural network verification systems. Many efficient bounding algorithms have
been presented recently, but they are often too loose to verify more
challenging properties. This is due to the weakness of the employed relaxation,
which is usually a linear program of size linear in the number of neurons.
While a tighter linear relaxation for piecewise-linear activations exists, it
comes at the cost of exponentially many constraints and currently lacks an
efficient customized solver. We alleviate this deficiency by presenting two
novel dual algorithms: one operates a subgradient method on a small active set
of dual variables, the other exploits the sparsity of Frank-Wolfe type
optimizers to incur only a linear memory cost. Both methods recover the
strengths of the new relaxation: tightness and a linear separation oracle. At
the same time, they share the benefits of previous dual approaches for weaker
relaxations: massive parallelism, GPU implementation, low cost per iteration
and valid bounds at any time. As a consequence, we can obtain better bounds
than off-the-shelf solvers in only a fraction of their running time, attaining
significant formal verification speed-ups.
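The abstract stops short of pseudo-code, so the following is only a generic illustration of the ingredients it names: a small, growing active set of dual variables, supergradient steps on those variables, and anytime-valid bounds via weak duality. It lower-bounds a toy box-constrained linear program with many constraints; the problem, function names, and step sizes are illustrative assumptions, not the paper's actual Active Set or Frank-Wolfe-based solvers.

```python
import numpy as np

def box_argmin(obj):
    """Minimise obj @ x over the box [0, 1]^n (closed form, coordinate-wise)."""
    return (obj < 0).astype(float)

def sparse_dual_bound(c, A, b, iters=200, step=0.1, active_per_iter=2):
    """
    Lower-bound  min { c @ x : x in [0,1]^n, A x <= b }  via Lagrangian duality,
    keeping only a small active set of dual variables (constraints).

    For any lam >= 0,  g(lam) = min_{x in box} c @ x + lam @ (A x - b)
    is a valid lower bound by weak duality, so the bound is valid at any time.
    """
    lam = {}                      # sparse dual variables: constraint index -> value
    best = -np.inf
    for _ in range(iters):
        # Assemble the penalised objective from the current active set only.
        obj = c.copy()
        offset = 0.0
        for i, li in lam.items():
            obj += li * A[i]
            offset -= li * b[i]
        x = box_argmin(obj)
        best = max(best, float(obj @ x + offset))
        # Separation-oracle step: activate the most violated constraints.
        viol = A @ x - b
        for i in np.argsort(-viol)[:active_per_iter]:
            if viol[i] > 0:
                lam.setdefault(int(i), 0.0)
        # Projected supergradient ascent on the active dual variables only.
        for i in list(lam):
            lam[i] = max(0.0, lam[i] + step * float(A[i] @ x - b[i]))
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 20, 500                # many constraints, few ever become active
    c = rng.normal(size=n)
    A = rng.normal(size=(m, n))
    b = rng.uniform(0.5, 1.5, size=m)
    print("valid lower bound:", sparse_dual_bound(c, A, b))
```

Because every dual iterate yields a valid bound, the loop can be stopped at any time, which is the anytime property the abstract emphasises.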
Related papers
- A Penalty-Based Guardrail Algorithm for Non-Decreasing Optimization with Inequality Constraints [1.5498250598583487]
Traditional mathematical programming solvers require long computational times to solve constrained minimization problems.
We propose a penalty-based guardrail algorithm (PGA) to efficiently solve them.
arXiv Detail & Related papers (2024-05-03T10:37:34Z)
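The PGA entry above names the method without detail; as a hedged point of reference only, here is the textbook quadratic-penalty scheme for inequality-constrained minimisation that such methods build on (not PGA itself; the toy problem, function names, and step sizes are assumptions).

```python
import numpy as np

def penalty_minimize(f_grad, g, g_grad, x0, mus=(1.0, 10.0, 100.0), inner_iters=500):
    """
    Quadratic-penalty method for  min f(x)  s.t.  g_i(x) <= 0.
    Each outer stage minimises  f(x) + mu * sum_i max(0, g_i(x))^2
    by plain gradient descent, with mu increased between stages.
    """
    x = np.array(x0, dtype=float)
    for mu in mus:
        lr = 0.1 / mu                              # shrink the step as the penalty grows
        for _ in range(inner_iters):
            viol = np.maximum(0.0, g(x))           # only violated constraints contribute
            grad = f_grad(x) + 2.0 * mu * g_grad(x).T @ viol
            x -= lr * grad
    return x

if __name__ == "__main__":
    # Toy problem: minimise ||x - (2, 2)||^2 subject to x1 + x2 <= 1 (optimum near (0.5, 0.5)).
    target = np.array([2.0, 2.0])
    f_grad = lambda x: 2.0 * (x - target)
    g = lambda x: np.array([x[0] + x[1] - 1.0])
    g_grad = lambda x: np.array([[1.0, 1.0]])      # Jacobian of g
    print(penalty_minimize(f_grad, g, g_grad, x0=[0.0, 0.0]))
```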
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
arXiv Detail & Related papers (2022-10-14T19:31:39Z)
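For background on the zonotope domain used in the entry above: a zonotope is the set {c + G e : e in [-1, 1]^k}, and it is closed under affine maps, which makes layer-wise bound propagation cheap. A minimal sketch, with the ReLU transformer (the part where zonotope verifiers actually differ) deliberately omitted and all numbers made up:

```python
import numpy as np

class Zonotope:
    """Set  { c + G @ e  :  e in [-1, 1]^k }  with centre c and generator matrix G."""
    def __init__(self, c, G):
        self.c, self.G = np.asarray(c, float), np.asarray(G, float)

    def affine(self, W, b):
        # Affine maps are exact on zonotopes: W(c + G e) + b = (W c + b) + (W G) e.
        return Zonotope(W @ self.c + b, W @ self.G)

    def bounds(self):
        # Per-coordinate interval concretisation of the zonotope.
        r = np.abs(self.G).sum(axis=1)
        return self.c - r, self.c + r

if __name__ == "__main__":
    # Input box [0, 1]^2 written as a zonotope: centre 0.5, one generator per input.
    z = Zonotope(c=[0.5, 0.5], G=0.5 * np.eye(2))
    W = np.array([[1.0, -2.0], [3.0, 1.0]])
    b = np.array([0.1, -0.2])
    lo, hi = z.affine(W, b).bounds()
    print("pre-activation bounds:", lo, hi)
```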
- LinSyn: Synthesizing Tight Linear Bounds for Arbitrary Neural Network Activation Functions [4.03308187036005]
LinSyn is the first approach that achieves tight bounds for any arbitrary activation function.
We show that our approach often achieves 2-5X tighter final output bounds and more than quadruples certified robustness.
arXiv Detail & Related papers (2022-01-31T17:00:50Z)
- DeepSplit: Scalable Verification of Deep Neural Networks via Operator Splitting [70.62923754433461]
Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem.
We propose a novel method that can directly solve a convex relaxation of the problem to high accuracy, by splitting it into smaller subproblems that often have analytical solutions.
arXiv Detail & Related papers (2021-06-16T20:43:49Z)
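The DeepSplit entry describes solving a convex relaxation by operator splitting into subproblems with analytical solutions. The sketch below is a generic instance of that pattern, not DeepSplit itself: ADMM projecting a point onto the intersection of a box and a halfspace, where every subproblem is a closed-form projection (rho and the toy sets are assumptions).

```python
import numpy as np

def project_box(y, lo=0.0, hi=1.0):
    return np.clip(y, lo, hi)

def project_halfspace(y, a, b):
    """Euclidean projection onto { x : a @ x <= b }."""
    gap = a @ y - b
    return y if gap <= 0 else y - gap * a / (a @ a)

def admm_project(v, a, b, rho=1.0, iters=100):
    """
    Project v onto  [0,1]^n  intersect  { a @ x <= b }  via ADMM:
    min (1/2)||x - v||^2 + I_box(x) + I_halfspace(z)   s.t.  x = z.
    Both subproblems have closed-form (analytical) solutions.
    """
    x = np.zeros_like(v)
    z = np.zeros_like(v)
    u = np.zeros_like(v)
    for _ in range(iters):
        x = project_box((v + rho * (z - u)) / (1.0 + rho))   # x-update: prox of quadratic + box
        z = project_halfspace(x + u, a, b)                   # z-update: projection onto halfspace
        u = u + x - z                                        # scaled dual update
    return z

if __name__ == "__main__":
    v = np.array([0.9, 0.9])
    a, b = np.array([1.0, 1.0]), 1.0          # constraint x1 + x2 <= 1
    print(admm_project(v, a, b))              # ~ [0.5, 0.5]
```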
- Improved Branch and Bound for Neural Network Verification via Lagrangian Decomposition [161.09660864941603]
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks.
We present a novel activation-based branching strategy and a BaB framework, named Branch and Dual Network Bound (BaDNB).
BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial properties.
arXiv Detail & Related papers (2021-04-14T09:22:42Z)
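The BaDNB entry builds on the standard branch-and-bound recipe: bound each subdomain and split whenever the bound is inconclusive. The toy sketch below uses plain interval bound propagation as the bounding step and input-domain bisection, not the paper's activation-based branching or dual bounds; the network weights and thresholds are made up.

```python
import numpy as np

# A tiny fixed ReLU network  f(x) = w2 @ relu(W1 @ x + b1) + b2  (weights assumed for illustration).
W1 = np.array([[1.0, -1.0], [0.5, 1.0]])
b1 = np.array([0.2, -0.1])
w2 = np.array([1.0, -1.5])
b2 = 0.25

def forward(x):
    return w2 @ np.maximum(0.0, W1 @ x + b1) + b2

def ibp_bounds(lo, hi):
    """Interval bound propagation through the tiny network: returns (lower, upper) on f."""
    mu, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    h_mu, h_r = W1 @ mu + b1, np.abs(W1) @ r
    h_lo, h_hi = np.maximum(0.0, h_mu - h_r), np.maximum(0.0, h_mu + h_r)
    mu2, r2 = (h_lo + h_hi) / 2.0, (h_hi - h_lo) / 2.0
    out_mu, out_r = w2 @ mu2 + b2, np.abs(w2) @ r2
    return out_mu - out_r, out_mu + out_r

def branch_and_bound(lo, hi, max_domains=10_000):
    """Try to prove  f(x) > 0  for all x in [lo, hi] by splitting the input box."""
    stack = [(np.array(lo, float), np.array(hi, float))]
    visited = 0
    while stack and visited < max_domains:
        visited += 1
        l, u = stack.pop()
        lb, _ = ibp_bounds(l, u)
        if lb > 0:
            continue                              # property proven on this subdomain
        if forward((l + u) / 2.0) <= 0:
            return "falsified"                    # concrete counterexample found
        d = int(np.argmax(u - l))                 # split the widest input dimension
        mid = 0.5 * (l[d] + u[d])
        u_left, l_right = u.copy(), l.copy()
        u_left[d], l_right[d] = mid, mid
        stack += [(l, u_left), (l_right, u)]
    return "verified" if not stack else "undecided"

if __name__ == "__main__":
    print(branch_and_bound(lo=[-0.2, -0.2], hi=[0.2, 0.2]))
```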
- The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron Relaxations for Neural Network Verification [11.10637926254491]
We improve the effectiveness of propagation- and linear-optimization-based neural network verification algorithms with a new tightened convex relaxation for ReLU neurons.
We design two polynomial-time algorithms for neural network verification: a linear-programming-based algorithm that leverages the full power of our relaxation, and a fast propagation algorithm that generalizes existing approaches.
arXiv Detail & Related papers (2020-06-24T22:16:58Z)
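For context on what a "tightened single-neuron relaxation" improves on: the standard single-neuron (triangle) relaxation of y = relu(x) with pre-activation bounds l < 0 < u keeps y >= 0, y >= x and the chord y <= u (x - l) / (u - l). A minimal numerical check (the interval values are illustrative):

```python
import numpy as np

def relu_triangle_bounds(x, l, u):
    """
    Single-neuron "triangle" relaxation of y = relu(x) for x in [l, u] with l < 0 < u:
        y >= 0,   y >= x,   y <= u * (x - l) / (u - l).
    Returns the (lower, upper) envelopes evaluated at x.
    """
    lower = np.maximum(0.0, x)
    upper = u * (x - l) / (u - l)
    return lower, upper

if __name__ == "__main__":
    l, u = -1.0, 2.0
    xs = np.linspace(l, u, 7)
    lo, hi = relu_triangle_bounds(xs, l, u)
    # relu(x) always sits between the two envelopes on [l, u].
    assert np.all(lo <= np.maximum(0.0, xs) + 1e-12) and np.all(np.maximum(0.0, xs) <= hi + 1e-12)
    print(np.round(np.c_[xs, lo, hi], 3))
```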
- Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
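The Lagrangian Decomposition entry above dualises equality constraints that couple copies of the same variables, so each subproblem can be solved in closed form and any dual point gives a valid bound. The sketch below applies the same idea to a toy linear program rather than to a network's layer-wise decomposition; the problem data and step size are assumptions.

```python
import numpy as np

def argmin_box(d):
    """argmin of d @ x over the box [0, 1]^n (closed form)."""
    return (d < 0).astype(float)

def argmin_box_budget(d):
    """argmin of d @ z over { z in [0,1]^n : sum(z) <= 1 } (closed form)."""
    z = np.zeros_like(d)
    i = int(np.argmin(d))
    if d[i] < 0:
        z[i] = 1.0
    return z

def lagrangian_decomposition_bound(c, iters=300, step=0.05):
    """
    Lower-bound  min { c @ x : x in [0,1]^n, sum(x) <= 1 }  by duplicating x into a copy z,
    splitting the constraints between the copies, and dualising the coupling x = z:
        g(lam) = min_{x in box} (c + lam) @ x  +  min_{z in box, sum z <= 1} (-lam) @ z.
    Each subproblem has a closed-form solution; supergradient ascent is run on lam.
    By weak duality, g(lam) is a valid lower bound for every lam.
    """
    lam = np.zeros_like(c)
    best = -np.inf
    for _ in range(iters):
        x = argmin_box(c + lam)
        z = argmin_box_budget(-lam)
        best = max(best, float((c + lam) @ x - lam @ z))
        lam += step * (x - z)          # supergradient of g at lam is x* - z*
    return best

if __name__ == "__main__":
    c = np.array([-2.0, -1.0, 3.0])
    # The true optimum is -2 (put the whole unit budget on the first coordinate).
    print("dual lower bound:", lagrangian_decomposition_bound(c))
```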