General Cutting Planes for Bound-Propagation-Based Neural Network
Verification
- URL: http://arxiv.org/abs/2208.05740v1
- Date: Thu, 11 Aug 2022 10:31:28 GMT
- Title: General Cutting Planes for Bound-Propagation-Based Neural Network
Verification
- Authors: Huan Zhang, Shiqi Wang, Kaidi Xu, Linyi Li, Bo Li, Suman Jana, Cho-Jui
Hsieh, J. Zico Kolter
- Abstract summary: We generalize the bound propagation procedure to allow the addition of arbitrary cutting plane constraints.
We find that MIP solvers can generate high-quality cutting planes for strengthening bound-propagation-based verifiers.
Our method is the first verifier that can completely solve the oval20 benchmark and verify twice as many instances on the oval21 benchmark.
- Score: 144.7290035694459
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bound propagation methods, when combined with branch and bound, are among the
most effective methods to formally verify properties of deep neural networks
such as correctness, robustness, and safety. However, existing works cannot
handle the general form of cutting plane constraints widely accepted in
traditional solvers, which are crucial for strengthening verifiers with
tightened convex relaxations. In this paper, we generalize the bound
propagation procedure to allow the addition of arbitrary cutting plane
constraints, including those involving relaxed integer variables that do not
appear in existing bound propagation formulations. Our generalized bound
propagation method, GCP-CROWN, opens up the opportunity to apply general
cutting plane methods for neural network verification while benefiting from
the efficiency and GPU acceleration of bound propagation methods. As a case
study, we investigate the use of cutting planes generated by an off-the-shelf
mixed integer programming (MIP) solver. We find that MIP solvers can generate
high-quality cutting planes for strengthening bound-propagation-based verifiers
using our new formulation. Since the branching-focused bound propagation
procedure and the cutting-plane-focused MIP solver can run in parallel
utilizing different types of hardware (GPUs and CPUs), their combination can
quickly explore a large number of branches with strong cutting planes, leading
to strong verification performance. Experiments demonstrate that our method is
the first verifier that can completely solve the oval20 benchmark and verify
twice as many instances on the oval21 benchmark compared to the best tool in
VNN-COMP 2021, and also noticeably outperforms state-of-the-art verifiers on a
wide range of benchmarks. GCP-CROWN is part of the $\alpha$,$\beta$-CROWN
verifier, the VNN-COMP 2022 winner. Code is available at
http://PaperCode.cc/GCP-CROWN
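To make the abstract's key idea concrete, here is a minimal sketch (our illustration with placeholder symbols, not the paper's exact notation) of how a general cutting plane can be folded into bound propagation. Verification can be posed as lower-bounding $\min c^\top f(x)$ over the input region and a convex relaxation of each ReLU, with relaxed integer variables $z \in [0,1]$ indicating the activation status of unstable neurons. A general cutting plane is then any valid linear inequality over the inputs $x$, the pre-/post-activations $\hat{x}$, and the relaxed integer variables $z$, e.g. $H x + G \hat{x} + Q z \le d$. Dualizing this constraint with a multiplier $\pi \ge 0$, in the same spirit as the $\beta$ multipliers used for per-neuron split constraints in $\beta$-CROWN, gives
$\max_{\pi \ge 0} \; \min_{x, \hat{x}, z} \; c^\top f(x) + \pi^\top (H x + G \hat{x} + Q z - d),$
and the inner minimization retains a structure that CROWN-style backward bound propagation can evaluate efficiently on GPUs, so each cut contributes only additional optimizable dual variables (tuned, for instance, by projected gradient ascent) rather than requiring an LP solve for every bound. Here $H$, $G$, $Q$, $d$, and $\pi$ are illustrative placeholders.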
Related papers
- Learning Cut Generating Functions for Integer Programming [1.1510009152620668]
The branch-and-cut algorithm is the method of choice to solve large scale integer programming problems.
Recent advances have employed a data-driven approach to select optimal cutting planes from a parameterized family.
We show that the selected cut generating function (CGF) can outperform Gomory mixed-integer (GMI) cuts for certain distributions.
arXiv Detail & Related papers (2024-05-22T20:56:34Z) - Verifying message-passing neural networks via topology-based bounds tightening [3.3267518043390205]
We develop a computationally effective approach towards providing robust certificates for message-passing neural networks (MPNNs).
Because our work builds on mixed-integer optimization, it encodes a wide variety of subproblems.
We test on both node and graph classification problems and consider topological attacks that both add and remove edges.
arXiv Detail & Related papers (2024-02-21T17:05:27Z) - Solving a Class of Cut-Generating Linear Programs via Machine Learning [0.0]
Cut-generating linear programs (CGLPs) play a key role as a separation oracle to produce valid inequalities for the feasible region of mixed-integer programs.
Running the CGLPs at the nodes of the branch-and-bound tree is computationally cumbersome due to the large number of node candidates and the lack of a priori knowledge on which nodes admit useful cutting planes.
We propose a novel framework based on machine learning to approximate the optimal value of the CGLP class, which determines whether a useful cutting plane can be generated at a node of the branch-and-bound tree.
arXiv Detail & Related papers (2023-10-30T18:31:52Z) - GloptiNets: Scalable Non-Convex Optimization with Certificates [61.50835040805378]
We present a novel approach to non-convex optimization with certificates, which handles smooth functions on the hypercube or on the torus.
By exploiting the regularity of the target function intrinsic in the decay of its spectrum, we obtain precise certificates while leveraging the advanced and powerful computational techniques developed to optimize neural networks.
arXiv Detail & Related papers (2023-06-26T09:42:59Z) - GDB: Gated convolutions-based Document Binarization [0.0]
We formulate text extraction as the learning of gating values and propose an end-to-end gated convolutions-based network (GDB) to solve the problem of imprecise stroke edge extraction.
Our proposed framework consists of two stages. Firstly, a coarse sub-network with an extra edge branch is trained to obtain more precise feature maps by feeding in an a priori mask and edge.
Secondly, a refinement sub-network is cascaded to refine the output of the first stage by gated convolutions based on the sharp edge.
arXiv Detail & Related papers (2023-02-04T02:56:40Z) - Neural Network Branch-and-Bound for Neural Network Verification [26.609606492971967]
We propose a novel machine learning framework that can be used for designing an effective branching strategy.
We learn two graph neural networks (GNNs) that both directly treat the network we want to verify as a graph input.
We show that our GNN models generalize well to harder properties on larger unseen networks.
arXiv Detail & Related papers (2021-07-27T14:42:57Z) - Improved Branch and Bound for Neural Network Verification via Lagrangian
Decomposition [161.09660864941603]
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks.
We present a novel activation-based branching strategy and a BaB framework, named Branch and Dual Network Bound (BaDNB).
BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial properties.
arXiv Detail & Related papers (2021-04-14T09:22:42Z) - Beta-CROWN: Efficient Bound Propagation with Per-neuron Split
Constraints for Complete and Incomplete Neural Network Verification [151.62491805851107]
We develop $\beta$-CROWN, a bound propagation based verifier that can fully encode per-neuron splits.
$\beta$-CROWN is close to three orders of magnitude faster than LP-based BaB methods for robustness verification.
By terminating BaB early, our method can also be used for incomplete verification.
arXiv Detail & Related papers (2021-03-11T11:56:54Z) - Global Optimization of Objective Functions Represented by ReLU Networks [77.55969359556032]
Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts.
Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures.
We propose an approach that integrates the optimization process into the verification procedure, achieving better performance than the naive approach.
arXiv Detail & Related papers (2020-10-07T08:19:48Z) - Patch-level Neighborhood Interpolation: A General and Effective
Graph-based Regularization Strategy [77.34280933613226]
We propose a general regularizer called Patch-level Neighborhood Interpolation (Pani) that conducts a non-local representation in the computation of networks.
Our proposal explicitly constructs patch-level graphs in different layers and then linearly interpolates neighborhood patch features, serving as a general and effective regularization strategy.
arXiv Detail & Related papers (2019-11-21T06:31:59Z)