Provably Bounding Neural Network Preimages
- URL: http://arxiv.org/abs/2302.01404v4
- Date: Sun, 17 Mar 2024 17:36:02 GMT
- Title: Provably Bounding Neural Network Preimages
- Authors: Suhas Kotha, Christopher Brix, Zico Kolter, Krishnamurthy Dvijotham, Huan Zhang
- Abstract summary: We present the INVPROP algorithm for verifying properties over the preimage of a linearly constrained output set.
In contrast to other approaches, our efficient algorithm is GPU-accelerated and does not require a linear programming solver.
Our results show that in certain settings, we find over-approximations over 2500x tighter than prior work while being 2.5x faster.
- Score: 18.810882386083655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most work on the formal verification of neural networks has focused on bounding the set of outputs that correspond to a given set of inputs (for example, bounded perturbations of a nominal input). However, many use cases of neural network verification require solving the inverse problem, or over-approximating the set of inputs that lead to certain outputs. We present the INVPROP algorithm for verifying properties over the preimage of a linearly constrained output set, which can be combined with branch-and-bound to increase precision. In contrast to other approaches, our efficient algorithm is GPU-accelerated and does not require a linear programming solver. We demonstrate our algorithm for identifying safe control regions for a dynamical system via backward reachability analysis, verifying adversarial robustness, and detecting out-of-distribution inputs to a neural network. Our results show that in certain settings, we find over-approximations over 2500x tighter than prior work while being 2.5x faster. By strengthening robustness verification with output constraints, we consistently verify more properties than the previous state of the art on multiple benchmarks, including a large model with 167k neurons in VNN-COMP 2023. Our algorithm has been incorporated into the $\alpha,\!\beta$-CROWN verifier, available at https://abcrown.org.
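To make the inverse problem concrete, here is a minimal sketch of box tightening under an output constraint for a purely *linear* model, where the preimage of {y : c @ y <= d} is available in closed form. The `tighten_box` helper is hypothetical and uses plain interval constraint propagation; INVPROP itself handles ReLU networks by propagating output constraints backward with GPU-friendly bound propagation and branch-and-bound.

```python
import numpy as np

def tighten_box(C, d, lo, hi):
    """Shrink the box [lo, hi] against linear constraints C @ x <= d using
    interval constraint propagation -- no LP solver required.  The result
    is a box over-approximation of {x in [lo, hi] : C @ x <= d}."""
    lo, hi = lo.astype(float), hi.astype(float)
    for _ in range(10):                      # a few sweeps; the box only shrinks
        for a, e in zip(C, d):
            mins = np.where(a > 0, a * lo, a * hi)   # min of a_j * x_j per coordinate
            for i in range(len(lo)):
                rest = mins.sum() - mins[i]          # best case for other coordinates
                if a[i] > 0:
                    hi[i] = min(hi[i], (e - rest) / a[i])
                elif a[i] < 0:
                    lo[i] = max(lo[i], (e - rest) / a[i])
    return lo, hi

# For a purely linear model y = W @ x + b, the preimage of {y : c @ y <= d}
# is the halfspace {x : (c @ W) @ x <= d - c @ b}.
W = np.array([[1.0, 2.0], [3.0, -1.0]])
b = np.array([0.5, -0.5])
c, thresh = np.array([1.0, 1.0]), 1.0
C = (c @ W)[None, :]                         # single constraint row: [4, 1]
d = np.array([thresh - c @ b])               # right-hand side: [1.0]
print(tighten_box(C, d, np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```

Here the constraint 4*x0 + x1 <= 1 over the box [-1, 1]^2 tightens the first coordinate to x0 <= 0.5: every input whose output satisfies the constraint provably lies in the smaller box.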
Related papers
- PREMAP: A Unifying PREiMage APproximation Framework for Neural Networks [30.701422594374456]
We present a framework for preimage abstraction that produces under- and over-approximations of any polyhedral output set.
We evaluate our method on a range of tasks, demonstrating significant improvement in efficiency and scalability to high-input-dimensional image classification tasks.
arXiv Detail & Related papers (2024-08-17T17:24:47Z)
- Tight Verification of Probabilistic Robustness in Bayesian Neural Networks [17.499817915644467]
We introduce two algorithms for computing tight guarantees on the probabilistic robustness of Bayesian Neural Networks (BNNs).
Our algorithms efficiently search the parameter space for safe weights using iterative expansion and the network's gradient.
In addition to proving that our algorithms compute tighter bounds than the state of the art, we evaluate them against state-of-the-art methods on standard benchmarks.
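For orientation only, a naive Monte Carlo estimate of the quantity at stake (the posterior probability that a robustness check passes) is sketched below for a toy linear model with a Gaussian weight posterior. All names here are illustrative, and the paper's algorithms replace such sampling with certified bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weights(mean, std):
    # hypothetical Gaussian posterior over a single weight matrix
    return mean + std * rng.standard_normal(mean.shape)

def robust_at(W, x, eps):
    # Toy linear "network": the score W @ x keeps its sign on the whole
    # l_inf ball of radius eps iff the margin exceeds eps * ||w||_1 per row.
    return np.all(np.abs(W @ x) > eps * np.abs(W).sum(axis=1))

mean = np.array([[1.0, -2.0]])
std = 0.1 * np.ones_like(mean)
x, eps, n = np.array([0.5, -0.5]), 0.05, 10_000
hits = sum(robust_at(sample_weights(mean, std), x, eps) for _ in range(n))
print(f"estimated P(robust) ~ {hits / n:.3f}")
```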
arXiv Detail & Related papers (2024-01-21T23:41:32Z)
- Provable Preimage Under-Approximation for Neural Networks (Full Version) [27.519993407376862]
We propose an efficient anytime algorithm for generating symbolic under-approximations of the preimage of any polyhedral output set for neural networks.
For quantitative verification, we present a sound and complete algorithm that exploits our disjoint union of polytopes representation to provide formal guarantees.
Empirically, we validate the efficacy of our method across a range of domains, including a high-dimensional MNIST classification task.
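The disjoint-union-of-polytopes representation can be illustrated with a toy example. The polytopes below are hand-written; constructing them automatically via symbolic bounds and branching is the paper's contribution and is not reproduced here.

```python
import numpy as np

# An under-approximation represented as a disjoint union of polytopes
# {x : A @ x <= b}; the polytopes here are hand-written for illustration.
polytopes = [
    (np.array([[ 1.0, 0.0], [0.0,  1.0]]), np.array([-0.1, -0.1])),  # x <= -0.1, y <= -0.1
    (np.array([[-1.0, 0.0], [0.0, -1.0]]), np.array([-0.1, -0.1])),  # x >=  0.1, y >=  0.1
]

def in_union(x):
    return any(np.all(A @ x <= b + 1e-12) for A, b in polytopes)

def net(x):                       # toy "network" whose preimage we study
    return np.sign(x[0] * x[1])   # target output set: {y : y >= 0}

# Soundness by sampling: every point of the union must land in the output
# set (it is an *under*-approximation, so it need not cover the preimage).
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(5000, 2))
inside = [p for p in pts if in_union(p)]
assert all(net(p) >= 0 for p in inside)
covered = len(inside) / sum(net(p) >= 0 for p in pts)
print(f"fraction of sampled preimage points covered: {covered:.2f}")
```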
arXiv Detail & Related papers (2023-05-05T16:55:27Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
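A brute-force rendering of the counting problem on a discretized toy input space makes the object concrete; the paper returns exact counts over continuous input regions without such discretization, and the network and property below are made up for illustration.

```python
import numpy as np
from itertools import product

def net(x):  # tiny ReLU network standing in for the DNN under analysis
    h = np.maximum(0, np.array([[1.0, -1.0], [0.5, 1.0]]) @ x)
    return np.array([1.0, -1.0]) @ h

def unsafe(y):          # safety property: the output must stay below 1
    return y > 1.0

# Exact count over a discretized input grid -- feasible only for tiny
# input spaces; exact methods avoid enumerating points like this.
grid = np.linspace(-1, 1, 101)
violations = sum(unsafe(net(np.array(p))) for p in product(grid, repeat=2))
print(f"{violations} of {101 ** 2} input configurations violate the property")
```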
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that it can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
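The target object, an $\ell_\infty$-norm box containing a layer's reachable set, can be computed for an explicit ReLU layer with standard interval arithmetic, as in the sketch below. The paper's embedded-network construction yields such boxes for implicit (equilibrium) networks and is not reproduced here.

```python
import numpy as np

def relu_layer_box(W, b, lo, hi):
    """Push an l_inf box through x -> relu(W @ x + b) with interval
    arithmetic, returning a box containing the layer's reachable set."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    mid = W @ center + b
    rad = np.abs(W) @ radius
    return np.maximum(0, mid - rad), np.maximum(0, mid + rad)

W, b = np.array([[1.0, 2.0], [-1.0, 1.0]]), np.array([0.0, 0.1])
lo, hi = relu_layer_box(W, b, np.array([-0.1, -0.1]), np.array([0.1, 0.1]))
print("reachable set is contained in", list(zip(lo, hi)))
```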
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- A Mixed Integer Programming Approach for Verifying Properties of Binarized Neural Networks [44.44006029119672]
We propose a mixed integer programming formulation for BNN verification.
We demonstrate our approach by verifying properties of BNNs trained on the MNIST dataset and an aircraft collision avoidance controller.
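A generic big-M MIP encoding of a single binarized neuron (inputs and weights in {-1,+1}, sign activation) conveys the flavor. This sketch uses the PuLP modeling library and is not the paper's exact formulation.

```python
from pulp import LpMaximize, LpProblem, LpStatus, LpVariable, lpSum, value

# One binarized neuron: inputs x_i in {-1,+1}, fixed weights w_i in {-1,+1},
# activation a = sign(sum_i w_i * x_i) with the convention sign(0) = +1.
w = [1, -1, 1]
n = len(w)

prob = LpProblem("bnn_neuron", LpMaximize)
u = [LpVariable(f"u{i}", cat="Binary") for i in range(n)]   # x_i = 2*u_i - 1
v = LpVariable("v", cat="Binary")                           # a   = 2*v   - 1
s = lpSum(wi * (2 * ui - 1) for wi, ui in zip(w, u))        # pre-activation

M = n                            # |s| <= n, so n is a valid big-M constant
prob += s >= -M * (1 - v)        # v = 1  forces  s >= 0
prob += s <= M * v - (1 - v)     # v = 0  forces  s <= -1 (s is integral)

prob += 1 - v                    # query: maximize 1 - v, i.e. look for a = -1
prob.solve()
print(LpStatus[prob.status], "| neuron can output -1:", value(v) == 0)
```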
arXiv Detail & Related papers (2022-03-11T01:11:29Z)
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
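The baseline rule being robustified is easy to state. The following sketch runs classic Nearest Neighbor exploration with full distance knowledge for brevity; a true online explorer only learns edges at visited vertices, and the paper's integration of predictions is omitted.

```python
import heapq

def shortest_dists(adj, src):
    """Dijkstra over a dict-of-dicts adjacency map {u: {v: weight}}."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, x = heapq.heappop(pq)
        if d > dist.get(x, float("inf")):
            continue
        for y, w in adj[x].items():
            if d + w < dist.get(y, float("inf")):
                dist[y] = d + w
                heapq.heappush(pq, (d + w, y))
    return dist

def nearest_neighbor_tour(adj, start):
    """Walk to the closest unvisited vertex until all are seen, then return."""
    unvisited, tour, cost, cur = set(adj) - {start}, [start], 0.0, start
    while unvisited:
        dist = shortest_dists(adj, cur)
        cur = min(unvisited, key=lambda v: dist[v])   # greedy NN step
        cost += dist[cur]
        tour.append(cur)
        unvisited.remove(cur)
    return tour, cost + shortest_dists(adj, cur)[start]

adj = {"a": {"b": 1, "c": 4}, "b": {"a": 1, "c": 2}, "c": {"a": 4, "b": 2}}
print(nearest_neighbor_tour(adj, "a"))   # (['a', 'b', 'c'], 6.0)
```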
arXiv Detail & Related papers (2021-12-10T10:02:31Z)
- Neural Network Branch-and-Bound for Neural Network Verification [26.609606492971967]
We propose a novel machine learning framework that can be used for designing an effective branching strategy.
We learn two graph neural networks (GNNs) that both directly treat the network we want to verify as a graph input.
We show that our GNN models generalize well to harder properties on larger unseen networks.
arXiv Detail & Related papers (2021-07-27T14:42:57Z)
- Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
On adversarially trained MNIST and CIFAR-10 networks, we significantly improve $\ell_\infty$ verified robust accuracy from 1% to 88% and from 6% to 40%, respectively.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
arXiv Detail & Related papers (2020-10-22T12:32:29Z)
- Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
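A minimal numpy sketch of the idea for one hidden layer: duplicate the pre-activation, dualize the copy constraint with a multiplier rho, solve both subproblems in closed form, and run supergradient ascent on rho. This follows the general Lagrangian-decomposition recipe, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)
x_lo, x_hi = -np.ones(2), np.ones(2)

# Interval bounds on the pre-activation z = W1 @ x + b1 over the input box.
mid, rad = (x_lo + x_hi) / 2, (x_hi - x_lo) / 2
z_lo = W1 @ mid - np.abs(W1) @ rad + b1
z_hi = W1 @ mid + np.abs(W1) @ rad + b1
a = W2[0]       # single output, so the objective is a @ relu(z) + b2

def dual_bound(rho):
    """Valid lower bound on min_x W2 @ relu(W1 @ x + b1) + b2 for any rho:
    the copy constraint z_A = z_B linking the layers is dualized, and both
    subproblems then solve in closed form."""
    v = W1.T @ rho
    xA = np.where(v > 0, x_lo, x_hi)          # minimizes a linear fn on a box
    zA = W1 @ xA + b1
    # elementwise min over z in [z_lo, z_hi] of a*relu(z) - rho*z:
    # piecewise linear, so it suffices to check the interval ends and 0.
    cands = np.stack([z_lo, np.clip(0.0, z_lo, z_hi), z_hi])
    vals = a * np.maximum(0, cands) - rho * cands
    zB = cands[vals.argmin(axis=0), np.arange(len(a))]
    bound = rho @ zA + vals.min(axis=0).sum() + b2[0]
    return bound, zA - zB                     # bound and a supergradient

rho = np.zeros(3)
for t in range(200):                          # supergradient ascent on rho
    bound, g = dual_bound(rho)
    rho += 0.05 / np.sqrt(t + 1) * g
print("certified lower bound:", dual_bound(rho)[0])

# Sanity check: the bound must sit below the sampled minimum.
xs = rng.uniform(x_lo, x_hi, size=(10_000, 2))
print("sampled minimum:     ", (np.maximum(0, xs @ W1.T + b1) @ a + b2[0]).min())
```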
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.