Zonotope Domains for Lagrangian Neural Network Verification
- URL: http://arxiv.org/abs/2210.08069v1
- Date: Fri, 14 Oct 2022 19:31:39 GMT
- Title: Zonotope Domains for Lagrangian Neural Network Verification
- Authors: Matt Jordan, Jonathan Hayase, Alexandros G. Dimakis, Sewoong Oh
- Abstract summary: We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural network verification aims to provide provable bounds for the output of
a neural network for a given input range. Notable prior works in this domain
have either generated bounds using abstract domains, which preserve some
dependency between intermediate neurons in the network; or framed verification
as an optimization problem and solved a relaxation using Lagrangian methods. A
key drawback of the latter technique is that each neuron is treated
independently, thereby ignoring important neuron interactions. We provide an
approach that merges these two threads and uses zonotopes within a Lagrangian
decomposition. Crucially, we can decompose the problem of verifying a deep
neural network into the verification of many 2-layer neural networks. While
each of these problems is provably hard, we provide efficient relaxation
methods that are amenable to efficient dual ascent procedures. Our technique
yields bounds that improve upon both linear programming and Lagrangian-based
verification techniques in both time and bound tightness.
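To make the zonotope abstract domain concrete, the following is a minimal numpy sketch (not the authors' implementation; all function names are ours) of the standard DeepZ-style propagation: an affine layer maps a zonotope exactly, and an unstable ReLU is soundly over-approximated by a linear relaxation plus a fresh error generator.

```python
import numpy as np

def zonotope_bounds(c, G):
    """Concrete interval bounds of the zonotope {c + G @ e : e in [-1, 1]^m}."""
    r = np.abs(G).sum(axis=1)  # per-coordinate radius
    return c - r, c + r

def affine(c, G, W, b):
    """Exact image of a zonotope under the affine map x -> W @ x + b."""
    return W @ c + b, W @ G

def relu(c, G):
    """Sound over-approximation of ReLU on a zonotope (DeepZ-style)."""
    l, u = zonotope_bounds(c, G)
    n = G.shape[0]
    c_out, G_out, new_gens = c.copy(), G.copy(), []
    for i in range(n):
        if u[i] <= 0:            # always inactive: output is exactly 0
            c_out[i], G_out[i] = 0.0, 0.0
        elif l[i] >= 0:          # always active: identity, nothing to do
            pass
        else:                    # unstable: slope-lambda relaxation + new error term
            lam = u[i] / (u[i] - l[i])
            mu = -lam * l[i] / 2.0
            c_out[i] = lam * c[i] + mu
            G_out[i] *= lam
            g = np.zeros(n)
            g[i] = mu
            new_gens.append(g)
    if new_gens:
        G_out = np.hstack([G_out, np.stack(new_gens, axis=1)])
    return c_out, G_out
```

For a single input in [-1, 1], this produces an enclosure of ReLU([-1, 1]) = [0, 1]; the extra generator is what preserves dependency between neurons that per-neuron interval methods discard.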
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- Mean-field Analysis on Two-layer Neural Networks from a Kernel Perspective [40.69646918673903]
We show that two-layer neural networks can learn a union of multiple reproducing kernel Hilbert spaces more efficiently than any kernel methods.
We also develop a label noise procedure, which converges to the global optimum, and show that the degrees of freedom appear as an implicit regularization.
arXiv Detail & Related papers (2024-03-22T02:41:57Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Convergence and Recovery Guarantees of Unsupervised Neural Networks for Inverse Problems [2.6695224599322214]
We provide deterministic convergence and recovery guarantees for the class of unsupervised feedforward multilayer neural networks trained to solve inverse problems.
We also derive overparametrization bounds under which a two-layer Deep Inverse Prior network with a smooth activation function will benefit from our guarantees.
arXiv Detail & Related papers (2023-09-21T14:48:02Z)
- Approximating nonlinear functions with latent boundaries in low-rank excitatory-inhibitory spiking networks [5.955727366271805]
We put forth a new framework for spike-based computation in excitatory-inhibitory spiking networks.
Our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
arXiv Detail & Related papers (2023-07-18T15:17:00Z)
- Open- and Closed-Loop Neural Network Verification using Polynomial Zonotopes [6.591194329459251]
We present a novel approach to efficiently compute tight non-convex enclosures of the image of an input set through a neural network.
In particular, we abstract the input-output relation of each neuron by an approximation.
This results in superior performance compared to other methods.
arXiv Detail & Related papers (2022-07-06T14:39:19Z)
- Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed [27.38015169185521]
We show theoretically that two-layer neural networks (2LNN) with only a few hidden neurons can beat the performance of kernel learning.
We show how over-parametrising the neural network leads to faster convergence, but does not improve its final performance.
arXiv Detail & Related papers (2021-02-23T15:10:15Z)
- Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that refines the verification problem in an iterative manner, and we explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss, and even worse, its discontinuity brings difficulty to the optimization of the deep network.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
- Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
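The Lagrangian decomposition idea behind the entry above, and behind the main paper, can be sketched on a toy problem. The code below is a generic illustration of the principle (not any paper's actual method; all names are ours): a coupled problem min f(x) + g(z) subject to x = z is split into two independent subproblems linked by a multiplier, which is updated by dual (sub)gradient ascent.

```python
def dual_ascent(f_argmin, g_argmin, steps=200, lr=0.1):
    """Toy Lagrangian decomposition for min f(x) + g(z) s.t. x = z.

    f_argmin(lam) returns argmin_x f(x) + lam * x, and
    g_argmin(lam) returns argmin_z g(z) - lam * z; the dual
    function's subgradient at lam is the constraint residual x - z.
    """
    lam = 0.0
    for _ in range(steps):
        x = f_argmin(lam)       # solve the first subproblem independently
        z = g_argmin(lam)       # solve the second subproblem independently
        lam += lr * (x - z)     # ascend the dual along its subgradient
    return lam, x, z

# Example: f(x) = (x - 1)^2 and g(z) = (z + 1)^2, whose closed-form
# subproblem minimizers are 1 - lam/2 and -1 + lam/2; the coupled
# optimum is x = z = 0, reached at the dual optimum lam = 2.
lam, x, z = dual_ascent(lambda lam: 1 - lam / 2, lambda lam: -1 + lam / 2)
```

Any feasible multiplier gives a valid bound on the coupled problem, which is what makes this style of decomposition attractive for verification: the subproblems here play the role of the per-block (e.g. 2-layer) verification problems.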
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.