Bounding the Complexity of Formally Verifying Neural Networks: A Geometric Approach
- URL: http://arxiv.org/abs/2012.11761v2
- Date: Thu, 25 Mar 2021 13:17:14 GMT
- Title: Bounding the Complexity of Formally Verifying Neural Networks: A Geometric Approach
- Authors: James Ferlez and Yasser Shoukry
- Abstract summary: We consider the computational complexity of formally verifying ReLU Neural Networks (NNs) against convex polytopic specifications.
We show that for two different NN architectures, shallow NNs and Two-Level Lattice (TLL) NNs, the verification problem is polynomial in the number of neurons.
Both algorithms efficiently translate the NN parameters into a partitioning of the NN's input space by means of hyperplanes.
- Score: 1.9493449206135296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider the computational complexity of formally verifying
the behavior of Rectified Linear Unit (ReLU) Neural Networks (NNs), where
verification entails determining whether the NN satisfies convex polytopic
specifications. Specifically, we show that for two different NN architectures
-- shallow NNs and Two-Level Lattice (TLL) NNs -- the verification problem with
(convex) polytopic constraints is polynomial in the number of neurons in the NN
to be verified, when all other aspects of the verification problem are held fixed.
We achieve these complexity results by exhibiting explicit (but similar)
verification algorithms for each type of architecture. Both algorithms
efficiently translate the NN parameters into a partitioning of the NN's input
space by means of hyperplanes; this has the effect of partitioning the original
verification problem into polynomially many sub-verification problems derived
from the geometry of the neurons. We show that these sub-problems may be chosen
so that the NN is purely affine within each, and hence each sub-problem is
solvable in polynomial time by means of a Linear Program (LP). Thus, a
polynomial-time algorithm for the original verification problem can be obtained
using known algorithms for enumerating the regions in a hyperplane arrangement.
Finally, we adapt our proposed algorithms to the verification of dynamical
systems, specifically when these NN architectures are used as state-feedback
controllers for LTI systems. We further evaluate the viability of this approach
numerically.
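To make the pipeline concrete, here is a minimal sketch (not the authors' implementation) of the per-region verification step for a shallow ReLU NN f(x) = C·ReLU(Wx + b) + d: each sign pattern over the hidden neurons defines a candidate region of the hyperplane arrangement, the NN is affine on that region, and one LP per output constraint checks the polytopic specification there. For readability the sketch enumerates all 2^n sign patterns, which is exponential; the paper's polynomial bound comes instead from enumerating only the regions actually present in the arrangement (at most O(n^d) of them for fixed input dimension d) with known arrangement-enumeration algorithms. All names and the tolerance are illustrative.

```python
# Hedged sketch: verify a shallow ReLU NN against a polytopic output
# specification by checking one LP per (candidate) activation region.
# Naive 2^n enumeration is used here for clarity only; the paper's
# algorithms enumerate the polynomially many arrangement regions.
import itertools
import numpy as np
from scipy.optimize import linprog

def verify_shallow_relu(W, b, C, d, A_in, b_in, A_out, b_out, tol=1e-9):
    """Return True iff C @ relu(W @ x + b) + d satisfies A_out @ y <= b_out
    for every x in the input polytope A_in @ x <= b_in (illustrative names)."""
    n, dim = W.shape  # n hidden neurons, each contributing one hyperplane
    for signs in itertools.product((1.0, -1.0), repeat=n):
        s = np.asarray(signs)
        # Region where neuron i is active iff s_i = +1:
        #   s_i * (W_i @ x + b_i) >= 0  <=>  -s_i * W_i @ x <= s_i * b_i
        A_reg = np.vstack([A_in, -(s[:, None] * W)])
        b_reg = np.concatenate([b_in, s * b])
        # On this region the NN is purely affine: f(x) = M @ x + m,
        # with only the active neurons contributing.
        act = (s > 0).astype(float)
        M = (C * act) @ W
        m = (C * act) @ b + d
        # Maximize the violation of each output half-space over the region.
        for a_row, rhs in zip(A_out, b_out):
            res = linprog(-(a_row @ M), A_ub=A_reg, b_ub=b_reg,
                          bounds=[(None, None)] * dim)
            if res.status == 3:      # unbounded objective: violation exists
                return False
            if res.status == 0 and (-res.fun + a_row @ m) > rhs + tol:
                return False         # counterexample lies in this region
            # status == 2 means the sign pattern cuts out an empty region.
    return True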
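```

Swapping the itertools loop for an arrangement-region enumerator over the n neuron hyperplanes (e.g., a reverse-search style algorithm) is what turns this sketch into the polynomial-time procedure the abstract describes, since each region then contributes only polynomially much LP work.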
Related papers
- Neural Networks Asymptotic Behaviours for the Resolution of Inverse Problems [0.0]
This paper presents a study of the effectiveness of Neural Network (NN) techniques for deconvolution inverse problems.
We consider the limits of NNs, corresponding to Gaussian Processes (GPs), in which non-linearities in the parameters of the NN can be neglected.
We address the deconvolution inverse problem in the case of a quantum harmonic oscillator simulated through Monte Carlo techniques on a lattice.
arXiv Detail & Related papers (2024-02-14T17:42:24Z)
- Correctness Verification of Neural Networks Approximating Differential Equations [0.0]
Neural Networks (NNs) can approximate the solutions of Partial Differential Equations (PDEs).
Such NNs can become integral parts of simulation software tools that accelerate the simulation of complex dynamic systems by more than 100 times.
This work addresses the verification of these NN approximations by defining the NN derivative as a finite difference approximation (a minimal sketch of this idea appears after this list).
For the first time, we tackle the problem of bounding an NN function without a priori knowledge of the output domain.
arXiv Detail & Related papers (2024-02-12T12:55:35Z)
- Finding Nontrivial Minimum Fixed Points in Discrete Dynamical Systems [29.7237944669855]
We formulate a novel optimization problem of finding a nontrivial fixed point of the system with the minimum number of affected nodes.
Since the problem is computationally intractable in general, we identify several special cases for which it can be solved efficiently.
For solving the problem on larger networks, we propose a general framework along with greedy selection methods.
arXiv Detail & Related papers (2023-01-06T14:46:01Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly-complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Universal approximation property of invertible neural networks [76.95927093274392]
Invertible neural networks (INNs) are neural network architectures with invertibility by design.
Thanks to their invertibility and the tractability of Jacobian, INNs have various machine learning applications such as probabilistic modeling, generative modeling, and representation learning.
arXiv Detail & Related papers (2022-04-15T10:45:26Z)
- Geometric Path Enumeration for Equivalence Verification of Neural Networks [2.007262412327553]
We focus on the formal verification problem of NN equivalence which aims to prove that two NNs show equivalent behavior.
We show a theoretical result by proving that the epsilon-equivalence problem is coNP-complete.
We then extend a geometric path enumeration algorithm to equivalence verification, implement it, and evaluate the optimizations necessary for its practical use.
arXiv Detail & Related papers (2021-12-13T11:56:08Z)
- Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks [75.33431791218302]
We study the training problem of deep neural networks and introduce an analytic approach to unveil hidden convexity in the optimization landscape.
We consider a deep parallel ReLU network architecture, which also includes standard deep networks and ResNets as its special cases.
arXiv Detail & Related papers (2021-10-18T18:00:36Z)
- Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable domain decomposition approach for solving differential equations [20.277873724720987]
We propose a new, scalable approach for solving large problems relating to differential equations, called Finite Basis PINNs (FBPINNs).
FBPINNs are inspired by classical finite element methods, where the solution of the differential equation is expressed as the sum of a finite set of basis functions with compact support.
In FBPINNs neural networks are used to learn these basis functions, which are defined over small, overlapping subdomain problems.
arXiv Detail & Related papers (2021-07-16T13:03:47Z)
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose LocalDrop, a new approach to the regularization of neural networks based on the local Rademacher complexity.
We develop a new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs), based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
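Returning to the Correctness Verification entry above: the finite-difference idea it mentions can be illustrated in a few lines of NumPy. This is a hedged toy sketch, not that paper's actual method; the tiny network, the random weights, the step size, and the u'(x) = cos(x) residual check are all illustrative placeholders.

```python
# Toy illustration: approximate an NN's derivative by central finite
# differences and measure an ODE residual. Weights are random stand-ins;
# a real surrogate would be trained, and the residual bounded rigorously.
import numpy as np

def nn_forward(x, W1, b1, W2, b2):
    """Tiny ReLU network u(x) standing in for a trained PDE/ODE surrogate."""
    h = np.maximum(0.0, W1 @ x + b1)
    return W2 @ h + b2

def fd_derivative(u, x, h=1e-4):
    """Central finite-difference approximation of du/dx at scalar input x."""
    return (u(x + h) - u(x - h)) / (2.0 * h)

# Example: check how well the surrogate satisfies u'(x) = cos(x)
# (i.e. u ~ sin) on a few sample points.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)
u = lambda x: nn_forward(np.atleast_1d(x), W1, b1, W2, b2)[0]

for x in np.linspace(0.0, 1.0, 5):
    residual = fd_derivative(u, x) - np.cos(x)
    print(f"x={x:.2f}  |u'(x) - cos(x)| = {abs(residual):.3e}")
```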