Open- and Closed-Loop Neural Network Verification using Polynomial Zonotopes
- URL: http://arxiv.org/abs/2207.02715v2
- Date: Tue, 18 Apr 2023 02:58:42 GMT
- Title: Open- and Closed-Loop Neural Network Verification using Polynomial Zonotopes
- Authors: Niklas Kochdumper, Christian Schilling, Matthias Althoff, Stanley Bak
- Abstract summary: We present a novel approach to efficiently compute tight non-convex enclosures of the image through neural networks.
In particular, we abstract the input-output relation of each neuron by a polynomial approximation, which is evaluated in a set-based manner using polynomial zonotopes.
This results in a superior performance compared to other methods.
- Score: 6.591194329459251
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel approach to efficiently compute tight non-convex
enclosures of the image through neural networks with ReLU, sigmoid, or
hyperbolic tangent activation functions. In particular, we abstract the
input-output relation of each neuron by a polynomial approximation, which is
evaluated in a set-based manner using polynomial zonotopes. While our approach
can also be beneficial for open-loop neural network verification, our main
application is reachability analysis of neural network controlled systems,
where polynomial zonotopes are able to capture the non-convexity caused by the
neural network as well as the system dynamics. This results in a superior
performance compared to other methods, as we demonstrate on various benchmarks.
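For intuition about the per-neuron abstraction described above, the sketch below (illustrative Python, not the authors' implementation; the function names, the least-squares fit, and the sampled error term are assumptions of this example) fits a quadratic polynomial to a tanh activation on a bounded input range and computes a crude interval enclosure of its image. The actual method propagates such per-neuron approximations through polynomial zonotopes, which retain the polynomial dependencies instead of collapsing them to intervals.

import numpy as np

def quad_fit(f, lower, upper, samples=101):
    # Least-squares quadratic approximation of the activation f on [lower, upper].
    xs = np.linspace(lower, upper, samples)
    coeffs = np.polyfit(xs, f(xs), deg=2)                   # [c2, c1, c0]
    err = np.max(np.abs(np.polyval(coeffs, xs) - f(xs)))    # sampled (non-rigorous) error bound
    return coeffs, err

def enclose_image(coeffs, err, lower, upper):
    # Interval enclosure of p(x) + [-err, err] for x in [lower, upper], p quadratic.
    c2, c1, c0 = coeffs
    candidates = [np.polyval(coeffs, lower), np.polyval(coeffs, upper)]
    if c2 != 0:
        vertex = -c1 / (2 * c2)                              # extremum of the parabola
        if lower <= vertex <= upper:
            candidates.append(np.polyval(coeffs, vertex))
    return min(candidates) - err, max(candidates) + err

coeffs, err = quad_fit(np.tanh, -1.0, 2.0)
print(enclose_image(coeffs, err, -1.0, 2.0))                 # enclosure of tanh([-1, 2])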
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks (a short numeric sketch of this weight-permutation symmetry appears after this list).
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Exploring the Approximation Capabilities of Multiplicative Neural Networks for Smooth Functions [9.936974568429173]
We consider two classes of target functions: generalized bandlimited functions and Sobolev-Type balls.
Our results demonstrate that multiplicative neural networks can approximate these functions with significantly fewer layers and neurons.
These findings suggest that multiplicative gates can outperform standard feed-forward layers and have potential for improving neural network design.
arXiv Detail & Related papers (2023-01-11T17:57:33Z)
- On the Approximation and Complexity of Deep Neural Networks to Invariant Functions [0.0]
We study the approximation and complexity of deep neural networks to invariant functions.
We show that a broad range of invariant functions can be approximated by various types of neural network models.
We provide a feasible application that connects the parameter estimation and forecasting of high-resolution signals with our theoretical conclusions.
arXiv Detail & Related papers (2022-10-27T09:19:19Z)
- Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
arXiv Detail & Related papers (2022-10-14T19:31:39Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
The study on NTK has been devoted to typical neural network architectures, but is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- Validation of RELU nets with tropical polyhedra [7.087237546722617]
We present an approach that abstracts ReLU feedforward neural networks using tropical polyhedra.
We show how the connection between ReLU networks and tropical rational functions can provide approaches for range analysis of ReLU neural networks.
arXiv Detail & Related papers (2021-07-30T06:22:59Z)
- Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions [4.932130498861987]
We propose a new type of neural networks, Kronecker neural networks (KNNs), that form a general framework for neural networks with adaptive activation functions.
Under suitable conditions, KNNs induce a faster decay of the loss than standard feed-forward networks.
arXiv Detail & Related papers (2021-05-20T04:54:57Z)
- Towards a mathematical framework to inform Neural Network modelling via Polynomial Regression [0.0]
It is shown that almost identical predictions can be made when certain conditions are met locally.
When learning from generated data, the proposed method produces polynomials that correctly approximate the data locally.
arXiv Detail & Related papers (2021-02-07T17:56:16Z)
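As a small numeric illustration of the weight-space permutation symmetry mentioned in the Permutation Equivariant Neural Functionals entry above (hypothetical code, not taken from that paper), the sketch below checks that reordering the hidden neurons of a two-layer ReLU network, i.e., permuting the rows of the first weight matrix and the columns of the second, leaves the computed function unchanged.

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)     # hidden layer: 3 inputs -> 5 neurons
W2, b2 = rng.normal(size=(2, 5)), rng.normal(size=2)     # output layer: 5 neurons -> 2 outputs

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.maximum(0.0, W1 @ x + b1) + b2         # two-layer ReLU network

perm = rng.permutation(5)                                  # arbitrary reordering of the hidden neurons
x = rng.normal(size=3)
y_original = mlp(x, W1, b1, W2, b2)
y_permuted = mlp(x, W1[perm], b1[perm], W2[:, perm], b2)
print(np.allclose(y_original, y_permuted))                 # True: the permuted weights define the same function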
This list is automatically generated from the titles and abstracts of the papers on this site.