Reachability In Simple Neural Networks
- URL: http://arxiv.org/abs/2203.07941v4
- Date: Wed, 11 Oct 2023 09:07:24 GMT
- Title: Reachability In Simple Neural Networks
- Authors: Marco Sälzer and Martin Lange
- Abstract summary: We show that NP-hardness already holds for restricted classes of simple specifications and neural networks.
We give a thorough discussion and outlook of possible extensions for this direction of research on neural network verification.
- Score: 2.7195102129095003
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate the complexity of the reachability problem for (deep) neural
networks: does it compute valid output given some valid input? It was recently
claimed that the problem is NP-complete for general neural networks and
specifications over the input/output dimension given by conjunctions of linear
inequalities. We recapitulate the proof and repair some flaws in the original
upper and lower bound proofs. Motivated by the general result, we show that
NP-hardness already holds for restricted classes of simple specifications and
neural networks. Allowing for a single hidden layer and an output dimension of
one as well as neural networks with just one negative, zero and one positive
weight or bias is sufficient to ensure NP-hardness. Additionally, we give a
thorough discussion and outlook of possible extensions for this direction of
research on neural network verification.
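To make the decision problem concrete, the following minimal sketch (illustrative only; the function and variable names are ours, not the paper's) checks an NP certificate for reachability: a candidate input is tested against a conjunctive input specification, pushed through a one-hidden-layer ReLU network with output dimension one, and the resulting output is tested against a conjunctive output specification.

```python
import numpy as np

def satisfies(A, b, z):
    """Check the conjunction of linear inequalities A @ z <= b."""
    return bool(np.all(A @ z <= b + 1e-9))

def relu_net(x, W1, b1, W2, b2):
    """One hidden ReLU layer followed by a linear output layer (output dimension one)."""
    h = np.maximum(0.0, W1 @ x + b1)
    return W2 @ h + b2

def check_certificate(x, spec_in, spec_out, params):
    """Verify a reachability witness: x meets the input spec and the
    network's output on x meets the output spec."""
    A_in, b_in = spec_in
    A_out, b_out = spec_out
    y = relu_net(x, *params)
    return satisfies(A_in, b_in, x) and satisfies(A_out, b_out, y)

# Tiny example: is there an x in [0, 1]^2 whose output is at least 1?
W1 = np.array([[1.0, 1.0]]); b1 = np.array([0.0])
W2 = np.array([[1.0]]);      b2 = np.array([0.0])
spec_in  = (np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], float),
            np.array([1, 0, 1, 0], float))           # 0 <= x_1, x_2 <= 1
spec_out = (np.array([[-1.0]]), np.array([-1.0]))    # output >= 1
print(check_certificate(np.array([1.0, 0.5]), spec_in, spec_out,
                        (W1, b1, W2, b2)))           # True: x is a witness
```

Verifying such a witness takes time polynomial in the size of the network and the specification, which is what places the problem in NP; the hardness results concern finding a witness in the first place.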
Related papers
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
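As background for this entry, the sketch below (illustrative only, not the paper's verified networks) shows the two measurement models mentioned above: a sparse vector observed through linear measurements y = A x and through binarized (one-bit) measurements sign(A x).

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 50, 25, 3                       # signal dimension, measurements, sparsity
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)           # k-sparse signal

A = rng.normal(size=(m, n)) / np.sqrt(m)  # Gaussian measurement matrix
y_linear = A @ x                          # linear measurements
y_binary = np.sign(A @ x)                 # binarized linear measurements

# A recovery network maps y_linear (or y_binary) back to an estimate of x.
print(y_linear.shape, y_binary.shape)
```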
arXiv Detail & Related papers (2024-05-07T12:20:12Z) - Neural Networks with Sparse Activation Induced by Large Bias: Tighter Analysis with Bias-Generalized NTK [86.45209429863858]
We study training one-hidden-layer ReLU networks in the neural tangent kernel (NTK) regime.
We show that the neural networks possess a different limiting kernel, which we call the bias-generalized NTK.
We also study various properties of the neural networks with this new kernel.
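For context, the standard empirical neural tangent kernel that this line of work builds on is defined as below; the bias-generalized variant studied in the paper modifies the limiting object, so this formula is only the textbook baseline.

```latex
\[
\Theta_\theta(x, x') \;=\; \big\langle \nabla_\theta f_\theta(x),\, \nabla_\theta f_\theta(x') \big\rangle ,
\qquad
\Theta_\infty(x, x') \;=\; \lim_{\text{width} \to \infty} \Theta_\theta(x, x') .
\]
```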
arXiv Detail & Related papers (2023-01-01T02:11:39Z) - Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a
Polynomial Net Study [55.12108376616355]
The study of the NTK has focused on typical neural network architectures but remains incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
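The equivalence claimed above refers to the kernel regression predictor associated with an NTK Θ; as a reminder (the textbook formula, not anything specific to this paper), for training inputs X with targets Y it reads

```latex
\[
f_{\mathrm{NTK}}(x) \;=\; \Theta(x, X)\, \Theta(X, X)^{-1}\, Y ,
\]
```

where Θ(X, X) is the kernel Gram matrix; a ridge term λI can be added for regularized regression.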
arXiv Detail & Related papers (2022-09-16T06:36:06Z) - Consistency of Neural Networks with Regularization [0.0]
This paper proposes a general framework of neural networks with regularization and proves its consistency.
Two types of activation functions are considered: the hyperbolic tangent (Tanh) and the rectified linear unit (ReLU).
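A generic form of the regularized objective that such a framework analyzes (stated here only for illustration; the paper's exact penalty may differ) is the penalized empirical risk

```latex
\[
\hat\theta_n \;=\; \arg\min_{\theta}\; \frac{1}{n} \sum_{i=1}^{n} \ell\big( f_\theta(x_i),\, y_i \big) \;+\; \lambda_n\, J(\theta) ,
\]
```

where consistency means that the fitted network converges to the true regression function as n grows, for a suitable schedule of the regularization weight λ_n.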
arXiv Detail & Related papers (2022-06-22T23:33:39Z) - On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks [91.3755431537592]
We study how random pruning of the weights affects a neural network's neural tangent kernel (NTK).
In particular, this work establishes an equivalence of the NTKs between a fully-connected neural network and its randomly pruned version.
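Random pruning in this sense can be pictured as applying an independent Bernoulli mask to the weights; below is a minimal sketch (illustrative only; the 1/√p rescaling of surviving weights is a common convention in this literature, not necessarily the paper's exact setup).

```python
import numpy as np

def random_prune(W, p, rng, rescale=True):
    """Keep each weight independently with probability p (Bernoulli mask).
    Optionally rescale surviving weights to preserve second moments."""
    mask = rng.random(W.shape) < p
    W_pruned = W * mask
    return W_pruned / np.sqrt(p) if rescale else W_pruned

rng = np.random.default_rng(1)
W = rng.normal(size=(128, 64))
W_sparse = random_prune(W, p=0.5, rng=rng)
print(np.mean(W_sparse == 0.0))   # roughly half the weights are zeroed out
```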
arXiv Detail & Related papers (2022-03-27T15:22:19Z) - The mathematics of adversarial attacks in AI -- Why deep learning is
unstable despite the existence of stable neural networks [69.33657875725747]
We prove that any training procedure based on training neural networks for classification problems with a fixed architecture will yield neural networks that are either inaccurate or unstable (if accurate).
The key is that stable and accurate neural networks must have variable dimensions depending on the input; in particular, a variable dimension is a necessary condition for stability.
Our result points towards the paradox that accurate and stable neural networks exist, yet modern algorithms do not compute them.
arXiv Detail & Related papers (2021-09-13T16:19:25Z) - Reachability Is NP-Complete Even for the Simplest Neural Networks [0.0]
It was recently claimed that the problem is NP-complete for general neural networks and conjunctive input/output specifications.
We show that NP-hardness already holds for restricted classes of simple specifications and neural networks with just one layer.
arXiv Detail & Related papers (2021-08-30T12:38:57Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can solve this separation problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
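The random network in question can be sketched as follows (dimensions and the bias range are illustrative assumptions; the paper specifies standard Gaussian weights and uniformly distributed biases).

```python
import numpy as np

rng = np.random.default_rng(0)
d, width = 10, 1000                       # input dimension, hidden width

W = rng.normal(size=(width, d))           # standard Gaussian first-layer weights
b = rng.uniform(-1.0, 1.0, size=width)    # uniformly distributed biases

def random_relu_features(x):
    """First (random, untrained) layer of the two-layer ReLU network."""
    return np.maximum(0.0, W @ x + b)

# Two point clouds become linearly separable in this feature space with high
# probability once the width is large enough relative to their mutual complexity.
phi = random_relu_features(rng.normal(size=d))
print(phi.shape)
```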
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - Training Neural Networks is ∃R-complete [0.7251305766151019]
Given a neural network, training data, and a threshold, it was known that it is NP-hard to find weights for the neural network such that the total error is below the threshold.
We determine the algorithmic complexity of this fundamental problem precisely, by showing that it is ∃R-complete.
This means that it is equivalent to deciding whether a system of polynomial equations and inequalities with integer coefficients and real unknowns has a solution.
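Formally, ∃R-completeness refers to the existential theory of the reals, i.e., deciding the truth of sentences of the form

```latex
\[
\exists x_1 \cdots \exists x_n \;\; \varphi(x_1, \dots, x_n) ,
\]
```

where φ is a quantifier-free Boolean combination of polynomial equations and inequalities with integer coefficients; the class satisfies NP ⊆ ∃R ⊆ PSPACE.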
arXiv Detail & Related papers (2021-02-19T08:28:37Z) - Input Hessian Regularization of Neural Networks [31.941188983286207]
We propose an efficient algorithm to train deep neural networks with Hessian operator-norm regularization.
We show that the new regularizer is indeed computationally feasible and, furthermore, that it increases the robustness of neural networks over input gradient regularization.
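The training objective behind this entry can be written schematically as below (a sketch of the general idea only; the paper relies on an efficient approximation of the operator norm rather than computing it exactly).

```latex
\[
\min_{\theta}\; \frac{1}{n} \sum_{i=1}^{n} \Big[\, \ell\big( f_\theta(x_i), y_i \big)
\;+\; \lambda\, \big\| \nabla_x^2\, \ell\big( f_\theta(x_i), y_i \big) \big\|_{2} \,\Big] ,
\]
```

where the penalty is the operator (spectral) norm of the Hessian with respect to the input; input gradient regularization instead penalizes the norm of ∇_x ℓ.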
arXiv Detail & Related papers (2020-09-14T16:58:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.