Verifying Recurrent Neural Networks using Invariant Inference
- URL: http://arxiv.org/abs/2004.02462v2
- Date: Mon, 10 Aug 2020 08:38:35 GMT
- Title: Verifying Recurrent Neural Networks using Invariant Inference
- Authors: Yuval Jacoby, Clark Barrett, Guy Katz
- Abstract summary: We propose a novel approach for verifying properties of a widespread variant of neural networks, called recurrent neural networks.
Our approach is based on the inference of invariants, which allow us to reduce the complex problem of verifying recurrent networks into simpler, non-recurrent problems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are revolutionizing the way complex systems are
developed. However, these automatically-generated networks are opaque to
humans, making it difficult to reason about them and guarantee their
correctness. Here, we propose a novel approach for verifying properties of a
widespread variant of neural networks, called recurrent neural networks.
Recurrent neural networks play a key role in, e.g., natural language
processing, and their verification is crucial for guaranteeing the reliability
of many critical systems. Our approach is based on the inference of invariants,
which allow us to reduce the complex problem of verifying recurrent networks
into simpler, non-recurrent problems. Experiments with a proof-of-concept
implementation of our approach demonstrate that it performs orders-of-magnitude
better than the state of the art.
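The paper's full algorithm is not reproduced here, but its central idea, replacing the unbounded recurrent loop with a single non-recurrent check against an inductive invariant, can be sketched with simple interval arithmetic. Everything below (the scalar toy cell, function names, and bounds) is an illustrative assumption, not the authors' implementation:

```python
def relu(v):
    """Scalar ReLU activation."""
    return max(0.0, v)

def step_bounds(h_lo, h_hi, x_lo, x_hi, w_h, w_x):
    """One-step interval propagation through h' = relu(w_h*h + w_x*x).
    The pre-activation is monotone in h and x, so its extrema occur
    at the interval corners."""
    corners = [w_h * h + w_x * x for h in (h_lo, h_hi) for x in (x_lo, x_hi)]
    return relu(min(corners)), relu(max(corners))

def invariant_is_inductive(H, x_lo, x_hi, w_h, w_x):
    """Check the candidate invariant h_t in [0, H] for all t:
    the base case h_0 = 0 lies in the interval, and a single
    NON-recurrent step starting anywhere in [0, H] stays in [0, H]."""
    lo, hi = step_bounds(0.0, H, x_lo, x_hi, w_h, w_x)
    return 0.0 <= lo and hi <= H

# With w_h = 0.5 and inputs in [-1, 1], one step gives h' <= 0.5*H + 1,
# so H = 2 is inductive: the bound then holds at every time step.
assert invariant_is_inductive(2.0, -1.0, 1.0, 0.5, 1.0)
assert not invariant_is_inductive(2.0, -1.0, 1.0, 1.5, 1.0)  # w_h too large
```

Once such an invariant is verified, any property implied by `h_t in [0, H]` holds for arbitrarily long sequences, yet the check itself involved no unrolling; that is the reduction from a recurrent to a non-recurrent problem.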
Related papers
- Message Passing Variational Autoregressive Network for Solving Intractable Ising Models [6.261096199903392]
Many deep neural networks have been used to solve Ising models, including autoregressive neural networks, convolutional neural networks, recurrent neural networks, and graph neural networks.
Here we propose a variational autoregressive architecture with a message passing mechanism, which can effectively utilize the interactions between spin variables.
The new network trained under an annealing framework outperforms existing methods in solving several prototypical Ising spin Hamiltonians, especially for larger spin systems at low temperatures.
arXiv Detail & Related papers (2024-04-09T11:27:07Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Fully Automatic Neural Network Reduction for Formal Verification [8.017543518311196]
We introduce a fully automatic and sound reduction of neural networks using reachability analysis.
The soundness ensures that the verification of the reduced network entails the verification of the original network.
We show that our approach can reduce the number of neurons to a fraction of the original count while introducing only minor outer-approximation error.
arXiv Detail & Related papers (2023-05-03T07:13:47Z) - Certified Invertibility in Neural Networks via Mixed-Integer Programming [16.64960701212292]
Neural networks are known to be vulnerable to adversarial attacks; conversely, there may exist large, meaningful perturbations that do not affect the network's decision.
We discuss how our findings can be useful for invertibility certification in transformations between neural networks.
arXiv Detail & Related papers (2023-01-27T15:40:38Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Building Compact and Robust Deep Neural Networks with Toeplitz Matrices [93.05076144491146]
This thesis focuses on the problem of training neural networks which are compact, easy to train, reliable and robust to adversarial examples.
We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
arXiv Detail & Related papers (2021-09-02T13:58:12Z) - Thinking Deeply with Recurrence: Generalizing from Easy to Hard Sequential Reasoning Problems [51.132938969015825]
We observe that recurrent networks have the uncanny ability to closely emulate the behavior of non-recurrent deep models.
We show that recurrent networks that are trained to solve simple mazes with few recurrent steps can indeed solve much more complex problems simply by performing additional recurrences during inference.
arXiv Detail & Related papers (2021-02-22T14:09:20Z) - Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks can be trained to meet the fairness constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z) - Towards Repairing Neural Networks Correctly [6.600380575920419]
We propose a runtime verification method to ensure the correctness of neural networks.
Experimental results show that our approach effectively generates neural networks that are guaranteed to satisfy the desired properties.
arXiv Detail & Related papers (2020-12-03T12:31:07Z) - Implicit recurrent networks: A novel approach to stationary input processing with recurrent neural networks in deep learning [0.0]
In this work, we introduce and test a novel implementation of recurrent neural networks into deep learning.
We provide an algorithm that implements backpropagation on an implicit implementation of recurrent networks.
A single-layer implicit recurrent network is able to solve the XOR problem, while a feed-forward network with monotonically increasing activation function fails at this task.
arXiv Detail & Related papers (2020-10-20T18:55:32Z)
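The XOR claim above rests on a classical fact: a single unit with a monotonically increasing activation reduces to a linear threshold, and no linear threshold separates XOR. A brute-force check (illustrative only, not the paper's construction) confirms that such a unit can classify at most three of the four XOR points:

```python
import itertools

# XOR truth table: inputs and labels
points = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def accuracy(w1, w2, b):
    """Fraction of XOR points classified correctly by the linear
    threshold sign(w1*x1 + w2*x2 + b), which is equivalent to a single
    unit with any monotonically increasing activation."""
    correct = 0
    for (x1, x2), label in points:
        pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        correct += (pred == label)
    return correct / len(points)

# Grid-search the weights: no single linear threshold gets all four right.
grid = [i / 4 for i in range(-12, 13)]  # -3.0 .. 3.0 in steps of 0.25
best = max(accuracy(w1, w2, b)
           for w1, w2, b in itertools.product(grid, repeat=3))
assert best == 0.75  # at most 3 of the 4 XOR points
```

A recurrent (implicit) unit escapes this limitation because the feedback loop makes the effective input-output map non-monotone, even though the underlying activation is monotone.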
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.