Neuro-Symbolic Verification of Deep Neural Networks
- URL: http://arxiv.org/abs/2203.00938v1
- Date: Wed, 2 Mar 2022 08:40:01 GMT
- Title: Neuro-Symbolic Verification of Deep Neural Networks
- Authors: Xuan Xie, Kristian Kersting, Daniel Neider
- Abstract summary: We introduce a novel framework for verifying neural networks, named neuro-symbolic verification.
The key idea is to use neural networks as part of the otherwise logical specification.
We show how neuro-symbolic verification can be implemented on top of existing verification infrastructure for neural networks.
- Score: 20.973078277539276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Formal verification has emerged as a powerful approach to ensure the safety
and reliability of deep neural networks. However, current verification tools
are limited to only a handful of properties that can be expressed as
first-order constraints over the inputs and output of a network. While
adversarial robustness and fairness fall under this category, many real-world
properties (e.g., "an autonomous vehicle has to stop in front of a stop sign")
remain outside the scope of existing verification technology. To mitigate this
severe practical restriction, we introduce a novel framework for verifying
neural networks, named neuro-symbolic verification. The key idea is to use
neural networks as part of the otherwise logical specification, enabling the
verification of a wide variety of complex, real-world properties, including the
one above. Moreover, we demonstrate how neuro-symbolic verification can be
implemented on top of existing verification infrastructure for neural networks,
making our framework easily accessible to researchers and practitioners alike.
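To make the key idea concrete, here is a minimal sketch of a neuro-symbolic property in the spirit of the stop-sign example. The detector and controller below are tiny, randomly initialised toy networks invented for illustration, and the sampling loop is only a falsifier, not a sound verifier; the paper's framework would instead compile the same implication into a query for an existing neural-network verifier.

```python
# A minimal sketch (all networks and parameters are illustrative assumptions):
# the trained detector D appears *inside* the specification of controller C,
# encoding "if D reports a stop sign, C must brake".
import numpy as np

rng = np.random.default_rng(0)

def mlp(layers, x):
    """Evaluate a ReLU MLP given as a list of (W, b) pairs."""
    for W, b in layers[:-1]:
        x = np.maximum(W @ x + b, 0.0)
    W, b = layers[-1]
    return W @ x + b

# Hypothetical toy networks over a 4-dimensional "image" input.
detector = [(rng.normal(size=(8, 4)), rng.normal(size=8)),
            (rng.normal(size=(1, 8)), rng.normal(size=1))]   # stop-sign score
controller = [(rng.normal(size=(8, 4)), rng.normal(size=8)),
              (rng.normal(size=(2, 8)), rng.normal(size=2))]  # [brake, go]

def property_holds(x):
    """Neuro-symbolic spec: detector(x) > 0  =>  controller brakes."""
    sees_stop = mlp(detector, x)[0] > 0.0
    brakes = int(np.argmax(mlp(controller, x))) == 0
    return (not sees_stop) or brakes

# Falsification by random sampling; a sound check would instead dispatch
# the implication to an off-the-shelf neural-network verifier.
violations = [x for x in rng.uniform(-1.0, 1.0, size=(1000, 4))
              if not property_holds(x)]
print(f"found {len(violations)} counterexample candidates out of 1000")
```

Because the detector is itself a network, the property above cannot be stated as a first-order constraint over a single network's inputs and outputs, which is exactly the gap the framework targets.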
Related papers
- Quantum-Inspired Analysis of Neural Network Vulnerabilities: The Role of
Conjugate Variables in System Attacks [54.565579874913816]
Neural networks are inherently vulnerable to small, non-random perturbations that emerge as adversarial attacks.
A mathematical congruence between this mechanism and the uncertainty principle of quantum physics casts light on a hitherto unanticipated interdisciplinary connection.
arXiv Detail & Related papers (2024-02-16T02:11:27Z) - Expediting Neural Network Verification via Network Reduction [4.8621567234713305]
We propose a network reduction technique as a pre-processing method prior to verification.
The proposed method reduces neural networks by eliminating stable ReLU neurons and transforming the network into a sequential neural network.
We instantiate the reduction technique on state-of-the-art complete and incomplete verification tools.
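As a rough illustration of the stability idea (a sketch assuming plain interval arithmetic; the paper's actual reduction may differ in detail), the snippet below classifies hidden ReLU neurons over an input box and folds the provably stable ones into a single affine map.

```python
# Hedged sketch: if interval bounds prove a neuron's pre-activation is
# always >= 0 (or always <= 0), its ReLU is linear (or constant zero) on
# the input region and can be folded away.
import numpy as np

def interval_affine(W, b, lo, hi):
    """Propagate an input box [lo, hi] through x -> Wx + b."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16) * 3
W2, b2 = rng.normal(size=(2, 16)), rng.normal(size=2)
lo, hi = -np.ones(4), np.ones(4)             # input region being verified

l1, u1 = interval_affine(W1, b1, lo, hi)
active   = l1 >= 0                           # ReLU provably the identity
inactive = u1 <= 0                           # ReLU provably zero
unstable = ~(active | inactive)

# Stably inactive neurons contribute nothing and can be dropped; stably
# active neurons are affine, so their two layers compose into one map.
W2_merged = W2[:, active] @ W1[active]
b2_merged = b2 + W2[:, active] @ b1[active]
print(f"{unstable.sum()} of {len(l1)} ReLUs remain unstable after reduction")
```

Only the unstable neurons must be kept as genuine ReLU constraints, which is what shrinks the downstream verification problem.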
arXiv Detail & Related papers (2023-08-07T06:23:24Z) - Fully Automatic Neural Network Reduction for Formal Verification [8.017543518311196]
We introduce a fully automatic and sound reduction of neural networks using reachability analysis.
The soundness ensures that the verification of the reduced network entails the verification of the original network.
We show that our approach can reduce the number of neurons to a fraction of the original count while introducing only a minor outer-approximation.
arXiv Detail & Related papers (2023-05-03T07:13:47Z) - Efficient Symbolic Reasoning for Neural-Network Verification [48.384446430284676]
We present a novel program reasoning framework for neural-network verification.
The key components of our framework are the use of a symbolic domain and a quadratic relation.
We believe that our framework can bring new theoretical insights and practical tools to verification problems for neural networks.
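Since this summary is terse, the following is only a generic sketch of one standard symbolic domain for networks, namely linear expressions over the inputs concretised to intervals on demand; it is not necessarily the paper's construction, and the quadratic relation is not reproduced here.

```python
# Illustrative symbolic domain: each neuron is tracked as A @ x + c over the
# inputs x in [lo, hi], composing exactly through affine layers.
import numpy as np

class SymbolicForm:
    """Values of shape (n,) represented as A @ x + c over inputs x in [lo, hi]."""
    def __init__(self, A, c, lo, hi):
        self.A, self.c, self.lo, self.hi = A, c, lo, hi

    def affine(self, W, b):
        # Exact symbolic composition through x -> Wx + b.
        return SymbolicForm(W @ self.A, W @ self.c + b, self.lo, self.hi)

    def concretize(self):
        # Tight interval bounds of a linear form over a box.
        Ap, An = np.maximum(self.A, 0.0), np.minimum(self.A, 0.0)
        return (Ap @ self.lo + An @ self.hi + self.c,
                Ap @ self.hi + An @ self.lo + self.c)

rng = np.random.default_rng(2)
lo, hi = -np.ones(3), np.ones(3)
x = SymbolicForm(np.eye(3), np.zeros(3), lo, hi)   # inputs, exactly symbolic
h = x.affine(rng.normal(size=(5, 3)), rng.normal(size=5))
l, u = h.concretize()
print("pre-activation bounds:", np.round(l, 2), np.round(u, 2))
```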
arXiv Detail & Related papers (2023-03-23T18:08:11Z) - Quantization-aware Interval Bound Propagation for Training Certifiably
Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
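The sketch below shows plain interval bound propagation, the base technique that QA-IBP extends; propagating bounds through the quantized semantics, which is the paper's contribution, is omitted, and the network here is an illustrative toy.

```python
# Minimal IBP sketch: propagate an L-infinity ball around the input through
# the network to obtain sound bounds on every logit.
import numpy as np

def ibp_forward(layers, lo, hi):
    """Propagate [lo, hi] through alternating affine and ReLU layers."""
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(layers) - 1:                 # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

rng = np.random.default_rng(3)
layers = [(rng.normal(size=(8, 4)) * 0.5, np.zeros(8)),
          (rng.normal(size=(3, 8)) * 0.5, np.zeros(3))]
x, eps, label = rng.uniform(-1, 1, 4), 0.05, 0
lo, hi = ibp_forward(layers, x - eps, x + eps)

# Worst-case margin: true-class lower bound vs. other classes' upper bounds.
margin = lo[label] - np.max(np.delete(hi, label))
print("certified" if margin > 0 else f"not certified (margin {margin:.3f})")
```

Training then minimises a loss on these worst-case logits, so certification and optimisation share the same bound computation.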
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - CheckINN: Wide Range Neural Network Verification in Imandra [0.0]
We show how Imandra, a functional programming language and theorem prover, can offer a holistic infrastructure for neural network verification.
We develop a novel library, CheckINN, that formalises neural networks in Imandra and covers several important facets of neural network verification.
arXiv Detail & Related papers (2022-07-21T16:06:58Z) - Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z) - Vehicle: Interfacing Neural Network Verifiers with Interactive Theorem
Provers [1.5749416770494706]
Vehicle is equipped with an expressive domain-specific language for stating neural network specifications.
It overcomes previous issues with maintainability and scalability in similar ITP formalisations.
We demonstrate its utility by using it to connect the neural network verifier Marabou to Agda and then formally verifying that a car steered by a neural network never leaves the road.
arXiv Detail & Related papers (2022-02-10T18:09:23Z) - Building Compact and Robust Deep Neural Networks with Toeplitz Matrices [93.05076144491146]
This thesis focuses on the problem of training neural networks which are compact, easy to train, reliable and robust to adversarial examples.
We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
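As a sketch of the structural idea only (the thesis's actual constructions are more elaborate): a Toeplitz matrix is constant along each diagonal, so an n x n layer needs just 2n - 1 parameters instead of n^2, and its matrix-vector products admit fast FFT-based algorithms.

```python
# Illustrative Toeplitz-parameterized linear layer; the weight values and
# layer size here are arbitrary assumptions.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(4)
n = 6
col, row = rng.normal(size=n), rng.normal(size=n)
row[0] = col[0]                       # first column and first row must agree
T = toeplitz(col, row)                # n x n map with only 2n - 1 parameters

x = rng.normal(size=n)
print("dense params:", n * n, "| toeplitz params:", 2 * n - 1)
print("layer output:", np.round(T @ x, 3))
```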
arXiv Detail & Related papers (2021-09-02T13:58:12Z) - Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks can meet the constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z) - Towards Repairing Neural Networks Correctly [6.600380575920419]
We propose a runtime verification method to ensure the correctness of neural networks.
Experimental results show that our approach effectively generates neural networks which are guaranteed to satisfy the properties.
arXiv Detail & Related papers (2020-12-03T12:31:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.