Efficient Symbolic Reasoning for Neural-Network Verification
- URL: http://arxiv.org/abs/2303.13588v1
- Date: Thu, 23 Mar 2023 18:08:11 GMT
- Title: Efficient Symbolic Reasoning for Neural-Network Verification
- Authors: Zi Wang, Somesh Jha, Krishnamurthy (Dj) Dvijotham
- Abstract summary: We present a novel program reasoning framework for neural-network verification.
The key components of our framework are the use of the symbolic domain and the quadratic relation.
We believe that our framework can bring new theoretical insights and practical tools to verification problems for neural networks.
- Score: 48.384446430284676
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks have become an integral part of modern software systems.
However, they still suffer from various problems, in particular, vulnerability
to adversarial attacks. In this work, we present a novel program reasoning
framework for neural-network verification, which we refer to as symbolic
reasoning. The key components of our framework are the use of the symbolic
domain and the quadratic relation. The symbolic domain has very flexible
semantics, and the quadratic relation is quite expressive. They allow us to
encode many verification problems for neural networks as quadratic programs.
Our scheme then relaxes the quadratic programs to semidefinite programs, which
can be efficiently solved. This framework allows us to verify various
neural-network properties under different scenarios, especially those that
appear challenging for non-symbolic domains. Moreover, it introduces new
representations and perspectives for the verification tasks. We believe that
our framework can bring new theoretical insights and practical tools to
verification problems for neural networks.
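As a concrete illustration of the quadratic relation mentioned above, a ReLU activation can be captured exactly by quadratic constraints: y = max(x, 0) is the unique solution of y >= 0, y >= x, and y * (y - x) = 0. The sketch below (illustrative only; it does not reproduce the paper's symbolic-domain encoding or its SDP relaxation, and the function name is hypothetical) checks these residuals numerically:

```python
import numpy as np

def relu_quadratic_residuals(x, y):
    """Residuals of the quadratic encoding of y = ReLU(x):
    y >= 0, y >= x, and the quadratic relation y * (y - x) = 0."""
    return (np.minimum(y, 0.0),       # violation of y >= 0
            np.minimum(y - x, 0.0),   # violation of y >= x
            y * (y - x))              # quadratic complementarity term

x = np.linspace(-2.0, 2.0, 9)
y = np.maximum(x, 0.0)                # true ReLU outputs satisfy all constraints
r1, r2, r3 = relu_quadratic_residuals(x, y)
assert np.allclose(r1, 0) and np.allclose(r2, 0) and np.allclose(r3, 0)

# A point off the ReLU graph violates the quadratic relation:
_, _, r3_bad = relu_quadratic_residuals(np.array([1.0]), np.array([2.0]))
assert not np.allclose(r3_bad, 0)
```

Because every constraint is at most quadratic in (x, y), a verification query over such constraints is a quadratic program, which is the form the paper then relaxes to a semidefinite program.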
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce the popular positive linear satisfiability to neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
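For context, the classic Sinkhorn algorithm that this layer extends alternately normalizes the rows and columns of a positive matrix until it is approximately doubly stochastic. A minimal sketch (the function name and iteration count are illustrative, not LinSATNet's API):

```python
import numpy as np

def sinkhorn(M, iters=50):
    """Alternately normalize rows and columns of a positive matrix
    so it converges toward a doubly stochastic matrix."""
    P = np.asarray(M, dtype=float)
    for _ in range(iters):
        P = P / P.sum(axis=1, keepdims=True)  # make each row sum to 1
        P = P / P.sum(axis=0, keepdims=True)  # make each column sum to 1
    return P

P = sinkhorn(np.array([[1.0, 2.0], [3.0, 4.0]]))
assert np.allclose(P.sum(axis=0), 1.0)
assert np.allclose(P.sum(axis=1), 1.0, atol=1e-6)
```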
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Visualizing Neural Network Imagination [2.1749194587826026]
In certain situations, neural networks will represent environment states in their hidden activations.
Our goal is to visualize what environment states the networks are representing.
We define a quantitative interpretability metric and use it to demonstrate that hidden states can be highly interpretable.
arXiv Detail & Related papers (2024-05-10T11:43:35Z)
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to this issue is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- Extensions to Generalized Annotated Logic and an Equivalent Neural Architecture [4.855957436171202]
We propose a list of desirable criteria for neuro-symbolic systems and examine how some of the existing approaches address these criteria.
We then propose an extension to annotated generalized logic that allows for the creation of an equivalent neural architecture.
Unlike previous approaches that rely on continuous optimization for the training process, our framework is designed as a binarized neural network that uses discrete optimization.
arXiv Detail & Related papers (2023-02-23T17:39:46Z)
- Neuro-Symbolic Verification of Deep Neural Networks [20.973078277539276]
We introduce a novel framework for verifying neural networks, named neuro-symbolic verification.
The key idea is to use neural networks as part of the otherwise logical specification.
We show how neuro-symbolic verification can be implemented on top of existing verification infrastructure for neural networks.
arXiv Detail & Related papers (2022-03-02T08:40:01Z)
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks could meet the constraints.
A key ingredient of building a fair neural-network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
- Extending Answer Set Programs with Neural Networks [2.512827436728378]
We propose NeurASP -- a simple extension of answer set programs by embracing neural networks.
We show that NeurASP can not only improve the perception accuracy of a pre-trained neural network, but also help to train a neural network better by giving restrictions through logic rules.
arXiv Detail & Related papers (2020-09-22T00:52:30Z)
- Machine Number Sense: A Dataset of Visual Arithmetic Problems for Abstract and Relational Reasoning [95.18337034090648]
We propose a dataset, Machine Number Sense (MNS), consisting of visual arithmetic problems automatically generated using a grammar model, the And-Or Graph (AOG).
These visual arithmetic problems are in the form of geometric figures.
We benchmark the MNS dataset using four predominant neural network models as baselines in this visual reasoning task.
arXiv Detail & Related papers (2020-04-25T17:14:58Z)
- A neural network model of perception and reasoning [0.0]
We show that a simple set of biologically consistent organizing principles confer these capabilities to neuronal networks.
We implement these principles in a novel machine learning algorithm, based on concept construction instead of optimization, to design deep neural networks that reason with explainable neuron activity.
arXiv Detail & Related papers (2020-02-26T06:26:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.