On Neural Network Equivalence Checking using SMT Solvers
- URL: http://arxiv.org/abs/2203.11629v1
- Date: Tue, 22 Mar 2022 11:31:12 GMT
- Title: On Neural Network Equivalence Checking using SMT Solvers
- Authors: Charis Eleftheriadis, Nikolaos Kekatos, Panagiotis Katsaros, Stavros
Tripakis
- Abstract summary: Two pretrained neural networks are deemed equivalent if they yield similar outputs for the same inputs.
This work presents a first SMT-based encoding of the equivalence checking problem.
- Score: 0.7078672343881655
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Two pretrained neural networks are deemed equivalent if they yield similar
outputs for the same inputs. Equivalence checking of neural networks is of
great importance, due to its utility in replacing learning-enabled components
with equivalent ones when there is a need to fulfill additional requirements or
to address security threats, as is the case, for example, with knowledge
distillation or adversarial training. SMT solvers can potentially provide
sound and complete solutions to the neural network equivalence checking
problem, but, as expected, any such solution comes with significant limitations
on the size of the networks that can be checked. This work presents a first
SMT-based encoding of the equivalence checking problem, explores its utility
and limitations, and proposes avenues for future research and improvements
towards more scalable and practically applicable solutions. We present
experimental results that shed light on the
aforementioned issues, for diverse types of neural network models (classifiers
and regression networks) and equivalence criteria, towards a general and
application-independent equivalence checking approach.
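As a concrete illustration of the core idea, one natural criterion (one of several studied in the paper; the exact formulation here is an assumption) is epsilon-equivalence: for every input x in a bounded domain D, |f1(x) - f2(x)| <= epsilon. The sketch below, using the z3 Python bindings, encodes two toy one-neuron ReLU networks as real-arithmetic terms and asks the solver for an input on which their outputs differ by more than epsilon; an unsat answer certifies epsilon-equivalence on the chosen domain. The architectures, weights, domain [-1, 1] and epsilon are illustrative assumptions, not the paper's benchmark networks.

```python
# Minimal sketch of SMT-based epsilon-equivalence checking with z3 (z3py).
# All weights, the input domain and epsilon are illustrative assumptions.
from z3 import Real, Solver, If, Or, unsat

def relu(t):
    # ReLU encoded as an if-then-else term over the reals.
    return If(t > 0, t, 0)

def net_a(x):
    # Toy network A: one hidden ReLU neuron, hand-picked weights.
    return 0.5 * relu(2 * x + 1) - 1

def net_b(x):
    # Toy network B: a rescaled parameterisation of the same function,
    # using the identity relu(2*t) == 2*relu(t).
    return 0.25 * relu(4 * x + 2) - 1

x = Real("x")
eps = 1e-3

s = Solver()
s.add(x >= -1, x <= 1)              # bounded input domain
diff = net_a(x) - net_b(x)
s.add(Or(diff > eps, diff < -eps))  # ask for a counterexample to eps-equivalence

if s.check() == unsat:
    print("No counterexample: networks are epsilon-equivalent on [-1, 1].")
else:
    print("Counterexample input:", s.model()[x])
```

For classifiers, an analogous query would compare the argmax of the two output vectors rather than their numerical difference. Each ReLU introduces a case split in the solver's search, which is the usual source of the size limitations the abstract refers to.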
Related papers
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- An Estimator for the Sensitivity to Perturbations of Deep Neural Networks [0.31498833540989407]
This paper derives an estimator that can predict the sensitivity of a given Deep Neural Network to perturbations in input.
An approximation of the estimator is tested on two Convolutional Neural Networks, AlexNet and VGG-19, using the ImageNet dataset.
arXiv Detail & Related papers (2023-07-24T10:33:32Z)
- Certified Invertibility in Neural Networks via Mixed-Integer Programming [16.64960701212292]
Neural networks are known to be vulnerable to adversarial attacks.
Conversely, there may also exist large, meaningful perturbations that do not affect the network's decision.
We discuss how our findings can be useful for invertibility certification in transformations between neural networks.
arXiv Detail & Related papers (2023-01-27T15:40:38Z)
- Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification [83.20479832949069]
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of quantum classifiers (QCs) on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z)
- Understanding Weight Similarity of Neural Networks via Chain Normalization Rule and Hypothesis-Training-Testing [58.401504709365284]
We present a weight similarity measure that can quantify the weight similarity of non-convex neural networks.
We first normalize the weights of neural networks by a chain normalization rule, which is used to introduce weight-training representation learning.
We extend the traditional hypothesis-testing method to validate the hypothesis on the weight similarity of neural networks.
arXiv Detail & Related papers (2022-08-08T19:11:03Z)
- Consistency of Neural Networks with Regularization [0.0]
This paper proposes a general framework of neural networks with regularization and proves its consistency.
Two types of activation functions, the hyperbolic tangent (Tanh) and the rectified linear unit (ReLU), are taken into consideration.
arXiv Detail & Related papers (2022-06-22T23:33:39Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks can meet the fairness constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
- Input Hessian Regularization of Neural Networks [31.941188983286207]
We propose an efficient algorithm to train deep neural networks with Hessian operator-norm regularization.
We show that the new regularizer is indeed feasible and, furthermore, that it increases the robustness of neural networks over input gradient regularization.
arXiv Detail & Related papers (2020-09-14T16:58:16Z)