NeuroDiff: Scalable Differential Verification of Neural Networks using
Fine-Grained Approximation
- URL: http://arxiv.org/abs/2009.09943v1
- Date: Mon, 21 Sep 2020 15:00:25 GMT
- Title: NeuroDiff: Scalable Differential Verification of Neural Networks using
Fine-Grained Approximation
- Authors: Brandon Paulsen, Jingbo Wang, Jiawei Wang, Chao Wang
- Abstract summary: NeuroDiff is a symbolic and fine-grained approximation technique that drastically increases the accuracy of differential verification.
Our results show that NeuroDiff is up to 1000X faster and 5X more accurate than the state-of-the-art tool.
- Score: 18.653663583989122
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As neural networks make their way into safety-critical systems, where
misbehavior can lead to catastrophes, there is a growing interest in certifying
the equivalence of two structurally similar neural networks. For example,
compression techniques are often used in practice for deploying trained neural
networks on computationally- and energy-constrained devices, which raises the
question of how faithfully the compressed network mimics the original network.
Unfortunately, existing methods either focus on verifying a single network or
rely on loose approximations to prove the equivalence of two networks. Due to
overly conservative approximation, differential verification lacks scalability
in terms of both accuracy and computational cost. To overcome these problems,
we propose NeuroDiff, a symbolic and fine-grained approximation technique that
drastically increases the accuracy of differential verification while achieving
many orders-of-magnitude speedup. NeuroDiff has two key contributions. The
first one is new convex approximations that more accurately bound the
difference neurons of two networks under all possible inputs. The second one is
judicious use of symbolic variables to represent neurons whose difference
bounds have accumulated significant error. We also find that these two
techniques are complementary, i.e., when combined, the benefit is greater than
the sum of their individual benefits. We have evaluated NeuroDiff on a variety
of differential verification tasks. Our results show that NeuroDiff is up to
1000X faster and 5X more accurate than the state-of-the-art tool.
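To make the setting concrete, here is a minimal sketch of the differential-verification idea the abstract builds on. It is not NeuroDiff's actual algorithm: it uses plain interval arithmetic, and the function names, the layer decomposition of the difference, and the toy quantization example are all illustrative assumptions. It contrasts the naive approach, which analyzes each network independently and subtracts the output intervals, with a differential approach that propagates bounds on the difference itself layer by layer; NeuroDiff's convex approximations and intermediate symbolic variables tighten exactly this kind of propagation.

```python
# Minimal sketch (not NeuroDiff itself): interval arithmetic over two
# structurally identical ReLU networks, comparing a naive output-difference
# bound with direct propagation of difference intervals.
import numpy as np

def affine_interval(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> W @ x + b."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def relu_interval(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def naive_diff_bounds(layers1, layers2, in_lo, in_hi):
    """Bound f2(x) - f1(x) by analyzing each network separately and
    subtracting the two output intervals (very loose)."""
    lo1, hi1, lo2, hi2 = in_lo, in_hi, in_lo, in_hi
    for (W1, b1), (W2, b2) in zip(layers1, layers2):
        lo1, hi1 = relu_interval(*affine_interval(lo1, hi1, W1, b1))
        lo2, hi2 = relu_interval(*affine_interval(lo2, hi2, W2, b2))
    return lo2 - hi1, hi2 - lo1

def differential_diff_bounds(layers1, layers2, in_lo, in_hi):
    """Propagate the *difference* delta = a2(x) - a1(x) layer by layer, using
    z2 - z1 = W2 @ delta + (W2 - W1) @ a1 + (b2 - b1); ReLU is monotone and
    1-Lipschitz, so it never grows the difference and preserves its sign."""
    lo, hi = in_lo, in_hi              # bounds on network 1's activations
    dlo, dhi = np.zeros_like(in_lo), np.zeros_like(in_hi)  # same input for both
    for (W1, b1), (W2, b2) in zip(layers1, layers2):
        plo, phi = affine_interval(lo, hi, W1, b1)                 # net-1 pre-activation
        dlo1, dhi1 = affine_interval(dlo, dhi, W2, np.zeros_like(b2))  # W2 @ delta
        dlo2, dhi2 = affine_interval(lo, hi, W2 - W1, b2 - b1)         # (W2-W1) @ a1 + (b2-b1)
        dlo, dhi = dlo1 + dlo2, dhi1 + dhi2
        dlo, dhi = np.minimum(dlo, 0.0), np.maximum(dhi, 0.0)      # ReLU on the difference
        lo, hi = relu_interval(plo, phi)
    return dlo, dhi

# Toy example: layers2 is a crudely quantized copy of layers1.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2)); b = rng.standard_normal(3)
layers1 = [(W, b)]
layers2 = [(np.round(W * 4) / 4, b)]
box = (np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
print(naive_diff_bounds(layers1, layers2, *box))
print(differential_diff_bounds(layers1, layers2, *box))
```

On small examples like this, where the second network is a lightly perturbed (e.g., quantized) copy of the first, the differential bounds are typically far tighter than the naive subtraction, which is why differential verification scales better than verifying each network in isolation and why reducing its remaining over-approximation is the paper's focus.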
Related papers
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
- Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
arXiv Detail & Related papers (2022-10-14T19:31:39Z)
- Consistency of Neural Networks with Regularization [0.0]
This paper proposes a general framework of neural networks with regularization and proves its consistency.
Two types of activation functions are considered: the hyperbolic tangent (Tanh) and the rectified linear unit (ReLU).
arXiv Detail & Related papers (2022-06-22T23:33:39Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Stochastic Neural Networks with Infinite Width are Deterministic [7.07065078444922]
We study stochastic neural networks, a main type of neural network in use.
We prove that as the width of an optimized neural network tends to infinity, its predictive variance on the training set decreases to zero.
arXiv Detail & Related papers (2022-01-30T04:52:31Z)
- Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z)
- Fourier Neural Networks for Function Approximation [2.840363325289377]
It has been proved extensively that neural networks are universal approximators.
It is specifically proved that for a narrow neural network to approximate a function that is otherwise implemented by a deep neural network, the narrow network requires an exponentially large number of neurons.
arXiv Detail & Related papers (2021-10-21T09:30:26Z)
- ReluDiff: Differential Verification of Deep Neural Networks [8.601847909798165]
We develop a new method for differential verification of two closely related networks.
We exploit structural and behavioral similarities of the two networks to more accurately bound the difference between the output neurons of the two networks.
Our experiments show that, compared to state-of-the-art verification tools, our method can achieve orders-of-magnitude speedup.
arXiv Detail & Related papers (2020-01-10T20:47:22Z)
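Tools in this line of work, including ReluDiff above and NeuroDiff, typically combine interval arithmetic with symbolic linear expressions over the input variables; the abstract's second contribution is to introduce fresh symbolic variables for difference neurons whose bounds have accumulated error. As a minimal, hypothetical illustration of the basic building block (the function name and numbers below are illustrative and not taken from either paper), the snippet concretizes a linear expression over an input box, the point at which a symbolic analysis converts back to numeric bounds:

```python
# Illustrative sketch: a neuron tracked as c0 + sum_i c_i * x_i over the input
# variables; its tightest range over a box is found coefficient by coefficient.
import numpy as np

def concretize(coeffs, const, in_lo, in_hi):
    """Tightest interval of const + coeffs @ x for x in the box [in_lo, in_hi]."""
    lo = const + np.where(coeffs >= 0, coeffs * in_lo, coeffs * in_hi).sum()
    hi = const + np.where(coeffs >= 0, coeffs * in_hi, coeffs * in_lo).sum()
    return lo, hi

# Example: 2*x0 - 3*x1 + 1 over x0, x1 in [0, 1] concretizes to [-2, 3].
print(concretize(np.array([2.0, -3.0]), 1.0, np.array([0.0, 0.0]), np.array([1.0, 1.0])))
```

Keeping expressions symbolic until this final step avoids the over-approximation that plain intervals accumulate at every layer, which is why introducing new symbolic variables at high-error difference neurons pays off in both accuracy and verification time.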
This list is automatically generated from the titles and abstracts of the papers in this site.