Efficient compilation of expressive problem space specifications to
neural network solvers
- URL: http://arxiv.org/abs/2402.01353v1
- Date: Wed, 24 Jan 2024 09:13:09 GMT
- Title: Efficient compilation of expressive problem space specifications to
neural network solvers
- Authors: Matthew L. Daggitt, Wen Kokke, Robert Atkey
- Abstract summary: We describe an algorithm for compiling high-level problem-space specifications into embedding-space queries for neural network solvers.
We explore and overcome complications that arise from targeting neural network solvers as opposed to standard SMT solvers.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work has described the presence of the embedding gap in neural network
verification. On one side of the gap is a high-level specification about the
network's behaviour, written by a domain expert in terms of the interpretable
problem space. On the other side is a logically equivalent set of
satisfiability queries, expressed in the uninterpretable embedding space in a
form suitable for neural network solvers. In this paper we describe an
algorithm for compiling the former to the latter. We explore and overcome
complications that arise from targeting neural network solvers as opposed to
standard SMT solvers.
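To make the two sides of the gap concrete, below is a minimal Python sketch of such a compilation for a toy robustness property. The `compile_robustness_spec` function, the query dictionary format, and the pixel-normalisation embedding are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

# Problem-space specification (what the domain expert writes):
#   "for every image x within epsilon of x0 (in the L-inf norm),
#    the network still assigns class `target`"
def compile_robustness_spec(x0, epsilon, target, n_classes,
                            normalise=lambda x: x / 255.0):
    """Compile the problem-space property into embedding-space
    satisfiability queries (negated, since solvers look for violations)."""
    # Push problem-space quantities through the embedding function
    # (here: pixel normalisation) to obtain per-input bounds.
    lo = normalise(np.clip(x0 - epsilon, 0, 255))
    hi = normalise(np.clip(x0 + epsilon, 0, 255))
    # The negation of "target wins" is a disjunction over rival classes,
    # and neural network solvers typically accept one conjunctive query
    # at a time, so we emit one query per rival: y_rival >= y_target.
    queries = []
    for rival in range(n_classes):
        if rival != target:
            queries.append({
                "input_bounds": list(zip(lo.tolist(), hi.tolist())),
                "output_constraint": (rival, ">=", target),
            })
    return queries  # the property holds iff every query is UNSAT

x0 = np.array([12.0, 200.0, 34.0])
print(len(compile_robustness_spec(x0, 5.0, target=1, n_classes=4)))  # 3
```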
Related papers
- Identifying Sub-networks in Neural Networks via Functionally Similar Representations [41.028797971427124]
We take a step toward automating the understanding of the network by investigating the existence of distinct sub-networks.
Our approach offers meaningful insights into the behavior of neural networks with minimal human and computational cost.
arXiv Detail & Related papers (2024-10-21T20:19:00Z)
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce popular positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
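The classic Sinkhorn iteration that the paper extends can be written in a few differentiable lines. This is the standard doubly-stochastic version only; the paper's joint encoding of multiple marginal sets is not reproduced here:

```python
import torch

def sinkhorn(scores, n_iters=20, tau=0.05):
    """Classic Sinkhorn normalisation: turn a score matrix into a
    (near-)doubly-stochastic matrix by alternating row and column
    scaling. Every operation is differentiable, which is what makes
    it usable as a network layer."""
    p = torch.exp(scores / tau)
    for _ in range(n_iters):
        p = p / p.sum(dim=1, keepdim=True)  # normalise rows
        p = p / p.sum(dim=0, keepdim=True)  # normalise columns
    return p

p = sinkhorn(torch.randn(5, 5))
print(p.sum(dim=0), p.sum(dim=1))  # both approximately all-ones
```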
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
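The 0-dimensional persistence computation that such measures rest on can be sketched as a union-find pass over edges sorted by descending absolute weight. This is generic neural-persistence machinery under simplifying assumptions (unnormalised weights, a single graph), not the paper's whole-network filtration:

```python
def zero_dim_persistence(edges, n_vertices):
    """0-dimensional persistence of a weighted graph filtered by adding
    edges strongest-first: every vertex is born at the top of the
    filtration, and a component dies when an edge merges it into another.
    Returns the (birth, death) pairs."""
    parent = list(range(n_vertices))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    w_max = max(abs(w) for _, _, w in edges)
    pairs = []
    for u, v, w in sorted(edges, key=lambda e: -abs(e[2])):
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge merges two components
            parent[ru] = rv
            pairs.append((w_max, abs(w)))   # one component dies here
    return pairs

edges = [(0, 2, 0.9), (0, 3, 0.1), (1, 2, 0.5), (1, 3, 0.8)]
print(zero_dim_persistence(edges, 4))  # [(0.9, 0.9), (0.9, 0.8), (0.9, 0.5)]
```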
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
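As background: affine layers map zonotopes exactly, which is what makes the domain attractive; the ReLU case, where the Lagrangian relaxation enters, needs an extra sound approximation that this minimal sketch omits:

```python
import numpy as np

def affine_zonotope(center, generators, W, b):
    """Push the zonotope {c + G @ eps : ||eps||_inf <= 1} through the
    affine map x -> W @ x + b. The image is again a zonotope, with no
    loss of precision."""
    return W @ center + b, W @ generators

def interval_bounds(center, generators):
    """Concretise a zonotope to per-coordinate interval bounds."""
    radius = np.abs(generators).sum(axis=1)
    return center - radius, center + radius

c = np.zeros(2)
G = 0.1 * np.eye(2)                      # a small box around the origin
W = np.array([[1.0, -1.0], [2.0, 0.5]])
b = np.array([0.0, 1.0])
print(interval_bounds(*affine_zonotope(c, G, W, b)))
```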
arXiv Detail & Related papers (2022-10-14T19:31:39Z)
- Deep Neural Networks and PIDE discretizations [2.4063592468412276]
We propose neural networks that tackle the stability and field-of-view problems of Convolutional Neural Networks (CNNs).
We propose integral-based, spatially nonlocal operators related to the global weighted Laplacian, the fractional Laplacian, and the fractional inverse Laplacian.
We test the effectiveness of the proposed neural architectures on benchmark image classification datasets and semantic segmentation tasks in autonomous driving.
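A standard spectral discretisation of one such nonlocal operator, the fractional Laplacian on a periodic 1-D grid, is sketched below. This is generic background for the operator family, not the paper's integral-based formulation:

```python
import numpy as np

def fractional_laplacian_1d(u, s, L=2 * np.pi):
    """Spectral (-Laplacian)^s on a periodic grid of length L:
    multiply each Fourier mode by |k|^(2s) and transform back."""
    n = len(u)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # wave numbers
    return np.real(np.fft.ifft(np.abs(k) ** (2 * s) * np.fft.fft(u)))

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x)
# For s = 1 this is the ordinary -u'', and (-Laplacian) sin = sin.
print(np.max(np.abs(fractional_laplacian_1d(u, 1.0) - np.sin(x))))  # ~1e-15
```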
arXiv Detail & Related papers (2021-08-05T08:03:01Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Bayesian Nested Neural Networks for Uncertainty Calibration and Adaptive Compression [40.35734017517066]
Nested networks or slimmable networks are neural networks whose architectures can be adjusted on the fly at test time.
Recent studies have focused on a "nested dropout" layer, which orders the nodes of a layer by importance during training.
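The nested dropout idea can be sketched as sampling a truncation index and zeroing every unit after it, so earlier units are forced to carry more information. A minimal deterministic sketch, not the paper's Bayesian treatment:

```python
import torch

def nested_dropout(h, p=0.1, training=True):
    """Order units by importance: sample k from a geometric distribution
    and keep only the first k units of the last dimension, zeroing the
    rest. Unit 0 is almost always kept, later units less and less often."""
    if not training:
        return h
    n = h.shape[-1]
    k = int(torch.distributions.Geometric(p).sample().item()) + 1
    mask = torch.zeros(n)
    mask[:min(k, n)] = 1.0
    return h * mask

h = torch.randn(4, 16)
print(nested_dropout(h)[0])  # trailing units zeroed beyond the sampled k
```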
arXiv Detail & Related papers (2021-01-27T12:34:58Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, which reflect the magnitudes of the connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
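A minimal sketch of this differentiable connectivity: nodes form a complete DAG, and each edge carries a learnable gate that scales how much of an earlier output flows in. The Linear+ReLU node operations here are placeholders, not the paper's actual operations:

```python
import torch
import torch.nn as nn

class LearnableConnectivity(nn.Module):
    def __init__(self, n_nodes, dim):
        super().__init__()
        self.ops = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_nodes))
        # edge[i, j] gates the connection from the i-th earlier output
        # (the input counts as output 0) into node j.
        self.edge = nn.Parameter(torch.zeros(n_nodes, n_nodes))

    def forward(self, x):
        outs = [x]
        for j, op in enumerate(self.ops):
            gates = torch.sigmoid(self.edge[: j + 1, j])   # connection magnitudes
            agg = sum(g * o for g, o in zip(gates, outs))  # weighted fan-in
            outs.append(torch.relu(op(agg)))
        return outs[-1]

net = LearnableConnectivity(n_nodes=3, dim=8)
print(net(torch.randn(2, 8)).shape)  # torch.Size([2, 8])
```

Because the gates are plain parameters, connectivity is trained by the same gradient descent that trains the node weights.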
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Approximation in shift-invariant spaces with deep ReLU neural networks [7.7084107194202875]
We study the expressive power of deep ReLU neural networks for approximating functions in dilated shift-invariant spaces.
Approximation error bounds are estimated with respect to the width and depth of neural networks.
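For reference, the standard definition of the dilated shift-invariant space generated by a function φ at dilation level j, whose elements the networks approximate (the paper's specific width/depth error bounds are not restated here):

```latex
V_j(\varphi) \;=\; \overline{\operatorname{span}}
  \bigl\{\, \varphi(2^{j} x - k) \;:\; k \in \mathbb{Z}^{d} \,\bigr\}
```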
arXiv Detail & Related papers (2020-05-25T07:23:47Z)
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
Binarization inevitably causes severe information loss and, even worse, its discontinuity makes optimization of the deep network difficult.
We present a survey of these algorithms, categorized into native solutions that directly perform binarization, and optimized ones that use techniques such as minimizing the quantization error, improving the network loss function, and reducing the gradient error.
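The "native" binarization baseline, sign quantization trained with a straight-through estimator, can be sketched as follows; the surveyed optimized methods build on exactly this:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with the straight-through estimator: the forward
    pass outputs sign(x); the backward pass lets gradients through only
    where |x| <= 1 (a clipped identity), since sign itself has zero
    gradient almost everywhere."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()

x = torch.randn(5, requires_grad=True)
BinarizeSTE.apply(x).sum().backward()
print(x.grad)  # 1 where |x| <= 1, else 0
```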
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.