Chordal Sparsity for SDP-based Neural Network Verification
- URL: http://arxiv.org/abs/2206.03482v3
- Date: Mon, 8 Jan 2024 05:58:47 GMT
- Title: Chordal Sparsity for SDP-based Neural Network Verification
- Authors: Anton Xue, Lars Lindemann, Rajeev Alur
- Abstract summary: We focus on improving semidefinite programming (SDP) based techniques for neural network verification.
By leveraging chordal sparsity, we can decompose the primary computational bottleneck of DeepSDP into an equivalent collection of smaller LMIs.
We show that additional analysis of Chordal-DeepSDP allows us to further rewrite its collection of LMIs in a second level of decomposition.
- Score: 1.9556053645976446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks are central to many emerging technologies, but verifying
their correctness remains a major challenge. It is known that network outputs
can be sensitive and fragile to even small input perturbations, thereby
increasing the risk of unpredictable and undesirable behavior. Fast and
accurate verification of neural networks is therefore critical to their
widespread adoption, and in recent years, various methods have been developed
as a response to this problem. In this paper, we focus on improving
semidefinite programming (SDP) based techniques for neural network
verification. Such techniques offer the power of expressing complex geometric
constraints while retaining a convex problem formulation, but scalability
remains a major issue in practice. Our starting point is the DeepSDP framework
proposed by Fazlyab et al., which uses quadratic constraints to abstract the
verification problem into a large-scale SDP. However, solving this SDP quickly
becomes intractable when the network grows. Our key observation is that by
leveraging chordal sparsity, we can decompose the primary computational
bottleneck of DeepSDP -- a large linear matrix inequality (LMI) -- into an
equivalent collection of smaller LMIs. We call our chordally sparse
optimization program Chordal-DeepSDP and prove that its construction is
exactly as expressive as that of DeepSDP. Moreover, we show that additional
analysis of Chordal-DeepSDP allows us to further rewrite its collection of LMIs
in a second level of decomposition that we call Chordal-DeepSDP-2 -- which
results in another significant computational gain. Finally, we provide
numerical experiments on real networks of learned cart-pole dynamics,
showcasing the computational advantage of Chordal-DeepSDP and Chordal-DeepSDP-2
over DeepSDP.
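To make the central step concrete, here is a minimal, self-contained CVXPY sketch (an illustration of the general chordal-decomposition principle, not the authors' code or the actual DeepSDP LMI): a single large PSD constraint whose data matrix has a chordal sparsity pattern, here block-tridiagonal for simplicity, is replaced by an equivalent collection of smaller PSD constraints, one per maximal clique.

```python
# Toy illustration of chordal LMI decomposition (assumed setup: a symmetric
# block-tridiagonal matrix A; we compute min t s.t. A + t*I is PSD two ways).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
b, n_blocks = 4, 5                        # block size, number of diagonal blocks
n = b * n_blocks

# Build a symmetric matrix A with a block-tridiagonal (chordal) pattern.
A = np.zeros((n, n))
for i in range(n_blocks):
    D = rng.standard_normal((b, b))
    A[i*b:(i+1)*b, i*b:(i+1)*b] = D + D.T
for i in range(n_blocks - 1):
    B = rng.standard_normal((b, b))
    A[i*b:(i+1)*b, (i+1)*b:(i+2)*b] = B
    A[(i+1)*b:(i+2)*b, i*b:(i+1)*b] = B.T

# Maximal cliques of this pattern: consecutive pairs of blocks.
cliques = [list(range(i*b, (i+2)*b)) for i in range(n_blocks - 1)]

# (1) One monolithic n x n LMI.
t1 = cp.Variable()
cp.Problem(cp.Minimize(t1), [A + t1 * np.eye(n) >> 0]).solve()

# (2) Equivalent decomposition: A + t*I = sum_k E_k^T S_k E_k with S_k PSD,
# where E_k selects the rows/columns of clique k (each S_k is only 2b x 2b).
t2 = cp.Variable()
S = [cp.Variable((len(c), len(c)), PSD=True) for c in cliques]
M = 0
for c, Sk in zip(cliques, S):
    Ek = np.zeros((len(c), n))
    Ek[np.arange(len(c)), c] = 1.0
    M = M + Ek.T @ Sk @ Ek
cp.Problem(cp.Minimize(t2), [M == A + t2 * np.eye(n)]).solve()

print(t1.value, t2.value)                 # agree up to solver tolerance
```

Both programs return the same optimum, but the decomposed version only ever handles PSD blocks of size 2b rather than one block of size n; this is the mechanism behind the scalability gains reported for Chordal-DeepSDP.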
Related papers
- Node Centrality Approximation For Large Networks Based On Inductive
Graph Neural Networks [2.4012886591705738]
Closeness Centrality (CC) and Betweenness Centrality (BC) are crucial metrics in network analysis.
Their practical implementation on extensive networks remains computationally demanding due to their high time complexity.
We propose CNCA-IGE, an inductive graph encoder-decoder model designed to rank nodes according to specified CC or BC metrics.
arXiv Detail & Related papers (2024-03-08T01:23:12Z) - Spike-and-slab shrinkage priors for structurally sparse Bayesian neural networks [0.16385815610837165]
Sparse deep learning addresses the complexity and efficiency challenges of deep networks by recovering a sparse representation of the underlying target function.
Deep neural architectures compressed via structured sparsity provide low latency inference, higher data throughput, and reduced energy consumption.
We propose structurally sparse Bayesian neural networks which prune excessive nodes with (i) Spike-and-Slab Group Lasso (SS-GL), and (ii) Spike-and-Slab Group Horseshoe (SS-GHS) priors.
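For orientation, the generic spike-and-slab form behind both priors can be written as follows (a schematic sketch; the exact SS-GL and SS-GHS hyperpriors are specified in the paper):

```latex
% Spike-and-slab prior over the weight group \theta_g feeding node g:
% z_g = 0 collapses the group to zero and prunes the node.
\theta_g \mid z_g \;\sim\; (1 - z_g)\,\delta_0 \;+\; z_g\,\mathcal{N}\!\big(0,\ \sigma_g^2 I\big),
\qquad z_g \sim \mathrm{Bernoulli}(\pi_g).
```

SS-GL and SS-GHS then differ in the hyperprior placed on the slab scale \sigma_g^2: a group-lasso-style shrinkage prior in the former and a heavier-tailed horseshoe prior in the latter.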
arXiv Detail & Related papers (2023-08-17T17:14:18Z) - Bayesian Interpolation with Deep Linear Networks [92.1721532941863]
Characterizing how neural network depth, width, and dataset size jointly impact model quality is a central problem in deep learning theory.
We show that linear networks make provably optimal predictions at infinite depth.
We also show that with data-agnostic priors, Bayesian model evidence in wide linear networks is maximized at infinite depth.
arXiv Detail & Related papers (2022-12-29T20:57:46Z) - i-SpaSP: Structured Neural Pruning via Sparse Signal Recovery [11.119895959906085]
We propose a novel structured pruning algorithm for neural networks: iterative Sparse Structured Pruning, dubbed i-SpaSP.
i-SpaSP operates by identifying a set of important parameter groups within a network, namely those that contribute most to the residual between the pruned and dense network outputs.
It is shown to discover high-performing sub-networks and improve upon the pruning efficiency of provable baseline methodologies by several orders of magnitude.
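Schematically, the selection step can be paraphrased as a greedy, residual-driven search over neurons (a matching-pursuit-style sketch for illustration, not the paper's CoSaMP-based procedure):

```python
# Keep the k hidden neurons whose rank-1 contributions best explain the
# residual between the pruned and dense layer outputs (toy dimensions).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 32))        # layer inputs
W1 = rng.standard_normal((32, 64))        # dense hidden weights
W2 = rng.standard_normal((64, 10))        # output weights
H = np.maximum(X @ W1, 0.0)               # dense hidden activations (ReLU)
Y = H @ W2                                # dense output

k, keep = 16, np.zeros(64, dtype=bool)    # neuron budget, selection mask
for _ in range(k):
    R = Y - (H * keep) @ W2               # residual of the current pruned net
    scores = np.array([
        0.0 if keep[j] else abs(np.sum(R * np.outer(H[:, j], W2[j])))
        for j in range(64)                # correlation of neuron j with R
    ])
    keep[np.argmax(scores)] = True

err = np.linalg.norm(Y - (H * keep) @ W2) / np.linalg.norm(Y)
print(f"kept {keep.sum()}/64 neurons, relative output error {err:.3f}")
```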
arXiv Detail & Related papers (2021-12-07T05:26:45Z) - Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks [83.58049517083138]
We consider a two-layer ReLU network trained via gradient descent.
We show that SGD is biased towards a simple solution.
We also provide empirical evidence that knots may occur at locations distinct from the data points.
arXiv Detail & Related papers (2021-11-03T15:14:20Z) - Reduced-Order Neural Network Synthesis with Robustness Guarantees [0.0]
Machine learning algorithms are being adapted to run locally on on-board, potentially hardware-limited, devices to improve user privacy, reduce latency, and save energy.
To address this issue, a method is introduced that automatically synthesizes reduced-order neural networks (with fewer neurons) approximating the input/output mapping of a larger one.
Worst-case bounds on this approximation error are obtained, and the approach can be applied to a wide variety of neural network architectures.
arXiv Detail & Related papers (2021-02-18T12:03:57Z) - Solving Sparse Linear Inverse Problems in Communication Systems: A Deep
Learning Approach With Adaptive Depth [51.40441097625201]
We propose an end-to-end trainable deep learning architecture for sparse signal recovery problems.
The proposed method learns how many layers to execute to emit an output, and the network depth is dynamically adjusted for each task in the inference phase.
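A stripped-down sketch of the idea, with a plain residual threshold standing in for the learned halting policy (assumed toy problem, not the paper's architecture): each unrolled ISTA iteration plays the role of one network layer, and inference stops as soon as the current input is solved to tolerance.

```python
# Adaptive-depth sparse recovery: run ISTA "layers" until the residual is
# small enough, so easy inputs use fewer layers than hard ones.
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def adaptive_ista(A, y, lam=0.05, max_layers=200, tol=1e-4):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for depth in range(1, max_layers + 1): # one iteration = one "layer"
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
        if np.linalg.norm(y - A @ x) < tol * np.linalg.norm(y):
            break                          # halt: this input needed few layers
    return x, depth

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[rng.choice(120, 8, replace=False)] = 1.0
x_hat, depth = adaptive_ista(A, A @ x_true)
print(f"halted after {depth} layers")
```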
arXiv Detail & Related papers (2020-10-29T06:32:53Z) - A Sequential Framework Towards an Exact SDP Verification of Neural
Networks [14.191310794366075]
A number of techniques based on convex optimization have been proposed in the literature to study the robustness of neural networks.
The challenge for such certification approaches is that they are prone to a large relaxation gap.
In this work, we address the issue by developing a sequential programming framework to shrink this gap to zero.
arXiv Detail & Related papers (2020-10-16T19:45:11Z) - ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
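As a point of reference, the iterative half of such hybrids can be sketched as a simple annealed magnitude-mask loop (a generic illustration, not the ESPN algorithm itself):

```python
# Discover a sparse mask over several rounds instead of one shot; in practice
# the surviving weights are briefly retrained between rounds, which is what
# lets iterative schedules reach extreme sparsity without collapsing accuracy.
import numpy as np

def iterative_mask(W, target_sparsity=0.95, rounds=5):
    for r in range(1, rounds + 1):
        sparsity = target_sparsity * r / rounds   # anneal toward the target
        thresh = np.quantile(np.abs(W), sparsity)
        mask = np.abs(W) > thresh
        # ... retrain the weights surviving in `mask` here ...
    return mask

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
mask = iterative_mask(W)
print(f"final density: {mask.mean():.3%}")
```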
arXiv Detail & Related papers (2020-06-28T23:09:27Z) - Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
Binarization inevitably causes severe information loss and, even worse, its discontinuity makes the deep network difficult to optimize.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
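The optimization difficulty mentioned above is usually handled with a straight-through estimator; here is a minimal PyTorch sketch of this standard trick (a generic illustration, not any single surveyed method):

```python
# sign() in the forward pass, straight-through gradient in the backward pass,
# so the flat/discontinuous binarization does not zero out all gradients.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)               # 1 bit per weight: severe info loss

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # pass gradients through only where |w| <= 1 (clipped identity)
        return grad_out * (w.abs() <= 1.0).to(grad_out.dtype)

w = torch.randn(8, requires_grad=True)
BinarizeSTE.apply(w).sum().backward()
print(w.grad)                              # nonzero despite sign() being flat a.e.
```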
arXiv Detail & Related papers (2020-03-31T16:47:20Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)