A Sequential Framework Towards an Exact SDP Verification of Neural
Networks
- URL: http://arxiv.org/abs/2010.08603v2
- Date: Mon, 27 Sep 2021 07:15:03 GMT
- Title: A Sequential Framework Towards an Exact SDP Verification of Neural
Networks
- Authors: Ziye Ma, Somayeh Sojoudi
- Abstract summary: A number of techniques based on convex optimization have been proposed in the literature to study the robustness of neural networks.
The major challenge to the SDP approach is that it is prone to a large relaxation gap.
In this work, we address this issue by developing a sequential framework that shrinks this gap to zero.
- Score: 14.191310794366075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although neural networks have been applied to several systems in recent
years, they still cannot be used in safety-critical systems due to the lack of
efficient techniques to certify their robustness. A number of techniques based
on convex optimization have been proposed in the literature to study the
robustness of neural networks, and the semidefinite programming (SDP) approach
has emerged as a leading contender for the robust certification of neural
networks. The major challenge to the SDP approach is that it is prone to a
large relaxation gap. In this work, we address this issue by developing a
sequential framework to shrink this gap to zero by adding non-convex cuts to
the optimization problem via disjunctive programming. We analyze the
performance of this sequential SDP method both theoretically and empirically,
and show that it bridges the gap as the number of cuts increases.
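For orientation, the sketch below gives a minimal example of the kind of Shor-style SDP relaxation of ReLU constraints that SDP-based verification starts from, written in Python with cvxpy for a toy one-hidden-layer network under an l_inf input perturbation. The dimensions, random weights, and solver choice are assumptions made for illustration; this is not the paper's exact formulation, and it omits the disjunctive (non-convex) cuts that the sequential framework adds on top of the base relaxation.

```python
import numpy as np
import cvxpy as cp

# Toy one-hidden-layer ReLU network f(x) = c^T relu(W x + b); the dimensions,
# weights, and input region below are placeholders for illustration.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.standard_normal((n_hid, n_in))
b = rng.standard_normal(n_hid)
c = rng.standard_normal(n_hid)
x0 = rng.standard_normal(n_in)
eps = 0.1

# Lifted moment matrix P ~ [1; x; z][1; x; z]^T with z = relu(W x + b).
d = 1 + n_in + n_hid
P = cp.Variable((d, d), PSD=True)
x = P[0, 1:1 + n_in]            # first-order terms in x
z = P[0, 1 + n_in:]             # first-order terms in z
X = P[1:1 + n_in, 1:1 + n_in]   # lifted x x^T block
Z = P[1 + n_in:, 1 + n_in:]     # lifted z z^T block
M = P[1:1 + n_in, 1 + n_in:]    # lifted x z^T block

constraints = [P[0, 0] == 1]
# ReLU encoding: z >= 0, z >= Wx + b, and the complementarity
# z_i * (z_i - (Wx + b)_i) = 0 expressed on the lifted second-order terms.
constraints += [z >= 0, z >= W @ x + b]
constraints += [cp.diag(Z) == cp.diag(W @ M) + cp.multiply(b, z)]
# l_inf ball of radius eps around x0, at both first and second order.
constraints += [x >= x0 - eps, x <= x0 + eps]
constraints += [cp.diag(X) - 2 * cp.multiply(x0, x) + x0**2 <= eps**2]

# Maximize the output over the relaxed feasible set; if the optimal value
# stays below a safety threshold, robustness is certified.  The gap to the
# true maximum is the relaxation gap the paper aims to close.
prob = cp.Problem(cp.Maximize(c @ z), constraints)
prob.solve(solver=cp.SCS)
print("SDP upper bound on c^T relu(W x + b):", prob.value)
```

In the paper's sequential framework, the bound returned by such a base relaxation would be tightened by repeatedly adding cuts obtained via disjunctive programming and re-solving, driving the relaxation gap toward zero as the number of cuts grows.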
Related papers
- Confident magnitude-based neural network pruning [0.0]
Pruning neural networks has proven to be a successful approach to increase the efficiency and reduce the memory storage of deep learning models.
We leverage recent techniques on distribution-free uncertainty quantification to provide finite-sample statistical guarantees to compress deep neural networks.
This work presents experiments in computer vision tasks to illustrate how uncertainty-aware pruning is a useful approach to deploy sparse neural networks safely.
arXiv Detail & Related papers (2024-08-08T21:29:20Z)
- Robust Stochastically-Descending Unrolled Networks [85.6993263983062]
Deep unrolling is an emerging learning-to-optimize method that unrolls a truncated iterative algorithm in the layers of a trainable neural network.
However, convergence guarantees and generalizability of the unrolled networks are still open theoretical problems.
We numerically assess unrolled architectures trained under the proposed constraints in two different applications.
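To make the unrolling idea concrete, here is a minimal Python sketch (a toy setup, not the paper's robust construction): each "layer" of the unrolled network performs one gradient step on a least-squares objective, and the per-layer step sizes stand in for the trainable parameters.

```python
import numpy as np

def unrolled_gradient_descent(A, y, step_sizes, x0=None):
    # Forward pass of a deep-unrolled "network": each layer applies one
    # gradient step on the objective 0.5 * ||A x - y||^2, and the per-layer
    # step sizes play the role of the trainable parameters.
    x = np.zeros(A.shape[1]) if x0 is None else x0
    for eta in step_sizes:              # one truncated iteration per layer
        x = x - eta * (A.T @ (A @ x - y))
    return x

# Toy usage: a 5-layer unrolled solver with fixed (in practice, learned) steps.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
y = rng.standard_normal(8)
print(unrolled_gradient_descent(A, y, step_sizes=[0.05] * 5))
```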
arXiv Detail & Related papers (2023-12-25T18:51:23Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP).
Unlike FF, our framework directly outputs label distributions at each cascaded block and does not require generating additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z) - Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
However, PINNs are prone to training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs in order to improve the stability of the training process.
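As a hedged illustration of why an implicit step can stabilize training, the sketch below compares explicit and implicit gradient updates on a stiff toy quadratic loss; it is not the paper's PINN training procedure, and the matrix, step size, and iteration count are arbitrary assumptions.

```python
import numpy as np

# Explicit vs. implicit gradient steps on the toy quadratic
# L(theta) = 0.5 * theta^T A theta - b^T theta.
def explicit_step(theta, A, b, eta):
    return theta - eta * (A @ theta - b)

def implicit_step(theta, A, b, eta):
    # Solves theta_next = theta - eta * grad L(theta_next), i.e.
    # (I + eta * A) theta_next = theta + eta * b.
    return np.linalg.solve(np.eye(len(theta)) + eta * A, theta + eta * b)

A = np.diag([1.0, 100.0])   # widely separated curvatures make the problem stiff
b = np.array([1.0, 1.0])
eta = 0.05                  # too large for the explicit update to be stable
theta_exp = np.zeros(2)
theta_imp = np.zeros(2)
for _ in range(50):
    theta_exp = explicit_step(theta_exp, A, b, eta)
    theta_imp = implicit_step(theta_imp, A, b, eta)

# The explicit iterate blows up in the stiff direction, while the implicit
# iterate approaches the minimizer A^{-1} b = [1.0, 0.01].
print("explicit:", theta_exp)
print("implicit:", theta_imp)
```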
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Quantization-aware Interval Bound Propagation for Training Certifiably
Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
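For context, the following is a minimal sketch of plain interval bound propagation through one affine-plus-ReLU layer in Python/NumPy; the quantization-aware handling that distinguishes QA-IBP is omitted, and the layer sizes and perturbation radius are illustrative assumptions.

```python
import numpy as np

def ibp_affine_relu(lower, upper, W, b):
    # One step of standard interval bound propagation: push an elementwise
    # interval [lower, upper] through x -> relu(W x + b).
    center = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius        # worst-case interval growth
    pre_lo, pre_hi = new_center - new_radius, new_center + new_radius
    return np.maximum(pre_lo, 0.0), np.maximum(pre_hi, 0.0)  # ReLU is monotone

# Toy usage: hidden-layer bounds for an l_inf ball of radius 0.1 around x0.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
b = rng.standard_normal(4)
x0 = rng.standard_normal(3)
l1, u1 = ibp_affine_relu(x0 - 0.1, x0 + 0.1, W, b)
print("lower:", l1)
print("upper:", u1)
```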
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- On Optimizing Back-Substitution Methods for Neural Network Verification [1.4394939014120451]
We present an approach for making back-substitution produce tighter bounds.
Our technique is general, in the sense that it can be integrated into numerous existing symbolic-bound propagation techniques.
arXiv Detail & Related papers (2022-08-16T11:16:44Z)
- Chordal Sparsity for SDP-based Neural Network Verification [1.9556053645976446]
We focus on improving semidefinite programming (SDP) based techniques for neural network verification.
By leveraging chordal sparsity, we can decompose the primary computational bottleneck of DeepSDP into an equivalent collection of smaller LMIs.
We show that additional analysis of Chordal-DeepSDP allows us to further rewrite its collection of LMIs in a second level of decomposition.
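To illustrate the underlying idea of chordal decomposition (not Chordal-DeepSDP's actual LMIs), the sketch below replaces a single PSD constraint on a banded, hence chordally sparse, matrix with a sum of small PSD blocks supported on its maximal cliques, using cvxpy; the cost matrix and dimensions are toy assumptions.

```python
import numpy as np
import cvxpy as cp

# A banded (tridiagonal) sparsity pattern on a 4x4 symmetric matrix is chordal,
# with maximal cliques {0,1}, {1,2}, {2,3}.  By the Agler/Grone decomposition,
# a PSD constraint on such a matrix is equivalent to a sum of small PSD blocks
# supported on the cliques.
n = 4
cliques = [[0, 1], [1, 2], [2, 3]]
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n))
C = (C + C.T) / 2                       # random symmetric toy cost

def lift(block, idx, n):
    # Embed a small clique block into an n x n matrix on rows/columns idx.
    E = np.zeros((len(idx), n))
    for r, i in enumerate(idx):
        E[r, i] = 1.0
    return E.T @ block @ E

# Decomposed formulation: several small LMIs instead of one large one.
blocks = [cp.Variable((len(c), len(c)), PSD=True) for c in cliques]
S_dec = sum(lift(B, c, n) for B, c in zip(blocks, cliques))
dec = cp.Problem(cp.Minimize(cp.trace(C @ S_dec)), [cp.diag(S_dec) == 1])
dec.solve(solver=cp.SCS)

# Reference formulation: one large PSD constraint restricted to the same
# banded sparsity pattern.
S = cp.Variable((n, n), PSD=True)
pattern = [S[i, j] == 0 for i in range(n) for j in range(n) if abs(i - j) > 1]
ref = cp.Problem(cp.Minimize(cp.trace(C @ S)), [cp.diag(S) == 1] + pattern)
ref.solve(solver=cp.SCS)

# The two optimal values coincide up to solver tolerance.
print("decomposed:", dec.value, " full PSD:", ref.value)
```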
arXiv Detail & Related papers (2022-06-07T17:57:53Z)
- A Unified View of SDP-based Neural Network Verification through Completely Positive Programming [27.742278216854714]
We develop an exact, convex formulation of verification as a completely positive program (CPP).
We provide analysis showing that our formulation is minimal -- the removal of any constraint fundamentally misrepresents the neural network computation.
arXiv Detail & Related papers (2022-03-06T19:23:09Z)
- Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent [95.94432031144716]
We propose a unified non-convex optimization framework for the analysis of neural network training.
We show that many existing guarantees for networks trained by gradient descent can be unified through this framework.
arXiv Detail & Related papers (2021-06-25T17:45:00Z)
- Semi-Implicit Back Propagation [1.5533842336139065]
We propose a semi-implicit back propagation method for neural network training.
The differences on the neurons are propagated in a backward fashion and the parameters are updated via proximal mapping.
Experiments on both MNIST and CIFAR-10 demonstrate that the proposed algorithm leads to better performance in terms of both loss decreasing and training/validation accuracy.
arXiv Detail & Related papers (2020-02-10T03:26:09Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
However, the use of gradient-based training combined with the nonconvexity of the underlying optimization problem renders learning susceptible to initialization.
We propose fusing neighboring layers of deeper networks that are trained with random initialization.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)