A Unified View of SDP-based Neural Network Verification through Completely Positive Programming
- URL: http://arxiv.org/abs/2203.03034v1
- Date: Sun, 6 Mar 2022 19:23:09 GMT
- Title: A Unified View of SDP-based Neural Network Verification through Completely Positive Programming
- Authors: Robin Brown, Edward Schmerling, Navid Azizan, Marco Pavone
- Abstract summary: We develop an exact, convex formulation of verification as a completely positive program (CPP).
We provide analysis showing that our formulation is minimal -- the removal of any constraint fundamentally misrepresents the neural network computation.
- Score: 27.742278216854714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Verifying that input-output relationships of a neural network conform to
prescribed operational specifications is a key enabler towards deploying these
networks in safety-critical applications. Semidefinite programming (SDP)-based
approaches to Rectified Linear Unit (ReLU) network verification transcribe this
problem into an optimization problem, where the accuracy of any such
formulation reflects the level of fidelity in how the neural network
computation is represented, as well as the relaxations of intractable
constraints. While the literature contains much progress on improving the
tightness of SDP formulations while maintaining tractability, comparatively
little work has been devoted to the other extreme, i.e., how to most accurately
capture the original verification problem before SDP relaxation. In this work,
we develop an exact, convex formulation of verification as a completely
positive program (CPP), and provide analysis showing that our formulation is
minimal -- the removal of any constraint fundamentally misrepresents the neural
network computation. We leverage our formulation to provide a unifying view of
existing approaches, and give insight into the source of large relaxation gaps
observed in some cases.
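To make the lifting concrete, the following LaTeX sketch records the standard exact ReLU encoding that a completely positive formulation builds on; the notation and constraint selection here are illustrative, not the paper's minimal constraint set.

```latex
% Sketch: exact (nonconvex) encoding of a ReLU layer via complementarity,
% the usual starting point for a completely positive lifting.
\begin{align}
  z = \max(Wx + b, 0)
  \;\Longleftrightarrow\;
  z \ge Wx + b, \quad z \ge 0, \quad z^{\top}(z - Wx - b) = 0.
\end{align}
% Lifting $v = (1, x^{\top}, z^{\top})^{\top}$ and $V = vv^{\top}$ makes the
% bilinear complementarity term linear in $V$; exactness is preserved by
% constraining $V$ to the completely positive cone
\begin{align}
  \mathcal{CP} = \left\{ N N^{\top} : N \ge 0 \right\},
\end{align}
% whereas tractable SDP relaxations replace $\mathcal{CP}$ with the weaker
% doubly nonnegative cone $\{ V \succeq 0,\ V \ge 0 \}$.
```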
Related papers
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Efficient Uncertainty Quantification and Reduction for Over-Parameterized Neural Networks [23.7125322065694]
Uncertainty quantification (UQ) is important for reliability assessment and enhancement of machine learning models.
We create statistically guaranteed schemes to principally characterize, and remove, the uncertainty of over-parameterized neural networks.
In particular, our approach, based on what we call a procedural-noise-correcting (PNC) predictor, removes the procedural uncertainty by using only one auxiliary network that is trained on a suitably labeled dataset.
arXiv Detail & Related papers (2023-06-09T05:15:53Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
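As context for what the bound propagation computes, here is a minimal numpy sketch of plain interval bound propagation through affine and ReLU layers; the quantization-aware refinements that QA-IBP adds are not reproduced here.

```python
# Minimal interval bound propagation (IBP) for an affine + ReLU network;
# a sketch of the generic technique, not the QA-IBP method itself.
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius          # worst case over the box
    return new_center - new_radius, new_center + new_radius

def ibp_forward(lo, hi, layers):
    """layers: list of (W, b) pairs; ReLU between layers, none after the last."""
    for i, (W, b) in enumerate(layers):
        lo, hi = ibp_affine(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    return lo, hi

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 2)), rng.standard_normal(4)),
          (rng.standard_normal((1, 4)), rng.standard_normal(1))]
x0, eps = np.array([0.5, -0.2]), 0.1        # hypothetical input and radius
lo, hi = ibp_forward(x0 - eps, x0 + eps, layers)
print("certified output range:", lo, hi)
```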
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
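A toy illustration of such a counter-example-guided loop on a two-unit sigmoid network follows; the bound computation and refinement rule are simplified stand-ins for the paper's abstractions, and the weights and threshold are made up.

```python
# Toy counter-example-guided abstraction refinement for bounding a
# sigmoid network; all numbers below are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = np.array([3.0, -2.0]), np.array([-1.0, 0.5])
w2, b2 = np.array([2.0, 2.0]), -1.5

def net(x):
    return float(w2 @ sigmoid(W1 * x + b1) + b2)

def abstract_ub(lo, hi):
    """Sound but loose upper bound via per-unit interval propagation."""
    pre_lo = np.minimum(W1 * lo, W1 * hi) + b1
    pre_hi = np.maximum(W1 * lo, W1 * hi) + b1
    h_lo, h_hi = sigmoid(pre_lo), sigmoid(pre_hi)
    # each unit taken at its own worst case -> correlations are lost
    return float(np.where(w2 >= 0, w2 * h_hi, w2 * h_lo).sum() + b2)

def verify(lo, hi, threshold, tol=1e-6):
    """Check net(x) <= threshold on [lo, hi] by abstraction refinement."""
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if abstract_ub(a, b) <= threshold:
            continue                         # this piece is certified
        mid = (a + b) / 2.0
        if net(mid) > threshold:
            return False, mid                # concrete counterexample
        if b - a < tol:
            return False, mid                # abstraction too coarse to decide
        stack += [(a, mid), (mid, b)]        # spurious: refine the abstraction
    return True, None

ok, cex = verify(-1.0, 1.0, threshold=1.2)
print("verified" if ok else f"counterexample near x={cex}")
```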
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
- Precise Multi-Neuron Abstractions for Neural Network Certification [2.149265948858581]
PRIMA is a framework that computes precise convex approximations of arbitrary non-linear activations.
We evaluate the effectiveness of PRIMA on challenging neural networks with ReLU, Sigmoid, and Tanh activations.
arXiv Detail & Related papers (2021-03-05T12:53:24Z)
- Towards Optimal Branching of Linear and Semidefinite Relaxations for Neural Network Robustness Certification [10.349616734896522]
We study certifying the robustness of ReLU neural networks against adversarial input perturbations.
We take a branch-and-bound approach, partitioning the input uncertainty set and solving the relaxations on each part separately.
We show that this approach reduces relaxation error, and that the error is eliminated entirely upon performing an LP relaxation with a partition intelligently designed to exploit the nature of the ReLU activations.
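A single-neuron picture of why a well-chosen partition helps: the triangle LP relaxation of a ReLU has a nonzero worst-case gap over an interval straddling the kink, and splitting the input set exactly at the kink eliminates it. This is only an illustration of the mechanism, not the paper's algorithm.

```python
# Triangle LP relaxation of y = ReLU(x) over [l, u] with l < 0 < u:
#   y >= 0,  y >= x,  y <= u * (x - l) / (u - l).
import numpy as np

def worst_gap(l, u):
    """max_x (triangle upper envelope - ReLU(x)) over [l, u]."""
    if u <= 0 or l >= 0:
        return 0.0                 # ReLU is affine here: relaxation is exact
    xs = np.linspace(l, u, 10_001)
    upper = u * (xs - l) / (u - l)
    return float(np.max(upper - np.maximum(xs, 0.0)))

l, u = -1.0, 2.0
print("gap without partitioning:", worst_gap(l, u))    # -l*u/(u-l) = 2/3
print("gap after splitting at the kink:",
      max(worst_gap(l, 0.0), worst_gap(0.0, u)))       # exactly 0
```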
arXiv Detail & Related papers (2021-01-22T19:36:40Z)
- Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
We significantly improve L-inf verified robust accuracy from 1% to 88% and from 6% to 40%, respectively.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
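The memory savings hinge on the fact that evaluating the dual objective reduces to a maximum-eigenvalue computation, which needs only matrix-vector products with the (never materialized) dual matrix. Below is a power-iteration sketch of that primitive, with a made-up structured operator standing in for the paper's dual construction.

```python
# Estimate lambda_max from matvecs alone, keeping memory linear in the
# operator's dimension; the operator below is a stand-in, not the paper's dual.
import numpy as np

def max_eigval(matvec, n, iters=200, seed=0):
    """Power iteration for the top eigenvalue of a symmetric PSD operator."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = matvec(v)
        lam = float(v @ w)                  # Rayleigh quotient estimate
        v = w / np.linalg.norm(w)
    return lam

rng = np.random.default_rng(1)
W, d = rng.standard_normal((50, 200)), rng.random(200)
# M = W.T @ W + diag(d), applied without ever forming the 200 x 200 matrix
matvec = lambda v: W.T @ (W @ v) + d * v
print("lambda_max estimate:", max_eigval(matvec, 200))
```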
arXiv Detail & Related papers (2020-10-22T12:32:29Z)
- A Sequential Framework Towards an Exact SDP Verification of Neural Networks [14.191310794366075]
A number of techniques based on convex optimization have been proposed in the literature to study the robustness of neural networks.
The challenge for such certification approaches is that they are prone to large relaxation gaps.
In this work, we address the issue by developing a sequential programming framework to shrink this gap to zero.
arXiv Detail & Related papers (2020-10-16T19:45:11Z)
- Global Optimization of Objective Functions Represented by ReLU Networks [77.55969359556032]
Neural networks can learn complex, non-adversarial functions, and it is challenging to guarantee their correct behavior in safety-critical contexts.
Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures.
We propose an approach that integrates the optimization process into the verification procedure, achieving better performance than the naive approach.
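For a sense of the naive exact baseline such methods improve on, here is a brute-force global maximizer for a tiny one-hidden-layer ReLU network: enumerate all activation patterns and solve one LP per pattern. This is exponential in the neuron count and purely illustrative.

```python
# Exact global maximization of f(x) = a @ ReLU(W @ x + b) over a box,
# by activation-pattern enumeration; a naive baseline, not the paper's method.
import itertools
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 2, 4                                  # input dim, hidden neurons
W, b = rng.standard_normal((m, n)), rng.standard_normal(m)
a = rng.standard_normal(m)
lo, hi = -1.0, 1.0

best = -np.inf
for s in itertools.product([0.0, 1.0], repeat=m):
    s = np.array(s)
    sign = 2.0 * s - 1.0                     # +1 active, -1 inactive
    # region: sign_i * (W x + b)_i >= 0  <=>  -sign_i * (W x)_i <= sign_i * b_i
    res = linprog(-(a * s) @ W,              # maximize (a*s) @ (W x)
                  A_ub=-sign[:, None] * W, b_ub=sign * b,
                  bounds=[(lo, hi)] * n, method="highs")
    if res.success:
        best = max(best, -res.fun + float((a * s) @ b))
print("global max via pattern LPs:", best)

xs = np.linspace(lo, hi, 201)                # dense-grid sanity check
grid = max(a @ np.maximum(W @ np.array([x, y]) + b, 0.0)
           for x in xs for y in xs)
print("grid maximum (lower bound):", grid)
```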
arXiv Detail & Related papers (2020-10-07T08:19:48Z)
- Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain [62.997667081978825]
Activation Relaxation (AR) is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system.
Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, and can operate on arbitrary computation graphs.
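A numpy toy of the stated fixed-point property: clamp the output error, run the relaxation dynamics on the per-layer error units, and check that they settle on the exact backpropagation errors. This is a sketch built from the dynamics as described, not the paper's implementation.

```python
# Activation Relaxation toy: error units relax to the backprop gradients.
import numpy as np

rng = np.random.default_rng(0)
sizes = [3, 5, 4, 2]                             # layer widths (arbitrary)
Ws = [rng.standard_normal((sizes[i + 1], sizes[i])) for i in range(3)]
f, fprime = np.tanh, lambda z: 1.0 - np.tanh(z) ** 2

# forward pass: tanh hidden layers, linear output
x, target = rng.standard_normal(3), rng.standard_normal(2)
h, zs = x, []
for i, W in enumerate(Ws):
    z = W @ h
    zs.append(z)
    h = f(z) if i < len(Ws) - 1 else z

# exact backprop errors e_l = dL/dz_l for L = 0.5 * ||output - target||^2
e_exact = [None] * len(Ws)
e_exact[-1] = h - target
for l in range(len(Ws) - 2, -1, -1):
    e_exact[l] = fprime(zs[l]) * (Ws[l + 1].T @ e_exact[l + 1])

# AR dynamics: de_l/dt = -e_l + f'(z_l) * (W_{l+1}^T e_{l+1}), output clamped
e = [rng.standard_normal(len(z)) for z in zs]    # random initialization
e[-1] = h - target
eta = 0.2
for _ in range(200):                             # Euler steps to equilibrium
    for l in range(len(Ws) - 1):
        e[l] += eta * (-e[l] + fprime(zs[l]) * (Ws[l + 1].T @ e[l + 1]))

print("max deviation from backprop:",
      max(float(np.max(np.abs(u - v))) for u, v in zip(e, e_exact)))
```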
arXiv Detail & Related papers (2020-09-11T11:56:34Z)
- Tightened Convex Relaxations for Neural Network Robustness Certification [10.68833097448566]
We exploit the structure of ReLU networks to reduce relaxation error through a novel partition-based certification procedure.
The proposed method is proven to tighten existing linear programming relaxations, and achieves zero relaxation error as the partition is made finer.
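A one-neuron numerical check of the asymptotic claim: under a uniform input partition, the triangle relaxation's worst-case overestimate of a ReLU shrinks toward zero as the partition is refined (a toy illustration, not the paper's multi-neuron procedure).

```python
# Worst-case gap of the triangle envelope of ReLU under uniform partitions.
import numpy as np

def piece_gap(l, u):
    """Worst overestimate of the triangle envelope of ReLU over [l, u]."""
    if u <= 0 or l >= 0:
        return 0.0                     # ReLU is affine on the piece: exact
    return -l * u / (u - l)            # gap is attained at the kink x = 0

l, u = -1.0, 2.0
for parts in (1, 2, 4, 8, 16):
    edges = np.linspace(l, u, parts + 1)
    gap = max(piece_gap(a, b) for a, b in zip(edges[:-1], edges[1:]))
    print(f"{parts:2d} pieces -> worst-case gap {gap:.4f}")
```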
arXiv Detail & Related papers (2020-04-01T16:59:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.