Precise Multi-Neuron Abstractions for Neural Network Certification
- URL: http://arxiv.org/abs/2103.03638v1
- Date: Fri, 5 Mar 2021 12:53:24 GMT
- Title: Precise Multi-Neuron Abstractions for Neural Network Certification
- Authors: Mark Niklas Müller, Gleb Makarchuk, Gagandeep Singh, Markus Püschel, Martin Vechev
- Abstract summary: PRIMA is a framework that computes precise convex approximations of arbitrary non-linear activations.
We evaluate the effectiveness of PRIMA on challenging neural networks with ReLU, Sigmoid, and Tanh activations.
- Score: 2.149265948858581
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Formal verification of neural networks is critical for their safe adoption in
real-world applications. However, designing a verifier which can handle
realistic networks in a precise manner remains an open and difficult challenge.
In this paper, we take a major step in addressing this challenge and present a
new framework, called PRIMA, that computes precise convex approximations of
arbitrary non-linear activations. PRIMA is based on novel approximation
algorithms that compute the convex hull of polytopes, leveraging concepts from
computational geometry. The algorithms have polynomial complexity, yield fewer
constraints, and minimize precision loss. We evaluate the effectiveness of
PRIMA on challenging neural networks with ReLU, Sigmoid, and Tanh activations.
Our results show that PRIMA is significantly more precise than the
state-of-the-art, verifying robustness for up to 16%, 30%, and 34% more images
than prior work on ReLU-, Sigmoid-, and Tanh-based networks, respectively.
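For context, the sketch below shows the standard single-neuron "triangle" relaxation of a ReLU over known pre-activation bounds [l, u], the baseline that multi-neuron relaxations such as PRIMA's tighten by constraining groups of neurons jointly. This is a minimal NumPy illustration, not PRIMA's convex-hull algorithm; the function name is illustrative.

```python
import numpy as np

def relu_triangle_relaxation(l, u):
    """Standard single-neuron 'triangle' convex relaxation of y = max(x, 0)
    for a pre-activation x known to lie in [l, u]. Returns (slope, intercept)
    such that, together with y >= 0 and y >= x, we have y <= slope*x + intercept.
    Multi-neuron relaxations (as in PRIMA) tighten this by handling groups
    of neurons jointly."""
    if u <= 0:          # ReLU is identically 0 on [l, u]
        return (0.0, 0.0)
    if l >= 0:          # ReLU is the identity on [l, u]
        return (1.0, 0.0)
    # Mixed case: the tightest single-neuron upper bound is the secant line
    # through (l, 0) and (u, u).
    slope = u / (u - l)
    intercept = -slope * l
    return (slope, intercept)

# Example: pre-activation bounded in [-1, 2]
slope, intercept = relu_triangle_relaxation(-1.0, 2.0)
xs = np.linspace(-1.0, 2.0, 5)
upper = slope * xs + intercept       # upper bound on ReLU(x)
lower = np.maximum(xs, 0.0)          # exact ReLU, satisfies y >= 0 and y >= x
assert np.all(upper >= lower)        # the relaxation over-approximates ReLU
```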
Related papers
- Concurrent Training and Layer Pruning of Deep Neural Networks [0.0]
We propose an algorithm capable of identifying and eliminating irrelevant layers of a neural network during the early stages of training.
We employ residual connections around nonlinear network sections so that information continues to flow through the network once a nonlinear section is pruned.
arXiv Detail & Related papers (2024-06-06T23:19:57Z)
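The layer-pruning entry above relies on residual connections around prunable nonlinear sections so that information keeps flowing once a section is removed. Below is a minimal NumPy sketch of that pattern, with illustrative names and none of the paper's actual pruning criteria.

```python
import numpy as np

def nonlinear_section(x, W):
    """A toy nonlinear section: one affine map followed by ReLU."""
    return np.maximum(W @ x, 0.0)

def residual_block(x, W, pruned=False):
    """Residual wrapper around a prunable nonlinear section.
    If the section is pruned, the identity path still carries the signal,
    so downstream layers keep receiving information."""
    if pruned:
        return x                      # section removed; skip connection only
    return x + nonlinear_section(x, W)

rng = np.random.default_rng(0)
x = rng.normal(size=8)
W = rng.normal(size=(8, 8))
print(residual_block(x, W, pruned=False)[:3])  # identity + section output
print(residual_block(x, W, pruned=True)[:3])   # identity path only
```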
- Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs [63.768739279562105]
We show that for a particular choice of mask weights that do not depend on the learning targets, this kernel is equivalent to the NTK of the gated ReLU network on the training data.
A consequence of this lack of dependence on the targets is that the NTK cannot perform better than the optimal MKL kernel on the training set.
arXiv Detail & Related papers (2023-09-26T17:42:52Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
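QA-IBP builds on interval bound propagation. The sketch below shows the standard (non-quantized) IBP step through an affine layer followed by ReLU; the quantization-aware parts of the method are not reproduced, and all names are illustrative.

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate an elementwise box [l, u] through y = W x + b,
    using the standard center/radius form of interval arithmetic."""
    center = (u + l) / 2.0
    radius = (u - l) / 2.0
    y_center = W @ center + b
    y_radius = np.abs(W) @ radius
    return y_center - y_radius, y_center + y_radius

def ibp_relu(l, u):
    """ReLU is monotone, so it maps a box to a box."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 3)), rng.normal(size=4)
l, u = np.array([-0.1, 0.2, 0.0]), np.array([0.1, 0.4, 0.3])
l1, u1 = ibp_affine(l, u, W, b)
l2, u2 = ibp_relu(l1, u1)
print(l2, u2)   # guaranteed bounds on the layer output for any x in [l, u]
```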
- Provably Tightest Linear Approximation for Robustness Verification of Sigmoid-like Neural Networks [22.239149433169747]
The robustness of deep neural networks is crucial to modern AI-enabled systems.
Due to their non-linearity, Sigmoid-like neural networks have been adopted in a wide range of applications.
arXiv Detail & Related papers (2022-08-21T12:07:36Z)
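The entry above concerns linear approximations of Sigmoid-like activations. Below is one standard sound construction (parallel bounding lines with slope min(sigma'(l), sigma'(u)) over an input interval [l, u]), given only as a generic baseline; it is not the provably tightest approximation derived in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_linear_bounds(l, u):
    """A standard *sound* linear relaxation of sigmoid on [l, u]:
        sigma(l) + k*(x - l)  <=  sigma(x)  <=  sigma(u) + k*(x - u),
    where k = min(sigma'(l), sigma'(u)). This is a generic construction,
    not the paper's tightest bounds."""
    sl, su = sigmoid(l), sigmoid(u)
    k = min(sl * (1 - sl), su * (1 - su))   # sigma'(t) = sigma(t)(1 - sigma(t))
    lower = (k, sl - k * l)                 # (slope, intercept) of lower line
    upper = (k, su - k * u)                 # (slope, intercept) of upper line
    return lower, upper

(lo_k, lo_b), (up_k, up_b) = sigmoid_linear_bounds(-1.5, 2.0)
xs = np.linspace(-1.5, 2.0, 100)
assert np.all(lo_k * xs + lo_b <= sigmoid(xs) + 1e-12)
assert np.all(sigmoid(xs) <= up_k * xs + up_b + 1e-12)
```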
- A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness [33.09831377640498]
We study approaches to improving the uncertainty properties of a single network based on a single, deterministic representation.
We propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs.
On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration and out-of-domain detection.
arXiv Detail & Related papers (2022-05-01T05:46:13Z)
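SNGP combines spectral normalization of hidden layers with a Gaussian-process-style output layer. The rough NumPy sketch below shows those two ingredients (a power-iteration spectral-norm rescaling and a random-Fourier-feature head); all names and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_normalize(W, n_iter=20, bound=1.0):
    """Rescale W so its largest singular value is at most `bound`
    (power iteration), encouraging distance preservation across layers."""
    v = rng.normal(size=W.shape[1])
    for _ in range(n_iter):
        u = W @ v; u /= np.linalg.norm(u)
        v = W.T @ u; v /= np.linalg.norm(v)
    sigma = u @ W @ v
    return W / max(1.0, sigma / bound)

def rff_features(h, Omega, phase, scale):
    """Random Fourier features approximating an RBF-kernel GP output layer."""
    return scale * np.cos(h @ Omega.T + phase)

# Toy distance-aware head on 16-d hidden representations.
W_hidden = spectral_normalize(rng.normal(size=(16, 16)))
Omega = rng.normal(size=(128, 16))
phase = rng.uniform(0, 2 * np.pi, size=128)
scale = np.sqrt(2.0 / 128)

h = np.maximum(rng.normal(size=(5, 16)) @ W_hidden.T, 0.0)   # hidden layer
phi = rff_features(h, Omega, phase, scale)                    # GP-style features
print(phi.shape)  # (5, 128); a linear/Laplace layer on phi gives predictions
```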
- A Unified View of SDP-based Neural Network Verification through Completely Positive Programming [27.742278216854714]
We develop an exact, convex formulation of verification as a completely positive program (CPP).
We provide analysis showing that our formulation is minimal -- the removal of any constraint fundamentally misrepresents the neural network computation.
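As background for the completely positive formulation above, SDP-style verification programs typically start from an exact quadratic encoding of each ReLU and then lift products of variables into a matrix variable. The identity below is that standard building block; the paper's CPP construction itself is not reproduced here.

```latex
% Standard exact encoding of a ReLU neuron with quadratic constraints,
% the building block behind SDP / completely-positive verification programs:
\[
  z = \max(x, 0)
  \quad\Longleftrightarrow\quad
  z \ge x, \qquad z \ge 0, \qquad z\,(z - x) = 0 .
\]
% One common lifting replaces products of variables by entries of a matrix
% variable $P \succeq v v^{\top}$ with $v = (1, x, z)^{\top}$, turning the
% exact (nonconvex) constraints into a convex conic program.
```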
arXiv Detail & Related papers (2022-03-06T19:23:09Z)
- A Survey of Quantization Methods for Efficient Neural Network Inference [75.55159744950859]
Quantization is the problem of distributing continuous real-valued numbers over a fixed discrete set of numbers to minimize the number of bits required.
It has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas.
Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x.
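As a concrete instance of the mapping described above, the sketch below implements standard uniform affine quantization to a signed 4-bit grid with round-trip dequantization; the function names and the per-tensor min/max calibration are illustrative choices, not a recommendation from the survey.

```python
import numpy as np

def affine_quantize(x, num_bits=4):
    """Standard uniform affine quantization: map real values in
    [x.min(), x.max()] onto a signed `num_bits` integer grid."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.default_rng(0).normal(size=10).astype(np.float32)
q, s, z = affine_quantize(x, num_bits=4)
x_hat = dequantize(q, s, z)
print(np.max(np.abs(x - x_hat)))   # quantization error is roughly scale/2 at most
```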
arXiv Detail & Related papers (2021-03-25T06:57:11Z)
- Scalable Verification of Quantized Neural Networks (Technical Report) [14.04927063847749]
We show that verifying bit-exact implementations of quantized neural networks with bit-vector specifications is PSPACE-hard.
We propose three techniques for making SMT-based verification of quantized neural networks more scalable.
arXiv Detail & Related papers (2020-12-15T10:05:37Z)
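To illustrate SMT-based reasoning over bit-vector semantics, the toy z3py check below verifies a bound on a single 8-bit quantized ReLU neuron. It is purely illustrative (the paper's encodings and scalability techniques are not reproduced) and assumes the z3-solver package is installed.

```python
# pip install z3-solver
from z3 import BitVec, BitVecVal, Solver, And, If, unsat

x = BitVec("x", 8)                       # signed 8-bit input
pre = 3 * x + 1                          # toy quantized affine neuron (wraps mod 2^8)
y = If(pre > 0, pre, BitVecVal(0, 8))    # ReLU over the bit-vector value

s = Solver()
s.add(And(x >= -5, x <= 5))              # signed input range
s.add(y > 16)                            # negation of the property y <= 16

if s.check() == unsat:
    print("verified: y <= 16 for every x in [-5, 5]")
else:
    print("counterexample:", s.model())
```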
- Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
We significantly improve L-inf verified robust accuracy from 1% to 88% and from 6% to 40% on the networks evaluated.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
arXiv Detail & Related papers (2020-10-22T12:32:29Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
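The ESPN entry above refers to iterative mask discovery. The sketch below is generic iterative magnitude pruning, shown only to illustrate how a sparsity mask can be grown over rounds; it is not ESPN's algorithm, and the schedule and names are illustrative.

```python
import numpy as np

def iterative_magnitude_mask(W, target_sparsity=0.95, rounds=5):
    """Generic iterative magnitude pruning (not ESPN's exact procedure):
    gradually zero out the smallest-magnitude weights over several rounds.
    In a real pipeline the masked network is retrained between rounds,
    which is why the schedule is iterative rather than one-shot."""
    mask = np.ones(W.size, dtype=bool)
    for r in range(1, rounds + 1):
        k = int(target_sparsity * W.size * r / rounds)   # weights dropped so far
        drop = np.argsort(np.abs(W), axis=None)[:k]      # smallest magnitudes
        mask[:] = True
        mask[drop] = False
        # ... retrain the network with (W * mask.reshape(W.shape)) here ...
    return mask.reshape(W.shape)

W = np.random.default_rng(0).normal(size=(64, 64))
mask = iterative_magnitude_mask(W)
print(f"sparsity: {1.0 - mask.mean():.2f}")   # ~0.95
```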
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based training combined with the nonconvexity of the underlying optimization problems renders parameter learning sensitive to initialization.
We propose fusing neighboring layers of deeper networks that are initialized with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
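As a rough illustration of fusing neighboring layers, the sketch below replaces a two-layer ReLU block by a single linear map fitted by least squares over random probe inputs. This is an assumed stand-in, not the paper's MSE-optimal fusion rule; all names and the probe-based fitting are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_layers(W1, W2, n_probe=2048):
    """Approximate the two-layer map x -> W2 @ relu(W1 @ x) by a single
    linear map W_fused, chosen to minimize mean-squared error over random
    probe inputs. A rough stand-in for MSE-optimal layer fusion."""
    X = rng.normal(size=(n_probe, W1.shape[1]))          # probe inputs
    Y = np.maximum(X @ W1.T, 0.0) @ W2.T                 # two-layer outputs
    W_fused, *_ = np.linalg.lstsq(X, Y, rcond=None)      # solve X W ~= Y
    return W_fused.T

W1 = rng.normal(size=(32, 16)) / np.sqrt(16)
W2 = rng.normal(size=(8, 32)) / np.sqrt(32)
W_f = fuse_layers(W1, W2)

x = rng.normal(size=16)
exact = W2 @ np.maximum(W1 @ x, 0.0)
fused = W_f @ x
print(np.linalg.norm(exact - fused) / np.linalg.norm(exact))  # relative error
```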
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.