Provable Preimage Under-Approximation for Neural Networks (Full Version)
- URL: http://arxiv.org/abs/2305.03686v4
- Date: Sat, 27 Jan 2024 18:56:15 GMT
- Title: Provable Preimage Under-Approximation for Neural Networks (Full Version)
- Authors: Xiyue Zhang, Benjie Wang, Marta Kwiatkowska
- Abstract summary: We propose an efficient anytime algorithm for generating symbolic under-approximations of the preimage of any polyhedron output set for neural networks.
Empirically, we validate the efficacy of our method across a range of domains, including a high-dimensional MNIST classification task.
We present a sound and complete algorithm for quantitative verification, which exploits our disjoint union of polytopes representation to provide formal guarantees.
- Score: 27.519993407376862
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural network verification mainly focuses on local robustness properties,
which can be checked by bounding the image (set of outputs) of a given input
set. However, often it is important to know whether a given property holds
globally for the input domain, and if not then for what proportion of the input
the property is true. Analyzing such properties requires computing preimage
abstractions of neural networks. In this work, we propose an efficient anytime
algorithm for generating symbolic under-approximations of the preimage of any
polyhedron output set for neural networks. Our algorithm combines a novel
technique for cheaply computing polytope preimage under-approximations using
linear relaxation, with a carefully-designed refinement procedure that
iteratively partitions the input region into subregions using input and ReLU
splitting in order to improve the approximation. Empirically, we validate the
efficacy of our method across a range of domains, including a high-dimensional
MNIST classification task beyond the reach of existing preimage computation
methods. Finally, as use cases, we showcase the application to quantitative
verification and robustness analysis. We present a sound and complete algorithm
for the former, which exploits our disjoint union of polytopes representation
to provide formal guarantees. For the latter, we find that our method can
provide useful quantitative information even when standard verifiers cannot
verify a robustness property.
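To make the core step concrete, here is a minimal sketch (not the paper's implementation) of how linear relaxation yields a cheap polytope preimage under-approximation for a one-hidden-layer ReLU network: a linear upper bound g(x) >= a.f(x) over an input box turns the output constraint a.y <= b into the input-space constraint g(x) <= b, and every input satisfying it provably maps into the output set. The relaxation is the standard chord/slope ReLU bound; the network, box, and constraint are illustrative.

```python
import numpy as np

def relu_relaxation(l, u, need_upper):
    """Linear bound s*z + t on relu(z) for z in [l, u] (upper or lower)."""
    if l >= 0:                      # neuron always active
        return 1.0, 0.0
    if u <= 0:                      # neuron always inactive
        return 0.0, 0.0
    if need_upper:                  # chord from (l, 0) to (u, u)
        s = u / (u - l)
        return s, -s * l
    # any slope in [0, 1] through the origin is a valid lower bound
    return (1.0, 0.0) if u >= -l else (0.0, 0.0)

def linear_upper_bound(a, W1, b1, W2, b2, x_lo, x_hi):
    """g(x) = c.x + d with a.f(x) <= g(x) on the box, where
    f(x) = W2 @ relu(W1 @ x + b1) + b2."""
    mid, rad = (x_lo + x_hi) / 2, (x_hi - x_lo) / 2
    l = W1 @ mid + b1 - np.abs(W1) @ rad      # pre-activation bounds
    u = W1 @ mid + b1 + np.abs(W1) @ rad
    v = a @ W2                                # weight on each hidden unit
    s, t = np.zeros_like(v), np.zeros_like(v)
    for j in range(len(v)):
        # need an upper bound on relu where v_j >= 0, a lower bound elsewhere
        s[j], t[j] = relu_relaxation(l[j], u[j], need_upper=v[j] >= 0)
    return (v * s) @ W1, (v * s) @ b1 + v @ t + a @ b2

# the polytope {x in box : c.x + d <= b} under-approximates the preimage of
# the halfspace {y : a.y <= b}; intersect several such constraints for a
# general output polytope
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)
a, b = np.array([1.0, -1.0, 0.0]), 0.5
x_lo, x_hi = -np.ones(2), np.ones(2)
c, d = linear_upper_bound(a, W1, b1, W2, b2, x_lo, x_hi)

# sanity check: every sampled point of the polytope satisfies the property
f = lambda x: W2 @ np.maximum(W1 @ x + b1, 0) + b2
xs = rng.uniform(x_lo, x_hi, size=(10000, 2))
inside = xs[xs @ c + d <= b]
assert all(a @ f(x) <= b for x in inside)
print(f"{len(inside)}/10000 sampled points lie in the polytope")
```

The paper's refinement loop then splits the input region (by input or ReLU splitting) and recomputes such polytopes per subregion, growing the disjoint union toward the true preimage.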
Related papers
- PREMAP: A Unifying PREiMage APproximation Framework for Neural Networks [30.701422594374456]
We present a framework for preimage abstraction that produces under- and over-approximations of any polyhedral output set.
We evaluate our method on a range of tasks, demonstrating significant improvement in efficiency and scalability to high-input-dimensional image classification tasks.
arXiv Detail & Related papers (2024-08-17T17:24:47Z)
- Automated Design of Linear Bounding Functions for Sigmoidal Nonlinearities in Neural Networks [23.01933325606068]
Existing complete verification techniques offer provable guarantees for all robustness queries but struggle to scale beyond small neural networks.
We propose a novel parameter search method to improve the quality of these linear approximations.
Specifically, we show that using a simple search method, carefully adapted to the given verification problem through state-of-the-art algorithm configuration techniques, improves the average global lower bound by 25% over the current state of the art.
arXiv Detail & Related papers (2024-06-14T16:16:26Z)
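As an illustration of the kind of linear approximation being tuned, here is a sketch of a sound linear upper bound for the sigmoid over an interval crossing zero, with the tangent point found by binary search. The paper's contribution is choosing such parameters via algorithm-configuration search, which this sketch does not reproduce.

```python
import numpy as np

sig = lambda x: 1 / (1 + np.exp(-x))
dsig = lambda x: sig(x) * (1 - sig(x))

def sigmoid_upper_line(l, u, iters=50):
    """Sound linear upper bound s*x + t on sigmoid over [l, u], with l < 0 < u.
    The tangent point below is the kind of parameter that search methods tune."""
    m = (sig(u) - sig(l)) / (u - l)            # chord slope
    if m < dsig(u):
        # chord from (l, sig(l)) to (u, sig(u)) already lies above the curve
        return m, sig(l) - m * l
    # binary search for a tangent point d in [0, u]: the tangent at d bounds
    # the concave part for free; push it up until it also clears the convex
    # part, i.e. passes above (l, sig(l))
    lo, hi = 0.0, u
    for _ in range(iters):
        d = (lo + hi) / 2
        if sig(d) + dsig(d) * (l - d) >= sig(l):
            hi = d                             # sound; try a tighter line
        else:
            lo = d
    d = hi
    return dsig(d), sig(d) - dsig(d) * d

l, u = -3.0, 2.0
s, t = sigmoid_upper_line(l, u)
xs = np.linspace(l, u, 10001)
assert np.all(s * xs + t >= sig(xs) - 1e-9)    # numeric soundness check
print(f"upper bound: {s:.4f} * x + {t:.4f}")
```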
- Leveraging Activations for Superpixel Explanations [2.8792218859042453]
Saliency methods have become standard in the explanation toolkit of deep neural networks.
In this paper, we aim to avoid relying on segmenters by extracting a segmentation from the activations of a deep neural network image classifier.
The resulting Neuro-Activated Superpixels (NAS) isolate the regions of the input relevant to the model's prediction.
arXiv Detail & Related papers (2024-06-07T13:37:45Z)
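The summary does not spell out the NAS procedure, so the following is only a plausible sketch of the general idea: cluster upsampled per-pixel activations of a pretrained classifier into superpixels. The backbone, layer cut-off, and number of segments are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models
from sklearn.cluster import KMeans

def activation_superpixels(image, n_segments=32):
    """Segment an image by k-means clustering of per-pixel deep features.
    image: float tensor of shape (3, H, W), ImageNet-normalized."""
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    # truncate after an intermediate stage to keep some spatial resolution
    trunk = torch.nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool,
                                net.layer1, net.layer2)
    with torch.no_grad():
        feats = trunk(image.unsqueeze(0))            # (1, C, h, w)
    h, w = image.shape[1:]
    feats = F.interpolate(feats, size=(h, w), mode="bilinear",
                          align_corners=False)[0]    # (C, H, W)
    pixels = feats.permute(1, 2, 0).reshape(-1, feats.shape[0]).numpy()
    labels = KMeans(n_clusters=n_segments, n_init=4).fit_predict(pixels)
    return labels.reshape(h, w)                      # superpixel id per pixel

# usage (assuming `img` is a normalized (3, H, W) tensor):
# segments = activation_superpixels(img)
```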
- Provable Data Subset Selection For Efficient Neural Network Training [73.34254513162898]
We introduce the first algorithm to construct coresets for RBFNNs, i.e., small weighted subsets that approximate the loss of the input data on any radial basis function network.
We then perform empirical evaluations on function approximation and dataset subset selection on popular network architectures and data sets.
arXiv Detail & Related papers (2023-03-09T10:08:34Z)
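The paper's construction comes with provable approximation guarantees; the sketch below shows only the generic importance-sampling idea behind coresets for an RBF network's loss, with the sensitivity proxy and all sizes chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
X, y = rng.normal(size=(5000, 2)), rng.normal(size=5000)
centers, sigma = rng.normal(size=(10, 2)), 1.0

def rbf_losses(w, X, y):
    """Per-point squared loss of an RBF network with fixed centers."""
    K = np.exp(-((X[:, None] - centers[None]) ** 2).sum(-1) / (2 * sigma**2))
    return (K @ w - y) ** 2

# importance-sampling coreset: sample points proportionally to a sensitivity
# proxy, reweight by 1/(m * p_i) so the weighted sum is unbiased
w0 = rng.normal(size=10)                 # reference parameters for the proxy
s = rbf_losses(w0, X, y) + 1e-6
p = s / s.sum()
m = 200
idx = rng.choice(len(X), size=m, replace=True, p=p)
u = 1.0 / (m * p[idx])

w = rng.normal(size=10)                  # evaluate at different parameters
full = rbf_losses(w, X, y).sum()
core = (u * rbf_losses(w, X[idx], y[idx])).sum()
print(f"full loss {full:.1f}  vs  coreset estimate {core:.1f}")
```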
- Provably Bounding Neural Network Preimages [18.810882386083655]
We present the INVPROP algorithm for verifying properties over the preimage of a linearly constrained output set.
In contrast to other approaches, our efficient algorithm is GPU-accelerated and does not require a linear programming solver.
Our results show that in certain settings, we find over-approximations over 2500x tighter than prior work while being 2.5x faster.
arXiv Detail & Related papers (2023-02-02T20:34:45Z)
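For contrast with such specialized bounding, a deliberately naive baseline is easy to state: branch-and-bound over input boxes with interval bound propagation, keeping every box that cannot be ruled out. This is not the INVPROP algorithm, just a reference point showing the kind of loose cover it improves upon.

```python
import numpy as np

def ibp(W1, b1, W2, b2, lo, hi):
    """Interval bounds on f(x) = W2 @ relu(W1 @ x + b1) + b2 over a box."""
    def affine(W, b, lo, hi):
        mid, rad = (lo + hi) / 2, (hi - lo) / 2
        return W @ mid + b - np.abs(W) @ rad, W @ mid + b + np.abs(W) @ rad
    l, u = affine(W1, b1, lo, hi)
    return affine(W2, b2, np.maximum(l, 0), np.maximum(u, 0))

def preimage_cover(W1, b1, W2, b2, a, b, lo, hi, depth=8):
    """Boxes whose union contains {x in [lo, hi] : a . f(x) <= b}."""
    yl, yu = ibp(W1, b1, W2, b2, lo, hi)
    if np.where(a > 0, a * yl, a * yu).sum() > b:
        return []                       # provably outside the preimage: drop
    if np.where(a > 0, a * yu, a * yl).sum() <= b or depth == 0:
        return [(lo, hi)]               # provably inside, or depth exhausted
    i = int(np.argmax(hi - lo))         # split the widest input dimension
    m = (lo[i] + hi[i]) / 2
    hi_l, lo_r = hi.copy(), lo.copy()
    hi_l[i], lo_r[i] = m, m
    return (preimage_cover(W1, b1, W2, b2, a, b, lo, hi_l, depth - 1)
            + preimage_cover(W1, b1, W2, b2, a, b, lo_r, hi, depth - 1))

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)
boxes = preimage_cover(W1, b1, W2, b2, np.array([1.0, -1.0, 0.0]), 0.5,
                       -np.ones(2), np.ones(2))
print(f"over-approximation: union of {len(boxes)} boxes")
```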
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
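The exact-counting method itself is not described in the summary; purely to illustrate the quantity being counted, here is a brute-force census over a discretized toy input space, tractable only at this tiny scale, hence the need for smarter approaches.

```python
import itertools
import numpy as np

# toy network; safety property: output 0 must not exceed output 1
rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
f = lambda x: W2 @ np.maximum(W1 @ x + b1, 0) + b2

# exact count of violating input configurations on a finite grid
grid = np.linspace(-1, 1, 101)
violations = 0
for x in itertools.product(grid, grid):
    y = f(np.array(x))
    violations += int(y[0] > y[1])
total = len(grid) ** 2
print(f"{violations}/{total} inputs violate the property "
      f"({100 * violations / total:.2f}%)")
```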
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
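As background, a minimal sketch of plain (floating-point) interval bound propagation training: propagate the perturbation box through the layers, form worst-case logits, and train on the resulting certified loss. The quantization-aware part of QA-IBP is omitted; layer sizes and epsilon are illustrative.

```python
import torch
import torch.nn.functional as F

def ibp_bounds(layers, x, eps):
    """Propagate the box [x - eps, x + eps] through affine + ReLU layers."""
    lo, hi = x - eps, x + eps
    for i, layer in enumerate(layers):
        mid, rad = (lo + hi) / 2, (hi - lo) / 2
        mid = layer(mid)
        rad = F.linear(rad, layer.weight.abs())   # radius ignores the bias
        lo, hi = mid - rad, mid + rad
        if i < len(layers) - 1:                   # ReLU between hidden layers
            lo, hi = lo.clamp(min=0), hi.clamp(min=0)
    return lo, hi

def certified_loss(layers, x, y, eps):
    """Cross-entropy on the worst-case logits inside the input box:
    lower bound for the true class, upper bound for all others."""
    lo, hi = ibp_bounds(layers, x, eps)
    onehot = F.one_hot(y, lo.shape[-1]).bool()
    worst = torch.where(onehot, lo, hi)
    return F.cross_entropy(worst, y)

# minimal usage on random data
layers = torch.nn.ModuleList([torch.nn.Linear(16, 32), torch.nn.Linear(32, 10)])
x, y = torch.randn(8, 16), torch.randint(0, 10, (8,))
loss = certified_loss(layers, x, y, eps=0.01)
loss.backward()    # train on this loss to obtain certifiably robust weights
```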
- Spelunking the Deep: Guaranteed Queries for General Neural Implicit Surfaces [35.438964954948574]
This work presents a new approach to perform queries directly on general neural implicit functions for a wide range of existing architectures.
Our key tool is the application of range analysis to neural networks, using automatic arithmetic rules to bound the output of a network over a region.
We use the resulting bounds to develop queries including ray casting, intersection testing, constructing spatial hierarchies, fast mesh extraction, and closest-point evaluation.
arXiv Detail & Related papers (2022-02-05T00:37:08Z)
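A minimal sketch of the underlying range-analysis primitive, assuming a small tanh MLP as the implicit function: interval arithmetic bounds the network's value over a spatial box, and any box whose value range excludes zero provably misses the surface. This is the test that powers queries like hierarchy construction and ray casting.

```python
import numpy as np

# toy neural implicit: f(p) ~ signed distance, zero level set = surface
rng = np.random.default_rng(3)
Ws = [rng.normal(size=(16, 3)), rng.normal(size=(16, 16)), rng.normal(size=(1, 16))]
bs = [rng.normal(size=16), rng.normal(size=16), np.array([0.3])]

def range_bound(lo, hi):
    """Interval bound on the implicit value over the box [lo, hi]."""
    for i, (W, b) in enumerate(zip(Ws, bs)):
        mid, rad = (lo + hi) / 2, (hi - lo) / 2
        mid, rad = W @ mid + b, np.abs(W) @ rad
        lo, hi = mid - rad, mid + rad
        if i < len(Ws) - 1:
            lo, hi = np.tanh(lo), np.tanh(hi)   # tanh is monotone
    return lo[0], hi[0]

def box_may_contain_surface(lo, hi):
    l, u = range_bound(lo, hi)
    return l <= 0 <= u    # if 0 is outside [l, u], the cell is provably empty

# e.g. prune octree cells that provably miss the zero level set
print(box_may_contain_surface(np.array([0.1, 0.1, 0.1]),
                              np.array([0.2, 0.2, 0.2])))
```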
- Meta-Learning Sparse Implicit Neural Representations [69.15490627853629]
Implicit neural representations are a promising new avenue for representing general signals.
The current approach is difficult to scale to a large number of signals or a large dataset.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
arXiv Detail & Related papers (2021-10-27T18:02:53Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we prove that dynamically adapting network architectures tailored to each domain task, along with weight finetuning, benefits both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain [62.997667081978825]
Activation Relaxation (AR) is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system.
Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, and can operate on arbitrary computation graphs.
arXiv Detail & Related papers (2020-09-11T11:56:34Z)
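A minimal sketch of the dynamics described above, assuming an MLP with squared-error loss and values from a fixed forward pass: each layer's state relaxes toward a purely local target, and the equilibrium of the dynamics matches the exact backpropagation gradients. Step size and iteration counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
sizes = [4, 8, 8, 2]
Ws = [rng.normal(size=(m, n)) / np.sqrt(n) for n, m in zip(sizes, sizes[1:])]
phi, dphi = np.tanh, lambda a: 1 - np.tanh(a) ** 2

# forward pass (values held fixed during relaxation)
x, y = rng.normal(size=4), rng.normal(size=2)
hs, activs = [x], []
for W in Ws:
    activs.append(W @ hs[-1])
    hs.append(phi(activs[-1]))

# exact backprop gradients w.r.t. layer activations, for reference
gs = [None] * len(hs)
gs[-1] = hs[-1] - y                      # d/dh of 0.5 * ||h_L - y||^2
for l in reversed(range(len(Ws))):
    gs[l] = Ws[l].T @ (dphi(activs[l]) * gs[l + 1])

# Activation Relaxation: each unit relaxes toward a local fixed point whose
# equilibrium is exactly the backprop gradient
xs = [np.zeros_like(h) for h in hs]
xs[-1] = hs[-1] - y                      # output error is clamped
eta = 0.2
for _ in range(200):
    for l in range(len(Ws)):
        target = Ws[l].T @ (dphi(activs[l]) * xs[l + 1])
        xs[l] += eta * (target - xs[l])  # dx/dt = -x + W^T (phi' * x_next)

print(max(np.abs(xs[l] - gs[l]).max() for l in range(len(Ws))))  # ~0
```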