Fast Geometric Projections for Local Robustness Certification
- URL: http://arxiv.org/abs/2002.04742v3
- Date: Thu, 18 Feb 2021 18:42:52 GMT
- Title: Fast Geometric Projections for Local Robustness Certification
- Authors: Aymeric Fromherz, Klas Leino, Matt Fredrikson, Bryan Parno, Corina Păsăreanu
- Abstract summary: We present a fast procedure for checking local robustness in feed-forward neural networks.
We show how the regions around a point can be analyzed using simple geometric projections.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Local robustness ensures that a model classifies all inputs within an
$\ell_2$-ball consistently, which precludes various forms of adversarial
inputs. In this paper, we present a fast procedure for checking local
robustness in feed-forward neural networks with piecewise-linear activation
functions. Such networks partition the input space into a set of convex
polyhedral regions in which the network's behavior is linear; hence, a
systematic search for decision boundaries within the regions around a given
input is sufficient for assessing robustness. Crucially, we show how the
regions around a point can be analyzed using simple geometric projections, thus
admitting an efficient, highly-parallel GPU implementation that excels
particularly for the $\ell_2$ norm, where previous work has been less
effective. Empirically we find this approach to be far more precise than many
approximate verification approaches, while at the same time performing multiple
orders of magnitude faster than complete verifiers, and scaling to much deeper
networks.
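The core primitive the abstract refers to is a geometric projection: the $\ell_2$ distance from a point $x$ to the hyperplane $w^\top x + b = 0$ is $|w^\top x + b| / \lVert w \rVert$. Below is a minimal NumPy sketch of that check for the first layer of a dense ReLU network; it is our own illustration, not the paper's implementation (which also handles boundaries induced by deeper layers, decision boundaries, and a GPU-parallel search over neighboring regions), and all names in it are assumptions.

```python
import numpy as np

def facet_distances(x, W, b):
    """l2 distances from x to each first-layer activation boundary,
    i.e. the hyperplanes w_i . x + b_i = 0 that bound the convex
    region on which the network behaves linearly around x."""
    pre = W @ x + b                      # pre-activations at x
    norms = np.linalg.norm(W, axis=1)    # ||w_i|| for each neuron
    return np.abs(pre) / norms           # projection distances

# Which first-layer boundaries could an eps-ball around x cross?
rng = np.random.default_rng(0)
W, b = rng.normal(size=(8, 3)), rng.normal(size=8)
x, eps = rng.normal(size=3), 0.1
print("boundaries within eps:", np.nonzero(facet_distances(x, W, b) <= eps)[0])
```

If every such distance exceeds $\epsilon$ and the region contains no decision boundary, the $\epsilon$-ball lies inside a single linear region and local robustness holds there; otherwise the search continues into the neighboring regions.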
Related papers
- Provable Preimage Under-Approximation for Neural Networks (Full Version)
We propose an efficient anytime algorithm for generating symbolic under-approximations of the preimage of any polyhedron output set for neural networks.
The algorithm is sound and complete, exploiting our disjoint union of polytopes representation to provide formal guarantees.
Empirically, we validate the efficacy of our method across a range of domains, including a high-dimensional MNIST classification task.
arXiv Detail & Related papers (2023-05-05T16:55:27Z) - Spelunking the Deep: Guaranteed Queries for General Neural Implicit
- Spelunking the Deep: Guaranteed Queries for General Neural Implicit Surfaces
This work presents a new approach to perform queries directly on general neural implicit functions for a wide range of existing architectures.
Our key tool is the application of range analysis to neural networks, using automatic arithmetic rules to bound the output of a network over a region.
We use the resulting bounds to develop queries including ray casting, intersection testing, construction of spatial hierarchies, fast mesh extraction, and closest-point evaluation.
arXiv Detail & Related papers (2022-02-05T00:37:08Z) - Traversing the Local Polytopes of ReLU Neural Networks: A Unified
- Traversing the Local Polytopes of ReLU Neural Networks: A Unified Approach for Network Verification
Neural networks (NNs) with ReLU activation functions have found success in a wide range of applications.
Previous works on examining robustness and improving interpretability have partially exploited the piecewise-linear form of ReLU NNs.
In this paper, we explore the unique topological structure that ReLU NNs create in the input space, identifying the adjacency among the partitioned local polytopes.
arXiv Detail & Related papers (2021-11-17T06:12:39Z) - DISCO Verification: Division of Input Space into COnvex polytopes for
- DISCO Verification: Division of Input Space into COnvex polytopes for neural network verification
The impressive results of modern neural networks partly come from their non-linear behaviour.
We propose a method to simplify the verification problem by partitioning the input space into multiple linear subproblems.
We also assess the impact of a technique aimed at reducing the number of linear regions during training.
arXiv Detail & Related papers (2021-05-17T12:40:51Z) - Manifold Regularized Dynamic Network Pruning [102.24146031250034]
- Manifold Regularized Dynamic Network Pruning
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, showing better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z) - Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z) - A Point-Cloud Deep Learning Framework for Prediction of Fluid Flow
- A Point-Cloud Deep Learning Framework for Prediction of Fluid Flow Fields on Irregular Geometries
The network learns an end-to-end mapping between spatial positions and CFD quantities.
Incompressible, laminar, steady flow past a cylinder with various cross-sectional shapes is considered.
The network predicts the flow fields hundreds of times faster than a conventional CFD solver.
arXiv Detail & Related papers (2020-10-15T12:15:02Z) - ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single-shot network pruning methods and Lottery-Ticket-style approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z) - Neural Subdivision [58.97214948753937]
- Neural Subdivision
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z) - Fast local linear regression with anchor regularization [21.739281173516247]
We propose a simple yet effective local model training algorithm called the fast anchor regularized local linear method (FALL).
Through experiments on synthetic and real-world datasets, we demonstrate that FALL compares favorably in terms of accuracy with the state-of-the-art network Lasso algorithm.
arXiv Detail & Related papers (2020-02-21T10:03:33Z) - Depthwise Non-local Module for Fast Salient Object Detection Using a
- Depthwise Non-local Module for Fast Salient Object Detection Using a Single Thread
We propose a new deep learning algorithm for fast salient object detection.
The proposed algorithm achieves competitive accuracy and high inference efficiency simultaneously with a single CPU thread.
arXiv Detail & Related papers (2020-01-22T15:23:48Z)