Hierarchical Verification for Adversarial Robustness
- URL: http://arxiv.org/abs/2007.11826v1
- Date: Thu, 23 Jul 2020 07:03:05 GMT
- Title: Hierarchical Verification for Adversarial Robustness
- Authors: Cong Han Lim, Raquel Urtasun, Ersin Yumer
- Abstract summary: We introduce a new framework for the exact point-wise $\ell_p$ robustness verification problem.
LayerCert exploits the layer-wise geometric structure of deep feed-forward networks with rectified linear activations (ReLU networks).
We show that LayerCert provably reduces the number and size of the convex programs that one needs to solve compared to GeoCert.
- Score: 89.30150585592648
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a new framework for the exact point-wise $\ell_p$ robustness
verification problem that exploits the layer-wise geometric structure of deep
feed-forward networks with rectified linear activations (ReLU networks). The
activation regions of the network partition the input space, and one can verify
the $\ell_p$ robustness around a point by checking all the activation regions
within the desired radius. The GeoCert algorithm (Jordan et al., NeurIPS 2019)
treats this partition as a generic polyhedral complex in order to detect which
region to check next. In contrast, our LayerCert framework considers the
\emph{nested hyperplane arrangement} structure induced by the layers of the
ReLU network and explores regions in a hierarchical manner. We show that, under
certain conditions on the algorithm parameters, LayerCert provably reduces the
number and size of the convex programs that one needs to solve compared to
GeoCert. Furthermore, our LayerCert framework allows the incorporation of lower
bounding routines based on convex relaxations to further improve performance.
Experimental results demonstrate that LayerCert can significantly reduce both
the number of convex programs solved and the running time over the
state-of-the-art.
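The hierarchical details of LayerCert aside, the region-by-region search that both methods build on can be sketched in a few lines. Below is a minimal, illustrative Python sketch, assuming a toy one-hidden-layer ReLU network (so each activation region is a cell of a single hyperplane arrangement) and replacing the per-region convex programs with a closed-form $\ell_2$ facet distance; all names and sizes are made up:
```python
import heapq

import numpy as np

# Illustrative best-first search over ReLU activation regions, in the
# spirit of GeoCert/LayerCert.  Assumption: a toy ONE-hidden-layer network,
# so each region is a cell of a single hyperplane arrangement and the l2
# distance from x0 to the facet {w.x + b = 0} has the closed form
# |w.x0 + b| / ||w||.  The real algorithms solve a convex program per
# region; that step is left as a placeholder comment.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 2))      # 8 ReLU neurons, 2-D input
b = rng.normal(size=8)
x0 = np.array([0.3, -0.1])       # point whose neighborhood we certify
radius = 0.5                     # verification radius

def pattern(x):
    """ReLU sign pattern = the activation region containing x."""
    return tuple(int(v) for v in (W @ x + b > 0))

visited = {pattern(x0)}
frontier = [(0.0, pattern(x0))]  # (lower bound on distance from x0, region)
checked = 0
while frontier:
    dist, region = heapq.heappop(frontier)
    if dist > radius:
        break                    # every unexplored region is at least this far
    checked += 1
    # ... solve a convex program over `region` here (omitted) ...
    for i in range(len(b)):      # neighboring regions differ in one sign
        neighbor = list(region)
        neighbor[i] ^= 1
        neighbor = tuple(neighbor)
        if neighbor not in visited:
            visited.add(neighbor)
            facet_dist = abs(W[i] @ x0 + b[i]) / np.linalg.norm(W[i])
            heapq.heappush(frontier, (facet_dist, neighbor))

print(f"regions checked within radius {radius}: {checked}")
```
LayerCert's contribution is to replace this flat, generic frontier with a hierarchy that follows the network's layers, which is what prunes and shrinks the convex programs.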
Related papers
- Fast Point Cloud Geometry Compression with Context-based Residual Coding and INR-based Refinement [19.575833741231953]
We use the KNN method to determine the neighborhoods of raw surface points.
The conditional probability model adapts to the local geometry, leading to a significant rate reduction.
We incorporate an implicit neural representation into the refinement layer, allowing the decoder to sample points on the underlying surface at arbitrary densities.
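A minimal sketch of the k-NN neighborhood step, using brute-force pairwise distances in place of the spatial index a real codec would use (illustrative, not the paper's code):
```python
import numpy as np

# Illustrative k-NN neighborhood step for a point cloud; brute-force
# pairwise distances stand in for the spatial index a real codec would use.

rng = np.random.default_rng(1)
points = rng.normal(size=(100, 3))                      # raw surface points
k = 8
dists = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
neighborhoods = np.argsort(dists, axis=1)[:, 1:k + 1]   # drop self at column 0
print(neighborhoods.shape)                              # (100, 8): k neighbor indices per point
```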
arXiv Detail & Related papers (2024-08-06T05:24:06Z)
- Robust Stochastically-Descending Unrolled Networks [85.6993263983062]
Deep unrolling is an emerging learning-to-optimize method that unrolls a truncated iterative algorithm in the layers of a trainable neural network.
However, the convergence guarantees and generalizability of unrolled networks remain open theoretical problems.
We numerically assess unrolled architectures trained under the proposed constraints in two different applications.
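A minimal sketch of the unrolling idea, assuming a least-squares objective and per-layer step sizes as the learnable parameters (illustrative, not the paper's code):
```python
import numpy as np

# Illustrative deep unrolling: each "layer" is one gradient step of a
# truncated solver for min_x ||Ax - y||^2, with a per-layer step size as
# the trainable parameter (initialized here, not actually trained).

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 10))
y = A @ rng.normal(size=10)

num_layers = 5
step_sizes = np.full(num_layers, 0.01)   # one learnable parameter per layer

x = np.zeros(10)
for t in range(num_layers):              # unrolled iterations = network layers
    grad = A.T @ (A @ x - y)
    x = x - step_sizes[t] * grad

print("residual after unrolled forward pass:", float(np.linalg.norm(A @ x - y)))
```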
arXiv Detail & Related papers (2023-12-25T18:51:23Z)
- The Geometric Structure of Fully-Connected ReLU Layers [0.0]
We formalize and interpret the geometric structure of $d$-dimensional fully connected ReLU layers in neural networks.
We provide results on the geometric complexity of the decision boundary generated by such networks, and prove that, modulo an affine transformation, such a network can only generate $d$ different decision boundaries.
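A minimal sketch of this region structure, empirically counting the activation patterns a random fully connected ReLU layer induces on a 2-D grid (sizes are made up; this only visualizes the partition, not the paper's proofs):
```python
import numpy as np

# Illustrative look at the geometry: count the distinct ReLU sign patterns
# (activation regions) a random fully connected layer induces on a 2-D grid.

rng = np.random.default_rng(3)
W = rng.normal(size=(6, 2))              # 6 ReLU units on a 2-D input
b = rng.normal(size=6)

xs = np.linspace(-2.0, 2.0, 200)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
patterns = grid @ W.T + b > 0            # one boolean sign pattern per point
print("distinct activation regions on the grid:", len(np.unique(patterns, axis=0)))
```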
arXiv Detail & Related papers (2023-10-05T11:54:07Z)
- Adaptive Context Selection for Polyp Segmentation [99.9959901908053]
We propose an adaptive context selection based encoder-decoder framework composed of a Local Context Attention (LCA) module, a Global Context Module (GCM), and an Adaptive Selection Module (ASM).
LCA modules deliver local context features from encoder layers to decoder layers, enhancing attention to hard regions identified by the prediction map of the previous layer.
GCM further explores global context features and sends them to the decoder layers. ASM adaptively selects and aggregates context features through channel-wise attention.
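A minimal sketch of channel-wise attention for adaptive feature selection, in the squeeze-and-excitation style; the bottleneck width and sigmoid gating are assumptions, not details taken from the paper:
```python
import numpy as np

# Illustrative channel-wise attention for adaptive feature selection, in the
# squeeze-and-excitation style; bottleneck width and sigmoid gating are
# assumptions, not details taken from the paper.

def channel_attention(feats, W1, W2):
    """feats: (C, H, W) feature map, reweighted by learned channel gates."""
    squeezed = feats.mean(axis=(1, 2))            # global average pool per channel
    hidden = np.maximum(W1 @ squeezed, 0.0)       # bottleneck + ReLU
    gates = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))  # sigmoid channel weights
    return feats * gates[:, None, None]

rng = np.random.default_rng(4)
C = 16
feats = rng.normal(size=(C, 8, 8))
W1 = rng.normal(size=(C // 4, C))
W2 = rng.normal(size=(C, C // 4))
print(channel_attention(feats, W1, W2).shape)     # (16, 8, 8)
```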
arXiv Detail & Related papers (2023-01-12T04:06:44Z)
- Traversing the Local Polytopes of ReLU Neural Networks: A Unified Approach for Network Verification [6.71092092685492]
Neural networks (NNs) with ReLU activation functions have found success in a wide range of applications.
Previous works on examining robustness and improving interpretability have partially exploited the piecewise-linear form of ReLU NNs.
In this paper, we explore the unique topological structure that ReLU NNs create in the input space, identifying the adjacency among the partitioned local polytopes.
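A minimal sketch of this polytope view, checking whether two inputs share a local polytope by comparing layer-wise ReLU sign patterns; the weights are random placeholders:
```python
import numpy as np

# Illustrative polytope view: two inputs lie in the same local polytope iff
# every layer's ReLU sign pattern agrees, and adjacent polytopes differ in
# exactly one unit's sign.

rng = np.random.default_rng(5)
layers = [(rng.normal(size=(5, 3)), rng.normal(size=5)),
          (rng.normal(size=(4, 5)), rng.normal(size=4))]

def activation_pattern(x):
    """Layer-wise ReLU sign patterns identifying the polytope containing x."""
    pattern = []
    for W, b in layers:
        z = W @ x + b
        pattern.append(tuple(int(v) for v in (z > 0)))
        x = np.maximum(z, 0.0)
    return tuple(pattern)

x1, x2 = rng.normal(size=3), rng.normal(size=3)
p1, p2 = activation_pattern(x1), activation_pattern(x2)
flips = sum(a != b for l1, l2 in zip(p1, p2) for a, b in zip(l1, l2))
print("same polytope:", p1 == p2, "| sign flips between them:", flips)
```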
arXiv Detail & Related papers (2021-11-17T06:12:39Z)
- Sign-Agnostic CONet: Learning Implicit Surface Reconstructions by Sign-Agnostic Optimization of Convolutional Occupancy Networks [39.65056638604885]
We learn implicit surface reconstruction by sign-agnostic optimization of convolutional occupancy networks.
We show that this goal can be achieved by a simple yet effective design.
arXiv Detail & Related papers (2021-05-08T03:35:32Z)
- Neural Architecture Search as Sparse Supernet [78.09905626281046]
This paper extends the problem of Neural Architecture Search (NAS) from Single-Path and Multi-Path Search to automated Mixed-Path Search.
We model the NAS problem as a sparse supernet using a new continuous architecture representation with a mixture of sparsity constraints.
The sparse supernet enables us to automatically achieve sparsely-mixed paths upon a compact set of nodes.
arXiv Detail & Related papers (2020-07-31T14:51:52Z)
- The Hidden Convex Optimization Landscape of Two-Layer ReLU Neural Networks: an Exact Characterization of the Optimal Solutions [51.60996023961886]
We prove that finding all globally optimal two-layer ReLU neural networks can be performed by solving a convex optimization program with cone constraints.
Our analysis is novel, characterizes all optimal solutions, and does not rely on the duality-based analysis that was recently used to lift neural network training into convex spaces.
arXiv Detail & Related papers (2020-06-10T15:38:30Z)
- Local Propagation in Constraint-based Neural Network [77.37829055999238]
We study a constraint-based representation of neural network architectures.
We investigate a simple optimization procedure that is well suited to fulfil the so-called architectural constraints.
arXiv Detail & Related papers (2020-02-18T16:47:38Z)