Partition-Based Convex Relaxations for Certifying the Robustness of ReLU Neural Networks
- URL: http://arxiv.org/abs/2101.09306v1
- Date: Fri, 22 Jan 2021 19:36:40 GMT
- Title: Partition-Based Convex Relaxations for Certifying the Robustness of ReLU Neural Networks
- Authors: Brendon G. Anderson, Ziye Ma, Jingqi Li, Somayeh Sojoudi
- Abstract summary: In this paper, we study certifying the robustness of ReLU neural networks against adversarial input perturbations.
To diminish the relaxation error suffered by the popular linear programming (LP) and semidefinite programming (SDP) certification methods, we propose partitioning the input uncertainty set and solving the relaxations on each part separately.
We show that this approach reduces relaxation error, and that the error is eliminated entirely upon performing an LP relaxation with an intelligently designed partition.
- Score: 10.992151305603267
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study certifying the robustness of ReLU neural networks
against adversarial input perturbations. To diminish the relaxation error
suffered by the popular linear programming (LP) and semidefinite programming
(SDP) certification methods, we propose partitioning the input uncertainty set
and solving the relaxations on each part separately. We show that this approach
reduces relaxation error, and that the error is eliminated entirely upon
performing an LP relaxation with an intelligently designed partition. To scale
this approach to large networks, we consider coarser partitions that take the
same form as this motivating partition. We prove that computing such a
partition that directly minimizes the LP relaxation error is NP-hard. By
instead minimizing the worst-case LP relaxation error, we develop a
computationally tractable scheme with a closed-form optimal two-part partition.
We extend the analysis to the SDP, where the feasible set geometry is exploited
to design a two-part partition that minimizes the worst-case SDP relaxation
error. Experiments on IRIS classifiers demonstrate significant reduction in
relaxation error, offering certificates that are otherwise void without
partitioning. By independently increasing the input size and the number of
layers, we empirically illustrate under which regimes the partitioned LP and
SDP are best applied.
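As a concrete illustration of the partitioning idea, the following is a minimal Python sketch using crude interval arithmetic on a toy one-layer ReLU network as a stand-in for the paper's LP/SDP relaxations; the function names and the midpoint two-part split are illustrative assumptions, not the paper's optimized partition.

```python
import numpy as np

def relu_output_upper_bound(W, b, c, lo, hi):
    """Upper-bound c^T relu(W x + b) over the box lo <= x <= hi using
    interval arithmetic (a crude stand-in for the paper's LP relaxation)."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    z_hi = W_pos @ hi + W_neg @ lo + b    # upper bound on pre-activations
    z_lo = W_pos @ lo + W_neg @ hi + b    # lower bound on pre-activations
    y_hi = np.maximum(z_hi, 0)            # relu is monotone
    y_lo = np.maximum(z_lo, 0)
    c_pos, c_neg = np.maximum(c, 0), np.minimum(c, 0)
    return c_pos @ y_hi + c_neg @ y_lo

def partitioned_bound(W, b, c, lo, hi, axis=0):
    """Two-part partition: split the input box along `axis` and take the
    worst case over the parts; never looser than the unpartitioned bound."""
    mid = 0.5 * (lo[axis] + hi[axis])
    hi1, lo2 = hi.copy(), lo.copy()
    hi1[axis], lo2[axis] = mid, mid
    return max(relu_output_upper_bound(W, b, c, lo, hi1),
               relu_output_upper_bound(W, b, c, lo2, hi))

rng = np.random.default_rng(0)
W, b, c = rng.standard_normal((4, 3)), rng.standard_normal(4), rng.standard_normal(4)
lo, hi = -np.ones(3), np.ones(3)
print(relu_output_upper_bound(W, b, c, lo, hi))  # unpartitioned certificate
print(partitioned_bound(W, b, c, lo, hi))        # tighter (or equal) certificate
```

Because each part of the partition is contained in the original box, the bound on each part can only shrink, so the maximum over the parts is a valid and no-looser certificate; the paper's contribution is choosing the partition so that this tightening is as large as possible.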
Related papers
- Design and Prototyping Distributed CNN Inference Acceleration in Edge Computing [85.74517957717363]
HALP accelerates inference by designing a seamless collaboration among edge devices (EDs) in Edge Computing.
Experiments show that the distributed inference HALP achieves 1.7x inference acceleration for VGG-16.
It is shown that the model selection with distributed inference HALP can significantly improve service reliability.
arXiv Detail & Related papers (2022-11-24T19:48:30Z)
- Receptive Field-based Segmentation for Distributed CNN Inference Acceleration in Collaborative Edge Computing [93.67044879636093]
We study inference acceleration using distributed convolutional neural networks (CNNs) in a collaborative edge computing network.
We propose a novel collaborative edge computing scheme that uses fused-layer parallelization to partition a CNN model into multiple blocks of convolutional layers.
arXiv Detail & Related papers (2022-07-22T18:38:11Z)
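The receptive-field-based segmentation in the entry above hinges on mapping an output tile back to the input region it depends on. The following is a minimal sketch of that computation for a stack of 1-D convolutions; the (kernel, stride, padding) triples and function name are illustrative assumptions, not taken from the paper.

```python
def input_range_for_output_tile(layers, out_lo, out_hi):
    """Map an output-index range [out_lo, out_hi] back through a stack of
    1-D conv layers (kernel k, stride s, padding p) to the input range the
    tile depends on -- the receptive field used to split work across devices."""
    for k, s, p in reversed(layers):
        out_lo = out_lo * s - p            # leftmost input index touched
        out_hi = out_hi * s - p + (k - 1)  # rightmost input index touched
    return out_lo, out_hi

# e.g. two fused 3-wide stride-1 conv layers followed by a stride-2 layer;
# negative indices fall inside the zero padding and are clipped in practice
layers = [(3, 1, 1), (3, 1, 1), (3, 2, 1)]
print(input_range_for_output_tile(layers, 0, 15))  # input slice one device needs
```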
- A Unified View of SDP-based Neural Network Verification through Completely Positive Programming [27.742278216854714]
We develop an exact, convex formulation of verification as a completely positive program (CPP).
We provide analysis showing that our formulation is minimal -- the removal of any constraint fundamentally misrepresents the neural network computation.
arXiv Detail & Related papers (2022-03-06T19:23:09Z)
- Certifiable Outlier-Robust Geometric Perception: Exact Semidefinite Relaxations and Scalable Global Optimization [29.738513596063946]
We propose the first general framework to design certifiable algorithms for robust geometric perception in the presence of outliers.
Our experiments demonstrate that our SDP relaxation is exact even in the presence of outliers across applications.
arXiv Detail & Related papers (2021-09-07T21:42:16Z)
- DeepSplit: Scalable Verification of Deep Neural Networks via Operator Splitting [70.62923754433461]
Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem.
We propose a novel method that can directly solve a convex relaxation of the problem to high accuracy, by splitting it into smaller subproblems that often have analytical solutions.
arXiv Detail & Related papers (2021-06-16T20:43:49Z)
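DeepSplit's operator-splitting idea, solving a convex program by alternating between subproblems with analytical solutions, can be illustrated on a toy box-constrained problem. The ADMM sketch below is an assumption-laden stand-in for the pattern, not the paper's actual formulation.

```python
import numpy as np

def admm_box_qp(a, lo, hi, rho=1.0, iters=100):
    """Solve min ||x - a||^2 s.t. lo <= x <= hi by operator splitting:
    f(x) = ||x - a||^2, g(z) = indicator of the box, constraint x = z.
    Both subproblems have closed-form solutions, mirroring the DeepSplit pattern."""
    x = np.zeros_like(a)
    z = np.zeros_like(a)
    u = np.zeros_like(a)                           # scaled dual variable
    for _ in range(iters):
        x = (2 * a + rho * (z - u)) / (2 + rho)    # closed-form x-update
        z = np.clip(x + u, lo, hi)                 # projection onto the box
        u = u + x - z                              # dual update
    return z

a = np.array([2.0, -3.0, 0.5])
print(admm_box_qp(a, lo=-np.ones(3), hi=np.ones(3)))  # -> approx [1, -1, 0.5]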
- STRIDE along Spectrahedral Vertices for Solving Large-Scale Rank-One Semidefinite Relaxations [27.353023427198806]
We consider solving high-order semidefinite programming (SDP) relaxations of polynomial optimization problems (POPs).
Existing approaches, which solve the SDP independently of the POP, either cannot scale to large problems or suffer from slow convergence due to the typical degeneracy of such SDPs.
We propose a new algorithmic framework called STRIDE.
arXiv Detail & Related papers (2021-05-28T18:07:16Z)
- Lattice partition recovery with dyadic CART [79.96359947166592]
We study piece-wise constant signals corrupted by additive Gaussian noise over a $d$-dimensional lattice.
Data of this form naturally arise in a host of applications, and the tasks of signal detection or testing, de-noising and estimation have been studied extensively in the statistical and signal processing literature.
In this paper we consider instead the problem of partition recovery, i.e., of estimating the partition of the lattice induced by the constancy regions of the unknown signal.
We prove that a DCART-based procedure consistently estimates the underlying partition at a rate of order $\sigma^2 k^*$.
arXiv Detail & Related papers (2021-05-27T23:41:01Z)
- Partition-based formulations for mixed-integer optimization of trained ReLU neural networks [66.88252321870085]
This paper introduces a class of mixed-integer formulations for trained ReLU neural networks.
At one extreme, one partition per input recovers the convex hull of a node, i.e., the tightest possible formulation for each node.
arXiv Detail & Related papers (2021-02-08T17:27:34Z)
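For context on the formulations the entry above interpolates between, here is a minimal sketch of the standard big-M encoding of a single ReLU node, written as plain feasibility checks rather than solver code; the bounds and names are illustrative assumptions.

```python
def relu_bigM_feasible(x, y, delta, l, u):
    """Standard big-M MILP encoding of y = max(0, x) with bounds l <= x <= u,
    l < 0 < u, and binary indicator delta (1 if the unit is active).
    Partition-based formulations tighten this encoding by splitting the
    node's inputs; with one part per input the convex hull is recovered."""
    assert l < 0 < u
    return (y >= 0 and
            y >= x and
            y <= u * delta and           # inactive (delta = 0) forces y = 0
            y <= x - l * (1 - delta))    # active (delta = 1) forces y = x

# a feasible active point: x = 0.5, y = 0.5, delta = 1
print(relu_bigM_feasible(0.5, 0.5, 1, l=-1.0, u=1.0))   # True
# x negative, so delta = 0, but y > 0 violates y <= u * delta
print(relu_bigM_feasible(-0.5, 0.2, 0, l=-1.0, u=1.0))  # False
```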
- Adaptive Subcarrier, Parameter, and Power Allocation for Partitioned Edge Learning Over Broadband Channels [69.18343801164741]
Partitioned edge learning (PARTEL) implements parameter-server training, a well-known distributed learning method, in a wireless network.
We consider the case of deep neural network (DNN) models, which can be trained using PARTEL by introducing some auxiliary variables.
arXiv Detail & Related papers (2020-10-08T15:27:50Z)
- One Ring to Rule Them All: Certifiably Robust Geometric Perception with Outliers [32.1176248075545]
We propose the first general and practical framework for designing certifiable algorithms for perception in the presence of a large amount of outliers.
Our dual certifiers bound the suboptimality of any candidate solution.
arXiv Detail & Related papers (2020-06-11T19:46:42Z)
- Tightened Convex Relaxations for Neural Network Robustness Certification [10.68833097448566]
We exploit the structure of ReLU networks to improve relaxation errors through a novel partition-based certification procedure.
The proposed method is proven to tighten existing linear programming relaxations, and achieves zero relaxation error as the partition is made finer.
arXiv Detail & Related papers (2020-04-01T16:59:21Z)
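A minimal numeric sketch of why refinement drives the error to zero for a single ReLU: over an interval $[l, u]$ with $l < 0 < u$, the triangle (LP) relaxation has a positive worst-case gap, but splitting at zero leaves the ReLU linear on each part, where the relaxation is exact. The helper below is illustrative, not taken from the paper.

```python
def triangle_gap(l, u):
    """Worst-case gap between the triangle-relaxation upper envelope
    u * (x - l) / (u - l) and relu(x) over [l, u]; maximized at x = 0."""
    if l >= 0 or u <= 0:
        return 0.0             # relu is linear on the part; relaxation exact
    return -l * u / (u - l)    # envelope value at x = 0, where relu(0) = 0

l, u = -1.0, 2.0
print(triangle_gap(l, u))                               # 2/3: unpartitioned error
print(max(triangle_gap(l, 0.0), triangle_gap(0.0, u)))  # 0.0 after splitting at 0
```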
This list is automatically generated from the titles and abstracts of the papers on this site.