Partition-Based Convex Relaxations for Certifying the Robustness of ReLU
Neural Networks
- URL: http://arxiv.org/abs/2101.09306v1
- Date: Fri, 22 Jan 2021 19:36:40 GMT
- Title: Partition-Based Convex Relaxations for Certifying the Robustness of ReLU
Neural Networks
- Authors: Brendon G. Anderson, Ziye Ma, Jingqi Li, Somayeh Sojoudi
- Abstract summary: In this paper, we study certifying the robustness of ReLU neural networks against adversarial input perturbations.
To diminish the relaxation error suffered by the popular linear programming (LP) and semidefinite programming (SDP) certification methods, we propose partitioning the input uncertainty set and solving the relaxations on each part separately.
We show that this approach reduces relaxation error, and that the error is eliminated entirely upon performing an LP relaxation with an intelligently designed partition.
- Score: 10.992151305603267
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study certifying the robustness of ReLU neural networks
against adversarial input perturbations. To diminish the relaxation error
suffered by the popular linear programming (LP) and semidefinite programming
(SDP) certification methods, we propose partitioning the input uncertainty set
and solving the relaxations on each part separately. We show that this approach
reduces relaxation error, and that the error is eliminated entirely upon
performing an LP relaxation with an intelligently designed partition. To scale
this approach to large networks, we consider coarser partitions that take the
same form as this motivating partition. We prove that computing such a
partition that directly minimizes the LP relaxation error is NP-hard. By
instead minimizing the worst-case LP relaxation error, we develop a
computationally tractable scheme with a closed-form optimal two-part partition.
We extend the analysis to the SDP, where the feasible set geometry is exploited
to design a two-part partition that minimizes the worst-case SDP relaxation
error. Experiments on IRIS classifiers demonstrate significant reduction in
relaxation error, offering certificates that are otherwise void without
partitioning. By independently increasing the input size and the number of
layers, we empirically illustrate under which regimes the partitioned LP and
SDP are best applied.
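As a rough illustration of the partitioned LP certification idea described above, here is a minimal sketch (my own, not the authors' code): it assumes a random one-hidden-layer ReLU network, uses cvxpy for the standard triangle LP relaxation, and simply bisects the input box along one coordinate rather than computing the paper's optimal two-part partition.

```python
# Hypothetical sketch: lower-bound a scalar network output over an input box via the
# triangle LP relaxation, with and without a two-part partition of the box.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_in, n_hid = 2, 4
W1, b1 = rng.standard_normal((n_hid, n_in)), rng.standard_normal(n_hid)
w2, b2 = rng.standard_normal(n_hid), 0.0

def lp_lower_bound(x_lo, x_hi):
    """Triangle-relaxation LP lower bound on w2 @ relu(W1 x + b1) + b2 over [x_lo, x_hi]."""
    # Interval pre-activation bounds for the hidden layer.
    center, radius = (x_lo + x_hi) / 2, (x_hi - x_lo) / 2
    l = W1 @ center + b1 - np.abs(W1) @ radius
    u = W1 @ center + b1 + np.abs(W1) @ radius
    x = cp.Variable(n_in)
    z = cp.Variable(n_hid)   # pre-activations
    y = cp.Variable(n_hid)   # relaxed ReLU outputs
    cons = [x >= x_lo, x <= x_hi, z == W1 @ x + b1, y >= 0, y >= z]
    for j in range(n_hid):
        if u[j] <= 0:
            cons.append(y[j] == 0)        # provably inactive neuron
        elif l[j] >= 0:
            cons.append(y[j] == z[j])     # provably active neuron
        else:                             # unstable neuron: upper chord of the triangle
            cons.append(y[j] <= u[j] * (z[j] - l[j]) / (u[j] - l[j]))
    return cp.Problem(cp.Minimize(w2 @ y + b2), cons).solve()

x_lo, x_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
full = lp_lower_bound(x_lo, x_hi)

# Two-part partition: bisect along coordinate 0 and keep the worst per-part bound.
left = lp_lower_bound(x_lo, np.array([0.0, x_hi[1]]))
right = lp_lower_bound(np.array([0.0, x_lo[1]]), x_hi)
partitioned = min(left, right)

print(f"unpartitioned LP bound: {full:.4f}, two-part LP bound: {partitioned:.4f}")
```

Because every point of the box lies in one of the two parts, the minimum of the per-part LP bounds is still a valid certificate and can only be tighter than the single-box bound; the paper goes further by choosing the split to minimize the worst-case relaxation error.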
Related papers
- Robust Stochastically-Descending Unrolled Networks [85.6993263983062]
Deep unrolling is an emerging learning-to-optimize method that unrolls a truncated iterative algorithm in the layers of a trainable neural network.
However, the convergence guarantees and generalizability of unrolled networks remain open theoretical problems.
We numerically assess unrolled architectures trained under the proposed constraints in two different applications.
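As a generic, hypothetical sketch of what deep unrolling means (not code from this paper): each "layer" below executes one gradient-descent step on a least-squares objective, and the per-layer step sizes are the parameters one would train.

```python
# Illustrative unrolled solver: K layers, each performing one GD step on ||Ax - y||^2.
import numpy as np

def unrolled_gradient_descent(A, y, step_sizes, x0=None):
    """Forward pass of an unrolled network; step_sizes plays the role of learnable weights."""
    x = np.zeros(A.shape[1]) if x0 is None else x0
    for alpha_k in step_sizes:          # one layer per unrolled iteration
        grad = A.T @ (A @ x - y)        # gradient of the least-squares objective
        x = x - alpha_k * grad          # layer k applies its own step size
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
y = rng.standard_normal(20)
steps = np.full(10, 0.01)               # 10 unrolled layers; training would tune these
print(unrolled_gradient_descent(A, y, steps))
```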
arXiv Detail & Related papers (2023-12-25T18:51:23Z)
- A Unified View of SDP-based Neural Network Verification through Completely Positive Programming [27.742278216854714]
We develop an exact, convex formulation of verification as a completely positive program (CPP).
We provide analysis showing that our formulation is minimal -- the removal of any constraint fundamentally misrepresents the neural network computation.
arXiv Detail & Related papers (2022-03-06T19:23:09Z)
- Certifiable Outlier-Robust Geometric Perception: Exact Semidefinite Relaxations and Scalable Global Optimization [29.738513596063946]
We propose the first general framework to design certifiable algorithms for robust geometric perception in the presence of outliers.
Our experiments demonstrate that our SDP relaxation remains exact in the presence of outliers across applications.
arXiv Detail & Related papers (2021-09-07T21:42:16Z)
- DeepSplit: Scalable Verification of Deep Neural Networks via Operator Splitting [70.62923754433461]
Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem.
We propose a novel method that can directly solve a convex relaxation of the problem to high accuracy, by splitting it into smaller subproblems that often have analytical solutions.
arXiv Detail & Related papers (2021-06-16T20:43:49Z)
- STRIDE along Spectrahedral Vertices for Solving Large-Scale Rank-One Semidefinite Relaxations [27.353023427198806]
We consider solving high-order semidefinite programming (SDP) relaxations of nonconvex polynomial optimization problems (POPs).
Existing approaches, which solve the SDP independently from the POP, either cannot scale to large problems or suffer from slow convergence due to the typical degeneracy of such SDPs.
We propose a new algorithmic framework, STRIDE, which moves along vertices of the spectrahedron.
arXiv Detail & Related papers (2021-05-28T18:07:16Z)
- Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers [112.23981192818721]
We propose to use backward mode linear relaxation based perturbation analysis (LiRPA) to replace linear programming (LP) during the branch-and-bound (BaB) process.
Unlike LP, LiRPA applied naively can produce much weaker bounds and cannot even check certain conflicts between sub-domains during splitting.
We demonstrate an order of magnitude speedup compared to existing LP-based approaches.
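For intuition, incomplete verifiers of this kind propagate bounds through the network instead of solving an LP. The sketch below uses plain interval bound propagation, a much looser stand-in for the linear relaxation bounds LiRPA computes, and is not the authors' implementation.

```python
# Interval bound propagation (IBP) through affine + ReLU layers: cheap but loose bounds.
import numpy as np

def ibp_bounds(weights, biases, x_lo, x_hi):
    """Propagate elementwise lower/upper bounds layer by layer."""
    lo, hi = np.asarray(x_lo, float), np.asarray(x_hi, float)
    for i, (W, b) in enumerate(zip(weights, biases)):
        center, radius = (lo + hi) / 2, (hi - lo) / 2
        mid = W @ center + b
        rad = np.abs(W) @ radius          # worst-case spread under the affine map
        lo, hi = mid - rad, mid + rad
        if i < len(weights) - 1:          # ReLU on all hidden layers
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

rng = np.random.default_rng(2)
Ws = [rng.standard_normal((8, 4)), rng.standard_normal((2, 8))]
bs = [rng.standard_normal(8), rng.standard_normal(2)]
lo, hi = ibp_bounds(Ws, bs, x_lo=-0.1 * np.ones(4), x_hi=0.1 * np.ones(4))
print("output bounds:", lo, hi)  # a positive lower bound on a robustness margin would certify it
```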
arXiv Detail & Related papers (2020-11-27T16:42:12Z)
- Investigating the Scalability and Biological Plausibility of the Activation Relaxation Algorithm [62.997667081978825]
The Activation Relaxation (AR) algorithm provides a simple and robust approach for approximating the backpropagation of error algorithm.
We show that the algorithm can be further simplified and made more biologically plausible by introducing a learnable set of backwards weights.
We also investigate whether another biologically implausible assumption of the original AR algorithm -- the frozen feedforward pass -- can be relaxed without damaging performance.
arXiv Detail & Related papers (2020-10-13T08:02:38Z)
- One Ring to Rule Them All: Certifiably Robust Geometric Perception with Outliers [32.1176248075545]
We propose the first general and practical framework to design certifiable algorithms for perception in the presence of a large amount of outliers.
Our dual certifiers can bound the suboptimality of any candidate solution.
arXiv Detail & Related papers (2020-06-11T19:46:42Z)
- Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias [65.13042449121411]
In practice, training a network with the gradient estimates provided by EP does not scale to visual tasks harder than MNIST.
We show that a bias in the gradient estimate of EP, inherent in the use of finite nudging, is responsible for this phenomenon.
We apply these techniques to train an architecture with asymmetric forward and backward connections, yielding a 13.2% test error.
arXiv Detail & Related papers (2020-06-06T09:36:07Z)
- Tightened Convex Relaxations for Neural Network Robustness Certification [10.68833097448566]
We exploit the structure of ReLU networks to improve relaxation errors through a novel partition-based certification procedure.
The proposed method is proven to tighten existing linear programming relaxations, and achieves zero relaxation error as the partition is made finer.
arXiv Detail & Related papers (2020-04-01T16:59:21Z)
- Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment [52.02794488304448]
We propose a new distribution alignment method based on a log-likelihood ratio statistic and normalizing flows.
We experimentally verify that minimizing the resulting objective results in domain alignment that preserves the local structure of input domains.
arXiv Detail & Related papers (2020-03-26T22:10:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information (including all content) and is not responsible for any consequences arising from its use.