DeepSplit: Scalable Verification of Deep Neural Networks via Operator
Splitting
- URL: http://arxiv.org/abs/2106.09117v1
- Date: Wed, 16 Jun 2021 20:43:49 GMT
- Title: DeepSplit: Scalable Verification of Deep Neural Networks via Operator
Splitting
- Authors: Shaoru Chen, Eric Wong, J. Zico Kolter, Mahyar Fazlyab
- Abstract summary: Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem.
We propose a novel method that can directly solve a convex relaxation of the problem to high accuracy, by splitting it into smaller subproblems that often have analytical solutions.
- Score: 70.62923754433461
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Analyzing the worst-case performance of deep neural networks against input
perturbations amounts to solving a large-scale non-convex optimization problem,
for which several past works have proposed convex relaxations as a promising
alternative. However, even for reasonably-sized neural networks, these
relaxations are not tractable, and so must be replaced by even weaker
relaxations in practice. In this work, we propose a novel operator splitting
method that can directly solve a convex relaxation of the problem to high
accuracy, by splitting it into smaller sub-problems that often have analytical
solutions. The method is modular and scales to problem instances that were
previously impossible to solve exactly due to their size. Furthermore, the
solver operations are amenable to fast parallelization with GPU acceleration.
We demonstrate our method in obtaining tighter bounds on the worst-case
performance of large convolutional networks in image classification and
reinforcement learning settings.
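To make the splitting idea concrete: the property the abstract highlights is that each sub-problem in the splitting admits a closed-form update. Below is a minimal, hypothetical ADMM sketch of that operator-splitting paradigm on a toy box-constrained least-squares problem; it is not the paper's verification solver, and the problem, function name, and parameters are illustrative assumptions only.

```python
# Toy operator-splitting (ADMM) sketch, NOT the DeepSplit solver:
#   minimize 0.5 * ||A x - b||^2   subject to  l <= x <= u
# split as f(x) = 0.5||Ax - b||^2 and g(z) = indicator of the box [l, u],
# with the consensus constraint x = z. Both updates are closed form.
import numpy as np

def admm_box_least_squares(A, b, l, u, rho=1.0, iters=200):
    n = A.shape[1]
    z = np.zeros(n)
    y = np.zeros(n)                      # scaled dual variable
    lhs = A.T @ A + rho * np.eye(n)      # x-update system, formed once
    for _ in range(iters):
        # x-update: unconstrained quadratic, solved analytically.
        x = np.linalg.solve(lhs, A.T @ b + rho * (z - y))
        # z-update: projection onto the box [l, u], also analytical.
        z = np.clip(x + y, l, u)
        # dual ascent on the consensus constraint x = z.
        y = y + x - z
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)
    print(admm_box_least_squares(A, b, l=-0.5, u=0.5))
```

Each iteration touches only small, independent pieces of the problem, which is also why this style of update parallelizes well on GPUs.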
Related papers
- Optimization Over Trained Neural Networks: Taking a Relaxing Walk [4.517039147450688]
We propose a more scalable solver based on exploring global and local linear relaxations of the neural network model.
Our solver is competitive with a state-of-the-art MILP solver and the prior approach, while producing better solutions as the input dimension, depth, and number of neurons increase.
arXiv Detail & Related papers (2024-01-07T11:15:00Z)
- Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
arXiv Detail & Related papers (2022-10-14T19:31:39Z)
- Physics-informed neural network simulation of multiphase poroelasticity using stress-split sequential training [0.0]
We present a framework for solving problems governed by partial differential equations (PDEs) based on physics-informed neural networks.
We find that the approach converges on poroelasticity benchmark problems, including Barry-Mercer's injection-production problem and a two-phase drainage problem.
arXiv Detail & Related papers (2021-10-06T20:09:09Z)
- Learning from Images: Proactive Caching with Parallel Convolutional Neural Networks [94.85780721466816]
A novel framework for proactive caching is proposed in this paper.
It combines model-based optimization with data-driven techniques by transforming an optimization problem into a grayscale image.
Numerical results show that the proposed scheme can reduce computation time by 71.6% with only a 0.8% additional performance cost.
arXiv Detail & Related papers (2021-08-15T21:32:47Z)
- Non-Gradient Manifold Neural Network [79.44066256794187]
Deep neural networks (DNNs) generally take thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z)
- Tunable Subnetwork Splitting for Model-parallelism of Neural Network Training [12.755664985045582]
We propose a Tunable Subnetwork Splitting Method (TSSM) to tune the decomposition of deep neural networks.
Our proposed TSSM can achieve significant speedup without observable loss of training accuracy.
arXiv Detail & Related papers (2020-09-09T01:05:12Z)
- Efficient and Sparse Neural Networks by Pruning Weights in a Multiobjective Learning Approach [0.0]
We propose a multiobjective perspective on the training of neural networks by treating prediction accuracy and network complexity as two individual objective functions.
Preliminary numerical results on exemplary convolutional neural networks confirm that large reductions in the complexity of neural networks with negligible loss of accuracy are possible.
arXiv Detail & Related papers (2020-08-31T13:28:03Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model (a minimal message-passing sketch follows after this list).
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable, and provides parameter-efficient and robust solutions in stereo, optical flow, and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
- Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
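For the BP-Layers entry above, the following is a minimal, hypothetical sketch of min-sum message passing (max-product in log space) on a 1-D chain, the kind of inference routine such a layer wraps; it is not the paper's implementation, and the truncated-linear pairwise cost, function name, and array shapes are illustrative assumptions.

```python
# Toy min-sum (max-product in negative-log space) pass on a 1-D chain.
# Illustrative only; NOT the BP-Layers implementation.
import numpy as np

def chain_min_sum(unary, lam=1.0, trunc=2.0):
    """unary: (N, K) per-node label costs; returns per-node min-marginals (N, K)."""
    n, k = unary.shape
    labels = np.arange(k)
    # Truncated-linear pairwise cost between labels of neighbouring nodes (assumed).
    pairwise = np.minimum(np.abs(labels[:, None] - labels[None, :]) * lam, trunc)

    fwd = np.zeros((n, k))   # messages passed left -> right
    bwd = np.zeros((n, k))   # messages passed right -> left
    for i in range(1, n):
        fwd[i] = np.min(unary[i - 1] + fwd[i - 1] + pairwise.T, axis=1)
    for i in range(n - 2, -1, -1):
        bwd[i] = np.min(unary[i + 1] + bwd[i + 1] + pairwise, axis=1)

    # Per-node min-marginals; argmin over axis 1 gives a MAP labeling (up to ties).
    return unary + fwd + bwd

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    costs = rng.random((8, 4))          # 8 nodes (e.g. pixels), 4 candidate labels
    print(chain_min_sum(costs).argmin(axis=1))
```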
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.