Automatic Perturbation Analysis for Scalable Certified Robustness and
Beyond
- URL: http://arxiv.org/abs/2002.12920v3
- Date: Mon, 26 Oct 2020 03:26:40 GMT
- Title: Automatic Perturbation Analysis for Scalable Certified Robustness and
Beyond
- Authors: Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie
Huang, Bhavya Kailkhura, Xue Lin, Cho-Jui Hsieh
- Abstract summary: Linear relaxation based perturbation analysis (LiRPA) for neural networks has become a core component in robustness verification and certified defense.
We develop an automatic framework to enable perturbation analysis on any neural network structures.
We demonstrate LiRPA based certified defense on Tiny ImageNet and Downscaled ImageNet.
- Score: 171.07853346630057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Linear relaxation based perturbation analysis (LiRPA) for neural networks,
which computes provable linear bounds of output neurons given a certain amount
of input perturbation, has become a core component in robustness verification
and certified defense. The majority of LiRPA-based methods focus on simple
feed-forward networks and need particular manual derivations and
implementations when extended to other architectures. In this paper, we develop
an automatic framework to enable perturbation analysis on any neural network
structures, by generalizing existing LiRPA algorithms such as CROWN to operate
on general computational graphs. The flexibility, differentiability and ease of
use of our framework allow us to obtain state-of-the-art results on LiRPA based
certified defense on fairly complicated networks like DenseNet, ResNeXt and
Transformer that are not supported by prior works. Our framework also enables
loss fusion, a technique that significantly reduces the computational
complexity of LiRPA for certified defense. For the first time, we demonstrate
LiRPA based certified defense on Tiny ImageNet and Downscaled ImageNet, to which
previous approaches could not scale due to the relatively large number of
classes. Our work also yields an open-source library for the community to apply
LiRPA to areas beyond certified defense without much LiRPA expertise, e.g., we
create a neural network with a provably flat optimization landscape by applying
LiRPA to network parameters. Our open-source library is available at
https://github.com/KaidiXu/auto_LiRPA.
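As a rough illustration of the kind of guarantee such perturbation analysis provides, the sketch below implements plain interval bound propagation (IBP), a much simpler relative of the CROWN-style linear relaxation bounds described above (and one of the bound options such frameworks typically support). It propagates sound elementwise lower/upper bounds through affine and ReLU layers of a toy network under an $\ell_\infty$ input perturbation of radius eps; the network weights are invented for illustration and are not taken from the paper.

```python
# Minimal interval bound propagation (IBP) sketch: given lo <= x <= hi
# elementwise, compute sound output bounds of a small ReLU network.

def affine_bounds(W, b, lo, hi):
    """Sound output bounds of x -> Wx + b when lo <= x <= hi (elementwise)."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        lo_acc = hi_acc = bias
        for w, l, h in zip(row, lo, hi):
            # A positive weight maps the lower input bound to the lower
            # output bound; a negative weight swaps the roles.
            lo_acc += w * (l if w >= 0 else h)
            hi_acc += w * (h if w >= 0 else l)
        out_lo.append(lo_acc)
        out_hi.append(hi_acc)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    # ReLU is monotone, so it can be applied to each bound directly.
    return [max(l, 0.0) for l in lo], [max(h, 0.0) for h in hi]

def ibp(x, eps, layers):
    """Certified box bounds for all inputs within an l_inf ball of radius eps."""
    lo = [v - eps for v in x]
    hi = [v + eps for v in x]
    for W, b in layers:
        lo, hi = affine_bounds(W, b, lo, hi)
        lo, hi = relu_bounds(lo, hi)  # toy model: ReLU after every layer
    return lo, hi

# Two-layer toy network; every concrete input within eps of x is guaranteed
# to produce an output inside the certified box [lo, hi].
layers = [
    ([[1.0, -2.0], [0.5, 1.0]], [0.1, -0.2]),
    ([[1.0, 1.0]], [0.0]),
]
lo, hi = ibp([0.3, 0.4], 0.1, layers)  # -> lo ~ [0.2], hi ~ [0.5]
```

Linear relaxation methods such as CROWN tighten these boxes by keeping linear (rather than constant) bounds through unstable ReLUs, which is what the framework above generalizes to arbitrary computational graphs.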
Related papers
- Robust Training and Verification of Implicit Neural Networks: A
Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness [172.61581010141978]
Certifiable robustness is a desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios.
We propose a novel solution to strategically manipulate neurons, by "grafting" appropriate levels of linearity.
arXiv Detail & Related papers (2022-06-15T22:42:29Z) - Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted by up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z) - Training Certifiably Robust Neural Networks with Efficient Local
Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on the MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z) - Reduced-Order Neural Network Synthesis with Robustness Guarantees [0.0]
Machine learning algorithms are being adapted to run locally on-board potentially hardware-limited devices to improve user privacy, reduce latency, and be more energy efficient.
To address this issue, a method is introduced to automatically synthesize reduced-order neural networks (having fewer neurons) that approximate the input/output mapping of a larger one.
Worst-case bounds for this approximation error are obtained, and the approach can be applied to a wide variety of neural network architectures.
arXiv Detail & Related papers (2021-02-18T12:03:57Z) - Certifying Incremental Quadratic Constraints for Neural Networks via
Convex Optimization [2.388501293246858]
We propose a convex program to certify incremental quadratic constraints on the map of neural networks over a region of interest.
These certificates can capture several useful properties such as (local) Lipschitz continuity, one-sided Lipschitz continuity, invertibility, and contraction.
arXiv Detail & Related papers (2020-12-10T21:15:00Z) - Weight Pruning via Adaptive Sparsity Loss [31.978830843036658]
Pruning neural networks has regained interest in recent years as a means to compress state-of-the-art deep neural networks.
We propose a robust learning framework that efficiently prunes network parameters during training with minimal computational overhead.
arXiv Detail & Related papers (2020-06-04T10:55:16Z) - Reach-SDP: Reachability Analysis of Closed-Loop Systems with Neural
Network Controllers via Semidefinite Programming [19.51345816555571]
We propose a novel forward reachability analysis method for the safety verification of linear time-varying systems with neural networks in feedback.
We show that we can compute these approximate reachable sets using semidefinite programming.
We illustrate our method in a quadrotor example, in which we first approximate a nonlinear model predictive controller via a deep neural network and then apply our analysis tool to certify finite-time reachability and constraint satisfaction of the closed-loop system.
arXiv Detail & Related papers (2020-04-16T18:48:25Z) - Large-Scale Gradient-Free Deep Learning with Recursive Local
Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z) - Lossless Compression of Deep Neural Networks [17.753357839478575]
Deep neural networks have been successful in many predictive modeling tasks, such as image and language recognition.
It is challenging to deploy these networks under limited computational resources, such as in mobile devices.
We introduce an algorithm that removes units and layers of a neural network while not changing the output that is produced.
arXiv Detail & Related papers (2020-01-01T15:04:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.