Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness
- URL: http://arxiv.org/abs/2206.07839v1
- Date: Wed, 15 Jun 2022 22:42:29 GMT
- Title: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness
- Authors: Tianlong Chen, Huan Zhang, Zhenyu Zhang, Shiyu Chang, Sijia Liu,
Pin-Yu Chen, Zhangyang Wang
- Abstract summary: Certifiable robustness is a desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios.
We propose a novel solution to strategically manipulate neurons, by "grafting" appropriate levels of linearity.
- Score: 172.61581010141978
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Certifiable robustness is a highly desirable property for adopting deep
neural networks (DNNs) in safety-critical scenarios, but often demands tedious
computations to establish. The main hurdle lies in the massive amount of
non-linearity in large DNNs. To trade off the DNN expressiveness (which calls
for more non-linearity) and robustness certification scalability (which prefers
more linearity), we propose a novel solution to strategically manipulate
neurons, by "grafting" appropriate levels of linearity. The core of our
proposal is to first linearize insignificant ReLU neurons, to eliminate the
non-linear components that are both redundant for DNN performance and harmful
to its certification. We then optimize the associated slopes and intercepts of
the replaced linear activations for restoring model performance while
maintaining certifiability. Hence, typical neuron pruning can be viewed as a
special case of grafting a linear function with fixed zero slopes and
intercepts, which might overly restrict the network's flexibility and sacrifice
its performance. Extensive experiments on multiple datasets and network backbones
show that our linearity grafting can (1) effectively tighten certified bounds;
(2) achieve competitive certifiable robustness without certified robust
training (i.e., over 30% improvements on CIFAR-10 models); and (3) scale up
complete verification to large adversarially trained models with 17M
parameters. Code is available at
https://github.com/VITA-Group/Linearity-Grafting.
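Below is a minimal sketch of the grafting idea described in the abstract, assuming a PyTorch-style activation module. It is illustrative only, not the authors' released implementation (see the repository linked above), and all names here (GraftedReLU, graft, mask) are hypothetical.

```python
# Hypothetical sketch of linearity grafting (illustrative only; the authors'
# actual implementation is in the repository linked above). Insignificant
# ReLU neurons are replaced by a learnable linear function
# slope * x + intercept; the remaining neurons keep their ReLU.
import torch
import torch.nn as nn


class GraftedReLU(nn.Module):
    """ReLU layer in which selected neurons are 'grafted' to be linear."""

    def __init__(self, num_features: int):
        super().__init__()
        # mask[i] = 1 keeps neuron i's ReLU; mask[i] = 0 makes it linear.
        self.register_buffer("mask", torch.ones(num_features))
        # Learnable slope and intercept of the grafted linear activations.
        # Initializing both to zero makes grafting start out as pruning.
        self.slope = nn.Parameter(torch.zeros(num_features))
        self.intercept = nn.Parameter(torch.zeros(num_features))

    def graft(self, indices):
        """Linearize the (insignificant) neurons at the given indices."""
        self.mask[indices] = 0.0

    def forward(self, x):
        relu_out = torch.relu(x)
        linear_out = self.slope * x + self.intercept
        # Grafted neurons contribute the linear branch, the rest the ReLU.
        return self.mask * relu_out + (1.0 - self.mask) * linear_out
```

Holding a grafted neuron's slope and intercept at zero recovers classical neuron pruning, the special case noted in the abstract; fine-tuning them instead restores expressiveness while keeping those neurons exactly linear, so a verifier no longer needs to relax them.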
Related papers
- Towards General Robustness Verification of MaxPool-based Convolutional Neural Networks via Tightening Linear Approximation [51.235583545740674]
MaxLin is a robustness verifier for MaxPool-based CNNs with tight linear approximation.
We evaluate MaxLin with open-sourced benchmarks, including LeNet and networks trained on the MNIST, CIFAR-10, and Tiny ImageNet datasets.
arXiv Detail & Related papers (2024-06-02T10:33:04Z)
- A Novel Explanation Against Linear Neural Networks [1.223779595809275]
Linear Regression and neural networks are widely used to model data.
We show that neural networks without activation functions, or linear neural networks, actually reduce both training and testing performance.
We prove this hypothesis through an analysis of the optimization of an LNN and through rigorous testing comparing the performance of LNNs and linear regression on noisy datasets.
arXiv Detail & Related papers (2023-12-30T09:44:51Z)
- Certifying Robustness of Convolutional Neural Networks with Tight Linear Approximation [5.678314425261842]
Ti-Lin is a Tight Linear approximation approach for robustness verification of Convolutional Neural Networks.
We present new linear constraints for S-shaped activation functions, which are tighter than both existing Neuron-wise Tightest and Network-wise Tightest tools.
We evaluate it with 48 different CNNs trained on MNIST, CIFAR-10, and Tiny ImageNet datasets.
arXiv Detail & Related papers (2022-11-13T08:37:13Z)
- Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
BNNs neglect the intrinsic bilinear relationship between real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
arXiv Detail & Related papers (2022-09-04T06:45:33Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
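As context for the comparison in the entry above, here is a minimal sketch of interval bound propagation (IBP) through one affine layer followed by a ReLU. It is a generic illustration of the baseline family the entry mentions, not that paper's interval reachability method for implicit networks.

```python
# Minimal, generic sketch of interval bound propagation (IBP): propagate
# elementwise input bounds [lower, upper] through x -> W x + b, then ReLU.
import torch


def ibp_affine(lower, upper, weight, bias):
    """Propagate bounds through an affine layer using center and radius."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = weight @ center + bias
    new_radius = weight.abs() @ radius  # |W| maps input radius to output
    return new_center - new_radius, new_center + new_radius


def ibp_relu(lower, upper):
    """ReLU is monotone, so interval bounds pass through directly."""
    return torch.relu(lower), torch.relu(upper)
```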
- FitAct: Error Resilient Deep Neural Networks via Fine-Grained Post-Trainable Activation Functions [0.05249805590164901]
Deep neural networks (DNNs) are increasingly being deployed in safety-critical systems such as personal healthcare devices and self-driving cars.
In this paper, we propose FitAct, a low-cost approach to enhance the error resilience of DNNs by deploying fine-grained post-trainable activation functions.
arXiv Detail & Related papers (2021-12-27T07:07:50Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- FTBNN: Rethinking Non-linearity for 1-bit CNNs and Going Beyond [23.5996182207431]
We show that the binarized convolution process exhibits increasing linearity toward the objective of minimizing such error, which in turn hampers the BNN's discriminative ability.
We re-investigate and tune appropriate non-linear modules to resolve this contradiction, leading to a strong baseline that achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-10-19T08:11:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.