Towards Evaluating and Training Verifiably Robust Neural Networks
- URL: http://arxiv.org/abs/2104.00447v2
- Date: Mon, 5 Apr 2021 02:31:33 GMT
- Title: Towards Evaluating and Training Verifiably Robust Neural Networks
- Authors: Zhaoyang Lyu, Minghao Guo, Tong Wu, Guodong Xu, Kehuan Zhang, Dahua Lin
- Abstract summary: We study the relationship between IBP and CROWN, and prove that CROWN is always tighter than IBP when choosing appropriate bounding lines.
We propose a relaxed version of CROWN, linear bound propagation (LBP), that can be used to verify large networks to obtain lower verified errors.
- Score: 81.39994285743555
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent works have shown that interval bound propagation (IBP) can be used to
train verifiably robust neural networks. Researchers observe an intriguing
phenomenon on these IBP trained networks: CROWN, a bounding method based on
tight linear relaxation, often gives very loose bounds on these networks. We
also observe that most neurons become dead during the IBP training process,
which could hurt the representation capability of the network. In this paper,
we study the relationship between IBP and CROWN, and prove that CROWN is always
tighter than IBP when choosing appropriate bounding lines. We further propose a
relaxed version of CROWN, linear bound propagation (LBP), that can be used to
verify large networks to obtain lower verified errors than IBP. We also design
a new activation function, parameterized ramp function (ParamRamp), which has
more diversity of neuron status than ReLU. We conduct extensive experiments on
MNIST, CIFAR-10 and Tiny-ImageNet with ParamRamp activation and achieve
state-of-the-art verified robustness. Code and the appendix are available at
https://github.com/ZhaoyangLyu/VerifiablyRobustNN.
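The core verification primitive named in the abstract, interval bound propagation, is straightforward to illustrate. The sketch below (plain NumPy, not the authors' code from the linked repository) propagates an input box through an affine layer and a monotone activation; the `param_ramp` function is only a hypothetical stand-in for the paper's ParamRamp activation, whose exact parameterization is not given in the abstract. CROWN and LBP instead bound each layer with linear functions of the input, which is why they can be tighter than IBP when the bounding lines are chosen well.

```python
# Minimal IBP sketch (illustrative only; see the authors' repository for the real implementation).
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate an input box [l, u] through an affine layer Wx + b."""
    center = (u + l) / 2.0
    radius = (u - l) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius   # elementwise |W| widens the box
    return out_center - out_radius, out_center + out_radius

def param_ramp(x, r=1.0):
    """Hypothetical ParamRamp-style activation: identity on [0, r], clipped outside.
    The paper's exact parameterization (e.g. extra slopes or a learnable r) may differ."""
    return np.clip(x, 0.0, r)

def ibp_activation(l, u, act):
    """Any elementwise monotone activation (ReLU, ramp, ...) maps bounds to bounds."""
    return act(l), act(u)

# Toy example: bound the outputs of a one-hidden-layer network on an eps-ball around x0.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)
x0, eps = rng.standard_normal(3), 0.1

l, u = x0 - eps, x0 + eps
l, u = ibp_affine(l, u, W1, b1)
l, u = ibp_activation(l, u, param_ramp)
l, u = ibp_affine(l, u, W2, b2)
print("output lower bounds:", l)
print("output upper bounds:", u)
```

If the resulting output bounds keep the true class's logit above every other logit for all points in the box, the network is verified robust on that input; the verified error reported in the paper counts the inputs for which this check fails.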
Related papers
- Predictive Coding Networks and Inference Learning: Tutorial and Survey [0.7510165488300368]
Predictive coding networks (PCNs) are based on the neuroscientific framework of predictive coding.
Unlike traditional neural networks trained with backpropagation (BP), PCNs utilize inference learning (IL), a more biologically plausible algorithm.
As inherently probabilistic (graphical) latent variable models, PCNs provide a versatile framework for both supervised learning and unsupervised (generative) modeling.
arXiv Detail & Related papers (2024-07-04T18:39:20Z) - Evolutionary algorithms as an alternative to backpropagation for
supervised training of Biophysical Neural Networks and Neural ODEs [12.357635939839696]
We investigate the use of "gradient-estimating" evolutionary algorithms for training biophysically based neural networks.
We find that EAs have several advantages that make them desirable over direct BP.
Our findings suggest that biophysical neurons could provide useful benchmarks for testing the limits of BP methods.
arXiv Detail & Related papers (2023-11-17T20:59:57Z) - Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust
Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z) - Globally Optimal Training of Neural Networks with Threshold Activation
Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z) - Constrained Parameter Inference as a Principle for Learning [5.080518039966762]
We propose constrained parameter inference (COPI) as a new principle for learning.
COPI allows for the estimation of network parameters under the constraints of decorrelated neural inputs and top-down perturbations of neural states.
We show that COPI is not only more biologically plausible but also provides distinct advantages for fast learning compared with standard backpropagation of error.
arXiv Detail & Related papers (2022-03-22T13:40:57Z) - On the Convergence of Certified Robust Training with Interval Bound
Propagation [147.77638840942447]
We present a theoretical analysis on the convergence of IBP training.
We show that when using IBP training to train a randomly initialized two-layer ReLU network with logistic loss, gradient descent can linearly converge to zero robust training error.
arXiv Detail & Related papers (2022-03-16T21:49:13Z) - Predictive Coding Can Do Exact Backpropagation on Convolutional and
Recurrent Neural Networks [40.51949948934705]
Predictive coding networks (PCNs) are an influential model for information processing in the brain.
BP is commonly regarded to be the most successful learning method in modern machine learning.
We show that a biologically plausible algorithm is able to exactly replicate the accuracy of BP on complex architectures.
arXiv Detail & Related papers (2021-03-05T14:57:01Z) - Encoding the latent posterior of Bayesian Neural Networks for
uncertainty quantification [10.727102755903616]
We aim for efficient deep BNNs amenable to complex computer vision architectures.
We achieve this by leveraging variational autoencoders (VAEs) to learn the interaction and the latent distribution of the parameters at each network layer.
Our approach, Latent-Posterior BNN (LP-BNN), is compatible with the recent BatchEnsemble method, leading to highly efficient (in terms of computation and memory during both training and testing) ensembles.
arXiv Detail & Related papers (2020-12-04T19:50:09Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.