Certifying Robustness of Convolutional Neural Networks with Tight Linear Approximation
- URL: http://arxiv.org/abs/2211.09810v1
- Date: Sun, 13 Nov 2022 08:37:13 GMT
- Title: Certifying Robustness of Convolutional Neural Networks with Tight Linear Approximation
- Authors: Yuan Xiao, Tongtong Bai, Mingzheng Gu, Chunrong Fang, Zhenyu Chen
- Abstract summary: Ti-Lin is a Tight Linear approximation approach for robustness verification of Convolutional Neural Networks.
We present new linear constraints for S-shaped activation functions that are tighter than those of both existing Neuron-wise Tightest and Network-wise Tightest tools.
We evaluate it with 48 different CNNs trained on MNIST, CIFAR-10, and Tiny ImageNet datasets.
- Score: 5.678314425261842
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The robustness of neural network classifiers is becoming important in the safety-critical domain and can be quantified by robustness verification. However, at present, efficient and scalable verification techniques are sound but incomplete, so the tightness of certified robustness bounds is the key criterion for comparing robustness verification approaches. In this paper, we present Ti-Lin, a Tight Linear approximation approach for robustness verification of Convolutional Neural Networks (CNNs). For general CNNs, we first provide new linear constraints for S-shaped activation functions that are tighter than those of both existing Neuron-wise Tightest and Network-wise Tightest tools. We then propose Neuron-wise Tightest linear bounds for the Maxpool function. We implement the resulting verification method, Ti-Lin, and evaluate it on 48 different CNNs trained on the MNIST, CIFAR-10, and Tiny ImageNet datasets. Experimental results show that Ti-Lin significantly outperforms five other state-of-the-art methods (CNN-Cert, DeepPoly, DeepCert, VeriNet, Newise). Concretely, Ti-Lin certifies much more precise robustness bounds on pure CNNs with Sigmoid/Tanh/Arctan activations and on CNNs with Maxpool, with improvements of up to 63.70% and 253.54%, respectively.
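As intuition for the kind of linear approximation described above, here is a minimal sketch that bounds the sigmoid on a pre-activation interval [l, u] by two parallel lines: the chord slope, shifted up and down by the extreme deviation (found analytically from sigmoid'(x) = k). This is the generic chord-plus-shift construction only, not Ti-Lin's provably tighter constraints; the helper name `sigmoid_linear_bounds` is illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_linear_bounds(l, u):
    """Sound parallel linear bounds for sigmoid on [l, u].

    Returns (k, b_lo, b_up) such that
        k*x + b_lo <= sigmoid(x) <= k*x + b_up  for all x in [l, u].
    """
    assert l < u
    k = (sigmoid(u) - sigmoid(l)) / (u - l)   # chord slope, always < 1/4
    # Stationary points of sigmoid(x) - k*x solve s*(1-s) = k for s = sigmoid(x).
    cands = [l, u]
    disc = 1.0 - 4.0 * k
    if disc > 0:
        for s in ((1 + np.sqrt(disc)) / 2, (1 - np.sqrt(disc)) / 2):
            x = np.log(s / (1 - s))           # logit: inverse sigmoid
            if l < x < u:
                cands.append(x)
    dev = [sigmoid(x) - (sigmoid(l) + k * (x - l)) for x in cands]
    b = sigmoid(l) - k * l                    # chord intercept
    return k, b + min(dev), b + max(dev)

k, b_lo, b_up = sigmoid_linear_bounds(-2.0, 3.0)
xs = np.linspace(-2.0, 3.0, 1001)
assert np.all(k * xs + b_lo <= sigmoid(xs) + 1e-12)
assert np.all(sigmoid(xs) <= k * xs + b_up + 1e-12)
```

The same recipe carries over to other S-shaped functions such as Tanh and Arctan, with the stationary-point equation solved for that function's derivative instead.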
Related papers
- Computable Lipschitz Bounds for Deep Neural Networks [0.0]
We analyse three existing upper bounds written for the $l_2$ norm.
We propose two novel bounds for both feed-forward fully-connected neural networks and convolutional neural networks.
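For context, the classic computable upper bound on the $l_2$ Lipschitz constant of a feed-forward network with 1-Lipschitz activations is the product of the layers' spectral norms. The sketch below implements that well-known baseline (not the paper's two novel, tighter bounds):

```python
import numpy as np

def naive_l2_lipschitz(weights):
    """Upper bound on the l2 Lipschitz constant of a feed-forward network
    with 1-Lipschitz activations (e.g. ReLU): the product of the spectral
    norms of the weight matrices. Known to be loose in practice, which is
    what motivates tighter computable bounds.
    """
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=2)   # largest singular value
    return bound

rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 32)), rng.standard_normal((32, 10))]
print(naive_l2_lipschitz(layers))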
arXiv Detail & Related papers (2024-10-28T14:09:46Z)
- Towards General Robustness Verification of MaxPool-based Convolutional Neural Networks via Tightening Linear Approximation [51.235583545740674]
MaxLin is a robustness verifier for MaxPool-based CNNs with tight linear approximation.
We evaluate MaxLin with open-sourced benchmarks, including LeNet and networks trained on the MNIST, CIFAR-10, and Tiny ImageNet datasets.
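As a hedged illustration of what a maxpool relaxation looks like, the sketch below gives simple sound linear bounds for y = max(x_1, ..., x_k) over a box of input intervals: pick the dominant coordinate as a lower plane and a constant upper plane. Replacing that loose upper plane with provably tighter ones is exactly what neuron-wise-tightest relaxations like MaxLin's (and Ti-Lin's) improve on; the helper name is illustrative.

```python
import numpy as np

def maxpool_linear_bounds(l, u):
    """Simple sound linear bounds for y = max(x_1, ..., x_k) when each
    x_i lies in [l_i, u_i].

    Lower bound: y >= x_j for the input j with the largest lower bound
    (the max is always >= any single coordinate).
    Upper bound: the constant max_i u_i.
    Returns (a_lo, b_lo, a_up, b_up) with
        a_lo @ x + b_lo <= max(x) <= a_up @ x + b_up.
    """
    l, u = np.asarray(l, float), np.asarray(u, float)
    a_lo = np.zeros(l.size)
    a_lo[np.argmax(l)] = 1.0          # y >= x_j
    a_up = np.zeros(l.size)           # constant upper plane
    return a_lo, 0.0, a_up, float(u.max())

a_lo, b_lo, a_up, b_up = maxpool_linear_bounds([-1.0, 0.5], [2.0, 1.0])
x = np.array([1.3, 0.9])
assert a_lo @ x + b_lo <= max(x) <= a_up @ x + b_up
```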
arXiv Detail & Related papers (2024-06-02T10:33:04Z)
- Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness [172.61581010141978]
Certifiable robustness is a desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios.
We propose a novel solution that strategically manipulates neurons by "grafting" appropriate levels of linearity.
arXiv Detail & Related papers (2022-06-15T22:42:29Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted by up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
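For readers unfamiliar with the baseline being compared against, interval bound propagation pushes an input box through the network layer by layer using interval arithmetic. A minimal single-layer sketch (standard interval arithmetic, not the paper's INN-specific reachability analysis):

```python
import numpy as np

def ibp_affine_relu(W, b, l, u):
    """One step of interval bound propagation: push the box [l, u]
    through x -> relu(W @ x + b) using interval arithmetic.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    # Lower output pairs positive weights with l, negative weights with u.
    out_l = W_pos @ l + W_neg @ u + b
    out_u = W_pos @ u + W_neg @ l + b
    # ReLU is monotone, so it can be applied to both endpoints.
    return np.maximum(out_l, 0.0), np.maximum(out_u, 0.0)

rng = np.random.default_rng(1)
W, b = rng.standard_normal((4, 3)), rng.standard_normal(4)
l, u = np.array([-0.1, 0.0, 0.2]), np.array([0.1, 0.3, 0.4])
lo, hi = ibp_affine_relu(W, b, l, u)
assert np.all(lo <= hi)
```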
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on the MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z)
- Second-Order Provable Defenses against Adversarial Attacks [63.34032156196848]
We show that if the eigenvalues of the Hessian of the network are bounded, we can compute a certificate in the $l_2$ norm efficiently using convex optimization.
We achieve certified accuracies of 5.78%, 44.96%, and 43.19%, outperforming IBP-based methods.
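The generic second-order argument behind such certificates: if f(x) > 0 is the classification margin and the magnitudes of the Hessian's eigenvalues are globally bounded by M, a closed-form $l_2$ radius follows from a quadratic lower bound on f. A sketch under these assumptions (the paper's actual certificate is obtained via convex optimization and may be tighter):

```python
import numpy as np

def curvature_certificate(f_x, grad_norm, M):
    """Radius r such that f stays positive for all ||delta||_2 < r, given
    f(x) = f_x > 0, ||grad f(x)||_2 = grad_norm, and a curvature bound M
    (|lambda_max(Hessian)| <= M everywhere). From the second-order bound
        f(x + d) >= f_x - grad_norm * ||d|| - (M / 2) * ||d||**2,
    r is the positive root of (M/2) * r**2 + grad_norm * r - f_x = 0.
    """
    assert f_x > 0 and M >= 0
    if M == 0:
        return f_x / grad_norm   # first-order (Lipschitz) certificate
    return (-grad_norm + np.sqrt(grad_norm**2 + 2.0 * M * f_x)) / M

print(curvature_certificate(f_x=1.0, grad_norm=2.0, M=0.5))
```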
arXiv Detail & Related papers (2020-06-01T05:55:18Z)