Lipschitz Continuity Retained Binary Neural Network
- URL: http://arxiv.org/abs/2207.06540v1
- Date: Wed, 13 Jul 2022 22:55:04 GMT
- Title: Lipschitz Continuity Retained Binary Neural Network
- Authors: Yuzhang Shang, Dan Xu, Bin Duan, Ziliang Zong, Liqiang Nie, Yan Yan
- Abstract summary: We introduce Lipschitz continuity as a rigorous criterion for defining the model robustness of BNNs.
We then propose to retain Lipschitz continuity via a regularization term to improve model robustness.
Our experiments show that our BNN-specific regularization method can effectively strengthen the robustness of BNNs.
- Score: 52.17734681659175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Relying on the premise that the performance of a binary neural network (BNN) can be
largely restored by eliminating the quantization error between full-precision
weight vectors and their corresponding binary vectors, existing works on
network binarization frequently adopt the idea of model robustness to reach this
objective. However, robustness remains an ill-defined concept without solid
theoretical support. In this work, we introduce Lipschitz continuity, a
well-defined functional property, as a rigorous criterion for defining the model
robustness of BNNs. We then propose to retain Lipschitz continuity via a
regularization term to improve model robustness. In particular, while popular
Lipschitz-based regularization methods often collapse on BNNs due to their
extreme sparsity, we design Retention Matrices to approximate the spectral norms
of the targeted weight matrices, which can serve as an approximation of the
Lipschitz constant of a BNN without computing the exact Lipschitz constant
(an NP-hard problem). Our experiments show that our BNN-specific regularization
method effectively strengthens the robustness of BNNs (evaluated on ImageNet-C),
achieving state-of-the-art performance on CIFAR and ImageNet.
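The Retention Matrix construction is specific to the paper, but the general recipe it approximates (penalizing an estimate of each layer's spectral norm rather than the NP-hard exact Lipschitz constant) can be sketched as follows. This is a minimal, hypothetical illustration in PyTorch, not the paper's method; the names `spectral_norm_power_iteration`, `lipschitz_penalty`, and `lambda_lip` are illustrative.

```python
# Minimal sketch, assuming PyTorch; this is NOT the paper's Retention Matrix
# construction, only the generic idea of regularizing an approximate spectral
# norm obtained by power iteration.
import torch
import torch.nn as nn

def spectral_norm_power_iteration(weight, n_iters=3):
    """Estimate the spectral norm ||W||_2 of a weight tensor by power iteration."""
    w = weight.reshape(weight.shape[0], -1)      # flatten conv kernels into a matrix
    v = torch.randn(w.shape[1], device=w.device)
    v = v / (v.norm() + 1e-12)
    for _ in range(n_iters):
        u = w @ v
        u = u / (u.norm() + 1e-12)
        v = w.t() @ u
        v = v / (v.norm() + 1e-12)
    return torch.dot(u, w @ v)                   # ~ largest singular value of W

def lipschitz_penalty(model):
    """Sum of approximate layer-wise spectral norms, used as a regularizer."""
    penalty = 0.0
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            penalty = penalty + spectral_norm_power_iteration(module.weight)
    return penalty

# Hypothetical usage inside a training step (lambda_lip is an illustrative hyper-parameter):
# loss = task_loss + lambda_lip * lipschitz_penalty(model)
```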
Related papers
- Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Training Certifiably Robust Neural Networks with Efficient Local
Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on the MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z) - Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and a significant reduction in memory consumption.
However, they can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z) - CLIP: Cheap Lipschitz Training of Neural Networks [0.0]
We investigate a variational regularization method named CLIP for controlling the Lipschitz constant of a neural network.
We mathematically analyze the proposed model, in particular discussing the impact of the chosen regularization parameter on the output of the network.
arXiv Detail & Related papers (2021-03-23T13:29:24Z) - Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
arXiv Detail & Related papers (2020-06-22T08:44:52Z) - On Lipschitz Regularization of Convolutional Layers using Toeplitz
Matrix Theory [77.18089185140767]
Lipschitz regularity is established as a key property of modern deep learning, yet computing the exact value of the Lipschitz constant of a neural network is known to be NP-hard.
We introduce a new upper bound for convolutional layers that is both tight and easy to compute.
arXiv Detail & Related papers (2020-06-15T13:23:34Z) - Training robust neural networks using Lipschitz bounds [0.0]
Neural networks (NNs) are rarely used in safety-critical applications.
One measure of robustness to adversarial perturbations is the Lipschitz constant of the input-output map defined by an NN.
We propose a framework to train multi-layer NNs while at the same time encouraging robustness by keeping their Lipschitz constant small.
arXiv Detail & Related papers (2020-05-06T16:07:46Z)
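As context for several of the entries above: for a feedforward network with 1-Lipschitz activations, the Lipschitz constant of the input-output map is commonly upper-bounded by the product of the layer-wise spectral norms, which is why spectral-norm penalties serve as tractable surrogates for the NP-hard exact constant. A standard statement of this bound (not taken from any single paper above):

```latex
% Feedforward network f(x) = W_L \,\sigma(W_{L-1} \cdots \sigma(W_1 x)\cdots)
% with a 1-Lipschitz activation \sigma (e.g. ReLU):
\operatorname{Lip}(f)
  \;=\; \sup_{x \neq y} \frac{\lVert f(x) - f(y) \rVert_2}{\lVert x - y \rVert_2}
  \;\le\; \prod_{i=1}^{L} \lVert W_i \rVert_2 .
```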
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.