Local Lipschitz Constant Computation of ReLU-FNNs: Upper Bound Computation with Exactness Verification
- URL: http://arxiv.org/abs/2310.11104v2
- Date: Sun, 7 Apr 2024 20:05:43 GMT
- Title: Local Lipschitz Constant Computation of ReLU-FNNs: Upper Bound Computation with Exactness Verification
- Authors: Yoshio Ebihara, Xin Dai, Victor Magron, Dimitri Peaucelle, Sophie Tarbouriech
- Abstract summary: This paper is concerned with the computation of the local Lipschitz constant of feedforward neural networks (FNNs) with activation functions being rectified linear units (ReLUs).
By following a standard procedure using multipliers that capture the behavior of ReLUs, we first reduce the upper bound computation problem for the local Lipschitz constant to a semidefinite programming problem (SDP).
We propose a method to construct a reduced-order model whose input-output property is identical to that of the original FNN over a neighborhood of the target input.
- Score: 3.021446475031579
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is concerned with the computation of the local Lipschitz constant of feedforward neural networks (FNNs) with activation functions being rectified linear units (ReLUs). The local Lipschitz constant of an FNN for a target input is a reasonable measure for the quantitative evaluation of its reliability. By following a standard procedure using multipliers that capture the behavior of ReLUs, we first reduce the upper bound computation problem for the local Lipschitz constant to a semidefinite programming problem (SDP). Here we newly introduce copositive multipliers to capture the ReLU behavior accurately. Then, by considering the dual of the SDP for the upper bound computation, we derive a viable test to conclude the exactness of the computed upper bound. However, these SDPs are intractable for practical FNNs with hundreds of ReLUs. To address this issue, we further propose a method to construct a reduced-order model whose input-output property is identical to that of the original FNN over a neighborhood of the target input. We finally illustrate the effectiveness of the model reduction and exactness verification methods with numerical examples of practical FNNs.
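To ground the SDP step, here is a minimal LipSDP-style sketch in cvxpy for a hypothetical one-hidden-layer ReLU network, using diagonal multipliers and a global bound only; the paper's copositive multipliers, locality restriction, and dual exactness test are not reproduced, and all dimensions and weights below are illustrative assumptions.

```python
# Minimal LipSDP-style sketch (diagonal multipliers, global bound) for a
# one-hidden-layer ReLU network f(x) = W1 @ relu(W0 @ x + b0) + b1.
# The paper above goes further: copositive multipliers and locality
# constraints tighten the bound around a target input, and a dual test
# certifies exactness; none of that is reproduced here.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 16, 2            # illustrative sizes
W0 = rng.standard_normal((n_hid, n_in)) / np.sqrt(n_in)
W1 = rng.standard_normal((n_out, n_hid)) / np.sqrt(n_hid)

rho = cp.Variable(nonneg=True)           # rho = L^2
lam = cp.Variable(n_hid, nonneg=True)    # diagonal ReLU multiplier
T = cp.diag(lam)

# ReLU is slope-restricted in [0, 1]; the S-procedure yields the LMI
#   [[-rho*I, W0^T T], [T W0, -2T + W1^T W1]]  negative semidefinite.
M = cp.bmat([
    [-rho * np.eye(n_in), W0.T @ T],
    [T @ W0, -2 * T + W1.T @ W1],
])
prob = cp.Problem(cp.Minimize(rho), [M << 0])
prob.solve(solver=cp.SCS)
print("Lipschitz upper bound:", float(np.sqrt(rho.value)))
```

Minimizing rho subject to the LMI yields L = sqrt(rho) as a certified upper bound; enlarging the multiplier class, as the paper does with copositive multipliers, can only tighten it.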
Related papers
- ECLipsE-Gen-Local: Efficient Compositional Local Lipschitz Estimates for Deep Neural Networks [4.752559512511423]
Lipschitz constant is a key measure for certifying the robustness of neural networks to input perturbations.
Standard approaches to estimating the Lipschitz constant involve solving a large semidefinite program (SDP) that scales poorly with network size.
We propose a compositional framework that yields tight yet scalable Lipschitz estimates for deep feedforward neural networks.
arXiv Detail & Related papers (2025-10-06T18:26:46Z)
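As a point of reference for compositional estimates like the entry above, the sketch below computes the naive layer-by-layer baseline, the product of layer spectral norms; it is cheap and scalable but typically loose, which is the gap such methods close. It is not the ECLipsE-Gen-Local algorithm, and the network sizes are made up for illustration.

```python
# Naive compositional Lipschitz bound: the product of layer spectral norms.
# This upper-bounds the Lipschitz constant because ReLU is 1-Lipschitz, but
# it is usually loose; compositional methods tighten it layer by layer.
import numpy as np

def naive_compositional_bound(weights):
    """Upper-bound the l2 Lipschitz constant of x -> W_L relu(... relu(W_1 x))."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)   # spectral norm of each layer
    return bound

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((16, 4)),
      rng.standard_normal((16, 16)),
      rng.standard_normal((2, 16))]
print(naive_compositional_bound(Ws))
```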
- Lipschitz-Based Robustness Certification for Recurrent Neural Networks via Convex Relaxation [0.0]
We present RNN-SDP, a relaxation-based method that models the RNN's layer interactions as a convex problem.
We also explore an extension that incorporates known input constraints to further tighten the resulting Lipschitz bounds.
arXiv Detail & Related papers (2025-09-22T15:26:46Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Lipschitz constant estimation for 1D convolutional neural networks [0.0]
We propose a dissipativity-based method for Lipschitz constant estimation of 1D convolutional neural networks (CNNs).
In particular, we analyze the dissipativity properties of convolutional, pooling, and fully connected layers.
arXiv Detail & Related papers (2022-11-28T12:09:06Z)
- Lipschitz Continuity Retained Binary Neural Network [52.17734681659175]
We introduce Lipschitz continuity as a rigorous criterion for defining the robustness of binary neural networks (BNNs).
We then propose retaining Lipschitz continuity as a regularization term to improve model robustness.
Our experiments show that this BNN-specific regularization method effectively strengthens the robustness of BNNs.
arXiv Detail & Related papers (2022-07-13T22:55:04Z)
- Sparsest Univariate Learning Models Under Lipschitz Constraint [31.28451181040038]
We propose continuous-domain formulations for one-dimensional regression problems.
We control the Lipschitz constant explicitly using a user-defined upper bound.
We show that both problems admit global minimizers that are continuous and piecewise-linear.
arXiv Detail & Related papers (2021-12-27T07:03:43Z)
- Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on the MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z)
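The link between Lipschitz bounds and certification exploited by training methods like the one above can be stated generically: if the logit map is L-Lipschitz in l2 and the prediction margin at x is m, then each pairwise margin f_y - f_j is at most 2L-Lipschitz, so no perturbation smaller than m/(2L) can flip the prediction. The sketch below uses this generic bound, not the paper's tighter local-Lipschitz procedure; the numbers are illustrative.

```python
# Generic Lipschitz certification: perturbations with l2 norm below
# margin / (2 * L) cannot change the predicted class, because each pairwise
# logit margin changes by at most 2 * L * ||delta||.
import numpy as np

def certified_radius(logits, label, lipschitz_bound):
    margin = logits[label] - np.max(np.delete(logits, label))
    return max(margin, 0.0) / (2.0 * lipschitz_bound)

print(certified_radius(np.array([3.2, 1.0, 0.5]), label=0, lipschitz_bound=1.5))
# margin 2.2, bound 1.5 -> certified radius 2.2 / 3.0 ~ 0.733
```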
- Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks offer improved accuracy and significant reductions in memory consumption.
They can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z)
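A minimal illustration of the contraction viewpoint behind well-posed implicit networks, under the simplifying assumptions of the l-infinity norm and a ReLU activation: since ReLU is 1-Lipschitz, ||W||_inf < 1 makes z -> relu(Wz + Ux + b) a contraction, so the implicit equation has a unique solution reachable by fixed-point iteration. This is a sketch of the principle, not the paper's full non-Euclidean framework, and all sizes are made up.

```python
# Well-posedness of an implicit layer z = relu(W z + U x + b): if
# ||W||_inf < 1, the update map is a contraction in the l-infinity norm,
# so Picard iteration converges to the unique fixed point.
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
W = rng.standard_normal((n, n))
W *= 0.9 / np.abs(W).sum(axis=1).max()   # rescale so ||W||_inf = 0.9 < 1
U = rng.standard_normal((n, m))
b = rng.standard_normal(n)
x = rng.standard_normal(m)

z = np.zeros(n)
for _ in range(200):                      # Picard (fixed-point) iteration
    z_next = np.maximum(W @ z + U @ x + b, 0.0)
    if np.max(np.abs(z_next - z)) < 1e-10:
        break
    z = z_next
print("fixed-point residual:",
      np.max(np.abs(np.maximum(W @ z + U @ x + b, 0.0) - z)))
```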
- On Lipschitz Regularization of Convolutional Layers using Toeplitz Matrix Theory [77.18089185140767]
Lipschitz regularity is established as a key property of modern deep learning.
Yet computing the exact value of the Lipschitz constant of a neural network is known to be NP-hard.
We introduce a new upper bound for convolutional layers that is both tight and easy to compute.
arXiv Detail & Related papers (2020-06-15T13:23:34Z)
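One classical fact underlying spectral analyses of convolutional layers like the entry above: a single-channel circular convolution is diagonalized by the DFT, so its exact spectral norm (hence its l2 Lipschitz constant) is the largest modulus of the kernel's DFT. The sketch below verifies this against the explicit circulant matrix; the paper's Toeplitz-theory bound addresses the more general non-circular, multi-channel setting.

```python
# Spectral norm of a single-channel circular convolution via the FFT:
# the circulant operator's eigenvalues are the DFT of its first column,
# and a circulant matrix is normal, so ||C||_2 = max_k |DFT(kernel)(k)|.
import numpy as np

n = 32
kernel = np.zeros(n)
kernel[:3] = [0.5, 1.0, -0.25]           # length-3 filter, zero-padded to n

fft_bound = np.abs(np.fft.fft(kernel)).max()

# Cross-check against the explicit circulant matrix C[i, j] = k[(i-j) mod n].
C = np.column_stack([np.roll(kernel, j) for j in range(n)])
exact = np.linalg.norm(C, 2)
print(fft_bound, exact)                  # the two values agree
```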
- Training robust neural networks using Lipschitz bounds [0.0]
Neural networks (NNs) are still rarely used in safety-critical applications.
One measure of robustness to adversarial perturbations is the Lipschitz constant of the input-output map defined by an NN.
We propose a framework to train multi-layer NNs while at the same time encouraging robustness by keeping their Lipschitz constant small.
arXiv Detail & Related papers (2020-05-06T16:07:46Z)
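A common, simpler surrogate for the kind of Lipschitz-aware training described above is to penalize the product of layer spectral norms, which upper-bounds the network's Lipschitz constant. The PyTorch sketch below (with made-up sizes and penalty weight) illustrates that surrogate, not the paper's LMI-based framework.

```python
# Lipschitz-aware training via a simple surrogate penalty: the product of
# layer spectral norms upper-bounds the network's Lipschitz constant
# (ReLU is 1-Lipschitz), so shrinking it encourages robustness.
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))

def lipschitz_surrogate(model):
    bound = torch.ones(())
    for layer in model:
        if isinstance(layer, nn.Linear):
            # largest singular value of the weight (differentiable)
            bound = bound * torch.linalg.matrix_norm(layer.weight, ord=2)
    return bound

x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
loss = F.cross_entropy(net(x), y) + 1e-2 * lipschitz_surrogate(net)
loss.backward()  # gradients trade accuracy off against the Lipschitz bound
```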
- Exactly Computing the Local Lipschitz Constant of ReLU Networks [98.43114280459271]
The local Lipschitz constant of a neural network is a useful metric for robustness, generalization, and fairness evaluation.
We show strong inapproximability results for estimating Lipschitz constants of ReLU networks, yet give an algorithm that computes the local Lipschitz constant exactly.
We leverage this algorithm to evaluate the tightness of competing Lipschitz estimators and the effects of regularized training on the Lipschitz constant.
arXiv Detail & Related papers (2020-03-02T22:15:54Z)
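The observation that makes exact local computation possible is that a ReLU network is piecewise affine: on the open region where the activation pattern of a point x is fixed, the Jacobian is a constant matrix, and its spectral norm is the exact l2 local Lipschitz constant on that region. The sketch below handles only the single region containing x (the paper's algorithm additionally searches over every pattern meeting the neighborhood); sizes are illustrative.

```python
# Exact local Lipschitz constant of a ReLU network on one linear region:
# fix the activation pattern at x, multiply out the Jacobian
# J = W_L D_{L-1} W_{L-1} ... D_1 W_1, and take its spectral norm.
import numpy as np

def local_lipschitz_on_region(x, weights, biases):
    J = np.eye(x.size)
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        pre = W @ a + b
        D = np.diag((pre > 0).astype(float))  # activation pattern at x
        J = D @ W @ J
        a = np.maximum(pre, 0.0)
    J = weights[-1] @ J                       # final affine layer
    return np.linalg.norm(J, 2)

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((16, 4)), rng.standard_normal((2, 16))]
bs = [rng.standard_normal(16), rng.standard_normal(2)]
print(local_lipschitz_on_region(rng.standard_normal(4), Ws, bs))
```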