Lipschitz-Based Robustness Certification for Recurrent Neural Networks via Convex Relaxation
- URL: http://arxiv.org/abs/2509.17898v1
- Date: Mon, 22 Sep 2025 15:26:46 GMT
- Title: Lipschitz-Based Robustness Certification for Recurrent Neural Networks via Convex Relaxation
- Authors: Paul Hamelbeck, Johannes Schiffer
- Abstract summary: We present RNN-SDP, a relaxation-based method that models the RNN's layer interactions as a convex problem. We also explore an extension that incorporates known input constraints to further tighten the resulting Lipschitz bounds.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Robustness certification against bounded input noise or adversarial perturbations is increasingly important for deploying recurrent neural networks (RNNs) in safety-critical control applications. To address this challenge, we present RNN-SDP, a relaxation-based method that models the RNN's layer interactions as a convex problem and computes a certified upper bound on the Lipschitz constant via semidefinite programming (SDP). We also explore an extension that incorporates known input constraints to further tighten the resulting Lipschitz bounds. RNN-SDP is evaluated on a synthetic multi-tank system, with upper bounds compared to empirical estimates. While incorporating input constraints yields only modest improvements, the general method produces reasonably tight and certifiable bounds, even as sequence length increases. The results also underscore the often underestimated impact of initialization errors, an important consideration for applications where models are frequently re-initialized, such as model predictive control (MPC).
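The abstract contrasts certified upper bounds with empirical estimates. As a point of reference (this is not the paper's RNN-SDP method), a minimal sketch of the naive alternative: unroll a vanilla tanh RNN and multiply per-step operator norms to get a crude certified upper bound on the input-sequence-to-final-state Lipschitz constant, then compare it with an empirical finite-difference estimate. All sizes and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_h, n_u, T = 8, 3, 10          # hidden size, input size, sequence length (illustrative)
W_h = rng.standard_normal((n_h, n_h)) / np.sqrt(n_h)
W_u = rng.standard_normal((n_h, n_u)) / np.sqrt(n_u)

# tanh is 1-Lipschitz, so each step h_{t+1} = tanh(W_h h_t + W_u u_t) is
# Lipschitz with constant ||W_h||_2 in h and ||W_u||_2 in u_t.
s_h = np.linalg.norm(W_h, 2)    # spectral norm
s_u = np.linalg.norm(W_u, 2)

# A perturbation of the input at step t reaches h_T attenuated (or amplified)
# by at most ||W_h||_2^(T-1-t); summing over steps gives a (loose) certified
# bound w.r.t. the Frobenius norm of the input-sequence perturbation.
L_naive = s_u * sum(s_h ** (T - 1 - t) for t in range(T))

def rnn_final(u_seq, h0=np.zeros(n_h)):
    h = h0
    for u in u_seq:
        h = np.tanh(W_h @ h + W_u @ u)
    return h

# Empirical lower estimate: best perturbation ratio over random directions.
U = rng.standard_normal((T, n_u))
L_emp = 0.0
for _ in range(200):
    dU = 1e-4 * rng.standard_normal((T, n_u))
    L_emp = max(L_emp, np.linalg.norm(rnn_final(U + dU) - rnn_final(U))
                / np.linalg.norm(dU))

print(f"naive certified bound: {L_naive:.3f}, empirical estimate: {L_emp:.3f}")
```

The gap between the two numbers illustrates why tighter relaxations such as SDP-based certification are of interest, especially as the sequence length T grows.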
Related papers
- Scalable Verification of Neural Control Barrier Functions Using Linear Bound Propagation [50.53301323864253]
Control barrier functions (CBFs) are a popular tool for safety certification of nonlinear dynamical control systems.
We present a novel framework for verifying neural CBFs based on piecewise linear upper and lower bounds on the conditions required for a neural network to be a CBF.
Our approach scales to larger neural networks than state-of-the-art verification procedures for CBFs.
arXiv Detail & Related papers (2025-11-09T11:51:15Z)
- A Scalable Approach for Safe and Robust Learning via Lipschitz-Constrained Networks [2.8960888722909566]
Global Lipschitz constraints on the training of neural networks (NNs) are proposed.
We show that the proposed formulation of Lipschitz-constrained NNs can be significantly improved.
arXiv Detail & Related papers (2025-06-30T15:42:23Z)
- Policy Verification in Stochastic Dynamical Systems Using Logarithmic Neural Certificates [7.9898826915621965]
We consider the verification of neural network policies for discrete-time systems with respect to reach-avoid specifications.
Existing approaches for such a verification task rely on computed Lipschitz constants of neural networks.
We present two key contributions to obtain smaller Lipschitz constants than existing approaches.
arXiv Detail & Related papers (2024-06-02T18:19:19Z)
- ECLipsE: Efficient Compositional Lipschitz Constant Estimation for Deep Neural Networks [0.8993153817914281]
The Lipschitz constant plays a crucial role in certifying the robustness of neural networks to input perturbations.
Efforts have been made to obtain tight upper bounds on the Lipschitz constant.
We provide a compositional approach to estimate Lipschitz constants for deep feed-forward neural networks.
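As background for the compositional approach summarized above (this sketch is not ECLipsE itself), the simplest compositional Lipschitz estimate for a ReLU feed-forward network is the product of the layers' spectral norms, since ReLU is 1-Lipschitz; compositional methods tighten this by coupling consecutive layers rather than bounding each in isolation. Widths and weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
widths = [4, 16, 16, 2]                         # illustrative layer widths
weights = [rng.standard_normal((m, n)) / np.sqrt(n)
           for n, m in zip(widths[:-1], widths[1:])]

# ReLU is 1-Lipschitz, so the network x -> W3 relu(W2 relu(W1 x)) has
# Lipschitz constant at most the product of the weight spectral norms.
L_product = float(np.prod([np.linalg.norm(W, 2) for W in weights]))
print(f"product-of-spectral-norms Lipschitz bound: {L_product:.3f}")
```

This bound is cheap to compute but typically loose, which is exactly the gap that tighter compositional or SDP-based estimators aim to close.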
arXiv Detail & Related papers (2024-04-05T19:36:26Z)
- Local Lipschitz Constant Computation of ReLU-FNNs: Upper Bound Computation with Exactness Verification [3.021446475031579]
This paper is concerned with the computation of the local Lipschitz constant of feedforward neural networks (FNNs) whose activation functions are rectified linear units (ReLUs).
By following a standard procedure using multipliers that capture the behavior of ReLUs, we first reduce the upper-bound computation for the local Lipschitz constant to a semidefinite programming problem (SDP).
We propose a method to construct a reduced order model whose input-output property is identical to the original FNN over a neighborhood of the target input.
arXiv Detail & Related papers (2023-10-17T09:37:16Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Lipschitz Continuity Retained Binary Neural Network [52.17734681659175]
We introduce Lipschitz continuity as a rigorous criterion to define the model robustness of BNNs.
We then propose to retain the Lipschitz continuity as a regularization term to improve the model robustness.
Our experiments prove that our BNN-specific regularization method can effectively strengthen the robustness of BNN.
arXiv Detail & Related papers (2022-07-13T22:55:04Z) - Chordal Sparsity for Lipschitz Constant Estimation of Deep Neural
Networks [77.82638674792292]
Lipschitz constants of neural networks allow for guarantees of robustness in image classification, safety in controller design, and generalizability beyond the training data.
As calculating Lipschitz constants is NP-hard, techniques for estimating Lipschitz constants must navigate the trade-off between scalability and accuracy.
In this work, we significantly push the scalability frontier of a semidefinite programming technique known as LipSDP while achieving zero accuracy loss.
arXiv Detail & Related papers (2022-04-02T11:57:52Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
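The interval-bound-propagation baseline mentioned above can be sketched in a few lines for a plain two-layer ReLU network (the INN case involves solving implicit equations and is not shown here; all names and sizes are illustrative assumptions):

```python
import numpy as np

def ibp_linear(lo, hi, W, b):
    """Propagate the elementwise box [lo, hi] through x -> W x + b."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius          # worst-case spread per output coordinate
    return c - r, c + r

rng = np.random.default_rng(2)
W1, b1 = rng.standard_normal((8, 3)), np.zeros(8)
W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)

x, eps = rng.standard_normal(3), 0.1
lo, hi = x - eps, x + eps                       # input box
lo, hi = ibp_linear(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)   # ReLU is monotone
lo, hi = ibp_linear(lo, hi, W2, b2)
print("certified output box:", lo, hi)
```

Every network output for inputs in the box is guaranteed to lie in `[lo, hi]`; reachability-based methods like the one summarized above aim to produce tighter boxes than this simple propagation.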
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and a significant reduction in memory consumption.
However, they can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z)
- CLIP: Cheap Lipschitz Training of Neural Networks [0.0]
We investigate a variational regularization method named CLIP for controlling the Lipschitz constant of a neural network.
We mathematically analyze the proposed model, in particular discussing the impact of the chosen regularization parameter on the output of the network.
arXiv Detail & Related papers (2021-03-23T13:29:24Z)
- PEREGRiNN: Penalized-Relaxation Greedy Neural Network Verifier [1.1011268090482575]
We introduce a new approach to formally verify the most commonly considered safety specifications for ReLU NNs.
We use a convex solver not only as a linear feasibility checker, but also as a means of penalizing the amount of relaxation allowed in solutions.
arXiv Detail & Related papers (2020-06-18T21:33:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.