Delay Independent Safe Control with Neural Networks: Positive Lur'e Certificates for Risk Aware Autonomy
- URL: http://arxiv.org/abs/2510.06661v1
- Date: Wed, 08 Oct 2025 05:22:28 GMT
- Title: Delay Independent Safe Control with Neural Networks: Positive Lur'e Certificates for Risk Aware Autonomy
- Authors: Hamidreza Montazeri Hedesh, Milad Siami
- Abstract summary: We present a risk-aware safety certification method for autonomous, learning-enabled control systems. We model the neural network (NN) controller with local sector bounds and exploit positivity structure to derive linear, delay-independent certificates.
- Score: 0.5729426778193398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a risk-aware safety certification method for autonomous, learning-enabled control systems. Focusing on two realistic risks, state/input delays and interval matrix uncertainty, we model the neural network (NN) controller with local sector bounds and exploit positivity structure to derive linear, delay-independent certificates that guarantee local exponential stability across admissible uncertainties. To benchmark performance, we adopt and implement a state-of-the-art IQC NN verification pipeline. On representative cases, our positivity-based tests run orders of magnitude faster than SDP-based IQC while certifying regimes the latter cannot, providing scalable safety guarantees that complement risk-aware control.
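The linear certificates themselves are not reproduced in this summary, but for positive linear systems with delay they typically reduce to a small linear program. Below is a minimal sketch, assuming the classical delay-independent test (A Metzler, A_d entrywise nonnegative, and some v > 0 with (A + A_d)^T v < 0); the system matrices are illustrative, and the exact conditions in the paper may differ.

```python
import numpy as np
from scipy.optimize import linprog

def delay_independent_certificate(A, Ad, eps=1e-6):
    """Search for a linear copositive certificate v > 0 with (A + Ad)^T v < 0.

    For a positive delay system x'(t) = A x(t) + Ad x(t - tau) with A Metzler
    and Ad >= 0 entrywise, such a v certifies asymptotic stability for every
    constant delay tau >= 0 (a standard delay-independent test; an illustrative
    stand-in, not necessarily the paper's exact conditions).
    """
    n = A.shape[0]
    # Structural positivity checks: A Metzler (off-diagonals >= 0), Ad nonnegative.
    off_diag = A - np.diag(np.diag(A))
    if (off_diag < 0).any() or (Ad < 0).any():
        return None
    # LP: find v with v >= 1 and (A + Ad)^T v <= -eps (strict feasibility margin).
    res = linprog(c=np.ones(n),
                  A_ub=(A + Ad).T,
                  b_ub=-eps * np.ones(n),
                  bounds=[(1.0, None)] * n,
                  method="highs")
    return res.x if res.success else None

# Toy example: a stable positive system with a nonnegative delayed term.
A = np.array([[-3.0, 0.5], [0.2, -2.0]])
Ad = np.array([[0.5, 0.1], [0.1, 0.5]])
print(delay_independent_certificate(A, Ad))  # a positive vector -> certified
```

Because a test of this form is a linear program rather than a semidefinite program, its cost grows mildly with state dimension, which is consistent with the speedups over SDP-based IQC that the abstract reports.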
Related papers
- A Safety-Constrained Reinforcement Learning Framework for Reliable Wireless Autonomy [1.5469452301122173]
We propose a proactive safety-constrained RL framework that integrates proof-carrying control with empowerment-budgeted (EB) enforcement.
Our method achieves provable safety guarantees with minimal performance degradation.
Results highlight the potential of proactive safety-constrained RL to enable trustworthy wireless autonomy in future 6G networks.
arXiv Detail & Related papers (2026-01-12T02:02:52Z) - Robustness Certificates for Neural Networks against Adversarial Attacks [9.365069861121944]
This paper introduces a principled formal robustness certification framework that models gradient-based training as a discrete-time dynamical system.
Our framework also extends to certification against test-time attacks, making it the first unified framework to provide formal guarantees in both training and test-time attack settings.
arXiv Detail & Related papers (2025-12-24T00:49:47Z) - Reliable LLM-Based Edge-Cloud-Expert Cascades for Telecom Knowledge Systems [54.916243942641444]
Large language models (LLMs) are emerging as key enablers of automation in domains such as telecommunications.
We study an edge-cloud-expert cascaded LLM-based knowledge system that supports decision-making through a question-and-answer pipeline.
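As a rough illustration of how such a cascade can route queries, here is a hypothetical confidence-gated sketch; the function names, thresholds, and confidence interface are invented for illustration and are not the paper's calibration scheme.

```python
def cascade_answer(question, edge_model, cloud_model, tau_edge=0.9, tau_cloud=0.8):
    """Confidence-gated edge -> cloud -> human-expert cascade (illustrative).

    Each model returns (answer, confidence in [0, 1]); the tau_* thresholds are
    hypothetical tuning knobs. The cheap edge model is queried first, and the
    query escalates only when confidence is too low.
    """
    answer, conf = edge_model(question)
    if conf >= tau_edge:
        return answer, "edge"
    answer, conf = cloud_model(question)
    if conf >= tau_cloud:
        return answer, "cloud"
    return None, "expert"  # defer to a human expert as the last resort

# Toy stand-ins for the two LLMs:
edge = lambda q: ("stub answer", 0.55)
cloud = lambda q: ("better answer", 0.92)
print(cascade_answer("example question", edge, cloud))  # -> answered at "cloud"
```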
arXiv Detail & Related papers (2025-12-23T03:10:09Z) - Distributionally Robust Safety Verification of Neural Networks via Worst-Case CVaR [3.0458514384586404]
This paper builds on Fazlyab's quadratic-constraint (QC) and semidefinite-programming (SDP) framework for neural network verification.
The integration broadens the input-uncertainty geometry (covering ellipsoids, polytopes, and hyperplanes) and extends applicability to safety-critical domains.
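Computing the worst-case CVaR over an uncertainty set is what the SDP machinery handles; the CVaR functional itself, however, has a simple sample-based form. A minimal sketch using the Rockafellar-Uryasev representation, on synthetic losses:

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical CVaR_alpha via the Rockafellar-Uryasev formula:
    CVaR_alpha(L) = min_t  t + E[(L - t)_+] / (1 - alpha).
    The minimizing t is the alpha-quantile (VaR), so it can be plugged in directly.
    """
    var = np.quantile(losses, alpha)  # Value-at-Risk
    return var + np.mean(np.maximum(losses - var, 0.0)) / (1.0 - alpha)

# Synthetic losses standing in for safety margins of a verified NN under noise.
rng = np.random.default_rng(0)
losses = rng.normal(loc=0.0, scale=1.0, size=100_000)
print(cvar(losses, alpha=0.95))  # ~2.06 for a standard normal
```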
arXiv Detail & Related papers (2025-09-22T07:04:53Z) - Risk-Averse Certification of Bayesian Neural Networks [70.44969603471903]
We propose a Risk-Averse Certification framework for Bayesian neural networks called RAC-BNN.
Our method leverages sampling and optimisation to compute a sound approximation of the output set of a BNN.
We validate RAC-BNN on a range of regression and classification benchmarks and compare its performance with a state-of-the-art method.
arXiv Detail & Related papers (2024-11-29T14:22:51Z) - Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
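As a flavor of the fast empirical falsification step, a minimal sampling-based falsifier might look like the sketch below; this is an illustrative stand-in, not the paper's procedure, and passing it is evidence rather than proof (the paper pairs it with certification).

```python
import numpy as np

def falsify_lyapunov(f, V, samples, margin=0.0):
    """Empirical falsifier sketch: collect sampled states that violate the
    discrete-time Lyapunov conditions V(x) > 0 (off the origin) and
    V(f(x)) - V(x) < -margin. In CEGIS-style training loops, such
    counterexamples are fed back as training data for V and the controller.
    """
    bad = []
    for x in samples:
        if V(x) <= 0.0 and np.linalg.norm(x) > 0.0:  # positivity fails
            bad.append(x)
        elif V(f(x)) - V(x) >= -margin:              # decrease fails
            bad.append(x)
    return bad

# Toy closed loop x+ = 0.9 x with quadratic V(x) = ||x||^2: no violations.
f = lambda x: 0.9 * x
V = lambda x: float(x @ x)
samples = [np.random.default_rng(i).uniform(-1, 1, size=2) for i in range(100)]
print(len(falsify_lyapunov(f, V, samples)))  # 0
```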
arXiv Detail & Related papers (2024-04-11T17:49:15Z) - Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
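The underlying IBP step is standard and easy to state: propagate an axis-aligned box through each layer in center/radius form. A minimal float-only sketch (QA-IBP additionally models the quantizer, which is omitted here):

```python
import numpy as np

def ibp_affine(W, b, lower, upper):
    """Propagate a box through y = W x + b (standard IBP step).
    Center/radius form: y_c = W c + b, y_r = |W| r, so y in [y_c - y_r, y_c + y_r].
    """
    c = (lower + upper) / 2.0
    r = (upper - lower) / 2.0
    yc = W @ c + b
    yr = np.abs(W) @ r
    return yc - yr, yc + yr

def ibp_relu(lower, upper):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Example: bound a one-layer net on the input box [x - 0.1, x + 0.1] around 0.
W, b = np.array([[1.0, -2.0], [0.5, 1.0]]), np.array([0.1, -0.2])
lo, hi = ibp_affine(W, b, np.array([-0.1, -0.1]), np.array([0.1, 0.1]))
print(ibp_relu(lo, hi))
```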
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Reachability Verification Based Reliability Assessment for Deep Reinforcement Learning Controlled Robotics and Autonomous Systems [17.679681019347065]
Deep Reinforcement Learning (DRL) has achieved impressive performance in robotics and autonomous systems (RAS).
A key challenge to its deployment in real-life operations is the presence of spuriously unsafe DRL policies.
This paper proposes a novel quantitative reliability assessment framework for DRL-controlled RAS.
arXiv Detail & Related papers (2022-10-26T19:25:46Z) - Risk Verification of Stochastic Systems with Neural Network Controllers [0.0]
We present a data-driven framework for verifying the risk of dynamical systems with neural network (NN) controllers.
Given a control system, an NN controller, and a specification equipped with a notion of trace robustness, we collect trajectories from the system.
We compute risk metrics over these robustness values to estimate the risk that the NN controller will not satisfy the specification.
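A minimal version of this pipeline, with a hypothetical "always stay below a threshold" specification and synthetic traces standing in for rollouts of the NN-controlled system:

```python
import numpy as np

def robustness_always_below(trace, c):
    """Trace robustness of the spec 'always x < c': the signed margin
    min_t (c - x_t). Positive means the spec is satisfied with that margin."""
    return float(np.min(c - np.asarray(trace)))

# Monte Carlo risk estimate over synthetic trajectories (illustrative dynamics).
rng = np.random.default_rng(1)
robs = np.array([robustness_always_below(rng.normal(0.0, 0.3, size=50), c=1.0)
                 for _ in range(1000)])
# One simple risk metric: the empirical probability that the spec is violated.
print("P(violation) ~", np.mean(robs < 0.0))
```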
arXiv Detail & Related papers (2022-08-26T20:09:55Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
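For context, the nominal CBF safety filter that such reformulations start from is a pointwise quadratic program with a closed-form solution in the single-input case; a minimal sketch (the paper's model-uncertainty-aware version is omitted):

```python
def cbf_filter(u_nom, Lfh, Lgh, h, alpha=1.0):
    """Standard single-input CBF safety filter (closed-form QP solution):
        min (u - u_nom)^2  s.t.  Lfh + Lgh * u >= -alpha * h.
    Shows only the nominal pointwise-feasibility logic, without the paper's
    model-uncertainty terms.
    """
    if Lfh + Lgh * u_nom + alpha * h >= 0.0:
        return u_nom                      # nominal input is already safe
    if Lgh == 0.0:
        raise ValueError("constraint infeasible at this state (Lgh = 0)")
    return (-alpha * h - Lfh) / Lgh       # project onto the constraint boundary

# 1-D example: x' = u, safe set h(x) = 1 - x^2 >= 0, so Lfh = 0 and Lgh = -2x.
x, u_nom = 0.9, 2.0                       # nominal input pushes toward the boundary
print(cbf_filter(u_nom, Lfh=0.0, Lgh=-2.0 * x, h=1.0 - x**2))  # ~0.106
```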
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - Robust Stability of Neural-Network Controlled Nonlinear Systems with Parametric Variability [2.0199917525888895]
We develop a theory for stability and stabilizability of a class of neural-network controlled nonlinear systems.
For computing such a robust stabilizing NN controller, a stability-guaranteed training (SGT) algorithm is also proposed.
arXiv Detail & Related papers (2021-09-13T05:09:30Z) - Certifiably Adversarially Robust Detection of Out-of-Distribution Data [111.67388500330273]
We aim for certifiable worst-case guarantees for OOD detection by enforcing low confidence at OOD points.
We show that non-trivial confidence bounds for OOD data are possible that generalize beyond the OOD dataset seen at training time.
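A non-certified training-time version of "enforcing low confidence at OOD points" is the outlier-exposure-style penalty sketched below, which pushes OOD predictions toward the uniform distribution; the paper additionally certifies a worst-case variant over perturbations of the OOD inputs, which this sketch does not do.

```python
import torch
import torch.nn.functional as F

def ood_low_confidence_loss(logits_in, labels_in, logits_out, lam=0.5):
    """Training-loss sketch: standard cross-entropy on in-distribution data plus
    a term pushing OOD predictions toward the uniform distribution (maximum
    entropy, i.e. minimum confidence). Not the paper's certified worst-case bound.
    """
    ce_in = F.cross_entropy(logits_in, labels_in)
    log_probs_out = F.log_softmax(logits_out, dim=1)
    # Cross-entropy to the uniform distribution: -(1/K) * sum_k log p_k.
    ce_out = -log_probs_out.mean(dim=1).mean()
    return ce_in + lam * ce_out

# Example with random logits (batch of 8, 10 classes):
logits_in, logits_out = torch.randn(8, 10), torch.randn(8, 10)
labels_in = torch.randint(0, 10, (8,))
print(ood_low_confidence_loss(logits_in, labels_in, logits_out))
```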
arXiv Detail & Related papers (2020-07-16T17:16:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.