Distributionally Robust Safety Verification of Neural Networks via Worst-Case CVaR
- URL: http://arxiv.org/abs/2509.17413v1
- Date: Mon, 22 Sep 2025 07:04:53 GMT
- Title: Distributionally Robust Safety Verification of Neural Networks via Worst-Case CVaR
- Authors: Masako Kishida
- Abstract summary: This paper builds on Fazlyab's quadratic-constraint (QC) and semidefinite-programming (SDP) framework for neural network verification. The integration broadens the input-uncertainty geometry (covering ellipsoids, polytopes, and hyperplanes) and extends applicability to safety-critical domains.
- Score: 3.0458514384586404
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ensuring the safety of neural networks under input uncertainty is a fundamental challenge in safety-critical applications. This paper extends Fazlyab's quadratic-constraint (QC) and semidefinite-programming (SDP) framework for neural network verification to a distributionally robust, tail-risk-aware setting by integrating worst-case Conditional Value-at-Risk (WC-CVaR) over a moment-based ambiguity set with fixed mean and covariance. The resulting conditions remain SDP-checkable and explicitly account for tail risk. The integration broadens the input-uncertainty geometry (covering ellipsoids, polytopes, and hyperplanes) and extends applicability to safety-critical domains where tail-event severity matters. Applications to closed-loop reachability of control systems and to classification are demonstrated through numerical experiments, illustrating how the risk level $\varepsilon$ trades conservatism for tolerance to tail events while preserving the computational structure of prior QC/SDP methods for neural network verification and robustness analysis.
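For intuition about the WC-CVaR ingredient, the moment-based worst-case CVaR of an affine loss has a well-known closed form; the paper's SDP conditions generalize this to the nonlinear network map, so the sketch below is an illustrative special case, not the paper's method, and all names and numbers in it are assumptions:

```python
import numpy as np

def wc_cvar_affine(a, b, mu, Sigma, eps):
    """Worst-case CVaR at level eps of the affine loss a^T xi + b over all
    distributions of xi with mean mu and covariance Sigma:
        WC-CVaR = a^T mu + b + sqrt((1 - eps) / eps) * sqrt(a^T Sigma a).
    The half-space safety spec a^T xi + b <= 0 is certified in the
    distributionally robust, tail-risk-aware sense when this is <= 0."""
    return a @ mu + b + np.sqrt((1.0 - eps) / eps) * np.sqrt(a @ Sigma @ a)

# Smaller eps weighs tail events more heavily, i.e., is more conservative.
a, b = np.array([1.0, -0.5]), -2.0
mu, Sigma = np.array([0.2, 0.1]), np.diag([0.05, 0.05])
for eps in (0.5, 0.1, 0.01):
    print(f"eps={eps}: WC-CVaR={wc_cvar_affine(a, b, mu, Sigma, eps):+.3f}")
```

This makes the role of $\varepsilon$ concrete: the same half-space spec can certify as safe at $\varepsilon=0.5$ yet fail at $\varepsilon=0.01$.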
Related papers
- Concave Certificates: Geometric Framework for Distributionally Robust Risk and Complexity Analysis [0.7106986689736828]
Distributionally Robust (DR) optimization aims to certify worst-case risk within a Wasserstein uncertainty set. This paper introduces a novel geometric framework based on the least concave majorants of the growth rate function. We extend this framework to complexity analysis, introducing a deterministic bound that complements standard statistical bounds.
arXiv Detail & Related papers (2026-01-04T00:24:43Z)
- Scalable Verification of Neural Control Barrier Functions Using Linear Bound Propagation [50.53301323864253]
Control barrier functions (CBFs) are a popular tool for safety certification of nonlinear dynamical control systems. We present a novel framework for verifying neural CBFs based on piecewise linear upper and lower bounds on the conditions required for a neural network to be a CBF. Our approach scales to larger neural networks than state-of-the-art verification procedures for CBFs.
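As a toy instance of the bounding idea (the paper's linear relaxations are considerably tighter; the function below is a hypothetical illustration, not its algorithm):

```python
import numpy as np

def interval_bounds_affine_relu(W, b, lo, hi):
    """Push elementwise bounds lo <= x <= hi through relu(W @ x + b).
    The affine part is bounded via center/radius arithmetic; ReLU is
    monotone, so it can be applied to both bounds directly."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    mid = W @ center + b
    rad = np.abs(W) @ radius  # worst-case deviation of the affine map
    return np.maximum(mid - rad, 0.0), np.maximum(mid + rad, 0.0)
```

Verifying a neural CBF then reduces to checking that such output bounds keep the CBF conditions satisfied over each input region.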
arXiv Detail & Related papers (2025-11-09T11:51:15Z)
- Tail-Safe Hedging: Explainable Risk-Sensitive Reinforcement Learning with a White-Box CBF-QP Safety Layer in Arbitrage-Free Markets [4.235667373386689]
Tail-Safe is a deployability-oriented framework for derivatives hedging. The learning component combines an IQN-based distributional critic with a CVaR objective. The safety component enforces discrete-time CBF inequalities together with domain-specific constraints.
arXiv Detail & Related papers (2025-10-06T07:39:45Z)
- Lipschitz-Based Robustness Certification for Recurrent Neural Networks via Convex Relaxation [0.0]
We present RNN-SDP, a relaxation-based method that models the RNN's layer interactions as a convex problem. We also explore an extension that incorporates known input constraints to further tighten the resulting Lipschitz bounds.
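For reference, the loose baseline that SDP relaxations tighten is the product-of-spectral-norms Lipschitz bound, valid whenever the activations are 1-Lipschitz (a generic sketch, not RNN-SDP itself):

```python
import numpy as np

def naive_lipschitz_bound(weight_matrices):
    """Upper-bound the Lipschitz constant of a network (e.g., an unrolled
    RNN) with 1-Lipschitz activations by multiplying the layers' spectral
    norms. Valid but typically very loose, which is why convex-relaxation
    methods such as RNN-SDP are worth the extra computation."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weight_matrices]))
```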
arXiv Detail & Related papers (2025-09-22T15:26:46Z)
- Probabilistic Robustness Analysis in High Dimensional Space: Application to Semantic Segmentation Network [6.587910936799125]
We introduce a probabilistic verification framework that is both architecture-agnostic and scalable to high-dimensional outputs. Our approach combines sampling-based reachability analysis with conformal inference (CI) to deliver provable guarantees. We demonstrate that our method provides reliable safety guarantees while substantially tightening bounds compared to the state of the art.
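A minimal sketch of the split-conformal calibration step that underlies such guarantees (how the paper combines it with reachability analysis is not shown here):

```python
import numpy as np

def split_conformal_threshold(cal_scores, alpha):
    """Given n exchangeable calibration nonconformity scores, the
    ceil((n + 1) * (1 - alpha))-th smallest score upper-bounds a fresh
    test score with probability at least 1 - alpha."""
    n = len(cal_scores)
    k = min(int(np.ceil((n + 1) * (1.0 - alpha))), n)
    return float(np.sort(cal_scores)[k - 1])
```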
arXiv Detail & Related papers (2025-09-15T12:25:25Z)
- Enhancing Uncertainty Quantification for Runtime Safety Assurance Using Causal Risk Analysis and Operational Design Domain [0.0]
We propose an enhancement of traditional uncertainty quantification that explicitly incorporates environmental conditions. We leverage Hazard Analysis and Risk Assessment (HARA) and fault tree modeling to identify critical operational conditions affecting system functionality. At runtime, the resulting Bayesian network (BN) is instantiated using real-time environmental observations to infer a probabilistic distribution over the safety estimate.
arXiv Detail & Related papers (2025-07-04T12:12:32Z)
- COIN: Uncertainty-Guarding Selective Question Answering for Foundation Models with Provable Risk Guarantees [51.5976496056012]
COIN is an uncertainty-guarding selection framework that calibrates statistically valid thresholds to filter a single generated answer per question. COIN estimates the empirical error rate on a calibration set and applies confidence interval methods to establish a high-probability upper bound on the true error rate. We demonstrate COIN's robustness in risk control, strong test-time power in retaining admissible answers, and predictive efficiency under limited calibration data.
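One standard confidence-interval choice for such a calibration step is the exact Clopper-Pearson upper bound; whether COIN uses this particular interval is an assumption of this sketch:

```python
from scipy.stats import beta

def clopper_pearson_upper(num_errors, n, delta):
    """One-sided (1 - delta) upper confidence bound on the true error rate
    after observing num_errors errors among n calibration questions."""
    if num_errors >= n:
        return 1.0  # degenerate case: the bound is vacuous
    return float(beta.ppf(1.0 - delta, num_errors + 1, n - num_errors))

# Example: 12 errors on 500 calibration questions, 95% confidence.
print(clopper_pearson_upper(12, 500, 0.05))
```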
arXiv Detail & Related papers (2025-06-25T07:04:49Z)
- Advancing Neural Network Verification through Hierarchical Safety Abstract Interpretation [52.626086874715284]
We introduce a novel problem formulation called Abstract DNN-Verification, which verifies a hierarchical structure of unsafe outputs. By leveraging abstract interpretation and reasoning about output reachable sets, our approach enables assessing multiple safety levels during the formal verification process. Our contributions include a theoretical exploration of the relationship between our novel abstract safety formulation and existing approaches.
arXiv Detail & Related papers (2025-05-08T13:29:46Z)
- Risk-Averse Certification of Bayesian Neural Networks [70.44969603471903]
We propose a Risk-Averse Certification framework for Bayesian neural networks called RAC-BNN. Our method leverages sampling and optimisation to compute a sound approximation of the output set of a BNN. We validate RAC-BNN on a range of regression and classification benchmarks and compare its performance with a state-of-the-art method.
arXiv Detail & Related papers (2024-11-29T14:22:51Z)
- Distributionally Robust Policy and Lyapunov-Certificate Learning [13.38077406934971]
A key challenge in designing controllers with stability guarantees for uncertain systems is accurately determining, and adapting to, shifts in model parametric uncertainty during online deployment.
We tackle this with a novel distributionally robust formulation of the Lyapunov derivative chance constraint, ensuring a monotonic decrease of the Lyapunov certificate.
We show that, for the resulting closed-loop system, the global stability of its equilibrium can be certified with high confidence, even under out-of-distribution uncertainties.
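For context, when the ambiguity set fixes only the mean $\mu$ and covariance $\Sigma$ of the uncertainty $\xi$ (the paper's ambiguity set may differ), a distributionally robust chance constraint on an affine function of $\xi$ admits the classical exact reformulation
$$\inf_{\mathbb{P}:\,\mathbb{E}[\xi]=\mu,\ \operatorname{Cov}[\xi]=\Sigma} \mathbb{P}\!\left(a^\top \xi \le b\right) \ \ge\ 1-\varepsilon \quad\Longleftrightarrow\quad a^\top \mu + \sqrt{\tfrac{1-\varepsilon}{\varepsilon}}\,\bigl\|\Sigma^{1/2} a\bigr\|_2 \ \le\ b,$$
turning the probabilistic requirement into a deterministic second-order-cone constraint.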
arXiv Detail & Related papers (2024-04-03T18:57:54Z)
- Wasserstein Distributionally Robust Control Barrier Function using Conditional Value-at-Risk with Differentiable Convex Programming [4.825619788907192]
Control barrier functions (CBFs) have attracted extensive attention for designing safe controllers for real-world safety-critical systems.
We present a distributionally robust CBF (DR-CBF) to achieve resilience under distributional shift.
We also provide an approximate variant of the DR-CBF for higher-order systems.
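A minimal sample-based sketch of the CVaR quantity that such DR-CBF formulations constrain, via the Rockafellar-Uryasev representation (the worst case over a Wasserstein ball, which the paper also takes, is omitted here):

```python
import numpy as np

def empirical_cvar(losses, eps):
    """Estimate CVaR at tail level eps from loss samples using
        CVaR_eps = min_t  t + E[(loss - t)_+] / eps,
    whose in-sample minimizer t is the (1 - eps)-quantile."""
    t = np.quantile(losses, 1.0 - eps)
    return float(t + np.mean(np.maximum(losses - t, 0.0)) / eps)
```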
arXiv Detail & Related papers (2023-09-15T18:45:09Z)
- Capsa: A Unified Framework for Quantifying Risk in Deep Neural Networks [142.67349734180445]
Existing algorithms that provide risk-awareness to deep neural networks are complex and ad hoc.
Here we present capsa, a framework for extending models with risk-awareness.
arXiv Detail & Related papers (2023-08-01T02:07:47Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
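For background, the single-constraint CBF quadratic program that such reformulations build on has a closed-form solution; a minimal sketch with illustrative names:

```python
import numpy as np

def cbf_qp_filter(u_nom, Lfh, Lgh, alpha_h):
    """Minimally modify the nominal input u_nom subject to the CBF
    condition Lfh + Lgh @ u + alpha_h >= 0, i.e., project u_nom onto the
    safe half-space whenever it violates the condition."""
    slack = Lfh + Lgh @ u_nom + alpha_h
    if slack >= 0.0:
        return u_nom                              # already safe
    return u_nom - slack * Lgh / (Lgh @ Lgh)      # closest safe input
```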
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.