Advancing Neural Network Verification through Hierarchical Safety Abstract Interpretation
- URL: http://arxiv.org/abs/2505.05235v1
- Date: Thu, 08 May 2025 13:29:46 GMT
- Title: Advancing Neural Network Verification through Hierarchical Safety Abstract Interpretation
- Authors: Luca Marzari, Isabella Mastroeni, Alessandro Farinelli
- Abstract summary: We introduce a novel problem formulation called Abstract DNN-Verification, which verifies a hierarchical structure of unsafe outputs. By leveraging abstract interpretation and reasoning about output reachable sets, our approach enables assessing multiple safety levels during the formal verification process. Our contributions include a theoretical exploration of the relationship between our novel abstract safety formulation and existing approaches.
- Score: 52.626086874715284
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional methods for formal verification (FV) of deep neural networks (DNNs) are constrained by a binary encoding of safety properties, where a model is classified as either safe or unsafe (robust or not robust). This binary encoding fails to capture the nuanced safety levels within a model, often resulting in either overly restrictive or too permissive requirements. In this paper, we introduce a novel problem formulation called Abstract DNN-Verification, which verifies a hierarchical structure of unsafe outputs, providing a more granular analysis of the safety aspect for a given DNN. Crucially, by leveraging abstract interpretation and reasoning about output reachable sets, our approach enables assessing multiple safety levels during the FV process, requiring the same (in the worst case) or even potentially less computational effort than the traditional binary verification approach. Specifically, we demonstrate how this formulation allows ranking adversarial inputs according to their abstract safety level violation, offering a more detailed evaluation of the model's safety and robustness. Our contributions include a theoretical exploration of the relationship between our novel abstract safety formulation and existing approaches that employ abstract interpretation for robustness verification, complexity analysis of the novel problem introduced, and an empirical evaluation considering both a complex deep reinforcement learning task (based on Habitat 3.0) and standard DNN-Verification benchmarks.
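As an informal illustration of the idea (not the authors' implementation), the following sketch over-approximates the outputs of a toy ReLU network with interval bound propagation and reports the most severe level, in an assumed hierarchy of unsafe output regions, that cannot be ruled out. The network, the hierarchy, and the thresholds are hypothetical.
```python
# Minimal sketch: certify a hierarchy of safety levels for a tiny ReLU network
# via interval bound propagation over an input box.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box through an affine layer W @ x + b."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def output_box(layers, lo, hi):
    """Over-approximate the reachable outputs of a ReLU network on [lo, hi]."""
    for W, b in layers[:-1]:
        lo, hi = interval_affine(lo, hi, W, b)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # interval ReLU
    return interval_affine(lo, hi, *layers[-1])

def abstract_safety_level(layers, lo, hi, unsafe_hierarchy):
    """Return the most severe unsafe level that cannot be excluded.

    `unsafe_hierarchy` is a list of (level name, predicate) pairs ordered from
    most to least severe; a level is certified safe when its unsafe region is
    provably disjoint from the output over-approximation.
    """
    out_lo, out_hi = output_box(layers, lo, hi)
    for level, intersects_unsafe in unsafe_hierarchy:
        if intersects_unsafe(out_lo, out_hi):
            return level          # cannot rule this level out
    return "safe at every level"

# Hypothetical 2-2-1 network and nested unsafe thresholds on its output y.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((2, 2)), np.zeros(2)),
          (rng.standard_normal((1, 2)), np.zeros(1))]
hierarchy = [("critical (y > 3)", lambda lo, hi: hi[0] > 3.0),
             ("warning  (y > 1)", lambda lo, hi: hi[0] > 1.0)]
print(abstract_safety_level(layers, np.array([-1., -1.]), np.array([1., 1.]), hierarchy))
```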
Related papers
- Verification-Guided Falsification for Safe RL via Explainable Abstraction and Risk-Aware Exploration [8.246285288584625]
We propose a hybrid framework that integrates explainability, model checking, and risk-guided falsification to achieve both rigor and coverage. Our approach begins by constructing a human-interpretable abstraction of the RL policy using Comprehensible Abstract Policy Summarization (CAPS). If no violation is detected, we cannot conclude satisfaction due to potential limitations in the abstraction and the coverage of the offline dataset.
arXiv Detail & Related papers (2025-06-04T00:54:01Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all the regions of the property input domain which are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
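A minimal sketch of the statistical idea (not the epsilon-ProVe implementation): a one-sided nonparametric tolerance limit computed from random samples gives a probabilistic upper bound on the network output over a region; the toy network and the unsafe threshold are assumptions.
```python
# Minimal sketch: bound the network output over a region with a one-sided
# nonparametric tolerance limit, then check a safety threshold against it.
import numpy as np

def relu_net(x, layers):
    for W, b in layers[:-1]:
        x = np.maximum(x @ W.T + b, 0.0)
    W, b = layers[-1]
    return x @ W.T + b

rng = np.random.default_rng(1)
layers = [(rng.standard_normal((8, 2)), np.zeros(8)),
          (rng.standard_normal((1, 8)), np.zeros(1))]

n, coverage = 2000, 0.99
x = rng.uniform(-1.0, 1.0, size=(n, 2))          # samples from the input region
y_max = relu_net(x, layers)[:, 0].max()          # empirical upper limit

# With n i.i.d. samples, the sample maximum upper-bounds at least a `coverage`
# fraction of the output distribution with confidence 1 - coverage**n.
confidence = 1.0 - coverage ** n
threshold = 3.0                                   # hypothetical unsafe level y > 3
print(f"statistical upper limit {y_max:.3f} "
      f"(covers >= {coverage:.0%} of outputs with confidence {confidence:.4f})")
print("property y <= 3 holds on the covered mass" if y_max <= threshold
      else "possible violation observed")
```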
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- Online Safety Property Collection and Refinement for Safe Deep Reinforcement Learning in Mapless Navigation [79.89605349842569]
We introduce the Collection and Refinement of Online Properties (CROP) framework to design properties at training time.
CROP employs a cost signal to identify unsafe interactions and uses them to shape safety properties.
We evaluate our approach in several robotic mapless navigation tasks and demonstrate that the violation metric computed with CROP allows higher returns and lower violations over previous Safe DRL approaches.
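A minimal sketch of the cost-signal idea (not the CROP implementation): interactions flagged as unsafe by a cost signal are aggregated into box-shaped candidate safety properties; the logged transitions and state layout are hypothetical.
```python
# Minimal sketch: turn cost-flagged interactions into candidate safety properties
# of the form "in this state region, action `a` was unsafe".
import numpy as np

def refine_properties(transitions):
    """transitions: iterable of (state, action, cost); cost > 0 marks unsafe."""
    boxes = {}                                # action -> (lower, upper) state box
    for state, action, cost in transitions:
        if cost <= 0:
            continue
        s = np.asarray(state, dtype=float)
        if action not in boxes:
            boxes[action] = (s.copy(), s.copy())
        else:
            lo, hi = boxes[action]
            boxes[action] = (np.minimum(lo, s), np.maximum(hi, s))
    return boxes                              # candidate properties to verify later

# Hypothetical logged interactions: (lidar-like state, action id, cost signal).
rng = np.random.default_rng(2)
log = [(rng.uniform(0, 1, size=3), int(rng.integers(0, 3)), float(rng.random() < 0.2))
       for _ in range(200)]
for action, (lo, hi) in refine_properties(log).items():
    print(f"unsafe region for action {action}: lower={np.round(lo, 2)}, upper={np.round(hi, 2)}")
```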
arXiv Detail & Related papers (2023-02-13T21:19:36Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
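For intuition only (the paper's method is not brute-force enumeration), the sketch below computes the exact violation count for a toy network over a small discretized input space; the network, the grid, and the unsafe threshold are assumptions.
```python
# Minimal sketch: what the #DNN-Verification count measures, computed exactly by
# enumerating a small discrete input space for a toy ReLU network.
import itertools
import numpy as np

def relu_net(x, layers):
    for W, b in layers[:-1]:
        x = np.maximum(W @ x + b, 0.0)
    W, b = layers[-1]
    return W @ x + b

rng = np.random.default_rng(3)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),
          (rng.standard_normal((1, 4)), np.zeros(1))]

values = np.linspace(-1.0, 1.0, 11)              # each input takes 11 discrete values
violations = sum(
    1 for x in itertools.product(values, repeat=3)
    if relu_net(np.array(x), layers)[0] > 2.0    # hypothetical unsafe output y > 2
)
total = len(values) ** 3
print(f"{violations} of {total} input configurations violate the property "
      f"({violations / total:.2%})")
```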
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
We propose adversarial self-supervision UDA (or ASSUDA) that maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision (e.g., rotation and jigsaw) benefits image tasks such as classification and recognition, it fails to provide the critical supervision signals needed to learn discriminative representations for segmentation tasks.
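A minimal sketch of an output-space contrastive objective in the spirit of ASSUDA (not its exact loss): the model outputs for a clean image and its adversarial counterpart form a positive pair, while other images in the batch act as negatives; the shapes and temperature below are illustrative.
```python
# Minimal sketch: an InfoNCE-style agreement loss between clean and adversarial
# model outputs, with positives on the diagonal of the similarity matrix.
import numpy as np

def output_space_contrastive_loss(clean_out, adv_out, temperature=0.1):
    """clean_out, adv_out: (batch, dim) model outputs; row i is the same image."""
    def normalize(z):
        return z / np.linalg.norm(z, axis=1, keepdims=True)
    c, a = normalize(clean_out), normalize(adv_out)
    logits = (c @ a.T) / temperature           # cosine similarities between pairs
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))        # maximize agreement on the diagonal

rng = np.random.default_rng(4)
clean = rng.standard_normal((8, 19 * 32 * 32))   # hypothetical flattened segmentation outputs
adv = clean + 0.05 * rng.standard_normal(clean.shape)
print(f"contrastive loss: {output_space_contrastive_loss(clean, adv):.4f}")
```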
arXiv Detail & Related papers (2021-05-23T01:50:44Z)
- An Abstraction-based Method to Verify Multi-Agent Deep Reinforcement-Learning Behaviours [8.95294551927446]
Multi-agent reinforcement learning (RL) often struggles to ensure the safe behaviours of the learning agents.
We present a methodology that combines formal verification with (deep) RL algorithms to guarantee the satisfaction of formally-specified safety constraints.
arXiv Detail & Related papers (2021-02-02T11:12:30Z)
- PEREGRiNN: Penalized-Relaxation Greedy Neural Network Verifier [1.1011268090482575]
We introduce a new approach to formally verify the most commonly considered safety specifications for ReLU NNs.
We use a convex solver not only as a linear feasibility checker, but also as a means of penalizing the amount of relaxation allowed in solutions.
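A minimal sketch of the penalized-relaxation idea (not the PEREGRiNN implementation): the triangle relaxation of a tiny ReLU layer is encoded as an LP with SciPy, and the objective penalizes relaxation slack instead of running a pure feasibility check; the network, bounds, and unsafe query are assumptions.
```python
# Minimal sketch: LP over the triangle relaxation of one ReLU layer, with an
# objective that drives the relaxed activations toward exact ReLU behaviour.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
W1, b1 = rng.standard_normal((2, 2)), np.zeros(2)    # hidden pre-activations z = W1 @ x + b1
W2, b2 = rng.standard_normal((1, 2)), np.zeros(1)    # output o = W2 @ y + b2
x_lo, x_hi = np.full(2, -1.0), np.full(2, 1.0)

# Interval bounds on z over the input box (they straddle 0 here, so the
# triangle relaxation applies to every neuron).
l = W1 @ ((x_lo + x_hi) / 2) + b1 - np.abs(W1) @ ((x_hi - x_lo) / 2)
u = W1 @ ((x_lo + x_hi) / 2) + b1 + np.abs(W1) @ ((x_hi - x_lo) / 2)

# Decision vector v = [x1, x2, y1, y2]; inequalities are A_ub @ v <= b_ub.
A_ub, b_ub = [], []
for i in range(2):
    # y_i >= z_i  <=>  W1[i] @ x - y_i <= -b1[i]
    A_ub.append(np.concatenate([W1[i], -np.eye(2)[i]]))
    b_ub.append(-b1[i])
    # y_i <= u_i * (z_i - l_i) / (u_i - l_i)
    s = u[i] / (u[i] - l[i])
    A_ub.append(np.concatenate([-s * W1[i], np.eye(2)[i]]))
    b_ub.append(s * (b1[i] - l[i]))
# Unsafe output query: is o >= 2 reachable under the relaxation?
A_ub.append(np.concatenate([np.zeros(2), -W2[0]]))
b_ub.append(b2[0] - 2.0)

# Objective: minimize the relaxed activations, penalizing slack above max(z, 0).
c = np.concatenate([np.zeros(2), np.ones(2)])
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(x_lo[0], x_hi[0]), (x_lo[1], x_hi[1]),
                      (0.0, max(u[0], 0.0)), (0.0, max(u[1], 0.0))],
              method="highs")
print("relaxation finds a candidate violation" if res.success
      else "property o < 2 proved under this relaxation")
```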
arXiv Detail & Related papers (2020-06-18T21:33:07Z)