Scaling #DNN-Verification Tools with Efficient Bound Propagation and
Parallel Computing
- URL: http://arxiv.org/abs/2312.05890v1
- Date: Sun, 10 Dec 2023 13:51:25 GMT
- Title: Scaling #DNN-Verification Tools with Efficient Bound Propagation and
Parallel Computing
- Authors: Luca Marzari, Gabriele Roncolato and Alessandro Farinelli
- Abstract summary: Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
- Score: 57.49021927832259
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary
results in many scenarios, ranging from pattern recognition to complex robotic
problems. However, their intricate designs and lack of transparency raise
safety concerns when applied in real-world applications. In this context,
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide
provable guarantees on the safety aspect. Nonetheless, the binary answer (i.e.,
safe or unsafe) may not be informative enough for direct safety interventions
such as ranking or selecting models by their safety. To address this
limitation, the FV
problem has recently been extended to the counting version, called
#DNN-Verification, for the computation of the size of the unsafe regions in a
given safety property's domain. Still, due to the complexity of the problem,
existing solutions struggle to scale on real-world robotic scenarios, where the
DNN can be large and complex. To address this limitation, inspired by advances
in FV, in this work, we propose a novel strategy based on reachability analysis
combined with Symbolic Linear Relaxation and parallel computing to enhance the
efficiency of existing exact and approximate FV-based DNN counters. The empirical
evaluation on standard FV benchmarks and realistic robotic scenarios shows a
remarkable improvement in scalability and efficiency, enabling the use of such
techniques even for complex robotic applications.
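To make the two main ingredients concrete, here is a minimal sketch, not the authors' implementation, of naive interval bound propagation through a small fully connected ReLU network combined with a parallel split of the input domain into sub-boxes. The paper itself relies on the tighter Symbolic Linear Relaxation rather than plain interval arithmetic, and the toy network, safety property, and function names below are assumptions for illustration only.

```python
# Minimal sketch (not the paper's implementation): naive interval bound
# propagation through a small fully connected ReLU network, with the input
# domain split into sub-boxes that are checked in parallel. The paper uses the
# tighter Symbolic Linear Relaxation; plain intervals are shown only to
# illustrate the idea. Network, property, and names are assumptions.
from itertools import product
from concurrent.futures import ProcessPoolExecutor

import numpy as np


def propagate_interval(weights, biases, lb, ub):
    """Push an input box [lb, ub] through ReLU layers with interval arithmetic."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lb = W_pos @ lb + W_neg @ ub + b
        new_ub = W_pos @ ub + W_neg @ lb + b
        if i < len(weights) - 1:                  # ReLU on hidden layers only
            new_lb, new_ub = np.maximum(new_lb, 0.0), np.maximum(new_ub, 0.0)
        lb, ub = new_lb, new_ub
    return lb, ub


def split_box(lb, ub, k):
    """Split an n-dimensional box into k**n equal sub-boxes."""
    grids = [np.linspace(l, u, k + 1) for l, u in zip(lb, ub)]
    for idx in product(range(k), repeat=len(lb)):
        yield (np.array([grids[d][i] for d, i in enumerate(idx)]),
               np.array([grids[d][i + 1] for d, i in enumerate(idx)]))


def classify(task):
    """Label a sub-box as provably safe, provably unsafe, or unknown."""
    weights, biases, lb, ub = task
    out_lb, out_ub = propagate_interval(weights, biases, lb, ub)
    if out_lb[0] >= 0.0:          # assumed property: output 0 must be >= 0
        return "safe"
    if out_ub[0] < 0.0:
        return "unsafe"
    return "unknown"              # would be refined further or passed to an exact solver


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((8, 2)), rng.standard_normal((1, 8))]
    biases = [rng.standard_normal(8), rng.standard_normal(1)]
    boxes = list(split_box(np.array([-1.0, -1.0]), np.array([1.0, 1.0]), k=4))
    tasks = [(weights, biases, lo, hi) for lo, hi in boxes]
    with ProcessPoolExecutor() as pool:           # sub-boxes verified in parallel
        labels = list(pool.map(classify, tasks))
    print({lab: labels.count(lab) for lab in ("safe", "unsafe", "unknown")})
```

Sub-boxes whose output bounds already decide the property contribute directly to the safe or unsafe volume; the remaining "unknown" boxes are the ones an exact or approximate counter would keep refining.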
Related papers
- Enumerating Safe Regions in Deep Neural Networks with Provable
Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate all the regions of the property's input domain that are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
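As a rough illustration of the statistical idea summarized above, and not the authors' epsilon-ProVe implementation, the sketch below under-approximates the output reachable set by sampling the property's input domain and bounds the coverage of the resulting empirical range, per output dimension, with a classic non-parametric tolerance-interval result; the toy network and sample size are assumptions.

```python
# Hedged sketch of the sampling-based under-approximation described above (the
# actual epsilon-ProVe algorithm is more involved): sample the property's input
# domain, take the per-dimension empirical min/max of the outputs, and bound
# the coverage of that range with a non-parametric tolerance-interval result.
import numpy as np


def empirical_reachable_set(net, lb, ub, n_samples=5000, seed=0):
    """Under-approximate the output reachable set over the input box [lb, ub]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, size=(n_samples, len(lb)))
    y = net(x)
    return y.min(axis=0), y.max(axis=0)


def coverage_confidence(n, p):
    """P(the empirical [min, max] of n i.i.d. continuous samples covers at
    least a proportion p of the output distribution), per output dimension."""
    return 1.0 - n * p ** (n - 1) + (n - 1) * p ** n


# Toy usage with an assumed two-layer ReLU network.
rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((2, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 1)), np.zeros(1)
net = lambda x: np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

lo, hi = empirical_reachable_set(net, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
print("under-approximated output range:", lo, hi)
print("confidence of >= 99% coverage:", coverage_confidence(5000, 0.99))
```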
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- Online Safety Property Collection and Refinement for Safe Deep
Reinforcement Learning in Mapless Navigation [79.89605349842569]
We introduce the Collection and Refinement of Online Properties (CROP) framework to design properties at training time.
CROP employs a cost signal to identify unsafe interactions and uses them to shape safety properties.
We evaluate our approach in several robotic mapless navigation tasks and demonstrate that the violation metric computed with CROP yields higher returns and fewer violations than previous Safe DRL approaches.
arXiv Detail & Related papers (2023-02-13T21:19:36Z)
- Enhancing Deep Learning with Scenario-Based Override Rules: a Case Study [0.0]
Deep neural networks (DNNs) have become a crucial instrument in the software development toolkit.
DNNs are highly opaque and can behave unexpectedly when they encounter unfamiliar inputs.
One promising approach to mitigating this is to extend DNN-based systems with hand-crafted override rules.
arXiv Detail & Related papers (2023-01-19T15:06:32Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural
Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
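Purely to illustrate the quantity being counted, and not the paper's exact counting procedure, the sketch below brute-forces the number of violating inputs of a toy network over a small discretized input domain; the network, grid resolution, and safety predicate are assumptions.

```python
# Illustration only of what the #DNN-Verification count measures: for a small,
# discretized input domain we can count violating inputs by brute force. The
# paper's exact approach is far more scalable; the toy network, grid, and
# safety predicate below are assumptions.
from itertools import product

import numpy as np


def count_violations(net, grid_1d, is_safe):
    """Count grid points whose network output violates the safety predicate."""
    violations = 0
    for x in product(grid_1d, repeat=2):          # 2-D inputs for the toy example
        if not is_safe(net(np.array(x))):
            violations += 1
    return violations


rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((2, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)), np.zeros(1)
net = lambda x: np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

grid = np.linspace(-1.0, 1.0, 101)                # 101 x 101 candidate inputs
count = count_violations(net, grid, is_safe=lambda y: y[0] >= 0.0)
print(f"violating inputs: {count} / {grid.size ** 2}")
```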
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical
Tasks [70.76757529955577]
This paper revisits prior work in this area from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Generating Formal Safety Assurances for High-Dimensional Reachability [4.523089386111081]
Hamilton-Jacobi (HJ) reachability analysis is a popular formal verification tool for providing safety and performance guarantees for autonomous systems.
A recently proposed method called DeepReach overcomes the poor scaling of HJ reachability with state dimension by leveraging a sinusoidal neural PDE solver for high-dimensional reachability problems.
We propose a method to compute an error bound for the DeepReach solution, resulting in a safe approximation of the true reachable tube.
arXiv Detail & Related papers (2022-09-25T22:15:53Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Automated Repair of Neural Networks [0.26651200086513094]
We introduce a framework for repairing unsafe NNs w.r.t. a given safety specification.
Our method is able to search for a new, safe NN representation, by modifying only a few of its weight values.
We perform extensive experiments which demonstrate the capability of our proposed framework to yield safe NNs w.r.t. the given safety specification.
arXiv Detail & Related papers (2022-07-17T12:42:24Z)
- Towards a Safety Case for Hardware Fault Tolerance in Convolutional
Neural Networks Using Activation Range Supervision [1.7968112116887602]
Convolutional neural networks (CNNs) have become an established part of numerous safety-critical computer vision applications.
We build a prototypical safety case for CNNs by demonstrating that range supervision represents a highly reliable fault detector.
We explore novel, non-uniform range restriction methods that effectively suppress the probability of silent data corruptions and uncorrectable errors.
arXiv Detail & Related papers (2021-08-16T11:13:55Z)
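A minimal sketch of the range-supervision idea from the entry above, under assumed layer shapes and a simple clamping policy: per-neuron activation bounds are recorded on fault-free calibration data, and activations that later fall outside those bounds are flagged and clamped so that hardware faults cannot propagate as silent data corruptions.

```python
# Hedged sketch of activation range supervision (names and the clamping policy
# are assumptions, not the paper's exact method): record per-neuron activation
# bounds on fault-free calibration data, then flag and clamp activations that
# fall outside those bounds at inference time.
import numpy as np


class RangeSupervisor:
    def __init__(self):
        self.lo = None
        self.hi = None

    def calibrate(self, activations):
        """Record per-neuron min/max over a batch of fault-free activations."""
        self.lo = activations.min(axis=0)
        self.hi = activations.max(axis=0)

    def protect(self, activations):
        """Detect activations outside the recorded ranges and clamp them."""
        out_of_range = (activations < self.lo) | (activations > self.hi)
        if out_of_range.any():
            print(f"range violation in {out_of_range.sum()} neuron(s): possible fault")
        return np.clip(activations, self.lo, self.hi)


# Toy usage: calibrate on clean ReLU-style activations, then inject a large
# corrupted value that mimics a hardware bit flip.
rng = np.random.default_rng(0)
clean = np.maximum(rng.standard_normal((1000, 32)), 0.0)
guard = RangeSupervisor()
guard.calibrate(clean)

faulty = clean[:1].copy()
faulty[0, 5] = 1e6                      # silent-data-corruption-like fault
restored = guard.protect(faulty)
print("clamped value:", restored[0, 5])
```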
This list is automatically generated from the titles and abstracts of the papers on this site.