Verifying Safety of Neural Networks from Topological Perspectives
- URL: http://arxiv.org/abs/2306.15403v1
- Date: Tue, 27 Jun 2023 12:02:25 GMT
- Title: Verifying Safety of Neural Networks from Topological Perspectives
- Authors: Zhen Liang, Dejin Ren, Bai Xue, Ji Wang, Wenjing Yang and Wanwei Liu
- Abstract summary: Neural networks (NNs) are increasingly applied in safety-critical systems such as autonomous vehicles.
We propose a set-boundary reachability method to investigate the safety verification problem of NNs from a topological perspective.
- Score: 5.487915758677296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks (NNs) are increasingly applied in safety-critical systems
such as autonomous vehicles. However, they are fragile and often ill-behaved.
Consequently, their behaviors require rigorous guarantees before deployment in
practice. In this paper, we propose a set-boundary
reachability method to investigate the safety verification problem of NNs from
a topological perspective. Given an NN with an input set and a safe set, the
safety verification problem is to determine whether all outputs of the NN
resulting from the input set fall within the safe set. Our method mainly
exploits the homeomorphism property and the open map property of NNs, which
establish a rigorous relationship between the boundary of the input set and the
boundary of the output set. Exploiting these two properties allows reachability
computations to be performed on extracted subsets of the input set rather than
on the entire input set, which controls the wrapping effect in reachability
analysis and reduces the computational burden of safety verification. The
homeomorphism property holds for some widely used NNs such as invertible
residual networks (i-ResNets) and neural ordinary differential equations
(Neural ODEs), while the open map property is less strict and easier to
satisfy. For NNs satisfying either of these properties, our set-boundary
reachability method only needs to perform reachability analysis on the boundary
of the input set. Moreover, for NNs that do not feature these properties with
respect to the input set, we identify subsets of the input set on which the
local homeomorphism property can be established and then exclude these subsets
from the reachability computations. Finally, several examples demonstrate the
performance of the proposed method.
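As a rough illustration of the mechanics (a minimal sketch, not the authors' tool), the code below propagates only the 2n boundary facets of a box-shaped input set through a toy ReLU network using interval bound propagation, then checks the resulting output boxes against an assumed safe box. The network, weights, safe set, and the choice of interval bound propagation as the reachability engine are all illustrative assumptions; soundness of analyzing only the boundary rests on the homeomorphism or open-map property established in the paper, which a plain ReLU network need not satisfy.

```python
import numpy as np

# Toy 2-layer ReLU network (illustrative weights only).
rng = np.random.default_rng(0)
W1, b1 = 0.3 * rng.standard_normal((8, 2)), np.zeros(8)
W2, b2 = 0.3 * rng.standard_normal((2, 8)), np.zeros(2)

def affine_box(W, b, lo, hi):
    """Tightest axis-aligned box containing {W x + b : lo <= x <= hi}."""
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    c, r = W @ c + b, np.abs(W) @ r
    return c - r, c + r

def reach_box(lo, hi):
    """Interval bound propagation through the toy network."""
    lo, hi = affine_box(W1, b1, lo, hi)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    return affine_box(W2, b2, lo, hi)

def boundary_facets(lo, hi):
    """The 2n faces of the box [lo, hi], each a degenerate (flat) sub-box."""
    facets = []
    for i in range(lo.size):
        for v in (lo[i], hi[i]):
            f_lo, f_hi = lo.copy(), hi.copy()
            f_lo[i] = f_hi[i] = v
            facets.append((f_lo, f_hi))
    return facets

in_lo, in_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])      # input box
safe_lo, safe_hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])  # assumed safe box

# Set-boundary idea: propagate only the boundary facets, not the full box.
facet_ok = [
    bool(np.all(o_lo >= safe_lo) and np.all(o_hi <= safe_hi))
    for o_lo, o_hi in (reach_box(f_lo, f_hi)
                       for f_lo, f_hi in boundary_facets(in_lo, in_hi))
]
print("all boundary reach boxes inside the safe set:", all(facet_ok))
```

For an NN that is (locally) a homeomorphism or an open map on the input set, the paper's results allow such a boundary-only analysis to certify the entire input set, since under these properties the boundary of the output set is contained in the image of the input set's boundary; for the plain ReLU network above, the check is only a demonstration of the mechanics.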
Related papers
- Provably Safe Neural Network Controllers via Differential Dynamic Logic [2.416907802598482]
We present the first general approach that allows reusing control theory results for neural network control system (NNCS) verification.
Based on provably safe control envelopes in differential dynamic logic (dL), we derive specifications for the NN, which are then proven via NN verification.
We show that a proof that the NN adheres to the specification is mirrored by a dL proof of the infinite-time safety of the NNCS.
arXiv Detail & Related papers (2024-02-16T16:15:25Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all regions of the property's input domain that are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits (see the sketch after this list).
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- Scaling Model Checking for DNN Analysis via State-Space Reduction and Input Segmentation (Extended Version) [12.272381003294026]
Existing frameworks provide robustness and/or safety guarantees for the trained NNs.
We previously proposed FANNet, the first model-checking-based framework for analyzing a broader range of NN properties.
This work develops state-space reduction and input segmentation approaches to improve the scalability and timing efficiency of formal NN analysis.
arXiv Detail & Related papers (2023-06-29T22:18:07Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Safety Verification for Neural Networks Based on Set-boundary Analysis [5.487915758677295]
Neural networks (NNs) are increasingly applied in safety-critical systems such as autonomous vehicles.
We propose a set-boundary reachability method to investigate the safety verification problem of NNs from a topological perspective.
arXiv Detail & Related papers (2022-10-09T05:55:37Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
- Backward Reachability Analysis for Neural Feedback Loops [40.989393438716476]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present an algorithm to iteratively find backprojection (BP) set estimates over a given time horizon and demonstrate the ability to reduce conservativeness by up to 88% with low additional computational cost.
arXiv Detail & Related papers (2022-04-14T01:13:14Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach that learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Robustness Analysis of Neural Networks via Efficient Partitioning with Applications in Control Systems [45.35808135708894]
Neural networks (NNs) are now routinely implemented on systems that must operate in uncertain environments.
This paper unifies propagation and partition approaches to provide a family of robustness analysis algorithms.
The new partitioning techniques are aware of their current bound estimates and the desired boundary shape.
arXiv Detail & Related papers (2020-10-01T16:51:36Z)
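For the epsilon-ProVe entry above, the following minimal sketch (an assumed illustration of the general idea, not the paper's implementation) shows how a non-parametric, Wilks-style tolerance limit gives a statistically guaranteed under-estimate of a network's output range over an input region. The toy network, input region, coverage level R, and confidence level are all illustrative.

```python
import numpy as np

# Toy ReLU network with a scalar output (illustrative weights only).
rng = np.random.default_rng(1)
W1, b1 = 0.5 * rng.standard_normal((16, 3)), 0.1 * rng.standard_normal(16)
W2, b2 = 0.5 * rng.standard_normal((1, 16)), np.zeros(1)

def net(x):
    return float(W2 @ np.maximum(W1 @ x + b1, 0.0) + b2)

# One-sided non-parametric tolerance limit: with n i.i.d. samples, the sample
# maximum exceeds at least a fraction R of the output distribution with
# confidence 1 - R**n, so n >= log(1 - confidence) / log(R) samples suffice.
R, confidence = 0.99, 0.999
n = int(np.ceil(np.log(1.0 - confidence) / np.log(R)))

in_lo, in_hi = np.array([-1.0, -1.0, -1.0]), np.array([1.0, 1.0, 1.0])
samples = rng.uniform(in_lo, in_hi, size=(n, 3))
outputs = np.array([net(x) for x in samples])

print(f"n = {n} samples")
print(f"upper tolerance limit {outputs.max():.3f}: covers >= {R:.0%} of outputs "
      f"with confidence {confidence:.1%}")
print(f"(the sample minimum {outputs.min():.3f} is an analogous lower limit)")
```

The one-sided bound above uses the sample maximum; epsilon-ProVe's actual construction and guarantees are specified in the paper.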