Safety Verification for Neural Networks Based on Set-boundary Analysis
- URL: http://arxiv.org/abs/2210.04175v1
- Date: Sun, 9 Oct 2022 05:55:37 GMT
- Title: Safety Verification for Neural Networks Based on Set-boundary Analysis
- Authors: Zhen Liang, Dejin Ren, Wanwei Liu, Ji Wang, Wenjing Yang and Bai Xue
- Abstract summary: Neural networks (NNs) are increasingly applied in safety-critical systems such as autonomous vehicles.
We propose a set-boundary reachability method to investigate the safety verification problem of NNs from a topological perspective.
- Score: 5.487915758677295
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks (NNs) are increasingly applied in safety-critical systems
such as autonomous vehicles. However, they are fragile and often
ill-behaved. Consequently, their behavior should be backed by rigorous
guarantees before deployment in practice. In this paper we propose a set-boundary
reachability method to investigate the safety verification problem of NNs from
a topological perspective. Given an NN with an input set and a safe set, the
safety verification problem is to determine whether all outputs of the NN
resulting from the input set fall within the safe set. Our method mainly
exploits the homeomorphism property of NNs, which maps the boundary of the
input set to the boundary of the output set. Exploiting this property allows
reachability computations to be performed on subsets of the input set rather
than on the entire input set, which controls the wrapping effect in
reachability analysis and reduces the computational burden of safety
verification. The homeomorphism property holds for some widely used NNs,
notably invertible NNs such as invertible residual networks (i-ResNets) and
neural ordinary differential equations (Neural ODEs). For these NNs, our
set-boundary reachability method only needs
to perform reachability analysis on the boundary of the input set. For NNs
that do not have this property with respect to the input set, we search for
subsets of the input set on which the local homeomorphism property can be
established, and then exclude these subsets from the reachability
computations. Finally, some
examples demonstrate the performance of the proposed method.
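To make the idea concrete, below is a minimal illustrative sketch (not the authors' implementation) of the set-boundary check, under simplifying assumptions chosen here for brevity: the network is a single i-ResNet-style layer y = x + 0.5*tanh(Wx) whose residual is a contraction (so the layer is invertible and hence a homeomorphism), the input and safe sets are 2-D boxes, and reachable sets are over-approximated with plain interval arithmetic. All names and parameter values are made up for illustration.

```python
# Illustrative sketch only (not the paper's implementation): set-boundary safety
# check for a toy invertible network using interval arithmetic on a 2-D box.
import numpy as np

# A single i-ResNet-style layer y = x + SCALE * tanh(W x).  With ||SCALE * W|| < 1
# the residual is a contraction, so the layer is invertible, hence a homeomorphism.
W = np.array([[0.6, -0.3],
              [0.2,  0.5]])
SCALE = 0.5

def interval_layer(lo, hi):
    """Interval over-approximation of y = x + SCALE*tanh(W x) for x in the box [lo, hi]."""
    z_lo = W.clip(min=0) @ lo + W.clip(max=0) @ hi   # interval matrix-vector product
    z_hi = W.clip(min=0) @ hi + W.clip(max=0) @ lo
    t_lo, t_hi = np.tanh(z_lo), np.tanh(z_hi)        # tanh is monotone, so endpoints suffice
    return lo + SCALE * t_lo, hi + SCALE * t_hi

def boundary_pieces(lo, hi, n=50):
    """Cover the boundary of the 2-D box [lo, hi] (its four edges) with thin sub-boxes."""
    pieces = []
    for dim in (0, 1):                               # the dimension fixed on a face
        free = 1 - dim
        grid = np.linspace(lo[free], hi[free], n + 1)
        for fixed_val in (lo[dim], hi[dim]):         # lower and upper face
            for a, b in zip(grid[:-1], grid[1:]):
                p_lo, p_hi = lo.copy(), hi.copy()
                p_lo[dim] = p_hi[dim] = fixed_val
                p_lo[free], p_hi[free] = a, b
                pieces.append((p_lo, p_hi))
    return pieces

def verify(in_lo, in_hi, safe_lo, safe_hi):
    """True if every boundary piece provably maps into the safe box.

    Since the layer is a homeomorphism, the image of the input boundary is the
    boundary of the output set; with a safe box whose complement is connected
    and unbounded, boundary containment implies containment of the whole set.
    """
    for p_lo, p_hi in boundary_pieces(in_lo, in_hi):
        y_lo, y_hi = interval_layer(p_lo, p_hi)
        if not (np.all(y_lo >= safe_lo) and np.all(y_hi <= safe_hi)):
            return False   # inconclusive: refine the pieces or analyse the full input set
    return True

if __name__ == "__main__":
    in_lo, in_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
    safe_lo, safe_hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
    print("verified safe:", verify(in_lo, in_hi, safe_lo, safe_hi))
```

If some boundary piece cannot be shown to map into the safe box, the result is inconclusive; one would refine the boundary cover or fall back to reachability analysis over the full input set.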
Related papers
- Provably Safe Neural Network Controllers via Differential Dynamic Logic [2.416907802598482]
We present the first general approach that allows reusing control theory results for NNCS verification.
Based on provably safe control envelopes in dL, we derive specifications for the NN, which are then proven via NN verification.
We show that a proof of the NN adhering to the specification is mirrored by a dL proof on the infinite-time safety of the NNCS.
arXiv Detail & Related papers (2024-02-16T16:15:25Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate all regions of the property's input domain that are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- Scaling Model Checking for DNN Analysis via State-Space Reduction and Input Segmentation (Extended Version) [12.272381003294026]
Existing frameworks provide robustness and/or safety guarantees for the trained NNs.
We proposed FANNet, the first model checking-based framework for analyzing a broader range of NN properties.
This work develops state-space reduction and input segmentation approaches to improve the scalability and timing efficiency of formal NN analysis.
arXiv Detail & Related papers (2023-06-29T22:18:07Z)
- Verifying Safety of Neural Networks from Topological Perspectives [5.487915758677296]
Neural networks (NNs) are increasingly applied in safety-critical systems such as autonomous vehicles.
We propose a set-boundary reachability method to investigate the safety verification problem of NNs from a topological perspective.
arXiv Detail & Related papers (2023-06-27T12:02:25Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
- Backward Reachability Analysis for Neural Feedback Loops [40.989393438716476]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present an algorithm to iteratively find BP set estimates over a given time horizon and demonstrate the ability to reduce conservativeness by up to 88% with low additional computational cost.
arXiv Detail & Related papers (2022-04-14T01:13:14Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Robustness Analysis of Neural Networks via Efficient Partitioning with Applications in Control Systems [45.35808135708894]
Neural networks (NNs) are now routinely implemented on systems that must operate in uncertain environments.
This paper unifies propagation and partition approaches to provide a family of robustness analysis algorithms.
New partitioning techniques are aware of their current bound estimates and desired boundary shape.
arXiv Detail & Related papers (2020-10-01T16:51:36Z)