Continuous Safety Verification of Neural Networks
- URL: http://arxiv.org/abs/2010.05689v3
- Date: Sat, 25 Sep 2021 12:10:29 GMT
- Title: Continuous Safety Verification of Neural Networks
- Authors: Chih-Hong Cheng, Rongjie Yan
- Abstract summary: This paper considers approaches to transfer results established in the previous DNN safety verification problem to a modified problem setting.
The overall concept is evaluated on a $1/10$-scaled vehicle equipped with a DNN controller that determines the visual waypoint from the perceived image.
- Score: 1.7056768055368385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deploying deep neural networks (DNNs) as core functions in autonomous driving
creates unique verification and validation challenges. In particular, the
continuous engineering paradigm of gradually perfecting a DNN-based perception
can make the previously established result of safety verification no longer
valid. This can occur either due to the newly encountered examples (i.e., input
domain enlargement) inside the Operational Design Domain or due to the
subsequent parameter fine-tuning activities of a DNN. This paper considers
approaches to transfer results established in the previous DNN safety
verification problem to the modified problem setting. By considering the reuse
of state abstractions, network abstractions, and Lipschitz constants, we
develop several sufficient conditions that only require formally analyzing a
small part of the DNN in the new problem. The overall concept is evaluated on a
$1/10$-scaled vehicle equipped with a DNN controller that determines the visual
waypoint from the perceived image.
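To make the Lipschitz-constant reuse concrete, here is a minimal numpy sketch under simplifying assumptions (bias-free fully-connected layers with 1-Lipschitz activations such as ReLU; the margin check is an illustrative sufficient condition in the spirit of the abstract, not the paper's exact formulation): if the safety margin certified for the old network exceeds a Lipschitz bound on how far fine-tuning can move the outputs over the input region of interest, the old verification result carries over without re-analyzing the whole DNN.

```python
import numpy as np

def spectral_norm(W):
    """Largest singular value, i.e. the Lipschitz constant of x -> W @ x."""
    return np.linalg.svd(W, compute_uv=False)[0]

def deviation_bound(weights_old, weights_new, input_radius):
    """Upper bound on ||f_new(x) - f_old(x)||_2 over ||x||_2 <= input_radius
    for bias-free layers with 1-Lipschitz activations, obtained by telescoping
    over the layers whose weights were touched by fine-tuning."""
    bound = 0.0
    for k in range(len(weights_old)):
        delta = spectral_norm(weights_new[k] - weights_old[k])
        if delta == 0.0:
            continue  # untouched layers contribute nothing to the bound
        pre = np.prod([spectral_norm(W) for W in weights_old[:k]])       # layers below k
        post = np.prod([spectral_norm(W) for W in weights_new[k + 1:]])  # layers above k
        bound += post * delta * pre * input_radius
    return bound

def certificate_transfers(certified_margin, weights_old, weights_new, input_radius):
    """Sufficient (not necessary) condition: the previously certified safety
    margin dominates the worst-case output shift caused by fine-tuning."""
    return certified_margin > deviation_bound(weights_old, weights_new, input_radius)

# Hypothetical usage: only the first layer was fine-tuned.
rng = np.random.default_rng(0)
old = [0.3 * rng.normal(size=(8, 4)), 0.3 * rng.normal(size=(2, 8))]
new = [old[0] + 0.01 * rng.normal(size=(8, 4)), old[1].copy()]
print(certificate_transfers(0.5, old, new, input_radius=1.0))
```

When the check fails, nothing is lost: it only means the cheap transfer argument is inconclusive and a full re-verification of (part of) the new network is needed.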
Related papers
- Harnessing Neuron Stability to Improve DNN Verification [42.65507402735545]
We present VeriStable, a novel extension of the recently proposed DPLL-based constraint DNN verification approach.
We evaluate the effectiveness of VeriStable across a range of challenging benchmarks including fully-connected feedforward networks (FNNs), convolutional neural networks (CNNs), and residual networks (ResNets).
Preliminary results show that VeriStable is competitive and outperforms state-of-the-art verification tools, including $\alpha$-$\beta$-CROWN and MN-BaB, the first- and second-place performers in VNN-COMP, respectively.
arXiv Detail & Related papers (2024-01-19T23:48:04Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Incremental Satisfiability Modulo Theory for Verification of Deep Neural Networks [22.015676101940077]
We present an incremental satisfiability modulo theory (SMT) algorithm based on the Reluplex framework.
We implement our algorithm as an incremental solver called DeepInc, and experimental results show that DeepInc is more efficient in most cases.
arXiv Detail & Related papers (2023-02-10T04:31:28Z)
- Reachability Analysis of Neural Network Control Systems [10.023618778236697]
Existing verification approaches for neural network control systems (NNCSs) can only handle a limited class of activation functions.
This paper proposes a verification framework for NNCS based on Lipschitzian optimisation, called DeepNNC.
DeepNNC shows superior performance in terms of efficiency and accuracy over a wide range of NNCSs.
arXiv Detail & Related papers (2023-01-28T05:57:37Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
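As a hedged illustration of the #DNN-Verification problem statement above, the brute-force sketch below enumerates a small discretized input box for a toy ReLU network and counts the configurations that violate a simple safety property (an output crossing a threshold). The network, grid, and property are made up for illustration; the paper's contribution is returning the exact count without this kind of enumeration, which is infeasible beyond toy sizes.

```python
import itertools
import numpy as np

def relu_net(x, W1, W2):
    """Toy two-layer ReLU network, used only to make the counting problem concrete."""
    return W2 @ np.maximum(W1 @ x, 0.0)

def count_unsafe_inputs(W1, W2, grid, threshold):
    """Count discretized input configurations violating the property
    'every output stays strictly below threshold'."""
    unsafe = 0
    for point in itertools.product(grid, repeat=W1.shape[1]):
        if np.any(relu_net(np.array(point), W1, W2) >= threshold):
            unsafe += 1
    return unsafe

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(4, 2)), rng.normal(size=(1, 4))
grid = np.linspace(-1.0, 1.0, 21)                         # 21 values per input dimension
print(count_unsafe_inputs(W1, W2, grid, threshold=2.0))   # exact count over 441 grid points
```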
- Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering [0.9590956574213348]
We propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors.
It relies on a transfer learning model pre-trained on ImageNet to extract the features from error-inducing images.
It then applies a density-based clustering algorithm to detect arbitrarily shaped clusters of images modeling plausible causes of error.
arXiv Detail & Related papers (2022-01-13T17:02:57Z)
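The SAFE pipeline above translates into a short sketch: embed error-inducing images with an ImageNet-pretrained backbone and group the embeddings with a density-based clustering algorithm, so that each cluster suggests a plausible root cause of error. The specific backbone (ResNet-50), preprocessing, and DBSCAN parameters below are assumptions for illustration, not SAFE's exact configuration.

```python
import torch
from sklearn.cluster import DBSCAN
from torchvision import models, transforms

# ImageNet-pretrained backbone used purely as a feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()     # drop the classifier head, keep 2048-d features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(error_images):
    """Map error-inducing PIL images to fixed-length feature vectors."""
    batch = torch.stack([preprocess(img) for img in error_images])
    return backbone(batch).numpy()

def cluster_root_causes(features):
    """Density-based clustering of the embeddings; label -1 marks images
    that remain unclustered (noise)."""
    return DBSCAN(eps=0.5, min_samples=5, metric="cosine").fit_predict(features)
```

Each resulting cluster can then be inspected (and used to select retraining data), in line with the retraining step named in the title.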
- A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNN) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly designed reward function that incorporates some degree of bias to reduce variance and avoid unstable, possibly unbounded payouts.
arXiv Detail & Related papers (2021-03-01T15:55:58Z)
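A rough sketch of the bandit view of neighbor sampling described above: each neighbor of a node is treated as an arm, a UCB-style rule picks which neighbors to sample, and observed rewards are clipped so payouts stay bounded. The reward used here (a normalized message norm) and the plain UCB rule are stand-ins chosen for illustration; the paper's contribution is a purpose-built, bias-introducing reward with regret guarantees, which this sketch does not reproduce.

```python
import numpy as np

class NeighborBandit:
    """UCB-style sampler over one node's neighbors (illustrative only)."""

    def __init__(self, num_neighbors, clip=1.0, c=1.0):
        self.counts = np.zeros(num_neighbors)
        self.values = np.zeros(num_neighbors)   # running mean reward per neighbor
        self.clip, self.c, self.t = clip, c, 0

    def select(self, k):
        """Pick k neighbors by upper confidence bound (unseen arms go first)."""
        self.t += 1
        ucb = self.values + self.c * np.sqrt(np.log(self.t + 1) / (self.counts + 1e-9))
        ucb[self.counts == 0] = np.inf
        return np.argsort(-ucb)[:k]

    def update(self, idx, reward):
        """Clip the reward to keep payouts bounded, then update the running mean."""
        r = min(reward, self.clip)
        self.counts[idx] += 1
        self.values[idx] += (r - self.values[idx]) / self.counts[idx]

# Hypothetical usage: reward is the normalized norm of the sampled neighbor's message.
bandit = NeighborBandit(num_neighbors=10)
messages = np.random.default_rng(2).normal(size=(10, 16))
for _ in range(50):
    for i in bandit.select(k=3):
        bandit.update(i, np.linalg.norm(messages[i]) / np.sqrt(16))
```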
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (ResNet) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
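A sketch of the gradient-norm idea behind GraN described above: backpropagate a loss at the model's own predicted label and use the norm of the resulting parameter gradients as an anomaly score, flagging inputs whose score exceeds a calibrated threshold. The loss choice, the aggregation over all parameters, and the simple thresholding are assumptions for illustration; GraN's actual detector is more refined (and designed to stay time- and parameter-efficient).

```python
import torch
import torch.nn.functional as F

def gradient_norm_score(model, x):
    """Norm of the parameter gradients of the loss taken at the model's own
    prediction; larger scores hint at adversarial or misclassified inputs."""
    model.zero_grad(set_to_none=True)
    logits = model(x.unsqueeze(0))
    pred = logits.argmax(dim=1)
    loss = F.cross_entropy(logits, pred)          # loss w.r.t. the predicted label
    loss.backward()
    squared = sum((p.grad ** 2).sum() for p in model.parameters() if p.grad is not None)
    return torch.sqrt(squared).item()

def flag_suspicious(model, inputs, threshold):
    """Flag inputs whose gradient-norm score exceeds a threshold calibrated
    on clean, correctly classified data."""
    return [gradient_norm_score(model, x) > threshold for x in inputs]
```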