Towards the Quantification of Safety Risks in Deep Neural Networks
- URL: http://arxiv.org/abs/2009.06114v1
- Date: Sun, 13 Sep 2020 23:30:09 GMT
- Title: Towards the Quantification of Safety Risks in Deep Neural Networks
- Authors: Peipei Xu and Wenjie Ruan and Xiaowei Huang
- Abstract summary: In this paper, we define safety risks by requiring that the network's decision align with human perception.
To quantify a risk, we take the maximum radius of the safe norm ball within which no safety risk exists.
In addition to the known adversarial, reachability, and invariant examples, we identify a new class of risk: the uncertainty example.
- Score: 9.161046484753841
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Safety concerns about deep neural networks (DNNs) have been raised when they
are applied to critical sectors. In this paper, we define safety risks by
requiring that the network's decision align with human perception. To enable a
general methodology for quantifying safety risks, we define a generic safety
property and instantiate it to express various safety risks. To quantify a
risk, we take the maximum radius of the safe norm ball within which no safety
risk exists. The computation of the maximum safe radius is reduced to the
computation of the respective Lipschitz metrics. In addition to the known
adversarial, reachability, and invariant examples, we identify a new class of
risk, the uncertainty example, which humans can classify easily but on which
the network is unsure. We develop an algorithm, inspired by derivative-free
optimization techniques and accelerated by tensor-based parallelization on
GPUs, to support efficient computation of the metrics. We perform evaluations
on several benchmark neural networks, including ACAS-Xu, MNIST, CIFAR-10, and
ImageNet networks. The experiments show that our method achieves competitive
performance on safety quantification in terms of both the tightness and the
efficiency of the computation. Importantly, as a generic approach, our method
works with a broad class of safety risks and imposes no restrictions on the
structure of the neural network.
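To make the reduction above concrete: if the network's classification margin at an input x is m(x) and the margin function has a local Lipschitz constant of at most K over a norm ball around x, then the predicted class cannot change within radius m(x)/K, so that ratio lower-bounds the maximum safe radius. The sketch below is a minimal, hypothetical PyTorch illustration of this idea, using derivative-free random sampling batched on the GPU to estimate a local Lipschitz constant; the function name, the L-infinity sampling scheme, and the sample budget are assumptions for illustration, not the authors' algorithm, and the empirical estimate is not a sound certificate.

```python
import torch

def safe_radius_lower_bound(model, x, radius, n_samples=4096,
                            device="cuda" if torch.cuda.is_available() else "cpu"):
    """Hedged sketch: estimate a lower bound on the maximum safe radius of a
    classifier around input x, via a derivative-free, GPU-batched empirical
    estimate of the local Lipschitz constant of the classification margin."""
    model = model.to(device).eval()
    x = x.to(device)
    with torch.no_grad():
        logits = model(x.unsqueeze(0))[0]
        top2 = torch.topk(logits, 2)
        label = top2.indices[0]
        margin = (top2.values[0] - top2.values[1]).item()  # gap to the runner-up class

        # Derivative-free sampling: random perturbations inside the L_inf ball
        # of the given radius, evaluated in a single batched forward pass
        # (inputs are assumed to be scaled to [0, 1]).
        noise = (torch.rand(n_samples, *x.shape, device=device) * 2 - 1) * radius
        pert_logits = model((x.unsqueeze(0) + noise).clamp(0, 1))
        idx = label.view(1, 1).expand(n_samples, 1)
        others = pert_logits.scatter(1, idx, float("-inf"))
        pert_margin = pert_logits[:, label] - others.max(dim=1).values

        # Empirical Lipschitz estimate of the margin over the sampled ball.
        dist = noise.view(n_samples, -1).abs().amax(dim=1).clamp_min(1e-12)
        lipschitz = ((margin - pert_margin) / dist).clamp_min(1e-12).max()

    # If the margin drops by at most `lipschitz` per unit L_inf distance,
    # no class change occurs within margin / lipschitz of x.
    return min(radius, (margin / lipschitz).item())
```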
Related papers
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- A computationally lightweight safe learning algorithm [1.9295598343317182]
We propose a computationally lightweight safe learning algorithm that provides probabilistic safety guarantees by leveraging the Nadaraya-Watson estimator; a minimal sketch of this estimator is given after this list.
We provide theoretical guarantees for the estimates, embed them into a safe learning algorithm, and show numerical experiments on a simulated seven-degrees-of-freedom robot manipulator.
arXiv Detail & Related papers (2023-09-07T12:21:22Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all regions of the property's input domain that are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- Capsa: A Unified Framework for Quantifying Risk in Deep Neural Networks [142.67349734180445]
Existing algorithms that provide risk-awareness to deep neural networks are complex and ad-hoc.
Here we present capsa, a framework for extending models with risk-awareness.
arXiv Detail & Related papers (2023-08-01T02:07:47Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Self-Repairing Neural Networks: Provable Safety for Deep Networks via Dynamic Repair [16.208330991060976]
We propose a way to construct neural network classifiers that dynamically repair violations of non-relational safety constraints.
Our approach is based on a novel self-repairing layer, which provably yields safe outputs.
We show that our approach can be implemented using vectorized computations that execute efficiently on a GPU.
arXiv Detail & Related papers (2021-07-23T20:08:52Z)
- Fast Falsification of Neural Networks using Property Directed Testing [0.1529342790344802]
We propose a falsification algorithm for neural networks that directs the search for a counterexample.
Our algorithm uses a derivative-free sampling-based optimization method.
We show that our falsification procedure detects all the unsafe instances that other verification tools also report as unsafe.
arXiv Detail & Related papers (2021-04-26T09:16:27Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Data-Driven Assessment of Deep Neural Networks with Random Input Uncertainty [14.191310794366075]
We develop a data-driven optimization-based method capable of simultaneously certifying the safety of network outputs and localizing them.
We experimentally demonstrate the efficacy and tractability of the method on a deep ReLU network.
arXiv Detail & Related papers (2020-10-02T19:13:35Z)
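For the "computationally lightweight safe learning" entry above, the sketch below shows the core of the Nadaraya-Watson estimator that the paper builds on: a kernel-weighted average of observed targets. The Gaussian kernel, the bandwidth, and the toy data are assumptions for illustration; the paper's probabilistic safety guarantees and the safe learning loop around the estimator are not reproduced here.

```python
import torch

def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.5):
    """Minimal Nadaraya-Watson kernel regression sketch with a Gaussian kernel:
    predictions are kernel-weighted averages of the observed targets."""
    d2 = torch.cdist(x_query, x_train).pow(2)            # (n_query, n_train) squared distances
    w = torch.exp(-d2 / (2 * bandwidth ** 2))             # Gaussian kernel weights
    w = w / w.sum(dim=1, keepdim=True).clamp_min(1e-12)   # normalise weights per query point
    return w @ y_train                                    # weighted average of targets

# Illustrative usage on a noisy 1-D regression problem (hypothetical data).
x = torch.linspace(0.0, 1.0, 50).unsqueeze(1)
y = torch.sin(6.0 * x) + 0.1 * torch.randn_like(x)
x_new = torch.tensor([[0.25], [0.75]])
print(nadaraya_watson(x, y, x_new, bandwidth=0.1))
```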
This list is automatically generated from the titles and abstracts of the papers on this site.