DeepDyve: Dynamic Verification for Deep Neural Networks
- URL: http://arxiv.org/abs/2009.09663v2
- Date: Fri, 16 Oct 2020 03:00:09 GMT
- Title: DeepDyve: Dynamic Verification for Deep Neural Networks
- Authors: Yu Li, Min Li, Bo Luo, Ye Tian, and Qiang Xu
- Abstract summary: DeepDyve employs pre-trained neural networks that are far simpler and smaller than the original DNN for dynamic verification.
We develop efficient and effective architecture and task exploration techniques to achieve an optimized risk/overhead trade-off in DeepDyve.
- Score: 16.20238078882485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have become one of the enabling technologies in
many safety-critical applications, e.g., autonomous driving and medical image
analysis. DNN systems, however, suffer from various kinds of threats, such as
adversarial example attacks and fault injection attacks. While there are many
defense methods proposed against maliciously crafted inputs, solutions against
faults present in the DNN system itself (e.g., parameters and calculations)
are far less explored. In this paper, we develop a novel lightweight
fault-tolerant solution for DNN-based systems, namely DeepDyve, which employs
pre-trained neural networks that are far simpler and smaller than the original
DNN for dynamic verification. The key to enabling such lightweight checking is
that the smaller neural network only needs to produce approximate results for
the original task without sacrificing fault coverage much. We develop efficient
and effective architecture and task exploration techniques to achieve an
optimized risk/overhead trade-off in DeepDyve. Experimental results show that
DeepDyve can reduce risk by about 90% at around 10% overhead.
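The mechanism behind these numbers is straightforward to picture: the deployed DNN and a much smaller checker network both process each input, and a disagreement between their (possibly coarsened) predictions triggers recovery, e.g. re-execution. The fragment below is only a minimal sketch of that idea, not the authors' implementation; `big_model`, `checker`, `coarsen`, and `recompute` are hypothetical placeholders, and the assumption that the checker works on a coarsened label space is just one way to read "approximate results for the original task".

```python
# Minimal sketch of DeepDyve-style dynamic verification (not the authors' code).
# Assumptions: `big_model` is the deployed DNN, `checker` is a much smaller
# pre-trained network, `coarsen` maps the big model's class ids onto the
# checker's (possibly merged) label space, and `recompute` re-executes the
# big model for recovery. All four names are hypothetical placeholders.
import torch


@torch.no_grad()
def verified_predict(big_model, checker, coarsen, recompute, x):
    """Run the large DNN, cross-check it with the small checker network,
    and fall back to re-execution on disagreement."""
    big_model.eval()
    checker.eval()

    y_big = big_model(x).argmax(dim=-1)   # primary prediction
    y_chk = checker(x).argmax(dim=-1)     # cheap approximate prediction

    agree = coarsen(y_big) == y_chk       # compare in the checker's label space
    if bool(agree.all()):
        return y_big                      # consistent -> accept the result

    # Disagreement may indicate a fault (e.g. a bit flip during inference):
    # re-execute the large model and keep its result for the flagged inputs.
    y_retry = recompute(x).argmax(dim=-1)
    return torch.where(agree, y_big, y_retry)
```

On agreement, the only extra cost per inference is the checker's forward pass, which is why keeping the checker small keeps the overhead low; the expensive recovery path runs only on the (ideally rare) disagreements.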
Related papers
- Model-Agnostic Reachability Analysis on Deep Neural Networks [25.54542656637704]
We develop a model-agnostic verification framework, called DeepAgn.
It can be applied to FNNs, Recurrent Neural Networks (RNNs), or a mixture of both.
It does not require access to the network's internal structures, such as layers and parameters.
arXiv Detail & Related papers (2023-04-03T09:01:59Z)
- OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep Neural Networks [7.797299214812479]
Occlusion is a prevalent and easily realizable semantic perturbation to deep neural networks (DNNs).
It can fool a DNN into misclassifying an input image by occluding some segments, possibly resulting in severe errors.
Most existing robustness verification approaches for DNNs are focused on non-semantic perturbations.
arXiv Detail & Related papers (2023-01-27T18:54:00Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids [7.351477761427584]
False data injection attacks (FDIAs) pose a significant security threat to power system state estimation.
Recent studies have proposed machine learning (ML) techniques, particularly deep neural networks (DNNs), for FDIA detection.
arXiv Detail & Related papers (2021-02-17T22:26:34Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips [11.872768663147776]
Spiking Neural Networks (SNNs) emerged as a promising solution to the accuracy, resource-utilization, and energy-efficiency challenges in machine-learning systems.
While these systems are going mainstream, they have inherent security and reliability issues.
We propose NeuroAttack, a cross-layer attack that threatens the integrity of SNNs by exploiting low-level reliability issues.
arXiv Detail & Related papers (2020-05-16T16:54:00Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
- DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips [29.34622626909906]
We demonstrate the first hardware-based attack on quantized deep neural networks (DNNs).
DeepHammer is able to successfully tamper with DNN inference behavior at run time within a few minutes.
Our work highlights the need to incorporate security mechanisms in future deep learning systems. (A minimal bit-flip sketch appears after this list.)
arXiv Detail & Related papers (2020-03-30T18:51:59Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
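Several of the related papers above (DeepHammer, NeuroAttack) attack DNNs by flipping individual bits of stored parameters, which is exactly the kind of run-time fault that DeepDyve's dynamic verification is meant to catch. The snippet below is a minimal, framework-agnostic illustration of that fault model on an int8-quantized weight array; it is not code from any of the listed papers, and `flip_bit` is a hypothetical helper.

```python
# Minimal illustration of the single-bit-flip fault model on int8-quantized
# weights (a sketch, not code from DeepHammer, NeuroAttack, or DeepDyve).
import numpy as np


def flip_bit(weights_int8: np.ndarray, index: int, bit: int) -> np.ndarray:
    """Return a copy of a 1-D int8 weight array with one bit flipped."""
    assert weights_int8.dtype == np.int8 and 0 <= bit < 8
    faulty = weights_int8.copy()
    raw = faulty.view(np.uint8)        # work on the raw byte so the XOR
    raw[index] ^= np.uint8(1 << bit)   # changes the stored two's-complement value
    return faulty


# Flipping the most significant bit of the weight 3 (0b00000011) yields -125.
w = np.array([3, -7, 20], dtype=np.int8)
print(flip_bit(w, index=0, bit=7))     # -> [-125   -7   20]
```

Flipping the most significant bit of a small positive int8 weight turns it into a large negative value, which is why even a handful of well-targeted bit flips can severely degrade a DNN's accuracy.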