Abstraction and Symbolic Execution of Deep Neural Networks with Bayesian
Approximation of Hidden Features
- URL: http://arxiv.org/abs/2103.03704v1
- Date: Fri, 5 Mar 2021 14:28:42 GMT
- Title: Abstraction and Symbolic Execution of Deep Neural Networks with Bayesian
Approximation of Hidden Features
- Authors: Nicolas Berthier, Amany Alshareef, James Sharp, Sven Schewe, Xiaowei
Huang
- Abstract summary: We propose a novel abstraction method which abstracts a deep neural network and a dataset into a Bayesian network.
We make use of dimensionality reduction techniques to identify hidden features that have been learned by hidden layers of the DNN.
We can derive a runtime monitoring approach to detect rare inputs at operational time.
- Score: 8.723426955657345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intensive research has been conducted on the verification and validation of
deep neural networks (DNNs), aiming to understand if, and how, DNNs can be
applied to safety critical applications. However, existing verification and
validation techniques are limited by their scalability, over both the size of
the DNN and the size of the dataset. In this paper, we propose a novel
abstraction method which abstracts a DNN and a dataset into a Bayesian network
(BN). We make use of dimensionality reduction techniques to identify hidden
features that have been learned by hidden layers of the DNN, and associate each
hidden feature with a node of the BN. On this BN, we can conduct probabilistic
inference to understand the behaviours of the DNN processing data. More
importantly, we can derive a runtime monitoring approach to detect rare
inputs and covariate shift of the input data at operational time. We can also
adapt existing structural coverage-guided testing techniques (i.e., based on
low-level elements of the DNN such as neurons), in order to generate test cases
that better exercise hidden features. We implement and evaluate the BN
abstraction technique using our DeepConcolic tool available at
https://github.com/TrustAI/DeepConcolic.
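The pipeline described in the abstract lends itself to a compact illustration. Below is a minimal sketch, not the actual DeepConcolic implementation: it assumes scikit-learn's PCA as the dimensionality reduction, equal-width discretisation of each component into interval-valued states, and a chain-structured BN over successive layers estimated by frequency counts; the log-likelihood monitor at the end is one plausible reading of the runtime monitoring idea. All function names are illustrative.

```python
# Minimal sketch of the BN abstraction idea (illustrative names, not the
# DeepConcolic API). Assumptions: PCA for dimensionality reduction,
# equal-width interval discretisation, chain-structured BN fit by counting.
import numpy as np
from sklearn.decomposition import PCA

def extract_hidden_features(layer_activations, n_features=3, n_bins=4):
    """Reduce one layer's activations (n_samples x n_neurons) to a few
    'hidden features' and discretise each into interval-valued states."""
    pca = PCA(n_components=n_features)
    comps = pca.fit_transform(layer_activations)   # (n_samples, n_features)
    # Internal bin edges over the observed range of each component.
    edges = [np.linspace(c.min(), c.max(), n_bins + 1)[1:-1] for c in comps.T]
    states = np.stack([np.digitize(c, e) for c, e in zip(comps.T, edges)],
                      axis=1)                      # (n_samples, n_features)
    return pca, edges, states

def fit_chain_bn(states_per_layer):
    """Estimate P(features of layer k+1 | features of layer k) by counting:
    a chain-shaped stand-in for the abstracted Bayesian network."""
    tables = []
    for prev, nxt in zip(states_per_layer, states_per_layer[1:]):
        counts = {}
        for p, n in zip(map(tuple, prev), map(tuple, nxt)):
            d = counts.setdefault(p, {})
            d[n] = d.get(n, 0) + 1
        tables.append({p: {n: c / sum(ns.values()) for n, c in ns.items()}
                       for p, ns in counts.items()})
    return tables

def monitor_score(tables, input_states):
    """Log-likelihood of one input's per-layer feature states under the BN;
    a low score flags the input as rare (the runtime monitoring idea)."""
    ll = 0.0
    for t, p, n in zip(tables, input_states, input_states[1:]):
        ll += np.log(t.get(tuple(p), {}).get(tuple(n), 1e-9))  # smoothed
    return ll
```

A threshold on monitor_score, calibrated on held-out training data, would then flag rare inputs at operational time; an aggregate shift in the score distribution would hint at covariate shift of the input data.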
Related papers
- SFOD: Spiking Fusion Object Detector [10.888008544975662]
Spiking Fusion Object Detector (SFOD) is a simple and efficient approach to SNN-based object detection.
We design a Spiking Fusion Module, achieving, for the first time, the fusion of feature maps from different scales in SNNs applied to event cameras.
We establish state-of-the-art classification results based on SNNs, achieving 93.7% accuracy on the NCAR dataset.
arXiv Detail & Related papers (2024-03-22T13:24:50Z)
- An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks [13.271286153792058]
Quantized neural networks (QNNs) have been developed, with binarized neural networks (BNNs) restricted to binary values as a special case.
This paper presents an automata-theoretic approach to synthesizing BNNs that meet designated properties.
arXiv Detail & Related papers (2023-07-29T06:27:28Z) - Model-Agnostic Reachability Analysis on Deep Neural Networks [25.54542656637704]
We develop a model-agnostic verification framework, called DeepAgn.
It can be applied to feedforward neural networks (FNNs), recurrent neural networks (RNNs), or a mixture of both.
It does not require access to the network's internal structures, such as layers and parameters.
arXiv Detail & Related papers (2023-04-03T09:01:59Z)
- OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep Neural Networks [7.797299214812479]
Occlusion is a prevalent and easily realizable semantic perturbation to deep neural networks (DNNs).
It can fool a DNN into misclassifying an input image by occluding some segments, possibly resulting in severe errors.
Most existing robustness verification approaches for DNNs are focused on non-semantic perturbations.
arXiv Detail & Related papers (2023-01-27T18:54:00Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Taming Reachability Analysis of DNN-Controlled Systems via Abstraction-Based Training [14.787056022080625]
This paper presents a novel abstraction-based approach to bypass the crux of over-approximating DNNs in reachability analysis.
We extend conventional DNNs by inserting an additional abstraction layer, which abstracts a real number to an interval for training.
We devise the first black-box reachability analysis approach for DNN-controlled systems, where trained DNNs are only queried as black-box oracles for the actions on abstract states (a minimal sketch of such an abstraction layer follows this list).
arXiv Detail & Related papers (2022-11-21T00:11:50Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
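As referenced in the Taming Reachability entry above, the following is a minimal sketch of an abstraction layer that snaps each real activation to a representative of its enclosing interval. This is an illustration under stated assumptions, not that paper's implementation: the fixed [lo, hi] grid, the midpoint representative, and the straight-through gradient are all choices made here for brevity.

```python
# Illustrative only: an "abstraction layer" mapping each real activation to
# the midpoint of its enclosing interval, so downstream layers effectively
# train on abstract (interval) states. The straight-through gradient is an
# assumption for this sketch, not necessarily the paper's scheme.
import torch

class IntervalAbstraction(torch.nn.Module):
    def __init__(self, lo=-1.0, hi=1.0, n_intervals=8):
        super().__init__()
        self.lo, self.hi, self.n = lo, hi, n_intervals

    def forward(self, x):
        width = (self.hi - self.lo) / self.n
        # Index of the interval containing each activation (clamped to range).
        idx = torch.clamp((x - self.lo) / width, 0, self.n - 1).floor()
        mid = self.lo + (idx + 0.5) * width   # interval midpoint
        # Straight-through: the forward pass uses the abstract value, while
        # the backward pass lets gradients flow as if this were the identity.
        return x + (mid - x).detach()
```

Inserting such a module after a hidden layer coarsens its outputs into finitely many abstract states, which mirrors that entry's idea of training on intervals rather than concrete reals.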
This list is automatically generated from the titles and abstracts of the papers in this site.