SyReNN: A Tool for Analyzing Deep Neural Networks
- URL: http://arxiv.org/abs/2101.03263v1
- Date: Sat, 9 Jan 2021 00:27:23 GMT
- Title: SyReNN: A Tool for Analyzing Deep Neural Networks
- Authors: Matthew Sotoudeh and Aditya V. Thakur
- Abstract summary: Deep Neural Networks (DNNs) are rapidly gaining popularity in a variety of important domains.
This paper introduces SyReNN, a tool for understanding and analyzing a DNN by computing its symbolic representation.
- Score: 8.55884254206878
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) are rapidly gaining popularity in many
important domains. Formally, DNNs are complicated vector-valued functions that
come in a variety of sizes and applications. Unfortunately, modern DNNs have
been shown to be vulnerable to attacks and to exhibit buggy behavior. This
has motivated recent work on formally analyzing the properties of such DNNs.
This paper introduces SyReNN, a tool for understanding and analyzing a DNN by
computing its symbolic representation. The key insight is to decompose the DNN
into linear functions. Our tool is designed for analyses using low-dimensional
subsets of the input space, a unique design point in the space of DNN analysis
tools. We describe the tool and the underlying theory, then evaluate its use
and performance on three case studies: computing Integrated Gradients,
visualizing a DNN's decision boundaries, and patching a DNN.
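The key insight of decomposing a DNN into linear functions over a low-dimensional input subset can be illustrated in its simplest form. The sketch below is not SyReNN's implementation (the tool handles deep networks and two-dimensional input polytopes); it is a minimal, assumption-laden example that enumerates the points along a one-dimensional line segment at which a single ReLU layer switches linear pieces. All names are illustrative.

```python
import numpy as np

def relu_layer_breakpoints(W, b, x0, x1):
    """Parameter values t in [0, 1] at which relu(W @ x + b) changes
    linear piece along the segment x(t) = (1 - t) * x0 + t * x1.
    Between consecutive returned t's, the layer is one affine map."""
    direction = x1 - x0
    pre0 = W @ x0 + b        # preactivations at t = 0 (linear in t)
    slope = W @ direction    # d(preactivation)/dt for each neuron
    crossings = set()
    for p, s in zip(pre0, slope):
        if s != 0.0:
            t = -p / s       # this neuron's preactivation hits zero here
            if 0.0 < t < 1.0:
                crossings.add(float(t))
    return sorted({0.0, 1.0} | crossings)

# Two neurons that activate at different points along the diagonal:
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.array([-0.5, -0.25])
pieces = relu_layer_breakpoints(W, b, np.zeros(2), np.ones(2))
```

On each interval between consecutive breakpoints the layer reduces to ordinary linear algebra, which is what makes analyses like exact Integrated Gradients tractable; SyReNN composes this idea through all layers of a deep network.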
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs across a wide range of operation counts (OPs), from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task; our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
- Harnessing Neuron Stability to Improve DNN Verification [42.65507402735545]
We present VeriStable, a novel extension of the recently proposed DPLL-based DNN verification approach.
We evaluate the effectiveness of VeriStable across a range of challenging benchmarks, including fully-connected feedforward networks (FNNs), convolutional neural networks (CNNs), and residual networks (ResNets).
Preliminary results show that VeriStable is competitive and outperforms state-of-the-art verification tools, including α,β-CROWN and MN-BaB, the first- and second-place performers in VNN-COMP, respectively.
arXiv Detail & Related papers (2024-01-19T23:48:04Z)
- Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient [11.0542573074431]
Spiking Neural Networks (SNNs) are recognized as a candidate for next-generation neural networks due to their bio-plausibility and energy efficiency.
Recently, researchers have demonstrated that SNNs are able to achieve nearly state-of-the-art performance in image recognition tasks using surrogate gradient training.
arXiv Detail & Related papers (2023-04-25T19:08:29Z)
- Distributed Graph Neural Network Training: A Survey [51.77035975191926]
Graph neural networks (GNNs) are a class of deep learning models that are trained on graphs and have been successfully applied in various domains.
Despite the effectiveness of GNNs, it is still challenging for them to scale efficiently to large graphs.
As a remedy, distributed computing is a promising approach to training large-scale GNNs.
arXiv Detail & Related papers (2022-11-01T01:57:00Z)
- Wide and Deep Graph Neural Network with Distributed Online Learning [174.8221510182559]
Graph neural networks (GNNs) are naturally distributed architectures for learning representations from network data.
Online learning can be leveraged to retrain GNNs at test time, adapting them to changes in the underlying network.
This paper develops the Wide and Deep GNN (WD-GNN), a novel architecture that can be updated with distributed online learning mechanisms.
arXiv Detail & Related papers (2021-07-19T23:56:48Z)
- Abstract Neural Networks [7.396342576390398]
This paper introduces the notion of Abstract Neural Networks (ANNs), which can be used to soundly overapproximate Deep Neural Networks (DNNs).
We present a framework parameterized by the abstract domain and activation functions used in the DNN that can be used to construct a corresponding ANN.
Our framework can be instantiated with other abstract domains such as octagons and polyhedra, as well as other activation functions such as Leaky ReLU, Sigmoid, and Hyperbolic Tangent.
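The interval abstract domain mentioned here is easiest to see as plain interval arithmetic over a single layer. The sketch below is not the paper's construction (which abstracts the network's weights to produce a smaller ANN); it is a hedged, minimal example of propagating an input box through one ReLU layer, with all names illustrative.

```python
import numpy as np

def interval_relu_layer(W, b, lo, hi):
    """Sound elementwise bounds on y = relu(W @ x + b) when each input
    x[i] lies in [lo[i], hi[i]]: positive weights carry lower bounds to
    lower bounds, negative weights swap the roles of lo and hi."""
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    # ReLU is monotone, so it can be applied to the bounds directly.
    return np.maximum(out_lo, 0.0), np.maximum(out_hi, 0.0)

# One neuron computing relu(x0 - x1) over the unit box [0,1] x [0,1]:
lo_out, hi_out = interval_relu_layer(
    np.array([[1.0, -1.0]]), np.array([0.0]),
    np.zeros(2), np.ones(2))
```

Richer domains such as octagons and polyhedra tighten these bounds by tracking relations between variables rather than treating each coordinate independently.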
arXiv Detail & Related papers (2020-09-11T21:17:38Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black-box systems, which complicates their evaluation and validation.
One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- An Efficient Spiking Neural Network for Recognizing Gestures with a DVS Camera on the Loihi Neuromorphic Processor [12.118084418840152]
Spiking Neural Networks (SNNs) have come under the spotlight for machine learning-based applications.
We show our methodology for the design of an SNN that achieves nearly the same accuracy as its corresponding Deep Neural Network (DNN).
Our SNN achieves 89.64% classification accuracy and occupies only 37 Loihi cores.
arXiv Detail & Related papers (2020-05-16T17:00:10Z)
- CodNN -- Robust Neural Networks From Coded Classification [27.38642191854458]
Deep Neural Networks (DNNs) are a revolutionary force in the ongoing information revolution.
DNNs are highly sensitive to noise, whether adversarial or random.
This poses a fundamental challenge for hardware implementations of DNNs, and for their deployment in critical applications such as autonomous driving.
In our approach, either the data or the internal layers of the DNN are encoded with error-correcting codes, and successful computation under noise is guaranteed.
arXiv Detail & Related papers (2020-04-22T17:07:15Z)
- Architecture Disentanglement for Deep Neural Networks [174.16176919145377]
We introduce neural architecture disentanglement (NAD) to explain the inner workings of deep neural networks (DNNs).
NAD learns to disentangle a pre-trained DNN into sub-architectures according to independent tasks, forming information flows that describe the inference processes.
Results show that misclassified images have a high probability of being assigned to task sub-architectures similar to the correct ones.
arXiv Detail & Related papers (2020-03-30T08:34:33Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show that a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.