Abstract Neural Networks
- URL: http://arxiv.org/abs/2009.05660v1
- Date: Fri, 11 Sep 2020 21:17:38 GMT
- Title: Abstract Neural Networks
- Authors: Matthew Sotoudeh and Aditya V. Thakur
- Abstract summary: This paper introduces the notion of Abstract Neural Networks (ANNs), which can be used to soundly overapproximate Deep Neural Networks (DNNs).
We present a framework parameterized by the abstract domain and activation functions used in the DNN that can be used to construct a corresponding ANN.
Our framework can be instantiated with other abstract domains such as octagons and polyhedra, as well as other activation functions such as Leaky ReLU, Sigmoid, and Hyperbolic Tangent.
- Score: 7.396342576390398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) are rapidly being applied to safety-critical
domains such as drone and airplane control, motivating techniques for verifying
the safety of their behavior. Unfortunately, DNN verification is NP-hard, with
current algorithms slowing exponentially with the number of nodes in the DNN.
This paper introduces the notion of Abstract Neural Networks (ANNs), which can
be used to soundly overapproximate DNNs while using fewer nodes. An ANN is like
a DNN except weight matrices are replaced by values in a given abstract domain.
We present a framework parameterized by the abstract domain and activation
functions used in the DNN that can be used to construct a corresponding ANN. We
present necessary and sufficient conditions on the DNN activation functions for
the constructed ANN to soundly over-approximate the given DNN. Prior work on
DNN abstraction was restricted to the interval domain and ReLU activation
function. Our framework can be instantiated with other abstract domains such as
octagons and polyhedra, as well as other activation functions such as Leaky
ReLU, Sigmoid, and Hyperbolic Tangent.
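To make the abstraction concrete: in the interval-domain instantiation, an ANN's weight matrices have interval entries rather than single numbers, and a forward pass propagates sound bounds through the network. The sketch below is a minimal illustration of that special case; the function names and the plain bound-propagation scheme are ours, not the paper's node-merging construction.

```python
import numpy as np

def interval_linear(W_lo, W_hi, x_lo, x_hi):
    """Sound bounds for y = W @ x when each weight W[i, j] lies in
    [W_lo[i, j], W_hi[i, j]] and each input x[j] lies in [x_lo[j], x_hi[j]]."""
    # Each term w * x is bilinear, so its extremes occur at corner combinations.
    cands = np.stack([
        W_lo * x_lo, W_lo * x_hi, W_hi * x_lo, W_hi * x_hi,
    ])  # shape (4, out_dim, in_dim); x broadcasts across the rows of W
    y_lo = cands.min(axis=0).sum(axis=1)
    y_hi = cands.max(axis=0).sum(axis=1)
    return y_lo, y_hi

def ann_forward(layers, x_lo, x_hi):
    """Propagate input bounds through an interval-weight network with ReLU.
    `layers` is a list of (W_lo, W_hi, b) triples."""
    for W_lo, W_hi, b in layers:
        x_lo, x_hi = interval_linear(W_lo, W_hi, x_lo, x_hi)
        # ReLU is monotone, so applying it to both bounds stays sound.
        x_lo, x_hi = np.maximum(x_lo + b, 0.0), np.maximum(x_hi + b, 0.0)
    return x_lo, x_hi
```

The paper's framework additionally specifies how DNN nodes are merged into ANN nodes and which conditions the activation functions must satisfy for the resulting ANN to overapproximate the original DNN.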
Related papers
- Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning [51.386945803485084]
We focus on the task where the agent needs to learn multi-dimensional deterministic policies for control.
Most existing spike-based RL methods take the firing rate as the output of the SNN and convert it into a continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
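For context on the baseline being contrasted: rate decoding averages spike trains into firing rates and maps them to continuous actions with a floating-point fully-connected layer, which is exactly the operation the fully spiking design avoids. A minimal sketch, with illustrative names and tanh squashing:

```python
import numpy as np

def rate_decode_actions(spikes, W_fc, b_fc):
    """Decode continuous actions from binary spike trains.
    spikes: (T, n_neurons) array of 0/1 spikes over T timesteps."""
    rates = spikes.mean(axis=0)            # firing rate per neuron, in [0, 1]
    return np.tanh(W_fc @ rates + b_fc)    # float FC layer -> bounded actions
```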
arXiv Detail & Related papers (2024-01-09T07:31:34Z)
- On the Computational Complexity and Formal Hierarchy of Second Order Recurrent Neural Networks [59.85314067235965]
We extend the theoretical foundation for the second-order recurrent network (2nd RNN).
We prove there exists a class of 2nd RNNs that is Turing-complete with bounded time.
We also demonstrate that second-order RNNs, without memory, outperform modern models such as vanilla RNNs and gated recurrent units in recognizing regular grammars.
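For reference, a second-order recurrent cell replaces the usual weight matrices with a third-order tensor that couples the previous hidden state and the current input multiplicatively. A minimal sketch; the shapes and the sigmoid nonlinearity are illustrative:

```python
import numpy as np

def second_order_step(W, b, h_prev, x):
    """One step of a second-order RNN:
    h[i] = sigma(sum_{j,k} W[i, j, k] * h_prev[j] * x[k] + b[i])."""
    pre = np.einsum('ijk,j,k->i', W, h_prev, x) + b
    return 1.0 / (1.0 + np.exp(-pre))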
arXiv Detail & Related papers (2023-09-26T06:06:47Z)
- Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks [22.532709609646066]
Spiking Neural Networks (SNNs) have attracted great attention due to their distinctive properties of low power consumption and fast inference on neuromorphic hardware.
As the most effective method for obtaining deep SNNs, ANN-SNN conversion has achieved performance comparable to ANNs on large-scale datasets.
In this paper, we theoretically analyze the ANN-SNN conversion error and derive the estimated activation function of SNNs.
We prove that the expected conversion error between SNNs and ANNs is zero, enabling us to achieve high-accuracy and ultra-low-latency SNNs.
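A hedged sketch of a quantization-clip-floor-shift activation of the kind this line of work derives (a step function that mimics the average firing rate of an L-step spiking neuron with firing threshold `lam`); the parameter names are illustrative and the paper should be consulted for the exact form:

```python
import numpy as np

def qcfs(x, lam, L):
    """Quantization-clip-floor-shift activation: the shift of 1/2 is the
    choice that makes the expected ANN-SNN conversion error vanish."""
    return (lam / L) * np.clip(np.floor(x * L / lam + 0.5), 0, L)
```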
arXiv Detail & Related papers (2023-03-08T03:04:53Z)
- SNN2ANN: A Fast and Memory-Efficient Training Framework for Spiking Neural Networks [117.56823277328803]
Spiking neural networks are efficient computation models for low-power environments.
We propose an SNN-to-ANN (SNN2ANN) framework to train SNNs in a fast and memory-efficient way.
Experimental results show that our SNN2ANN-based models perform well on the benchmark datasets.
arXiv Detail & Related papers (2022-06-19T16:52:56Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
A Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
Efficiently training SNNs is a challenge due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which trains the SNN through its spike representation and achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Beyond Classification: Directly Training Spiking Neural Networks for Semantic Segmentation [5.800785186389827]
Spiking Neural Networks (SNNs) have emerged as a low-power alternative to Artificial Neural Networks (ANNs).
In this paper, we explore SNN applications beyond classification and present semantic segmentation networks configured with spiking neurons.
arXiv Detail & Related papers (2021-10-14T21:53:03Z)
- Strengthening the Training of Convolutional Neural Networks By Using Walsh Matrix [0.0]
We modify the training and structure of DNNs to increase classification performance.
A minimum distance network (MDN) following the last layer of the convolutional neural network (CNN) is used as the classifier.
In different application areas, higher classification performance has been observed using the DivFE with fewer nodes.
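A minimum distance network in this sense classifies a feature vector by its distance to per-class reference vectors. A minimal sketch of such a head; the plain class-mean prototypes are our simplification, whereas the paper builds on Walsh-matrix structure:

```python
import numpy as np

def fit_prototypes(features, labels, n_classes):
    """One reference vector per class: here, simply the mean feature vector."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(n_classes)])

def mdn_predict(features, prototypes):
    """Assign each sample to the class whose prototype is nearest."""
    d = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=2)
    return d.argmin(axis=1)
```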
arXiv Detail & Related papers (2021-03-31T18:06:11Z)
- SyReNN: A Tool for Analyzing Deep Neural Networks [8.55884254206878]
Deep Neural Networks (DNNs) are rapidly gaining popularity in a variety of important domains.
This paper introduces SyReNN, a tool for understanding and analyzing a DNN by computing its symbolic representation.
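Restricted to a one-dimensional line in the input space, the symbolic representation amounts to the exact set of breakpoints at which a piecewise-linear (ReLU) network changes affine behavior. A simplified sketch of that computation for fully-connected layers; this illustrates the idea, is not the tool's implementation, and assumes a ReLU after every layer:

```python
import numpy as np

def relu_line_partition(layers, a, b):
    """Exact breakpoints t in [0, 1] at which a ReLU network is non-affine
    along the input segment p(t) = (1 - t) * a + t * b.
    `layers` is a list of (W, bias) pairs; returns (breakpoints, outputs)."""
    ts = np.array([0.0, 1.0])
    vals = np.stack([a, b]).astype(float)    # values at current breakpoints
    for W, bias in layers:
        pre = vals @ W.T + bias              # affine in t between breakpoints
        cross = []
        for i in range(len(ts) - 1):
            lo, hi = pre[i], pre[i + 1]
            # A neuron crossing zero strictly inside an interval adds a breakpoint.
            for j in np.where(lo * hi < 0)[0]:
                frac = lo[j] / (lo[j] - hi[j])
                cross.append(ts[i] + frac * (ts[i + 1] - ts[i]))
        new_ts = np.unique(np.concatenate([ts, np.array(cross)])) if cross else ts
        # Linear interpolation is exact here: each pre-activation is affine
        # in t between the old breakpoints.
        pre_new = np.stack([np.interp(new_ts, ts, pre[:, j])
                            for j in range(pre.shape[1])], axis=1)
        ts, vals = new_ts, np.maximum(pre_new, 0.0)
    return ts, vals
```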
arXiv Detail & Related papers (2021-01-09T00:27:23Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired, network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of time-to-first-spike (TTFS)-encoded neuromorphic systems.
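For reference, TTFS encoding represents an input intensity by the latency of a single spike, so stronger inputs fire earlier and each neuron fires at most once. A minimal sketch of one common linear-latency variant; the paper's exact encoding may differ:

```python
import numpy as np

def ttfs_encode(x, T):
    """Encode intensities x (flat array, values in [0, 1]) as first-spike
    times over T timesteps: stronger inputs fire earlier, and zero inputs
    never fire at all."""
    t = np.where(x > 0, np.round((1.0 - x) * (T - 1)).astype(int), T)
    spikes = np.zeros((T + 1, x.size), dtype=np.uint8)
    spikes[t, np.arange(x.size)] = 1
    return spikes[:T]   # row T is a sentinel meaning 'no spike'; drop it
```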
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
- Fractional Deep Neural Network via Constrained Optimization [0.0]
This paper introduces a novel algorithmic framework for a deep neural network (DNN).
The Fractional-DNN can be viewed as a time-discretization of a nonlinear ordinary differential equation (ODE) that is fractional in time.
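"Fractional in time" means the hidden-state dynamics use a fractional-order time derivative (e.g., Caputo), whose standard explicit discretization, the L1 scheme, makes every layer's update depend on the entire history of states rather than only the previous layer. A minimal sketch of such a forward pass; the layer map `f`, the step size, and the explicit L1 scheme are illustrative assumptions, not the paper's constrained-optimization formulation:

```python
import math
import numpy as np

def fractional_forward(h0, f, n_layers, alpha, dt):
    """Explicit L1 scheme for the Caputo-fractional ODE D^alpha h = f(h),
    0 < alpha < 1. With alpha = 1 the memory term vanishes and this reduces
    to a plain residual update h_k = h_{k-1} + dt * f(h_{k-1})."""
    c = dt**alpha * math.gamma(2.0 - alpha)
    hs = [h0]
    for k in range(1, n_layers + 1):
        # History weights b_j = (j+1)^(1-alpha) - j^(1-alpha), with b_0 = 1.
        memory = sum(((j + 1)**(1 - alpha) - j**(1 - alpha))
                     * (hs[k - j] - hs[k - j - 1])
                     for j in range(1, k))
        hs.append(hs[-1] - memory + c * f(hs[-1]))
    return hs[-1]
```

For example, `f` could be a simple layer such as `lambda h: np.tanh(W @ h + bias)` for some hypothetical weights `W` and `bias`.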
arXiv Detail & Related papers (2020-04-01T21:58:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.