DiffRNN: Differential Verification of Recurrent Neural Networks
- URL: http://arxiv.org/abs/2007.10135v1
- Date: Mon, 20 Jul 2020 14:14:35 GMT
- Title: DiffRNN: Differential Verification of Recurrent Neural Networks
- Authors: Sara Mohammadinejad, Brandon Paulsen, Chao Wang, Jyotirmoy V. Deshmukh
- Abstract summary: Recurrent neural networks (RNNs) have become popular in a variety of applications such as image processing, data classification, speech recognition, and as controllers in autonomous systems.
We propose DIFFRNN, the first differential verification method for RNNs to certify the equivalence of two structurally similar neural networks.
We demonstrate the practical efficacy of our technique on a variety of benchmarks and show that DIFFRNN outperforms state-of-the-art RNN verification tools such as POPQORN.
- Score: 3.4423518864863154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recurrent neural networks (RNNs) such as Long Short Term Memory (LSTM)
networks have become popular in a variety of applications such as image
processing, data classification, speech recognition, and as controllers in
autonomous systems. In practical settings, there is often a need to deploy such
RNNs on resource-constrained platforms such as mobile phones or embedded
devices. As the memory footprint and energy consumption of such components
become a bottleneck, there is interest in compressing and optimizing such
networks using a range of heuristic techniques. However, these techniques do
not guarantee the safety of the optimized network, e.g., against adversarial
inputs, or equivalence of the optimized and original networks. To address this
problem, we propose DIFFRNN, the first differential verification method for
RNNs to certify the equivalence of two structurally similar neural networks.
Existing work on differential verification for ReLU-based feed-forward neural
networks does not apply to RNNs where nonlinear activation functions such as
Sigmoid and Tanh cannot be avoided. RNNs also pose unique challenges such as
handling sequential inputs, complex feedback structures, and interactions
between the gates and states. In DIFFRNN, we overcome these challenges by
bounding nonlinear activation functions with linear constraints and then
solving constrained optimization problems to compute tight bounding boxes on
nonlinear surfaces in a high-dimensional space. The soundness of these bounding
boxes is then proved using the dReal SMT solver. We demonstrate the practical
efficacy of our technique on a variety of benchmarks and show that DIFFRNN
outperforms state-of-the-art RNN verification tools such as POPQORN.
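The abstract's bounding step can be illustrated with a small, self-contained Python/NumPy sketch. Everything below is an assumption for exposition: the helper `sigmoid_linear_bounds` and its tangent/chord scheme are a standard relaxation for S-shaped activations, not DIFFRNN's actual one, which solves constrained optimization problems to obtain tighter bounds.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dsigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def sigmoid_linear_bounds(l, u):
    """Return (al, bl, au, bu) with al*x + bl <= sigmoid(x) <= au*x + bu
    for all x in [l, u].

    Sigmoid is convex on (-inf, 0] and concave on [0, inf), so:
      * on a purely convex segment, the chord is an upper bound and the
        tangent at the midpoint is a lower bound;
      * on a purely concave segment, the roles are swapped;
      * on a mixed segment, fall back to constant bounds, which are
        sound because sigmoid is monotonically increasing.
    """
    if l == u:
        y = sigmoid(l)
        return 0.0, y, 0.0, y
    chord = (sigmoid(u) - sigmoid(l)) / (u - l)
    m = 0.5 * (l + u)
    if u <= 0.0:    # convex segment: tangent below, chord above
        al, bl = dsigmoid(m), sigmoid(m) - dsigmoid(m) * m
        au, bu = chord, sigmoid(l) - chord * l
    elif l >= 0.0:  # concave segment: chord below, tangent above
        al, bl = chord, sigmoid(l) - chord * l
        au, bu = dsigmoid(m), sigmoid(m) - dsigmoid(m) * m
    else:           # mixed segment: sound constant (interval) bounds
        al, bl = 0.0, sigmoid(l)
        au, bu = 0.0, sigmoid(u)
    return al, bl, au, bu

# Quick numerical sanity check on a convex segment.
al, bl, au, bu = sigmoid_linear_bounds(-2.0, -0.5)
xs = np.linspace(-2.0, -0.5, 1001)
assert np.all(al * xs + bl <= sigmoid(xs) + 1e-9)
assert np.all(sigmoid(xs) <= au * xs + bu + 1e-9)
```

The abstract's dReal step can be sketched the same way: encode a violation of the claimed bound as a satisfiability query, so that an unsatisfiable result certifies soundness up to dReal's delta parameter. The snippet below assumes the Python bindings shipped with dreal4 and their documented CheckSatisfiability(formula, delta) entry point; treat the exact formulation as an illustrative assumption, not DIFFRNN's actual harness.

```python
from dreal import Variable, And, CheckSatisfiability, exp  # assumed dreal4 bindings

x = Variable("x")
l, u = -2.0, -0.5
al, bl, _, _ = sigmoid_linear_bounds(l, u)
# Search for a point in [l, u] where the claimed lower bound exceeds
# sigmoid(x); multiplying through by 1 + exp(-x) > 0 avoids division.
violation = And(x >= l, x <= u, (x * al + bl) * (exp(-x) + 1.0) > 1.0)
print(CheckSatisfiability(violation, 1e-3))  # None means no violation found
```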
Related papers
- Accurate Mapping of RNNs on Neuromorphic Hardware with Adaptive Spiking Neurons [2.9410174624086025]
We present a $\Sigma\Delta$-low-pass RNN (lpRNN) for mapping rate-based RNNs to spiking neural networks (SNNs).
An adaptive spiking neuron model encodes signals using $\Sigma\Delta$-modulation and enables precise mapping.
We demonstrate the implementation of the lpRNN on Intel's neuromorphic research chip Loihi.
arXiv Detail & Related papers (2024-07-18T14:06:07Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance with low latency.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- iRNN: Integer-only Recurrent Neural Network [0.8766022970635899]
We present a quantization-aware training method for obtaining a highly accurate integer-only recurrent neural network (iRNN).
Our iRNN maintains similar performance to its full-precision counterpart; deploying it on smartphones improves runtime performance by $2\times$ and reduces the model size by $4\times$.
arXiv Detail & Related papers (2021-09-20T20:17:40Z)
- Pruning and Slicing Neural Networks using Formal Verification [0.2538209532048866]
Deep neural networks (DNNs) play an increasingly important role in various computer systems.
In order to create these networks, engineers typically specify a desired topology, and then use an automated training algorithm to select the network's weights.
Here, we propose to address this challenge by harnessing recent advances in DNN verification.
arXiv Detail & Related papers (2021-05-28T07:53:50Z)
- Skip-Connected Self-Recurrent Spiking Neural Networks with Joint Intrinsic Parameter and Synaptic Weight Training [14.992756670960008]
We propose a new type of RSNN called Skip-Connected Self-Recurrent SNNs (ScSr-SNNs).
ScSr-SNNs can boost performance by up to 2.55% compared with other types of RSNNs trained by state-of-the-art BP methods.
arXiv Detail & Related papers (2020-10-23T22:27:13Z)
- A Fully Tensorized Recurrent Neural Network [48.50376453324581]
We introduce a "fully tensorized" RNN architecture which jointly encodes the separate weight matrices within each recurrent cell.
This approach reduces model size by several orders of magnitude, while still maintaining similar or better performance compared to standard RNNs.
arXiv Detail & Related papers (2020-10-08T18:24:12Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Fractional Deep Neural Network via Constrained Optimization [0.0]
This paper introduces a novel algorithmic framework for a deep neural network (DNN).
Fractional-DNN can be viewed as a time-discretization of a fractional-in-time nonlinear ordinary differential equation (ODE).
arXiv Detail & Related papers (2020-04-01T21:58:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.