Spiking Neural Networks with Single-Spike Temporal-Coded Neurons for
Network Intrusion Detection
- URL: http://arxiv.org/abs/2010.07803v1
- Date: Thu, 15 Oct 2020 14:46:18 GMT
- Title: Spiking Neural Networks with Single-Spike Temporal-Coded Neurons for
Network Intrusion Detection
- Authors: Shibo Zhou, Xiaohua Li
- Abstract summary: Spiking neural networks (SNNs) are attractive due to their strong bio-plausibility and high energy efficiency.
However, their performance falls far behind that of conventional deep neural networks (DNNs).
- Score: 6.980076213134383
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking neural networks (SNNs) are attractive due to their strong
bio-plausibility and high energy efficiency. However, their performance falls
far behind that of conventional deep neural networks (DNNs). In this paper,
considering a general class of single-spike temporal-coded integrate-and-fire
neurons, we analyze the input-output expressions of both leaky and nonleaky
neurons. We show that SNNs built with leaky neurons suffer from an overly
nonlinear and overly complex input-output response, which is the major reason
for their difficult training and low performance. This reason is more
fundamental than the commonly cited problem of nondifferentiable spikes. To
support this claim, we show that SNNs built with nonleaky neurons have a less
complex and less nonlinear input-output response. They can be trained easily
and achieve superior performance, as demonstrated by experiments with these
SNNs on two popular network intrusion detection datasets, the NSL-KDD and
AWID datasets. Our experimental results show that the proposed SNNs outperform
a comprehensive list of DNN models and classic machine learning models. This
paper demonstrates that, contrary to common belief, SNNs can be promising and
competitive.
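For intuition, the following is a minimal sketch of the kind of input-output expression the abstract refers to for a nonleaky, single-spike temporal-coded integrate-and-fire neuron; the linearly increasing postsynaptic kernel and the threshold notation are illustrative assumptions, not necessarily the exact formulation used in the paper:

  V(t) = \sum_i w_i \max(0,\, t - t_i), \qquad \text{spike at the first } t_{\mathrm{out}} \text{ with } V(t_{\mathrm{out}}) = \theta
  \;\Rightarrow\; t_{\mathrm{out}} = \frac{\theta + \sum_{i \in C} w_i t_i}{\sum_{i \in C} w_i}, \qquad C = \{\, i : t_i < t_{\mathrm{out}} \,\}

For a fixed causal set C, the output spike time is a weighted-linear function of the input spike times, whereas the exponential decay of a leaky neuron makes the corresponding expression substantially more nonlinear, which is the training difficulty the abstract attributes to leaky neurons.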
Related papers
- Harnessing Neuron Stability to Improve DNN Verification [42.65507402735545]
We present VeriStable, a novel extension of a recently proposed DPLL-based constraint DNN verification approach.
We evaluate the effectiveness of VeriStable across a range of challenging benchmarks, including fully-connected feedforward networks (FNNs), convolutional neural networks (CNNs), and residual networks (ResNets).
Preliminary results show that VeriStable is competitive and outperforms state-of-the-art verification tools, including $\alpha$-$\beta$-CROWN and MN-BaB, the first and second performers of the VNN-COMP, respectively.
arXiv Detail & Related papers (2024-01-19T23:48:04Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking
Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are built on homogeneous neurons that use a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is challenging to train SNNs efficiently because their spike generation mechanism is non-differentiable.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Toward Robust Spiking Neural Network Against Adversarial Perturbation [22.56553160359798]
Spiking neural networks (SNNs) are increasingly deployed in real-world, efficiency-critical applications.
Researchers have already demonstrated that an SNN can be attacked with adversarial examples.
To the best of our knowledge, this is the first analysis of robust training for SNNs.
arXiv Detail & Related papers (2022-04-12T21:26:49Z) - A Spiking Neural Network Structure Implementing Reinforcement Learning [0.0]
In the present paper, I describe an SNN structure that, seemingly, can be used in a wide range of reinforcement learning tasks.
The SNN structure considered in the paper includes spiking neurons described by a generalization of the LIFAT (leaky integrate-and-fire neuron with adaptive threshold) model.
My concept is based on very general assumptions about RL task characteristics and has no visible limitations on its applicability.
arXiv Detail & Related papers (2022-04-09T09:08:10Z) - Deep Learning in Spiking Phasor Neural Networks [0.6767885381740952]
Spiking Neural Networks (SNNs) have attracted the attention of the deep learning community for use in low-latency, low-power neuromorphic hardware.
In this paper, we introduce Spiking Phasor Neural Networks (SPNNs).
SPNNs are based on complex-valued Deep Neural Networks (DNNs), representing phases by spike times.
arXiv Detail & Related papers (2022-04-01T15:06:15Z) - Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z) - Spiking Neural Networks for Visual Place Recognition via Weighted
Neuronal Assignments [24.754429120321365]
Spiking neural networks (SNNs) offer compelling potential advantages, including energy efficiency and low latencies.
One promising area for high-performance SNNs is template matching and image recognition.
This research introduces the first high performance SNN for the Visual Place Recognition (VPR) task.
arXiv Detail & Related papers (2021-09-14T05:40:40Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z) - Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
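As a rough sketch of the additive structure described above (the notation is illustrative, not taken from the paper), a NAM models the prediction as

  f(x_1, \dots, x_d) \approx \beta + \sum_{i=1}^{d} f_i(x_i),

where each f_i is a small neural network applied to a single input feature, so each feature's contribution can be inspected in isolation.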
arXiv Detail & Related papers (2020-04-29T01:28:32Z) - Event-Based Angular Velocity Regression with Spiking Networks [51.145071093099396]
Spiking Neural Networks (SNNs) process information conveyed as temporal spikes rather than numeric values.
We propose, for the first time, a temporal regression problem: predicting numerical values from the events of an event camera.
We show that we can successfully train an SNN to perform angular velocity regression.
arXiv Detail & Related papers (2020-03-05T17:37:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.