File Classification Based on Spiking Neural Networks
- URL: http://arxiv.org/abs/2004.03953v1
- Date: Wed, 8 Apr 2020 11:50:29 GMT
- Title: File Classification Based on Spiking Neural Networks
- Authors: Ana Stanojevic, Giovanni Cherubini, Timoleon Moraitis, Abu Sebastian
- Abstract summary: We propose a system for file classification in large data sets based on spiking neural networks (SNNs).
The proposed system may represent a valid alternative to classical machine learning algorithms for inference tasks.
- Score: 0.5065947993017157
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a system for file classification in large data sets
based on spiking neural networks (SNNs). File information contained in
key-value metadata pairs is mapped by a novel correlative temporal encoding
scheme to spike patterns that are input to an SNN. The correlation between
input spike patterns is determined by a file similarity measure. Unsupervised
training of such networks using spike-timing-dependent plasticity (STDP) is
addressed first. Then, supervised SNN training is considered by backpropagation
of an error signal that is obtained by comparing the spike pattern at the
output neurons with a target pattern representing the desired class. The
classification accuracy is measured for various publicly available data sets
with tens of thousands of elements, and compared with other learning
algorithms, including logistic regression and support vector machines.
Simulation results indicate that the proposed SNN-based system using memristive
synapses may represent a valid alternative to classical machine learning
algorithms for inference tasks, especially in environments with asynchronous
ingest of input data and limited resources.
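The correlative temporal encoding described above can be illustrated with a toy sketch: each key-value metadata pair is mapped deterministically to a spike event (an input neuron and a firing time), so files that share pairs produce correlated spike patterns. The hashing scheme, neuron count, and time window below are illustrative assumptions, not the paper's actual encoder.

```python
import hashlib

def encode_file(metadata, num_neurons=64, t_max=100):
    """Toy temporal encoding: each key-value pair deterministically
    maps to one (neuron, spike_time) event, so files sharing metadata
    pairs produce overlapping spike patterns. This is a hypothetical
    sketch, not the paper's exact correlative encoding scheme."""
    spikes = set()
    for key, value in metadata.items():
        h = hashlib.sha256(f"{key}={value}".encode()).digest()
        neuron = h[0] % num_neurons  # which input neuron fires
        t = h[1] % t_max             # when it fires (time step)
        spikes.add((neuron, t))
    return spikes

# Files with shared key-value pairs yield identical spike events,
# giving correlated input patterns for similar files.
a = encode_file({"owner": "alice", "type": "log", "year": "2020"})
b = encode_file({"owner": "alice", "type": "log", "year": "2019"})
shared = a & b
```

Under this scheme, the overlap between two files' spike sets grows with the number of metadata pairs they share, mirroring the file similarity measure the abstract refers to.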
Related papers
- Unveiling the Power of Sparse Neural Networks for Feature Selection [60.50319755984697]
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
We show that SNNs trained with dynamic sparse training (DST) algorithms can achieve, on average, more than 50% memory and 55% FLOPs reduction.
arXiv Detail & Related papers (2024-08-08T16:48:33Z) - GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based
Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z) - Supervised learning of spatial features with STDP and homeostasis using Spiking Neural Networks on SpiNNaker [42.057348666938736]
This paper shows a new method to perform supervised learning on Spiking Neural Networks (SNNs), using Spike Timing Dependent Plasticity (STDP) and homeostasis.
An SNN is trained to recognise one or multiple patterns, and performance metrics are extracted to evaluate the network.
This method of training an SNN to detect spatial patterns may be applied to pattern recognition in static images or traffic analysis in computer networks.
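The pair-based STDP rule referenced in this entry is commonly written with exponential time windows: a synapse is potentiated when the presynaptic spike precedes the postsynaptic spike and depressed otherwise. A minimal sketch of the textbook form follows; the parameter values are illustrative, not those used on SpiNNaker.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight update (textbook exponential-window
    form; parameters are illustrative assumptions)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fires before post -> potentiation (LTP)
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post fires before pre -> depression (LTD)
        return -a_minus * math.exp(dt / tau)
    return 0.0    # coincident spikes: no change in this sketch
```

Homeostasis, as used in the paper, would additionally regulate each neuron's overall firing rate so that no single neuron dominates; that mechanism is omitted from this sketch.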
arXiv Detail & Related papers (2023-12-05T10:53:31Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Set-based Neural Network Encoding Without Weight Tying [91.37161634310819]
We propose a neural network weight encoding method for network property prediction.
Our approach is capable of encoding neural networks in a model zoo of mixed architectures.
We introduce two new tasks for neural network property prediction: cross-dataset and cross-architecture.
arXiv Detail & Related papers (2023-05-26T04:34:28Z) - Parallel Neural Networks in Golang [0.0]
This paper describes the design and implementation of parallel neural networks (PNNs) with the novel programming language Golang.
Golang and its inherent parallelization support proved well suited to parallel neural network simulation, yielding considerably decreased processing times compared to sequential variants.
arXiv Detail & Related papers (2023-04-19T11:56:36Z) - Spiking Generative Adversarial Networks With a Neural Network
Discriminator: Local Training, Bayesian Models, and Continual Meta-Learning [31.78005607111787]
Training neural networks to reproduce spiking patterns is a central problem in neuromorphic computing.
This work proposes to train SNNs so as to match distributions of spiking signals rather than individual spiking signals.
arXiv Detail & Related papers (2021-11-02T17:20:54Z) - BioLCNet: Reward-modulated Locally Connected Spiking Neural Networks [0.6193838300896449]
We propose a spiking neural network (SNN) trained using spike-timing-dependent plasticity (STDP) and its reward-modulated variant (R-STDP) learning rules.
Our network consists of a rate-coded input layer followed by a locally connected hidden layer and a decoding output layer.
We used the MNIST dataset to obtain image classification accuracy and to assess the robustness of our rewarding system to varying target responses.
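A rate-coded input layer of the kind this entry describes is often implemented as Bernoulli (Poisson-like) spike trains whose per-step firing probability scales with pixel intensity. The following is a minimal sketch with illustrative parameters, not BioLCNet's actual encoder settings.

```python
import random

def rate_code(pixel, t_steps=100, max_rate=0.5, rng=None):
    """Rate-code a normalized pixel intensity in [0, 1] as a binary
    spike train: brighter pixels fire more often. Generic sketch;
    the paper's exact encoding may differ."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    p = pixel * max_rate           # per-step firing probability
    return [1 if rng.random() < p else 0 for _ in range(t_steps)]

bright = rate_code(1.0)  # high intensity -> dense spike train
dark = rate_code(0.1)    # low intensity -> sparse spike train
```

With a shared random sequence, the spikes of a dim pixel form a subset of those of a bright one, so spike counts increase monotonically with intensity.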
arXiv Detail & Related papers (2021-09-12T15:28:48Z) - FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on the Answer Set semantics, with neural networks, in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z) - NSL: Hybrid Interpretable Learning From Noisy Raw Data [66.15862011405882]
This paper introduces a hybrid neural-symbolic learning framework, called NSL, that learns interpretable rules from labelled unstructured data.
NSL combines pre-trained neural networks for feature extraction with FastLAS, a state-of-the-art ILP system for rule learning under the answer set semantics.
We demonstrate that NSL is able to learn robust rules from MNIST data and achieve comparable or superior accuracy when compared to neural network and random forest baselines.
arXiv Detail & Related papers (2020-12-09T13:02:44Z) - Multi-Sample Online Learning for Probabilistic Spiking Neural Networks [43.8805663900608]
Spiking Neural Networks (SNNs) capture some of the efficiency of biological brains for inference and learning.
This paper introduces an online learning rule based on generalized expectation-maximization (GEM).
Experimental results on structured output memorization and classification on a standard neuromorphic data set demonstrate significant improvements in terms of log-likelihood, accuracy, and calibration.
arXiv Detail & Related papers (2020-07-23T10:03:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.