A Unified Platform to Evaluate STDP Learning Rule and Synapse Model using Pattern Recognition in a Spiking Neural Network
- URL: http://arxiv.org/abs/2506.19377v1
- Date: Tue, 24 Jun 2025 07:10:43 GMT
- Title: A Unified Platform to Evaluate STDP Learning Rule and Synapse Model using Pattern Recognition in a Spiking Neural Network
- Authors: Jaskirat Singh Maskeen, Sandip Lashkare
- Abstract summary: We develop a unified platform to evaluate Ideal, Linear, and Non-linear $\text{Pr}_{0.7}\text{Ca}_{0.3}\text{MnO}_{3}$ memristor-based synapse models. On MNIST with a small training set and a large test set, our two-layer SNN with ideal, 25-state, and 12-state nonlinear memristor synapses achieves 92.73%, 91.07%, and 80% accuracy, respectively.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a unified platform to evaluate Ideal, Linear, and Non-linear $\text{Pr}_{0.7}\text{Ca}_{0.3}\text{MnO}_{3}$ memristor-based synapse models, each progressively closer to hardware realism, alongside four STDP learning rules in a two-layer SNN with LIF neurons and adaptive thresholds for five-class MNIST classification. On MNIST with a small training set and a large test set, our two-layer SNN with ideal, 25-state, and 12-state nonlinear memristor synapses achieves 92.73%, 91.07%, and 80% accuracy, respectively, while converging faster and using fewer parameters than comparable ANN/CNN baselines.
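To make the moving parts concrete, here is a minimal, self-contained sketch (not the authors' code) of the ingredients the abstract names: a LIF neuron with an adaptive threshold, a pair-based STDP update, and weights snapped to a limited set of nonlinear conductance levels as a stand-in for a 12-state memristor synapse. All constants, the level spacing, and the exact update forms are illustrative assumptions.

```python
# Hedged sketch of a LIF neuron with adaptive threshold and STDP, with weights
# quantized to a small set of nonlinearly spaced conductance states.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_STATES = 784, 12          # input size; 12-state nonlinear synapse
TAU_MEM, TAU_TRACE = 20.0, 20.0   # membrane / STDP-trace time constants (ms)
DT = 1.0                          # simulation step (ms)

w = rng.uniform(0.0, 1.0, N_IN)   # synaptic weights in [0, 1]
levels = 1 - np.exp(-np.linspace(0, 3, N_STATES))  # nonlinear conductance levels
levels /= levels[-1]

v, theta = 0.0, 1.0               # membrane potential, adaptive threshold
pre_trace = np.zeros(N_IN)

def quantize(x):
    """Snap each weight to the nearest discrete memristor state."""
    idx = np.abs(levels[None, :] - x[:, None]).argmin(axis=1)
    return levels[idx]

for t in range(100):
    spikes_in = rng.random(N_IN) < 0.02                # Poisson-like input spikes
    pre_trace = pre_trace * np.exp(-DT / TAU_TRACE) + spikes_in

    v += DT / TAU_MEM * (-v) + w @ spikes_in.astype(float)  # leaky integration
    if v >= theta:                                     # post-synaptic spike
        v = 0.0
        theta += 0.05                                  # adaptive threshold bumps up
        # pair-based STDP: potentiate recently active inputs,
        # depress inputs with little recent activity
        w += 0.01 * pre_trace - 0.005 * (pre_trace < 1e-3)
        w = quantize(np.clip(w, 0.0, 1.0))
    theta = 1.0 + (theta - 1.0) * np.exp(-DT / 100.0)  # threshold decays back
```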
Related papers
- Entanglement Classification of Arbitrary Three-Qubit States via Artificial Neural Networks [2.715284063484557]
We design and implement artificial neural networks (ANNs) to detect and classify entanglement for three-qubit systems.
The models are trained and validated on a simulated dataset of randomly generated states.
Remarkably, we find that feeding only 7 diagonal elements of the density matrix into the ANN results in an accuracy greater than 94% for both tasks.
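As a rough illustration of that input pipeline (with a random stand-in for the simulated dataset and assumed layer sizes), one might feed the 7 independent diagonal elements into a small MLP; the 8th diagonal element is fixed by trace = 1, so it carries no extra information:

```python
# Hedged sketch: an MLP classifier over the 7 free diagonal elements of a
# 3-qubit density matrix. Data and labels below are random placeholders,
# not the authors' simulated states; layer sizes are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def random_diagonals(n):
    """Random probability vectors of length 8 (diagonals of valid states)."""
    d = rng.dirichlet(np.ones(8), size=n)
    return d[:, :7]                      # drop the redundant 8th entry

X = random_diagonals(1000)
y = rng.integers(0, 2, size=1000)        # placeholder entangled/separable labels

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X, y)
```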
arXiv Detail & Related papers (2024-11-18T06:50:10Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way to practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Prompt Tuning for Parameter-efficient Medical Image Segmentation [79.09285179181225]
We propose and investigate several contributions to achieve a parameter-efficient but effective adaptation for semantic segmentation on two medical imaging datasets.
We pre-train this architecture with a dedicated dense self-supervision scheme based on assignments to online generated prototypes.
We demonstrate that the resulting neural network model is able to narrow the gap between fully fine-tuned and parameter-efficiently adapted models.
arXiv Detail & Related papers (2022-11-16T21:55:05Z)
- Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs), represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers, are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z)
- Text Classification in Memristor-based Spiking Neural Networks [0.0]
We develop a simulation framework with a virtual memristor array to demonstrate a sentiment analysis task on the IMDB movie reviews dataset.
We achieve a classification accuracy of 85.88% by converting a pre-trained ANN to a memristor-based SNN and 84.86% by training the memristor-based SNN directly.
We also investigate how global parameters such as spike-train length, read noise, and the weight-update stop condition affect the neural networks in both approaches.
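As a hedged illustration of one of those global parameters, the sketch below applies multiplicative Gaussian read noise to a virtual crossbar's conductances at read time; the noise model and magnitude are assumptions, not the paper's setup:

```python
# Illustrative read-noise model for a virtual memristor array.
import numpy as np

rng = np.random.default_rng(0)

def noisy_read(weights, sigma=0.05):
    """Simulate a memristor array read with multiplicative Gaussian noise."""
    return weights * (1.0 + sigma * rng.standard_normal(weights.shape))

w = rng.uniform(0.0, 1.0, size=(128, 64))   # virtual crossbar conductances
w_read = noisy_read(w)
print(np.abs(w_read - w).mean())            # average read disturbance
```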
arXiv Detail & Related papers (2022-07-27T18:08:31Z)
- Supervised Training of Siamese Spiking Neural Networks with Earth's Mover Distance [4.047840018793636]
This study adapts the highly versatile siamese neural network model to the event data domain.
We introduce a supervised training framework for optimizing Earth's Mover Distance between spike trains with spiking neural networks (SNNs).
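For intuition, Earth's Mover Distance between two spike trains (viewed as 1-D distributions over time) has a closed form: the L1 distance between their cumulative distributions. The sketch below is a minimal version of that computation; the binning and normalization choices are assumptions, not the paper's setup:

```python
# Minimal 1-D EMD between spike trains via cumulative-sum differences.
import numpy as np

def spike_train_emd(t_a, t_b, t_max, n_bins=100):
    """EMD between two spike trains given as arrays of spike times."""
    edges = np.linspace(0.0, t_max, n_bins + 1)
    h_a, _ = np.histogram(t_a, bins=edges)
    h_b, _ = np.histogram(t_b, bins=edges)
    p_a = h_a / max(h_a.sum(), 1)        # normalize to unit mass
    p_b = h_b / max(h_b.sum(), 1)
    # EMD = integral of |CDF_a - CDF_b| over time
    return np.abs(np.cumsum(p_a - p_b)).sum() * (t_max / n_bins)

print(spike_train_emd(np.array([10.0, 30.0]), np.array([12.0, 35.0]), 50.0))  # 3.5
```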
arXiv Detail & Related papers (2022-02-20T00:27:57Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
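A toy version of the stacked-estimation idea, under stated assumptions: fit a per-layer latency model on a synthetic micro-kernel benchmark, then sum per-layer predictions for a whole network. The linear feature map below is an assumption for illustration, not ANNETTE's actual model family:

```python
# Toy stacked execution-time estimator: per-layer model fitted on synthetic
# micro-kernel data, summed over layers for a whole-network estimate.
import numpy as np

rng = np.random.default_rng(0)

# Pretend micro-kernel benchmark: latency grows with MACs plus fixed overhead.
macs = rng.uniform(1e6, 1e9, size=200)
latency = 0.4e-9 * macs + 0.1e-3 + rng.normal(0, 1e-5, size=200)

coef = np.polyfit(macs, latency, deg=1)      # fit the per-layer model

def estimate_network(layer_macs):
    """Stacked estimate: sum of per-layer model predictions."""
    return sum(np.polyval(coef, m) for m in layer_macs)

print(estimate_network([5e8, 2e8, 1e8]))     # seconds, illustrative
```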
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- NL-CNN: A Resources-Constrained Deep Learning Model based on Nonlinear Convolution [0.0]
A novel convolutional neural network model, abbreviated NL-CNN, is proposed, in which nonlinear convolution is emulated by a cascade of convolution + nonlinearity layers.
Performance evaluation on several widely known datasets is provided, highlighting several relevant features of the model.
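A hedged sketch of what such a cascade might look like (assumed structure, channel counts, and depth, not the authors' code):

```python
# Illustrative "nonlinear convolution" block: a cascade of conv + nonlinearity.
import torch
import torch.nn as nn

class NLConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, depth=2):
        super().__init__()
        layers = []
        for i in range(depth):
            layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                                 kernel_size=3, padding=1),
                       nn.ReLU()]
        self.cascade = nn.Sequential(*layers)

    def forward(self, x):
        return self.cascade(x)

x = torch.randn(1, 3, 32, 32)
print(NLConvBlock(3, 16)(x).shape)   # torch.Size([1, 16, 32, 32])
```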
arXiv Detail & Related papers (2021-01-30T13:38:42Z)
- Generating Efficient DNN-Ensembles with Evolutionary Computation [3.28217012194635]
We leverage ensemble learning as a tool for the creation of faster, smaller, and more accurate deep learning models.
We run EARN on 10 image classification datasets with an initial pool of 32 state-of-the-art DCNNs on both CPU and GPU platforms.
We generate models with speedups up to $7.60\times$, reductions of parameters by $10\times$, or increases in accuracy up to $6.01\%$ relative to the best DNN in the pool.
arXiv Detail & Related papers (2020-09-18T09:14:56Z)
- L-Vector: Neural Label Embedding for Domain Adaptation [62.112885747045766]
We propose a novel neural label embedding (NLE) scheme for the domain adaptation of a deep neural network (DNN) acoustic model with unpaired data samples.
NLE achieves up to 14.1% relative word error rate reduction over direct re-training with one-hot labels.
arXiv Detail & Related papers (2020-04-25T06:40:31Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
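A rough sketch of the underlying idea: align the neurons of one layer to the other before averaging weights. The version below uses a hard one-to-one assignment (a special case of optimal transport) rather than the paper's full OT formulation, and is purely illustrative:

```python
# Simplified layer fusion: match neurons across two networks, then average.
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_layers(w_a, w_b):
    """Fuse two weight matrices (out_dim x in_dim) of the same shape."""
    cost = np.linalg.norm(w_a[:, None, :] - w_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)   # match neurons of B to A
    return 0.5 * (w_a[rows] + w_b[cols])

rng = np.random.default_rng(0)
fused = fuse_layers(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
print(fused.shape)   # (4, 8)
```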
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show that a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)