CoLaNET -- A Spiking Neural Network with Columnar Layered Architecture for Classification
- URL: http://arxiv.org/abs/2409.01230v5
- Date: Sat, 14 Sep 2024 09:34:24 GMT
- Title: CoLaNET -- A Spiking Neural Network with Columnar Layered Architecture for Classification
- Authors: Mikhail Kiselev
- Abstract summary: I describe a spiking neural network (SNN) architecture which can be used in a wide range of supervised learning classification tasks.
It is assumed that all participating signals (the classified object description, the correct class label and the SNN decision) have a spiking nature.
I illustrate the high performance of my network on a task related to model-based reinforcement learning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In the present paper, I describe a spiking neural network (SNN) architecture which can be used in a wide range of supervised learning classification tasks. It is assumed that all participating signals (the classified object description, the correct class label and the SNN decision) have a spiking nature. The distinctive feature of this architecture is a combination of prototypical network structures corresponding to different classes and to significantly distinct instances of one class (=columns), with functionally differing populations of neurons inside the columns (=layers). The other distinctive feature is a novel combination of anti-Hebbian and dopamine-modulated plasticity. The plasticity rules are local and do not use the backpropagation principle. Besides that, as in my previous studies, I was guided by the requirement that all neuron/plasticity models should be easily implementable on modern neurochips. I illustrate the high performance of my network on a task related to model-based reinforcement learning, namely, the evaluation of the proximity of an external world state to the target state.
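To make the learning mechanism concrete, here is a minimal sketch of the two plasticity ingredients the abstract names: an anti-Hebbian term and a dopamine-modulated, eligibility-trace term. The paper's exact equations are not given here, so the update rules, trace dynamics and constants below are illustrative assumptions, not CoLaNET's actual rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 100, 10
w = rng.uniform(0.0, 0.5, size=(n_post, n_pre))   # feed-forward weights
elig = np.zeros_like(w)                           # Hebbian eligibility trace

A_ANTI = 0.002   # anti-Hebbian learning rate (assumed constant)
A_DA = 0.01      # dopamine-modulated learning rate (assumed constant)
TAU_E = 20.0     # eligibility decay, in time steps (assumed constant)

def local_update(w, elig, pre, post, dopamine):
    """One local plasticity step; no backpropagation is involved."""
    coincidence = np.outer(post, pre)       # pre/post spike pairings
    w -= A_ANTI * coincidence               # anti-Hebbian: correlation depresses
    elig += coincidence - elig / TAU_E      # decaying Hebbian trace
    w += A_DA * dopamine * elig             # reward (dopamine) gates the trace
    np.clip(w, 0.0, 1.0, out=w)             # keep weights in a bounded range
    return w, elig

# One simulated time step: random input/output spikes and a reward pulse.
pre = (rng.random(n_pre) < 0.05).astype(float)
post = (rng.random(n_post) < 0.05).astype(float)
w, elig = local_update(w, elig, pre, post, dopamine=1.0)
```

Both updates use only locally available quantities (pre/post spikes and a global dopamine scalar), which matches the abstract's requirement of local, backpropagation-free rules.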
Related papers
- Neuro-symbolic Rule Learning in Real-world Classification Tasks [75.0907310059298]
We extend pix2rule's neural DNF module to support rule learning in real-world multi-class and multi-label classification tasks.
We propose a novel extended model called neural DNF-EO (Exactly One) which enforces mutual exclusivity in multi-class classification.
arXiv Detail & Related papers (2023-03-29T13:27:14Z)
- Dynamical systems' based neural networks [0.7874708385247353]
We build neural networks using a suitable, structure-preserving, numerical time-discretisation.
The structure of the neural network is then inferred from the properties of the ODE vector field.
We present two universal approximation results and demonstrate how to impose particular properties on the neural networks (a generic layer-as-integrator-step sketch follows this entry).
arXiv Detail & Related papers (2022-10-05T16:30:35Z)
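As a generic illustration of the idea referenced above, the sketch below builds a "network" whose layers are symplectic Euler steps of a Hamiltonian-like ODE, so a property of the vector field (the symplectic structure) carries over to the network. The choice of integrator and potential are assumptions for illustration, not the cited paper's specific construction.

```python
import numpy as np

def layer_symplectic_euler(q, p, grad_V, h=0.1):
    """One layer = one symplectic Euler step of q' = p, p' = -dV/dq."""
    p_next = p - h * grad_V(q)   # momentum update from the potential gradient
    q_next = q + h * p_next      # position update from the new momentum
    return q_next, p_next

# A tiny fixed "potential" V(q) = 0.5 * q^T A q standing in for learned weights.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = A @ A.T / 4.0                # symmetric positive semi-definite
grad_V = lambda q: A @ q

q, p = rng.standard_normal(4), np.zeros(4)
for _ in range(10):              # a 10-layer network = 10 integrator steps
    q, p = layer_symplectic_euler(q, p, grad_V)
print(np.round(q, 3))
```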
- Do We Really Need a Learnable Classifier at the End of Deep Neural Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialised as an ETF and kept fixed during training.
Our experimental results show that our method is able to achieve similar performance on image classification for balanced datasets (a minimal ETF construction is sketched after this entry).
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
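The fixed-classifier idea in the previous entry can be illustrated with a simplex equiangular tight frame (ETF): a frozen final layer whose class vectors are maximally separated, so only the feature extractor would need training. The dimensions and random rotation below are assumed for illustration, not the cited paper's exact setup.

```python
import numpy as np

def simplex_etf(num_classes: int, feat_dim: int, seed: int = 0) -> np.ndarray:
    """Return a (feat_dim, num_classes) simplex ETF classifier matrix."""
    assert feat_dim >= num_classes
    rng = np.random.default_rng(seed)
    # Random orthonormal columns via QR decomposition.
    u, _ = np.linalg.qr(rng.standard_normal((feat_dim, num_classes)))
    k = num_classes
    center = np.eye(k) - np.ones((k, k)) / k   # remove the class mean
    return np.sqrt(k / (k - 1)) * u @ center   # equal-norm, equiangular columns

W = simplex_etf(num_classes=10, feat_dim=64)   # frozen; never trained
# Columns are unit norm and pairwise cosine is -1/(K-1): maximal separation.
gram = W.T @ W
print(round(gram[0, 0], 4), round(gram[0, 1], 4))   # -> 1.0 -0.1111

features = np.random.default_rng(1).standard_normal((4, 64))  # e.g. backbone output
logits = features @ W                          # fixed classification head
```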
- Towards Disentangling Information Paths with Coded ResNeXt [11.884259630414515]
We take a novel approach to enhance the transparency of the function of the whole network.
We propose a neural network architecture for classification, in which the information that is relevant to each class flows through specific paths.
arXiv Detail & Related papers (2022-02-10T21:45:49Z)
- Neural networks with linear threshold activations: structure and algorithms [1.795561427808824]
We show that 2 hidden layers are necessary and sufficient to represent any function representable in the class.
We also give precise bounds on the sizes of the neural networks required to represent any function in the class.
We propose a new class of neural networks that we call shortcut linear threshold networks.
arXiv Detail & Related papers (2021-11-15T22:33:52Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
- Exploiting Heterogeneity in Operational Neural Networks by Synaptic Plasticity [87.32169414230822]
The recently proposed network model, Operational Neural Networks (ONNs), can generalize conventional Convolutional Neural Networks (CNNs).
This study focuses on searching for the best-possible operator set(s) for the hidden neurons of the network, based on the Synaptic Plasticity paradigm that constitutes the essential learning theory in biological neurons.
Experimental results over highly challenging problems demonstrate that the elite ONNs, even with few neurons and layers, can achieve learning performance superior to that of GIS-based ONNs.
arXiv Detail & Related papers (2020-08-21T19:03:23Z)
- Locality Guided Neural Networks for Explainable Artificial Intelligence [12.435539489388708]
We propose a novel algorithm for backpropagation, called Locality Guided Neural Network (LGNN).
LGNN preserves locality between neighbouring neurons within each layer of a deep network.
In our experiments, we train various VGG and Wide ResNet (WRN) networks for image classification on CIFAR100.
arXiv Detail & Related papers (2020-07-12T23:45:51Z)
- Learn Class Hierarchy using Convolutional Neural Networks [0.9569316316728905]
We propose a new architecture for hierarchical classification of images, introducing a stack of deep linear layers trained with cross-entropy and center loss combined (a sketch of this combined loss follows this entry).
We experimentally show that our hierarchical classifier presents advantages over traditional classification approaches, finding application in computer vision tasks.
arXiv Detail & Related papers (2020-05-18T12:06:43Z)
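As referenced in the previous entry, here is a minimal sketch of a cross-entropy plus center-loss objective, where the center loss pulls each feature toward its class centre. The weighting constant and the centre handling are standard choices assumed here, not details taken from the cited paper.

```python
import numpy as np

LAMBDA = 0.01  # trade-off between the two loss terms (assumed)

def cross_entropy(logits: np.ndarray, labels: np.ndarray) -> float:
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()

def center_loss(feats: np.ndarray, labels: np.ndarray,
                centers: np.ndarray) -> float:
    """Mean squared distance of each feature to its class centre."""
    return 0.5 * ((feats - centers[labels]) ** 2).sum(axis=1).mean()

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16))     # backbone features (batch of 8)
logits = rng.standard_normal((8, 3))     # classifier outputs, 3 classes
labels = rng.integers(0, 3, size=8)
centers = np.zeros((3, 16))              # learnable class centres

total = cross_entropy(logits, labels) + LAMBDA * center_loss(feats, labels, centers)
print(round(total, 4))
```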
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy (a toy illustration of this claim appears after this entry).
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
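The XOR claim in the previous entry can be checked with a toy example: a single neuron with a non-monotonic activation separates XOR, which no monotonic activation can do. The Gaussian "bump" below is an assumed stand-in, not the cited paper's exact ADA function.

```python
import numpy as np

def bump(z: np.ndarray, sigma: float = 0.4) -> np.ndarray:
    """Non-monotonic activation: responds most strongly near z = 0."""
    return np.exp(-z ** 2 / (2 * sigma ** 2))

x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
target = np.array([0, 1, 1, 0])           # XOR truth table

w, b = np.array([1.0, 1.0]), -1.0         # hand-set weights; one neuron only
pred = (bump(x @ w + b) > 0.5).astype(int)
print(pred, (pred == target).all())       # -> [0 1 1 0] True
```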