STDP enhances learning by backpropagation in a spiking neural network
- URL: http://arxiv.org/abs/2102.10530v1
- Date: Sun, 21 Feb 2021 06:55:02 GMT
- Title: STDP enhances learning by backpropagation in a spiking neural network
- Authors: Kotaro Furuya and Jun Ohkubo
- Abstract summary: The proposed method improves the accuracy without additional labeling when a small amount of labeled data is used.
The proposed learning method can be implemented on event-driven systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A semi-supervised learning method for spiking neural networks is proposed.
The proposed method consists of supervised learning by backpropagation and
subsequent unsupervised learning by spike-timing-dependent plasticity (STDP),
which is a biologically plausible learning rule. Numerical experiments show
that the proposed method improves the accuracy without additional labeling when
a small amount of labeled data is used. This feature has not been achieved by
existing semi-supervised learning methods for discriminative models. The
proposed learning method can be implemented on event-driven systems, so it
would be highly efficient for real-time problems if implemented on
neuromorphic hardware. The results suggest that, when applied after supervised
learning, STDP plays a role beyond self-organization, in contrast to the
previous approach of using STDP as pre-training, where it is interpreted as
self-organization.
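To make the two-stage scheme concrete, below is a minimal numpy sketch of the second, unsupervised stage: a pair-based STDP update applied to a weight matrix that has already been trained by backpropagation. The exponential window, amplitudes, and single-spike-per-neuron setting are illustrative assumptions, not the paper's exact network or constants.

```python
import numpy as np

# Pair-based STDP update for one layer, applied after supervised
# (backpropagation) training.  The constants and exponential window
# are illustrative; the paper's actual network and parameters differ.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants (ms)

def stdp_update(w, pre_times, post_times):
    """Apply one pair-based STDP pass to weight matrix w.

    w:          (n_pre, n_post) synaptic weights
    pre_times:  (n_pre,)  latest presynaptic spike times (ms)
    post_times: (n_post,) latest postsynaptic spike times (ms)
    """
    # dt[i, j] = t_post[j] - t_pre[i] for every synapse
    dt = post_times[None, :] - pre_times[:, None]
    # pre-before-post (dt > 0): potentiate; post-before-pre: depress
    dw = np.where(dt > 0,
                  A_PLUS * np.exp(-dt / TAU_PLUS),
                  -A_MINUS * np.exp(dt / TAU_MINUS))
    return np.clip(w + dw, 0.0, 1.0)

# After backprop training, present unlabeled inputs, record spike
# times, and let STDP refine the weights without any labels:
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, size=(4, 3))   # stand-in for trained weights
pre = rng.uniform(0.0, 50.0, size=4)     # recorded spike times (ms)
post = rng.uniform(0.0, 50.0, size=3)
w = stdp_update(w, pre, post)
```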
Related papers
- Gradient-Free Supervised Learning using Spike-Timing-Dependent Plasticity for Image Recognition [3.087000217989688]
An approach to supervised learning in spiking neural networks is presented that combines a gradient-free method with spike-timing-dependent plasticity for image recognition.
The proposed network architecture is scalable to multiple layers, enabling the development of more complex and deeper SNN models.
arXiv Detail & Related papers (2024-10-21T21:32:17Z) - Out-of-Distribution Detection using Neural Activation Prior [15.673290330356194]
Out-of-distribution (OOD) detection is a crucial technique for deploying machine learning models in the real world.
We propose a simple yet effective Neural Activation Prior (NAP) for OOD detection.
Our method achieves the state-of-the-art performance on CIFAR benchmark and ImageNet dataset.
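As a rough illustration of activation-statistic OOD scoring in this spirit (the exact NAP scoring function is defined in the paper), one might compare each channel's peak activation with its mean before global pooling:

```python
import numpy as np

def activation_score(feature_map):
    """Score a sample by within-channel activation statistics.

    feature_map: (channels, height, width) activations from the layer
    before global pooling.  Higher score -> more in-distribution-like,
    under the (assumed) prior that in-distribution inputs excite a few
    neurons per channel much more strongly than the channel average.
    """
    flat = feature_map.reshape(feature_map.shape[0], -1)
    channel_max = flat.max(axis=1)
    channel_mean = flat.mean(axis=1) + 1e-8
    return float(np.mean(channel_max / channel_mean))

# Threshold chosen on held-out in-distribution data; below it, flag OOD.
fm = np.random.default_rng(1).random((64, 7, 7))
is_ood = activation_score(fm) < 2.0   # threshold is illustrative
```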
arXiv Detail & Related papers (2024-02-28T08:45:07Z) - Querying Easily Flip-flopped Samples for Deep Active Learning [63.62397322172216]
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
One effective selection strategy is to base it on the model's predictive uncertainty, which can be interpreted as a measure of how informative a sample is.
This paper proposes the least disagree metric (LDM), defined as the smallest probability of disagreement with the predicted label.
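A crude Monte-Carlo proxy for the idea: perturb the trained hypothesis and measure how often each unlabeled point's predicted label flips; points that flip under small perturbations sit closest to the decision boundary (smallest disagreement) and are queried first. The linear model and Gaussian perturbations below are assumptions for illustration, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

def predict(w, X):
    """Toy linear classifier: sign of X @ w."""
    return np.sign(X @ w)

def flip_rates(w, X, n_samples=200, sigma=0.1):
    """Empirical disagreement probability per unlabeled point.

    Perturb the hypothesis n_samples times and count label flips;
    a high flip rate under a fixed perturbation scale is a proxy for
    a small least-disagree metric (the point is easily flip-flopped).
    """
    base = predict(w, X)
    flips = np.zeros(len(X))
    for _ in range(n_samples):
        w_pert = w + sigma * rng.standard_normal(w.shape)
        flips += predict(w_pert, X) != base
    return flips / n_samples

X = rng.standard_normal((100, 5))          # unlabeled pool
w = rng.standard_normal(5)                 # trained hypothesis
query_order = np.argsort(-flip_rates(w, X))  # most flip-prone first
```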
arXiv Detail & Related papers (2024-01-18T08:12:23Z) - Data Efficient Contrastive Learning in Histopathology using Active Sampling [0.0]
Deep learning algorithms can provide robust quantitative analysis in digital pathology.
These algorithms require large amounts of annotated training data.
Self-supervised methods have been proposed to learn features using ad-hoc pretext tasks.
We propose a new method for actively sampling informative members from the training set using a small proxy network.
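A minimal sketch of proxy-based active sampling, assuming prediction entropy as the informativeness measure (the paper's proxy design and selection criterion may differ):

```python
import numpy as np

# Rank an unlabeled pool with a small proxy model and keep the most
# "informative" members for training.  The tiny linear proxy and the
# entropy criterion are assumptions for illustration.

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy_scores(proxy_w, X):
    """Informativeness = entropy of the proxy's predictions."""
    p = softmax(X @ proxy_w)
    return -(p * np.log(p + 1e-12)).sum(axis=1)

rng = np.random.default_rng(3)
pool = rng.standard_normal((1000, 32))        # unlabeled pool features
proxy_w = rng.standard_normal((32, 10)) * 0.1 # stand-in proxy network
budget = 100
chosen = np.argsort(-entropy_scores(proxy_w, pool))[:budget]
```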
arXiv Detail & Related papers (2023-03-28T18:51:22Z) - Understanding and Improving the Role of Projection Head in
Self-Supervised Learning [77.59320917894043]
Self-supervised learning (SSL) aims to produce useful feature representations without access to human-labeled data annotations.
Current contrastive learning approaches append a parametrized projection head to the end of some backbone network to optimize the InfoNCE objective.
This raises a fundamental question: Why is a learnable projection head required if we are to discard it after training?
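A minimal numpy sketch of the setup in question: backbone features pass through a projection head, the InfoNCE objective is computed on the projected embeddings, and the head is discarded after training. The dimensions and the linear head here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def info_nce(z1, z2, temperature=0.5):
    """InfoNCE over a batch of positive pairs (z1[i], z2[i]).

    z1, z2: (batch, dim) projected embeddings; other batch members
    serve as negatives, with positives on the similarity diagonal.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Backbone features h pass through a projection head g; after
# training, g is discarded and h is used for downstream tasks.
h1 = rng.standard_normal((8, 128))             # view-1 backbone features
h2 = h1 + 0.1 * rng.standard_normal((8, 128))  # view-2 (augmented)
W_proj = rng.standard_normal((128, 64)) * 0.1  # linear projection head
loss = info_nce(h1 @ W_proj, h2 @ W_proj)
```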
arXiv Detail & Related papers (2022-12-22T05:42:54Z) - FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called the Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks to learn interpretable hypotheses from labelled unstructured data.
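A schematic of the pipeline, with `ilp_learn` as a hypothetical stand-in for a real ASP-based ILP system (the actual systems FF-NSL integrates have their own interfaces):

```python
import numpy as np

def net_to_facts(probs, labels):
    """Turn softmax outputs into (weighted fact, confidence) pairs."""
    return [(f"digit(img{i},{int(probs[i].argmax())})", float(probs[i].max()))
            for i in range(len(probs))] + \
           [(f"label(img{i},{labels[i]})", 1.0) for i in range(len(labels))]

def ilp_learn(facts):
    # Hypothetical stub: a real implementation would pass these
    # weighted facts to an ASP-based ILP system and return the
    # interpretable rules it induces.
    return ["placeholder_rule :- ..."]

probs = np.random.default_rng(5).random((3, 10))  # stand-in net outputs
rules = ilp_learn(net_to_facts(probs, labels=[1, 0, 1]))
```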
arXiv Detail & Related papers (2021-06-24T15:38:34Z) - DEALIO: Data-Efficient Adversarial Learning for Imitation from
Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
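For intuition, here is the adversarial-IfO ingredient such methods build on: a discriminator over state transitions (s, s'), since the demonstrator's actions are unobserved. This toy logistic discriminator illustrates the underlying idea, not DEALIO's model-based algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_step(w, expert_ss, agent_ss, lr=0.1):
    """One logistic-regression step: expert transitions -> 1, agent -> 0."""
    X = np.vstack([expert_ss, agent_ss])
    y = np.concatenate([np.ones(len(expert_ss)), np.zeros(len(agent_ss))])
    grad = X.T @ (sigmoid(X @ w) - y) / len(X)
    return w - lr * grad

def imitation_reward(w, ss):
    """Agent reward: make transitions look like the demonstrator's."""
    return np.log(sigmoid(ss @ w) + 1e-8)

dim = 8                                   # dim of concatenated (s, s')
expert = rng.standard_normal((64, dim))   # demonstrator transitions
agent = rng.standard_normal((64, dim)) + 0.5
w = np.zeros(dim)
for _ in range(100):
    w = discriminator_step(w, expert, agent)
r = imitation_reward(w, agent)            # fed to an RL learner
```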
arXiv Detail & Related papers (2021-03-31T23:46:32Z) - Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
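A compact sketch of the prototype idea: cluster the embeddings, then contrast each embedding against the prototypes (cluster centroids) rather than against other instances. The tiny k-means and fixed temperature are illustrative simplifications of PCL's full EM formulation.

```python
import numpy as np

rng = np.random.default_rng(7)

def kmeans(z, k, iters=10):
    """Tiny k-means to obtain prototypes (cluster centroids)."""
    c = z[rng.choice(len(z), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((z[:, None] - c[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                c[j] = z[assign == j].mean(axis=0)
    return c, assign

def proto_nce(z, protos, assign, temperature=0.5):
    """Contrast each embedding against prototypes, not instances."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    logits = z @ p.T / temperature
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(log_prob[np.arange(len(z)), assign])

z = rng.standard_normal((200, 16))   # embeddings from an encoder
protos, assign = kmeans(z, k=5)
loss = proto_nce(z, protos, assign)
```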
arXiv Detail & Related papers (2020-05-11T09:53:36Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
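A minimal sketch of a rectified linear postsynaptic potential kernel and the resulting membrane potential, assuming a single spike per presynaptic neuron; the paper's neuron model and training procedure go further.

```python
import numpy as np

def rel_psp(t, t_spike):
    """Rectified linear postsynaptic potential kernel.

    Zero before the presynaptic spike, then growing linearly with
    elapsed time; the piecewise-linear form keeps the neuron's
    input-output map differentiable almost everywhere, which is what
    makes backpropagation through spike times tractable.
    """
    return np.maximum(0.0, t - t_spike)

def membrane_potential(t, weights, spike_times):
    """V(t) = sum_j w_j * K(t - t_j) over presynaptic spikes t_j."""
    return float(np.sum(weights * rel_psp(t, spike_times)))

w = np.array([0.4, -0.2, 0.7])
t_j = np.array([1.0, 2.5, 4.0])    # presynaptic spike times (ms)
v = membrane_potential(5.0, w, t_j)
```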
arXiv Detail & Related papers (2020-03-26T11:13:07Z) - Biologically-Motivated Deep Learning Method using Hierarchical
Competitive Learning [0.0]
I propose to introduce unsupervised competitive learning, which requires only forward-propagating signals, as a pre-training method for CNNs.
The proposed method could be useful for a variety of poorly labeled data, for example, time series or medical data.
arXiv Detail & Related papers (2020-01-04T20:07:36Z)
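For the competitive-learning entry above: a winner-take-all Hebbian update is one standard form of competitive learning that needs only forward signals. The sketch below, with illustrative constants and a plain weight matrix in place of a CNN layer, shows the kind of pre-training step involved.

```python
import numpy as np

rng = np.random.default_rng(8)

def competitive_step(W, x, lr=0.05):
    """One winner-take-all competitive-learning update.

    Only forward signals are needed: the unit whose weight vector is
    most similar to x wins and moves toward x.  A layer pre-trained
    this way can then be fine-tuned inside a CNN.
    """
    winner = int(np.argmax(W @ x))
    W[winner] += lr * (x - W[winner])
    W[winner] /= np.linalg.norm(W[winner]) + 1e-8  # keep weights bounded
    return W

W = rng.random((10, 64))          # 10 competing units
for _ in range(1000):
    x = rng.random(64)            # unlabeled input (e.g. an image patch)
    W = competitive_step(W, x)
```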
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.