Training spiking neural networks using reinforcement learning
- URL: http://arxiv.org/abs/2005.05941v1
- Date: Tue, 12 May 2020 17:40:36 GMT
- Title: Training spiking neural networks using reinforcement learning
- Authors: Sneha Aenugu
- Abstract summary: We propose biologically-plausible alternatives to backpropagation to facilitate the training of spiking neural networks.
We focus on investigating the candidacy of reinforcement learning rules in solving the spatial and temporal credit assignment problems.
We compare and contrast the two approaches by applying them to traditional RL domains such as gridworld, cartpole and mountain car.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neurons in the brain communicate with each other through discrete action
spikes as opposed to continuous signal transmission in artificial neural
networks. Therefore, the traditional techniques for optimization of parameters
in neural networks which rely on the assumption of differentiability of
activation functions are no longer applicable to modeling the learning
processes in the brain. In this project, we propose biologically-plausible
alternatives to backpropagation to facilitate the training of spiking neural
networks. We primarily focus on investigating the candidacy of reinforcement
learning (RL) rules in solving the spatial and temporal credit assignment
problems to enable decision-making in complex tasks. In one approach, we
consider each neuron in a multi-layer neural network as an independent RL agent
forming a different representation of the feature space while the network as a
whole forms the representation of the complex policy to solve the task at hand.
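The first approach could be sketched as follows. This is a minimal illustration, not the paper's implementation: the Bernoulli spiking neuron, the REINFORCE (score-function) update, the learning rate, and the toy task are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SpikingNeuronAgent:
    """One stochastic Bernoulli neuron treated as an independent REINFORCE agent."""

    def __init__(self, n_inputs, lr=0.2):
        self.w = rng.normal(0.0, 0.1, n_inputs)
        self.lr = lr
        self.eligibility = np.zeros(n_inputs)

    def act(self, x):
        p = sigmoid(self.w @ x)
        spike = float(rng.random() < p)
        # score function of the Bernoulli log-likelihood: d log pi / dw = (s - p) * x
        self.eligibility = (spike - p) * x
        return spike

    def update(self, reward):
        # three-factor-style rule: a global reward signal gates the local eligibility
        self.w += self.lr * reward * self.eligibility

# toy task: the neuron should spike exactly when the first input is 1
agent = SpikingNeuronAgent(n_inputs=2)
for _ in range(3000):
    x0 = float(rng.random() < 0.5)
    x = np.array([x0, 1.0])          # second component acts as a bias input
    spike = agent.act(x)
    reward = 1.0 if spike == x0 else -1.0
    agent.update(reward)
```

Because each neuron only needs its own eligibility trace and a shared scalar reward, the same update can be applied independently at every unit of a multi-layer network, which is what makes the rule biologically plausible relative to backpropagation.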
In another approach, we apply the reparameterization trick to enable
differentiation through stochastic transformations in spiking neural networks.
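The second approach can be illustrated with a Concrete (Gumbel-softmax-style) relaxation of a Bernoulli spike; the specific relaxation, the temperature value, and the finite-difference check below are assumptions for illustration, not necessarily the paper's exact construction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relaxed_spike(logit, u, tau=0.5):
    """Concrete relaxation of a Bernoulli spike.

    The hard sample 1[u < sigmoid(logit)] is replaced by a smooth function of
    the logit, so gradients can flow through the sampling step: the noise u is
    drawn once, and the spike becomes a deterministic function of the logit.
    """
    noise = np.log(u) - np.log(1.0 - u)      # logistic noise from uniform u
    return sigmoid((logit + noise) / tau)

def relaxed_spike_grad(logit, u, tau=0.5):
    # analytic gradient of the relaxed spike w.r.t. the logit: s * (1 - s) / tau
    s = relaxed_spike(logit, u, tau)
    return s * (1.0 - s) / tau

# verify that the analytic gradient matches a central finite difference
rng = np.random.default_rng(1)
u = rng.random()
logit, eps = 0.3, 1e-6
numeric = (relaxed_spike(logit + eps, u) - relaxed_spike(logit - eps, u)) / (2 * eps)
analytic = relaxed_spike_grad(logit, u)
```

As the temperature tau is annealed toward zero, the relaxed spike approaches the hard binary sample, while at finite tau the network remains end-to-end differentiable.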
We compare and contrast the two approaches by applying them to traditional RL
domains such as gridworld, cartpole and mountain car. Further, we suggest
variations and enhancements to enable future research in this area.
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs)
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models with task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - Topological Representations of Heterogeneous Learning Dynamics of Recurrent Spiking Neural Networks [16.60622265961373]
Spiking Neural Networks (SNNs) have become an essential paradigm in neuroscience and artificial intelligence.
Recent advances in literature have studied the network representations of deep neural networks.
arXiv Detail & Related papers (2024-03-19T05:37:26Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning in spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Expressivity of Spiking Neural Networks [15.181458163440634]
We study the capabilities of spiking neural networks where information is encoded in the firing time of neurons.
In contrast to ReLU networks, we prove that spiking neural networks can realize both continuous and discontinuous functions.
arXiv Detail & Related papers (2023-08-16T08:45:53Z) - Learning to Act through Evolution of Neural Diversity in Random Neural Networks [9.387749254963595]
In most artificial neural networks (ANNs), neural computation is abstracted to an activation function that is usually shared between all neurons.
We propose the optimization of neuro-centric parameters to attain a set of diverse neurons that can perform complex computations.
arXiv Detail & Related papers (2023-05-25T11:33:04Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Functional neural network for decision processing, a racing network of programmable neurons with fuzzy logic where the target operating model relies on the network itself [1.1602089225841632]
This paper introduces a novel artificial intelligence model, the functional neural network, for modeling human decision-making processes.
We believe that this functional neural network has promising potential to transform how decision-making can be computed.
arXiv Detail & Related papers (2021-02-24T15:19:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.