Embodied Synaptic Plasticity with Online Reinforcement Learning
- URL: http://arxiv.org/abs/2003.01431v1
- Date: Tue, 3 Mar 2020 10:29:02 GMT
- Title: Embodied Synaptic Plasticity with Online Reinforcement Learning
- Authors: Jacques Kaiser, Michael Hoff, Andreas Konle, J. Camilo Vasquez Tieck,
David Kappel, Daniel Reichard, Anand Subramoney, Robert Legenstein, Arne
Roennau, Wolfgang Maass, Rudiger Dillmann
- Abstract summary: This paper contributes to bringing the fields of computational neuroscience and robotics closer together by integrating open-source software components from these two fields.
We demonstrate this framework by evaluating Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks.
- Score: 5.6006805285925445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The endeavor to understand the brain involves multiple collaborating research
fields. Classically, synaptic plasticity rules derived by theoretical
neuroscientists are evaluated in isolation on pattern classification tasks.
This contrasts with the biological brain, whose purpose is to control a body in
closed loop. This paper contributes to bringing the fields of computational
neuroscience and robotics closer together by integrating open-source software
components from these two fields. The resulting framework makes it possible to
evaluate the validity of biologically plausible plasticity models in closed-loop
robotics environments. We demonstrate this framework by evaluating Synaptic
Plasticity with Online REinforcement learning (SPORE), a reward-learning rule
based on synaptic sampling, on two visuomotor tasks: reaching and lane
following. We show that SPORE is capable of learning functional policies for
both tasks within hours of simulated time. Preliminary parameter explorations
indicate
that the learning rate and the temperature driving the stochastic processes
that govern synaptic learning dynamics need to be regulated for performance
improvements to be retained. We conclude by discussing recent deep
reinforcement learning techniques that could be adopted to extend the
capabilities of SPORE on visuomotor tasks.
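
To make the synaptic-sampling idea concrete, the following is a minimal Python sketch of a reward-modulated synaptic-sampling update, discretized with an Euler-Maruyama step and driven by the two quantities highlighted in the abstract, a learning rate and a temperature. The Gaussian prior, the placeholder reward-gradient estimate, and all constants are illustrative assumptions; this is not the actual SPORE implementation, which couples a spiking-network simulator to a robotics simulator in closed loop.

```python
# A hedged sketch of reward-modulated synaptic sampling in the spirit of SPORE.
# All names, priors, and hyperparameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_synapses = 100
theta = rng.normal(0.0, 0.5, n_synapses)   # latent synaptic parameters
beta = 1e-4                                 # learning rate (drift scale)
temperature = 0.1                           # scales the diffusion (exploration) term
dt = 1.0                                    # integration step (ms)
prior_mean, prior_std = 0.0, 1.0            # assumed Gaussian prior over theta

def weights(theta):
    """Map latent parameters to non-negative synaptic weights.
    Synapses with theta <= 0 are treated as retracted (weight 0)."""
    return np.where(theta > 0.0, np.exp(theta), 0.0)

def update(theta, reward_gradient):
    """One Euler-Maruyama step of the sampling dynamics:
    d(theta) = beta * (d/dtheta log prior + reward gradient) dt
               + sqrt(2 * beta * T) dW
    `reward_gradient` stands in for an online, eligibility-trace-based
    estimate of the gradient of expected reward."""
    drift = beta * (-(theta - prior_mean) / prior_std**2 + reward_gradient)
    diffusion = np.sqrt(2.0 * beta * temperature * dt) * rng.normal(size=theta.shape)
    return theta + drift * dt + diffusion

# Closed-loop skeleton: the reward signal arrives online while the body acts,
# with no separate training phase.
for step in range(1000):
    reward_gradient = rng.normal(0.0, 1.0, n_synapses)  # placeholder estimate
    theta = update(theta, reward_gradient)
w = weights(theta)
```

In line with the parameter explorations mentioned in the abstract, one would typically anneal both `beta` and `temperature` over the course of learning so that performance improvements are retained rather than washed out by continued stochastic exploration.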
Related papers
- Brain-inspired continual pre-trained learner via silent synaptic consolidation [2.872028467114491]
Artsy is inspired by the activation mechanisms of silent synapses via spike-timing-dependent plasticity observed in mature brains.
It mimics mature brain dynamics by maintaining memory stability for previously learned knowledge within the pre-trained network.
During inference, artificial silent and functional synapses are utilized to establish precise connections between the pre-trained network and the sub-networks.
arXiv Detail & Related papers (2024-10-08T10:56:19Z)
- Theories of synaptic memory consolidation and intelligent plasticity for continual learning [7.573586022424398]
Synaptic plasticity mechanisms must maintain and evolve an internal state.
Plasticity algorithms must leverage this internal state to intelligently regulate plasticity at individual synapses.
arXiv Detail & Related papers (2024-05-27T08:13:39Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate multiple synaptic mechanisms.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments with autonomous lane-keeping through a photorealistic autonomous driving simulator to evaluate their performance under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Tuning Synaptic Connections instead of Weights by Genetic Algorithm in Spiking Policy Network [16.876474167808784]
Modern deep reinforcement learning (DRL) explores a computational approach to learning from interaction.
We optimized a spiking policy network (SPN) using a genetic algorithm as an energy-efficient alternative to DRL.
Inspired by biological research showing that the brain forms memories by creating new synaptic connections, we tuned the synaptic connections instead of weights in the SPN to solve given tasks.
arXiv Detail & Related papers (2022-12-29T12:36:36Z)
- A Spiking Neuron Synaptic Plasticity Model Optimized for Unsupervised Learning [0.0]
Spiking neural networks (SNNs) are considered a promising basis for performing all kinds of learning tasks: unsupervised, supervised, and reinforcement learning.
Learning in SNNs is implemented through synaptic plasticity: rules that determine the dynamics of synaptic weights, usually as a function of the activity of the pre- and post-synaptic neurons (a minimal sketch of such a rule follows this list).
arXiv Detail & Related papers (2021-11-12T15:26:52Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Meta-Learning through Hebbian Plasticity in Random Networks [12.433600693422235]
Lifelong learning and adaptability are two defining aspects of biological agents.
Inspired by this biological mechanism, we propose a search method that only searches for synapse-specific Hebbian learning rules.
We find that starting from completely random weights, the discovered Hebbian rules enable an agent to navigate a dynamical 2D-pixel environment.
arXiv Detail & Related papers (2020-07-06T14:32:31Z)
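
As a concrete illustration of the plasticity rules referenced in the list above, which adjust synaptic weights as a function of pre- and post-synaptic activity, here is a minimal pair-based STDP sketch in Python. The time constants, amplitudes, and weight bounds are illustrative assumptions and do not reproduce the optimized rules of any of the cited papers.

```python
# A hedged, minimal pair-based STDP sketch: weights change according to the
# relative timing of pre- and post-synaptic spikes, tracked via decaying traces.
import numpy as np

tau_plus, tau_minus = 20.0, 20.0   # trace time constants (ms), assumed values
a_plus, a_minus = 0.01, 0.012      # potentiation / depression amplitudes, assumed
w_min, w_max = 0.0, 1.0            # hard weight bounds, assumed

def stdp_step(w, pre_trace, post_trace, pre_spike, post_spike, dt=1.0):
    """Advance pair-based STDP for a single synapse by one time step."""
    # Exponentially decay the spike traces.
    pre_trace *= np.exp(-dt / tau_plus)
    post_trace *= np.exp(-dt / tau_minus)
    if pre_spike:
        pre_trace += 1.0
        w -= a_minus * post_trace   # post-before-pre ordering: depression
    if post_spike:
        post_trace += 1.0
        w += a_plus * pre_trace     # pre-before-post ordering: potentiation
    return float(np.clip(w, w_min, w_max)), pre_trace, post_trace

# Causal pre-then-post pairings (post spike 2 ms after each pre spike)
# drive the weight upward over repeated pairings.
w, pre, post = 0.5, 0.0, 0.0
for t in range(200):
    w, pre, post = stdp_step(w, pre, post,
                             pre_spike=(t % 10 == 0),
                             post_spike=(t % 10 == 2))
```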