Evolving-to-Learn Reinforcement Learning Tasks with Spiking Neural
Networks
- URL: http://arxiv.org/abs/2202.12322v1
- Date: Thu, 24 Feb 2022 19:07:23 GMT
- Title: Evolving-to-Learn Reinforcement Learning Tasks with Spiking Neural
Networks
- Authors: J. Lu, J. J. Hagenaars, G. C. H. E. de Croon
- Abstract summary: We introduce an evolutionary algorithm that evolves suitable synaptic plasticity rules for the task at hand.
We find learning rules that successfully solve an XOR and cart-pole task, and discover new learning rules that outperform the baseline rules from literature.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by the natural nervous system, synaptic plasticity rules are applied
to train spiking neural networks with local information, making them suitable
for online learning on neuromorphic hardware. However, when such rules are
implemented to learn different new tasks, they usually require a significant
amount of work on task-dependent fine-tuning. This paper aims to make this
process easier by employing an evolutionary algorithm that evolves suitable
synaptic plasticity rules for the task at hand. More specifically, we provide a
set of various local signals, a set of mathematical operators, and a global
reward signal, after which a Cartesian genetic programming process finds an
optimal learning rule from these components. Using this approach, we find
learning rules that successfully solve an XOR and cart-pole task, and discover
new learning rules that outperform the baseline rules from literature.
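The evolutionary search described above can be sketched in miniature. The following is a hypothetical illustration, not the paper's implementation: instead of spiking neurons and Cartesian genetic programming proper, it evolves a plasticity rule Δw = f(pre, post, w, R) as an expression tree over local signals and a global modulatory signal R, and scores each candidate by how well it trains a single perceptron on an OR task. All names (`SIGNALS`, `fitness`, `evolve`, the OR task, the (1+λ) scheme) are assumptions chosen for brevity.

```python
import random

# Hypothetical sketch of evolving a synaptic plasticity rule
# dw = f(pre, post, w, R) from local signals, a small operator set,
# and a global modulatory signal R, loosely in the spirit of the
# paper's Cartesian genetic programming setup (non-spiking here).

SIGNALS = ["pre", "post", "w", "R"]          # local signals + global signal
OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def random_rule(depth=2):
    """Sample a random expression tree over the signal set."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(SIGNALS)
    op = random.choice(list(OPS))
    return (op, random_rule(depth - 1), random_rule(depth - 1))

def apply_rule(rule, env):
    """Evaluate an expression tree on concrete signal values."""
    if isinstance(rule, str):
        return env[rule]
    op, left, right = rule
    return OPS[op](apply_rule(left, env), apply_rule(right, env))

OR_TASK = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def fitness(rule, eta=0.1, epochs=20):
    """Train a perceptron on OR with the candidate rule; return accuracy."""
    w = [0.0, 0.0, 0.0]                       # two weights + bias (pre = 1)
    for _ in range(epochs):
        for x, target in OR_TASK:
            pre = (x[0], x[1], 1.0)
            post = 1 if sum(wi * pi for wi, pi in zip(w, pre)) > 0 else 0
            R = target - post                 # global error signal
            for i in range(3):
                env = {"pre": pre[i], "post": post, "w": w[i], "R": R}
                w[i] += eta * apply_rule(rule, env)
    correct = sum(
        (1 if sum(wi * pi for wi, pi in zip(w, (x[0], x[1], 1.0))) > 0 else 0) == t
        for x, t in OR_TASK)
    return correct / len(OR_TASK)

def evolve(generations=50, offspring=4, seed=0):
    """(1 + lambda)-style search: keep the best rule seen so far."""
    random.seed(seed)
    best = random_rule()
    best_fit = fitness(best)
    for _ in range(generations):
        for _ in range(offspring):
            child = random_rule()             # resampling stands in for mutation
            f = fitness(child)
            if f >= best_fit:
                best, best_fit = child, f
    return best, best_fit
```

With R defined as the error signal, the reward-modulated Hebbian candidate `("*", "pre", "R")` recovers the classical perceptron rule and reaches perfect accuracy on OR, so the search has at least one compact solution to find. The actual paper operates on spiking networks with eligibility traces and a richer signal set, which this toy deliberately omits.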
Related papers
- From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks [47.13391046553908]
In artificial networks, the effectiveness of these models relies on their ability to build task-specific representations.
Prior studies highlight that different initializations can place networks in either a lazy regime, where representations remain static, or a rich/feature learning regime, where representations evolve dynamically.
The paper derives exact solutions that capture the evolution of representations and the Neural Tangent Kernel across the spectrum from the rich to the lazy regimes.
arXiv Detail & Related papers (2024-09-22T23:19:04Z) - Rule Based Learning with Dynamic (Graph) Neural Networks [0.8158530638728501]
We present rule based graph neural networks (RuleGNNs) that overcome some limitations of ordinary graph neural networks.
Our experiments show that the predictive performance of RuleGNNs is comparable to state-of-the-art graph classifiers.
We introduce new synthetic benchmark graph datasets to show how to integrate expert knowledge into RuleGNNs.
arXiv Detail & Related papers (2024-06-14T12:01:18Z) - NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z) - IF2Net: Innately Forgetting-Free Networks for Continual Learning [49.57495829364827]
Continual learning can incrementally absorb new concepts without interfering with previously learned knowledge.
Motivated by the characteristics of neural networks, we investigated how to design an Innately Forgetting-Free Network (IF2Net)
IF2Net allows a single network to inherently learn unlimited mapping rules without telling task identities at test time.
arXiv Detail & Related papers (2023-06-18T05:26:49Z) - When Deep Learning Meets Polyhedral Theory: A Survey [6.899761345257773]
In the past decade, deep learning became the prevalent methodology for predictive modeling thanks to the remarkable accuracy of deep neural networks.
Meanwhile, the structure of neural networks converged back to simpler representations based on piecewise constant and piecewise linear functions.
arXiv Detail & Related papers (2023-04-29T11:46:53Z) - Fully Online Meta-Learning Without Task Boundaries [80.09124768759564]
We study how meta-learning can be applied to tackle online problems of this nature.
We propose a Fully Online Meta-Learning (FOML) algorithm, which does not require any ground truth knowledge about the task boundaries.
Our experiments show that FOML was able to learn new tasks faster than the state-of-the-art online learning methods.
arXiv Detail & Related papers (2022-02-01T07:51:24Z) - Towards fuzzification of adaptation rules in self-adaptive architectures [2.730650695194413]
We focus on exploiting neural networks for the analysis and planning stage in self-adaptive architectures.
One simple option to address such a need is to replace the reasoning based on logical rules with a neural network.
We show how to navigate in this continuum and create a neural network architecture that naturally embeds the original logical rules.
arXiv Detail & Related papers (2021-12-17T12:17:16Z) - Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z) - Evolving Neuronal Plasticity Rules using Cartesian Genetic Programming [1.1980325577555802]
We employ genetic programming to evolve biologically plausible human-interpretable plasticity rules.
We demonstrate that the evolved rules perform competitively with known hand-designed solutions.
arXiv Detail & Related papers (2021-02-08T16:17:15Z) - Learning Adaptive Exploration Strategies in Dynamic Environments Through
Informed Policy Regularization [100.72335252255989]
We study the problem of learning exploration-exploitation strategies that effectively adapt to dynamic environments.
We propose a novel algorithm that regularizes the training of an RNN-based policy using informed policies trained to maximize the reward in each task.
arXiv Detail & Related papers (2020-05-06T16:14:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.