Adaptive Reinforcement Learning through Evolving Self-Modifying Neural
Networks
- URL: http://arxiv.org/abs/2006.05832v1
- Date: Fri, 22 May 2020 02:24:44 GMT
- Title: Adaptive Reinforcement Learning through Evolving Self-Modifying Neural
Networks
- Authors: Samuel Schmidgall
- Abstract summary: Current methods in Reinforcement Learning (RL) only adjust to new interactions after reflection over a specified time interval.
Recent work addressing this by endowing artificial neural networks with neuromodulated plasticity has been shown to improve performance on simple RL tasks trained using backpropagation.
Here we study the problem of meta-learning in a challenging quadruped domain, where each leg of the quadruped has a chance of becoming unusable.
Results demonstrate that agents evolved using self-modifying plastic networks are more capable of adapting to complex meta-learning tasks, even outperforming the same network updated using gradient-based algorithms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The adaptive learning capabilities seen in biological neural networks are
largely a product of the self-modifying behavior emerging from online plastic
changes in synaptic connectivity. Current methods in Reinforcement Learning
(RL) only adjust to new interactions after reflection over a specified time
interval, preventing the emergence of online adaptivity. Recent work addressing
this by endowing artificial neural networks with neuromodulated plasticity has
been shown to improve performance on simple RL tasks trained using
backpropagation, but has yet to scale up to larger problems. Here we study the
problem of meta-learning in a challenging quadruped domain, where each leg of
the quadruped has a chance of becoming unusable, requiring the agent to adapt
by continuing locomotion with the remaining limbs. Results demonstrate that
agents evolved using self-modifying plastic networks are more capable of
adapting to complex meta-learning tasks, even outperforming the same
network updated using gradient-based algorithms while taking less time to
train.
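The self-modifying plastic networks described above combine fixed (evolved) weights with fast Hebbian traces gated by a neuromodulatory signal, so that synaptic connectivity changes online as the agent interacts with its environment. A minimal sketch of this idea is below; the class name, shapes, and the specific update rule (a decaying Hebbian trace scaled by per-synapse plasticity coefficients and a scalar modulation term) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class PlasticLayer:
    """Illustrative sketch of a neuromodulated plastic layer.

    Effective weights = fixed weights + plasticity coefficients * Hebbian trace.
    The trace is updated online, gated by a scalar neuromodulatory signal.
    """

    def __init__(self, n_in, n_out, eta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.1, (n_in, n_out))      # fixed (evolved) weights
        self.alpha = rng.normal(0.0, 0.1, (n_in, n_out))  # per-synapse plasticity coefficients
        self.hebb = np.zeros((n_in, n_out))               # fast Hebbian trace, starts empty
        self.eta = eta                                    # trace learning rate

    def forward(self, x, modulation):
        # Effective weight combines slow and fast components.
        y = np.tanh(x @ (self.w + self.alpha * self.hebb))
        # Online Hebbian update: decay the trace, then add the modulated
        # outer product of pre- and post-synaptic activity.
        self.hebb = (1.0 - self.eta) * self.hebb \
            + self.eta * modulation * np.outer(x, y)
        return y
```

With `modulation = 0` the trace only decays and the layer behaves like a fixed network; a nonzero modulatory signal lets recent activity reshape the effective weights, which is the mechanism the abstract credits for online adaptation after limb failure.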
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Biologically-inspired neuronal adaptation improves learning in neural
networks [0.7734726150561086]
Humans still outperform artificial neural networks on many tasks.
We draw inspiration from the brain to improve machine learning algorithms.
We add adaptation to multilayer perceptrons and convolutional neural networks trained on MNIST and CIFAR-10.
arXiv Detail & Related papers (2022-04-08T16:16:02Z) - Learning Fast and Slow for Online Time Series Forecasting [76.50127663309604]
Fast and Slow learning Networks (FSNet) is a holistic framework for online time-series forecasting.
FSNet balances fast adaptation to recent changes and retrieving similar old knowledge.
Our code will be made publicly available.
arXiv Detail & Related papers (2022-02-23T18:23:07Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - SpikePropamine: Differentiable Plasticity in Spiking Neural Networks [0.0]
We introduce a framework for learning the dynamics of synaptic plasticity and neuromodulated synaptic plasticity in Spiking Neural Networks (SNNs).
We show that SNNs augmented with differentiable plasticity are sufficient for solving a set of challenging temporal learning tasks.
These networks are also shown to be capable of producing locomotion on a high-dimensional robotic learning task.
arXiv Detail & Related papers (2021-06-04T19:29:07Z) - Artificial Neural Variability for Deep Learning: On Overfitting, Noise
Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV plays as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z) - Sparse Meta Networks for Sequential Adaptation and its Application to
Adaptive Language Modelling [7.859988850911321]
We introduce Sparse Meta Networks -- a meta-learning approach to learn online sequential adaptation algorithms for deep neural networks.
We augment a deep neural network with a layer-specific fast-weight memory.
We demonstrate strong performance on a variety of sequential adaptation scenarios.
arXiv Detail & Related papers (2020-09-03T17:06:52Z) - Learning to Learn with Feedback and Local Plasticity [9.51828574518325]
We employ meta-learning to discover networks that learn using feedback connections and local, biologically inspired learning rules.
Our experiments show that meta-trained networks effectively use feedback connections to perform online credit assignment in multi-layer architectures.
arXiv Detail & Related papers (2020-06-16T22:49:07Z) - Training spiking neural networks using reinforcement learning [0.0]
We propose biologically-plausible alternatives to backpropagation to facilitate the training of spiking neural networks.
We focus on investigating the candidacy of reinforcement learning rules in solving the spatial and temporal credit assignment problems.
We compare and contrast the two approaches by applying them to traditional RL domains such as gridworld, cartpole and mountain car.
arXiv Detail & Related papers (2020-05-12T17:40:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.