Self-Activating Neural Ensembles for Continual Reinforcement Learning
- URL: http://arxiv.org/abs/2301.00141v1
- Date: Sat, 31 Dec 2022 07:11:05 GMT
- Title: Self-Activating Neural Ensembles for Continual Reinforcement Learning
- Authors: Sam Powers, Eliot Xing, Abhinav Gupta
- Abstract summary: Self-Activating Neural Ensembles (SANE) uses a modular architecture designed to avoid catastrophic forgetting without assuming well-defined task boundaries.
During training, new modules are created as needed and only activated modules are updated to ensure that unused modules remain unchanged.
This system enables our method to retain and leverage old skills, while growing and learning new ones.
- Score: 23.00149997940467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability for an agent to continuously learn new skills without
catastrophically forgetting existing knowledge is of critical importance for
the development of generally intelligent agents. Most methods devised to
address this problem depend heavily on well-defined task boundaries, and thus
depend on human supervision. Our task-agnostic method, Self-Activating Neural
Ensembles (SANE), uses a modular architecture designed to avoid catastrophic
forgetting without making any such assumptions. At the beginning of each
trajectory, a module in the SANE ensemble is activated to determine the agent's
next policy. During training, new modules are created as needed and only
activated modules are updated to ensure that unused modules remain unchanged.
This system enables our method to retain and leverage old skills, while growing
and learning new ones. We demonstrate our approach on visually rich
procedurally generated environments.
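The abstract describes a simple lifecycle: at the start of each trajectory one module is activated (or a new one created if none matches), and only the activated module is updated. The sketch below illustrates that control flow; the class names, the similarity score, and the `THRESHOLD` novelty cutoff are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of a SANE-style module lifecycle; not the authors' code.
import numpy as np

THRESHOLD = 0.5  # hypothetical cutoff below which a new module is spawned

class Module:
    """One ensemble member: an anchor embedding plus its own policy weights."""
    def __init__(self, anchor):
        self.anchor = np.asarray(anchor, dtype=float)
        self.policy = np.zeros(4)  # stand-in for per-module policy parameters

    def score(self, obs):
        # Similarity between this module's anchor and the current observation.
        return float(np.exp(-np.linalg.norm(self.anchor - obs)))

class SaneEnsemble:
    def __init__(self):
        self.modules = []

    def activate(self, obs):
        # At the start of a trajectory, pick the best-matching module,
        # or create a fresh one when nothing matches well enough.
        obs = np.asarray(obs, dtype=float)
        if self.modules:
            best = max(self.modules, key=lambda m: m.score(obs))
            if best.score(obs) >= THRESHOLD:
                return best
        new = Module(obs)
        self.modules.append(new)
        return new

    def update(self, active, grad):
        # Only the activated module is updated; all others stay frozen,
        # which is how unused skills avoid catastrophic forgetting.
        active.policy -= 0.1 * grad
```

The key design point mirrored here is that forgetting is avoided structurally, by freezing non-activated modules, rather than by regularizing a single shared parameter set.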
Related papers
- I Know How: Combining Prior Policies to Solve New Tasks [17.214443593424498]
Multi-Task Reinforcement Learning aims to develop agents that can continually evolve and adapt to new scenarios.
Learning from scratch for each new task is not a viable or sustainable option.
We propose a new framework, I Know How, which provides a common formalization.
arXiv Detail & Related papers (2024-06-14T08:44:51Z) - Neuro-mimetic Task-free Unsupervised Online Learning with Continual Self-Organizing Maps [56.827895559823126]
Self-organizing map (SOM) is a neural model often used in clustering and dimensionality reduction.
We propose a generalization of the SOM, the continual SOM, which is capable of online unsupervised learning under a low memory budget.
Our results on benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST show nearly a twofold increase in accuracy.
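For context, the base model this entry generalizes works as follows: each online step finds the grid unit whose weight vector best matches the input (the BMU), then pulls nearby units toward the input under a neighborhood kernel. The sketch below shows only this standard SOM step; the continual SOM's low-memory mechanisms from the paper are not reproduced here, and `lr`/`sigma` are illustrative defaults.

```python
# Minimal online self-organizing map (SOM) step; illustrates the base
# model only, not the continual variant proposed in the paper.
import numpy as np

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One online update on a grid of shape (rows, cols, dim):
    find the best-matching unit (BMU), then move nearby units toward x."""
    rows, cols, _ = weights.shape
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))
    for i in range(rows):
        for j in range(cols):
            grid_d2 = (i - bmu[0]) ** 2 + (j - bmu[1]) ** 2
            h = np.exp(-grid_d2 / (2 * sigma ** 2))  # neighborhood kernel
            weights[i, j] += lr * h * (x - weights[i, j])
    return bmu
```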
arXiv Detail & Related papers (2024-02-19T19:11:22Z) - Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z) - Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z) - Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z) - Modular Deep Learning [120.36599591042908]
Transfer learning has recently become the dominant paradigm of machine learning.
It remains unclear how to develop models that specialise towards multiple tasks without incurring negative interference.
Modular deep learning has emerged as a promising solution to these challenges.
arXiv Detail & Related papers (2023-02-22T18:11:25Z) - The Role of Bio-Inspired Modularity in General Learning [0.0]
One goal of general intelligence is to learn novel information without overwriting prior learning.
Bootstrapping previous knowledge may allow for faster learning of a novel task.
Modularity may offer a solution for weight-update learning methods that satisfies both constraints: learning without catastrophic forgetting, and bootstrapping.
arXiv Detail & Related papers (2021-09-23T18:45:34Z) - Uncertainty-based Modulation for Lifelong Learning [1.3334365645271111]
We present an algorithm inspired by neuromodulatory mechanisms in the human brain that integrates and expands upon Stephen Grossberg's Adaptive Resonance Theory proposals.
Specifically, it builds on the concept of uncertainty, and employs a series of neuromodulatory mechanisms to enable continuous learning.
We demonstrate the critical role of developing these systems in a closed-loop manner where the environment and the agent's behaviors constrain and guide the learning process.
arXiv Detail & Related papers (2020-01-27T14:34:37Z) - A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning [48.87397222244402]
We propose an expansion-based approach for task-free continual learning.
Our model successfully performs task-free continual learning for both discriminative and generative tasks.
arXiv Detail & Related papers (2020-01-03T02:07:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.