Avalanche RL: a Continual Reinforcement Learning Library
- URL: http://arxiv.org/abs/2202.13657v1
- Date: Mon, 28 Feb 2022 10:01:22 GMT
- Title: Avalanche RL: a Continual Reinforcement Learning Library
- Authors: Nicolò Lucchesi, Antonio Carta and Vincenzo Lomonaco
- Abstract summary: We describe Avalanche RL, a library for Continual Reinforcement Learning.
Avalanche RL is based on PyTorch and supports any OpenAI Gym environment.
Continual Habitat-Lab is a novel benchmark and a high-level library which enables the use of the photorealistic simulator Habitat-Sim for CRL research.
- Score: 8.351133445318448
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual Reinforcement Learning (CRL) is a challenging setting where an
agent learns to interact with an environment that is constantly changing over
time (the stream of experiences). In this paper, we describe Avalanche RL, a
library for Continual Reinforcement Learning that makes it easy to train
agents on a continuous stream of tasks. Avalanche RL is based on PyTorch and
supports any OpenAI Gym environment. Its design is based on Avalanche, one of
the most popular continual learning libraries, which allows us to reuse a large
number of continual learning strategies and improve the interaction between
reinforcement learning and continual learning researchers. Additionally, we
propose Continual Habitat-Lab, a novel benchmark and a high-level library which
enables the usage of the photorealistic simulator Habitat-Sim for CRL research.
Overall, Avalanche RL attempts to unify continual reinforcement learning
applications under a common framework, which we hope will foster the growth of
the field.
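To make the setting concrete, below is a minimal sketch of the kind of training loop Avalanche RL streamlines: a single agent trained sequentially on a stream of Gym tasks. The loop uses plain Gym and PyTorch only; all names are illustrative and none of them are Avalanche RL's actual API.

```python
# Minimal sketch of the continual RL loop Avalanche RL streamlines:
# one agent trained sequentially on a stream of Gym tasks.
# Illustrative only; these names are NOT Avalanche RL's API.
import gym
import torch
import torch.nn as nn

# The "stream of experiences": each experience is a Gym environment here.
task_stream = ["CartPole-v1", "Acrobot-v1", "MountainCar-v0"]

policy = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def pad(obs, size=6):
    # Zero-pad observations so one policy can consume every task
    # (an illustrative hack; a real strategy would handle this properly).
    out = torch.zeros(size)
    out[: len(obs)] = torch.as_tensor(obs, dtype=torch.float32)
    return out

for experience, env_id in enumerate(task_stream):
    env = gym.make(env_id)
    obs = env.reset()  # classic Gym API (pre-0.26)
    for step in range(1000):
        logits = policy(pad(obs))[: env.action_space.n]
        action = torch.distributions.Categorical(logits=logits).sample()
        obs, reward, done, info = env.step(action.item())
        # ... an RL strategy would compute a loss here and call
        # optimizer.step(); a CL plugin would add replay or a penalty
        # on past-task parameters to mitigate forgetting ...
        if done:
            obs = env.reset()
    env.close()
```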
Related papers
- Accelerating Goal-Conditioned RL Algorithms and Research [17.155006770675904]
Self-supervised goal-conditioned reinforcement learning (GCRL) agents discover new behaviors by learning from the goals achieved during unstructured interaction with the environment.
However, these methods have not seen comparable success, owing to slow environment simulations that limit available data as well as a lack of stable algorithms.
We release a benchmark (JaxGCRL) for self-supervised GCRL, enabling researchers to train agents for millions of environment steps in minutes on a single GPU.
arXiv Detail & Related papers (2024-08-20T17:58:40Z)
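The "learning from achieved goals" idea in the JaxGCRL summary above can be illustrated with hindsight goal relabeling, a standard self-supervised GCRL ingredient; the sketch below is conceptual and not taken from JaxGCRL's codebase.

```python
# Conceptual hindsight relabeling: goals the agent happened to reach
# become supervision, so no hand-designed external reward is needed.
import random

def relabel_with_achieved_goals(trajectory):
    """trajectory: list of (state, action, achieved_goal) tuples."""
    relabeled = []
    for t, (state, action, achieved) in enumerate(trajectory):
        # "Future" strategy: pick a goal actually achieved at or after t,
        # so the stored transition carries a meaningful success signal.
        future = random.randrange(t, len(trajectory))
        new_goal = trajectory[future][2]
        reward = 1.0 if new_goal == achieved else 0.0  # sparse reward
        relabeled.append((state, new_goal, action, reward))
    return relabeled
```

Because the supervision comes from goals the agent actually reached during unstructured interaction, the procedure is self-supervised in the sense the summary describes.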
- HackAtari: Atari Learning Environments for Robust and Continual Reinforcement Learning [20.034972354302788]
Reinforcement learning (RL) leverages novelty as a means of exploration, yet agents often struggle to handle novel situations.
We propose HackAtari, a framework introducing controlled novelty to the most common RL benchmark, the Atari Learning Environment.
arXiv Detail & Related papers (2024-06-06T12:17:05Z)
- Hierarchical Continual Reinforcement Learning via Large Language Model [15.837883929274758]
Hi-Core is designed to facilitate the transfer of high-level knowledge.
It orchestrates a two-layer structure: high-level policy formulation by a large language model (LLM) and low-level policy learning guided by the resulting goals.
Hi-Core has demonstrated its effectiveness in handling diverse CRL tasks, outperforming popular baselines.
arXiv Detail & Related papers (2024-01-25T03:06:51Z)
- SequeL: A Continual Learning Library in PyTorch and JAX [50.33956216274694]
SequeL is a library for Continual Learning that supports both PyTorch and JAX frameworks.
It provides a unified interface for a wide range of Continual Learning algorithms, including regularization-based approaches, replay-based approaches, and hybrid approaches.
We release SequeL as an open-source library, enabling researchers and developers to easily experiment and extend the library for their own purposes.
arXiv Detail & Related papers (2023-04-21T10:00:22Z)
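As a concrete instance of the regularization-based family that the SequeL summary mentions, here is a minimal EWC-style penalty in PyTorch; it is a generic sketch under the usual diagonal-Fisher assumption, not SequeL's interface.

```python
# EWC-style regularization: penalize drift away from parameters that
# mattered on earlier tasks, weighted by a diagonal Fisher estimate.
# Generic sketch; not SequeL's actual interface.
import torch

def ewc_penalty(model, old_params, fisher, lam=100.0):
    loss = torch.tensor(0.0)
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return lam * loss

# Usage: total_loss = task_loss + ewc_penalty(model, old_params, fisher)
```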
- Robust Reinforcement Learning as a Stackelberg Game via Adaptively-Regularized Adversarial Training [43.97565851415018]
Robust Reinforcement Learning (RL) focuses on improving performance under model errors or adversarial attacks.
Most of the existing literature models Robust Adversarial RL (RARL) as a zero-sum simultaneous game with Nash equilibrium as the solution concept.
We introduce a novel hierarchical formulation of robust RL - a general-sum Stackelberg game model called RRL-Stack.
arXiv Detail & Related papers (2022-02-19T03:44:05Z)
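The contrast between the two formulations can be written schematically (notation assumed here, not taken from the paper): the zero-sum simultaneous game seeks a saddle point, while the Stackelberg model is a bilevel problem in which the leader anticipates the follower's best response.

```latex
% Zero-sum simultaneous game (Nash): agent and adversary move at once.
\max_{\theta} \min_{\phi} \; J(\theta, \phi)

% General-sum Stackelberg game (RRL-Stack, schematically): the leader
% optimizes while anticipating the follower's best response.
\max_{\theta} \; J_{\mathrm{agent}}\!\left(\theta, \phi^{*}(\theta)\right)
\quad \text{s.t.} \quad
\phi^{*}(\theta) \in \operatorname*{arg\,max}_{\phi} J_{\mathrm{adv}}(\theta, \phi)
```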
- RvS: What is Essential for Offline RL via Supervised Learning? [77.91045677562802]
Recent work has shown that supervised learning alone, without temporal difference (TD) learning, can be remarkably effective for offline RL.
In every environment suite we consider, simply maximizing likelihood with a two-layer feedforward MLP is competitive.
These results also probe the limits of existing RvS methods, which are comparatively weak on random data.
arXiv Detail & Related papers (2021-12-20T18:55:16Z)
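A minimal sketch of the RvS recipe described above: condition a small MLP on the state and an outcome variable (a goal or reward-to-go) and maximize the likelihood of the dataset actions. Dimensions and hyperparameters below are placeholders, not the paper's settings.

```python
# RvS-style offline RL as supervised learning: an outcome-conditioned
# two-layer MLP trained by maximum likelihood on dataset actions.
import torch
import torch.nn as nn

class ConditionedPolicy(nn.Module):
    def __init__(self, state_dim, cond_dim, n_actions, hidden=256):
        super().__init__()
        # The paper's finding: a two-layer feedforward MLP is competitive.
        self.net = nn.Sequential(
            nn.Linear(state_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state, cond):
        return self.net(torch.cat([state, cond], dim=-1))

policy = ConditionedPolicy(state_dim=8, cond_dim=1, n_actions=4)
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

# One supervised step on a (synthetic, placeholder) offline batch:
states, conds = torch.randn(32, 8), torch.rand(32, 1)
actions = torch.randint(0, 4, (32,))
loss = nn.functional.cross_entropy(policy(states, conds), actions)
opt.zero_grad(); loss.backward(); opt.step()
```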
- Avalanche: an End-to-End Library for Continual Learning [81.84325803942811]
We propose Avalanche, an open-source library for continual learning research based on PyTorch.
Avalanche is designed to provide a shared and collaborative codebase for fast prototyping, training, and reproducible evaluation of continual learning algorithms.
arXiv Detail & Related papers (2021-04-01T11:31:46Z)
- Continuous Coordination As a Realistic Scenario for Lifelong Learning [6.044372319762058]
We introduce a multi-agent lifelong learning testbed that supports both zero-shot and few-shot settings.
We evaluate several recent MARL methods, and benchmark state-of-the-art lifelong learning (LLL) algorithms under limited memory and computation.
We empirically show that the agents trained in our setup are able to coordinate well with unseen agents, without any additional assumptions made by previous works.
arXiv Detail & Related papers (2021-03-04T18:44:03Z)
- Reset-Free Lifelong Learning with Skill-Space Planning [105.00539596788127]
We propose Lifelong Skill Planning (LiSP), an algorithmic framework for non-episodic lifelong RL.
LiSP learns skills in an unsupervised manner using intrinsic rewards and plans over the learned skills using a learned dynamics model.
We demonstrate empirically that LiSP successfully enables long-horizon planning and learns agents that can avoid catastrophic failures even in challenging non-stationary and non-episodic environments.
arXiv Detail & Related papers (2020-12-07T09:33:02Z)
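The planning step the LiSP summary describes can be sketched as random-shooting search in skill space against a learned model; dynamics and reward_fn below are illustrative stand-ins, not the paper's implementation.

```python
# Skill-space planning sketch: sample candidate skill sequences, roll
# them out in a learned dynamics model, execute the best first skill.
import numpy as np

def plan_in_skill_space(state, dynamics, reward_fn, n_skills=16,
                        horizon=5, n_candidates=64, rng=None):
    rng = rng or np.random.default_rng(0)
    candidates = rng.integers(0, n_skills, size=(n_candidates, horizon))
    returns = np.zeros(n_candidates)
    for i, seq in enumerate(candidates):
        s = state
        for skill in seq:
            s = dynamics(s, skill)   # learned model, not the real env
            returns[i] += reward_fn(s)
    return candidates[int(np.argmax(returns))][0]
```

Planning in the learned model rather than the real environment is what makes this viable in non-episodic, reset-free settings: candidate plans can be evaluated without risking catastrophic failures.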
- The NetHack Learning Environment [79.06395964379107]
We present the NetHack Learning Environment (NLE), a procedurally generated rogue-like environment for Reinforcement Learning research.
We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL.
We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration.
arXiv Detail & Related papers (2020-06-24T14:12:56Z)
- MushroomRL: Simplifying Reinforcement Learning Research [60.70556446270147]
MushroomRL is an open-source Python library developed to simplify the process of implementing and running Reinforcement Learning (RL) experiments.
Compared to other available libraries, MushroomRL has been created with the purpose of providing a comprehensive and flexible framework to minimize the effort in implementing and testing novel RL methodologies.
arXiv Detail & Related papers (2020-01-04T17:23:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.