Integrating Distributed Architectures in Highly Modular RL Libraries
- URL: http://arxiv.org/abs/2007.02622v3
- Date: Mon, 12 Jun 2023 08:40:02 GMT
- Title: Integrating Distributed Architectures in Highly Modular RL Libraries
- Authors: Albert Bou, Sebastian Dittert and Gianni De Fabritiis
- Abstract summary: Most popular reinforcement learning libraries advocate for highly modular agent composability.
We propose a versatile approach that allows the definition of RL agents at different scales through independent reusable components.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advancing reinforcement learning (RL) requires tools that are flexible enough
to easily prototype new methods while avoiding impractically slow experimental
turnaround times. To match the first requirement, the most popular RL libraries
advocate for highly modular agent composability, which facilitates
experimentation and development. To solve challenging environments within
reasonable time frames, scaling RL to large sampling and computing resources
has proved a successful strategy. However, this capability has so far been
difficult to combine with modularity. In this work, we explore design choices
to allow agent composability both at a local and distributed level of
execution. We propose a versatile approach that allows the definition of RL
agents at different scales through independent reusable components. We
demonstrate experimentally that our design choices allow us to reproduce
classical benchmarks, explore multiple distributed architectures, and solve
novel and complex environments while giving the user full control over both the
agent definition and the training scheme. We believe this work can
provide useful insights to the next generation of RL libraries.
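The abstract's central idea is that the same reusable components should define an agent whether it runs locally or distributed. The sketch below is a hypothetical illustration of that design principle, not the paper's actual API: a local `Collector` and a `DistributedCollector` expose the same interface, so a training loop is unchanged when scaling out. All class and function names here are invented for illustration; a real implementation would back the distributed variant with Ray or multiprocessing workers.

```python
# Hypothetical sketch of scale-agnostic agent composition (not the paper's API).
import random

class Policy:
    """Minimal stochastic policy over a discrete action space."""
    def __init__(self, num_actions, seed=0):
        self.num_actions = num_actions
        self.rng = random.Random(seed)

    def act(self, obs):
        return self.rng.randrange(self.num_actions)

class Collector:
    """Local component: gathers (obs, action, reward) transitions."""
    def __init__(self, policy, env_step):
        self.policy = policy
        self.env_step = env_step

    def collect(self, obs, n):
        batch = []
        for _ in range(n):
            action = self.policy.act(obs)
            obs, reward = self.env_step(obs, action)
            batch.append((obs, action, reward))
        return batch

class DistributedCollector:
    """Same collect() interface, but fans work out over several workers.
    A real system would run workers in parallel processes; here they are
    simulated sequentially to keep the sketch self-contained."""
    def __init__(self, make_collector, num_workers):
        self.workers = [make_collector(i) for i in range(num_workers)]

    def collect(self, obs, n):
        per_worker = n // len(self.workers)
        batch = []
        for worker in self.workers:
            batch.extend(worker.collect(obs, per_worker))
        return batch

def toy_env_step(obs, action):
    # Toy environment: next observation and a reward equal to the action.
    return obs + 1, float(action)

local = Collector(Policy(num_actions=4), toy_env_step)
dist = DistributedCollector(
    lambda i: Collector(Policy(num_actions=4, seed=i), toy_env_step),
    num_workers=4,
)

# Both expose the same collect() call, so the surrounding training loop
# does not change when moving from local to distributed execution.
print(len(local.collect(0, 8)), len(dist.collect(0, 8)))
```

Because both collectors satisfy one interface, swapping the execution scale is a one-line change in the agent definition, which is the kind of composability the abstract argues for.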
Related papers
- EasyRL4Rec: An Easy-to-use Library for Reinforcement Learning Based Recommender Systems [18.22130279210423]
We introduce EasyRL4Rec, an easy-to-use code library designed specifically for RL-based RSs.
This library provides lightweight and diverse RL environments based on five public datasets.
EasyRL4Rec seeks to facilitate the model development and experimental process in the domain of RL-based RSs.
arXiv Detail & Related papers (2024-02-23T07:54:26Z)
- OpenRL: A Unified Reinforcement Learning Framework [19.12129820612253]
We present OpenRL, an advanced reinforcement learning (RL) framework.
It is designed to accommodate a diverse array of tasks, from single-agent challenges to complex multi-agent systems.
It integrates Natural Language Processing (NLP) with RL, enabling researchers to address a combination of RL training and language-centric tasks effectively.
arXiv Detail & Related papers (2023-12-20T12:04:06Z)
- LExCI: A Framework for Reinforcement Learning with Embedded Systems [1.8218298349840023]
We present a framework named LExCI, which bridges the gap between RL libraries and embedded systems.
It provides a free and open-source tool for training agents on embedded systems using the open-source library RLlib.
Its operability is demonstrated with two state-of-the-art RL-algorithms and a rapid control prototyping system.
arXiv Detail & Related papers (2023-12-05T13:06:25Z)
- PEAR: Primitive enabled Adaptive Relabeling for boosting Hierarchical Reinforcement Learning [25.84621883831624]
Hierarchical reinforcement learning (HRL) has the potential to solve complex long-horizon tasks using temporal abstraction and increased exploration.
We present Primitive Enabled Adaptive Relabeling (PEAR).
We first perform adaptive relabeling on a few expert demonstrations to generate efficient subgoal supervision.
We then jointly optimize HRL agents by employing reinforcement learning (RL) and imitation learning (IL).
arXiv Detail & Related papers (2023-06-10T09:41:30Z)
- Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration [87.53543137162488]
We propose an easy-to-implement online reinforcement learning (online RL) framework called MEX.
MEX integrates estimation and planning components while automatically balancing exploration and exploitation.
It can outperform baselines by a stable margin in various MuJoCo environments with sparse rewards.
arXiv Detail & Related papers (2023-05-29T17:25:26Z)
- CoRL: Environment Creation and Management Focused on System Integration [0.0]
The Core Reinforcement Learning library (CoRL) is a modular, composable, and hyper-configurable environment creation tool.
It allows minute control over agent observations, rewards, and done conditions through the use of easy-to-read configuration files, pydantic validators, and a functor design pattern.
arXiv Detail & Related papers (2023-03-03T19:01:53Z)
- Multi-Agent Reinforcement Learning for Microprocessor Design Space Exploration [71.95914457415624]
Microprocessor architects are increasingly resorting to domain-specific customization in the quest for high-performance and energy-efficiency.
We propose an alternative formulation that leverages Multi-Agent RL (MARL) to tackle this problem.
Our evaluation shows that the MARL formulation consistently outperforms single-agent RL baselines.
arXiv Detail & Related papers (2022-11-29T17:10:24Z)
- Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require large amounts of interaction between the agent and the environment.
We propose a new method that uses unsupervised model-based RL to pre-train the agent.
We show robust performance on the Real-World RL benchmark, hinting at resiliency to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z)
- Multitask Adaptation by Retrospective Exploration with Learned World Models [77.34726150561087]
We propose a meta-learned addressing model called RAMa that provides training samples for the MBRL agent taken from task-agnostic storage.
The model is trained to maximize the expected agent's performance by selecting promising trajectories solving prior tasks from the storage.
arXiv Detail & Related papers (2021-10-25T20:02:57Z)
- Scenic4RL: Programmatic Modeling and Generation of Reinforcement Learning Environments [89.04823188871906]
Generation of diverse realistic scenarios is challenging for real-time strategy (RTS) environments.
Most of the existing simulators rely on randomly generating the environments.
We introduce the benefits of adopting an existing formal scenario specification language, SCENIC, to assist researchers.
arXiv Detail & Related papers (2021-06-18T21:49:46Z)
- MushroomRL: Simplifying Reinforcement Learning Research [60.70556446270147]
MushroomRL is an open-source Python library developed to simplify the process of implementing and running Reinforcement Learning (RL) experiments.
Compared to other available libraries, MushroomRL has been created with the purpose of providing a comprehensive and flexible framework to minimize the effort in implementing and testing novel RL methodologies.
arXiv Detail & Related papers (2020-01-04T17:23:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.