Understanding the Synergies between Quality-Diversity and Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2303.06164v1
- Date: Fri, 10 Mar 2023 19:02:42 GMT
- Title: Understanding the Synergies between Quality-Diversity and Deep
Reinforcement Learning
- Authors: Bryan Lim, Manon Flageat, Antoine Cully
- Abstract summary: Generalized Actor-Critic QD-RL is a unified modular framework for actor-critic deep RL methods in the QD-RL setting.
We introduce two new algorithms, PGA-ME (SAC) and PGA-ME (DroQ), which apply recent advancements in Deep RL to the QD-RL setting.
- Score: 4.788163807490196
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The synergies between Quality-Diversity (QD) and Deep Reinforcement
Learning (RL) have led to powerful hybrid QD-RL algorithms that have shown
tremendous potential and bring the best of both fields. However, only a single
deep RL algorithm (TD3) has been used in prior hybrid methods, despite notable
progress made by other RL algorithms. Additionally, there are fundamental
differences between the optimization procedures of QD and RL that would
benefit from a more principled approach. We propose Generalized Actor-Critic
QD-RL, a unified modular framework for actor-critic deep RL methods in the
QD-RL setting. This framework provides a path to studying insights from Deep
RL in the QD-RL setting, which is an important and efficient way to make
progress in QD-RL. We introduce two new algorithms, PGA-ME (SAC) and PGA-ME
(DroQ), which apply recent advancements in Deep RL to the QD-RL setting and
solve the humanoid environment, which was not possible using existing QD-RL
algorithms. However, we also find that not all insights from Deep RL translate
effectively to QD-RL. Critically, this work also demonstrates that the
actor-critic models in QD-RL are generally insufficiently trained, and that
performance gains can be achieved without any additional environment
evaluations.
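For readers unfamiliar with the QD-RL setting, the following is a minimal, self-contained MAP-Elites sketch on a toy two-parameter search space. It illustrates only the QD side of the loop; the grid resolution, helper names, and toy fitness/descriptor functions are illustrative assumptions rather than the paper's implementation. A PGA-ME-style QD-RL algorithm would replace the random mutation with policy-gradient variation driven by an actor-critic (e.g. TD3, SAC, or DroQ) trained on transitions collected while evaluating offspring.

```python
# Minimal, self-contained MAP-Elites sketch on a toy two-parameter search space.
# All names and constants here are illustrative, not taken from the paper.
import random

GRID = 10  # archive resolution per behaviour-descriptor dimension

def evaluate(params):
    """Toy stand-in for an environment rollout: returns (fitness, behaviour descriptor)."""
    x, y = params
    fitness = -(x ** 2 + y ** 2)  # maximise closeness to the origin
    clamp = lambda v: min(max(v, 0.0), 0.999)
    descriptor = (clamp((x + 1) / 2), clamp((y + 1) / 2))  # map params into [0, 1)^2
    return fitness, descriptor

def to_cell(descriptor):
    return tuple(int(d * GRID) for d in descriptor)

archive = {}  # cell -> (fitness, params): one elite per behaviour niche

# Seed the archive with random solutions.
for _ in range(100):
    params = (random.uniform(-1, 1), random.uniform(-1, 1))
    fit, desc = evaluate(params)
    cell = to_cell(desc)
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, params)

# Main QD loop: select an elite, vary it, evaluate it, and keep it if it improves its cell.
for _ in range(5000):
    _, parent = random.choice(list(archive.values()))
    child = tuple(p + random.gauss(0.0, 0.1) for p in parent)  # GA-style variation
    # A PGA-ME-style algorithm would instead, for part of the offspring, apply
    # policy-gradient steps computed from an actor-critic trained on collected transitions.
    fit, desc = evaluate(child)
    cell = to_cell(desc)
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, child)

print(f"coverage: {len(archive)}/{GRID * GRID} cells, "
      f"best fitness: {max(f for f, _ in archive.values()):.3f}")
```

In the hybrid setting described in the abstract, the critic would be trained off-policy on all transitions gathered during offspring evaluation, which is where the paper's observation about insufficiently trained actor-critic models applies.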
Related papers
- Generative AI for Deep Reinforcement Learning: Framework, Analysis, and Use Cases [60.30995339585003]
Deep reinforcement learning (DRL) has been widely applied across various fields and has achieved remarkable accomplishments.
DRL faces certain limitations, including low sample efficiency and poor generalization.
We present how to leverage generative AI (GAI) to address these issues and enhance the performance of DRL algorithms.
arXiv Detail & Related papers (2024-05-31T01:25:40Z)
- ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL [80.10358123795946]
We develop a framework for building multi-turn RL algorithms for fine-tuning large language models.
Our framework adopts a hierarchical RL approach and runs two RL algorithms in parallel.
Empirically, we find that ArCHer significantly improves efficiency and performance on agent tasks.
arXiv Detail & Related papers (2024-02-29T18:45:56Z)
- Leveraging Knowledge Distillation for Efficient Deep Reinforcement Learning in Resource-Constrained Environments [0.0]
This paper aims to explore the potential of combining Deep Reinforcement Learning (DRL) with Knowledge Distillation (KD).
The primary objective is to provide a benchmark for evaluating the performance of different DRL algorithms that have been refined using KD techniques.
By exploring the combination of DRL and KD, this work aims to promote the development of models that require fewer GPU resources, learn more quickly, and make faster decisions in complex environments.
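As a rough illustration of the distillation step, and not the benchmark's actual setup, the sketch below trains a small student policy network to match a larger teacher's action distribution with a KL-divergence loss; the network sizes, dimensions, and training loop are assumptions.

```python
# Illustrative policy-distillation sketch (assumed setup, not the paper's code):
# a compact student is trained to imitate a larger teacher's action distribution,
# so deployment only needs the cheaper student network.
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, n_actions = 8, 4
teacher = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, n_actions))
student = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(1000):
    obs = torch.randn(64, obs_dim)  # stand-in for states drawn from a replay buffer
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(obs), dim=-1)
    student_log_probs = F.log_softmax(student(obs), dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```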
arXiv Detail & Related papers (2023-10-16T08:26:45Z)
- RL$^3$: Boosting Meta Reinforcement Learning via RL inside RL$^2$ [12.111848705677142]
We propose RL$^3$, a hybrid approach that incorporates action-values, learned per task through traditional RL, in the inputs to meta-RL.
We show that RL$^3$ earns greater cumulative reward in the long term, compared to RL$^2$, while maintaining data-efficiency in the short term, and generalizes better to out-of-distribution tasks.
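A minimal sketch of the input augmentation described above, with assumed names and shapes: an RL$^2$-style recurrent meta-policy normally consumes the observation together with the previous action, reward, and done flag; an RL$^3$-style agent additionally appends task-specific action-value estimates obtained by running standard RL within the current task.

```python
# Sketch of building the augmented meta-RL input (names and shapes are assumptions).
import torch

def build_meta_input(obs, prev_action_onehot, reward, done, q_estimates=None):
    """Concatenate the usual RL^2 inputs; optionally append per-task Q-value estimates (RL^3)."""
    parts = [obs, prev_action_onehot, reward.unsqueeze(-1), done.unsqueeze(-1)]
    if q_estimates is not None:
        parts.append(q_estimates)  # e.g. Q(s, a) for every action, learned inside the task
    return torch.cat(parts, dim=-1)

# Example: batch of 32, 10-dim observations, 4 discrete actions.
obs = torch.randn(32, 10)
prev_a = torch.zeros(32, 4)
reward = torch.zeros(32)
done = torch.zeros(32)
q_vals = torch.randn(32, 4)  # stand-in for action-values estimated by in-task RL
rl2_input = build_meta_input(obs, prev_a, reward, done)          # shape (32, 16)
rl3_input = build_meta_input(obs, prev_a, reward, done, q_vals)  # shape (32, 20)
```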
arXiv Detail & Related papers (2023-06-28T04:16:16Z)
- A Survey of Meta-Reinforcement Learning [69.76165430793571]
We cast the development of better RL algorithms as a machine learning problem itself in a process called meta-RL.
We discuss how, at a high level, meta-RL research can be clustered based on the presence of a task distribution and the learning budget available for each individual task.
We conclude by presenting the open problems on the path to making meta-RL part of the standard toolbox for a deep RL practitioner.
arXiv Detail & Related papers (2023-01-19T12:01:41Z)
- Deep Black-Box Reinforcement Learning with Movement Primitives [15.184283143878488]
We present a new algorithm for deep reinforcement learning (RL).
It is based on differentiable trust region layers, a successful on-policy deep RL algorithm.
We compare our ERL algorithm to state-of-the-art step-based algorithms in many complex simulated robotic control tasks.
arXiv Detail & Related papers (2022-10-18T06:34:52Z)
- DRL-based Slice Placement Under Non-Stationary Conditions [0.8459686722437155]
We consider online learning for optimal network slice placement under the assumption that slice requests arrive according to a non-stationary process.
We specifically propose two pure-DRL algorithms and two families of hybrid DRL-heuristic algorithms.
We show that the proposed hybrid DRL-heuristic algorithms require three orders of magnitude fewer learning episodes than pure-DRL to reach convergence.
arXiv Detail & Related papers (2021-08-05T10:05:12Z)
- RL-DARTS: Differentiable Architecture Search for Reinforcement Learning [62.95469460505922]
We introduce RL-DARTS, one of the first applications of Differentiable Architecture Search (DARTS) in reinforcement learning (RL).
By replacing the image encoder with a DARTS supernet, our search method is sample-efficient, requires minimal extra compute resources, and is also compatible with off-policy and on-policy RL algorithms, needing only minor changes in preexisting code.
We show that the supernet gradually learns better cells, leading to alternative architectures which can be highly competitive against manually designed policies, but also verify previous design choices for RL policies.
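To make the DARTS component concrete, here is a minimal mixed-operation module of the kind a DARTS supernet is built from; the candidate operations and sizes are illustrative assumptions, not the cells searched in the paper. Each edge outputs a softmax-weighted sum over candidate operations, so the architecture weights can be optimized by gradient descent alongside the usual RL losses.

```python
# Minimal DARTS-style mixed operation (illustrative candidate ops, not the paper's search space).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Conv2d(channels, channels, kernel_size=5, padding=2),
            nn.Identity(),
        ])
        # One learnable architecture parameter per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)  # relax the discrete choice of operation
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Example: a feature map from an RL image encoder.
x = torch.randn(1, 16, 8, 8)
y = MixedOp(16)(x)  # same shape as x; discretize by keeping argmax(alpha) after the search
```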
arXiv Detail & Related papers (2021-06-04T03:08:43Z)
- Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning [56.17667147101263]
In real-world tasks, reinforcement learning agents encounter situations that are not present during training time.
To ensure reliable performance, the RL agents need to exhibit robustness against worst-case situations.
We propose the Robust Hallucinated Upper-Confidence RL (RH-UCRL) algorithm to provably solve this problem.
arXiv Detail & Related papers (2021-03-18T16:50:17Z)
- Maximum Entropy RL (Provably) Solves Some Robust RL Problems [94.80212602202518]
We prove theoretically that standard maximum entropy RL is robust to some disturbances in the dynamics and the reward function.
Our results suggest that MaxEnt RL by itself is robust to certain disturbances, without requiring any additional modifications.
arXiv Detail & Related papers (2021-03-10T18:45:48Z)
- Active Finite Reward Automaton Inference and Reinforcement Learning Using Queries and Counterexamples [31.31937554018045]
Deep reinforcement learning (RL) methods require intensive data from the exploration of the environment to achieve satisfactory performance.
We propose a framework that enables an RL agent to reason over its exploration process and distill high-level knowledge for effectively guiding its future explorations.
Specifically, we propose a novel RL algorithm that learns high-level knowledge in the form of a finite reward automaton by using the L* learning algorithm.
arXiv Detail & Related papers (2020-06-28T21:13:08Z)