Evolving Populations of Diverse RL Agents with MAP-Elites
- URL: http://arxiv.org/abs/2303.12803v2
- Date: Thu, 15 Jun 2023 15:04:39 GMT
- Title: Evolving Populations of Diverse RL Agents with MAP-Elites
- Authors: Thomas Pierrot and Arthur Flajolet
- Abstract summary: We introduce a flexible framework that allows the use of any Reinforcement Learning (RL) algorithm instead of just policies.
We demonstrate the benefits brought about by our framework through extensive numerical experiments on a number of robotics control problems.
- Score: 1.5575376673936223
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Quality Diversity (QD) has emerged as a powerful alternative optimization
paradigm that aims at generating large and diverse collections of solutions,
notably with its flagship algorithm MAP-ELITES (ME) which evolves solutions
through mutations and crossovers. While very effective for some unstructured
problems, early ME implementations relied exclusively on random search to
evolve the population of solutions, rendering them notoriously
sample-inefficient for high-dimensional problems, such as when evolving neural
networks. Follow-up works considered exploiting gradient information to guide
the search in order to address these shortcomings through techniques borrowed
from either Black-Box Optimization (BBO) or Reinforcement Learning (RL). While
mixing RL techniques with ME unlocked state-of-the-art performance for robotics
control problems that require substantial exploration, it also saddled these ME
variants with limitations common among RL algorithms from which ME was free,
such as hyperparameter sensitivity, high stochasticity, and training
instability; these issues are exacerbated as the population size increases in
recent approaches where some components are shared across the population. Furthermore, existing
approaches mixing ME with RL tend to be tied to a specific RL algorithm, which
effectively prevents their use on problems where the corresponding RL algorithm
fails. To address these shortcomings, we introduce a flexible framework that
allows the use of any RL algorithm and alleviates the aforementioned
limitations by evolving populations of agents (whose definition includes
hyperparameters and all learnable parameters) instead of just policies. We
demonstrate the benefits brought about by our framework through extensive
numerical experiments on a number of robotics control problems, some of which
feature deceptive rewards, taken from the QD-RL literature.
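Taking the abstract at face value, the core idea is a standard MAP-Elites loop in which each archive cell stores a complete agent, i.e. its learnable parameters together with its RL hyperparameters, rather than a bare policy. The Python sketch below is purely illustrative: the toy fitness and behavior-descriptor functions, the grid size, and all names are hypothetical placeholders, and the RL training step that such a framework would interleave with mutation is only indicated in a comment; it is not the paper's implementation.

```python
import numpy as np

GRID = (10, 10)   # discretization of a 2-D behavior-descriptor space
archive = {}      # maps a grid cell -> (fitness, agent)

def random_agent(rng):
    # An "agent" bundles learnable parameters AND RL hyperparameters,
    # so both are subject to evolution (the key point of the framework).
    return {"params": rng.normal(size=64),
            "hypers": {"lr": 10 ** rng.uniform(-5, -2),
                       "discount": rng.uniform(0.95, 0.999)}}

def mutate(agent, rng):
    # Gaussian perturbation of parameters, log-scale perturbation of hypers.
    child = {"params": agent["params"] + 0.1 * rng.normal(size=64),
             "hypers": dict(agent["hypers"])}
    child["hypers"]["lr"] *= 10 ** rng.uniform(-0.5, 0.5)
    return child

def evaluate(agent):
    # Placeholder: a real implementation would roll out episodes and
    # return (episode return, behavior descriptor in [0, 1]^2).
    fitness = -float(np.sum(agent["params"] ** 2))
    bd = np.clip(np.abs(agent["params"][:2]), 0.0, 1.0)
    return fitness, bd

def cell_of(bd):
    # Map a behavior descriptor to its archive cell.
    return tuple(min(int(b * g), g - 1) for b, g in zip(bd, GRID))

def try_insert(agent):
    # Keep the agent only if it beats the current elite of its cell.
    fitness, bd = evaluate(agent)
    cell = cell_of(bd)
    if cell not in archive or fitness > archive[cell][0]:
        archive[cell] = (fitness, agent)

rng = np.random.default_rng(0)
for _ in range(100):                  # seed the archive randomly
    try_insert(random_agent(rng))

for _ in range(1000):                 # main MAP-Elites loop
    keys = list(archive)
    parent = archive[keys[rng.integers(len(keys))]][1]
    child = mutate(parent, rng)
    # A framework like the paper's could run any RL algorithm's update
    # on `child` here, using the hyperparameters carried by the agent.
    try_insert(child)

print(f"archive covers {len(archive)} / {GRID[0] * GRID[1]} cells")
```

Note how hyperparameters travel with each agent: a mutation can change the learning rate of one lineage without touching the rest of the population, which suggests how evolving whole agents can sidestep the shared-component instability mentioned above.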
Related papers
- Joint Demonstration and Preference Learning Improves Policy Alignment with Human Feedback [58.049113055986375]
We develop a single-stage approach named Alignment with Integrated Human Feedback (AIHF) to train reward models and the policy.
The proposed approach admits a suite of efficient algorithms, which can easily reduce to, and leverage, popular alignment algorithms.
We demonstrate the efficiency of the proposed solutions with extensive experiments involving alignment problems in LLMs and robotic control problems in MuJoCo.
arXiv Detail & Related papers (2024-06-11T01:20:53Z) - Adaptive $Q$-Network: On-the-fly Target Selection for Deep Reinforcement Learning [18.579378919155864]
We propose Adaptive $Q$-Network (AdaQN) to take into account the non-stationarity of the optimization procedure without requiring additional samples.
AdaQN is theoretically sound, and we empirically validate it in MuJoCo control problems and Atari 2600 games.
arXiv Detail & Related papers (2024-05-25T11:57:43Z) - Variational Autoencoders for exteroceptive perception in reinforcement learning-based collision avoidance [0.0]
Deep Reinforcement Learning (DRL) has emerged as a promising control framework.
Current DRL algorithms require disproportionately large computational resources to find near-optimal policies.
This paper presents a comprehensive exploration of our proposed approach in maritime control systems.
arXiv Detail & Related papers (2024-03-31T09:25:28Z) - Hyperparameter Optimization for Multi-Objective Reinforcement Learning [0.27309692684728615]
Reinforcement learning (RL) has emerged as a powerful approach for tackling complex problems.
The recent introduction of multi-objective reinforcement learning (MORL) has further expanded the scope of RL.
In practice, tuning the hyperparameters of these algorithms often proves challenging, leading to unsuccessful deployments of these techniques.
arXiv Detail & Related papers (2023-10-25T09:17:25Z) - Deep Black-Box Reinforcement Learning with Movement Primitives [15.184283143878488]
We present a new algorithm for deep reinforcement learning (RL).
It is built on differentiable trust-region layers, the core of a successful on-policy deep RL algorithm.
We compare our ERL algorithm to state-of-the-art step-based algorithms on many complex simulated robotic control tasks.
arXiv Detail & Related papers (2022-10-18T06:34:52Z) - Reinforcement Learning-Empowered Mobile Edge Computing for 6G Edge
Intelligence [76.96698721128406]
Mobile edge computing (MEC) is considered a novel paradigm for computation- and delay-sensitive tasks in fifth-generation (5G) networks and beyond.
This paper provides a comprehensive research review of model-free RL-enabled MEC and offers insights for future development.
arXiv Detail & Related papers (2022-01-27T10:02:54Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in
Edge Industrial IoT [106.83952081124195]
Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes.
We propose an adaptive stochastic incremental ADMM (asI-ADMM) algorithm and apply it to decentralized RL in edge-computing-empowered IIoT networks.
Experimental results show that our proposed algorithms outperform the state of the art in terms of communication costs and scalability, and adapt well to complex IoT environments.
arXiv Detail & Related papers (2021-06-30T16:49:07Z) - Combining Pessimism with Optimism for Robust and Efficient Model-Based
Deep Reinforcement Learning [56.17667147101263]
In real-world tasks, reinforcement learning agents encounter situations that were not present during training.
To ensure reliable performance, the RL agents need to exhibit robustness against worst-case situations.
We propose the Robust Hallucinated Upper-Confidence RL (RH-UCRL) algorithm to provably solve this problem.
arXiv Detail & Related papers (2021-03-18T16:50:17Z) - Sample-Efficient Automated Deep Reinforcement Learning [33.53903358611521]
We propose a population-based automated RL framework to meta-optimize arbitrary off-policy RL algorithms.
By sharing the collected experience across the population, we substantially increase the sample efficiency of the meta-optimization.
We demonstrate the capabilities of our sample-efficient AutoRL approach in a case study with the popular TD3 algorithm in the MuJoCo benchmark suite.
arXiv Detail & Related papers (2020-09-03T10:04:06Z) - SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep
Reinforcement Learning [102.78958681141577]
We present SUNRISE, a simple unified ensemble method, which is compatible with various off-policy deep reinforcement learning algorithms.
SUNRISE integrates two key ingredients: (a) ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble, and (b) an inference method that selects actions using the highest upper-confidence bounds for efficient exploration. A minimal sketch of both ingredients follows this entry.
arXiv Detail & Related papers (2020-07-09T17:08:44Z)
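Because this entry names two concrete mechanisms, a small NumPy sketch may help make them tangible. It is a simplified, tabular stand-in (the ensemble is just an array of Q-tables), not the paper's deep off-policy implementation; the weighting form sigmoid(-std * T) + 0.5 and the mean-plus-std action score follow the summary above, but the function names and array shapes here are assumptions.

```python
import numpy as np

def ucb_action(q_ensemble, state, lam=1.0):
    # UCB exploration: score each action by ensemble mean + lam * ensemble std
    # and act greedily on that optimistic score.
    q = q_ensemble[:, state, :]                      # (n_members, n_actions)
    return int(np.argmax(q.mean(axis=0) + lam * q.std(axis=0)))

def weighted_backup(q_ensemble, reward, next_state, gamma=0.99, temperature=10.0):
    # Weighted Bellman backup: return the TD target together with a weight
    # w = sigmoid(-std * T) + 0.5, which lies in (0.5, 1.0] and shrinks
    # when the ensemble disagrees about the next-state value.
    q_next = q_ensemble[:, next_state, :]            # (n_members, n_actions)
    a_next = int(np.argmax(q_next.mean(axis=0)))     # greedy action under mean Q
    std = float(q_next[:, a_next].std())             # epistemic uncertainty proxy
    weight = 1.0 / (1.0 + np.exp(std * temperature)) + 0.5
    target = reward + gamma * float(q_next[:, a_next].mean())
    return target, weight
```

Each ensemble member would then minimize weight * (Q(s, a) - target) ** 2 on its own bootstrapped mini-batch, so targets the ensemble is uncertain about contribute less to every member's update.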