Distributed Deep Reinforcement Learning: An Overview
- URL: http://arxiv.org/abs/2011.11012v1
- Date: Sun, 22 Nov 2020 13:24:35 GMT
- Title: Distributed Deep Reinforcement Learning: An Overview
- Authors: Mohammad Reza Samsami, Hossein Alimadad
- Abstract summary: In this article, we provide a survey of the role of distributed approaches in DRL.
We overview the state of the field by studying the key research works that have had a significant impact on how distributed methods are used in DRL.
We also evaluate these methods on different tasks and compare their performance with each other and with single-actor, single-learner agents.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep reinforcement learning (DRL) is a very active research area. However, several technical and scientific issues still need to be addressed, among them data inefficiency, the exploration-exploitation trade-off, and multi-task learning. Distributed modifications of DRL were therefore introduced: agents that can be run on many machines simultaneously. In this article, we provide a survey of the role of distributed approaches in DRL. We overview the state of the field by studying the key research works that have had a significant impact on how distributed methods are used in DRL. We review these papers from the perspective of distributed learning rather than that of innovations in the reinforcement learning algorithms themselves. We also evaluate these methods on different tasks and compare their performance with each other and with single-actor, single-learner agents.
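To make the actor/learner terminology above concrete, here is a minimal, hypothetical sketch of a distributed actor-learner loop: several actor processes collect experience in parallel and feed a single learner through a queue, in the spirit of systems such as Ape-X and IMPALA. All names, constants, and the stubbed environment/update logic are illustrative assumptions, not the architecture or code evaluated in the paper.

```python
# Illustrative sketch only: several actor processes generate transitions in
# parallel and push them to a shared queue; one learner process consumes them.
# The environment and the gradient update are stubbed out.
import multiprocessing as mp
import random

NUM_ACTORS = 4          # hypothetical number of parallel actors
STEPS_PER_ACTOR = 1000  # hypothetical rollout budget per actor


def actor(actor_id: int, queue) -> None:
    """Collect transitions with a (stubbed) behaviour policy and ship them to the learner."""
    state = 0.0
    for _ in range(STEPS_PER_ACTOR):
        action = random.choice([0, 1])          # placeholder for policy(state)
        next_state = state + random.random()    # placeholder for env.step(action)
        reward = 1.0 if action == 1 else 0.0
        queue.put((actor_id, state, action, reward, next_state))
        state = next_state
    queue.put(None)  # sentinel: this actor is done


def learner(queue) -> None:
    """Consume transitions from all actors and run (stubbed) update steps."""
    finished, updates = 0, 0
    while finished < NUM_ACTORS:
        item = queue.get()
        if item is None:
            finished += 1
            continue
        updates += 1  # placeholder for a replay-buffer insert + SGD step
    print(f"learner performed {updates} updates from {NUM_ACTORS} actors")


if __name__ == "__main__":
    q = mp.Queue()
    actors = [mp.Process(target=actor, args=(i, q)) for i in range(NUM_ACTORS)]
    for p in actors:
        p.start()
    learner(q)  # single learner; contrast with a single-actor agent doing both roles
    for p in actors:
        p.join()
```

In a real system the plain queue would typically be replaced by a replay buffer or parameter server, and each actor would periodically pull fresh policy weights from the learner.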
Related papers
- Generative AI for Deep Reinforcement Learning: Framework, Analysis, and Use Cases [60.30995339585003]
Deep reinforcement learning (DRL) has been widely applied across various fields and has achieved remarkable accomplishments.
DRL faces certain limitations, including low sample efficiency and poor generalization.
We present how to leverage generative AI (GAI) to address these issues and enhance the performance of DRL algorithms.
arXiv Detail & Related papers (2024-05-31T01:25:40Z) - Sample Efficient Myopic Exploration Through Multitask Reinforcement Learning with Diverse Tasks [53.44714413181162]
This paper shows that when an agent is trained on a sufficiently diverse set of tasks, a generic policy-sharing algorithm with myopic exploration design can be sample-efficient.
To the best of our knowledge, this is the first theoretical demonstration of the "exploration benefits" of multitask reinforcement learning (MTRL).
arXiv Detail & Related papers (2024-03-03T22:57:44Z) - Leveraging Knowledge Distillation for Efficient Deep Reinforcement Learning in Resource-Constrained Environments [0.0]
This paper explores the potential of combining Deep Reinforcement Learning (DRL) with Knowledge Distillation (KD).
The primary objective is to provide a benchmark for evaluating the performance of different DRL algorithms that have been refined using KD techniques.
By exploring the combination of DRL and KD, this work aims to promote the development of models that require fewer GPU resources, learn more quickly, and make faster decisions in complex environments.
arXiv Detail & Related papers (2023-10-16T08:26:45Z) - Evolutionary Reinforcement Learning: A Survey [31.112066295496003]
Reinforcement learning (RL) is a machine learning approach that trains agents to maximize cumulative rewards through interactions with environments.
This article presents a comprehensive survey of state-of-the-art methods for integrating evolutionary computation (EC) into RL, referred to as evolutionary reinforcement learning (EvoRL).
arXiv Detail & Related papers (2023-03-07T01:38:42Z) - Effective Multimodal Reinforcement Learning with Modality Alignment and Importance Enhancement [41.657470314421204]
It is challenging to train an agent via reinforcement learning due to the heterogeneity and dynamic importance of different modalities.
We propose a novel multimodal RL approach that performs modality alignment and importance enhancement according to the modalities' similarity and importance.
We test our approach on several multimodal RL domains, showing that it outperforms state-of-the-art methods in terms of learning speed and policy quality.
arXiv Detail & Related papers (2023-02-18T12:35:42Z) - A Survey of Meta-Reinforcement Learning [69.76165430793571]
We cast the development of better RL algorithms as a machine learning problem itself in a process called meta-RL.
We discuss how, at a high level, meta-RL research can be clustered based on the presence of a task distribution and the learning budget available for each individual task.
We conclude by presenting the open problems on the path to making meta-RL part of the standard toolbox for a deep RL practitioner.
arXiv Detail & Related papers (2023-01-19T12:01:41Z) - Pretraining in Deep Reinforcement Learning: A Survey [17.38360092869849]
Pretraining has been shown to be effective in acquiring transferable knowledge.
Due to the nature of reinforcement learning, pretraining in this field is faced with unique challenges.
arXiv Detail & Related papers (2022-11-08T02:17:54Z) - A Comprehensive Survey of Data Augmentation in Visual Reinforcement Learning [53.35317176453194]
Data augmentation (DA) has become a widely used technique in visual RL for acquiring sample-efficient and generalizable policies.
We present a principled taxonomy of the existing augmentation techniques used in visual RL and conduct an in-depth discussion on how to better leverage augmented data.
As the first comprehensive survey of DA in visual RL, this work is expected to offer valuable guidance to this emerging field (a minimal augmentation sketch follows this list).
arXiv Detail & Related papers (2022-10-10T11:01:57Z) - Conservative Data Sharing for Multi-Task Offline Reinforcement Learning [119.85598717477016]
We argue that a natural use case of offline RL is in settings where we can pool large amounts of data collected in various scenarios for solving different tasks.
We develop a simple technique for data-sharing in multi-task offline RL that routes data based on the improvement over the task-specific data.
arXiv Detail & Related papers (2021-09-16T17:34:06Z) - RL Unplugged: A Suite of Benchmarks for Offline Reinforcement Learning [108.9599280270704]
We propose a benchmark called RL Unplugged to evaluate and compare offline RL methods.
RL Unplugged includes data from a diverse range of domains including games and simulated motor control problems.
We will release data for all our tasks and open-source all algorithms presented in this paper.
arXiv Detail & Related papers (2020-06-24T17:14:51Z)
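The data-augmentation survey above treats DA as a route to sample-efficient, generalizable visual RL. As referenced there, the following is a minimal, hypothetical sketch of one widely used augmentation (a DrQ-style random shift of image observations); the function name and parameters are illustrative assumptions, not code from any of the listed papers.

```python
# Illustrative sketch of random-shift image augmentation for visual RL.
import numpy as np


def random_shift(obs: np.ndarray, pad: int = 4) -> np.ndarray:
    """Pad an (H, W, C) observation by `pad` pixels on each side, then crop
    a randomly shifted window of the original size."""
    h, w, _ = obs.shape
    padded = np.pad(obs, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    top = np.random.randint(0, 2 * pad + 1)
    left = np.random.randint(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w, :]


if __name__ == "__main__":
    frame = np.random.randint(0, 256, size=(84, 84, 3), dtype=np.uint8)
    augmented = random_shift(frame)
    assert augmented.shape == frame.shape  # augmentation preserves the input shape
```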
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.