Recurrent Off-policy Baselines for Memory-based Continuous Control
- URL: http://arxiv.org/abs/2110.12628v1
- Date: Mon, 25 Oct 2021 04:08:57 GMT
- Title: Recurrent Off-policy Baselines for Memory-based Continuous Control
- Authors: Zhihan Yang, Hai Nguyen
- Abstract summary: When the environment is partially observable (PO), a deep reinforcement learning (RL) agent must learn a suitable temporal representation of the entire history in addition to a control strategy.
Inspired by recent success in model-free image-based RL, we noticed the absence of a model-free baseline for history-based RL.
We implement versions of DDPG, TD3, and SAC (RDPG, RTD3, and RSAC) in this work, evaluate them on short-term and long-term PO domains, and investigate key design choices.
- Score: 1.0965065178451106
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When the environment is partially observable (PO), a deep reinforcement
learning (RL) agent must learn a suitable temporal representation of the entire
history in addition to a control strategy. This problem is not novel, and both
model-free and model-based algorithms have been proposed for it. However,
inspired by recent success in model-free image-based RL, we
noticed the absence of a model-free baseline for history-based RL that (1) uses
full history and (2) incorporates recent advances in off-policy continuous
control. Therefore, we implement recurrent versions of DDPG, TD3, and SAC
(RDPG, RTD3, and RSAC) in this work, evaluate them on short-term and long-term
PO domains, and investigate key design choices. Our experiments show that RDPG
and RTD3 can surprisingly fail on some domains and that RSAC is the most
reliable, reaching near-optimal performance on nearly all domains. However, one
task that requires systematic exploration still proved to be difficult, even
for RSAC. These results show that model-free RL can learn good temporal
representation using only reward signals; the primary difficulty seems to be
computational cost and exploration. To facilitate future research, we have made
our PyTorch implementation publicly available at
https://github.com/zhihanyang2022/off-policy-continuous-control.
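To make the recurrent-baseline idea concrete, below is a minimal PyTorch sketch of an LSTM-based Gaussian actor that conditions on the full observation history, in the spirit of RSAC. The class name, layer sizes, and layout are illustrative assumptions rather than the paper's implementation; see the repository above for the actual RDPG/RTD3/RSAC code.
```python
# Minimal, illustrative sketch (not the authors' code) of a recurrent
# SAC-style actor that summarizes the full observation history with an LSTM.
import torch
import torch.nn as nn

class RecurrentGaussianActor(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden_size: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_size, batch_first=True)
        self.mean_head = nn.Linear(hidden_size, act_dim)
        self.log_std_head = nn.Linear(hidden_size, act_dim)

    def forward(self, obs_history, hidden=None):
        # obs_history: (batch, time, obs_dim) -- the history observed so far.
        summary, hidden = self.lstm(obs_history, hidden)
        mean = self.mean_head(summary)
        log_std = self.log_std_head(summary).clamp(-20, 2)
        dist = torch.distributions.Normal(mean, log_std.exp())
        # Reparameterized sample squashed to (-1, 1), as in SAC.
        action = torch.tanh(dist.rsample())
        return action, hidden

# Usage: at each environment step, feed only the newest observation and carry
# the recurrent state forward; during training, feed whole episode segments.
actor = RecurrentGaussianActor(obs_dim=17, act_dim=6)
obs = torch.randn(1, 1, 17)   # (batch=1, time=1, obs_dim)
action, h = actor(obs)        # h carries the history summary to the next step
```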
Related papers
- Langevin Soft Actor-Critic: Efficient Exploration through Uncertainty-Driven Critic Learning [33.42657871152637]
Langevin Soft Actor Critic (LSAC) prioritizes enhancing critic learning through uncertainty estimation over policy optimization.
LSAC outperforms or matches the performance of mainstream model-free RL algorithms for continuous control tasks.
Notably, LSAC marks the first successful application of LMC-based Thompson sampling to continuous control tasks with continuous action spaces.
arXiv Detail & Related papers (2025-01-29T18:18:00Z)
- Tangled Program Graphs as an alternative to DRL-based control algorithms for UAVs [0.43695508295565777]
Deep reinforcement learning (DRL) is currently the most popular AI-based approach to autonomous vehicle control.
This approach has some significant drawbacks: high computational requirements and low explainability.
We propose to use Tangled Program Graphs (TPGs) as an alternative for DRL in control-related tasks.
arXiv Detail & Related papers (2024-11-08T14:20:29Z)
- Tractable Offline Learning of Regular Decision Processes [50.11277112628193]
This work studies offline Reinforcement Learning (RL) in a class of non-Markovian environments called Regular Decision Processes (RDPs).
In RDPs, the unknown dependency of future observations and rewards on past interactions can be captured by a hidden finite-state automaton.
Many algorithms first reconstruct this unknown dependency using automata learning techniques.
arXiv Detail & Related papers (2024-09-04T14:26:58Z)
- Pretty darn good control: when are approximate solutions better than approximate models [0.0]
We show that DRL algorithms can successfully approximate solutions in a non-linear three-variable model for a fishery.
We show that the policy obtained with DRL is both more profitable and more sustainable than any constant mortality policy.
arXiv Detail & Related papers (2023-08-25T19:58:17Z)
- Partial Observability during DRL for Robot Control [6.181642248900806]
We investigate partial observability as a potential failure source of applying Deep Reinforcement Learning to robot control tasks.
We compare the performance of three common DRL algorithms (TD3, SAC, and PPO) under various partial observability conditions.
We find that TD3 and SAC become easily stuck in local optima and underperform PPO.
arXiv Detail & Related papers (2022-09-12T03:12:04Z)
- When does return-conditioned supervised learning work for offline reinforcement learning? [51.899892382786526]
We study the capabilities and limitations of return-conditioned supervised learning.
We find that RCSL returns the optimal policy under a set of assumptions stronger than those needed for the more traditional dynamic programming-based algorithms.
arXiv Detail & Related papers (2022-06-02T15:05:42Z)
- Jump-Start Reinforcement Learning [68.82380421479675]
We present a meta algorithm that can use offline data, demonstrations, or a pre-existing policy to initialize an RL policy.
In particular, we propose Jump-Start Reinforcement Learning (JSRL), an algorithm that employs two policies to solve tasks.
We show via experiments that JSRL is able to significantly outperform existing imitation and reinforcement learning algorithms.
arXiv Detail & Related papers (2022-04-05T17:25:22Z)
- Recurrent Model-Free RL is a Strong Baseline for Many POMDPs [73.39666827525782]
Many problems in RL, such as meta RL, robust RL, and generalization in RL, can be cast as POMDPs.
In theory, simply augmenting model-free RL with memory, such as recurrent neural networks, provides a general approach to solving all types of POMDPs.
Prior work has found that such recurrent model-free RL methods tend to perform worse than more specialized algorithms that are designed for specific types of POMDPs.
arXiv Detail & Related papers (2021-10-11T07:09:14Z)
- RL-DARTS: Differentiable Architecture Search for Reinforcement Learning [62.95469460505922]
We introduce RL-DARTS, one of the first applications of Differentiable Architecture Search (DARTS) in reinforcement learning (RL).
By replacing the image encoder with a DARTS supernet, our search method is sample-efficient, requires minimal extra compute resources, and is also compatible with off-policy and on-policy RL algorithms, needing only minor changes in preexisting code.
We show that the supernet gradually learns better cells, leading to alternative architectures which can be highly competitive against manually designed policies, but also verify previous design choices for RL policies.
arXiv Detail & Related papers (2021-06-04T03:08:43Z)
- Model-based Reinforcement Learning for Continuous Control with Posterior Sampling [10.91557009257615]
We study model-based posterior sampling for reinforcement learning (PSRL) in continuous state-action spaces.
We present MPC-PSRL, a model-based posterior sampling algorithm with model predictive control for action selection.
arXiv Detail & Related papers (2020-11-20T21:00:31Z)
- MOPO: Model-based Offline Policy Optimization [183.6449600580806]
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data.
We show that an existing model-based RL algorithm already produces significant gains in the offline setting.
We propose to modify existing model-based RL methods by training them on rewards artificially penalized by the uncertainty of the dynamics (this penalty is written out as a formula after this list).
arXiv Detail & Related papers (2020-05-27T08:46:41Z)
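For the MOPO entry above, the uncertainty-penalized reward it describes is commonly written as the single expression below, where \hat{r} is the learned reward model, u(s, a) is an estimate of the dynamics model's error, and \lambda is a penalty coefficient; the notation here is a sketch of that formulation rather than text quoted from the paper.
```latex
\tilde{r}(s, a) = \hat{r}(s, a) - \lambda\, u(s, a)
```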
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.