Latent Exploration for Reinforcement Learning
- URL: http://arxiv.org/abs/2305.20065v2
- Date: Sun, 29 Oct 2023 16:30:51 GMT
- Title: Latent Exploration for Reinforcement Learning
- Authors: Alberto Silvio Chiappa and Alessandro Marin Vargas and Ann Zixiang Huang and Alexander Mathis
- Abstract summary: In Reinforcement Learning, agents learn policies by exploring and interacting with the environment.
We propose LATent TIme-Correlated Exploration (Lattice), a method to inject temporally-correlated noise into the latent state of the policy network.
- Score: 87.42776741119653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Reinforcement Learning, agents learn policies by exploring and interacting
with the environment. Due to the curse of dimensionality, learning policies
that map high-dimensional sensory input to motor output is particularly
challenging. During training, state-of-the-art methods (SAC, PPO, etc.) explore
the environment by perturbing the actuation with independent Gaussian noise.
While this unstructured exploration has proven successful in numerous tasks, it
can be suboptimal for overactuated systems. When multiple actuators, such as
motors or muscles, drive behavior, uncorrelated perturbations risk diminishing
each other's effect, or modifying the behavior in a task-irrelevant way. While
solutions to introduce time correlation across action perturbations exist,
introducing correlation across actuators has been largely ignored. Here, we
propose LATent TIme-Correlated Exploration (Lattice), a method to inject
temporally-correlated noise into the latent state of the policy network, which
can be seamlessly integrated with on- and off-policy algorithms. We demonstrate
that the noisy actions generated by perturbing the network's activations can be
modeled as a multivariate Gaussian distribution with a full covariance matrix.
In the PyBullet locomotion tasks, Lattice-SAC achieves state-of-the-art
results and reaches 18% higher reward than unstructured exploration in the
Humanoid environment. In the musculoskeletal control environments of MyoSuite,
Lattice-PPO achieves higher reward in most reaching and object manipulation
tasks, while also finding more energy-efficient policies with reductions of
20-60%. Overall, we demonstrate the effectiveness of structured action noise in
time and actuator space for complex motor control tasks. The code is available
at: https://github.com/amathislab/lattice.
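To make the abstract's claim concrete, the following is a minimal sketch (in PyTorch, not the authors' implementation; the official code is in the linked repository) of how isotropic Gaussian noise injected into the latent input of a linear policy head yields action noise with a full covariance matrix. The layer sizes and the noise scale are illustrative assumptions; Lattice additionally correlates the latent noise over time, which is omitted here.
```python
# Minimal sketch, assuming a linear policy head W acting on a latent state z.
# Perturbing z with isotropic Gaussian noise makes the action a multivariate
# Gaussian with a full covariance matrix, as stated in the abstract.
import torch

latent_dim, action_dim = 8, 4                 # hypothetical sizes
W = torch.randn(action_dim, latent_dim)       # last linear layer of the policy
z = torch.randn(latent_dim)                   # latent state for one observation
sigma = 0.1                                   # latent noise scale (assumed)

eps = sigma * torch.randn(latent_dim)         # noise injected in latent space
action = W @ (z + eps)                        # noisy action

# The action is distributed as N(W z, sigma^2 * W W^T): the covariance
# W W^T is generally full, so exploration noise is correlated across actuators.
mean = W @ z
cov = sigma**2 * (W @ W.T)
```
Keeping the same latent perturbation for several consecutive steps would add the temporal correlation that the method's name refers to.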
Related papers
- Reinforcement Learning with Action Sequence for Data-Efficient Robot Learning [62.3886343725955]
We introduce a novel RL algorithm that learns a critic network that outputs Q-values over a sequence of actions.
By explicitly training the value functions to learn the consequence of executing a series of current and future actions, our algorithm allows for learning useful value functions from noisy trajectories.
arXiv Detail & Related papers (2024-11-19T01:23:52Z)
- Variable-Agnostic Causal Exploration for Reinforcement Learning [56.52768265734155]
We introduce a novel framework, Variable-Agnostic Causal Exploration for Reinforcement Learning (VACERL).
Our approach automatically identifies crucial observation-action steps associated with key variables using attention mechanisms.
It constructs the causal graph connecting these steps, which guides the agent towards observation-action pairs with greater causal influence on task completion.
arXiv Detail & Related papers (2024-07-17T09:45:27Z)
- Reconciling Spatial and Temporal Abstractions for Goal Representation [0.4813333335683418]
Goal representation affects the performance of Hierarchical Reinforcement Learning (HRL) algorithms.
Recent studies show that representations that preserve temporally abstract environment dynamics are successful in solving difficult problems.
We propose a novel three-layer HRL algorithm that introduces, at different levels of the hierarchy, both a spatial and a temporal goal abstraction.
arXiv Detail & Related papers (2024-01-18T10:33:30Z)
- Mission-driven Exploration for Accelerated Deep Reinforcement Learning with Temporal Logic Task Specifications [11.812602599752294]
We consider robots with unknown dynamics operating in environments with unknown structure.
Our goal is to synthesize a control policy that maximizes the probability of satisfying an automaton-encoded task.
We propose a novel DRL algorithm that learns control policies notably faster than comparable methods.
arXiv Detail & Related papers (2023-11-28T18:59:58Z)
- Nonprehensile Planar Manipulation through Reinforcement Learning with Multimodal Categorical Exploration [8.343657309038285]
Reinforcement Learning is a powerful framework for developing such robot controllers.
We propose a multimodal exploration approach through categorical distributions, which enables us to train planar pushing RL policies.
We show that the learned policies are robust to external disturbances and observation noise, and scale to tasks with multiple pushers.
arXiv Detail & Related papers (2023-08-04T16:55:00Z)
- Deep Multi-Agent Reinforcement Learning with Hybrid Action Spaces based on Maximum Entropy [0.0]
We propose Deep Multi-Agent Hybrid Soft Actor-Critic (MAHSAC) to handle multi-agent problems with hybrid action spaces.
This algorithm follows the centralized training with decentralized execution (CTDE) paradigm and extends the Soft Actor-Critic (SAC) algorithm to handle hybrid action spaces.
We run experiments on a simple multi-agent particle world with continuous observations and a discrete action space, along with basic simulated physics.
arXiv Detail & Related papers (2022-06-10T13:52:59Z)
- Accelerated Policy Learning with Parallel Differentiable Simulation [59.665651562534755]
We present a differentiable simulator and a new policy learning algorithm (SHAC).
Our algorithm alleviates problems with local minima through a smooth critic function.
We show substantial improvements in sample efficiency and wall-clock time over state-of-the-art RL and differentiable simulation-based algorithms.
arXiv Detail & Related papers (2022-04-14T17:46:26Z)
- Learning Robust Policy against Disturbance in Transition Dynamics via State-Conservative Policy Optimization [63.75188254377202]
Deep reinforcement learning algorithms can perform poorly in real-world tasks due to the discrepancy between source and target environments.
We propose State-Conservative Policy Optimization (SCPO), a novel model-free actor-critic algorithm that learns robust policies without modeling the disturbance in advance.
Experiments on several robot control tasks demonstrate that SCPO learns policies that are robust to disturbances in the transition dynamics.
arXiv Detail & Related papers (2021-12-20T13:13:05Z)
- Augmenting Reinforcement Learning with Behavior Primitives for Diverse Manipulation Tasks [17.13584584844048]
This work introduces MAnipulation Primitive-augmented reinforcement LEarning (MAPLE), a learning framework that augments standard reinforcement learning algorithms with a pre-defined library of behavior primitives.
We develop a hierarchical policy that involves the primitives and instantiates their executions with input parameters.
We demonstrate that MAPLE outperforms baseline approaches by a significant margin on a suite of simulated manipulation tasks.
arXiv Detail & Related papers (2021-10-07T17:44:33Z)
- Discrete Action On-Policy Learning with Action-Value Critic [72.20609919995086]
Reinforcement learning (RL) in discrete action space is ubiquitous in real-world applications, but its complexity grows exponentially with the action-space dimension.
We construct a critic to estimate action-value functions, apply it to correlated actions, and combine the critic-estimated action values to control the variance of gradient estimation (see the illustrative sketch below).
These efforts result in a new discrete action on-policy RL algorithm that empirically outperforms related on-policy algorithms relying on variance control techniques.
arXiv Detail & Related papers (2020-02-10T04:23:09Z)
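The variance-control idea in the last entry can be illustrated with a generic sketch: in a discrete action space, a critic's Q-value estimates for all actions can be combined analytically (an "all-action" policy gradient) instead of relying on a single sampled action. This is not the estimator proposed in the cited paper, which additionally exploits correlated action samples; the network sizes and the assumption of an already-trained critic are illustrative.
```python
# Generic sketch of variance control with critic-estimated action values in a
# discrete action space ("all-action" policy gradient), not the cited paper's
# exact estimator. Sizes and the pre-trained critic are assumptions.
import torch
import torch.nn.functional as F

n_actions, obs_dim = 6, 10
policy = torch.nn.Linear(obs_dim, n_actions)   # logits over discrete actions
critic = torch.nn.Linear(obs_dim, n_actions)   # Q(s, .) head, assumed trained

obs = torch.randn(32, obs_dim)                 # batch of observations
probs = F.softmax(policy(obs), dim=-1)
with torch.no_grad():
    q_values = critic(obs)                     # critic-estimated action values

# Weighting Q over all actions by the policy uses every action's critic
# estimate, giving lower variance than a single-sample REINFORCE estimator.
policy_loss = -(probs * q_values).sum(dim=-1).mean()
policy_loss.backward()
```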