LASER: Learning a Latent Action Space for Efficient Reinforcement Learning
- URL: http://arxiv.org/abs/2103.15793v2
- Date: Tue, 30 Mar 2021 12:19:29 GMT
- Title: LASER: Learning a Latent Action Space for Efficient Reinforcement Learning
- Authors: Arthur Allshire, Roberto Martín-Martín, Charles Lin, Shawn Manuel, Silvio Savarese, Animesh Garg
- Abstract summary: We present LASER, a method to learn latent action spaces for efficient reinforcement learning.
We show improved sample efficiency compared to the original action space from better alignment of the action space to the task space, as we observe with visualizations of the learned action space manifold.
- Score: 41.53297694894669
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The process of learning a manipulation task depends strongly on the action
space used for exploration: posed in the incorrect action space, solving a task
with reinforcement learning can be drastically inefficient. Additionally,
similar tasks or instances of the same task family impose latent manifold
constraints on the most effective action space: the task family can be best
solved with actions in a manifold of the entire action space of the robot.
Combining these insights we present LASER, a method to learn latent action
spaces for efficient reinforcement learning. LASER factorizes the learning
problem into two sub-problems, namely action space learning and policy learning
in the new action space. It leverages data from similar manipulation task
instances, either from an offline expert or online during policy learning, and
learns from these trajectories a mapping from the original to a latent action
space. LASER is trained as a variational encoder-decoder model to map raw
actions into a disentangled latent action space while maintaining action
reconstruction and latent space dynamic consistency. We evaluate LASER on two
contact-rich robotic tasks in simulation, and analyze the benefit of policy
learning in the generated latent action space. We show improved sample
efficiency compared to the original action space from better alignment of the
action space to the task space, as we observe with visualizations of the
learned action space manifold. Additional details:
https://www.pair.toronto.edu/laser
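The abstract describes LASER as a variational encoder-decoder that maps raw robot actions into a disentangled latent action space while preserving action reconstruction. The following is a minimal numpy sketch of that encoder-decoder structure, not the authors' implementation: the linear maps stand in for trained neural networks, and all dimensions, weights, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

RAW_DIM, LATENT_DIM = 7, 2  # e.g. a 7-DoF arm action mapped to a 2-D latent manifold

# Placeholder linear encoder/decoder weights; a real model would train
# neural networks on trajectories from similar task instances.
W_mu = rng.normal(size=(LATENT_DIM, RAW_DIM))
W_logvar = rng.normal(size=(LATENT_DIM, RAW_DIM))
W_dec = rng.normal(size=(RAW_DIM, LATENT_DIM))

def encode(a):
    """Map a raw action to the parameters of a latent Gaussian."""
    return W_mu @ a, W_logvar @ a

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (differentiable in a real autodiff framework)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent action back to the robot's raw action space."""
    return W_dec @ z

def vae_loss(a):
    """Reconstruction term plus KL regularizer toward a standard normal prior."""
    mu, logvar = encode(a)
    a_hat = decode(reparameterize(mu, logvar))
    recon = np.sum((a - a_hat) ** 2)
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon + kl

a = rng.normal(size=RAW_DIM)
loss = vae_loss(a)
```

In the full method, a policy would then act directly in the latent space and its outputs would be decoded back to raw actions; the dynamic-consistency term mentioned in the abstract is omitted here for brevity.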
Related papers
- SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation [58.14969377419633]
We propose SPIRE, a system that first decomposes tasks into smaller learning subproblems and then combines imitation and reinforcement learning to maximize their strengths.
We find that SPIRE outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance.
arXiv Detail & Related papers (2024-10-23T17:42:07Z)
- Empowering Large Language Model Agents through Action Learning [85.39581419680755]
Large Language Model (LLM) Agents have recently garnered increasing interest yet they are limited in their ability to learn from trial and error.
We argue that the capacity to learn new actions from experience is fundamental to the advancement of learning in LLM agents.
We introduce a framework LearnAct with an iterative learning strategy to create and improve actions in the form of Python functions.
arXiv Detail & Related papers (2024-02-24T13:13:04Z)
- MAN: Multi-Action Networks Learning [0.0]
We introduce a Deep Reinforcement Learning algorithm called Multi-Action Networks (MAN) Learning.
We propose separating the action space into two components, creating a Value Neural Network for each sub-action.
Then, MAN uses temporal-difference learning to train the networks synchronously, which is simpler than training a single network with a large action output.
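The idea of one value network per sub-action trained synchronously with temporal-difference learning can be sketched in tabular form. This is a toy illustration under assumed details (the state/action sizes, and combining sub-action values by averaging their maxima, are my assumptions, not the paper's specification):

```python
import numpy as np

# Toy factored setting: 4 states, a 2-part action with 3 choices per part.
N_STATES, N_SUB1, N_SUB2 = 4, 3, 3
Q1 = np.zeros((N_STATES, N_SUB1))  # value table for sub-action 1
Q2 = np.zeros((N_STATES, N_SUB2))  # value table for sub-action 2
ALPHA, GAMMA = 0.5, 0.9

def td_update(s, a1, a2, r, s_next):
    """Update both sub-action value tables synchronously toward a shared TD target."""
    target = r + GAMMA * (Q1[s_next].max() + Q2[s_next].max()) / 2
    Q1[s, a1] += ALPHA * (target - Q1[s, a1])
    Q2[s, a2] += ALPHA * (target - Q2[s, a2])

td_update(s=0, a1=1, a2=2, r=1.0, s_next=1)
# Both sub-values move toward the same target: Q1[0, 1] == Q2[0, 2] == 0.5
```

Each table only has to rank its own sub-action's choices, which is the sense in which this is simpler than training one network over the full joint action output.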
arXiv Detail & Related papers (2022-09-19T20:13:29Z)
- Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives [92.0321404272942]
Reinforcement learning can be used to build general-purpose robotic systems.
However, training RL agents to solve robotics tasks still remains challenging.
In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy.
We find that our simple change to the action interface substantially improves both the learning efficiency and task performance.
arXiv Detail & Related papers (2021-10-28T17:59:30Z)
- Learning Routines for Effective Off-Policy Reinforcement Learning [0.0]
We propose a novel framework for reinforcement learning that effectively lifts such constraints.
Within our framework, agents learn effective behavior over a routine space.
We show that the resulting agents obtain relevant performance improvements while requiring fewer interactions with the environment per episode.
arXiv Detail & Related papers (2021-06-05T18:41:57Z)
- Motion Planner Augmented Reinforcement Learning for Robot Manipulation in Obstructed Environments [22.20810568845499]
We propose motion planner augmented RL (MoPA-RL) which augments the action space of an RL agent with the long-horizon planning capabilities of motion planners.
Based on the magnitude of the action, our approach smoothly transitions between directly executing the action and invoking a motion planner.
Experiments demonstrate that MoPA-RL increases learning efficiency, leads to a faster exploration, and results in safer policies.
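The magnitude-based dispatch described for MoPA-RL can be sketched as a simple rule: small actions execute directly, large ones invoke the planner. The threshold value and function names below are hypothetical placeholders, not the paper's interface:

```python
def dispatch(action, threshold=0.1):
    """Route a displacement action by magnitude.

    Small actions are executed directly as one low-level step; large actions
    are handed to a motion planner for a collision-free, long-horizon path.
    """
    magnitude = max(abs(x) for x in action)
    if magnitude <= threshold:
        return "direct"
    return "motion_planner"

print(dispatch([0.05, -0.02]))  # -> direct
print(dispatch([0.8, 0.3]))     # -> motion_planner
```

The paper describes a smooth transition between the two modes; a hard threshold is used here only to make the dispatch logic concrete.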
arXiv Detail & Related papers (2020-10-22T17:59:09Z)
- ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
- Weakly-Supervised Reinforcement Learning for Controllable Behavior [126.04932929741538]
Reinforcement learning (RL) is a powerful framework for learning to take actions to solve tasks.
In many settings, an agent must winnow down the inconceivably large space of all possible tasks to the single task that it is currently being asked to solve.
We introduce a framework for using weak supervision to automatically disentangle this semantically meaningful subspace of tasks from the enormous space of nonsensical "chaff" tasks.
arXiv Detail & Related papers (2020-04-06T17:50:28Z)
- Action Space Shaping in Deep Reinforcement Learning [7.508516104014916]
Reinforcement learning has been successful in training agents in various learning environments, including video-games.
We aim to gain insight on these action space modifications by conducting extensive experiments in video-game environments.
Our results show how domain-specific removal of actions and discretization of continuous actions can be crucial for successful learning.
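Discretization of continuous actions, one of the shaping operations studied above, amounts to snapping each continuous value to the nearest of a small set of levels. A minimal sketch (the range and bin count are arbitrary choices for illustration):

```python
import numpy as np

def discretize(a, low=-1.0, high=1.0, bins=5):
    """Map a continuous action in [low, high] to the index of the nearest
    of `bins` evenly spaced discrete levels."""
    levels = np.linspace(low, high, bins)  # [-1.0, -0.5, 0.0, 0.5, 1.0]
    return int(np.argmin(np.abs(levels - a)))

print(discretize(0.4))   # -> 3 (nearest level is 0.5)
print(discretize(-0.9))  # -> 0 (nearest level is -1.0)
```

A discrete policy then only has to choose among `bins` options per action dimension, which is the simplification the paper's experiments evaluate.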
arXiv Detail & Related papers (2020-04-02T13:25:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.