ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for
Mobile Manipulation
- URL: http://arxiv.org/abs/2008.07792v2
- Date: Fri, 26 Mar 2021 04:44:22 GMT
- Title: ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for
Mobile Manipulation
- Authors: Fei Xia, Chengshu Li, Roberto Martín-Martín, Or Litany, Alexander
Toshev, Silvio Savarese
- Abstract summary: ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating great potential to transfer to real robots.
- Score: 99.2543521972137
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many Reinforcement Learning (RL) approaches use joint control signals
(positions, velocities, torques) as action space for continuous control tasks.
We propose to lift the action space to a higher level in the form of subgoals
for a motion generator (a combination of motion planner and trajectory
executor). We argue that, by lifting the action space and by leveraging
sampling-based motion planners, we can efficiently use RL to solve complex,
long-horizon tasks that could not be solved with existing RL methods in the
original action space. We propose ReLMoGen -- a framework that combines a
learned policy to predict subgoals and a motion generator to plan and execute
the motion needed to reach these subgoals. To validate our method, we apply
ReLMoGen to two types of tasks: 1) Interactive Navigation tasks, navigation
problems where interactions with the environment are required to reach the
destination, and 2) Mobile Manipulation tasks, manipulation tasks that require
moving the robot base. These problems are challenging because they are usually
long-horizon, hard to explore during training, and comprise alternating phases
of navigation and interaction. Our method is benchmarked on a diverse set of
seven robotics tasks in photo-realistic simulation environments. In all
settings, ReLMoGen outperforms state-of-the-art Reinforcement Learning and
Hierarchical Reinforcement Learning baselines. ReLMoGen also shows outstanding
transferability between different motion generators at test time, indicating
great potential to transfer to real robots.
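As a concrete illustration of this lifted action space, here is a minimal Python sketch of the loop the abstract describes: the learned policy predicts a subgoal, and a motion generator (planner plus trajectory executor) turns it into low-level commands. The Policy, MotionGenerator, planner, and env interfaces are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an RL loop with a subgoal action space; all
# class and method names below are hypothetical stand-ins.

class MotionGenerator:
    """A motion planner paired with a trajectory executor."""

    def __init__(self, planner, controller):
        self.planner = planner        # e.g., a sampling-based planner (RRT-style)
        self.controller = controller  # converts waypoints to joint commands

    def execute(self, env, obs, subgoal):
        """Plan a path to the subgoal and execute it step by step."""
        path = self.planner.plan(obs, subgoal)
        if not path:                  # planning failed: treat as a no-op
            return obs, 0.0, False
        total_reward, done = 0.0, False
        for waypoint in path:
            obs, reward, done, _ = env.step(self.controller.track(waypoint))
            total_reward += reward
            if done:
                break
        return obs, total_reward, done


def rollout(policy, motion_generator, env, max_subgoals=50):
    """One episode in which each RL action is a subgoal, not a joint command."""
    obs, episode_return = env.reset(), 0.0
    for _ in range(max_subgoals):
        subgoal = policy.predict(obs)  # e.g., a base waypoint or end-effector pose
        obs, reward, done = motion_generator.execute(env, obs, subgoal)
        episode_return += reward
        if done:
            break
    return episode_return
```

Because the policy makes one decision per subgoal rather than per joint command, episodes become far shorter at the RL level, which is what makes exploration on long-horizon tasks tractable.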
Related papers
- A Meta-Engine Framework for Interleaved Task and Motion Planning using Topological Refinements [51.54559117314768]
Task And Motion Planning (TAMP) is the problem of jointly finding a symbolic task plan and the continuous motions that realize it.
We propose a general and open-source framework for modeling and benchmarking TAMP problems.
We introduce an innovative meta-technique to solve TAMP problems involving moving agents and multiple task-state-dependent obstacles.
arXiv Detail & Related papers (2024-08-11T14:57:57Z)
- HYPERmotion: Learning Hybrid Behavior Planning for Autonomous Loco-manipulation [7.01404330241523]
HYPERmotion is a framework that learns, selects and plans behaviors based on tasks in different scenarios.
We combine reinforcement learning with whole-body optimization to generate motion for 38 actuated joints.
Experiments in simulation and the real world show that the learned motions can efficiently adapt to new tasks.
arXiv Detail & Related papers (2024-06-20T18:21:24Z)
- Guided Decoding for Robot On-line Motion Generation and Adaption [44.959409835754634]
We present a motion generation approach for high-degree-of-freedom robot arms in complex settings that can adapt online to obstacles or new via points.
We train a transformer architecture, based on a conditional variational autoencoder, on a large dataset of simulated trajectories used as demonstrations.
We show that our model generates motion from different initial and target points and produces trajectories that handle complex tasks across different robotic platforms (a minimal sketch follows this entry).
arXiv Detail & Related papers (2024-03-22T14:32:27Z)
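The conditional-VAE idea in the entry above can be sketched in a few lines of PyTorch. This toy version swaps the paper's transformer backbone for plain MLPs; all dimensions, names, and the conditioning scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrajectoryCVAE(nn.Module):
    """Conditional VAE over fixed-horizon joint trajectories (a sketch)."""

    def __init__(self, horizon=32, dof=7, cond_dim=14, latent_dim=16):
        super().__init__()
        self.horizon, self.dof, self.latent_dim = horizon, dof, latent_dim
        traj_dim = horizon * dof
        self.encoder = nn.Sequential(
            nn.Linear(traj_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * latent_dim),           # mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, traj_dim),
        )

    def forward(self, traj, cond):
        # cond encodes start/goal (and optionally via points); traj is a demo.
        stats = self.encoder(torch.cat([traj.flatten(1), cond], dim=-1))
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.decoder(torch.cat([z, cond], dim=-1))
        return recon.view(-1, self.horizon, self.dof), mu, logvar

    @torch.no_grad()
    def generate(self, cond):
        # At test time, sample from the prior and decode a new trajectory.
        z = torch.randn(cond.shape[0], self.latent_dim)
        traj = self.decoder(torch.cat([z, cond], dim=-1))
        return traj.view(-1, self.horizon, self.dof)
```

At training time, forward() would be optimized with a reconstruction loss plus a KL term against the standard normal prior, as in any VAE.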
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
- Generalizable Long-Horizon Manipulations with Large Language Models [91.740084601715]
This work introduces a framework harnessing the capabilities of Large Language Models (LLMs) to generate primitive task conditions for generalizable long-horizon manipulations.
We create a challenging robotic manipulation task suite based on Pybullet for long-horizon task evaluation.
arXiv Detail & Related papers (2023-10-03T17:59:46Z)
- Nonprehensile Planar Manipulation through Reinforcement Learning with Multimodal Categorical Exploration [8.343657309038285]
Reinforcement Learning is a powerful framework for developing robot controllers for such nonprehensile manipulation tasks.
We propose a multimodal exploration approach through categorical distributions, which enables us to train planar pushing RL policies (see the sketch after this entry).
We show that the learned policies are robust to external disturbances and observation noise, and scale to tasks with multiple pushers.
arXiv Detail & Related papers (2023-08-04T16:55:00Z)
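To illustrate the categorical-exploration idea above, here is a hedged PyTorch sketch: each continuous action dimension is discretized into bins and modeled with its own categorical distribution. Network sizes, bin counts, and names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class CategoricalPolicy(nn.Module):
    """Per-dimension categorical policy over discretized continuous actions."""

    def __init__(self, obs_dim, act_dim, bins=21, low=-1.0, high=1.0):
        super().__init__()
        self.act_dim, self.bins = act_dim, bins
        self.register_buffer("bin_centers", torch.linspace(low, high, bins))
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.Tanh(),
            nn.Linear(128, act_dim * bins),  # one categorical head per dimension
        )

    def forward(self, obs):
        logits = self.net(obs).view(-1, self.act_dim, self.bins)
        dist = Categorical(logits=logits)
        idx = dist.sample()                      # shape: (batch, act_dim)
        action = self.bin_centers[idx]           # map bin indices to values
        log_prob = dist.log_prob(idx).sum(-1)    # sum over action dimensions
        return action, log_prob
```

A Gaussian head would average over distinct solutions; the categorical head can keep probability mass on several pushing directions at once, which is what makes the exploration multimodal.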
- Causal Policy Gradient for Whole-Body Mobile Manipulation [39.3461626518495]
We introduce Causal MoMa, a new reinforcement learning framework to train policies for typical MoMa tasks.
We evaluate the performance of Causal MoMa on three types of simulated robots across different MoMa tasks.
arXiv Detail & Related papers (2023-05-04T23:23:47Z)
- Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z)
- Simultaneous Navigation and Construction Benchmarking Environments [73.0706832393065]
We need intelligent robots for mobile construction, the process of navigating in an environment and modifying its structure according to a geometric design.
In this task, a major robot vision and learning challenge is how to accurately realize the design without GPS.
We benchmark the performance of a handcrafted policy with basic localization and planning, and state-of-the-art deep reinforcement learning methods.
arXiv Detail & Related papers (2021-03-31T00:05:54Z)
- Motion Planner Augmented Reinforcement Learning for Robot Manipulation in Obstructed Environments [22.20810568845499]
We propose motion planner augmented RL (MoPA-RL) which augments the action space of an RL agent with the long-horizon planning capabilities of motion planners.
Based on the magnitude of the action, our approach smoothly transitions between directly executing the action and invoking a motion planner.
Experiments demonstrate that MoPA-RL increases learning efficiency, leads to faster exploration, and results in safer policies (sketched below).
arXiv Detail & Related papers (2020-10-22T17:59:09Z)
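The magnitude-based switching in the MoPA-RL entry above can be sketched as follows. The threshold value and the env/planner interfaces are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

DIRECT_THRESHOLD = 0.1  # hypothetical cutoff on action magnitude

def mopa_step(env, planner, state, action):
    """Execute small actions directly; plan to the target for large ones."""
    if np.linalg.norm(action) <= DIRECT_THRESHOLD:
        return env.step(action)                  # ordinary low-level step
    path = planner.plan(state, state + action)   # action read as a displacement
    if not path:                                 # planning failed: fall back to a no-op
        return env.step(np.zeros_like(action))
    obs, total_reward, done, info = None, 0.0, False, {}
    for waypoint in path:
        obs, reward, done, info = env.step(waypoint - state)
        state = waypoint
        total_reward += reward
        if done:
            break
    return obs, total_reward, done, info
```

A single action space thus spans both fine local control and long-range, collision-free motion, which is where the reported exploration gains come from.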
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.