Multi-skill Mobile Manipulation for Object Rearrangement
- URL: http://arxiv.org/abs/2209.02778v1
- Date: Tue, 6 Sep 2022 19:02:08 GMT
- Title: Multi-skill Mobile Manipulation for Object Rearrangement
- Authors: Jiayuan Gu, Devendra Singh Chaplot, Hao Su, Jitendra Malik
- Abstract summary: We study a modular approach to tackle long-horizon mobile manipulation tasks for object rearrangement.
Prior work chains multiple stationary manipulation skills with a point-goal navigation skill, which are learned individually on subtasks.
We operationalize these ideas by implementing mobile manipulation skills rather than stationary ones and by training the navigation skill with region goals instead of point goals.
- Score: 75.62774690484022
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study a modular approach to tackle long-horizon mobile manipulation tasks
for object rearrangement, which decomposes a full task into a sequence of
subtasks. To tackle the entire task, prior work chains multiple stationary
manipulation skills with a point-goal navigation skill, which are learned
individually on subtasks. Although more effective than monolithic end-to-end
RL policies, this framework suffers from compounding errors in skill chaining,
e.g., navigating to a location from which a stationary manipulation skill
cannot reach its target. To this end, we propose that manipulation skills
should include mobility, giving the robot the flexibility to interact with the
target object from multiple locations, and that the navigation skill should in
turn admit multiple end points that lead to successful manipulation. We
operationalize these ideas by implementing mobile manipulation skills rather
than stationary ones and by training the navigation skill with region goals
instead of point goals. We evaluate our multi-skill mobile manipulation method
M3 on 3 challenging long-horizon mobile manipulation tasks in the Home
Assistant Benchmark (HAB), and show superior performance compared to the
baselines.
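
To make the proposed chaining concrete, here is a minimal, hypothetical sketch (not the authors' M3 implementation): a toy 1-D robot navigates to a region of acceptable base poses rather than a single point, and the manipulation skill can correct its base while acting. The 1-D world, skill names, and thresholds are all illustrative assumptions.

```python
"""Hypothetical sketch of chaining a region-goal navigation skill with a
mobile manipulation skill. NOT the authors' M3 code; the 1-D world,
names, and thresholds are illustrative assumptions only."""
from typing import Callable

Pose = float  # toy 1-D base position

def navigate(pose: Pose, goal_region: Callable[[Pose], bool],
             step: float = 0.1) -> Pose:
    """Drive toward the goal and stop at the first pose inside the region.

    A point-goal policy must reach one exact pose; a region goal accepts
    any pose from which the next skill can succeed, which absorbs the
    compounding errors of skill chaining."""
    while not goal_region(pose):
        pose += step  # toy controller: move right until inside the region
    return pose

def mobile_pick(pose: Pose, object_pos: Pose, reach: float = 0.5) -> bool:
    """A *mobile* manipulation skill: it may adjust the base while
    manipulating, so it tolerates an imperfect hand-off pose."""
    if abs(pose - object_pos) > reach:
        pose = object_pos - reach  # small base correction during the skill
    return abs(pose - object_pos) <= reach

if __name__ == "__main__":
    obj = 3.0
    in_reach = lambda p: abs(p - obj) <= 0.5  # region goal: within arm's reach
    base = navigate(0.0, in_reach)
    assert mobile_pick(base, obj)
    print(f"picked object from base pose {base:.2f}")
```

The key property the sketch illustrates is the interface between skills: the navigation skill's set of acceptable end points overlaps the manipulation skill's set of feasible starting poses, so small errors in one skill no longer doom the next.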
Related papers
- Continuously Improving Mobile Manipulation with Autonomous Real-World RL [33.085671103158866]
We present a fully autonomous real-world RL framework for mobile manipulation that can learn policies without extensive instrumentation or human supervision.
This is enabled by task-relevant autonomy, which guides exploration towards object interactions and prevents stagnation near goal states.
We demonstrate that our approach allows Spot robots to continually improve their performance on a set of four challenging mobile manipulation tasks.
arXiv Detail & Related papers (2024-09-30T17:59:50Z)
- Mobile-Agent-v2: Mobile Device Operation Assistant with Effective Navigation via Multi-Agent Collaboration [52.25473993987409]
We propose Mobile-Agent-v2, a multi-agent architecture for mobile device operation assistance.
The architecture comprises three agents: planning agent, decision agent, and reflection agent.
We show that Mobile-Agent-v2 achieves over a 30% improvement in task completion compared to the single-agent architecture.
arXiv Detail & Related papers (2024-06-03T05:50:00Z)
- CLAS: Coordinating Multi-Robot Manipulation with Central Latent Action Spaces [9.578169216444813]
This paper proposes an approach to coordinating multi-robot manipulation through learned latent action spaces that are shared across different agents.
We validate our method in simulated multi-robot manipulation tasks and demonstrate improvement over previous baselines in terms of sample efficiency and learning performance.
arXiv Detail & Related papers (2022-11-28T23:20:47Z)
- N$^2$M$^2$: Learning Navigation for Arbitrary Mobile Manipulation Motions in Unseen and Dynamic Environments [9.079709086741987]
We introduce Neural Navigation for Mobile Manipulation (N$^2$M$^2$), which extends the decomposition of mobile manipulation into navigation and manipulation skills to complex obstacle environments.
The resulting approach can perform unseen, long-horizon tasks in unexplored environments while instantly reacting to dynamic obstacles and environmental changes.
We demonstrate the capabilities of our proposed approach in extensive simulation and real-world experiments on multiple kinematically diverse mobile manipulators.
arXiv Detail & Related papers (2022-06-17T12:52:41Z)
- Zero Experience Required: Plug & Play Modular Transfer Learning for Semantic Visual Navigation [97.17517060585875]
We present a unified approach to visual navigation using a novel modular transfer learning model.
Our model can effectively leverage its experience from one source task and apply it to multiple target tasks.
Our approach learns faster, generalizes better, and outperforms SoTA models by a significant margin.
arXiv Detail & Related papers (2022-02-05T00:07:21Z)
- Error-Aware Imitation Learning from Teleoperation Data for Mobile Manipulation [54.31414116478024]
In mobile manipulation (MM), robots can both navigate within and interact with their environment.
In this work, we explore how to apply imitation learning (IL) to learn continuous visuo-motor policies for MM tasks.
arXiv Detail & Related papers (2021-12-09T23:54:59Z)
- MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale [103.7609761511652]
We show how a large-scale collective robotic learning system can acquire a repertoire of behaviors simultaneously.
New tasks can be continuously instantiated from previously learned tasks.
We train and evaluate our system on a set of 12 real-world tasks with data collected from 7 robots.
arXiv Detail & Related papers (2021-04-16T16:38:02Z)
- ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy, which predicts subgoals, with a motion generator, which plans and executes the motion needed to reach those subgoals (see the sketch after this list).
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating great potential for transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
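
As a rough illustration of the two-level decomposition the ReLMoGen entry describes, the toy loop below has a stand-in policy propose subgoals while a stand-in motion generator executes them. The function names, 1-D state, and noise model are assumptions for illustration, not the paper's API.

```python
"""Hypothetical sketch of a ReLMoGen-style split between a subgoal
policy and a motion generator; not the paper's code."""
import random

def subgoal_policy(state: float, target: float) -> float:
    # Stand-in for the learned policy: propose a subgoal partway to the target.
    return state + 0.5 * (target - state)

def motion_generator(state: float, subgoal: float) -> float:
    # Stand-in for a motion planner: reach the subgoal with small execution
    # noise. The policy never emits low-level motor commands itself, which
    # is what makes the motion generator swappable at test time.
    return subgoal + random.uniform(-0.01, 0.01)

state, target = 0.0, 1.0
while abs(state - target) > 0.05:
    state = motion_generator(state, subgoal_policy(state, target))
print(f"reached target; final state = {state:.3f}")
```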