N$^2$M$^2$: Learning Navigation for Arbitrary Mobile Manipulation
Motions in Unseen and Dynamic Environments
- URL: http://arxiv.org/abs/2206.08737v2
- Date: Thu, 29 Jun 2023 07:54:25 GMT
- Title: N$^2$M$^2$: Learning Navigation for Arbitrary Mobile Manipulation
Motions in Unseen and Dynamic Environments
- Authors: Daniel Honerkamp, Tim Welschehold, Abhinav Valada
- Abstract summary: We introduce Neural Navigation for Mobile Manipulation (N$^2$M$^2$), which extends this decomposition to complex obstacle environments.
The resulting approach can perform unseen, long-horizon tasks in unexplored environments while instantly reacting to dynamic obstacles and environmental changes.
We demonstrate the capabilities of our proposed approach in extensive simulation and real-world experiments on multiple kinematically diverse mobile manipulators.
- Score: 9.079709086741987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite its importance in both industrial and service robotics, mobile
manipulation remains a significant challenge as it requires a seamless
integration of end-effector trajectory generation with navigation skills as
well as reasoning over long horizons. Existing methods struggle to control the
large configuration space and to navigate dynamic and unknown environments. In
previous work, we proposed to decompose mobile manipulation tasks into a
simplified motion generator for the end-effector in task space and a trained
reinforcement learning agent for the mobile base to account for kinematic
feasibility of the motion. In this work, we introduce Neural Navigation for
Mobile Manipulation (N$^2$M$^2$) which extends this decomposition to complex
obstacle environments and enables it to tackle a broad range of tasks in
real-world settings. The resulting approach can perform unseen, long-horizon tasks
in unexplored environments while instantly reacting to dynamic obstacles and
environmental changes. At the same time, it provides a simple way to define new
mobile manipulation tasks. We demonstrate the capabilities of our proposed
approach in extensive simulation and real-world experiments on multiple
kinematically diverse mobile manipulators. Code and videos are publicly
available at http://mobile-rl.cs.uni-freiburg.de.
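The decomposition described in the abstract can be illustrated with a short sketch: a simple task-space motion generator steps the end-effector toward its goal, while a learned agent moves the base so that the commanded motion stays kinematically feasible. The sketch below is a minimal illustration under assumed interfaces; the environment API, observation keys, and class names are hypothetical, and the authors' actual implementation is available at the URL above.

```python
# Minimal sketch of the decomposition described in the abstract: a simple
# task-space motion generator for the end-effector plus a learned agent
# that moves the base to keep the motion kinematically feasible. All
# names and interfaces below are hypothetical; the actual code is at
# http://mobile-rl.cs.uni-freiburg.de.
import numpy as np


class EndEffectorMotionGenerator:
    """Steps the end-effector position linearly toward a task-space goal."""

    def __init__(self, goal_pos: np.ndarray, step_size: float = 0.01):
        self.goal_pos = goal_pos
        self.step_size = step_size

    def next_subgoal(self, ee_pos: np.ndarray) -> np.ndarray:
        direction = self.goal_pos - ee_pos
        dist = np.linalg.norm(direction)
        if dist < self.step_size:
            return self.goal_pos
        return ee_pos + self.step_size * direction / dist


def run_episode(env, base_policy, goal_pos, max_steps=1000):
    """Close the loop: the generator commands the arm in task space while
    the learned policy steers the base around obstacles."""
    obs = env.reset()
    generator = EndEffectorMotionGenerator(goal_pos)
    for _ in range(max_steps):
        ee_subgoal = generator.next_subgoal(obs["ee_pos"])
        # The base policy conditions on local obstacle observations and the
        # commanded end-effector motion, and outputs a base velocity that
        # keeps the subgoal kinematically reachable.
        base_vel = base_policy(obs["local_map"], obs["ee_pos"], ee_subgoal)
        obs, done = env.step(ee_subgoal=ee_subgoal, base_velocity=base_vel)
        if done:
            break
```

Because only the base policy is learned, a new mobile manipulation task can be defined simply by supplying a different end-effector motion, which is the property the abstract highlights.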
Related papers
- M3Bench: Benchmarking Whole-body Motion Generation for Mobile Manipulation in 3D Scenes [66.44171200767839]
We propose M3Bench, a new benchmark for whole-body motion generation in mobile manipulation tasks.
M3Bench requires an embodied agent to understand its configuration, environmental constraints and task objectives.
M3Bench features 30k object rearrangement tasks across 119 diverse scenes, providing expert demonstrations generated by our newly developed M3BenchMaker.
arXiv Detail & Related papers (2024-10-09T08:38:21Z)
- Zero-Cost Whole-Body Teleoperation for Mobile Manipulation [8.71539730969424]
MoMa-Teleop is a novel teleoperation method that delegates the base motions to a reinforcement learning agent.
We demonstrate that our approach results in a significant reduction in task completion time across a variety of robots and tasks.
arXiv Detail & Related papers (2024-09-23T15:09:45Z)
- Flow as the Cross-Domain Manipulation Interface [73.15952395641136]
Im2Flow2Act enables robots to acquire real-world manipulation skills without the need for real-world robot training data.
Im2Flow2Act comprises two components: a flow generation network and a flow-conditioned policy.
We demonstrate Im2Flow2Act's capabilities in a variety of real-world tasks, including the manipulation of rigid, articulated, and deformable objects.
arXiv Detail & Related papers (2024-07-21T16:15:02Z)
- Harmonic Mobile Manipulation [35.82197562695662]
HarmonicMM is an end-to-end learning method that jointly optimizes navigation and manipulation.
Our contributions include a new benchmark for mobile manipulation and a successful deployment with only RGB visual observations.
arXiv Detail & Related papers (2023-12-11T18:54:42Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of the human motion in a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- Active-Perceptive Motion Generation for Mobile Manipulation [6.952045528182883]
We introduce an active perception pipeline for mobile manipulators to generate motions that are informative toward manipulation tasks.
Our proposed approach, ActPerMoMa, generates robot paths in a receding horizon fashion by sampling paths and computing path-wise utilities; a sketch of this scheme appears after this list.
We show the efficacy of our method in simulated experiments with a dual-arm TIAGo++ MoMa robot performing mobile grasping in cluttered scenes with obstacles.
arXiv Detail & Related papers (2023-09-30T16:56:52Z)
- Simultaneous Navigation and Construction Benchmarking Environments [73.0706832393065]
We need intelligent robots for mobile construction: navigating an environment while modifying its structure according to a geometric design.
In this task, a major robot vision and learning challenge is how to accurately realize the design without GPS.
We benchmark the performance of a handcrafted policy with basic localization and planning, and state-of-the-art deep reinforcement learning methods.
arXiv Detail & Related papers (2021-03-31T00:05:54Z)
- Articulated Object Interaction in Unknown Scenes with Whole-Body Mobile Manipulation [16.79185733369416]
We propose a two-stage architecture for autonomous interaction with large articulated objects in unknown environments.
The first stage uses a learned model to estimate the articulated model of a target object from an RGB-D input and predicts an action-conditional sequence of states for interaction.
The second stage comprises a whole-body motion controller that manipulates the object along the generated kinematic plan.
arXiv Detail & Related papers (2021-03-18T21:32:18Z)
- ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating great potential for transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
- Visual Navigation Among Humans with Optimal Control as a Supervisor [72.5188978268463]
We propose an approach that combines learning-based perception with model-based optimal control to navigate among humans.
Our approach is enabled by our novel data-generation tool, HumANav.
We demonstrate that the learned navigation policies can anticipate and react to humans without explicitly predicting future human motion.
arXiv Detail & Related papers (2020-03-20T16:13:47Z)
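As referenced in the ActPerMoMa entry above, a receding-horizon scheme samples candidate base paths, scores each with a path-wise utility, executes only the beginning of the best path, and then replans. The sketch below is a hypothetical illustration of that pattern; the straight-line path sampler and the gain-minus-cost utility are simplifying assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of receding-horizon path selection with path-wise
# utilities, as summarized for ActPerMoMa above. The straight-line path
# sampler and the gain-minus-cost utility are illustrative assumptions.
import numpy as np


def sample_candidate_paths(pose, num_paths=32, horizon=10, step=0.1):
    """Sample straight-line candidate paths in random headings."""
    paths = []
    for _ in range(num_paths):
        heading = np.random.uniform(0.0, 2.0 * np.pi)
        direction = np.array([np.cos(heading), np.sin(heading)])
        waypoints = [pose + step * (k + 1) * direction for k in range(horizon)]
        paths.append(np.stack(waypoints))
    return paths


def path_utility(path, info_gain_fn, cost_per_meter=1.0):
    """Path-wise utility: summed expected information gain along the path
    minus the path's traversal cost."""
    gain = sum(info_gain_fn(waypoint) for waypoint in path)
    length = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    return gain - cost_per_meter * length


def receding_horizon_step(pose, info_gain_fn):
    """Score all candidates, then execute only the first waypoint of the
    best path; the remainder is discarded and replanned next cycle."""
    paths = sample_candidate_paths(pose)
    best = max(paths, key=lambda p: path_utility(p, info_gain_fn))
    return best[0]
```

Executing only the first waypoint before rescoring is what makes the scheme reactive: new observations update the utility estimates at every cycle, so the path adapts as the scene is explored.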