Harmonic Mobile Manipulation
- URL: http://arxiv.org/abs/2312.06639v1
- Date: Mon, 11 Dec 2023 18:54:42 GMT
- Title: Harmonic Mobile Manipulation
- Authors: Ruihan Yang, Yejin Kim, Aniruddha Kembhavi, Xiaolong Wang, Kiana
Ehsani
- Abstract summary: HarmonicMM is an end-to-end learning method that optimizes both navigation and manipulation.
Our contributions include a new benchmark for mobile manipulation and the successful deployment in a real unseen apartment.
- Score: 40.72258476872912
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in robotics have enabled robots to navigate complex
scenes or manipulate diverse objects independently. However, robots still
struggle with many household tasks requiring coordinated behaviors such as
opening doors. The factorization of navigation and manipulation, while
effective for some tasks, fails in scenarios requiring coordinated actions. To
address this challenge, we introduce HarmonicMM, an end-to-end learning method
that optimizes both navigation and manipulation, showing notable improvement
over existing techniques in everyday tasks. This approach is validated in
simulated and real-world environments and adapts to novel unseen settings
without additional tuning. Our contributions include a new benchmark for mobile
manipulation and the successful deployment in a real unseen apartment,
demonstrating the potential for practical indoor robot deployment in daily
life. More results are on our project site:
https://rchalyang.github.io/HarmonicMM/
Related papers
- Track2Act: Predicting Point Tracks from Internet Videos enables Diverse Zero-shot Robot Manipulation [65.46610405509338]
Track2Act predicts tracks of how points in an image should move in future time-steps based on a goal.
We use these 2D track predictions to infer a sequence of rigid transforms of the object to be manipulated, and obtain robot end-effector poses.
We show that this approach of combining scalably learned track prediction with a residual policy enables zero-shot robot manipulation.
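The paper's own transform-estimation code is not reproduced here; as a hedged sketch, one standard way to recover a rigid transform of the manipulated object from predicted point correspondences is the Kabsch (orthogonal Procrustes) solution. The function name and shapes below are illustrative, not from the paper:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) such that dst ~ src @ R.T + t (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Given such object transforms over time, end-effector poses can then be read off by composing them with a grasp pose, as the abstract suggests.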
arXiv Detail & Related papers (2024-05-02T17:56:55Z)
- Learning Hierarchical Interactive Multi-Object Search for Mobile Manipulation [10.21450780640562]
We introduce a novel interactive multi-object search task in which a robot has to open doors to navigate rooms and search inside cabinets and drawers to find target objects.
These new challenges require combining manipulation and navigation skills in unexplored environments.
We present HIMOS, a hierarchical reinforcement learning approach that learns to compose exploration, navigation, and manipulation skills.
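HIMOS's actual implementation is not shown here; a minimal sketch of the general idea, sequencing reusable low-level skills under a high-level plan, might look like the following (the skill names and toy state dictionary are invented for illustration):

```python
# Hedged sketch: a hierarchical agent composes discrete low-level skills.
# Each skill is a function from state to state; fields are illustrative only.
SKILLS = {
    "explore":  lambda s: {**s, "explored": True},
    "navigate": lambda s: {**s, "at_target": s.get("explored", False)},
    "open":     lambda s: {**s, "opened": s.get("at_target", False)},
}

def run_plan(plan, state=None):
    """Execute a high-level plan by applying each named skill in order."""
    state = dict(state or {})
    for name in plan:
        state = SKILLS[name](state)
    return state
```

In HIMOS the plan itself is produced by a learned high-level RL policy rather than fixed in advance, which is what lets it handle unexplored environments.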
arXiv Detail & Related papers (2023-07-12T12:25:33Z)
- Zero-Shot Robot Manipulation from Passive Human Videos [59.193076151832145]
We develop a framework for extracting agent-agnostic action representations from human videos.
Our framework is based on predicting plausible human hand trajectories.
We deploy the trained model zero-shot for physical robot manipulation tasks.
arXiv Detail & Related papers (2023-02-03T21:39:52Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
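A hedged sketch of that idea: once an embedding `phi` has been trained with a time-contrastive (triplet-style) objective, the reward is the negative distance to the goal in embedding space. The margin value and the embedding stand-in below are placeholders, not the paper's model:

```python
import numpy as np

def time_contrastive_loss(z_anchor, z_pos, z_neg, margin=0.2):
    """Triplet-style objective: temporally close frames should embed close."""
    d_pos = np.sum((z_anchor - z_pos) ** 2)
    d_neg = np.sum((z_anchor - z_neg) ** 2)
    return max(0.0, d_pos - d_neg + margin)

def reward(phi, obs, goal):
    """Learned reward: negative distance to the goal in embedding space."""
    return -np.linalg.norm(phi(obs) - phi(goal))
```

The reward is maximal (zero) when the observation embeds exactly onto the goal, so a policy maximizing it is driven toward goal-reaching behavior.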
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Physical Interaction and Manipulation of the Environment using Aerial Robots [1.370633147306388]
The physical interaction of aerial robots with their environment has countless potential applications and is an emerging area with many open challenges.
Fully-actuated multirotors have been introduced to tackle some of these challenges.
They provide complete control over position and orientation and eliminate the need for attaching a multi-DoF manipulation arm to the robot.
However, there are many open problems before they can be used in real-world applications.
arXiv Detail & Related papers (2022-07-06T13:15:10Z)
- N$^2$M$^2$: Learning Navigation for Arbitrary Mobile Manipulation Motions in Unseen and Dynamic Environments [9.079709086741987]
We introduce Neural Navigation for Mobile Manipulation (N$^2$M$^2$), which extends this decomposition to complex obstacle environments.
The resulting approach can perform unseen, long-horizon tasks in unexplored environments while instantly reacting to dynamic obstacles and environmental changes.
We demonstrate the capabilities of our proposed approach in extensive simulation and real-world experiments on multiple kinematically diverse mobile manipulators.
arXiv Detail & Related papers (2022-06-17T12:52:41Z)
- Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation [55.31301153979621]
We tackle real-world long-horizon robot manipulation tasks through skill discovery.
We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations.
Our method has shown superior performance over state-of-the-art imitation learning methods in multi-stage manipulation tasks.
arXiv Detail & Related papers (2021-09-28T16:18:54Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Passing Through Narrow Gaps with Deep Reinforcement Learning [2.299414848492227]
In this paper we present a deep reinforcement learning method for autonomously navigating through small gaps.
We first learn a gap behaviour policy to get through small gaps, where contact between the robot and the gap may be required.
In simulation experiments, our approach achieves a 93% success rate when the gap behaviour is activated manually by an operator.
In real robot experiments, our approach achieves a success rate of 73% with manual activation, and 40% with autonomous behaviour selection.
arXiv Detail & Related papers (2021-03-06T00:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.