NeRP: Neural Rearrangement Planning for Unknown Objects
- URL: http://arxiv.org/abs/2106.01352v1
- Date: Wed, 2 Jun 2021 17:56:27 GMT
- Title: NeRP: Neural Rearrangement Planning for Unknown Objects
- Authors: Ahmed H. Qureshi, Arsalan Mousavian, Chris Paxton, Michael C. Yip, and
Dieter Fox
- Abstract summary: We propose NeRP (Neural Rearrangement Planning), a deep learning-based approach for multi-step neural object rearrangement planning.
NeRP works with never-before-seen objects, is trained on simulation data, and generalizes to the real world.
- Score: 49.191284597526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robots will be expected to manipulate a wide variety of objects in complex
and arbitrary ways as they become more widely used in human environments. As
such, the rearrangement of objects has been noted to be an important benchmark
for AI capabilities in recent years. We propose NeRP (Neural Rearrangement
Planning), a deep learning-based approach for multi-step neural object
rearrangement planning that works with never-before-seen objects, is
trained on simulation data, and generalizes to the real world. We compare NeRP
to several naive and model-based baselines, demonstrating that our approach is
measurably better and can efficiently arrange unseen objects in fewer steps and
with less planning time. Finally, we demonstrate it on several challenging
rearrangement problems in the real world.
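As a rough illustration only (not NeRP's actual architecture, which the abstract does not detail), the sketch below shows the general shape of such a multi-step rearrangement loop: score candidate object moves, execute the best pick-and-place, and replan until the scene matches the goal. Objects are reduced to 2D positions and the learned components are replaced by a hand-written stub; every name in it is a hypothetical placeholder.

```python
"""Minimal, hypothetical sketch of a multi-step rearrangement loop.

This is NOT the NeRP architecture; it only illustrates the plan-act-replan
structure described in the abstract, with object states reduced to 2D
positions and the learned components replaced by stubs.
"""
import numpy as np


def select_next_move(current, goal):
    """Stub for a learned model: pick the object farthest from its goal.

    In a neural planner this choice (and the placement itself) would be
    predicted by a network operating on raw observations such as point clouds.
    """
    errors = np.linalg.norm(current - goal, axis=1)
    obj_id = int(np.argmax(errors))
    return obj_id, goal[obj_id]           # move that object toward its goal pose


def rearrange(current, goal, tol=0.01, max_steps=20):
    """Greedy multi-step rearrangement: replan after every pick-and-place."""
    current = current.copy()
    plan = []
    for _ in range(max_steps):
        if np.all(np.linalg.norm(current - goal, axis=1) < tol):
            break                          # scene already matches the goal
        obj_id, placement = select_next_move(current, goal)
        plan.append((obj_id, placement))   # one pick-and-place action
        current[obj_id] = placement        # simulate executing the action
    return plan


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    start = rng.uniform(0.0, 1.0, size=(4, 2))   # four unknown objects on a table
    target = rng.uniform(0.0, 1.0, size=(4, 2))
    for step, (obj_id, pose) in enumerate(rearrange(start, target)):
        print(f"step {step}: move object {obj_id} to {np.round(pose, 3)}")
```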
Related papers
- Transferring Foundation Models for Generalizable Robotic Manipulation [82.12754319808197]
We propose a novel paradigm that effectively leverages language-reasoning segmentation masks generated by internet-scale foundation models.
Our approach can effectively and robustly perceive object pose and enable sample-efficient generalization learning.
Demos can be found in our submitted video, and more comprehensive ones can be found in link1 or link2.
arXiv Detail & Related papers (2023-06-09T07:22:12Z)
- DexTransfer: Real World Multi-fingered Dexterous Grasping with Minimal Human Demonstrations [51.87067543670535]
We propose a robot-learning system that can take a small number of human demonstrations and learn to grasp unseen object poses.
We train a dexterous grasping policy that takes the point clouds of the object as input and predicts continuous actions to grasp objects from different initial robot states.
The policy learned from our dataset can generalize well on unseen object poses in both simulation and the real world.
arXiv Detail & Related papers (2022-09-28T17:51:49Z)
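A note on the DexTransfer entry above: the summary says the policy maps an object point cloud (plus the robot state) to continuous actions. The sketch below is only a generic point-cloud-conditioned policy in PyTorch (a PointNet-style encoder with an MLP head), not the paper's model; the class name and dimensions are assumptions.

```python
"""Hypothetical sketch of a point-cloud-conditioned grasping policy.

This is not the DexTransfer model; it only illustrates the interface the
summary describes: a policy that consumes an object point cloud plus the
current robot state and outputs a continuous action.
"""
import torch
import torch.nn as nn


class PointCloudPolicy(nn.Module):
    def __init__(self, robot_state_dim=23, action_dim=23, feat_dim=128):
        super().__init__()
        # PointNet-style per-point MLP followed by max pooling -> global feature.
        self.point_encoder = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Fuse the pooled point feature with the robot state and predict an action.
        self.head = nn.Sequential(
            nn.Linear(feat_dim + robot_state_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, points, robot_state):
        # points: (B, N, 3) object point cloud; robot_state: (B, robot_state_dim)
        per_point = self.point_encoder(points)       # (B, N, feat_dim)
        global_feat = per_point.max(dim=1).values    # order-invariant pooling
        return self.head(torch.cat([global_feat, robot_state], dim=-1))


if __name__ == "__main__":
    policy = PointCloudPolicy()
    cloud = torch.randn(2, 1024, 3)   # batch of two partial point clouds
    state = torch.randn(2, 23)        # e.g. joint positions of a dexterous hand
    action = policy(cloud, state)     # continuous action, here shape (2, 23)
    print(action.shape)
```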
- Curious Exploration via Structured World Models Yields Zero-Shot Object Manipulation [19.840186443344]
We propose to use structured world models to incorporate inductive biases in the control loop to achieve sample-efficient exploration.
Our method generates free-play behavior that starts to interact with objects early on and develops more complex behavior over time.
arXiv Detail & Related papers (2022-06-22T22:08:50Z)
- IFOR: Iterative Flow Minimization for Robotic Object Rearrangement [92.97142696891727]
IFOR, Iterative Flow Minimization for Robotic Object Rearrangement, is an end-to-end method for rearranging unknown objects.
We show that our method applies to cluttered scenes and to the real world while training only on synthetic data.
arXiv Detail & Related papers (2022-02-01T20:03:56Z)
- Learning to Regrasp by Learning to Place [19.13976401970985]
Regrasping is needed when a robot's current grasp pose fails to perform desired manipulation tasks.
We propose a system for robots to take partial point clouds of an object and the supporting environment as inputs and output a sequence of pick-and-place operations.
We show that our system is able to achieve a 73.3% success rate when regrasping diverse objects.
arXiv Detail & Related papers (2021-09-18T03:07:06Z)
- Predicting Stable Configurations for Semantic Placement of Novel Objects [37.18437299513799]
Our goal is to enable robots to repose previously unseen objects according to learned semantic relationships in novel environments.
We build our models and training from the ground up to be tightly integrated with our proposed planning algorithm for semantic placement of unknown objects.
Our approach enables motion planning for semantic rearrangement of unknown objects in scenes with varying geometry from only RGB-D sensing.
arXiv Detail & Related papers (2021-08-26T23:05:05Z)
- Reactive Human-to-Robot Handovers of Arbitrary Objects [57.845894608577495]
We present a vision-based system that enables human-to-robot handovers of unknown objects.
Our approach combines closed-loop motion planning with real-time, temporally-consistent grasp generation.
We demonstrate the generalizability, usability, and robustness of our approach on a novel benchmark set of 26 diverse household objects.
arXiv Detail & Related papers (2020-11-17T21:52:22Z)
- A Long Horizon Planning Framework for Manipulating Rigid Pointcloud Objects [25.428781562909606]
We present a framework for solving long-horizon planning problems involving manipulation of rigid objects.
Our method plans in the space of object subgoals and frees the planner from reasoning about robot-object interaction dynamics.
arXiv Detail & Related papers (2020-11-16T18:59:33Z)
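A note on the long-horizon planning entry above: its key idea is planning over object subgoals while leaving robot-object interaction to a lower-level skill. The sketch below only illustrates that decoupling under strong simplifying assumptions (poses interpolated in a straight line, a stubbed-out skill); it is not the paper's algorithm, and all names are placeholders.

```python
"""Hypothetical sketch of planning in object-subgoal space.

A high-level planner reasons only about where a rigid object should go next
(its subgoal poses); a separate skill is assumed to realize each subgoal, so
the planner never models robot-object contact dynamics.
"""
import numpy as np


def plan_subgoals(start_pose, goal_pose, num_subgoals=4):
    """Interpolate a sequence of intermediate object poses (x, y, yaw)."""
    alphas = np.linspace(0.0, 1.0, num_subgoals + 1)[1:]
    return [start_pose + a * (goal_pose - start_pose) for a in alphas]


def execute_subgoal(subgoal):
    """Stub for the low-level skill (e.g. a learned push or pick-and-place
    controller) that actually moves the object to the requested pose."""
    print(f"skill: move object to pose {np.round(subgoal, 3)}")
    return True  # assume the skill succeeds


if __name__ == "__main__":
    start = np.array([0.10, 0.40, 0.0])
    goal = np.array([0.55, 0.20, np.pi / 2])
    for sg in plan_subgoals(start, goal):
        if not execute_subgoal(sg):
            break  # a real system would replan here; omitted for brevity
```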
- Mutual Information Maximization for Robust Plannable Representations [82.83676853746742]
We present MIRO, an information theoretic representational learning algorithm for model-based reinforcement learning.
We show that our approach is more robust than reconstruction objectives in the presence of distractors and cluttered scenes.
arXiv Detail & Related papers (2020-05-16T21:58:47Z)
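A note on the MIRO entry above: the summary names an information-theoretic objective but does not spell it out. The snippet below shows a standard InfoNCE contrastive bound that mutual-information-maximization methods commonly use; treat it as a generic example, not MIRO's actual loss.

```python
"""Hypothetical InfoNCE-style mutual-information objective.

Minimizing this loss maximizes a lower bound on the mutual information
between paired latents, e.g. a latent predicted by a dynamics model and the
encoding of the corresponding observation.
"""
import torch
import torch.nn.functional as F


def info_nce(z_pred, z_target, temperature=0.1):
    """InfoNCE loss: each predicted latent should match its own target,
    with the other targets in the batch acting as negatives."""
    z_pred = F.normalize(z_pred, dim=-1)
    z_target = F.normalize(z_target, dim=-1)
    logits = z_pred @ z_target.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(z_pred.shape[0])          # positives on the diagonal
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    pred = torch.randn(32, 64)    # e.g. latents predicted by a dynamics model
    target = torch.randn(32, 64)  # e.g. encoded next observations
    print(info_nce(pred, target).item())
```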
- Human-like Planning for Reaching in Cluttered Environments [11.55532557594561]
Humans are remarkably adept at reaching for objects in cluttered environments.
We identify high-level manipulation plans in humans, and transfer these skills to robot planners.
We found that the human-like planner outperformed a state-of-the-art standard trajectory optimisation algorithm.
arXiv Detail & Related papers (2020-02-28T14:28:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.