Local Neural Descriptor Fields: Locally Conditioned Object
Representations for Manipulation
- URL: http://arxiv.org/abs/2302.03573v1
- Date: Tue, 7 Feb 2023 16:37:19 GMT
- Title: Local Neural Descriptor Fields: Locally Conditioned Object
Representations for Manipulation
- Authors: Ethan Chun, Yilun Du, Anthony Simeonov, Tomas Lozano-Perez, Leslie
Kaelbling
- Abstract summary: We present a method to generalize object manipulation skills acquired from a limited number of demonstrations.
Our approach, Local Neural Descriptor Fields (L-NDF), utilizes neural descriptors defined on the local geometry of the object.
We illustrate the efficacy of our approach in manipulating novel objects in novel poses -- both in simulation and in the real world.
- Score: 10.684104348212742
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: A robot operating in a household environment will see a wide range of unique
and unfamiliar objects. While a system could train on many of these, it is
infeasible to predict all the objects a robot will see. In this paper, we
present a method to generalize object manipulation skills acquired from a
limited number of demonstrations, to novel objects from unseen shape
categories. Our approach, Local Neural Descriptor Fields (L-NDF), utilizes
neural descriptors defined on the local geometry of the object to effectively
transfer manipulation demonstrations to novel objects at test time. In doing
so, we leverage the local geometry shared between objects to produce a more
general manipulation framework. We illustrate the efficacy of our approach in
manipulating novel objects in novel poses -- both in simulation and in the real
world.
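As a toy illustration of the idea in the abstract (not the authors' implementation), the sketch below transfers a demonstrated contact point to a novel object by matching hand-crafted local-geometry descriptors; the `local_descriptor` function is a hypothetical stand-in for the learned neural descriptors in L-NDF.

```python
import numpy as np

def local_descriptor(points, query, k=8):
    """Toy local-geometry descriptor: the sorted distances from a query
    point to its k nearest neighbours in the cloud. Hypothetical stand-in
    for a learned neural descriptor; assumes `query` is a cloud point
    (its zero self-distance is dropped)."""
    d = np.linalg.norm(points - query, axis=1)
    return np.sort(d)[1:k + 1]

def transfer_point(demo_cloud, demo_point, test_cloud, k=8):
    """Transfer a demonstrated contact point to a novel object by
    nearest-neighbour matching in descriptor space."""
    target = local_descriptor(demo_cloud, demo_point, k)
    descs = np.stack([local_descriptor(test_cloud, p, k) for p in test_cloud])
    best = int(np.argmin(np.linalg.norm(descs - target, axis=1)))
    return test_cloud[best]
```

Because the descriptor depends only on relative distances, the transferred point is invariant to rigid translation of the test cloud; the learned descriptors in the paper generalize much further, across object instances and categories.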
Related papers
- RPMArt: Towards Robust Perception and Manipulation for Articulated Objects [56.73978941406907]
We propose a framework towards Robust Perception and Manipulation for Articulated Objects (RPMArt).
RPMArt learns to estimate the articulation parameters and manipulate the articulation part from the noisy point cloud.
We introduce an articulation-aware classification scheme to enhance its ability for sim-to-real transfer.
arXiv Detail & Related papers (2024-03-24T05:55:39Z)
- Kinematic-aware Prompting for Generalizable Articulated Object Manipulation with LLMs [53.66070434419739]
Generalizable articulated object manipulation is essential for home-assistant robots.
We propose a kinematic-aware prompting framework that prompts Large Language Models with kinematic knowledge of objects to generate low-level motion waypoints.
Our framework outperforms traditional methods on 8 seen categories and shows powerful zero-shot capability on 8 unseen articulated object categories.
arXiv Detail & Related papers (2023-11-06T03:26:41Z)
- Grasp Transfer based on Self-Aligning Implicit Representations of Local Surfaces [10.602143478315861]
This work addresses the problem of transferring a grasp experience or a demonstration to a novel object that shares shape similarities with objects the robot has previously encountered.
We employ a single expert grasp demonstration to learn an implicit local surface representation model from a small dataset of object meshes.
At inference time, this model is used to transfer grasps to novel objects by identifying the most geometrically similar surfaces to the one on which the expert grasp is demonstrated.
arXiv Detail & Related papers (2023-08-15T14:33:17Z)
- Transferring Foundation Models for Generalizable Robotic Manipulation [82.12754319808197]
We propose a novel paradigm that effectively leverages language-reasoning segmentation mask generated by internet-scale foundation models.
Our approach can effectively and robustly perceive object pose and enable sample-efficient generalization learning.
Demos can be found in our submitted video, and more comprehensive ones can be found in link1 or link2.
arXiv Detail & Related papers (2023-06-09T07:22:12Z)
- FlowBot3D: Learning 3D Articulation Flow to Manipulate Articulated Objects [14.034256001448574]
We propose a vision-based system that learns to predict the potential motions of the parts of a variety of articulated objects.
We deploy an analytical motion planner based on this vector field to achieve a policy that yields maximum articulation.
Results show that our system achieves state-of-the-art performance in both simulated and real-world experiments.
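The planner step described above can be reduced to a simple rule: given predicted per-point motion vectors ("articulation flow"), contact the point whose predicted motion is largest and push along its direction. A minimal sketch of that selection rule, as an illustration rather than the paper's actual planner:

```python
import numpy as np

def select_contact(points, flows):
    """Pick the contact point with the largest predicted motion vector
    and return it together with the unit push direction. Simplified,
    illustrative stand-in for FlowBot3D's analytical planner."""
    mags = np.linalg.norm(flows, axis=1)
    i = int(np.argmax(mags))
    return points[i], flows[i] / mags[i]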
arXiv Detail & Related papers (2022-05-09T15:35:33Z)
- Discovering Objects that Can Move [55.743225595012966]
We study the problem of object discovery -- separating objects from the background without manual labels.
Existing approaches utilize appearance cues, such as color, texture, and location, to group pixels into object-like regions.
We choose to focus on dynamic objects -- entities that can move independently in the world.
arXiv Detail & Related papers (2022-03-18T21:13:56Z)
- Object Manipulation via Visual Target Localization [64.05939029132394]
Training agents to manipulate objects poses many challenges.
We propose an approach that explores the environment in search for target objects, computes their 3D coordinates once they are located, and then continues to estimate their 3D locations even when the objects are not visible.
Our evaluations show a massive 3x improvement in success rate over a model that has access to the same sensory suite.
arXiv Detail & Related papers (2022-03-15T17:59:01Z)
- Ab Initio Particle-based Object Manipulation [22.78939235155233]
Particle-based Object Manipulation (Prompt) is a new approach to robot manipulation of novel objects ab initio.
Prompt combines the benefits of both model-based reasoning and data-driven learning.
Prompt successfully handles a variety of everyday objects, some of which are transparent.
arXiv Detail & Related papers (2021-07-19T13:27:00Z)
- Supervised Training of Dense Object Nets using Optimal Descriptors for Industrial Robotic Applications [57.87136703404356]
Dense Object Nets (DONs) by Florence, Manuelli and Tedrake introduced dense object descriptors as a novel visual object representation for the robotics community.
In this paper we show that given a 3D model of an object, we can generate its descriptor space image, which allows for supervised training of DONs.
We compare the training methods on generating 6D grasps for industrial objects and show that our novel supervised training approach improves the pick-and-place performance in industry-relevant tasks.
arXiv Detail & Related papers (2021-02-16T11:40:12Z)
- A Long Horizon Planning Framework for Manipulating Rigid Pointcloud Objects [25.428781562909606]
We present a framework for solving long-horizon planning problems involving manipulation of rigid objects.
Our method plans in the space of object subgoals and frees the planner from reasoning about robot-object interaction dynamics.
arXiv Detail & Related papers (2020-11-16T18:59:33Z)
- Learning Object-Based State Estimators for Household Robots [11.055133590909097]
We build object-based memory systems that operate on high-dimensional observations and hypotheses.
We demonstrate the system's effectiveness in maintaining memory of dynamically changing objects in both simulated environment and real images.
arXiv Detail & Related papers (2020-11-06T04:18:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all listed content) and is not responsible for any consequences of its use.