Grasp Transfer based on Self-Aligning Implicit Representations of Local Surfaces
- URL: http://arxiv.org/abs/2308.07807v1
- Date: Tue, 15 Aug 2023 14:33:17 GMT
- Title: Grasp Transfer based on Self-Aligning Implicit Representations of Local Surfaces
- Authors: Ahmet Tekden, Marc Peter Deisenroth, Yasemin Bekiroglu
- Abstract summary: This work addresses the problem of transferring a grasp experience or a demonstration to a novel object that shares shape similarities with objects the robot has previously encountered.
We employ a single expert grasp demonstration to learn an implicit local surface representation model from a small dataset of object meshes.
At inference time, this model is used to transfer grasps to novel objects by identifying the surfaces most geometrically similar to the one on which the expert grasp was demonstrated.
- Score: 10.602143478315861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Objects we interact with and manipulate often share similar parts, such as
handles, that allow us to transfer our actions flexibly due to their shared
functionality. This work addresses the problem of transferring a grasp
experience or a demonstration to a novel object that shares shape similarities
with objects the robot has previously encountered. Existing approaches for
solving this problem are typically restricted to a specific object category or
a parametric shape. Our approach, however, can transfer grasps associated with
implicit models of local surfaces shared across object categories.
Specifically, we employ a single expert grasp demonstration to learn an
implicit local surface representation model from a small dataset of object
meshes. At inference time, this model is used to transfer grasps to novel
objects by identifying the surfaces most geometrically similar to the one on
which the expert grasp was demonstrated. Our model is trained entirely in
simulation and is evaluated on simulated and real-world objects that are not
seen during training. Evaluations indicate that grasp transfer to unseen object
categories using this approach can be successfully performed both in simulation
and real-world experiments. The simulation results also show that the proposed
approach leads to better spatial precision and grasp accuracy compared to a
baseline approach.
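To make the transfer step concrete, here is a minimal sketch in Python. It is not the authors' implementation: encode_patch below is a hypothetical geometric stand-in for the learned implicit local-surface model, and alignment is reduced to translating the demonstrated grasp to the best-matching patch center (the paper additionally self-aligns orientation).

```python
# Minimal sketch of grasp transfer by local-surface matching (illustrative only).
import numpy as np

def encode_patch(points):
    """Hypothetical stand-in for the learned local-surface encoder: maps a
    centered point patch to a fixed-length geometric descriptor."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)       # 3x3 second-moment summary
    return np.concatenate([cov.flatten(), centered.std(axis=0)])

def transfer_grasp(demo_patch, demo_grasp, novel_points, radius=0.05):
    """Re-anchor a demonstrated grasp at the most similar local surface of a
    novel object. demo_grasp is a 4x4 gripper pose in the object frame."""
    demo_code = encode_patch(demo_patch)
    best_center, best_dist = None, np.inf
    for center in novel_points:                     # candidate patch centers
        mask = np.linalg.norm(novel_points - center, axis=1) < radius
        if mask.sum() < 16:                         # skip sparse patches
            continue
        dist = np.linalg.norm(encode_patch(novel_points[mask]) - demo_code)
        if dist < best_dist:
            best_center, best_dist = center, dist
    assert best_center is not None, "no sufficiently dense patch found"
    grasp = demo_grasp.copy()                       # keep demo orientation
    grasp[:3, 3] += best_center - demo_patch.mean(axis=0)
    return grasp, best_dist
```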
Related papers
- Zero-Shot Object-Centric Representation Learning [72.43369950684057]
We study current object-centric methods through the lens of zero-shot generalization.
We introduce a benchmark comprising eight different synthetic and real-world datasets.
We find that training on diverse real-world images improves transferability to unseen scenarios.
arXiv Detail & Related papers (2024-08-17T10:37:07Z)
- Where2Explore: Few-shot Affordance Learning for Unseen Novel Categories of Articulated Objects [15.989258402792755]
'Where2Explore' is a framework that effectively explores novel categories with minimal interactions on a limited number of instances.
Our framework explicitly estimates the geometric similarity across different categories, identifying local areas that differ from shapes in the training categories for efficient exploration.
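The similarity-driven exploration idea admits a very small sketch (my paraphrase, not the authors' code): score each local region of a novel shape by its distance to the nearest feature seen during training, and explore the least similar regions first.

```python
import numpy as np

def exploration_priority(point_feats, train_feat_bank):
    """Rank local areas of a novel shape: areas whose features are far from
    anything in the training categories get explored first.
    point_feats: (N, D) per-point features; train_feat_bank: (M, D)."""
    pf = point_feats / np.linalg.norm(point_feats, axis=1, keepdims=True)
    tb = train_feat_bank / np.linalg.norm(train_feat_bank, axis=1, keepdims=True)
    nearest_sim = (pf @ tb.T).max(axis=1)  # cosine sim to closest training feature
    return 1.0 - nearest_sim               # high value = unfamiliar geometry
```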
arXiv Detail & Related papers (2023-09-14T07:11:58Z)
- Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation [66.86987509942607]
We evaluate how representation pretraining should be carried out for imitation learning.
We consider a setting where the pretraining corpus consists of multitask demonstrations.
We argue that inverse dynamics modeling is well-suited to this setting.
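As a generic illustration of this objective (the paper's architecture and data handling will differ), an inverse dynamics model predicts the action linking consecutive observations, and the encoder it learns as a byproduct is what transfers to downstream imitation:

```python
import torch
import torch.nn as nn

class InverseDynamicsModel(nn.Module):
    """Predicts the action connecting (obs, next_obs); the shared encoder is
    reused as the representation for downstream multitask imitation."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, act_dim))

    def forward(self, obs, next_obs):
        z, z_next = self.encoder(obs), self.encoder(next_obs)
        return self.head(torch.cat([z, z_next], dim=-1))

def pretrain_step(model, opt, obs, action, next_obs):
    loss = nn.functional.mse_loss(model(obs, next_obs), action)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```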
arXiv Detail & Related papers (2023-05-26T14:40:46Z)
- Local Neural Descriptor Fields: Locally Conditioned Object Representations for Manipulation [10.684104348212742]
We present a method to generalize object manipulation skills acquired from a limited number of demonstrations.
Our approach, Local Neural Descriptor Fields (L-NDF), utilizes neural descriptors defined on the local geometry of the object.
We illustrate the efficacy of our approach in manipulating novel objects in novel poses -- both in simulation and in the real world.
arXiv Detail & Related papers (2023-02-07T16:37:19Z)
- Detection and Physical Interaction with Deformable Linear Objects [10.707804359932604]
Deformable linear objects (e.g., cables, ropes, and threads) commonly appear in our everyday lives.
There have already been successful methods to model and track deformable linear objects.
We present our work on applying these methods to tasks such as routing and manipulation with ground and aerial robots.
arXiv Detail & Related papers (2022-05-17T01:17:21Z)
- Discovering Objects that Can Move [55.743225595012966]
We study the problem of object discovery -- separating objects from the background without manual labels.
Existing approaches utilize appearance cues, such as color, texture, and location, to group pixels into object-like regions.
We choose to focus on dynamic objects -- entities that can move independently in the world.
arXiv Detail & Related papers (2022-03-18T21:13:56Z)
- Fusing Local Similarities for Retrieval-based 3D Orientation Estimation of Unseen Objects [70.49392581592089]
We tackle the task of estimating the 3D orientation of previously-unseen objects from monocular images.
We follow a retrieval-based strategy and prevent the network from learning object-specific features.
Our experiments on the LineMOD, LineMOD-Occluded, and T-LESS datasets show that our method yields a significantly better generalization to unseen objects than previous works.
arXiv Detail & Related papers (2022-03-16T08:53:00Z)
- Attribute-Based Robotic Grasping with One-Grasp Adaptation [9.255994599301712]
We introduce an end-to-end learning method for attribute-based robotic grasping with one-grasp adaptation capability.
Our approach fuses the embeddings of a workspace image and a query text using a gated-attention mechanism and learns to predict instance grasping affordances.
Experimental results in both simulation and the real world demonstrate that our approach achieves over 80% instance grasping success rate on unknown objects.
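Gated-attention fusion of this kind can be sketched in a few lines (a common channel-wise sigmoid-gating formulation; the paper's exact layer may differ):

```python
import torch
import torch.nn as nn

class GatedAttentionFusion(nn.Module):
    """Gate each channel of a visual feature map with a sigmoid weight
    computed from the query-text embedding."""
    def __init__(self, text_dim, img_channels):
        super().__init__()
        self.gate = nn.Linear(text_dim, img_channels)

    def forward(self, img_feats, text_emb):
        # img_feats: (B, C, H, W); text_emb: (B, text_dim)
        g = torch.sigmoid(self.gate(text_emb))   # (B, C) channel gates
        return img_feats * g[:, :, None, None]   # broadcast over H, W
```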
arXiv Detail & Related papers (2021-04-06T03:40:46Z)
- Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) provides powerful tools for solving complex robotic tasks.
However, policies trained with RL in simulation often do not work directly in the real world, a challenge known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed from point clouds, combined with environment randomization.
arXiv Detail & Related papers (2020-07-27T17:46:59Z)
- Learning Predictive Representations for Deformable Objects Using Contrastive Estimation [83.16948429592621]
We propose a new learning framework that jointly optimizes both the visual representation model and the dynamics model.
We show substantial improvements over standard model-based learning techniques across our rope and cloth manipulation suite.
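One standard way to realize such a joint objective (a sketch assuming an InfoNCE-style contrastive loss, which the title suggests; details may differ from the paper) is to have a latent dynamics model predict the next embedding and contrast it against in-batch negatives:

```python
import torch
import torch.nn.functional as F

def contrastive_dynamics_loss(encoder, dynamics, obs, action, next_obs, temp=0.1):
    """The predicted next latent should match the encoding of the true next
    observation against in-batch negatives; gradients flow into both the
    visual encoder and the latent dynamics model, training them jointly."""
    z = encoder(obs)                                   # (B, D)
    z_pred = dynamics(torch.cat([z, action], dim=-1))  # predicted next latent
    z_next = encoder(next_obs)                         # (B, D) targets
    z_pred, z_next = F.normalize(z_pred, dim=-1), F.normalize(z_next, dim=-1)
    logits = z_pred @ z_next.t() / temp                # (B, B) similarities
    labels = torch.arange(obs.shape[0], device=logits.device)
    return F.cross_entropy(logits, labels)
```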
arXiv Detail & Related papers (2020-03-11T17:55:15Z)