Dexterous Functional Grasping
- URL: http://arxiv.org/abs/2312.02975v1
- Date: Tue, 5 Dec 2023 18:59:23 GMT
- Title: Dexterous Functional Grasping
- Authors: Ananye Agarwal, Shagun Uppal, Kenneth Shaw, Deepak Pathak
- Abstract summary: This paper combines human-derived affordances with simulation-trained low-level control to accomplish functional grasping for in-the-wild objects.
We propose a novel application of eigengrasps to reduce the search space of RL using a small amount of human data.
We find that the eigengrasp action space beats baselines in simulation, outperforms hardcoded grasping in the real world, and matches or outperforms a trained human teleoperator.
- Score: 39.15442658671798
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While there have been significant strides in dexterous manipulation, most of it is limited to benchmark tasks like in-hand reorientation, which are of limited utility in the real world. The main benefit of dexterous hands over two-fingered ones is their ability to pick up tools and other objects (including thin ones) and grasp them firmly to apply force. However, this task requires both a complex understanding of functional affordances and precise low-level control. While prior work obtains affordances from human data, this approach does not scale to low-level control. Similarly, simulation training cannot give the robot an understanding of real-world semantics. In this paper, we aim to combine the best of both worlds to accomplish functional grasping for in-the-wild objects. We use a modular approach: first, affordances are obtained by matching corresponding regions of different objects, and then a low-level policy trained in simulation is run to grasp the object. We propose a novel application of eigengrasps to reduce the search space of RL using a small amount of human data, and find that it leads to more stable and physically realistic motion. We find that the eigengrasp action space beats baselines in simulation, outperforms hardcoded grasping in the real world, and matches or outperforms a trained human teleoperator. Result visualizations and videos are at https://dexfunc.github.io/
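To make the eigengrasp idea concrete: eigengrasps are the principal components of a dataset of grasp postures, so the action-space reduction amounts to fitting PCA on human grasp poses (retargeted to robot joint angles) and letting the RL policy output low-dimensional coefficients. The sketch below is a minimal, hypothetical illustration of that construction; the joint count, component count, and placeholder data are assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.decomposition import PCA

NUM_JOINTS = 16  # hypothetical hand DoF count, not the paper's exact value
K = 5            # number of eigengrasp components (illustrative choice)

# Placeholder for human grasp data: (N, NUM_JOINTS) joint configurations
# retargeted from human demonstrations. Real data would come from mocap
# or teleoperation, not random numbers.
human_grasps = np.random.default_rng(0).normal(size=(200, NUM_JOINTS))

# Eigengrasps are the principal components of the grasp-pose dataset.
pca = PCA(n_components=K).fit(human_grasps)

def coeffs_to_joints(coeffs: np.ndarray) -> np.ndarray:
    """Decode a K-dim eigengrasp action into full joint-angle targets."""
    return pca.mean_ + coeffs @ pca.components_

# An RL policy now outputs actions in R^K instead of R^NUM_JOINTS,
# shrinking the search space while staying near the human grasp manifold.
action = np.zeros(K)                       # e.g. sampled from the policy
joint_targets = coeffs_to_joints(action)   # zeros decode to the mean grasp
```

Because every decoded pose is a linear blend of observed human grasps, exploration stays near the manifold of natural hand configurations, consistent with the abstract's claim of more stable, physically realistic motion.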
Related papers
- Helpful DoggyBot: Open-World Object Fetching using Legged Robots and Vision-Language Models [63.89598561397856]
We present a system for quadrupedal mobile manipulation in indoor environments.
It uses a front-mounted gripper for object manipulation and a low-level controller, trained in simulation with egocentric depth, for agile skills.
We evaluate our system in two unseen environments without any real-world data collection or training.
arXiv Detail & Related papers (2024-09-30T20:58:38Z)
- Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful machine learning approach for a robot to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z)
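Taking the GSR summary above at face value, the mechanism suggests building a graph whose nodes are demonstration states and whose edges are observed transitions, then searching for a low-cost path to the goal and retrieving its actions. The sketch below is a generic reconstruction under those assumptions, with a hypothetical state discretization and cost function; it is not the paper's actual algorithm.

```python
import heapq
import itertools
from collections import defaultdict

def build_graph(demos):
    """Nodes are discretized states; edges keep the demo action and a step cost."""
    graph = defaultdict(list)
    for demo in demos:  # each demo is a list of (state, action, next_state, cost)
        for state, action, next_state, cost in demo:
            graph[state].append((next_state, action, cost))
    return graph

def retrieve_plan(graph, start, goal):
    """Dijkstra over the demonstration graph; returns an action sequence or None."""
    tie = itertools.count()  # tie-breaker so the heap never compares states
    frontier = [(0.0, next(tie), start, [])]
    done = set()
    while frontier:
        dist, _, node, actions = heapq.heappop(frontier)
        if node == goal:
            return actions
        if node in done:
            continue
        done.add(node)
        for nxt, action, cost in graph[node]:
            if nxt not in done:
                heapq.heappush(frontier, (dist + cost, next(tie), nxt, actions + [action]))
    return None  # goal not reachable from the available demonstrations
```

Stitching segments from different suboptimal demonstrations is what lets a retrieved path outperform any single demonstration.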
- SynH2R: Synthesizing Hand-Object Motions for Learning Human-to-Robot Handovers [37.49601724575655]
Vision-based human-to-robot handover is an important and challenging task in human-robot interaction.
We introduce a framework that can generate plausible human grasping motions suitable for training the robot.
This allows us to generate synthetic training and testing data with 100x more objects than previous work.
arXiv Detail & Related papers (2023-11-09T18:57:02Z)
- Few-Shot Preference Learning for Human-in-the-Loop RL [13.773589150740898]
Motivated by the success of meta-learning, we pre-train preference models on prior task data and quickly adapt them for new tasks using only a handful of queries.
We reduce the amount of online feedback needed to train manipulation policies in Meta-World by 20×, and demonstrate the effectiveness of our method on a real Franka Panda robot.
arXiv Detail & Related papers (2022-12-06T23:12:26Z)
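For context on the preference-model component above: preference-based RL commonly models the probability that a human prefers one trajectory segment over another with a Bradley-Terry distribution over summed learned rewards, and few-shot adaptation then amounts to a few gradient steps on newly queried pairs. The PyTorch sketch below shows that standard loss; the network architecture and feature dimensions are invented for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical reward model over per-step state-action features.
reward_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

def preference_loss(seg_a, seg_b, prefer_a):
    """Bradley-Terry loss: P(a > b) = softmax over summed predicted rewards.

    seg_a, seg_b: (T, 32) tensors of features for two trajectory segments.
    prefer_a: True if the human preferred segment a.
    """
    r_a = reward_net(seg_a).sum()
    r_b = reward_net(seg_b).sum()
    logits = torch.stack([r_a, r_b]).unsqueeze(0)          # shape (1, 2)
    target = torch.tensor([0 if prefer_a else 1])          # preferred class
    return nn.functional.cross_entropy(logits, target)

# Few-shot adaptation: start from a pre-trained reward_net and take a few
# gradient steps on newly queried preference pairs.
opt = torch.optim.Adam(reward_net.parameters(), lr=1e-4)
loss = preference_loss(torch.randn(50, 32), torch.randn(50, 32), prefer_a=True)
opt.zero_grad()
loss.backward()
opt.step()
```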
- Reactive Long Horizon Task Execution via Visual Skill and Precondition Models [59.76233967614774]
We describe an approach for sim-to-real training that can accomplish unseen robotic tasks using models learned in simulation to ground components of a simple task planner.
We show an increase in success rate from 91.6% to 98% in simulation, and from 10% to 80% in the real world, compared with naive baselines.
arXiv Detail & Related papers (2020-11-17T15:24:01Z)
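The "precondition models" above suggest a familiar pattern: a simple planner executes a skill only when a learned classifier predicts, from the current observation, that the skill's preconditions hold. The sketch below reconstructs that pattern; `Skill`, its fields, and the threshold are hypothetical names, not the paper's API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Skill:
    name: str
    precondition: Callable  # obs -> predicted probability the skill can succeed
    execute: Callable       # obs -> None, runs the learned visual policy

def run_plan(plan: List[Skill], get_obs: Callable, threshold: float = 0.9) -> bool:
    """Execute skills in order; abort (to trigger replanning) on a failed check."""
    for skill in plan:
        obs = get_obs()
        if skill.precondition(obs) < threshold:
            return False  # planner should propose an alternative skill sequence
        skill.execute(obs)
    return True
```

Checking a learned precondition before each skill is what makes the execution reactive: the plan adapts when the world no longer matches the planner's assumptions.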
- Deep Imitation Learning for Bimanual Robotic Manipulation [70.56142804957187]
We present a deep imitation learning framework for robotic bimanual manipulation.
A core challenge is to generalize the manipulation skills to objects in different locations.
We propose to (i) decompose the multi-modal dynamics into elemental movement primitives, (ii) parameterize each primitive using a recurrent graph neural network to capture interactions, and (iii) integrate a high-level planner that composes primitives sequentially and a low-level controller to combine primitive dynamics and inverse kinematics control.
arXiv Detail & Related papers (2020-10-11T01:40:03Z)
- Learning Dexterous Grasping with Object-Centric Visual Affordances [86.49357517864937]
Dexterous robotic hands are appealing for their agility and human-like morphology.
We introduce an approach for learning dexterous grasping.
Our key idea is to embed an object-centric visual affordance model within a deep reinforcement learning loop.
arXiv Detail & Related papers (2020-09-03T04:00:40Z)
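On the last entry: one common way to embed a visual affordance model in an RL loop is as a reward prior that scores how well the hand's contact region overlaps the predicted graspable region. The snippet below sketches that coupling under invented names (`affordance_map`, `contact_mask`); it illustrates the general pattern rather than the paper's implementation.

```python
import numpy as np

def affordance_reward(affordance_map: np.ndarray,
                      contact_mask: np.ndarray,
                      task_reward: float,
                      weight: float = 0.5) -> float:
    """Shape the RL reward with an object-centric affordance prior.

    affordance_map: (H, W) per-pixel graspability scores from a vision model.
    contact_mask:   (H, W) boolean mask of where the hand touches the object.
    """
    if contact_mask.any():
        # Mean affordance under the contact region: high when the policy
        # grasps where the affordance model says humans would grasp.
        prior = float(affordance_map[contact_mask].mean())
    else:
        prior = 0.0
    return task_reward + weight * prior
```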