Learning Dexterous Grasping with Object-Centric Visual Affordances
- URL: http://arxiv.org/abs/2009.01439v2
- Date: Wed, 16 Jun 2021 22:28:15 GMT
- Title: Learning Dexterous Grasping with Object-Centric Visual Affordances
- Authors: Priyanka Mandikal, Kristen Grauman
- Abstract summary: Dexterous robotic hands are appealing for their agility and human-like morphology.
We introduce an approach for learning dexterous grasping.
Our key idea is to embed an object-centric visual affordance model within a deep reinforcement learning loop.
- Score: 86.49357517864937
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dexterous robotic hands are appealing for their agility and human-like
morphology, yet their high degree of freedom makes learning to manipulate
challenging. We introduce an approach for learning dexterous grasping. Our key
idea is to embed an object-centric visual affordance model within a deep
reinforcement learning loop to learn grasping policies that favor the same
object regions favored by people. Unlike traditional approaches that learn from
human demonstration trajectories (e.g., hand joint sequences captured with a
glove), the proposed prior is object-centric and image-based, allowing the
agent to anticipate useful affordance regions for objects unseen during policy
learning. We demonstrate our idea with a 30-DoF five-fingered robotic hand
simulator on 40 objects from two datasets, where it successfully and
efficiently learns policies for stable functional grasps. Our affordance-guided
policies are significantly more effective, generalize better to novel objects,
train 3X faster than the baselines, and are more robust to noisy sensor
readings and actuation. Our work offers a step towards manipulation agents that
learn by watching how people use objects, without requiring state and action
information about the human body. Project website:
http://vision.cs.utexas.edu/projects/graff-dexterous-affordance-grasp
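To make the key idea concrete, below is a minimal illustrative sketch of how a predicted object-centric affordance region could shape the grasping reward inside an RL loop: fingertip contacts that land on the predicted region are rewarded, plus a bonus for a stable grasp. This is not the authors' exact formulation; the function name, inputs, reward weights, and the toy example are assumptions for illustration.

```python
import numpy as np

def affordance_grasp_reward(contact_uv, affordance_mask, grasp_stable,
                            w_aff=1.0, w_stable=5.0):
    """Illustrative reward (not the paper's exact formulation).

    contact_uv:      (N, 2) fingertip contact points projected into the
                     object image as (u, v) pixel coordinates (assumed).
    affordance_mask: (H, W) binary map of people-preferred contact regions,
                     e.g. predicted by an object-centric affordance model.
    grasp_stable:    bool, whether the object stays held (e.g. under gravity).
    """
    h, w = affordance_mask.shape
    on_affordance = 0
    for u, v in np.asarray(contact_uv, dtype=int):
        if 0 <= v < h and 0 <= u < w and affordance_mask[v, u]:
            on_affordance += 1
    frac = on_affordance / max(len(contact_uv), 1)  # fraction of contacts on the affordance region
    return w_aff * frac + (w_stable if grasp_stable else 0.0)

# Toy example: three of five fingertips land on a hypothetical affordance region.
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 45:60] = True                        # assumed affordance region
contacts = [(50, 25), (48, 30), (55, 35), (5, 5), (10, 60)]
print(affordance_grasp_reward(contacts, mask, grasp_stable=True))  # 0.6 + 5.0 = 5.6
```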
Related papers
- Hand-Object Interaction Pretraining from Videos [77.92637809322231]
We learn general robot manipulation priors from 3D hand-object interaction trajectories.
We do so by lifting both the human hand and the manipulated object into a shared 3D space and retargeting human motions to robot actions.
We empirically demonstrate that finetuning this policy, with both reinforcement learning (RL) and behavior cloning (BC), enables sample-efficient adaptation to downstream tasks and simultaneously improves robustness and generalizability compared to prior approaches.
arXiv Detail & Related papers (2024-09-12T17:59:07Z)
- GraspXL: Generating Grasping Motions for Diverse Objects at Scale [30.104108863264706]
We unify the generation of hand-object grasping motions across multiple motion objectives in a policy learning framework GraspXL.
Our policy trained with 58 objects can robustly synthesize diverse grasping motions for more than 500k unseen objects with a success rate of 82.2%.
Our framework can be deployed to different dexterous hands and work with reconstructed or generated objects.
arXiv Detail & Related papers (2024-03-28T17:57:27Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective (see the sketch after this entry).
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
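A minimal sketch of the embedding-distance reward described in the entry above, assuming a pre-trained embedding network `phi` (the time-contrastive training itself is omitted); the function name, the negative Euclidean-distance form, and the toy stand-in embedding are assumptions for illustration.

```python
import numpy as np

def embedding_distance_reward(phi, obs, goal_obs):
    """Reward = negative distance to the goal in a learned embedding space.

    phi:      callable mapping an observation (e.g. an image) to a 1-D
              embedding vector; assumed trained beforehand with a
              time-contrastive objective on human videos.
    obs:      current observation.
    goal_obs: observation depicting the desired goal state.
    """
    z, z_goal = phi(obs), phi(goal_obs)
    return -float(np.linalg.norm(z - z_goal))

# Toy usage with a stand-in "embedding": a fixed random linear projection.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 32))
phi = lambda x: W @ x
obs, goal_obs = rng.standard_normal(32), rng.standard_normal(32)
print(embedding_distance_reward(phi, obs, goal_obs))  # more negative = farther from goal
```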
- DexTransfer: Real World Multi-fingered Dexterous Grasping with Minimal Human Demonstrations [51.87067543670535]
We propose a robot-learning system that can take a small number of human demonstrations and learn to grasp unseen object poses.
We train a dexterous grasping policy that takes the point clouds of the object as input and predicts continuous actions to grasp objects from different initial robot states (see the policy-network sketch after this entry).
The policy learned from our dataset can generalize well on unseen object poses in both simulation and the real world.
arXiv Detail & Related papers (2022-09-28T17:51:49Z)
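A rough skeleton of a point-cloud-to-action policy of the kind described in the entry above; this is a sketch under assumed input sizes, layer widths, and a PointNet-style encoder, not the DexTransfer architecture.

```python
import torch
import torch.nn as nn

class PointCloudGraspPolicy(nn.Module):
    """Sketch: map an object point cloud plus robot proprioception to
    continuous hand/arm actions. All dimensions below are assumptions."""

    def __init__(self, n_proprio=24, n_actions=30, feat_dim=256):
        super().__init__()
        # Per-point MLP followed by a max-pool over points (PointNet-style).
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.action_head = nn.Sequential(
            nn.Linear(feat_dim + n_proprio, 256), nn.ReLU(),
            nn.Linear(256, n_actions), nn.Tanh(),   # actions scaled to [-1, 1]
        )

    def forward(self, points, proprio):
        # points: (B, N, 3) object point cloud; proprio: (B, n_proprio) robot state.
        per_point = self.point_mlp(points)          # (B, N, feat_dim)
        global_feat = per_point.max(dim=1).values   # order-invariant pooling
        return self.action_head(torch.cat([global_feat, proprio], dim=-1))

# Toy forward pass: batch of 2 clouds with 512 points each.
policy = PointCloudGraspPolicy()
actions = policy(torch.randn(2, 512, 3), torch.randn(2, 24))
print(actions.shape)  # torch.Size([2, 30])
```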
- Learning Generalizable Dexterous Manipulation from Human Grasp Affordance [11.060931225148936]
Dexterous manipulation with a multi-finger hand is one of the most challenging problems in robotics.
Recent progress in imitation learning has greatly improved sample efficiency compared to reinforcement learning.
We propose to learn dexterous manipulation using large-scale demonstrations with diverse 3D objects in a category.
arXiv Detail & Related papers (2022-04-05T16:26:22Z)
- DexVIP: Learning Dexterous Grasping with Human Hand Pose Priors from Video [86.49357517864937]
We propose DexVIP, an approach to learn dexterous robotic grasping from human-object interaction videos.
We do this by curating grasp images from human-object interaction videos and imposing a prior over the agent's hand pose.
We demonstrate that DexVIP compares favorably to existing approaches that lack a hand pose prior or rely on specialized tele-operation equipment.
arXiv Detail & Related papers (2022-02-01T00:45:57Z)
- Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos [28.712673809577076]
We present an approach for physical imitation from human videos for robot manipulation tasks.
We design a perception module that learns to translate human videos to the robot domain, followed by unsupervised keypoint detection.
We evaluate the effectiveness of our approach on five robot manipulation tasks, including reaching, pushing, sliding, coffee making, and drawer closing.
arXiv Detail & Related papers (2021-01-18T18:50:32Z)
- Learning Object Manipulation Skills via Approximate State Estimation from Real Videos [47.958512470724926]
Humans are adept at learning new tasks by watching a few instructional videos.
On the other hand, robots that learn new actions either require a lot of effort through trial and error, or use expert demonstrations that are challenging to obtain.
In this paper, we explore a method that facilitates learning object manipulation skills directly from videos.
arXiv Detail & Related papers (2020-11-13T08:53:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.