GraspGF: Learning Score-based Grasping Primitive for Human-assisting
Dexterous Grasping
- URL: http://arxiv.org/abs/2309.06038v3
- Date: Wed, 15 Nov 2023 00:28:32 GMT
- Title: GraspGF: Learning Score-based Grasping Primitive for Human-assisting
Dexterous Grasping
- Authors: Tianhao Wu, Mingdong Wu, Jiyao Zhang, Yunchong Gan, Hao Dong
- Abstract summary: We propose a novel task called human-assisting dexterous grasping.
It aims to train a policy for controlling a robotic hand's fingers to assist users in grasping objects.
- Score: 11.63059055320262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of anthropomorphic robotic hands for assisting individuals in
situations where human hands may be unavailable or unsuitable has gained
significant importance. In this paper, we propose a novel task called
human-assisting dexterous grasping that aims to train a policy for controlling
a robotic hand's fingers to assist users in grasping objects. Unlike
conventional dexterous grasping, this task presents a more complex challenge as
the policy needs to adapt to diverse user intentions, in addition to the
object's geometry. We address this challenge by proposing an approach
consisting of two sub-modules: a hand-object-conditional grasping primitive
called Grasping Gradient Field~(GraspGF), and a history-conditional residual
policy. GraspGF learns 'how' to grasp by estimating the gradient from a set of
successful grasping examples, while the residual policy determines 'when' and at
what speed the grasping action should be executed, based on the trajectory history.
Experimental results demonstrate the superiority of our proposed method over
baselines, highlighting its user-awareness and practicality in real-world
applications. The code and demonstrations can be viewed at
"https://sites.google.com/view/graspgf".
Related papers
- SIGHT: Single-Image Conditioned Generation of Hand Trajectories for Hand-Object Interaction [86.54738165527502]
We introduce a novel task of generating realistic and diverse 3D hand trajectories given a single image of an object.
Hand-object interaction trajectory priors can greatly benefit applications in robotics, embodied AI, augmented reality and related fields.
arXiv Detail & Related papers (2025-03-28T20:53:20Z) - Gaze-Guided 3D Hand Motion Prediction for Detecting Intent in Egocentric Grasping Tasks [5.018156030818883]
We propose a novel approach that predicts future sequences of both hand poses and joint positions.
We use a vector-quantized variational autoencoder for robust hand pose encoding with an autoregressive generative transformer for effective hand motion sequence prediction.
arXiv Detail & Related papers (2025-03-27T15:26:41Z) - COMBO-Grasp: Learning Constraint-Based Manipulation for Bimanual Occluded Grasping [56.907940167333656]
Occluded robot grasping refers to settings in which the desired grasp poses are kinematically infeasible due to environmental constraints such as surface collisions.
Traditional robot manipulation approaches struggle with the complexity of non-prehensile or bimanual strategies commonly used by humans.
We introduce Constraint-based Manipulation for Bimanual Occluded Grasping (COMBO-Grasp), a learning-based approach which leverages two coordinated policies.
arXiv Detail & Related papers (2025-02-12T01:31:01Z) - A Powered Prosthetic Hand with Vision System for Enhancing the Anthropopathic Grasp [11.354158680070652]
We propose the Spatial Geometry-based Gesture Mapping (SG-GM) method, which constructs gesture functions based on the geometric features of the human hand grasping processes.
We also propose the Motion Trajectory Regression-based Grasping Intent Estimation (MTR-GIE) algorithm.
The experiments were conducted on grasping 8 common daily objects, including a cup, a fork, etc.
arXiv Detail & Related papers (2024-12-10T01:45:14Z) - Learning to Assist Humans without Inferring Rewards [65.28156318196397]
We build upon prior work that studies assistance through the lens of empowerment.
An assistive agent aims to maximize the influence of the human's actions.
We prove that these representations estimate a similar notion of empowerment to that studied by prior work.
arXiv Detail & Related papers (2024-11-04T21:31:04Z) - Learning Goal-oriented Bimanual Dough Rolling Using Dynamic Heterogeneous Graph Based on Human Demonstration [19.74767906744719]
Soft object manipulation poses significant challenges for robots, requiring effective techniques for state representation and manipulation policy learning.
This research paper introduces a novel approach: a dynamic heterogeneous graph-based model for learning goal-oriented soft object manipulation policies.
arXiv Detail & Related papers (2024-10-15T16:12:00Z) - Dreamitate: Real-World Visuomotor Policy Learning via Video Generation [49.03287909942888]
We propose a visuomotor policy learning framework that fine-tunes a video diffusion model on human demonstrations of a given task.
We generate an example of an execution of the task conditioned on images of a novel scene, and use this synthesized execution directly to control the robot.
arXiv Detail & Related papers (2024-06-24T17:59:45Z) - Dexterous Functional Grasping [39.15442658671798]
This paper combines the best of both worlds to accomplish functional grasping for in-the-wild objects.
We propose a novel application of eigengrasps to reduce the search space of RL using a small amount of human data.
We find that the eigengrasp action space outperforms baselines in simulation, beats hardcoded grasping in the real world, and matches or exceeds a trained human teleoperator.
arXiv Detail & Related papers (2023-12-05T18:59:23Z) - UniDexGrasp: Universal Robotic Dexterous Grasping via Learning Diverse
Proposal Generation and Goal-Conditioned Policy [23.362000826018612]
We tackle the problem of learning universal robotic dexterous grasping from a point cloud observation under a table-top setting.
Inspired by successful pipelines used in parallel gripper grasping, we split the task into two stages: 1) grasp proposal (pose) generation and 2) goal-conditioned grasp execution.
Our final pipeline becomes the first to achieve universal generalization for dexterous grasping, demonstrating an average success rate of more than 60% on thousands of object instances.
arXiv Detail & Related papers (2023-03-02T03:23:18Z) - Efficient Representations of Object Geometry for Reinforcement Learning
of Interactive Grasping Policies [29.998917158604694]
We present a reinforcement learning framework that learns the interactive grasping of various geometrically distinct real-world objects.
Videos of learned interactive policies are available at https://maltemosbach.org/io/geometry_aware_grasping_policies.
arXiv Detail & Related papers (2022-11-20T11:47:33Z) - Silver-Bullet-3D at ManiSkill 2021: Learning-from-Demonstrations and
Heuristic Rule-based Methods for Object Manipulation [118.27432851053335]
This paper presents an overview and comparative analysis of our systems designed for two tracks in the SAPIEN ManiSkill Challenge 2021, including the No Interaction track.
The No Interaction track targets learning policies from pre-collected demonstration trajectories.
In this track, we design a Heuristic Rule-based Method (HRM) to trigger high-quality object manipulation by decomposing the task into a series of sub-tasks.
For each sub-task, simple rule-based control strategies are adopted to predict actions that can be applied to robotic arms.
arXiv Detail & Related papers (2022-06-13T16:20:42Z) - Human-in-the-Loop Imitation Learning using Remote Teleoperation [72.2847988686463]
We build a data collection system tailored to 6-DoF manipulation settings.
We develop an algorithm to train the policy iteratively on new data collected by the system.
We demonstrate that agents trained on data collected by our intervention-based system and algorithm outperform agents trained on an equivalent number of samples collected by non-interventional demonstrators.
arXiv Detail & Related papers (2020-12-12T05:30:35Z) - Learning Dexterous Grasping with Object-Centric Visual Affordances [86.49357517864937]
Dexterous robotic hands are appealing for their agility and human-like morphology.
We introduce an approach for learning dexterous grasping.
Our key idea is to embed an object-centric visual affordance model within a deep reinforcement learning loop.
arXiv Detail & Related papers (2020-09-03T04:00:40Z) - AvE: Assistance via Empowerment [77.08882807208461]
We propose a new paradigm for assistance by instead increasing the human's ability to control their environment.
This task-agnostic objective preserves the person's autonomy and ability to achieve any eventual state.
arXiv Detail & Related papers (2020-06-26T04:40:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.