Learning Dexterous Object Handover
- URL: http://arxiv.org/abs/2506.16822v1
- Date: Fri, 20 Jun 2025 08:22:46 GMT
- Title: Learning Dexterous Object Handover
- Authors: Daniel Frau-Alfaro, Julio Castaño-Amoros, Santiago Puente, Pablo Gil, Roberto Calandra
- Abstract summary: In this work, we demonstrate the use of Reinforcement Learning (RL) for dexterous object handover between two multi-finger hands. Key to this task is the use of a novel reward function based on dual quaternions to minimize the rotation distance. The results demonstrate that the trained policy successfully performs this task, achieving a total success rate of 94% in the best-case scenario after 100 experiments.
- Score: 4.351636062759616
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Object handover is an important skill that we use daily when interacting with other humans. To deploy robots in collaborative settings, such as homes, receiving and handing over objects safely and efficiently becomes a crucial skill. In this work, we demonstrate the use of Reinforcement Learning (RL) for dexterous object handover between two multi-finger hands. Key to this task is the use of a novel reward function based on dual quaternions to minimize the rotation distance, which outperforms other rotation representations such as Euler angles and rotation matrices. The robustness of the trained policy is experimentally evaluated by testing w.r.t. objects that are not included in the training distribution, and perturbations during the handover process. The results demonstrate that the trained policy successfully performs this task, achieving a total success rate of 94% in the best-case scenario after 100 experiments, thereby showing the robustness of our policy with novel objects. In addition, the best-case performance of the policy decreases by only 13.8% when the other robot moves during the handover, proving that our policy is also robust to this type of perturbation, which is common in real-world object handovers.
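The abstract does not specify the exact form of the dual-quaternion reward, so the following is only a minimal sketch of one common formulation: a unit dual quaternion (qr, qd) encodes rotation and translation, and a dense reward penalizes the geodesic angle between the rotation parts of the current and target poses. All function names and the weight `w_rot` are hypothetical, not taken from the paper.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def dual_quat_from_pose(quat_wxyz, trans_xyz):
    """Build a unit dual quaternion (qr, qd) from a rotation quaternion and a translation."""
    qr = np.asarray(quat_wxyz, dtype=float)
    qr = qr / np.linalg.norm(qr)
    t = np.array([0.0, *trans_xyz])   # translation as a pure quaternion (0, t)
    qd = 0.5 * quat_mul(t, qr)        # dual part: qd = 0.5 * t ⊗ qr
    return qr, qd

def rotation_distance(qr_a, qr_b):
    """Geodesic angle between two rotations, robust to the q / -q double cover."""
    dot = abs(float(np.dot(qr_a, qr_b)))
    return 2.0 * np.arccos(np.clip(dot, 0.0, 1.0))

def handover_reward(current_pose, target_pose, w_rot=1.0):
    """Dense reward that grows (toward 0) as the rotation distance to the target shrinks."""
    qr_c, _ = dual_quat_from_pose(*current_pose)
    qr_t, _ = dual_quat_from_pose(*target_pose)
    return -w_rot * rotation_distance(qr_c, qr_t)
```

Because the quaternion dot product handles the q / -q sign ambiguity, this distance is continuous where Euler-angle differences are not, which is one plausible reason such a reward would outperform Euler or rotation-matrix representations.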
Related papers
- Tool-as-Interface: Learning Robot Policies from Human Tool Usage through Imitation Learning [16.394434999046293]
We propose a framework to transfer tool-use knowledge from humans to robots. We validate our approach on diverse real-world tasks, including meatball scooping, pan flipping, wine bottle balancing, and other complex tasks.
arXiv Detail & Related papers (2025-04-06T20:40:19Z) - FLEX: A Framework for Learning Robot-Agnostic Force-based Skills Involving Sustained Contact Object Manipulation [9.292150395779332]
We propose a novel framework for learning object-centric manipulation policies in force space. Our method simplifies the action space, reduces unnecessary exploration, and decreases simulation overhead. Our evaluations demonstrate that the method significantly outperforms baselines.
arXiv Detail & Related papers (2025-03-17T17:49:47Z) - Lessons from Learning to Spin "Pens" [51.9182692233916]
In this work, we push the boundaries of learning-based in-hand manipulation systems by demonstrating the capability to spin pen-like objects.
We first use reinforcement learning to train an oracle policy with privileged information and generate a high-fidelity trajectory dataset in simulation.
We then fine-tune the sensorimotor policy using these real-world trajectories to adapt it to the real world dynamics.
arXiv Detail & Related papers (2024-07-26T17:56:01Z) - Towards Open-World Mobile Manipulation in Homes: Lessons from the Neurips 2023 HomeRobot Open Vocabulary Mobile Manipulation Challenge [93.4434417387526]
We propose Open Vocabulary Mobile Manipulation as a key benchmark task for robotics.
We organized a NeurIPS 2023 competition featuring both simulation and real-world components to evaluate solutions to this task.
We detail the results and methodologies used, both in simulation and real-world settings.
arXiv Detail & Related papers (2024-07-09T15:15:01Z) - DexCatch: Learning to Catch Arbitrary Objects with Dexterous Hands [14.712280514097912]
We propose a Learning-based framework for Throwing-Catching tasks using dexterous hands.
Our method achieves a 73% success rate across 45 scenarios (diverse hand poses and objects).
In tasks where the object in hand faces sideways, an extremely unstable scenario due to the lack of support from the palm, our method still achieves a success rate of over 60%.
arXiv Detail & Related papers (2023-10-13T01:36:46Z) - DexTransfer: Real World Multi-fingered Dexterous Grasping with Minimal Human Demonstrations [51.87067543670535]
We propose a robot-learning system that can take a small number of human demonstrations and learn to grasp unseen object poses.
We train a dexterous grasping policy that takes the point clouds of the object as input and predicts continuous actions to grasp objects from different initial robot states.
The policy learned from our dataset can generalize well on unseen object poses in both simulation and the real world.
arXiv Detail & Related papers (2022-09-28T17:51:49Z) - Trajectory-based Reinforcement Learning of Non-prehensile Manipulation Skills for Semi-Autonomous Teleoperation [18.782289957834475]
We present a semi-autonomous teleoperation framework for a pick-and-place task using an RGB-D sensor.
A trajectory-based reinforcement learning is utilized for learning the non-prehensile manipulation to rearrange the objects.
We show that the proposed method outperforms manual keyboard control in terms of the time required for grasping.
arXiv Detail & Related papers (2021-09-27T14:27:28Z) - A Framework for Efficient Robotic Manipulation [79.10407063260473]
We show that, given only 10 demonstrations, a single robotic arm can learn sparse-reward manipulation policies from pixels.
arXiv Detail & Related papers (2020-12-14T22:18:39Z) - Reactive Human-to-Robot Handovers of Arbitrary Objects [57.845894608577495]
We present a vision-based system that enables human-to-robot handovers of unknown objects.
Our approach combines closed-loop motion planning with real-time, temporally-consistent grasp generation.
We demonstrate the generalizability, usability, and robustness of our approach on a novel benchmark set of 26 diverse household objects.
arXiv Detail & Related papers (2020-11-17T21:52:22Z) - Reinforcement Learning Experiments and Benchmark for Solving Robotic Reaching Tasks [0.0]
Reinforcement learning has been successfully applied to solving the reaching task with robotic arms.
It is shown that augmenting the reward signal with the Hindsight Experience Replay exploration technique increases the average return of off-policy agents.
arXiv Detail & Related papers (2020-11-11T14:00:49Z) - Learning Dexterous Grasping with Object-Centric Visual Affordances [86.49357517864937]
Dexterous robotic hands are appealing for their agility and human-like morphology.
We introduce an approach for learning dexterous grasping.
Our key idea is to embed an object-centric visual affordance model within a deep reinforcement learning loop.
arXiv Detail & Related papers (2020-09-03T04:00:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.