Kinematically Constrained Human-like Bimanual Robot-to-Human Handovers
- URL: http://arxiv.org/abs/2402.14525v1
- Date: Thu, 22 Feb 2024 13:19:02 GMT
- Title: Kinematically Constrained Human-like Bimanual Robot-to-Human Handovers
- Authors: Yasemin Göksu, Antonio De Almeida Correia, Vignesh Prasad, Alap
Kshirsagar, Dorothea Koert, Jan Peters, Georgia Chalvatzaki
- Abstract summary: Bimanual handovers are crucial for transferring large, deformable or delicate objects.
This paper proposes a framework for generating kinematically constrained human-like bimanual robot motions.
- Score: 19.052211315080044
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Bimanual handovers are crucial for transferring large, deformable or delicate
objects. This paper proposes a framework for generating kinematically
constrained human-like bimanual robot motions to ensure seamless and natural
robot-to-human object handovers. We use a Hidden Semi-Markov Model (HSMM) to
reactively generate suitable response trajectories for a robot based on the
observed human partner's motion. The trajectories are adapted with task space
constraints to ensure accurate handovers. Results from a pilot study show that
our approach is perceived as more human-like compared to a baseline Inverse
Kinematics approach.
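The core idea of the abstract above, reactively conditioning the robot's motion on the observed human partner's state and then adapting the result with task-space constraints, can be illustrated with a deliberately simplified sketch. Everything below (the 1-D per-state Gaussians, the three-state layout, the clamp range) is a hand-picked stand-in for illustration, not the paper's learned HSMM, which additionally models state durations:

```python
import numpy as np

# Toy per-state joint Gaussians over [human_pos, robot_pos] (1-D each for
# brevity). In the paper these distributions are learned from demonstrations;
# here they are hand-picked stand-ins.
means = np.array([[0.0, 1.0],   # state 0: human far, robot retracted
                  [0.5, 0.6],   # state 1: approach
                  [1.0, 0.2]])  # state 2: handover
covs = np.stack([np.array([[0.05, 0.03], [0.03, 0.05]])] * 3)

def condition_robot_on_human(h):
    """Pick the most likely state for human observation h, then apply
    Gaussian conditioning p(robot | human = h) within that state."""
    # Log-likelihood of h under each state's human-position marginal.
    ll = [-0.5 * (h - m[0]) ** 2 / c[0, 0] for m, c in zip(means, covs)]
    k = int(np.argmax(ll))
    m, c = means[k], covs[k]
    # Conditional mean: mu_r + Sigma_rh / Sigma_hh * (h - mu_h)
    return m[1] + c[1, 0] / c[0, 0] * (h - m[0])

def constrain(x, lo=0.0, hi=1.0):
    """Task-space constraint stand-in: clamp the command to a safe range."""
    return float(np.clip(x, lo, hi))

# Reactive response: as the human hand advances (0 -> 1), the robot's
# commanded position moves from retracted toward the handover point.
traj = [constrain(condition_robot_on_human(h)) for h in np.linspace(0.0, 1.0, 5)]
```

The constraint step here is only a clamp; in the actual framework the generated trajectory is adapted with full task-space constraints to keep the handover pose accurate.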
Related papers
- 3HANDS Dataset: Learning from Humans for Generating Naturalistic Handovers with Supernumerary Robotic Limbs [64.99122701615151]
Supernumerary robotic limbs (SRLs) are robotic structures integrated closely with the user's body.
We present 3HANDS, a novel dataset of object handover interactions between a participant performing a daily activity and another participant enacting a hip-mounted SRL in a naturalistic manner.
We present three models: one that generates naturalistic handover trajectories, one that determines the appropriate handover endpoints, and a third that predicts the moment to initiate a handover.
arXiv Detail & Related papers (2025-03-06T17:23:55Z) - Learning to Transfer Human Hand Skills for Robot Manipulations [12.797862020095856]
We present a method for teaching dexterous manipulation tasks to robots from human hand motion demonstrations.
Our approach learns a joint motion manifold that maps human hand movements, robot hand actions, and object movements in 3D, enabling us to infer one motion from others.
arXiv Detail & Related papers (2025-01-07T22:33:47Z) - Real-Time Dynamic Robot-Assisted Hand-Object Interaction via Motion Primitives [45.256762954338704]
We propose an approach to enhancing physical HRI with a focus on dynamic robot-assisted hand-object interaction.
We employ a transformer-based algorithm to perform real-time 3D modeling of human hands from single RGB images.
The robot's action implementation is dynamically fine-tuned using the continuously updated 3D hand models.
arXiv Detail & Related papers (2024-05-29T21:20:16Z) - Learning Multimodal Latent Dynamics for Human-Robot Interaction [19.803547418450236]
This article presents a method for learning well-coordinated Human-Robot Interaction (HRI) from Human-Human Interactions (HHI).
We devise a hybrid approach using Hidden Markov Models (HMMs) as the latent space priors for a Variational Autoencoder to model a joint distribution over the interacting agents.
We find that users perceive our method as more human-like, timely, and accurate, and prefer it over other baselines.
arXiv Detail & Related papers (2023-11-27T23:56:59Z) - ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space [9.806227900768926]
This paper introduces a novel deep-learning approach for human-to-robot motion retargeting.
Our method does not require paired human-to-robot data, which facilitates its translation to new robots.
Our model outperforms existing works on human-to-robot motion similarity in both efficiency and precision.
arXiv Detail & Related papers (2023-09-11T08:55:04Z) - Human-Robot Skill Transfer with Enhanced Compliance via Dynamic Movement
Primitives [1.7901837062462316]
We introduce a systematic method to extract the dynamic features from human demonstration to auto-tune the parameters in the Dynamic Movement Primitives framework.
Our method was implemented on a real human-robot setup to extract human dynamic features, which were then used to regenerate robot trajectories under both LfD and RL.
arXiv Detail & Related papers (2023-04-12T08:48:28Z) - Learning Human-to-Robot Handovers from Point Clouds [63.18127198174958]
We propose the first framework to learn control policies for vision-based human-to-robot handovers.
We show significant performance gains over baselines on a simulation benchmark, sim-to-sim transfer and sim-to-real transfer.
arXiv Detail & Related papers (2023-03-30T17:58:36Z) - Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Existing human-robot handover pipelines rarely plan motions that take human comfort into account.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z) - Synthesis and Execution of Communicative Robotic Movements with
Generative Adversarial Networks [59.098560311521034]
We focus on transferring to two different robotic platforms the same kinematic modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z) - Show Me What You Can Do: Capability Calibration on Reachable Workspace
for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z) - Careful with That! Observation of Human Movements to Estimate Objects
Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z) - Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.