Real-Time Dynamic Robot-Assisted Hand-Object Interaction via Motion Primitives
- URL: http://arxiv.org/abs/2405.19531v1
- Date: Wed, 29 May 2024 21:20:16 GMT
- Title: Real-Time Dynamic Robot-Assisted Hand-Object Interaction via Motion Primitives
- Authors: Mingqi Yuan, Huijiang Wang, Kai-Fung Chu, Fumiya Iida, Bo Li, Wenjun Zeng
- Abstract summary: We propose an approach to enhancing physical HRI with a focus on dynamic robot-assisted hand-object interaction.
We employ a transformer-based algorithm to perform real-time 3D modeling of human hands from single RGB images.
The robot's action implementation is dynamically fine-tuned using the continuously updated 3D hand models.
- Score: 45.256762954338704
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in artificial intelligence (AI) have been propelling the evolution of human-robot interaction (HRI) technologies. However, significant challenges remain in achieving seamless interactions, particularly in tasks requiring physical contact with humans. These challenges arise from the need for accurate real-time perception of human actions, adaptive control algorithms for robots, and the effective coordination between human and robotic movements. In this paper, we propose an approach to enhancing physical HRI with a focus on dynamic robot-assisted hand-object interaction (HOI). Our methodology integrates hand pose estimation, adaptive robot control, and motion primitives to facilitate human-robot collaboration. Specifically, we employ a transformer-based algorithm to perform real-time 3D modeling of human hands from single RGB images, based on which a motion primitives model (MPM) is designed to translate human hand motions into robotic actions. The robot's action implementation is dynamically fine-tuned using the continuously updated 3D hand models. Experimental validations, including a ring-wearing task, demonstrate the system's effectiveness in adapting to real-time movements and assisting in precise task executions.
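As a concrete illustration of the pipeline the abstract describes, the sketch below wires per-frame 3D hand pose estimation into a motion primitives model that maps the hand state to a robot action. Everything here is a hedged assumption: the class names, the 21-keypoint hand representation, and the distance-threshold selection rule are illustrative stand-ins, not the authors' implementation.

```python
"""Hypothetical sketch of the perception-to-action loop from the abstract:
per-frame 3D hand pose estimation feeds a motion primitives model (MPM)
that maps hand motion to a robot action. All names, shapes, and thresholds
are illustrative assumptions, not the authors' code."""
import numpy as np

class StubHandEstimator:
    """Stands in for the transformer-based 3D hand model (21 keypoints)."""
    def predict(self, rgb_image):
        return np.random.rand(21, 3)  # (x, y, z) per keypoint, in meters

class MotionPrimitivesModel:
    """Maps the estimated hand state to one of a few discrete primitives."""
    def select(self, keypoints):
        wrist = keypoints[0]
        # Toy rule: approach when the hand is far, fine-adjust when close.
        return ("approach", wrist) if np.linalg.norm(wrist) > 0.5 else ("adjust", wrist)

def interaction_step(frame, estimator, mpm):
    hand = estimator.predict(frame)   # real-time 3D hand modeling
    name, target = mpm.select(hand)   # hand motion -> robot primitive
    return name, target               # a real system would execute this on the robot

if __name__ == "__main__":
    estimator, mpm = StubHandEstimator(), MotionPrimitivesModel()
    for _ in range(3):                # three simulated camera frames
        frame = np.zeros((480, 640, 3), dtype=np.uint8)  # single RGB image
        print(interaction_step(frame, estimator, mpm))
```

In the actual system, the transformer-based estimator and the learned MPM would replace these stubs, and the selected primitive would be streamed to the robot controller at camera rate so the action is continuously fine-tuned.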
Related papers
- Interactive Multi-Robot Flocking with Gesture Responsiveness and Musical Accompaniment [0.7659052547635159]
This work presents a compelling multi-robot task whose main aim is to enthrall and interest humans.
In this task, the goal is for a human to be drawn to move alongside and participate in a dynamic, expressive robot flock.
Towards this aim, the research team created algorithms for robot movements and engaging interaction modes such as gestures and sound.
arXiv Detail & Related papers (2024-03-30T18:16:28Z) - Robot Interaction Behavior Generation based on Social Motion Forecasting for Human-Robot Interaction [9.806227900768926]
We propose to model social motion forecasting in a shared human-robot representation space.
ECHO operates in the aforementioned shared space to predict the future motions of the agents encountered in social scenarios.
We evaluate our model in multi-person and human-robot motion forecasting tasks and obtain state-of-the-art performance by a large margin.
arXiv Detail & Related papers (2024-02-07T11:37:14Z) - Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z) - ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space [9.806227900768926]
This paper introduces a novel deep-learning approach for human-to-robot motion retargeting.
Our method does not require paired human-to-robot data, which facilitates its translation to new robots.
Our model outperforms existing works on human-to-robot motion similarity in both efficiency and precision.
arXiv Detail & Related papers (2023-09-11T08:55:04Z) - Human-Robot Skill Transfer with Enhanced Compliance via Dynamic Movement Primitives [1.7901837062462316]
We introduce a systematic method to extract dynamic features from human demonstrations and auto-tune the parameters of the Dynamic Movement Primitives (DMP) framework (a minimal DMP sketch follows this list).
Our method was implemented in an actual human-robot setup to extract human dynamic features and regenerate the robot trajectories following both LfD and RL.
arXiv Detail & Related papers (2023-04-12T08:48:28Z) - HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z) - CoGrasp: 6-DoF Grasp Generation for Human-Robot Collaboration [0.0]
We propose a novel, deep neural network-based method called CoGrasp that generates human-aware robot grasps.
In real robot experiments, our method achieves about 88% success rate in producing stable grasps.
Our approach enables a safe, natural, and socially aware human-robot co-grasping experience.
arXiv Detail & Related papers (2022-10-06T19:23:25Z) - Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Existing human-robot handover pipelines typically do not plan motions that take human comfort into account.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z) - Synthesis and Execution of Communicative Robotic Movements with Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer to two different robotic platforms the same kinematics modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile of the robots' end-effector, inspired by what humans do when transporting objects with different characteristics (a classic minimum-jerk baseline is sketched after this list).
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z) - Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the robot's true reachable workspace.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
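The skill-transfer entry above builds on Dynamic Movement Primitives. For readers unfamiliar with the framework, below is a minimal textbook-style rollout of a discrete DMP (the Ijspeert-style formulation); the gains and basis-function heuristics are common defaults, not that paper's auto-tuned values.

```python
"""Minimal sketch of a discrete Dynamic Movement Primitive (DMP).
Gains and basis widths are common textbook defaults, not the values
auto-tuned in the skill-transfer paper above."""
import numpy as np

def rollout_dmp(y0, g, weights, tau=1.0, dt=0.01, alpha=25.0, beta=25.0 / 4, ax=1.0):
    n = len(weights)
    c = np.exp(-ax * np.linspace(0, 1, n))  # basis centers along the phase
    h = n / c                               # basis widths (common heuristic)
    y, z, x = y0, 0.0, 1.0                  # position, scaled velocity, phase
    traj = []
    for _ in range(int(tau / dt)):
        psi = np.exp(-h * (x - c) ** 2)                   # Gaussian basis activations
        f = (psi @ weights) / psi.sum() * x * (g - y0)    # learned forcing term
        zdot = (alpha * (beta * (g - y) - z) + f) / tau   # transformation system
        ydot = z / tau
        x += (-ax * x / tau) * dt                         # canonical system (phase decay)
        z += zdot * dt
        y += ydot * dt
        traj.append(y)
    return np.array(traj)

# With zero weights the forcing term vanishes and the DMP reduces to a
# critically damped spring-damper that converges smoothly to the goal.
print(rollout_dmp(y0=0.0, g=1.0, weights=np.zeros(10))[-1])  # ~1.0
```

Learned (or, in the paper above, auto-tuned) weights shape the transient into a demonstrated motion while the convergence-to-goal guarantee is preserved.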
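The GAN-based entry above generates human-like end-effector velocity profiles from human kinematics examples. As a point of comparison, the classic closed-form model of human point-to-point motion is the minimum-jerk profile (Flash & Hogan, 1985), sketched below; it is a standard baseline, not that paper's method.

```python
"""Minimum-jerk velocity profile: a classic closed-form model of human-like
end-effector motion, shown as a baseline for the velocity modulation the
GAN entry above learns from data."""
import numpy as np

def min_jerk_velocity(distance, duration, steps=100):
    t = np.linspace(0.0, 1.0, steps)  # normalized time tau = t / T
    # v(tau) = (d / T) * (30 tau^2 - 60 tau^3 + 30 tau^4): bell-shaped, zero at both ends
    return distance / duration * (30 * t**2 - 60 * t**3 + 30 * t**4)

v = min_jerk_velocity(distance=0.4, duration=2.0)  # a 0.4 m reach over 2 s
print(round(v.max(), 3))  # peak speed = 1.875 * d/T = 0.375 m/s at the midpoint
```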