Dynamic Hand Gesture-Featured Human Motor Adaptation in Tool Delivery using Voice Recognition
- URL: http://arxiv.org/abs/2309.11368v1
- Date: Wed, 20 Sep 2023 14:51:09 GMT
- Title: Dynamic Hand Gesture-Featured Human Motor Adaptation in Tool Delivery using Voice Recognition
- Authors: Haolin Fei, Stefano Tedeschi, Yanpei Huang, Andrew Kennedy and Ziwei Wang
- Abstract summary: This paper introduces an innovative human-robot collaborative framework.
It seamlessly integrates hand gesture and dynamic movement recognition, voice recognition, and a switchable control adaptation strategy.
Experimental results demonstrate superior performance in hand gesture recognition.
- Score: 5.13619372598999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human-robot collaboration has brought users higher efficiency in
interactive tasks. Nevertheless, most collaborative schemes rely on complicated
human-machine interfaces, which can lack the intuitiveness of natural limb
control. Human intent should also be understood with low training-data
requirements. In response to these challenges, this paper introduces an
innovative human-robot collaborative framework that seamlessly integrates hand
gesture and dynamic movement recognition, voice recognition, and a switchable
control adaptation strategy. Together, these modules provide a user-friendly
approach that enables the robot to deliver tools as the user needs them,
especially when the user is working with both hands. Users can therefore focus
on task execution, without additional training in the use of human-machine
interfaces, while the robot interprets their intuitive gestures. The proposed
multimodal interaction framework runs on a UR5e robot platform equipped with a
RealSense D435i camera, and its effectiveness is assessed through a
circuit-board soldering task. Experimental results demonstrate superior hand
gesture recognition performance: the static hand gesture recognition module
achieves 94.3% accuracy, while the dynamic motion recognition module reaches
97.6%. Compared with solo human manipulation, the proposed approach enables
more efficient tool delivery without significantly diverting the user's
attention from the task.
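The abstract names the framework's three modules (hand gesture and dynamic
movement recognition, voice recognition, and a switchable control adaptation
strategy) but this listing carries no implementation detail. The Python sketch
below is a minimal illustration of how such module outputs might be fused into
a switchable tool-delivery command; the gesture labels, voice keywords, and the
two control modes are invented for the example and are not from the paper.

```python
"""Illustrative fusion of gesture and voice outputs (not the paper's code)."""
from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    FOLLOW = auto()   # robot stays passive / tracks the user, no delivery
    DELIVER = auto()  # robot hands over the requested tool


@dataclass
class Command:
    mode: Mode
    tool: str | None = None


# Assumed mapping from recognized static gestures to tool requests.
GESTURE_TO_TOOL = {
    "two_fingers": "solder_wire",
    "open_palm": "soldering_iron",
}

VOICE_CONFIRM = {"yes", "okay", "confirm"}
VOICE_CANCEL = {"no", "stop", "cancel"}


def fuse(gesture: str | None, transcript: str) -> Command:
    """Combine one gesture label and one voice transcript into a command.

    Voice acts as a confirmation channel: a recognized gesture triggers
    delivery only when the user verbally confirms, so spurious gestures
    made while both hands are busy do not move the robot.
    """
    words = set(transcript.lower().split())
    if words & VOICE_CANCEL:
        return Command(Mode.FOLLOW)
    tool = GESTURE_TO_TOOL.get(gesture or "")
    if tool and words & VOICE_CONFIRM:
        return Command(Mode.DELIVER, tool)
    return Command(Mode.FOLLOW)


if __name__ == "__main__":
    print(fuse("two_fingers", "yes please"))  # switches to DELIVER with solder_wire
    print(fuse("two_fingers", "stop please")) # stays in FOLLOW
```

Gating delivery on a verbal confirmation is one plausible way to keep the robot
from reacting to incidental hand motion during the two-handed soldering task
described above.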
Related papers
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z)
- Real-Time Dynamic Robot-Assisted Hand-Object Interaction via Motion Primitives [45.256762954338704]
We propose an approach to enhancing physical HRI with a focus on dynamic robot-assisted hand-object interaction.
We employ a transformer-based algorithm to perform real-time 3D modeling of human hands from single RGB images.
The robot's action implementation is dynamically fine-tuned using the continuously updated 3D hand models (a minimal illustration of this idea appears after this list).
arXiv Detail & Related papers (2024-05-29T21:20:16Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
Experimental results are presented with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Body Gesture Recognition to Control a Social Robot [5.557794184787908]
We propose a gesture-based language to allow humans to interact with robots using their bodies in a natural way.
We create a new gesture detection model using neural networks, trained on a custom dataset of humans performing a set of body gestures.
arXiv Detail & Related papers (2022-06-15T13:49:22Z)
- Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, adversarial training does not offer a favorable robustness-accuracy trade-off.
This work revisits that trade-off by analyzing whether recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z)
- Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Planning motions that take human comfort into account has typically not been part of the human-robot handover process.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Non-invasive Cognitive-level Human Interfacing for the Robotic Restoration of Reaching & Grasping [5.985098076571228]
We present a robotic system for human augmentation, capable of actuating the user's arm and fingers for them.
We combine wearable eye tracking, the visual context of the environment and the structural grammar of human actions to create a cognitive-level assistive robotic setup.
arXiv Detail & Related papers (2021-02-25T16:32:04Z)
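For the "Real-Time Dynamic Robot-Assisted Hand-Object Interaction via Motion
Primitives" entry above: as a hedged sketch only (not that paper's published
method), the snippet below shows the standard core of a motion primitive, a
critically damped point attractor, whose goal is re-set on every control tick
from the latest hand-position estimate. The gains, time step, and the fake hand
trajectory standing in for a transformer-based RGB hand-pose estimator are all
assumptions.

```python
"""Goal re-targeting of a point attractor from a streamed hand position."""
import numpy as np


def attractor_step(x, v, goal, k=25.0, dt=0.01):
    """One Euler step of x'' = k*(goal - x) - 2*sqrt(k)*x' (critically damped)."""
    a = k * (goal - x) - 2.0 * np.sqrt(k) * v
    v = v + a * dt
    x = x + v * dt
    return x, v


x = np.zeros(3)  # end-effector position (arbitrary base frame)
v = np.zeros(3)  # end-effector velocity
for t in range(500):
    # Fake hand track; a real system would substitute a per-frame pose estimate.
    hand = np.array([0.4, 0.1 * np.sin(0.01 * t), 0.3])
    x, v = attractor_step(x, v, goal=hand)  # goal updated every tick
print(np.round(x, 3))  # ends close to the last observed hand position
```

Because the attractor is a smooth second-order system, re-setting its goal at
every tick still yields smooth motion toward the moving target, which is the
property that makes this kind of online fine-tuning plausible around a moving
hand.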
This list is automatically generated from the titles and abstracts of the papers on this site.