Simultaneous Estimation of Manipulation Skill and Hand Grasp Force from Forearm Ultrasound Images
- URL: http://arxiv.org/abs/2502.00275v1
- Date: Sat, 01 Feb 2025 02:23:02 GMT
- Title: Simultaneous Estimation of Manipulation Skill and Hand Grasp Force from Forearm Ultrasound Images
- Authors: Keshav Bimbraw, Srikar Nekkanti, Daniel B. Tiller II, Mihir Deshmukh, Berk Calli, Robert D. Howe, Haichong K. Zhang
- Abstract summary: We present a method for simultaneously estimating manipulation skills and applied hand force using forearm ultrasound data.
Our models achieved an average classification accuracy of 94.87% ± 10.16% for manipulation skills and an average root mean square error (RMSE) of 0.51 ± 0.19 N for force estimation.
- Abstract: Accurate estimation of human hand configuration and the forces they exert is critical for effective teleoperation and skill transfer in robotic manipulation. A deeper understanding of human interactions with objects can further enhance teleoperation performance. To address this need, researchers have explored methods to capture and translate human manipulation skills and applied forces to robotic systems. Among these, biosignal-based approaches, particularly those using forearm ultrasound data, have shown significant potential for estimating hand movements and finger forces. In this study, we present a method for simultaneously estimating manipulation skills and applied hand force using forearm ultrasound data. Data collected from seven participants were used to train deep learning models for classifying manipulation skills and estimating grasp force. Our models achieved an average classification accuracy of 94.87% ± 10.16% for manipulation skills and an average root mean square error (RMSE) of 0.51 ± 0.19 N for force estimation, as evaluated using five-fold cross-validation. These results highlight the effectiveness of forearm ultrasound in advancing human-machine interfacing and robotic teleoperation for complex manipulation tasks. This work enables new and effective possibilities for human-robot skill transfer and tele-manipulation, bridging the gap between human dexterity and robotic control.
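The evaluation protocol described in the abstract (per-fold metrics averaged over five folds, reported as mean ± standard deviation) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the force labels and predictions below are synthetic placeholders, and the helper names (`kfold_indices`, `rmse`) are hypothetical.

```python
import numpy as np

def kfold_indices(n_samples, n_folds=5, seed=0):
    """Shuffle sample indices and split them into n_folds disjoint test folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), n_folds)

def rmse(y_true, y_pred):
    """Root mean square error between true and predicted grasp forces (Newtons)."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Synthetic stand-in for ground-truth forces and a model's predictions.
rng = np.random.default_rng(42)
forces = rng.uniform(0.0, 10.0, size=200)           # true grasp force, N
preds = forces + rng.normal(0.0, 0.5, size=200)     # predictions with ~0.5 N error

fold_rmse = []
for test_idx in kfold_indices(len(forces), n_folds=5):
    # In the paper's protocol a model is trained on the remaining folds;
    # here we only score the held-out predictions.
    fold_rmse.append(rmse(forces[test_idx], preds[test_idx]))

print(f"RMSE: {np.mean(fold_rmse):.2f} +/- {np.std(fold_rmse):.2f} N")
```

The same loop would accumulate per-fold classification accuracy for the skill labels; reporting mean ± standard deviation across folds is what produces figures of the form 94.87% ± 10.16%.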
Related papers
- The Role of Functional Muscle Networks in Improving Hand Gesture Perception for Human-Machine Interfaces [2.367412330421982]
Surface electromyography (sEMG) has been explored for its rich informational context and accessibility.
This paper proposes the decoding of muscle synchronization rather than individual muscle activation.
It achieves an accuracy of 85.1%, demonstrating improved performance compared to existing methods.
arXiv Detail & Related papers (2024-08-05T15:17:34Z)
- Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful machine learning algorithm for a robot to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z)
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z)
- Uncertainty-aware Self-supervised Learning for Cross-domain Technical Skill Assessment in Robot-assisted Surgery [14.145726158070522]
We propose a novel approach for skill assessment by transferring domain knowledge from labeled kinematic data to unlabeled data.
Our method offers a significant advantage over other existing works as it does not require manual labeling or prior knowledge of the surgical training task for robot-assisted surgery.
arXiv Detail & Related papers (2023-04-28T01:52:18Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, the effects of adversarial training do not pose a fair trade-off.
This work revisits the robustness-accuracy trade-off in robot learning by analyzing if recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
- Learn and Transfer Knowledge of Preferred Assistance Strategies in Semi-autonomous Telemanipulation [16.28164706104047]
We develop a novel preference-aware assistance knowledge learning approach.
An assistance preference model learns what assistance is preferred by a human.
We also develop knowledge transfer methods to transfer the preference knowledge across different robot hand structures.
arXiv Detail & Related papers (2020-03-07T04:49:57Z)
- Human-robot co-manipulation of extended objects: Data-driven models and control from analysis of human-human dyads [2.7036498789349244]
We use data from human-human dyad experiments to determine motion intent, which we use for a physical human-robot co-manipulation task.
We develop a deep neural network based on motion data from human-human trials to predict human intent based on past motion.
arXiv Detail & Related papers (2020-01-03T21:23:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.