Posture-Informed Muscular Force Learning for Robust Hand Pressure Estimation
- URL: http://arxiv.org/abs/2410.23629v2
- Date: Fri, 01 Nov 2024 08:38:21 GMT
- Title: Posture-Informed Muscular Force Learning for Robust Hand Pressure Estimation
- Authors: Kyungjin Seo, Junghoon Seo, Hanseok Jeong, Sangpil Kim, Sang Ho Yoon
- Abstract summary: We present PiMForce, a novel framework that enhances hand pressure estimation.
Our approach utilizes detailed spatial information from 3D hand poses in conjunction with dynamic muscle activity from sEMG.
Our framework enables precise hand pressure estimation in complex and natural interaction scenarios.
- Score: 6.912016522494431
- Abstract: We present PiMForce, a novel framework that enhances hand pressure estimation by leveraging 3D hand posture information to augment forearm surface electromyography (sEMG) signals. Our approach utilizes detailed spatial information from 3D hand poses in conjunction with dynamic muscle activity from sEMG to enable accurate and robust whole-hand pressure measurements under diverse hand-object interactions. We also developed a multimodal data collection system that combines a pressure glove, an sEMG armband, and a markerless finger-tracking module. We created a comprehensive dataset from 21 participants, capturing synchronized data of hand posture, sEMG signals, and exerted hand pressure across various hand postures and hand-object interaction scenarios using our collection system. Our framework enables precise hand pressure estimation in complex and natural interaction scenarios. Our approach substantially mitigates the limitations of traditional sEMG-based or vision-based methods by integrating 3D hand posture information with sEMG signals. Video demos, data, and code are available online.
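This listing contains no code, but the core idea — conditioning sEMG-based pressure regression on 3D hand posture — can be sketched in a few lines. The module below is a minimal illustration, not the published PiMForce architecture: the joint count, sEMG channel count, window length, number of pressure regions, and the concatenation-based fusion are all assumptions made for the example.

```python
# Minimal sketch of posture-conditioned sEMG pressure regression.
# All dimensions, layer choices, and names are illustrative assumptions,
# not the published PiMForce architecture.
import torch
import torch.nn as nn

class PoseEMGFusion(nn.Module):
    def __init__(self, n_joints=21, emg_channels=8, emg_window=200, n_regions=17):
        super().__init__()
        # Encode 3D hand posture (n_joints x 3 keypoints) into a spatial context vector.
        self.pose_enc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_joints * 3, 128), nn.ReLU(),
            nn.Linear(128, 64),
        )
        # Encode a window of raw sEMG (channels x time) with 1D convolutions.
        self.emg_enc = nn.Sequential(
            nn.Conv1d(emg_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # Fuse both modalities and regress per-region hand pressure.
        self.head = nn.Sequential(
            nn.Linear(64 + 64, 128), nn.ReLU(),
            nn.Linear(128, n_regions),
        )

    def forward(self, pose, emg):
        # pose: (B, n_joints, 3); emg: (B, emg_channels, emg_window)
        z = torch.cat([self.pose_enc(pose), self.emg_enc(emg)], dim=-1)
        return self.head(z)

model = PoseEMGFusion()
pressure = model(torch.randn(4, 21, 3), torch.randn(4, 8, 200))
print(pressure.shape)  # torch.Size([4, 17])
```

Even this toy makes the abstract's argument visible: similar sEMG windows can correspond to different pressure patterns depending on grip, so the pose branch supplies the spatial context that disambiguates them.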
Related papers
- EgoPressure: A Dataset for Hand Pressure and Pose Estimation in Egocentric Vision [69.1005706608681]
We introduce EgoPressure, a novel dataset of touch contact and pressure interaction from an egocentric perspective.
EgoPressure comprises 5.0 hours of touch contact and pressure interaction from 21 participants captured by a moving egocentric camera and 7 stationary Kinect cameras.
arXiv Detail & Related papers (2024-09-03T18:53:32Z) - FORS-EMG: A Novel sEMG Dataset for Hand Gesture Recognition Across Multiple Forearm Orientations [1.444899524297657]
Surface electromyography (sEMG) signals hold great potential for gesture recognition research and the development of robust prosthetic hands.
The sEMG signal is, however, affected by physiological and dynamic factors such as forearm orientation, forearm displacement, and limb position.
In this paper, we propose an sEMG dataset to evaluate common daily-living hand gestures performed in three forearm orientations.
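For context on how such recordings are typically consumed, the sketch below shows generic sliding-window sEMG feature extraction (per-channel root mean square and mean absolute value), a standard front end for sEMG gesture classifiers. The sampling rate, window and step sizes, and channel count are assumptions for illustration, not FORS-EMG specifics.

```python
# Generic sliding-window sEMG feature extraction (RMS and MAV per channel),
# a common preprocessing step in gesture recognition. Window/step sizes and
# channel count are illustrative assumptions, not FORS-EMG specifics.
import numpy as np

def emg_features(emg, win=200, step=100):
    """emg: (channels, samples) array -> (n_windows, 2*channels) features."""
    feats = []
    for start in range(0, emg.shape[1] - win + 1, step):
        w = emg[:, start:start + win]
        rms = np.sqrt(np.mean(w ** 2, axis=1))   # root mean square
        mav = np.mean(np.abs(w), axis=1)         # mean absolute value
        feats.append(np.concatenate([rms, mav]))
    return np.stack(feats)

X = emg_features(np.random.randn(8, 2000))  # 8 channels, 2 s at 1 kHz (assumed)
print(X.shape)  # (19, 16)
```

Features like these are exactly what forearm-orientation changes perturb, which is why datasets spanning multiple orientations matter for robust recognition.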
arXiv Detail & Related papers (2024-09-03T14:23:06Z) - HUP-3D: A 3D multi-view synthetic dataset for assisted-egocentric hand-ultrasound pose estimation [11.876066932162873]
HUP-3D is a 3D multiview synthetic dataset for hand-ultrasound probe pose estimation.
Our dataset consists of over 31k sets of movements.
Our approach includes an image-rendering concept, enhancing diversity with various hand and arm textures.
arXiv Detail & Related papers (2024-07-12T12:25:42Z) - Learning Visuotactile Skills with Two Multifingered Hands [80.99370364907278]
We explore learning from human demonstrations using a bimanual system with multifingered hands and visuotactile data.
Our results mark a promising step forward in bimanual multifingered manipulation from visuotactile data.
arXiv Detail & Related papers (2024-04-25T17:59:41Z) - HMP: Hand Motion Priors for Pose and Shape Estimation from Video [52.39020275278984]
We develop a generative motion prior specific to hands, trained on the AMASS dataset, which features diverse and high-quality hand motions.
Our integration of a robust motion prior significantly enhances performance, especially in occluded scenarios.
We demonstrate our method's efficacy via qualitative and quantitative evaluations on the HO3D and DexYCB datasets.
arXiv Detail & Related papers (2023-12-27T22:35:33Z) - Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps [100.72245315180433]
We present a reconfigurable data glove design to capture different modes of human hand-object interactions.
The glove operates in three modes for various downstream tasks with distinct features.
We evaluate the system's three modes by (i) recording hand gestures and associated forces, (ii) improving manipulation fluency in VR, and (iii) producing realistic simulation effects of various tool uses.
arXiv Detail & Related papers (2023-01-14T05:35:50Z) - Spatial-Temporal Parallel Transformer for Arm-Hand Dynamic Estimation [7.043124227237034]
We propose an approach to estimate arm and hand dynamics from monocular video by utilizing the relationship between arm and hand.
By integrating a 2D hand pose estimation model and a 3D human pose estimation model, the proposed method can produce plausible arm and hand dynamics from monocular video.
arXiv Detail & Related papers (2022-03-30T10:51:41Z) - MM-Hand: 3D-Aware Multi-Modal Guided Hand Generative Network for 3D Hand Pose Synthesis [81.40640219844197]
Estimating the 3D hand pose from a monocular RGB image is important but challenging.
A solution is training on large-scale RGB hand images with accurate 3D hand keypoint annotations.
We have developed a learning-based approach to synthesize realistic, diverse, and 3D pose-preserving hand images.
arXiv Detail & Related papers (2020-10-02T18:27:34Z) - Body2Hands: Learning to Infer 3D Hands from Conversational Gesture Body Dynamics [87.17505994436308]
We build upon the insight that body motion and hand gestures are strongly correlated in non-verbal communication settings.
We formulate the learning of this prior as a prediction task of 3D hand shape over time given body motion input alone.
Our hand prediction model produces convincing 3D hand gestures given only the 3D motion of the speaker's arms as input.
arXiv Detail & Related papers (2020-07-23T22:58:15Z)
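To make the Body2Hands formulation concrete, here is a minimal sketch of the prediction task it describes: a sequence model mapping body (arm) motion to per-frame hand pose parameters. The recurrent architecture, feature dimensions, and class names are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of a body-motion-to-hand-pose predictor in the spirit of
# Body2Hands: a recurrent model maps a sequence of arm joint rotations to
# per-frame hand pose parameters. All shapes and layers are assumptions.
import torch
import torch.nn as nn

class BodyToHands(nn.Module):
    def __init__(self, arm_dim=6 * 6, hand_dim=15 * 6, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(arm_dim, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, hand_dim)  # per-frame hand pose parameters

    def forward(self, arm_seq):
        # arm_seq: (B, T, arm_dim) -> (B, T, hand_dim)
        h, _ = self.rnn(arm_seq)
        return self.out(h)

model = BodyToHands()
hands = model(torch.randn(2, 120, 36))  # 2 clips, 120 frames each
print(hands.shape)  # torch.Size([2, 120, 90])
```

In practice such a predictor would be trained on paired body-and-hand motion capture and evaluated on held-out speakers; the sketch only fixes the input-to-output shape of the task.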