Recognizing Hand Use and Hand Role at Home After Stroke from Egocentric Video
- URL: http://arxiv.org/abs/2207.08920v2
- Date: Thu, 21 Jul 2022 16:02:22 GMT
- Authors: Meng-Fen Tsai, Rosalie H. Wang, and José Zariffa
- Abstract summary: Egocentric video can capture hand-object interactions in context, as well as show how more-affected hands are used.
The objective is to use artificial intelligence-based computer vision to classify hand use and hand role from egocentric videos recorded at home after stroke.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Introduction: Hand function is a central determinant of independence after
stroke. Measuring hand use in the home environment is necessary to evaluate the
impact of new interventions, and calls for novel wearable technologies.
Egocentric video can capture hand-object interactions in context, as well as
show how more-affected hands are used during bilateral tasks (for stabilization
or manipulation). Automated methods are required to extract this information.
Objective: To use artificial intelligence-based computer vision to classify
hand use and hand role from egocentric videos recorded at home after stroke.
Methods: Twenty-one stroke survivors participated in the study. A random forest
classifier, a SlowFast neural network, and the Hand Object Detector neural
network were applied to identify hand use and hand role at home.
Leave-One-Subject-Out-Cross-Validation (LOSOCV) was used to evaluate the
performance of the three models. Between-group differences of the models were
calculated based on the Matthews correlation coefficient (MCC). Results: For
hand use detection, the Hand Object Detector had significantly higher
performance than the other models. The macro average MCCs using this model in
the LOSOCV were 0.50 ± 0.23 for the more-affected hands and 0.58 ± 0.18 for
the less-affected hands. Hand role classification had macro average MCCs in the
LOSOCV that were close to zero for all models. Conclusion: Using egocentric
video to capture the hand use of stroke survivors at home is feasible. Pose
estimation to track finger movements may be beneficial to classifying hand
roles in the future.
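The Methods above pair Leave-One-Subject-Out cross-validation (LOSOCV) with the Matthews correlation coefficient, defined in the binary case as MCC = (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)). Below is a minimal sketch of how such an evaluation could look in scikit-learn, with a random forest standing in for the paper's three models; the feature extraction, the exact macro-averaging convention, and the names features, labels, and subject_ids are all assumptions for illustration, not the authors' released code.

```python
# Sketch: Leave-One-Subject-Out cross-validation with macro-averaged
# Matthews correlation coefficient (MCC). Inputs are hypothetical:
# features (n_samples, n_features), labels (n_samples,), subject_ids
# (n_samples,) identifying which participant each sample came from.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import LeaveOneGroupOut

def macro_mcc(y_true, y_pred, classes):
    # One plausible reading of "macro average MCC": average the
    # one-vs-rest MCC over classes (the abstract does not spell this out).
    return np.mean([matthews_corrcoef(y_true == c, y_pred == c) for c in classes])

def losocv_macro_mcc(features, labels, subject_ids):
    """Return mean and std of per-subject macro MCCs (the "0.50 ± 0.23" style numbers)."""
    classes = np.unique(labels)
    scores = []
    # Each fold holds out every sample from exactly one subject.
    for train_idx, test_idx in LeaveOneGroupOut().split(features, labels, groups=subject_ids):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(features[train_idx], labels[train_idx])
        preds = clf.predict(features[test_idx])
        scores.append(macro_mcc(labels[test_idx], preds, classes))
    return np.mean(scores), np.std(scores)
```

One design note: LeaveOneGroupOut guarantees that no subject contributes to both training and test data, which is what makes the per-subject scores an estimate of generalization to unseen individuals.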
Related papers
- A Personalized Video-Based Hand Taxonomy: Application for Individuals with Spinal Cord Injury
Spinal cord injuries (SCI) can impair hand function, reducing independence.
This study aims to automatically identify the dominant distinct hand grasps in egocentric video using semantic clustering.
A deep learning model integrating posture and appearance data was employed to create a personalized hand taxonomy.
arXiv Detail & Related papers (2024-03-26T20:30:55Z)
- HMP: Hand Motion Priors for Pose and Shape Estimation from Video
We develop a generative motion prior specific for hands, trained on the AMASS dataset which features diverse and high-quality hand motions.
Our integration of a robust motion prior significantly enhances performance, especially in occluded scenarios.
We demonstrate our method's efficacy via qualitative and quantitative evaluations on the HO3D and DexYCB datasets.
arXiv Detail & Related papers (2023-12-27T22:35:33Z)
- Semantics2Hands: Transferring Hand Motion Semantics between Avatars
Even minor errors in hand motions can significantly impact the user experience.
This paper introduces a novel anatomy-based semantic matrix (ASM) that encodes the semantics of hand motions.
We train the ASM using a semi-supervised learning strategy on the Mixamo and InterHand2.6M datasets.
arXiv Detail & Related papers (2023-08-11T03:07:31Z)
- Simultaneous Estimation of Hand Configurations and Finger Joint Angles using Forearm Ultrasound
Forearm ultrasound images provide a musculoskeletal visualization that can be used to understand hand motion.
We propose a low-latency, CNN-based deep learning pipeline for estimating both metacarpophalangeal (MCP) joint angles and hand configuration, aimed at real-time control of human-machine interfaces.
arXiv Detail & Related papers (2022-11-29T02:06:19Z)
- Interacting Hand-Object Pose Estimation via Dense Mutual Attention
3D hand-object pose estimation is the key to the success of many computer vision applications.
We propose a novel dense mutual attention mechanism that is able to model fine-grained dependencies between the hand and the object.
Our method is able to produce physically plausible poses with high quality and real-time inference speed.
arXiv Detail & Related papers (2022-11-16T10:01:33Z)
- Palm Vein Recognition via Multi-task Loss Function and Attention Layer
In this paper, a convolutional neural network based on VGG-16 transfer learning, fused with an attention mechanism, is used as the feature extraction network on an infrared palm vein dataset.
In order to verify the robustness of the model, some experiments were carried out on datasets from different sources.
At the same time, matching is highly efficient, taking an average of 0.13 seconds per palm vein pair.
arXiv Detail & Related papers (2022-11-11T02:32:49Z)
- Measuring hand use in the home after cervical spinal cord injury using egocentric video
Egocentric video has emerged as a potential solution for monitoring hand function in individuals living with tetraplegia in the community.
We develop and validate a wearable vision-based system for measuring hand use in the home among individuals living with tetraplegia.
arXiv Detail & Related papers (2022-03-31T12:43:23Z)
- Monocular 3D Reconstruction of Interacting Hands via Collision-Aware Factorized Refinements
We make the first attempt to reconstruct 3D interacting hands from a single monocular RGB image.
Our method can generate 3D hand meshes with both precise 3D poses and minimal collisions.
arXiv Detail & Related papers (2021-11-01T08:24:10Z)
- A Skeleton-Driven Neural Occupancy Representation for Articulated Hands
Hand ArticuLated Occupancy (HALO) is a novel representation of articulated hands that bridges the advantages of 3D keypoints and neural implicit surfaces.
We demonstrate the applicability of HALO to the task of conditional generation of hands that grasp 3D objects.
arXiv Detail & Related papers (2021-09-23T14:35:19Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and inherent correlations in multi-modal data for gesture recognition.
Results show that our approach recovers performance with large gains, up to 12.91% in accuracy (ACC) and 20.16% in F1 score, without using any annotations from the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask-RCNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.