A Powered Prosthetic Hand with Vision System for Enhancing the Anthropopathic Grasp
- URL: http://arxiv.org/abs/2412.07105v1
- Date: Tue, 10 Dec 2024 01:45:14 GMT
- Title: A Powered Prosthetic Hand with Vision System for Enhancing the Anthropopathic Grasp
- Authors: Yansong Xu, Xiaohui Wang, Junlin Li, Xiaoqian Zhang, Feng Li, Qing Gao, Chenglong Fu, Yuquan Leng
- Abstract summary: We propose the Spatial Geometry-based Gesture Mapping (SG-GM) method, which constructs gesture functions based on the geometric features of the human hand grasping processes.
We also propose the Motion Trajectory Regression-based Grasping Intent Estimation (MTR-GIE) algorithm.
Experiments were conducted on grasping 8 common daily objects, including a cup, a fork, etc.
- Score: 11.354158680070652
- License:
- Abstract: The anthropomorphism of the grasping process significantly benefits the experience and grasping efficiency of prosthetic hand wearers. Currently, prosthetic hands controlled by signals such as brain-computer interfaces (BCI) and electromyography (EMG) have difficulty precisely recognizing the amputee's grasping gestures and executing anthropomorphic grasp processes. Although prosthetic hands equipped with vision systems enable recognition of objects' features, they lack perception of human grasping intention. Therefore, this paper explores the estimation of grasping gestures solely through visual data to accomplish anthropopathic grasping control and the determination of grasping intention within a multi-object environment. To address this, we propose the Spatial Geometry-based Gesture Mapping (SG-GM) method, which constructs gesture functions based on the geometric features of the human hand's grasping process; it is subsequently implemented on the prosthetic hand. Furthermore, we propose the Motion Trajectory Regression-based Grasping Intent Estimation (MTR-GIE) algorithm, which predicts the pre-grasping object using regression prediction and prior spatial segmentation estimation derived from the prosthetic hand's position and trajectory. Experiments were conducted on grasping 8 common daily objects, including a cup, a fork, etc. The experimental results showed a grasping-process similarity coefficient $R^{2}$ of 0.911, a Root Mean Squared Error ($RMSE$) of 2.47\degree, a grasping success rate of 95.43$\%$, and an average grasping duration of 3.07$\pm$0.41 s. Furthermore, grasping experiments in a multi-object environment were conducted, with the average accuracy of intent estimation reaching 94.35$\%$. Our methodologies offer a groundbreaking approach to enhancing the prosthetic hand's functionality and provide valuable insights for future research.
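The core idea behind MTR-GIE, regressing the hand's recent motion trajectory forward in time and selecting the spatially nearest candidate object, can be illustrated with a minimal sketch. This is an assumption-laden toy version, not the paper's implementation: the 2-D setup, the linear least-squares fit, the `horizon` parameter, and the function names are all illustrative choices.

```python
import math


def _linear_fit(ts, xs):
    """Least-squares fit x = a*t + b over time steps ts; returns (a, b)."""
    n = len(ts)
    mean_t = sum(ts) / n
    mean_x = sum(xs) / n
    cov = sum((t - mean_t) * (x - mean_x) for t, x in zip(ts, xs))
    var = sum((t - mean_t) ** 2 for t in ts)
    a = cov / var
    return a, mean_x - a * mean_t


def estimate_grasp_intent(trajectory, objects, horizon=5):
    """Toy trajectory-regression intent estimator (illustrative only).

    trajectory: list of (x, y) hand positions, oldest first.
    objects: dict mapping object name -> (x, y) position.
    Extrapolates the hand position `horizon` steps ahead with a
    per-axis linear fit, then returns the nearest object's name.
    """
    ts = list(range(len(trajectory)))
    t_future = len(trajectory) - 1 + horizon
    pred = []
    for dim in range(2):
        a, b = _linear_fit(ts, [p[dim] for p in trajectory])
        pred.append(a * t_future + b)

    def dist(name):
        ox, oy = objects[name]
        return math.hypot(pred[0] - ox, pred[1] - oy)

    return min(objects, key=dist)
```

For example, a hand moving along the x-axis toward a cup at (10, 0) would be extrapolated past the fork at (0, 10), so the estimator returns "cup". The paper additionally uses prior spatial segmentation of the workspace, which this sketch omits.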
Related papers
- Machine Learning Assisted Postural Movement Recognition using Photoplethysmography (PPG) [0.0]
There is an urgent need for the development of fall detection and fall prevention technologies.
This work presents for the first time the use of machine learning techniques to recognize postural movements.
Various machine learning approaches were used for classification, and the Artificial Neural Network (ANN) was found to be the best.
arXiv Detail & Related papers (2024-11-02T18:56:41Z) - AiOS: All-in-One-Stage Expressive Human Pose and Shape Estimation [55.179287851188036]
We introduce a novel all-in-one-stage framework, AiOS, for expressive human pose and shape recovery without an additional human detection step.
We first employ a human token to probe a human location in the image and encode global features for each instance.
Then, we introduce a joint-related token to probe the human joint in the image and encode a fine-grained local feature.
arXiv Detail & Related papers (2024-03-26T17:59:23Z) - Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation [57.60490773016364]
We combine vision and touch sensing on a multi-fingered hand to estimate an object's pose and shape during in-hand manipulation.
Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem.
Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation.
arXiv Detail & Related papers (2023-12-20T22:36:37Z) - GraspGF: Learning Score-based Grasping Primitive for Human-assisting Dexterous Grasping [11.63059055320262]
We propose a novel task called human-assisting dexterous grasping.
It aims to train a policy for controlling a robotic hand's fingers to assist users in grasping objects.
arXiv Detail & Related papers (2023-09-12T08:12:32Z) - Non-Contact Heart Rate Measurement from Deteriorated Videos [0.3149883354098941]
Remote photoplethysmography (rPPG) offers a state-of-the-art, non-contact methodology for estimating human pulse by analyzing facial videos.
In this study, we apply image processing to intentionally degrade video quality, mimicking challenging conditions.
Our results reveal a significant decrease in accuracy in the presence of these artifacts, prompting us to propose the application of restoration techniques.
arXiv Detail & Related papers (2023-04-28T11:58:36Z) - ShaRPy: Shape Reconstruction and Hand Pose Estimation from RGB-D with Uncertainty [6.559796851992517]
We propose ShaRPy, the first RGB-D Shape Reconstruction and hand Pose tracking system.
ShaRPy approximates a personalized hand shape, promoting a more realistic and intuitive understanding of its digital twin.
We evaluate ShaRPy on a keypoint detection benchmark and show qualitative results of hand function assessments for activity monitoring of musculoskeletal diseases.
arXiv Detail & Related papers (2023-03-17T15:12:25Z) - Dissecting Self-Supervised Learning Methods for Surgical Computer Vision [51.370873913181605]
Self-Supervised Learning (SSL) methods have begun to gain traction in the general computer vision community.
The effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and unexplored.
We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding, phase recognition and tool presence detection.
arXiv Detail & Related papers (2022-07-01T14:17:11Z) - Koopman pose predictions for temporally consistent human walking estimations [11.016730029019522]
We introduce a new factor graph factor based on Koopman theory that embeds the nonlinear dynamics of lower-limb movement activities.
We show that our approach reduces outliers on the skeleton form by almost 1 m, while preserving natural walking trajectories at depths up to more than 10 m.
arXiv Detail & Related papers (2022-05-05T16:16:06Z) - Learning Dynamics via Graph Neural Networks for Human Pose Estimation and Tracking [98.91894395941766]
We propose a novel online approach to learning the pose dynamics, which are independent of pose detections in the current frame.
Specifically, we derive this prediction of dynamics through a graph neural network (GNN) that explicitly accounts for both spatial-temporal and visual information.
Experiments on PoseTrack 2017 and PoseTrack 2018 datasets demonstrate that the proposed method achieves results superior to the state of the art on both human pose estimation and tracking tasks.
arXiv Detail & Related papers (2021-06-07T16:36:50Z) - Appearance Learning for Image-based Motion Estimation in Tomography [60.980769164955454]
In tomographic imaging, anatomical structures are reconstructed by applying a pseudo-inverse forward model to acquired signals.
Patient motion corrupts the geometry alignment in the reconstruction process resulting in motion artifacts.
We propose an appearance learning approach recognizing the structures of rigid motion independently from the scanned object.
arXiv Detail & Related papers (2020-06-18T09:49:11Z) - Detecting Parkinsonian Tremor from IMU Data Collected In-The-Wild using Deep Multiple-Instance Learning [59.74684475991192]
Parkinson's Disease (PD) is a slowly evolving neurological disease that affects about 1% of the population above 60 years old.
PD symptoms include tremor, rigidity, and bradykinesia.
We present a method for automatically identifying tremorous episodes related to PD, based on IMU signals captured via a smartphone device.
arXiv Detail & Related papers (2020-05-06T09:02:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.