Data-Driven Goal Recognition in Transhumeral Prostheses Using Process
Mining Techniques
- URL: http://arxiv.org/abs/2309.08106v1
- Date: Fri, 15 Sep 2023 02:03:59 GMT
- Authors: Zihang Su, Tianshi Yu, Nir Lipovetzky, Alireza Mohammadi, Denny
Oetomo, Artem Polyvyanyy, Sebastian Sardina, Ying Tan, Nick van Beest
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A transhumeral prosthesis restores missing anatomical segments below the
shoulder, including the hand. Active prostheses utilize real-valued, continuous
sensor data to recognize patient target poses, or goals, and proactively move
the artificial limb. Previous studies have examined how well the data collected
in stationary poses, without considering the time steps, can help discriminate
the goals. In this case study paper, we focus on using time series data from
surface electromyography electrodes and kinematic sensors to sequentially
recognize patients' goals. Our approach involves transforming the data into
discrete events and training an existing process mining-based goal recognition
system. Results from data collected in a virtual reality setting with ten
subjects demonstrate the effectiveness of our proposed goal recognition
approach, which achieves significantly better precision and recall than the
state-of-the-art machine learning techniques and is less confident when wrong,
which is beneficial when approximating smoother movements of prostheses.
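The key transformation the abstract describes is turning real-valued, continuous sensor readings into discrete events that a process-mining-based recognizer can consume as an event log. A minimal sketch of one such discretization, assuming simple amplitude binning over the observed range (the function names, bin labels, and binning scheme here are illustrative assumptions, not the paper's actual encoding):

```python
import numpy as np

def discretize_to_events(signal, n_bins=3, labels=("low", "mid", "high")):
    """Map a continuous 1-D sensor signal to a sequence of discrete events.

    Uniform binning over the observed range is one simple choice; the
    paper's actual encoding of sEMG/kinematic channels may differ.
    """
    lo, hi = float(np.min(signal)), float(np.max(signal))
    # Interior bin edges; np.digitize assigns each sample an index
    # in [0, n_bins - 1].
    edges = np.linspace(lo, hi, n_bins + 1)[1:-1]
    bins = np.digitize(signal, edges)
    return [labels[b] for b in bins]

def to_trace(events):
    """Collapse consecutive duplicates so the trace records state
    changes, the granularity at which an event log is typically mined."""
    trace = []
    for e in events:
        if not trace or trace[-1] != e:
            trace.append(e)
    return trace

emg = np.array([0.1, 0.15, 0.5, 0.55, 0.9, 0.95])
print(to_trace(discretize_to_events(emg)))  # ['low', 'mid', 'high']
```

Traces like these, one per recorded movement toward a known goal, could then serve as the training event log for an off-the-shelf process-mining goal recognition system.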
Related papers
- Enhancing Activity Recognition After Stroke: Generative Adversarial Networks for Kinematic Data Augmentation
Generalizability of machine learning models for wearable monitoring in stroke rehabilitation is often constrained by the limited scale and heterogeneity of available data.
Data augmentation addresses this challenge by adding computationally derived data to real data to enrich the variability represented in the training set.
This study employs Conditional Generative Adversarial Networks (cGANs) to create synthetic kinematic data from a publicly available dataset.
By training deep learning models on both synthetic and experimental data, we enhanced task classification accuracy: models incorporating synthetic data attained an overall accuracy of 80.0%, significantly higher than the 66.1% seen in models trained solely with real data.
arXiv Detail & Related papers (2024-06-12T15:51:00Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- After-Stroke Arm Paresis Detection using Kinematic Data
This paper presents an approach for detecting unilateral arm paralysis/weakness using kinematic data.
Our method employs temporal convolution networks and recurrent neural networks, guided by knowledge distillation.
The results suggest that our method could be a useful tool for clinicians and healthcare professionals working with patients with this condition.
arXiv Detail & Related papers (2023-11-03T16:56:02Z)
- Hand Gesture Classification on Praxis Dataset: Trading Accuracy for Expense
We focus on 'skeletal' data, represented by body joint coordinates, from the Praxis dataset.
The Praxis dataset contains recordings of patients with cortical pathologies such as Alzheimer's disease.
Using a combination of windowing techniques with deep learning architecture such as a Recurrent Neural Network (RNN), we achieved an overall accuracy of 70.8%.
arXiv Detail & Related papers (2023-11-01T18:18:09Z)
- Bayesian and Neural Inference on LSTM-based Object Recognition from Tactile and Kinesthetic Information
Haptic perception encompasses the sensing modalities encountered in the sense of touch (e.g., tactile and kinesthetic sensations).
This letter focuses on multimodal object recognition and proposes analytical and data-driven methodologies to fuse tactile- and kinesthetic-based classification results.
arXiv Detail & Related papers (2023-06-10T12:29:23Z)
- Dissecting Self-Supervised Learning Methods for Surgical Computer Vision
Self-Supervised Learning (SSL) methods have begun to gain traction in the general computer vision community.
The effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and unexplored.
We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding, phase recognition and tool presence detection.
arXiv Detail & Related papers (2022-07-01T14:17:11Z)
- A Temporal Learning Approach to Inpainting Endoscopic Specularities and Its effect on Image Correspondence
We propose using a temporal generative adversarial network (GAN) to inpaint the hidden anatomy under specularities.
This is achieved using in-vivo data of gastric endoscopy (Hyper-Kvasir) in a fully unsupervised manner.
We also assess the effect of our method in computer vision tasks that underpin 3D reconstruction and camera motion estimation.
arXiv Detail & Related papers (2022-03-31T13:14:00Z)
- Federated Cycling (FedCy): Semi-supervised Federated Learning of Surgical Phases
FedCy is a federated semi-supervised learning (FSSL) method that combines federated learning and self-supervised learning to exploit a decentralized dataset of both labeled and unlabeled videos.
We demonstrate significant performance gains over state-of-the-art FSSL methods on the task of automatic recognition of surgical phases.
arXiv Detail & Related papers (2022-03-14T17:44:53Z)
- Real-time landmark detection for precise endoscopic submucosal dissection via shape-aware relation network
We propose a shape-aware relation network for accurate and real-time landmark detection in endoscopic submucosal dissection surgery.
We first devise an algorithm to automatically generate relation keypoint heatmaps, which intuitively represent the prior knowledge of spatial relations among landmarks.
We then develop two complementary regularization schemes to progressively incorporate the prior knowledge into the training process.
arXiv Detail & Related papers (2021-11-08T07:57:30Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and inherent correlations across the modalities to recognize gestures.
Results show that our approach recovers performance with large gains, up to 12.91% in accuracy and 20.16% in F1 score, without using any annotations from the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.