3D Pose Based Feedback for Physical Exercises
- URL: http://arxiv.org/abs/2208.03257v1
- Date: Fri, 5 Aug 2022 16:15:02 GMT
- Title: 3D Pose Based Feedback for Physical Exercises
- Authors: Ziyi Zhao, Sena Kiciroglu, Hugues Vinzant, Yuan Cheng, Isinsu Katircioglu, Mathieu Salzmann, Pascal Fua
- Abstract summary: We introduce a learning-based framework that identifies the mistakes made by a user.
Our framework does not rely on hard-coded rules; instead, it learns them from data.
Our approach yields 90.9% mistake identification accuracy and successfully corrects 94.2% of the mistakes.
- Score: 87.35086507661227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised self-rehabilitation exercises and physical training can cause
serious injuries if performed incorrectly. We introduce a learning-based
framework that identifies the mistakes made by a user and proposes corrective
measures for easier and safer individual training. Our framework does not rely
on hard-coded, heuristic rules. Instead, it learns them from data, which
facilitates its adaptation to specific user needs. To this end, we use a Graph
Convolutional Network (GCN) architecture acting on the user's pose sequence to
model the relationships between the body joint trajectories. To evaluate our
approach, we introduce a dataset with 3 different physical exercises. Our
approach yields 90.9% mistake identification accuracy and successfully corrects
94.2% of the mistakes.
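As a purely illustrative aid, the sketch below shows how a GCN acting on a 3D pose sequence could be wired up for mistake classification in the spirit of the description above; the skeleton topology, layer sizes, and number of mistake classes are assumptions, and this is not the authors' released code.

```python
# Minimal, hypothetical sketch: a small graph-convolutional classifier over
# 3D pose sequences. Topology, dimensions, and class count are assumptions.
import torch
import torch.nn as nn

NUM_JOINTS = 17          # assumed skeleton size
NUM_FRAMES = 50          # assumed sequence length
NUM_CLASSES = 4          # e.g. "correct" plus three hypothetical mistake types

# Assumed adjacency: self-loops plus a few illustrative bone connections.
A = torch.eye(NUM_JOINTS)
for i, j in [(0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 6)]:  # hypothetical bones
    A[i, j] = A[j, i] = 1.0
A_norm = A / A.sum(dim=1, keepdim=True)   # row-normalized adjacency

class GraphConv(nn.Module):
    """One graph convolution: aggregate over neighboring joints, then a linear map."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):                              # x: (batch, joints, features)
        x = torch.einsum("ij,bjf->bif", A_norm, x)     # mix features along bones
        return torch.relu(self.linear(x))

class MistakeClassifier(nn.Module):
    """Classify an exercise repetition into mistake categories."""
    def __init__(self):
        super().__init__()
        # Each joint's full trajectory (frames x 3 coordinates) is its feature vector.
        self.gc1 = GraphConv(NUM_FRAMES * 3, 128)
        self.gc2 = GraphConv(128, 64)
        self.head = nn.Linear(64, NUM_CLASSES)

    def forward(self, poses):                          # poses: (batch, frames, joints, 3)
        x = poses.permute(0, 2, 1, 3).flatten(2)       # -> (batch, joints, frames * 3)
        x = self.gc2(self.gc1(x))
        return self.head(x.mean(dim=1))                # pool over joints, then classify

if __name__ == "__main__":
    logits = MistakeClassifier()(torch.randn(2, NUM_FRAMES, NUM_JOINTS, 3))
    print(logits.shape)                                # torch.Size([2, 4])
```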
Related papers
- POCO: 3D Pose and Shape Estimation with Confidence [99.91683561240549]
We develop POCO, a novel framework for training HPS regressors to estimate not only a 3D human body but also the confidence of that estimate.
Specifically, POCO estimates both the 3D body pose and a per-sample variance.
In all cases, training the network to reason about uncertainty helps it learn to more accurately estimate 3D pose.
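As a hedged illustration of the per-sample-variance idea (not POCO's exact formulation), a regressor can predict a log-variance alongside the pose and be trained with a Gaussian negative log-likelihood:

```python
# Generic sketch only; POCO's actual architecture and loss differ in detail.
import torch
import torch.nn as nn

class PoseWithConfidence(nn.Module):
    def __init__(self, feat_dim=2048, pose_dim=72):   # assumed dimensions
        super().__init__()
        self.pose_head = nn.Linear(feat_dim, pose_dim)
        self.logvar_head = nn.Linear(feat_dim, 1)      # one variance per sample

    def forward(self, features):
        return self.pose_head(features), self.logvar_head(features)

def gaussian_nll(pred_pose, log_var, gt_pose):
    # 0.5 * exp(-s) * err^2 + 0.5 * s, with s = log(sigma^2)
    sq_err = ((pred_pose - gt_pose) ** 2).mean(dim=1, keepdim=True)
    return (0.5 * torch.exp(-log_var) * sq_err + 0.5 * log_var).mean()

# Usage with random stand-ins for image features and ground-truth pose:
model = PoseWithConfidence()
feats, gt = torch.randn(8, 2048), torch.randn(8, 72)
pose, log_var = model(feats)
loss = gaussian_nll(pose, log_var, gt)
```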
arXiv Detail & Related papers (2023-08-24T17:59:04Z)
- Adversarial Unlearning: Reducing Confidence Along Adversarial Directions [88.46039795134993]
We propose a complementary regularization strategy that reduces confidence on self-generated examples.
The method, which we call RCAD, aims to reduce confidence on out-of-distribution examples lying along directions adversarially chosen to increase training loss.
Despite its simplicity, we find on many classification benchmarks that RCAD can be added to existing techniques to increase test accuracy by 1-3% in absolute value.
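A minimal sketch of this kind of regularization, with an assumed step size and weighting (not the paper's reference implementation): take a large step along the direction that increases the training loss, then reward high predictive entropy on the resulting example.

```python
# Hedged sketch; hyperparameters below are placeholders, not the paper's values.
import torch
import torch.nn.functional as F

def rcad_style_loss(model, x, y, step_size=1.0, weight=0.1):
    x = x.clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)                  # standard training loss
    grad, = torch.autograd.grad(ce, x, retain_graph=True)
    x_adv = (x + step_size * grad.sign()).detach()     # self-generated example
    probs = F.softmax(model(x_adv), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    # Minimizing this objective keeps accuracy on clean data while reducing
    # confidence (raising entropy) on the adversarially generated examples.
    return ce - weight * entropy
```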
arXiv Detail & Related papers (2022-06-03T02:26:24Z)
- Domain Knowledge-Informed Self-Supervised Representations for Workout Form Assessment [12.040334568268445]
We propose to learn exercise-specific representations from unlabeled samples.
In particular, our domain knowledge-informed self-supervised approaches exploit the harmonic motion of the exercise actions.
We show that our self-supervised representations outperform off-the-shelf 2D- and 3D-pose estimators.
arXiv Detail & Related papers (2022-02-28T18:40:02Z)
- What Stops Learning-based 3D Registration from Working in the Real World? [53.68326201131434]
This work identifies the sources of 3D point cloud registration failures, analyzes the reasons behind them, and proposes solutions.
Ultimately, this translates to a best-practice 3D registration network (BPNet), constituting the first learning-based method able to handle previously-unseen objects in real-world data.
Our model generalizes to real data without any fine-tuning, reaching an accuracy of up to 67% on point clouds of unseen objects obtained with a commercial sensor.
arXiv Detail & Related papers (2021-11-19T19:24:27Z)
- Self Context and Shape Prior for Sensorless Freehand 3D Ultrasound Reconstruction [61.62191904755521]
3D freehand US reconstruction is promising for addressing the problem, as it provides a broad scanning range and freeform scans.
Existing deep-learning-based methods focus only on basic cases of skill sequences.
We propose a novel approach to sensorless freehand 3D US reconstruction that accounts for complex skill sequences.
arXiv Detail & Related papers (2021-07-31T16:06:50Z)
- Deep Optimized Priors for 3D Shape Modeling and Reconstruction [38.79018852887249]
We introduce a new learning framework for 3D modeling and reconstruction.
We show that the proposed strategy effectively overcomes the limitations imposed by the pre-trained priors.
arXiv Detail & Related papers (2020-12-14T03:56:31Z)
- Fast Uncertainty Quantification for Deep Object Pose Estimation [91.09217713805337]
Deep learning-based object pose estimators are often unreliable and overconfident.
In this work, we propose a simple, efficient, and plug-and-play UQ method for 6-DoF object pose estimation.
arXiv Detail & Related papers (2020-11-16T06:51:55Z)
- Deform-GAN: An Unsupervised Learning Model for Deformable Registration [4.030402376540977]
This paper proposes an unsupervised, non-rigid registration method for 3D medical images.
The proposed gradient loss is robust across sequences and modalities, even for large deformations.
Neither ground-truth nor manual labeling is required during training.
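For illustration only, a generic gradient-based similarity term of this flavor might compare normalized gradient magnitudes of the warped and fixed volumes; the exact loss used in Deform-GAN may be formulated differently.

```python
# Rough, generic sketch of a gradient-based similarity term for multi-modal
# registration. Intensities differ across sequences/modalities, but edges
# tend to co-occur, so gradient magnitudes are compared after normalization.
import torch

def grad_magnitude(v, eps=1e-6):
    # v: (batch, 1, D, H, W); forward finite differences, cropped to a common shape
    dz = v[:, :, 1:, :-1, :-1] - v[:, :, :-1, :-1, :-1]
    dy = v[:, :, :-1, 1:, :-1] - v[:, :, :-1, :-1, :-1]
    dx = v[:, :, :-1, :-1, 1:] - v[:, :, :-1, :-1, :-1]
    return torch.sqrt(dx ** 2 + dy ** 2 + dz ** 2 + eps)

def gradient_similarity_loss(warped, fixed, eps=1e-6):
    gw, gf = grad_magnitude(warped), grad_magnitude(fixed)
    gw = gw / (gw.mean() + eps)   # normalize away per-modality contrast scale
    gf = gf / (gf.mean() + eps)
    return (gw - gf).abs().mean()
```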
arXiv Detail & Related papers (2020-02-26T12:20:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.