Comparison of Visual Trackers for Biomechanical Analysis of Running
- URL: http://arxiv.org/abs/2505.04713v1
- Date: Wed, 07 May 2025 18:04:14 GMT
- Title: Comparison of Visual Trackers for Biomechanical Analysis of Running
- Authors: Luis F. Gomez, Gonzalo Garrido-Lopez, Julian Fierrez, Aythami Morales, Ruben Tolosana, Javier Rueda, Enrique Navarro
- Abstract summary: This work analyzes the performance of six trackers, two point trackers and four joint trackers, for biomechanical analysis in sprints. The experimental framework employs forty sprints from five professional runners. Using joint-based models yields root mean squared errors ranging from 11.41° to 4.37°.
- Score: 12.12643642515884
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Human pose estimation has witnessed significant advancements in recent years, mainly due to the integration of deep learning models, the availability of a vast amount of data, and large computational resources. These developments have led to highly accurate body tracking systems, which have direct applications in sports analysis and performance evaluation. This work analyzes the performance of six trackers: two point trackers and four joint trackers for biomechanical analysis in sprints. The proposed framework compares the results obtained from these pose trackers with the manual annotations of biomechanical experts for more than 5870 frames. The experimental framework employs forty sprints from five professional runners, focusing on three key angles in sprint biomechanics: trunk inclination, hip flexion-extension, and knee flexion-extension. We propose a post-processing module for outlier detection and fusion prediction in the joint angles. The experimental results demonstrate that using joint-based models yields root mean squared errors ranging from 11.41° to 4.37°. When integrated with the post-processing modules, these errors can be reduced to 6.99° and 3.88°, respectively. The experimental findings suggest that human pose tracking approaches can be valuable resources for the biomechanical analysis of running. However, there is still room for improvement in applications where high accuracy is required.
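As a rough illustration of the kind of computation the abstract describes (joint angles from tracked keypoints, RMSE against expert annotations, and a per-frame outlier-rejection/fusion step across trackers), the following minimal Python sketch may help. All function names, the MAD-based rejection rule, and the keypoint choices are illustrative assumptions and do not reproduce the paper's actual post-processing module.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at keypoint b formed by the segments b->a and b->c,
    e.g. hip-knee-ankle for knee flexion (illustrative keypoint choice)."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_ang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

def rmse(pred_deg, ref_deg):
    """Root mean squared error between a tracker's angle series and expert annotations."""
    pred_deg, ref_deg = np.asarray(pred_deg), np.asarray(ref_deg)
    return float(np.sqrt(np.mean((pred_deg - ref_deg) ** 2)))

def fuse_trackers(angle_series, z_thresh=2.5):
    """Per-frame fusion of several trackers' angle estimates (assumed scheme):
    flag outliers with a robust z-score against the per-frame median, then
    average the remaining inliers; fall back to the median if all are flagged."""
    A = np.asarray(angle_series, dtype=float)           # shape: (n_trackers, n_frames)
    med = np.median(A, axis=0)
    mad = np.median(np.abs(A - med), axis=0) + 1e-9     # median absolute deviation per frame
    z = 0.6745 * np.abs(A - med) / mad                  # robust z-score
    inlier = z < z_thresh
    fused = np.where(inlier.any(axis=0),
                     np.sum(np.where(inlier, A, 0.0), axis=0) / np.maximum(inlier.sum(axis=0), 1),
                     med)
    return fused
```

For the knee angle, for instance, a, b, and c would be the hip, knee, and ankle keypoints returned by a tracker for one frame; trunk inclination would instead be measured between the shoulder-hip segment and the vertical. These mappings are assumptions for illustration, not the authors' exact definitions.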
Related papers
- Learning to Track Any Points from Human Motion [55.831218129679144]
We propose an automated pipeline to generate pseudo-labeled training data for point tracking. A point tracking model trained on AnthroTAP achieves state-of-the-art performance on the TAP-Vid benchmark.
arXiv Detail & Related papers (2025-07-08T17:59:58Z) - Learning golf swing signatures from a single wrist-worn inertial sensor [0.0]
We build a data-driven framework for personalized golf swing analysis from a single wrist-worn sensor. We learn a compositional, discrete vocabulary of motion primitives that facilitates the detection and visualization of technical flaws. Our system accurately estimates full-body kinematics and swing events from wrist data, delivering lab-grade motion analysis on-course.
arXiv Detail & Related papers (2025-06-20T22:57:59Z) - Deep Learning for Human Locomotion Analysis in Lower-Limb Exoskeletons: A Comparative Study [1.3569491184708433]
This paper presents an experimental comparison between eight deep neural network backbones to predict high-level locomotion parameters. The LSTM achieved high terrain classification accuracy (0.94 ± 0.04) and precise ramp slope estimation (1.95 ± 0.58°), while the CNN-LSTM performed best on stair height (15.65 ± 7.40 mm). The system operates with 2 ms inference time, supporting real-time applications.
arXiv Detail & Related papers (2025-03-21T07:12:44Z) - VideoRun2D: Cost-Effective Markerless Motion Capture for Sprint Biomechanics [12.12643642515884]
Sprinting is a determinant ability, especially in team sports. The kinematics of the sprint have been studied in the past using different methods.
This study first adapts two general trackers for realistic biomechanical analysis and then evaluates them against manual tracking.
Our best resulting markerless body tracker particularly adapted for sprint biomechanics is termed VideoRun2D.
arXiv Detail & Related papers (2024-09-16T11:10:48Z) - Leveraging Digital Perceptual Technologies for Remote Perception and Analysis of Human Biomechanical Processes: A Contactless Approach for Workload and Joint Force Assessment [4.96669107440958]
This study presents an innovative computer vision framework designed to analyze human movements in industrial settings.
The framework allows for comprehensive scrutiny of human motion, providing valuable insights into kinematic patterns and kinetic data.
arXiv Detail & Related papers (2024-04-02T02:12:00Z) - CogCoM: A Visual Language Model with Chain-of-Manipulations Reasoning [61.21923643289266]
Chain of Manipulations is a mechanism that enables Vision-Language Models to solve problems step-by-step with evidence. After training, models can solve various visual problems by eliciting intrinsic manipulations (e.g., grounding, zoom in) actively without involving external tools. Our trained model, CogCoM, achieves state-of-the-art performance across 9 benchmarks from 4 categories.
arXiv Detail & Related papers (2024-02-06T18:43:48Z) - SMPLer-X: Scaling Up Expressive Human Pose and Shape Estimation [83.18930314027254]
Expressive human pose and shape estimation (EHPS) unifies body, hands, and face motion capture with numerous applications.
In this work, we investigate scaling up EHPS towards the first generalist foundation model (dubbed SMPLer-X) with up to ViT-Huge as the backbone.
With big data and the large model, SMPLer-X exhibits strong performance across diverse test benchmarks and excellent transferability to even unseen environments.
arXiv Detail & Related papers (2023-09-29T17:58:06Z) - Markerless Motion Capture and Biomechanical Analysis Pipeline [0.0]
Markerless motion capture has the potential to expand access to precise movement analysis.
Our pipeline makes it easy to obtain accurate biomechanical estimates of movement in a rehabilitation hospital.
arXiv Detail & Related papers (2023-03-19T13:31:57Z) - Few-shot human motion prediction for heterogeneous sensors [5.210197476419621]
We introduce the first few-shot motion approach that explicitly incorporates the spatial graph.
We show that our model can perform on par with the best approach so far when evaluating on tasks with a fixed output space.
arXiv Detail & Related papers (2022-12-22T15:06:24Z) - Joint Feature Learning and Relation Modeling for Tracking: A One-Stream
Framework [76.70603443624012]
We propose a novel one-stream tracking (OSTrack) framework that unifies feature learning and relation modeling.
In this way, discriminative target-oriented features can be dynamically extracted by mutual guidance.
OSTrack achieves state-of-the-art performance on multiple benchmarks, in particular, it shows impressive results on the one-shot tracking benchmark GOT-10k.
arXiv Detail & Related papers (2022-03-22T18:37:11Z) - Transforming Model Prediction for Tracking [109.08417327309937]
Transformers capture global relations with little inductive bias, allowing the tracker to learn the prediction of more powerful target models.
We train the proposed tracker end-to-end and validate its performance by conducting comprehensive experiments on multiple tracking datasets.
Our tracker sets a new state of the art on three benchmarks, achieving an AUC of 68.5% on the challenging LaSOT dataset.
arXiv Detail & Related papers (2022-03-21T17:59:40Z) - Learning Dynamics via Graph Neural Networks for Human Pose Estimation
and Tracking [98.91894395941766]
We propose a novel online approach to learning the pose dynamics, which are independent of pose detections in the current frame.
Specifically, we derive this prediction of dynamics through a graph neural network (GNN) that explicitly accounts for both spatial-temporal and visual information.
Experiments on PoseTrack 2017 and PoseTrack 2018 datasets demonstrate that the proposed method achieves results superior to the state of the art on both human pose estimation and tracking tasks.
arXiv Detail & Related papers (2021-06-07T16:36:50Z) - AcinoSet: A 3D Pose Estimation Dataset and Baseline Models for Cheetahs
in the Wild [51.35013619649463]
We present an extensive dataset of free-running cheetahs in the wild, called AcinoSet.
The dataset contains 119,490 frames of multi-view synchronized high-speed video footage, camera calibration files and 7,588 human-annotated frames.
The resulting 3D trajectories, human-checked 3D ground truth, and an interactive tool to inspect the data are also provided.
arXiv Detail & Related papers (2021-03-24T15:54:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.