OpenCapBench: A Benchmark to Bridge Pose Estimation and Biomechanics
- URL: http://arxiv.org/abs/2406.09788v1
- Date: Fri, 14 Jun 2024 07:37:28 GMT
- Title: OpenCapBench: A Benchmark to Bridge Pose Estimation and Biomechanics
- Authors: Yoni Gozlan, Antoine Falisse, Scott Uhlrich, Anthony Gatti, Michael Black, Akshay Chaudhari
- Abstract summary: We develop OpenCapBench to offer an easy-to-use unified benchmark to assess common tasks in human pose estimation.
OpenCapBench computes consistent kinematic metrics through joint angles provided by open-source musculoskeletal modeling software (OpenSim).
We introduce SynthPose, a new approach that enables finetuning of pre-trained 2D human pose models to predict an arbitrarily denser set of keypoints for accurate kinematic analysis.
- Score: 2.2188004943970627
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pose estimation has promised to impact healthcare by enabling more practical methods to quantify nuances of human movement and biomechanics. However, despite the inherent connection between pose estimation and biomechanics, these disciplines have largely remained disparate. For example, most current pose estimation benchmarks use metrics such as Mean Per Joint Position Error, Percentage of Correct Keypoints, or mean Average Precision to assess performance, without quantifying kinematic and physiological correctness - key aspects for biomechanics. To alleviate this challenge, we develop OpenCapBench to offer an easy-to-use unified benchmark to assess common tasks in human pose estimation, evaluated under physiological constraints. OpenCapBench computes consistent kinematic metrics through joint angles provided by open-source musculoskeletal modeling software (OpenSim). Through OpenCapBench, we demonstrate that current pose estimation models use keypoints that are too sparse for accurate biomechanics analysis. To mitigate this challenge, we introduce SynthPose, a new approach that enables finetuning of pre-trained 2D human pose models to predict an arbitrarily denser set of keypoints for accurate kinematic analysis through the use of synthetic data. Finetuning prior models on such synthetic data leads to a twofold reduction in joint angle errors. Moreover, OpenCapBench allows users to benchmark their own models on our clinically relevant cohort. Overall, OpenCapBench bridges the computer vision and biomechanics communities, aiming to drive simultaneous advances in both areas.
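For context, the distinction the abstract draws is between position-space metrics (such as MPJPE) and the angle-space metrics OpenCapBench reports. The NumPy sketch below is only an illustration of the two kinds of error measure, not the OpenCapBench pipeline, which obtains joint angles by running OpenSim inverse kinematics on the predicted keypoints; the array shapes, angle set, and noise levels are assumptions.

```python
import numpy as np


def mpjpe(pred_xyz: np.ndarray, gt_xyz: np.ndarray) -> float:
    """Mean Per Joint Position Error, in the units of the input (e.g. mm).

    pred_xyz, gt_xyz: arrays of shape (frames, joints, 3).
    """
    return float(np.linalg.norm(pred_xyz - gt_xyz, axis=-1).mean())


def mean_joint_angle_error(pred_deg: np.ndarray, gt_deg: np.ndarray) -> float:
    """Mean absolute joint angle error in degrees.

    pred_deg, gt_deg: arrays of shape (frames, angles), e.g. hip/knee/ankle
    angle trajectories over one trial.
    """
    return float(np.abs(pred_deg - gt_deg).mean())


# Toy data: a hypothetical 100-frame trial with 20 keypoints and 6 lower-limb angles.
rng = np.random.default_rng(0)
gt_xyz = rng.normal(size=(100, 20, 3))
pred_xyz = gt_xyz + rng.normal(scale=0.01, size=gt_xyz.shape)
gt_ang = rng.uniform(-30.0, 90.0, size=(100, 6))
pred_ang = gt_ang + rng.normal(scale=2.0, size=gt_ang.shape)

print(f"MPJPE: {mpjpe(pred_xyz, gt_xyz):.4f}")
print(f"Mean joint angle error: {mean_joint_angle_error(pred_ang, gt_ang):.2f} deg")
```

A model can score well on the position-space metric while still producing angle trajectories that are physiologically implausible, which is the gap the benchmark is designed to expose.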
Related papers
- Reconstructing Humans with a Biomechanically Accurate Skeleton [55.06027148976482]
We introduce a method for reconstructing 3D humans from a single image using a biomechanically accurate skeleton model.
Compared to state-of-the-art methods for 3D human mesh recovery, our model achieves competitive performance on standard benchmarks.
arXiv Detail & Related papers (2025-03-27T17:56:24Z)
- Validation of Human Pose Estimation and Human Mesh Recovery for Extracting Clinically Relevant Motion Data from Videos [79.62407455005561]
Marker-less motion capture using human pose estimation produces results in line with both the IMU and MoCap kinematics.
While there is still room for improvement in the quality of the data produced, we believe that this compromise is within the margin of error.
arXiv Detail & Related papers (2025-03-18T22:18:33Z)
- WorldSimBench: Towards Video Generation Models as World Simulators [79.69709361730865]
We classify the functionalities of predictive models into a hierarchy and take the first step in evaluating World Simulators by proposing a dual evaluation framework called WorldSimBench.
WorldSimBench includes Explicit Perceptual Evaluation and Implicit Manipulative Evaluation, encompassing human preference assessments from the visual perspective and action-level evaluations in embodied tasks.
Our comprehensive evaluation offers key insights that can drive further innovation in video generation models, positioning World Simulators as a pivotal advancement toward embodied artificial intelligence.
arXiv Detail & Related papers (2024-10-23T17:56:11Z)
- OpenCap markerless motion capture estimation of lower extremity kinematics and dynamics in cycling [0.0]
Markerless motion capture offers several benefits over traditional marker-based systems.
The system can directly detect human body landmarks, reducing manual processing and the errors associated with marker placement.
This study compares the performance of OpenCap, a markerless motion capture system, with traditional marker-based systems in assessing cycling biomechanics.
arXiv Detail & Related papers (2024-08-20T15:57:40Z)
- Using joint angles based on the international biomechanical standards for human action recognition and related tasks [7.789894769085375]
We show how to convert keypoint data into joint angles that uniquely describe a pose.
We experimentally demonstrate that the joint angle representation of keypoint data is suitable for machine learning applications.
The use of joint angles as a human-meaningful representation of kinematic data is particularly promising for applications where interpretability and dialogue with human experts are important.
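As an illustration of what converting keypoints into a joint angle can involve, here is a minimal sketch assuming hip, knee, and ankle keypoints and defining knee flexion as the deviation of the thigh and shank segments from a straight line; the paper itself follows the international biomechanical (ISB) angle conventions, which are more involved than this two-segment example.

```python
import numpy as np


def segment_angle(proximal: np.ndarray, joint: np.ndarray, distal: np.ndarray) -> float:
    """Angle (degrees) at `joint` between the proximal and distal segments.

    Works for 2D or 3D keypoints. A fully straight limb gives 180 degrees,
    so 180 minus this angle can be read as a simple flexion angle.
    """
    u = proximal - joint  # e.g. thigh vector (knee -> hip)
    v = distal - joint    # e.g. shank vector (knee -> ankle)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))


# Hypothetical 2D keypoints in pixels: hip, knee, ankle of one leg.
hip, knee, ankle = np.array([320.0, 200.0]), np.array([330.0, 320.0]), np.array([300.0, 430.0])
knee_flexion = 180.0 - segment_angle(hip, knee, ankle)
print(f"Knee flexion ~ {knee_flexion:.1f} deg")
```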
arXiv Detail & Related papers (2024-06-25T10:23:58Z)
- MS-MANO: Enabling Hand Pose Tracking with Biomechanical Constraints [50.61346764110482]
We integrate a musculoskeletal system with a learnable parametric hand model, MANO, to create MS-MANO.
This model emulates the dynamics of muscles and tendons to drive the skeletal system, imposing physiologically realistic constraints on the resulting torque trajectories.
We also propose a simulation-in-the-loop pose refinement framework, BioPR, that refines the initial estimated pose through a multi-layer perceptron network.
arXiv Detail & Related papers (2024-04-16T02:18:18Z)
- 3D Kinematics Estimation from Video with a Biomechanical Model and Synthetic Training Data [4.130944152992895]
We propose a novel biomechanics-aware network that directly outputs 3D kinematics from two input views.
Our experiments demonstrate that the proposed approach, only trained on synthetic data, outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2024-02-20T17:33:40Z)
- EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable Rendering and Space Exploration [49.90228618894857]
We introduce a new approach to hand-eye calibration called EasyHeC, which is markerless, white-box, and delivers superior accuracy and robustness.
We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration.
Our evaluation demonstrates superior performance on synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-02T03:49:54Z)
- Lightweight Estimation of Hand Mesh and Biomechanically Feasible Kinematic Parameters [9.477719717840683]
We propose an efficient variation of the previously proposed image-to-lixel approach to estimate hand meshes from images.
We introduce an inverse kinematics (IK) network to translate the estimated hand mesh into a biomechanically feasible set of joint rotation parameters.
Our Lite I2L Mesh Net achieves state-of-the-art joint and mesh estimation accuracy with less than 13% of the total computational complexity.
arXiv Detail & Related papers (2023-03-26T22:24:12Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Real-time Pose and Shape Reconstruction of Two Interacting Hands With a Single Depth Camera [79.41374930171469]
We present a novel method for real-time pose and shape reconstruction of two strongly interacting hands.
Our approach combines an extensive list of favorable properties; notably, it is marker-less.
We show state-of-the-art results in scenes that exceed the complexity level demonstrated by previous work.
arXiv Detail & Related papers (2021-06-15T11:39:49Z)
- Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning achieves accuracy similar to the standard active learning approach while roughly halving the executed movement.
arXiv Detail & Related papers (2021-01-26T16:01:02Z)