Differentiable Biomechanics Unlocks Opportunities for Markerless Motion Capture
- URL: http://arxiv.org/abs/2402.17192v1
- Date: Tue, 27 Feb 2024 04:18:15 GMT
- Title: Differentiable Biomechanics Unlocks Opportunities for Markerless Motion Capture
- Authors: R. James Cotton
- Abstract summary: Differentiable physics simulators can be accelerated on a GPU.
We show that these simulators can be used to fit inverse kinematics to markerless motion capture data.
- Score: 2.44755919161855
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent developments have created differentiable physics simulators designed
for machine learning pipelines that can be accelerated on a GPU. While these
simulators can represent biomechanical models, this opportunity has not been exploited
for biomechanics research or markerless motion capture. We show that these
simulators can be used to fit inverse kinematics to markerless motion capture
data, including scaling the model to fit the anthropometric measurements of an
individual. This is performed end-to-end with an implicit representation of the
movement trajectory, which is propagated through the forward kinematic model to
minimize the error from the 3D markers reprojected into the images. The
differentiable optimizer yields other opportunities, such as adding bundle
adjustment during trajectory optimization to refine the extrinsic camera
parameters, or meta-optimization to improve the base model jointly over
trajectories from multiple participants. This approach reduces the
reprojection error of markerless motion capture compared with prior methods and
produces accurate spatial step parameters relative to an instrumented walkway
for both control and clinical populations.
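As a rough illustration of the pipeline the abstract describes (an implicit trajectory
representation pushed through a differentiable forward kinematic model and a camera
reprojection, then optimized end-to-end against detected 2D keypoints), the following
minimal JAX sketch may help. It is not the authors' implementation: forward_kinematics
is a stand-in for the differentiable biomechanical simulator, and the network sizes,
camera models, optimizer, and array shapes are illustrative assumptions.

import jax
import jax.numpy as jnp

N_JOINTS, N_MARKERS, N_CAMS, T = 23, 17, 8, 100   # illustrative sizes

def forward_kinematics(pose, scale):
    # Placeholder for a differentiable biomechanical forward-kinematic model
    # (a real pipeline would call the GPU-accelerated simulator here); maps
    # joint angles and a scale factor to 3D marker positions.
    return scale * jnp.tanh(pose @ jnp.ones((N_JOINTS, N_MARKERS * 3))).reshape(N_MARKERS, 3)

def project(points3d, extrinsic, intrinsic):
    # Pinhole reprojection of 3D markers into one camera image.
    cam = points3d @ extrinsic[:3, :3].T + extrinsic[:3, 3]
    uv = cam[:, :2] / cam[:, 2:3]
    return uv @ intrinsic[:2, :2].T + intrinsic[:2, 2]

def trajectory(params, t):
    # Implicit trajectory representation: a tiny MLP from normalized time to a pose.
    h = jnp.tanh(params["w1"] * t + params["b1"])
    return h @ params["w2"] + params["b2"]

def reprojection_loss(params, times, keypoints2d, extrinsics, intrinsics):
    # Mean squared reprojection error over all frames and cameras.
    def per_frame(t, kp):
        markers = forward_kinematics(trajectory(params, t), params["scale"])
        uv = jax.vmap(project, in_axes=(None, 0, 0))(markers, extrinsics, intrinsics)
        return jnp.mean((uv - kp) ** 2)
    return jnp.mean(jax.vmap(per_frame)(times, keypoints2d))

key = jax.random.PRNGKey(0)
params = {
    "w1": 0.1 * jax.random.normal(key, (64,)),
    "b1": jnp.zeros(64),
    "w2": 0.1 * jax.random.normal(key, (64, N_JOINTS)),
    "b2": jnp.zeros(N_JOINTS),
    "scale": jnp.ones(()),                            # anthropometric scaling, fit jointly
}
times = jnp.linspace(0.0, 1.0, T)
keypoints2d = jnp.zeros((T, N_CAMS, N_MARKERS, 2))    # detected 2D keypoints (placeholder)
extrinsics = jnp.tile(jnp.eye(4)[None], (N_CAMS, 1, 1)).at[:, 2, 3].set(3.0)
intrinsics = jnp.tile(jnp.eye(3)[None], (N_CAMS, 1, 1))

loss_grad = jax.jit(jax.value_and_grad(reprojection_loss))
lr = 1e-2
for step in range(200):                               # gradient-based trajectory optimization
    loss, grads = loss_grad(params, times, keypoints2d, extrinsics, intrinsics)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

In this view, the bundle adjustment and meta-optimization mentioned in the abstract fall
out of the same machinery: camera extrinsics, or model parameters shared across
participants, would simply become additional optimized entries alongside params.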
Related papers
- Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring [71.60457491155451]
Eliminating image blur produced by various kinds of motion has been a challenging problem.
We propose a novel real-world deblurring filtering model called the Motion-adaptive Separable Collaborative Filter.
Our method provides an effective solution for real-world motion blur removal and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-04-19T19:44:24Z)
- Motion Flow Matching for Human Motion Synthesis and Editing [75.13665467944314]
We propose Motion Flow Matching, a novel generative model for human motion generation featuring efficient sampling and effectiveness in motion editing applications.
Our method reduces the sampling complexity from thousand steps in previous diffusion models to just ten steps, while achieving comparable performance in text-to-motion and action-to-motion generation benchmarks.
arXiv Detail & Related papers (2023-12-14T12:57:35Z)
- MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as DanceTrack and SportsMOT.
arXiv Detail & Related papers (2023-06-05T04:24:11Z)
- Visual-Inertial Odometry with Online Calibration of Velocity-Control Based Kinematic Motion Models [3.42658286826597]
Visual-inertial odometry (VIO) is an important technology for autonomous robots with power and payload constraints.
We propose a novel approach for VIO with stereo cameras which integrates and calibrates the velocity-control based kinematic motion model of wheeled mobile robots online.
arXiv Detail & Related papers (2022-04-14T06:21:12Z)
- MotionAug: Augmentation with Physical Correction for Human Motion Prediction [19.240717471864723]
This paper presents a motion data augmentation scheme that incorporates motion synthesis to encourage diversity and motion correction to impose physical plausibility.
Our method outperforms previous noise-based motion augmentation methods by a large margin on both Recurrent Neural Network-based and Graph Convolutional Network-based human motion prediction models.
arXiv Detail & Related papers (2022-03-17T06:53:15Z)
- Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos [115.71874459429381]
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video.
Experiments on benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
arXiv Detail & Related papers (2021-11-29T11:25:14Z)
- TomoSLAM: factor graph optimization for rotation angle refinement in microtomography [0.0]
Relative trajectories of a sample, a detector, and a signal source are traditionally considered to be known.
Due to mechanical backlash, rotation sensor measurement errors, and thermal deformations, the real trajectory differs from the desired one.
The scientific novelty of this work is to consider the problem of trajectory refinement in microtomography as a SLAM problem.
arXiv Detail & Related papers (2021-11-10T08:00:46Z)
- Graph-based Normalizing Flow for Human Motion Generation and Reconstruction [20.454140530081183]
We propose a probabilistic generative model to synthesize and reconstruct long horizon motion sequences conditioned on past information and control signals.
We evaluate the models on a mixture of motion capture datasets of human locomotion with foot-step and bone-length analysis.
arXiv Detail & Related papers (2021-04-07T09:51:15Z)
- MotionRNN: A Flexible Model for Video Prediction with Spacetime-Varying Motions [70.30211294212603]
This paper tackles video prediction from a new dimension: predicting spacetime-varying motions that change incessantly across both space and time.
We propose the MotionRNN framework, which can capture the complex variations within motions and adapt to spacetime-varying scenarios.
arXiv Detail & Related papers (2021-03-03T08:11:50Z)
- Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning achieves accuracy similar to the standard active learning approach while roughly halving the executed movement.
arXiv Detail & Related papers (2021-01-26T16:01:02Z)
- Learning a Generative Motion Model from Image Sequences based on a Latent Motion Matrix [8.774604259603302]
We learn a probabilistic motion model from simulating spatio-temporal registration in a sequence of images.
We show improved registration accuracy and temporally smoother consistency compared to three state-of-the-art registration algorithms.
We also demonstrate the model's applicability for motion analysis, simulation and super-resolution by an improved motion reconstruction from sequences with missing frames.
arXiv Detail & Related papers (2020-11-03T14:44:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.