Transforming Gait: Video-Based Spatiotemporal Gait Analysis
- URL: http://arxiv.org/abs/2203.09371v1
- Date: Thu, 17 Mar 2022 14:57:04 GMT
- Title: Transforming Gait: Video-Based Spatiotemporal Gait Analysis
- Authors: R. James Cotton, Emoonah McClerklin, Anthony Cimorelli, Ankit Patel,
Tasos Karakostas
- Abstract summary: Gait analysis, typically performed in a dedicated lab, produces precise measurements including kinematics and step timing.
We trained a neural network to map 3D joint trajectories and the height of individuals onto interpretable biomechanical outputs.
- Score: 1.749935196721634
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Human pose estimation from monocular video is a rapidly advancing field that
offers great promise to human movement science and rehabilitation. This
potential is tempered by the smaller body of work ensuring the outputs are
clinically meaningful and properly calibrated. Gait analysis, typically
performed in a dedicated lab, produces precise measurements including
kinematics and step timing. Using over 7000 monocular videos from an
instrumented gait analysis lab, we trained a neural network to map 3D joint
trajectories and the height of individuals onto interpretable biomechanical
outputs, including gait cycle timing, sagittal-plane joint kinematics, and
spatiotemporal trajectories. This task-specific layer produces accurate
estimates of the timing of foot contact and foot off events. After parsing the
kinematic outputs into individual gait cycles, it also enables accurate
cycle-by-cycle estimates of cadence, step time, double and single support time,
walking speed and step length.
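Once foot-contact and foot-off events are parsed into cycles, the spatiotemporal parameters reduce to simple differences of event times. A minimal sketch of that bookkeeping (illustrative only; the function and argument names are assumptions, not the paper's code):

```python
import numpy as np

def spatiotemporal_params(contacts, offs, step_lengths):
    """Cycle-by-cycle gait parameters from parsed gait events.

    contacts:     sorted foot-contact times in s, alternating feet  (n,)
    offs:         foot-off times in s; offs[i] lies between
                  contacts[i] and contacts[i + 1]                   (n - 1,)
    step_lengths: step length in m for each step interval           (n - 1,)
    All names and conventions here are illustrative.
    """
    c = np.asarray(contacts, dtype=float)
    o = np.asarray(offs, dtype=float)
    length = np.asarray(step_lengths, dtype=float)

    step_time = np.diff(c)              # time between successive contacts, s
    cadence = 60.0 / step_time          # steps per minute
    speed = length / step_time          # walking speed, m/s
    double_support = o - c[:-1]         # both feet on the ground, s
    single_support = c[1:] - o          # trailing-foot swing, s
    return step_time, cadence, speed, double_support, single_support
```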
Related papers
- Markerless Stride Length estimation in Athletic using Pose Estimation with monocular vision [2.334978724544296]
Performance measures such as stride length in athletics and the pace of runners can be estimated in several ways.
This paper investigates a computer vision-based approach for estimating stride length and speed transition from video sequences.
arXiv Detail & Related papers (2025-07-02T13:37:53Z)
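No implementation detail survives in the summary above; a common monocular recipe, shown here as a hypothetical sketch, tracks an ankle keypoint and converts pixel displacement to metres using the athlete's known height as scale (all names are illustrative):

```python
import numpy as np

def stride_lengths(ankle_xy_px, bbox_heights_px, athlete_height_m, contact_frames):
    """Estimate stride length from 2D ankle keypoints in image space.

    ankle_xy_px:      (T, 2) pixel coordinates of one ankle over T frames
    bbox_heights_px:  (T,) person bounding-box height per frame, in pixels
    athlete_height_m: known athlete height, used as the metric scale
    contact_frames:   frame indices of successive ground contacts of this foot
    All names are illustrative; real systems also correct for perspective.
    """
    xy = np.asarray(ankle_xy_px, dtype=float)
    scale = athlete_height_m / np.asarray(bbox_heights_px, dtype=float)  # m per pixel
    contacts = np.asarray(contact_frames)

    lengths = []
    for f0, f1 in zip(contacts[:-1], contacts[1:]):
        px_dist = np.linalg.norm(xy[f1] - xy[f0])         # pixel displacement
        lengths.append(px_dist * scale[[f0, f1]].mean())  # average local scale
    return np.array(lengths)  # stride length per contact-to-contact interval, m
```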
- Learning golf swing signatures from a single wrist-worn inertial sensor [0.0]
We build a data-driven framework for personalized golf swing analysis from a single wrist-worn sensor.
We learn a compositional, discrete vocabulary of motion primitives that facilitates the detection and visualization of technical flaws.
Our system accurately estimates full-body kinematics and swing events from wrist data, delivering lab-grade motion analysis on-course.
arXiv Detail & Related papers (2025-06-20T22:57:59Z)
- Validation of Human Pose Estimation and Human Mesh Recovery for Extracting Clinically Relevant Motion Data from Videos [79.62407455005561]
Marker-less motion capture using human pose estimation produces results in line with both IMU- and MoCap-based kinematics.
While there is still room for improvement in the quality of the data produced, we believe this compromise falls within the margin of error.
arXiv Detail & Related papers (2025-03-18T22:18:33Z)
- Spatial-Temporal Graph Diffusion Policy with Kinematic Modeling for Bimanual Robotic Manipulation [88.83749146867665]
Existing approaches learn a policy to predict a distant next-best end-effector pose.
They then compute the corresponding joint rotation angles for motion using inverse kinematics.
We propose Kinematics enhanced Spatial-TemporAl gRaph diffuser.
arXiv Detail & Related papers (2025-03-13T17:48:35Z)
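For reference, the inverse-kinematics step the entry above contrasts against has a closed form in the planar two-link case; a hedged sketch (link lengths and names are illustrative, and real manipulators need full 6/7-DoF solvers):

```python
import numpy as np

def two_link_ik(x, y, l1=0.4, l2=0.3):
    """Analytic inverse kinematics for a planar 2-link arm.

    Returns joint angles (q1, q2) in radians placing the end
    effector at (x, y); one of the two elbow solutions.
    Link lengths l1, l2 are illustrative, in metres.
    """
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    cos_q2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= cos_q2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = np.arccos(cos_q2)
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return q1, q2
```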
- 3D Kinematics Estimation from Video with a Biomechanical Model and Synthetic Training Data [4.130944152992895]
We propose a novel biomechanics-aware network that directly outputs 3D kinematics from two input views.
Our experiments demonstrate that the proposed approach, only trained on synthetic data, outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2024-02-20T17:33:40Z)
- Advancing Monocular Video-Based Gait Analysis Using Motion Imitation with Physics-Based Simulation [2.07180164747172]
We use reinforcement learning to control a physics simulation of human movement to replicate the movement seen in video.
This forces the inferred movements to be physically plausible, while improving the accuracy of the inferred step length and walking velocity.
arXiv Detail & Related papers (2024-02-20T02:48:58Z)
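Pipelines like the one above typically optimize a pose-tracking imitation reward so the simulated character reproduces the video-derived motion; a DeepMimic-style sketch (the weights, scales, and names are assumptions, not taken from the paper):

```python
import numpy as np

def imitation_reward(sim_joints, ref_joints, sim_root_vel, ref_root_vel,
                     w_pose=0.7, w_vel=0.3, k_pose=2.0, k_vel=0.1):
    """DeepMimic-style reward: exponentiated tracking errors.

    sim_joints / ref_joints:     (J,) simulated vs. video-inferred joint angles
    sim_root_vel / ref_root_vel: root (pelvis) velocity, m/s
    Weights and scales are illustrative hyperparameters.
    """
    pose_err = np.sum((sim_joints - ref_joints) ** 2)
    vel_err = np.sum((sim_root_vel - ref_root_vel) ** 2)
    r_pose = np.exp(-k_pose * pose_err)   # near 1 when poses match
    r_vel = np.exp(-k_vel * vel_err)      # rewards matching walking velocity
    return w_pose * r_pose + w_vel * r_vel
```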
- Pose2Gait: Extracting Gait Features from Monocular Video of Individuals with Dementia [3.2739089842471136]
Video-based ambient monitoring of gait for older adults with dementia has the potential to detect negative changes in health.
Computer vision-based pose tracking models can process video data automatically and extract joint locations.
However, these models are not optimized for gait analysis in older adults or clinical populations.
arXiv Detail & Related papers (2023-08-22T14:59:17Z)
- Markerless Motion Capture and Biomechanical Analysis Pipeline [0.0]
Markerless motion capture has the potential to expand access to precise movement analysis.
Our pipeline makes it easy to obtain accurate biomechanical estimates of movement in a rehabilitation hospital.
arXiv Detail & Related papers (2023-03-19T13:31:57Z)
- Towards Single Camera Human 3D-Kinematics [15.559206592078425]
We propose a novel approach for direct 3D human kinematic estimation (D3KE) from videos using deep neural networks.
Our experiments demonstrate that the proposed end-to-end training is robust and outperforms 2D and 3D markerless motion capture based kinematic estimation pipelines.
arXiv Detail & Related papers (2023-01-13T08:44:09Z)
- Imposing Temporal Consistency on Deep Monocular Body Shape and Pose Estimation [67.23327074124855]
This paper presents an elegant solution for the integration of temporal constraints in the fitting process.
We derive parameters of a sequence of body models, representing shape and motion of a person, including jaw poses, facial expressions, and finger poses.
Our approach enables the derivation of realistic 3D body models from image sequences, including facial expression and articulated hands.
arXiv Detail & Related papers (2022-02-07T11:11:55Z)
- AcinoSet: A 3D Pose Estimation Dataset and Baseline Models for Cheetahs in the Wild [51.35013619649463]
We present an extensive dataset of free-running cheetahs in the wild, called AcinoSet.
The dataset contains 119,490 frames of multi-view synchronized high-speed video footage, camera calibration files and 7,588 human-annotated frames.
The resulting 3D trajectories, human-checked 3D ground truth, and an interactive tool to inspect the data are also provided.
arXiv Detail & Related papers (2021-03-24T15:54:11Z)
- PhysCap: Physically Plausible Monocular 3D Motion Capture in Real Time [89.68248627276955]
Marker-less 3D motion capture from a single colour camera has seen significant progress.
However, it is a very challenging and severely ill-posed problem.
We present PhysCap, the first algorithm for physically plausible, real-time and marker-less human 3D motion capture.
arXiv Detail & Related papers (2020-08-20T10:46:32Z)
- Contact and Human Dynamics from Monocular Video [73.47466545178396]
Existing deep models predict 2D and 3D kinematic poses from video that are approximately accurate, but contain visible errors.
We present a physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input.
arXiv Detail & Related papers (2020-07-22T21:09:11Z)
- Learning Motion Flows for Semi-supervised Instrument Segmentation from Robotic Surgical Video [64.44583693846751]
We study the semi-supervised instrument segmentation from robotic surgical videos with sparse annotations.
By exploiting generated data pairs, our framework can recover and even enhance temporal consistency of training sequences.
Results show that our method outperforms the state-of-the-art semi-supervised methods by a large margin.
arXiv Detail & Related papers (2020-07-06T02:39:32Z)
- Pedestrian orientation dynamics from high-fidelity measurements [65.06084067891364]
We propose a novel measurement method based on a deep neural architecture, trained on generic physical properties of pedestrian motion.
We show that our method is capable of estimating orientation with an error as low as 7.5 degrees.
This tool opens up new possibilities in the studies of human crowd dynamics where orientation is key.
arXiv Detail & Related papers (2020-01-14T07:08:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.