Human Leg Motion Tracking by Fusing IMUs and RGB Camera Data Using
Extended Kalman Filter
- URL: http://arxiv.org/abs/2011.00574v2
- Date: Mon, 7 Dec 2020 22:20:27 GMT
- Title: Human Leg Motion Tracking by Fusing IMUs and RGB Camera Data Using
Extended Kalman Filter
- Authors: Omid Taheri, Hassan Salarieh, Aria Alasty
- Abstract summary: IMU-based systems, as well as marker-based motion tracking systems, are among the most popular methods to track movement due to their low implementation cost and light weight.
This paper proposes a quaternion-based Extended Kalman Filter approach to recover the motion of the human leg segments by fusing data from a set of IMU sensors with camera-marker system data.
- Score: 4.189643331553922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human motion capture is frequently used to study rehabilitation and clinical
problems, as well as to provide realistic animation for the entertainment
industry. IMU-based systems, as well as marker-based motion tracking systems,
are among the most popular methods to track movement due to their low
implementation cost and light weight. This paper proposes a quaternion-based
Extended Kalman Filter approach that fuses the data of two IMU sensors with
camera-marker system data (a single RGB camera) to recover the motion of the
human leg segments. Based on the complementary properties of the inertial
sensors and the camera-marker system, the proposed measurement model updates
the orientations of the upper leg and the lower leg through three measurement
equations. Global positioning of the body is provided by the pelvis joint
position tracked by the camera-marker system, and a mathematical model is used
to estimate the depth of the joints from the 2D images. The performance of the
proposed algorithm is evaluated against an optical motion tracking system.
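
As a rough illustration of the sensor-fusion idea described in the abstract, the sketch below shows a minimal quaternion-based Extended Kalman Filter in Python (NumPy): gyroscope readings drive the prediction step through the quaternion kinematics, and an orientation obtained from the camera-marker system corrects the estimate in the update step. The class name QuaternionEKF, the noise values, and the direct quaternion measurement model are illustrative assumptions; the paper's full state vector, its three measurement equations, and the joint-depth model are not reproduced here.

```python
# Minimal quaternion EKF sketch for fusing gyroscope data with an external
# orientation measurement (e.g. derived from a camera-marker system).
# Illustrative only: noise values and the direct quaternion measurement
# model are assumptions, not the paper's exact formulation.
import numpy as np

class QuaternionEKF:
    def __init__(self, gyro_noise=1e-3, meas_noise=1e-2):
        self.q = np.array([1.0, 0.0, 0.0, 0.0])  # orientation quaternion [w, x, y, z]
        self.P = np.eye(4) * 1e-3                # state covariance
        self.Q = np.eye(4) * gyro_noise          # process noise
        self.R = np.eye(4) * meas_noise          # measurement noise

    def predict(self, omega, dt):
        """Propagate the quaternion using the body-frame angular rate (rad/s)."""
        wx, wy, wz = omega
        # Quaternion kinematics: q_dot = 0.5 * Omega(omega) * q
        Omega = 0.5 * np.array([
            [0.0, -wx, -wy, -wz],
            [ wx, 0.0,  wz, -wy],
            [ wy, -wz, 0.0,  wx],
            [ wz,  wy, -wx, 0.0],
        ])
        F = np.eye(4) + Omega * dt               # first-order transition matrix
        self.q = F @ self.q
        self.q /= np.linalg.norm(self.q)         # re-normalize to unit length
        self.P = F @ self.P @ F.T + self.Q

    def update(self, q_meas):
        """Correct the state with an orientation measured by the camera-marker system."""
        H = np.eye(4)                            # measurement is the quaternion itself
        y = np.asarray(q_meas, dtype=float) - H @ self.q   # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.q = self.q + K @ y
        self.q /= np.linalg.norm(self.q)
        self.P = (np.eye(4) - K @ H) @ self.P

# Example: one predict/update cycle at 100 Hz for a single leg segment.
ekf = QuaternionEKF()
ekf.predict(omega=np.array([0.1, 0.0, 0.02]), dt=0.01)   # gyroscope sample
ekf.update(q_meas=np.array([1.0, 0.0, 0.0, 0.0]))        # camera-derived orientation
print(ekf.q)
```

In the paper's setting, one such filter state would be maintained per leg segment, with the camera-marker measurements also providing the pelvis position for global placement; this sketch covers only the orientation part.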
Related papers
- COIN: Control-Inpainting Diffusion Prior for Human and Camera Motion Estimation [98.05046790227561]
COIN is a control-inpainting motion diffusion prior that enables fine-grained control to disentangle human and camera motions.
COIN outperforms the state-of-the-art methods in terms of global human motion estimation and camera motion estimation.
arXiv Detail & Related papers (2024-08-29T10:36:29Z)
- OpenCap markerless motion capture estimation of lower extremity kinematics and dynamics in cycling [0.0]
Markerless motion capture offers several benefits over traditional marker-based systems.
The system can directly detect human body landmarks, reducing manual processing and the errors associated with marker placement.
This study compares the performance of OpenCap, a markerless motion capture system, with traditional marker-based systems in assessing cycling biomechanics.
arXiv Detail & Related papers (2024-08-20T15:57:40Z)
- KFD-NeRF: Rethinking Dynamic NeRF with Kalman Filter [49.85369344101118]
We introduce KFD-NeRF, a novel dynamic neural radiance field integrated with an efficient and high-quality motion reconstruction framework based on Kalman filtering.
Our key idea is to model the dynamic radiance field as a dynamic system whose temporally varying states are estimated based on two sources of knowledge: observations and predictions.
Our KFD-NeRF demonstrates similar or even superior performance within comparable computational time, and achieves state-of-the-art view synthesis performance with thorough training.
arXiv Detail & Related papers (2024-07-18T05:48:24Z)
- Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring [71.60457491155451]
Eliminating image blur produced by various kinds of motion has been a challenging problem.
We propose a novel real-world deblurring filtering model called the Motion-adaptive Separable Collaborative Filter.
Our method provides an effective solution for real-world motion blur removal and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-04-19T19:44:24Z)
- Motion-Guided Dual-Camera Tracker for Endoscope Tracking and Motion Analysis in a Mechanical Gastric Simulator [5.073179848641095]
The motion-guided dual-camera vision tracker is proposed to provide robust and accurate tracking of the endoscope tip's 3D position.
The proposed tracker outperforms state-of-the-art vision trackers, with 42% and 72% improvements over the second-best method in average error and maximum error, respectively.
arXiv Detail & Related papers (2024-03-08T08:31:46Z)
- 3D Kinematics Estimation from Video with a Biomechanical Model and Synthetic Training Data [4.130944152992895]
We propose a novel biomechanics-aware network that directly outputs 3D kinematics from two input views.
Our experiments demonstrate that the proposed approach, only trained on synthetic data, outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2024-02-20T17:33:40Z)
- Dynamic Inertial Poser (DynaIP): Part-Based Motion Dynamics Learning for Enhanced Human Pose Estimation with Sparse Inertial Sensors [17.3834029178939]
This paper introduces a novel human pose estimation approach using sparse inertial sensors.
It leverages a diverse array of real inertial motion capture data from different skeleton formats to improve motion diversity and model generalization.
The approach demonstrates superior performance over state-of-the-art models across five public datasets, notably reducing pose error by 19% on the DIP-IMU dataset.
arXiv Detail & Related papers (2023-12-02T13:17:10Z)
- MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as Dancetrack and SportsMOT.
arXiv Detail & Related papers (2023-06-05T04:24:11Z)
- GLAMR: Global Occlusion-Aware Human Mesh Recovery with Dynamic Cameras [99.07219478953982]
We present an approach for 3D global human mesh recovery from monocular videos recorded with dynamic cameras.
We first propose a deep generative motion infiller, which autoregressively infills the body motions of occluded humans based on visible motions.
In contrast to prior work, our approach reconstructs human meshes in consistent global coordinates even with dynamic cameras.
arXiv Detail & Related papers (2021-12-02T18:59:54Z)
- Human POSEitioning System (HPS): 3D Human Pose Estimation and Self-localization in Large Scenes from Body-Mounted Sensors [71.29186299435423]
We introduce the Human POSEitioning System (HPS), a method to recover the full 3D pose of a human registered with a 3D scan of the surrounding environment.
We show that our optimization-based integration exploits the benefits of the two, resulting in pose accuracy free of drift.
HPS could be used for VR/AR applications where humans interact with the scene without requiring direct line of sight with an external camera.
arXiv Detail & Related papers (2021-03-31T17:58:31Z)
- Particle Filter Based Monocular Human Tracking with a 3D Cardbox Model and a Novel Deterministic Resampling Strategy [8.894218894797977]
The proposed system tracks human motion based on monocular silhouette-matching.
A new 3D articulated human upper-body model, named the 3D cardbox model, is created and shown to work successfully for motion tracking.
arXiv Detail & Related papers (2020-02-21T21:21:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.