UMotion: Uncertainty-driven Human Motion Estimation from Inertial and Ultra-wideband Units
- URL: http://arxiv.org/abs/2505.09393v1
- Date: Wed, 14 May 2025 13:48:36 GMT
- Title: UMotion: Uncertainty-driven Human Motion Estimation from Inertial and Ultra-wideband Units
- Authors: Huakun Liu, Hiroki Ota, Xin Wei, Yutaro Hirao, Monica Perusquia-Hernandez, Hideaki Uchiyama, Kiyoshi Kiyokawa
- Abstract summary: UMotion is an uncertainty-driven, online fusing-all state estimation framework for 3D human shape and pose estimation. It is supported by six integrated, body-worn ultra-wideband (UWB) distance sensors with IMUs.
- Score: 11.911147790899816
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse wearable inertial measurement units (IMUs) have gained popularity for estimating 3D human motion. However, challenges such as pose ambiguity, data drift, and limited adaptability to diverse bodies persist. To address these issues, we propose UMotion, an uncertainty-driven, online fusing-all state estimation framework for 3D human shape and pose estimation, supported by six integrated, body-worn ultra-wideband (UWB) distance sensors with IMUs. UWB sensors measure inter-node distances to infer spatial relationships, aiding in resolving pose ambiguities and body shape variations when combined with anthropometric data. Unfortunately, IMUs are prone to drift, and UWB sensors are affected by body occlusions. Consequently, we develop a tightly coupled Unscented Kalman Filter (UKF) framework that fuses uncertainties from sensor data and estimated human motion based on individual body shape. The UKF iteratively refines IMU and UWB measurements by aligning them with uncertain human motion constraints in real time, producing optimal estimates for each. Experiments on both synthetic and real-world datasets demonstrate the effectiveness of UMotion in stabilizing sensor data and its improvement over the state of the art in pose accuracy.
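The abstract's core idea is that each sensor stream carries its own uncertainty, and a Kalman-style filter weights updates by relative confidence. The following is a minimal, hypothetical 1D sketch of that uncertainty-weighted fusion step; the scenario values are invented for illustration and this is not the authors' UKF implementation.

```python
# Minimal 1D illustration of uncertainty-weighted sensor fusion, the
# principle behind Kalman-style filters such as the UKF used in UMotion.
# All numbers below are hypothetical, not taken from the paper.

def fuse(est, est_var, meas, meas_var):
    """Fuse a prior estimate with a new measurement.

    Each value carries a variance; the gain weights the correction by
    relative uncertainty, so a noisier input influences the result less.
    """
    gain = est_var / (est_var + meas_var)       # Kalman gain in 1D
    fused = est + gain * (meas - est)           # pull estimate toward measurement
    fused_var = (1.0 - gain) * est_var          # fused result is more certain
    return fused, fused_var

# Hypothetical scenario: an IMU-derived inter-node distance (drifting,
# high variance) is corrected by a UWB range (noisy but drift-free).
imu_est, imu_var = 1.30, 0.04    # metres, variance
uwb_meas, uwb_var = 1.10, 0.01

fused, fused_var = fuse(imu_est, imu_var, uwb_meas, uwb_var)
print(round(fused, 3), round(fused_var, 4))
```

Because the UWB variance is four times smaller than the IMU variance here, the fused distance lands much closer to the UWB reading, and the fused variance is smaller than either input alone.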
Related papers
- Human Motion Capture from Loose and Sparse Inertial Sensors with Garment-aware Diffusion Models [25.20942802233326]
We present a new task of full-body human pose estimation using sparse, loosely attached IMU sensors. We developed transformer-based diffusion models to synthesize loose IMU data and estimate human poses based on this challenging loose IMU data.
arXiv Detail & Related papers (2025-06-18T09:16:36Z)
- Spatial-Related Sensors Matters: 3D Human Motion Reconstruction Assisted with Textual Semantics [4.9493039356268875]
Leveraging wearable devices for motion reconstruction has emerged as an economical and viable technique.
In this paper, we explore the spatial importance of multiple sensors, supervised by text that describes specific actions.
With textual supervision, our method not only differentiates between ambiguous actions such as sitting and standing but also produces more precise and natural motion.
arXiv Detail & Related papers (2023-12-27T04:21:45Z)
- Multimodal Active Measurement for Human Mesh Recovery in Close Proximity [13.265259738826302]
In physical human-robot interactions, a robot needs to estimate the accurate body pose of a target person.
In these pHRI scenarios, the robot cannot fully observe the target person's body with equipped cameras because the target person must be close to the robot for physical interaction.
We propose an active measurement and sensor fusion framework of the equipped cameras with touch and ranging sensors such as 2D LiDAR.
arXiv Detail & Related papers (2023-10-12T08:17:57Z)
- Multi-Visual-Inertial System: Analysis, Calibration and Estimation [26.658649118048032]
We study state estimation of multi-visual-inertial systems (MVIS) and develop sensor fusion algorithms.
We are interested in the full calibration of the associated visual-inertial sensors.
arXiv Detail & Related papers (2023-08-10T02:47:36Z)
- On the Importance of Accurate Geometry Data for Dense 3D Vision Tasks [61.74608497496841]
Training on inaccurate or corrupt data induces model bias and hampers generalisation capabilities.
This paper investigates the effect of sensor errors for the dense 3D vision tasks of depth estimation and reconstruction.
arXiv Detail & Related papers (2023-03-26T22:32:44Z)
- Pose-Oriented Transformer with Uncertainty-Guided Refinement for 2D-to-3D Human Pose Estimation [51.00725889172323]
We propose a Pose-Oriented Transformer (POT) with uncertainty guided refinement for 3D human pose estimation.
We first develop novel pose-oriented self-attention mechanism and distance-related position embedding for POT to explicitly exploit the human skeleton topology.
We present an Uncertainty-Guided Refinement Network (UGRN) to refine pose predictions from POT, especially for the difficult joints.
arXiv Detail & Related papers (2023-02-15T00:22:02Z)
- FusePose: IMU-Vision Sensor Fusion in Kinematic Space for Parametric Human Pose Estimation [12.821740951249552]
We propose a framework called FusePose under a parametric human kinematic model.
We aggregate different information of IMU or vision data and introduce three distinctive sensor fusion approaches: NaiveFuse, KineFuse and AdaDeepFuse.
The performance of 3D human pose estimation is improved compared to the baseline result.
arXiv Detail & Related papers (2022-08-25T09:35:27Z)
- Towards Scale-Aware, Robust, and Generalizable Unsupervised Monocular Depth Estimation by Integrating IMU Motion Dynamics [74.1720528573331]
Unsupervised monocular depth and ego-motion estimation has drawn extensive research attention in recent years.
We propose DynaDepth, a novel scale-aware framework that integrates information from vision and IMU motion dynamics.
We validate the effectiveness of DynaDepth by conducting extensive experiments and simulations on the KITTI and Make3D datasets.
arXiv Detail & Related papers (2022-07-11T07:50:22Z)
- Transformer Inertial Poser: Attention-based Real-time Human Motion Reconstruction from Sparse IMUs [79.72586714047199]
We propose an attention-based deep learning method to reconstruct full-body motion from six IMU sensors in real-time.
Our method achieves new state-of-the-art results both quantitatively and qualitatively, while being simple to implement and smaller in size.
arXiv Detail & Related papers (2022-03-29T16:24:52Z)
- Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation [70.32536356351706]
We introduce MRP-Net that constitutes a common deep network backbone with two output heads subscribing to two diverse configurations.
We derive suitable measures to quantify prediction uncertainty at both pose and joint level.
We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-03-29T07:14:58Z)
- TLIO: Tight Learned Inertial Odometry [43.17991168599939]
We propose a tightly-coupled Extended Kalman Filter framework for IMU-only state estimation.
We show that our network, trained with pedestrian data from a headset, can produce statistically consistent measurement and uncertainty.
arXiv Detail & Related papers (2020-07-06T03:13:34Z)
- Learning Selective Sensor Fusion for States Estimation [47.76590539558037]
We propose SelectFusion, an end-to-end selective sensor fusion module.
During prediction, the network is able to assess the reliability of the latent features from different sensor modalities.
We extensively evaluate all fusion strategies in both public datasets and on progressively degraded datasets.
arXiv Detail & Related papers (2019-12-30T20:25:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.