BaroPoser: Real-time Human Motion Tracking from IMUs and Barometers in Everyday Devices
- URL: http://arxiv.org/abs/2508.03313v1
- Date: Tue, 05 Aug 2025 10:46:59 GMT
- Title: BaroPoser: Real-time Human Motion Tracking from IMUs and Barometers in Everyday Devices
- Authors: Libo Zhang, Xinyu Yi, Feng Xu
- Abstract summary: We present BaroPoser, the first method that combines IMU and barometric data recorded by a smartphone and a smartwatch to estimate human pose and global translation in real time. By leveraging barometric readings, we estimate sensor height changes, which provide valuable cues for both improving the accuracy of human pose estimation and predicting global translation on non-flat terrain.
- Score: 12.374794959250828
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, tracking human motion using IMUs from everyday devices such as smartphones and smartwatches has gained increasing popularity. However, due to the sparsity of sensor measurements and the lack of datasets capturing human motion over uneven terrain, existing methods often struggle with pose estimation accuracy and are typically limited to recovering movements on flat terrain only. To address these limitations, we present BaroPoser, the first method that combines IMU and barometric data recorded by a smartphone and a smartwatch to estimate human pose and global translation in real time. By leveraging barometric readings, we estimate sensor height changes, which provide valuable cues for both improving the accuracy of human pose estimation and predicting global translation on non-flat terrain. Furthermore, we propose a local thigh coordinate frame to disentangle local and global motion input for better pose representation learning. We evaluate our method on both public benchmark datasets and real-world recordings. Quantitative and qualitative results demonstrate that our approach outperforms the state-of-the-art (SOTA) methods that use IMUs only with the same hardware configuration.
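The cue BaroPoser relies on is that relative height change can be recovered from barometric pressure. Below is a minimal sketch of that physical relation using the standard hypsometric formula; the function name, smoothing window, and constants are illustrative assumptions and do not reflect the authors' actual height-estimation module.

```python
import numpy as np

# Constants for the standard hypsometric (barometric) relation.
# Values are assumptions for illustration only.
SEA_LEVEL_PRESSURE_PA = 101_325.0  # reference pressure (Pa)
TEMPERATURE_K = 288.15             # assumed ambient temperature (15 deg C)
GAS_CONSTANT_DRY_AIR = 287.05      # J / (kg*K)
GRAVITY = 9.80665                  # m / s^2


def pressure_to_relative_height(pressure_pa: np.ndarray,
                                smooth_window: int = 25) -> np.ndarray:
    """Convert a barometer pressure stream (Pa) into height change (m)
    relative to the first sample.

    Illustrative sketch only: it shows why barometric readings carry a
    usable height cue, not how BaroPoser's learned pipeline processes them.
    """
    # Light moving-average smoothing: consumer barometers are noisy,
    # roughly +/- 1 hPa, i.e. several metres of error if used raw.
    kernel = np.ones(smooth_window) / smooth_window
    p = np.convolve(pressure_pa, kernel, mode="same")

    # Hypsometric relation: h = (R * T / g) * ln(p_ref / p).
    scale = GAS_CONSTANT_DRY_AIR * TEMPERATURE_K / GRAVITY
    height = scale * np.log(SEA_LEVEL_PRESSURE_PA / p)

    # Report change relative to the first smoothed sample, so the
    # absolute reference pressure cancels out.
    return height - height[0]


if __name__ == "__main__":
    # Example: a watch-worn barometer sampled at 25 Hz during a 5 m stair climb.
    t = np.linspace(0, 10, 250)
    true_height = np.clip(t - 2, 0, 5)
    pressure = SEA_LEVEL_PRESSURE_PA * np.exp(
        -GRAVITY * true_height / (GAS_CONSTANT_DRY_AIR * TEMPERATURE_K))
    pressure += np.random.normal(0, 5, size=t.shape)  # simulated sensor noise
    dh = pressure_to_relative_height(pressure)
    print(f"estimated ascent: {dh[-1]:.2f} m")
```

Because only relative height enters the pipeline, the choice of reference pressure cancels out; what matters in practice is filtering sensor noise and slow pressure drift, which the paper handles within its learned model rather than with the fixed smoothing used in this sketch.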
Related papers
- IMUCoCo: Enabling Flexible On-Body IMU Placement for Human Pose Estimation and Activity Recognition [23.514920388531184]
We introduce IMU over Continuous Coordinates (IMUCoCo), a novel framework that maps signals from a variable number of IMUs placed on the body surface into a unified feature space. Our evaluations demonstrate that IMUCoCo supports accurate pose estimation in a wide range of typical and atypical sensor placements.
arXiv Detail & Related papers (2025-08-03T19:09:20Z) - MobilePoser: Real-Time Full-Body Pose Estimation and 3D Human Translation from IMUs in Mobile Consumer Devices [9.50274333425178]
We introduce MobilePoser, a real-time system for full-body pose and global translation estimation. MobilePoser employs a physics-based motion estimation followed by a deep neural network for pose estimation, achieving state-of-the-art accuracy while remaining lightweight. We conclude with a series of applications that illustrate the unique potential of MobilePoser across a variety of fields, such as health and wellness, gaming, and indoor navigation.
arXiv Detail & Related papers (2025-04-16T21:19:47Z) - Unified Human Localization and Trajectory Prediction with Monocular Vision [64.19384064365431]
MonoTransmotion is a Transformer-based framework that uses only a monocular camera to jointly solve localization and prediction tasks. We show that by jointly training both tasks with our unified framework, our method is more robust in real-world scenarios made of noisy inputs.
arXiv Detail & Related papers (2025-03-05T14:18:39Z) - Mobile Augmented Reality Framework with Fusional Localization and Pose Estimation [9.73202312695815]
GPS-based mobile AR systems usually perform poorly due to inaccurate positioning in indoor environments. This paper first conducts a comprehensive study of state-of-the-art AR and localization systems on mobile platforms. Then, we propose an effective indoor mobile AR framework in which a fusional localization method and a new pose estimation implementation are developed to increase the overall matching rate and thus improve AR display accuracy.
arXiv Detail & Related papers (2025-01-06T19:02:39Z) - LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment [59.320414108383055]
We present LiveHPS, a novel single-LiDAR-based approach for scene-level human pose and shape estimation.
We also present a large-scale human motion dataset, named FreeMotion, collected in various scenarios with diverse human poses.
arXiv Detail & Related papers (2024-02-27T03:08:44Z) - AlphaPose: Whole-Body Regional Multi-Person Pose Estimation and Tracking in Real-Time [47.19339667836196]
We present AlphaPose, a system that can perform accurate whole-body pose estimation and tracking jointly while running in real time.
We show a significant improvement over current state-of-the-art methods in both speed and accuracy on COCO-wholebody, COCO, PoseTrack, and our proposed Halpe-FullBody pose estimation dataset.
arXiv Detail & Related papers (2022-11-07T09:15:38Z) - LaMAR: Benchmarking Localization and Mapping for Augmented Reality [80.23361950062302]
We introduce LaMAR, a new benchmark with a comprehensive capture and GT pipeline that co-registers realistic trajectories and sensor streams captured by heterogeneous AR devices.
We publish a benchmark dataset of diverse and large-scale scenes recorded with head-mounted and hand-held AR devices.
arXiv Detail & Related papers (2022-10-19T17:58:17Z) - Transformer Inertial Poser: Attention-based Real-time Human Motion Reconstruction from Sparse IMUs [79.72586714047199]
We propose an attention-based deep learning method to reconstruct full-body motion from six IMU sensors in real-time.
Our method achieves new state-of-the-art results both quantitatively and qualitatively, while being simple to implement and smaller in size.
arXiv Detail & Related papers (2022-03-29T16:24:52Z) - Improving Robustness and Accuracy via Relative Information Encoding in 3D Human Pose Estimation [59.94032196768748]
We propose a relative information encoding method that yields positionally and temporally enhanced representations.
Our method outperforms state-of-the-art methods on two public datasets.
arXiv Detail & Related papers (2021-07-29T14:12:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.