Contact-Aware Refinement of Human Pose Pseudo-Ground Truth via Bioimpedance Sensing
- URL: http://arxiv.org/abs/2512.04862v1
- Date: Thu, 04 Dec 2025 14:45:38 GMT
- Title: Contact-Aware Refinement of Human Pose Pseudo-Ground Truth via Bioimpedance Sensing
- Authors: Maria-Paola Forte, Nikos Athanasiou, Giulia Ballardini, Jan Ulrich Bartels, Katherine J. Kuchenbecker, Michael J. Black
- Abstract summary: We propose a novel framework that combines visual pose estimators with bioimpedance sensing to capture the 3D pose of people by taking self-contact into account. We validate our approach using a new dataset of synchronized RGB video, bioimpedance measurements, and 3D motion capture. We also present a miniature wearable bioimpedance sensor that enables efficient large-scale collection of contact-aware training data.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Capturing accurate 3D human pose in the wild would provide valuable data for training pose estimation and motion generation methods. While video-based estimation approaches have become increasingly accurate, they often fail in common scenarios involving self-contact, such as a hand touching the face. In contrast, wearable bioimpedance sensing can cheaply and unobtrusively measure ground-truth skin-to-skin contact. Consequently, we propose a novel framework that combines visual pose estimators with bioimpedance sensing to capture the 3D pose of people by taking self-contact into account. Our method, BioTUCH, initializes the pose using an off-the-shelf estimator and introduces contact-aware pose optimization during measured self-contact: reprojection error and deviations from the input estimate are minimized while enforcing vertex proximity constraints. We validate our approach using a new dataset of synchronized RGB video, bioimpedance measurements, and 3D motion capture. Testing with three input pose estimators, we demonstrate an average of 11.7% improvement in reconstruction accuracy. We also present a miniature wearable bioimpedance sensor that enables efficient large-scale collection of contact-aware training data for improving pose estimation and generation using BioTUCH. Code and data are available at biotuch.is.tue.mpg.de
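The contact-aware optimization described in the abstract (minimize reprojection error and deviation from the initial estimate while keeping contacting vertices close) can be sketched in highly simplified form as a soft-penalty gradient descent over 3D keypoints. This is an illustrative assumption, not the authors' BioTUCH implementation: the function names, orthographic projection, and penalty weights are hypothetical, and the real method operates on body-model vertices with a full camera model and proximity constraints rather than a quadratic penalty.

```python
import numpy as np

def project(points_3d):
    # Orthographic projection onto the xy-plane (stand-in for a camera model)
    return points_3d[:, :2]

def refine_pose(init_3d, obs_2d, contact_pairs, w_reproj=1.0, w_prior=0.1,
                w_contact=10.0, lr=0.01, steps=1000):
    """Gradient-descent refinement of 3D keypoints during measured self-contact.

    init_3d:       (N, 3) initial estimate from a visual pose estimator
    obs_2d:        (N, 2) observed 2D keypoints
    contact_pairs: list of (i, j) index pairs measured to be in contact
    """
    x = init_3d.copy()
    for _ in range(steps):
        grad = np.zeros_like(x)
        # Reprojection term: projected points should match the 2D observations
        diff_2d = project(x) - obs_2d
        grad[:, :2] += 2.0 * w_reproj * diff_2d
        # Prior term: stay close to the off-the-shelf initial estimate
        grad += 2.0 * w_prior * (x - init_3d)
        # Contact term: pull each contacting pair of points together
        for i, j in contact_pairs:
            d = x[i] - x[j]
            grad[i] += 2.0 * w_contact * d
            grad[j] -= 2.0 * w_contact * d
        x -= lr * grad
    return x
```

With a strong contact weight, two keypoints that the bioimpedance signal reports as touching are drawn nearly coincident while the reprojection and prior terms keep the rest of the pose anchored to the image evidence and the initial estimate.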
Related papers
- Reconstructing Humans with a Biomechanically Accurate Skeleton [55.06027148976482]
We introduce a method for reconstructing 3D humans from a single image using a biomechanically accurate skeleton model. Compared to state-of-the-art methods for 3D human mesh recovery, our model achieves competitive performance on standard benchmarks.
arXiv Detail & Related papers (2025-03-27T17:56:24Z)
- Pose Priors from Language Models [74.61186408764559]
Language is often used to describe physical interaction, yet most 3D human pose estimation methods overlook this rich source of information. We bridge this gap by leveraging large multimodal models (LMMs) as priors for reconstructing contact poses.
arXiv Detail & Related papers (2024-05-06T17:59:36Z)
- Hybrid 3D Human Pose Estimation with Monocular Video and Sparse IMUs [15.017274891943162]
Temporal 3D human pose estimation from monocular videos is a challenging task in human-centered computer vision.
Inertial sensors have been introduced to provide a complementary source of information.
It remains challenging to integrate heterogeneous sensor data to produce physically plausible 3D human poses.
arXiv Detail & Related papers (2024-04-27T09:02:42Z)
- Efficient, Self-Supervised Human Pose Estimation with Inductive Prior Tuning [30.256493625913127]
We analyze the relationship between reconstruction quality and pose estimation accuracy.
We develop a model pipeline that outperforms the baseline, using less than one-third the amount of training data.
We show that a combination of well-engineered reconstruction losses and inductive priors can help coordinate pose learning alongside reconstruction.
arXiv Detail & Related papers (2023-11-06T01:19:57Z)
- Multimodal Active Measurement for Human Mesh Recovery in Close Proximity [13.265259738826302]
In physical human-robot interaction (pHRI), a robot needs to accurately estimate the body pose of a target person.
In these pHRI scenarios, the robot cannot fully observe the target person's body with its onboard cameras because the person must be close to the robot for physical interaction.
We propose an active measurement and sensor-fusion framework that combines the onboard cameras with touch and ranging sensors such as 2D LiDAR.
arXiv Detail & Related papers (2023-10-12T08:17:57Z)
- AlphaPose: Whole-Body Regional Multi-Person Pose Estimation and Tracking in Real-Time [47.19339667836196]
We present AlphaPose, a system that performs accurate whole-body pose estimation and tracking jointly while running in real time.
We show a significant improvement over current state-of-the-art methods in both speed and accuracy on COCO-wholebody, COCO, PoseTrack, and our proposed Halpe-FullBody pose estimation dataset.
arXiv Detail & Related papers (2022-11-07T09:15:38Z)
- Non-Local Latent Relation Distillation for Self-Adaptive 3D Human Pose Estimation [63.199549837604444]
3D human pose estimation approaches leverage different forms of strong (2D/3D pose) or weak (multi-view or depth) paired supervision.
We cast 3D pose learning as a self-supervised adaptation problem that aims to transfer the task knowledge from a labeled source domain to a completely unpaired target.
We evaluate different self-adaptation settings and demonstrate state-of-the-art 3D human pose estimation performance on standard benchmarks.
arXiv Detail & Related papers (2022-04-05T03:52:57Z)
- On Self-Contact and Human Pose [50.96752167102025]
We develop new datasets and methods that significantly improve human pose estimation with self-contact.
We show that the new self-contact training data significantly improves 3D human pose estimates on withheld test data and existing datasets like 3DPW.
arXiv Detail & Related papers (2021-04-07T15:10:38Z)
- Human POSEitioning System (HPS): 3D Human Pose Estimation and Self-localization in Large Scenes from Body-Mounted Sensors [71.29186299435423]
We introduce the Human POSEitioning System (HPS), a method to recover the full 3D pose of a human registered with a 3D scan of the surrounding environment.
We show that our optimization-based integration exploits the benefits of both sensor types, resulting in pose accuracy free of drift.
HPS could be used for VR/AR applications where humans interact with the scene without requiring direct line of sight with an external camera.
arXiv Detail & Related papers (2021-03-31T17:58:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.