Fusing uncalibrated IMUs and handheld smartphone video to reconstruct knee kinematics
- URL: http://arxiv.org/abs/2405.17368v1
- Date: Mon, 27 May 2024 17:23:16 GMT
- Title: Fusing uncalibrated IMUs and handheld smartphone video to reconstruct knee kinematics
- Authors: J. D. Peiffer, Kunal Shah, Shawana Anarwala, Kayan Abdou, R. James Cotton
- Abstract summary: We present a method to combine handheld smartphone video and uncalibrated wearable sensor data at their full temporal resolution.
We validate this in a mixture of people with no gait impairments, lower limb prosthesis users, and individuals with a history of stroke.
- Score: 1.5728609542259502
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Video and wearable sensor data provide complementary information about human movement. Video provides a holistic understanding of the entire body in the world while wearable sensors provide high-resolution measurements of specific body segments. A robust method to fuse these modalities and obtain biomechanically accurate kinematics would have substantial utility for clinical assessment and monitoring. While multiple video-sensor fusion methods exist, most assume that a time-intensive, and often brittle, sensor-body calibration process has already been performed. In this work, we present a method to combine handheld smartphone video and uncalibrated wearable sensor data at their full temporal resolution. Our monocular, video-only, biomechanical reconstruction already performs well, with only several degrees of error at the knee during walking compared to markerless motion capture. Reconstructing from a fusion of video and wearable sensor data further reduces this error. We validate this in a mixture of people with no gait impairments, lower limb prosthesis users, and individuals with a history of stroke. We also show that sensor data allows tracking through periods of visual occlusion.
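The core fusion idea can be illustrated with a deliberately simplified sketch: integrate the high-rate relative gyro signal between thigh and shank for smooth knee-angle tracking, and correct the accumulated drift whenever a video frame provides an absolute estimate. The paper's actual method is a full biomechanical reconstruction, not this complementary filter; the function name, parameters, and sampling assumptions below are all hypothetical.

```python
import numpy as np

def fuse_knee_angle(video_angle_deg, video_hz, gyro_rel_deg_s, gyro_hz, alpha=0.98):
    """Toy complementary filter: fuse low-rate video knee-angle estimates with
    high-rate relative angular velocity (shank gyro minus thigh gyro, projected
    onto the flexion axis). Assumes gyro_hz is an integer multiple of video_hz.
    """
    step = int(gyro_hz // video_hz)       # gyro samples per video frame
    dt = 1.0 / gyro_hz
    fused = np.empty_like(gyro_rel_deg_s)
    angle = video_angle_deg[0]            # initialize from the first video frame
    for i, omega in enumerate(gyro_rel_deg_s):
        angle += omega * dt               # high-rate gyro integration
        if i % step == 0:                 # video frame available: correct drift
            v = video_angle_deg[min(i // step, len(video_angle_deg) - 1)]
            angle = alpha * angle + (1 - alpha) * v
        fused[i] = angle
    return fused
```

Because the gyro term dominates between frames, the fused trace keeps the sensor's temporal resolution, while the periodic video correction bounds drift; during visual occlusion the filter simply runs on the gyro term alone.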
Related papers
- OpenCap markerless motion capture estimation of lower extremity kinematics and dynamics in cycling [0.0]
Markerless motion capture offers several benefits over traditional marker-based systems.
The system can directly detect human body landmarks, reducing manual processing and errors associated with marker placement.
This study compares the performance of OpenCap, a markerless motion capture system, with traditional marker-based systems in assessing cycling biomechanics.
arXiv Detail & Related papers (2024-08-20T15:57:40Z)
- Multimodal Active Measurement for Human Mesh Recovery in Close Proximity [13.265259738826302]
In physical human-robot interactions, a robot needs to accurately estimate the body pose of a target person.
In these pHRI scenarios, the robot cannot fully observe the target person's body with equipped cameras because the target person must be close to the robot for physical interaction.
We propose an active measurement and sensor fusion framework of the equipped cameras with touch and ranging sensors such as 2D LiDAR.
arXiv Detail & Related papers (2023-10-12T08:17:57Z)
- Intelligent Knee Sleeves: A Real-time Multimodal Dataset for 3D Lower Body Motion Estimation Using Smart Textile [2.2008680042670123]
We present a multimodal dataset with benchmarks collected using a novel pair of Intelligent Knee Sleeves for human pose estimation.
Our system utilizes synchronized datasets that comprise time-series data from the Knee Sleeves and the corresponding ground truth labels from the visualized motion capture camera system.
We employ these to generate 3D human models solely based on the wearable data of individuals performing different activities.
arXiv Detail & Related papers (2023-10-02T00:34:21Z)
- Multimodal video and IMU kinematic dataset on daily life activities using affordable devices (VIDIMU) [0.0]
The objective of the dataset is to pave the way toward affordable solutions for gross motor tracking of patients, supporting daily-life activity recognition and kinematic analysis.
The novelty of the dataset lies in: (i) the clinical relevance of the chosen movements, (ii) the combined use of affordable video and custom sensors, and (iii) the implementation of state-of-the-art tools for multimodal processing of 3D body pose tracking and motion reconstruction.
arXiv Detail & Related papers (2023-03-27T14:05:49Z)
- DynImp: Dynamic Imputation for Wearable Sensing Data Through Sensory and Temporal Relatedness [78.98998551326812]
We argue that traditional methods have rarely made use of both the time-series dynamics of the data and the relatedness of features from different sensors.
We propose a model, termed DynImp, to handle missingness at different time points using nearest neighbors along the feature axis.
We show that the method can exploit the multi-modality features from related sensors and also learn from history time-series dynamics to reconstruct the data under extreme missingness.
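A minimal sketch of the nearest-neighbor idea, assuming sensor data arranged as a (time, channels) array with NaNs marking missing samples. This toy version fills a missing channel from its most correlated observed channels at the same time step; it is a stand-in illustration, not the DynImp model itself, and every name here is hypothetical.

```python
import numpy as np

def impute_by_related_channels(x, k=2):
    """Fill NaNs in a (time, channels) array using, for each missing entry,
    the mean of the k observed channels most correlated with it.
    Assumes at least some rows are fully observed to estimate correlations.
    """
    x = x.copy()
    # estimate channel-channel correlation from fully observed rows only
    full_rows = x[~np.isnan(x).any(axis=1)]
    corr = np.corrcoef(full_rows, rowvar=False)
    t_idx, c_idx = np.where(np.isnan(x))
    for t, c in zip(t_idx, c_idx):
        observed = np.where(~np.isnan(x[t]))[0]
        # k observed channels with highest absolute correlation to channel c
        neighbors = observed[np.argsort(-np.abs(corr[c, observed]))[:k]]
        x[t, c] = x[t, neighbors].mean()
    return x
```

The real model additionally learns from the history of the time series, so it can reconstruct data under extreme missingness rather than relying on same-timestep neighbors alone.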
arXiv Detail & Related papers (2022-09-26T21:59:14Z)
- QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars [80.05743236282564]
Real-time tracking of human body motion is crucial for immersive experiences in AR/VR.
We present a reinforcement learning framework that takes in sparse signals from an HMD and two controllers.
We show that a single policy can be robust to diverse locomotion styles, different body sizes, and novel environments.
arXiv Detail & Related papers (2022-09-20T00:25:54Z)
- Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities [66.4525391417921]
We design and evaluate a multi-sensor drone detection system.
Our solution also integrates a fish-eye camera to monitor a wider part of the sky and steer the other cameras toward objects of interest.
The thermal camera is shown to be a feasible solution, performing as well as the video camera even though the unit employed here has a lower resolution.
arXiv Detail & Related papers (2022-07-05T10:00:58Z)
- Human POSEitioning System (HPS): 3D Human Pose Estimation and Self-localization in Large Scenes from Body-Mounted Sensors [71.29186299435423]
We introduce the Human POSEitioning System (HPS), a method to recover the full 3D pose of a human registered with a 3D scan of the surrounding environment.
We show that our optimization-based integration exploits the benefits of both modalities, resulting in drift-free pose accuracy.
HPS could be used for VR/AR applications where humans interact with the scene without requiring direct line of sight with an external camera.
arXiv Detail & Related papers (2021-03-31T17:58:31Z)
- Human Leg Motion Tracking by Fusing IMUs and RGB Camera Data Using Extended Kalman Filter [4.189643331553922]
IMU-based systems, as well as marker-based motion tracking systems, are among the most popular methods for tracking movement due to their low implementation cost and light weight.
This paper proposes a quaternion-based Extended Kalman filter approach to recover the human leg segments motions with a set of IMU sensors data fused with camera-marker system data.
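The gyro-driven prediction step of such a quaternion filter can be sketched as follows. This shows only orientation propagation from angular rates; the paper's EKF additionally corrects the state with camera-marker measurements and propagates covariance, and the function here is illustrative rather than the authors' implementation.

```python
import numpy as np

def integrate_gyro(q, omega, dt):
    """Propagate an orientation quaternion q = (w, x, y, z) by a body-frame
    angular rate omega (rad/s) over dt seconds, then renormalize.
    Implements q_dot = 0.5 * q ⊗ (0, omega) with a first-order Euler step.
    """
    w, x, y, z = q
    wx, wy, wz = omega
    q_dot = 0.5 * np.array([
        -x * wx - y * wy - z * wz,
         w * wx + y * wz - z * wy,
         w * wy - x * wz + z * wx,
         w * wz + x * wy - y * wx,
    ])
    q_new = np.asarray(q) + q_dot * dt     # Euler integration of q_dot
    return q_new / np.linalg.norm(q_new)   # keep q on the unit sphere
```

Renormalizing after each step keeps the quaternion a valid rotation despite the first-order integration error; a full EKF would instead track this error in the state covariance and shrink it at each camera-marker update.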
arXiv Detail & Related papers (2020-11-01T17:54:53Z)
- Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline to identify whether a recalibration of the camera's intrinsic parameters is required or not.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
- OmniTact: A Multi-Directional High Resolution Touch Sensor [109.28703530853542]
Existing tactile sensors are either flat, have small sensitive fields or only provide low-resolution signals.
We introduce OmniTact, a multi-directional high-resolution tactile sensor.
We evaluate the capabilities of OmniTact on a challenging robotic control task.
arXiv Detail & Related papers (2020-03-16T01:31:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.