Multimodal video and IMU kinematic dataset on daily life activities
using affordable devices (VIDIMU)
- URL: http://arxiv.org/abs/2303.16150v2
- Date: Fri, 2 Feb 2024 05:40:41 GMT
- Title: Multimodal video and IMU kinematic dataset on daily life activities
using affordable devices (VIDIMU)
- Authors: Mario Martínez-Zarzuela, Javier González-Alonso, Míriam
Antón-Rodríguez, Francisco J. Díaz-Pernas, Henning Müller, Cristina
Simón-Martínez
- Abstract summary: The objective of the dataset is to pave the way towards affordable patient gross motor tracking solutions for daily life activities recognition and kinematic analysis.
The novelty of the dataset lies in: (i) the clinical relevance of the chosen movements, (ii) the combined utilization of affordable video and custom sensors, and (iii) the implementation of state-of-the-art tools for multimodal data processing of 3D body pose tracking and motion reconstruction.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human activity recognition and clinical biomechanics are challenging problems
in physical telerehabilitation medicine. However, most publicly available
datasets on human body movements cannot be used to study both problems in an
out-of-the-lab movement acquisition setting. The objective of the VIDIMU
dataset is to pave the way towards affordable patient gross motor tracking
solutions for daily life activities recognition and kinematic analysis. The
dataset includes 13 activities registered using a commodity camera and five
inertial sensors. The video recordings were acquired from 54 subjects, of which
16 also had simultaneous recordings of inertial sensors. The novelty of the dataset
lies in: (i) the clinical relevance of the chosen movements, (ii) the combined
utilization of affordable video and custom sensors, and (iii) the
implementation of state-of-the-art tools for multimodal data processing of 3D
body pose tracking and motion reconstruction in a musculoskeletal model from
inertial data. The validation confirms that a minimally disturbing acquisition
protocol, performed under real-life conditions, can provide a
comprehensive picture of human joint angles during daily life activities.
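A central step in this kind of multimodal processing is aligning the two streams, since a commodity camera and IMUs sample at different rates. Below is a minimal sketch of one way to pair each video frame with its nearest IMU sample by timestamp; the sampling rates and function name are illustrative assumptions, not part of the dataset's official tooling.

```python
from bisect import bisect_left

def align_nearest(video_ts, imu_ts):
    """For each video timestamp, return the index of the nearest IMU sample.

    Both input lists are assumed to be sorted timestamps in seconds.
    """
    idx = []
    for t in video_ts:
        i = bisect_left(imu_ts, t)  # first IMU sample at or after t
        if i == 0:
            idx.append(0)
        elif i == len(imu_ts):
            idx.append(len(imu_ts) - 1)
        else:
            # pick whichever neighboring IMU sample is closer in time
            idx.append(i if imu_ts[i] - t < t - imu_ts[i - 1] else i - 1)
    return idx

# Hypothetical rates: 30 Hz video frames and 50 Hz IMU samples over one second
video_ts = [k / 30 for k in range(30)]
imu_ts = [k / 50 for k in range(50)]
pairs = align_nearest(video_ts, imu_ts)
```

After this alignment, each video-derived pose can be compared against the temporally closest inertial measurement for joint-angle validation.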
Related papers
- Scaling Wearable Foundation Models [54.93979158708164]
We investigate the scaling properties of sensor foundation models across compute, data, and model size.
Using a dataset of up to 40 million hours of in-situ heart rate, heart rate variability, electrodermal activity, accelerometer, skin temperature, and altimeter per-minute data from over 165,000 people, we create LSM.
Our results establish the scaling laws of LSM for tasks such as imputation and extrapolation, both across time and sensor modalities.
arXiv Detail & Related papers (2024-10-17T15:08:21Z)
- Motion Capture from Inertial and Vision Sensors [60.5190090684795]
MINIONS is a large-scale Motion capture dataset collected from INertial and visION Sensors.
We conduct experiments on multi-modal motion capture using a monocular camera and very few IMUs.
arXiv Detail & Related papers (2024-07-23T09:41:10Z)
- Daily Physical Activity Monitoring -- Adaptive Learning from Multi-source Motion Sensor Data [17.604797095380114]
In healthcare applications, there is a growing need to develop machine learning models that use data from a single source, such as from a wrist wearable device.
However, the limitation of using single-source data often compromises the model's accuracy, as it fails to capture the full scope of human activities.
We introduce a transfer learning framework that optimizes machine learning models for everyday applications by leveraging multi-source data collected in a laboratory setting.
arXiv Detail & Related papers (2024-05-26T01:08:28Z)
- Scaling Up Dynamic Human-Scene Interaction Modeling [58.032368564071895]
TRUMANS is the most comprehensive motion-captured HSI dataset currently available.
It intricately captures whole-body human motions and part-level object dynamics.
We devise a diffusion-based autoregressive model that efficiently generates HSI sequences of any length.
arXiv Detail & Related papers (2024-03-13T15:45:04Z)
- 3D Kinematics Estimation from Video with a Biomechanical Model and Synthetic Training Data [4.130944152992895]
We propose a novel biomechanics-aware network that directly outputs 3D kinematics from two input views.
Our experiments demonstrate that the proposed approach, only trained on synthetic data, outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2024-02-20T17:33:40Z)
- Intelligent Knee Sleeves: A Real-time Multimodal Dataset for 3D Lower Body Motion Estimation Using Smart Textile [2.2008680042670123]
We present a multimodal dataset with benchmarks collected using a novel pair of Intelligent Knee Sleeves for human pose estimation.
Our system utilizes synchronized datasets that comprise time-series data from the Knee Sleeves and the corresponding ground truth labels from the visualized motion capture camera system.
We employ these to generate 3D human models solely based on the wearable data of individuals performing different activities.
arXiv Detail & Related papers (2023-10-02T00:34:21Z)
- Motion Matters: Neural Motion Transfer for Better Camera Physiological Measurement [25.27559386977351]
Body motion is one of the most significant sources of noise when attempting to recover the subtle cardiac pulse from a video.
We adapt a neural video synthesis approach to augment videos for the task of remote photoplethysmography.
We demonstrate a 47% improvement over existing inter-dataset results using various state-of-the-art methods.
arXiv Detail & Related papers (2023-03-21T17:51:23Z)
- Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps [100.72245315180433]
We present a reconfigurable data glove design to capture different modes of human hand-object interactions.
The glove operates in three modes for various downstream tasks with distinct features.
We evaluate the system's three modes by (i) recording hand gestures and associated forces, (ii) improving manipulation fluency in VR, and (iii) producing realistic simulation effects of various tool uses.
arXiv Detail & Related papers (2023-01-14T05:35:50Z)
- QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars [80.05743236282564]
Real-time tracking of human body motion is crucial for immersive experiences in AR/VR.
We present a reinforcement learning framework that takes in sparse signals from an HMD and two controllers.
We show that a single policy can be robust to diverse locomotion styles, different body sizes, and novel environments.
arXiv Detail & Related papers (2022-09-20T00:25:54Z)
- Synthesizing Skeletal Motion and Physiological Signals as a Function of a Virtual Human's Actions and Emotions [10.59409233835301]
We develop for the first time a system consisting of computational models for synchronously synthesizing skeletal motion, electrocardiogram, blood pressure, respiration, and skin conductance signals.
The proposed framework is modular and allows the flexibility to experiment with different models.
In addition to facilitating ML research for round-the-clock monitoring at a reduced cost, the proposed framework will allow reusability of code and data.
arXiv Detail & Related papers (2021-02-08T21:56:15Z)
- Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online approach of multi-modal graph network (i.e., MRG-Net) to dynamically integrate visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.