Synthesizing Skeletal Motion and Physiological Signals as a Function of
a Virtual Human's Actions and Emotions
- URL: http://arxiv.org/abs/2102.04548v1
- Date: Mon, 8 Feb 2021 21:56:15 GMT
- Title: Synthesizing Skeletal Motion and Physiological Signals as a Function of
a Virtual Human's Actions and Emotions
- Authors: Bonny Banerjee, Masoumeh Heidari Kapourchali, Murchana Baruah, Mousumi
Deb, Kenneth Sakauye, Mette Olufsen
- Abstract summary: We develop for the first time a system consisting of computational models for synchronously synthesizing skeletal motion, electrocardiogram, blood pressure, respiration, and skin conductance signals.
The proposed framework is modular and allows the flexibility to experiment with different models.
In addition to facilitating ML research for round-the-clock monitoring at a reduced cost, the proposed framework will allow reusability of code and data.
- Score: 10.59409233835301
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Round-the-clock monitoring of human behavior and emotions is
required in many healthcare applications; it is very expensive but can be
automated using machine learning (ML) and sensor technologies. Unfortunately, the lack of
infrastructure for collection and sharing of such data is a bottleneck for ML
research applied to healthcare. Our goal is to circumvent this bottleneck by
simulating a human body in a virtual environment. This will allow the generation
of potentially infinite amounts of shareable data from an individual as a function
of their actions, interactions, and emotions in a care facility or at home, with
no risk of confidentiality breach or privacy invasion. In this paper, we
develop for the first time a system consisting of computational models for
synchronously synthesizing skeletal motion, electrocardiogram, blood pressure,
respiration, and skin conductance signals as a function of an open-ended set of
actions and emotions. Our experimental evaluations, involving user studies,
benchmark datasets and comparison to findings in the literature, show that our
models can generate skeletal motion and physiological signals with high
fidelity. The proposed framework is modular and allows the flexibility to
experiment with different models. In addition to facilitating ML research for
round-the-clock monitoring at a reduced cost, the proposed framework will allow
reusability of code and data, and may be used as a training tool for ML
practitioners and healthcare professionals.
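The abstract's central idea of a modular framework, where independent signal models are driven synchronously by a shared action and emotion state, can be illustrated with a minimal sketch. All model names, rates, and formulas below are hypothetical placeholders, not the paper's actual models; the point is only the plug-in structure and the shared clock that keeps the generated streams synchronized.

```python
import math

# Hypothetical sketch: each signal model is a plug-in callable mapping
# (action, emotion, time) to a sample; a driver queries all registered
# models on one shared clock so the streams stay synchronized.

def heart_rate_bpm(action, emotion):
    # Illustrative lookup tables, not values from the paper.
    base = {"sit": 70, "walk": 95, "run": 140}.get(action, 75)
    arousal = {"calm": 0, "stressed": 15, "excited": 10}.get(emotion, 0)
    return base + arousal

def ecg_sample(action, emotion, t):
    # Crude ECG stand-in: a sinusoid at the current heart-rate frequency.
    f = heart_rate_bpm(action, emotion) / 60.0
    return math.sin(2 * math.pi * f * t)

def respiration_sample(action, emotion, t):
    # Breathing rate (breaths/s) modulated by activity level.
    f = 0.25 if action == "sit" else 0.5
    return math.sin(2 * math.pi * f * t)

def synthesize(models, action, emotion, duration_s, rate_hz):
    """Run every registered model on the same clock (the synchrony idea)."""
    n = int(duration_s * rate_hz)
    return {
        name: [model(action, emotion, i / rate_hz) for i in range(n)]
        for name, model in models.items()
    }

# Modularity: swapping a model in or out is a dictionary edit.
models = {"ecg": ecg_sample, "respiration": respiration_sample}
signals = synthesize(models, action="walk", emotion="stressed",
                     duration_s=2, rate_hz=100)
```

Because every model consumes the same `(action, emotion, t)` inputs, replacing one physiological model with a more faithful one leaves the rest of the framework untouched, which is the reusability property the abstract emphasizes.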
Related papers
- Scaling Wearable Foundation Models [54.93979158708164]
We investigate the scaling properties of sensor foundation models across compute, data, and model size.
Using a dataset of up to 40 million hours of in-situ heart rate, heart rate variability, electrodermal activity, accelerometer, skin temperature, and altimeter per-minute data from over 165,000 people, we create LSM.
Our results establish the scaling laws of LSM for tasks such as imputation and extrapolation, both across time and across sensor modalities.
arXiv Detail & Related papers (2024-10-17T15:08:21Z) - The Role of Functional Muscle Networks in Improving Hand Gesture Perception for Human-Machine Interfaces [2.367412330421982]
Surface electromyography (sEMG) has been explored for its rich informational context and accessibility.
This paper proposes the decoding of muscle synchronization rather than individual muscle activation.
It achieves an accuracy of 85.1%, demonstrating improved performance compared to existing methods.
arXiv Detail & Related papers (2024-08-05T15:17:34Z) - Daily Physical Activity Monitoring -- Adaptive Learning from Multi-source Motion Sensor Data [17.604797095380114]
In healthcare applications, there is a growing need to develop machine learning models that use data from a single source, such as a wrist-worn wearable device.
However, the limitation of using single-source data often compromises the model's accuracy, as it fails to capture the full scope of human activities.
We introduce a transfer learning framework that optimizes machine learning models for everyday applications by leveraging multi-source data collected in a laboratory setting.
arXiv Detail & Related papers (2024-05-26T01:08:28Z) - Scaling Up Dynamic Human-Scene Interaction Modeling [58.032368564071895]
TRUMANS is the most comprehensive motion-captured HSI dataset currently available.
It intricately captures whole-body human motions and part-level object dynamics.
We devise a diffusion-based autoregressive model that efficiently generates HSI sequences of any length.
arXiv Detail & Related papers (2024-03-13T15:45:04Z) - InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint [67.6297384588837]
We introduce a novel controllable motion generation method, InterControl, to encourage synthesized motions to maintain the desired distance between joint pairs.
We demonstrate that the distance between joint pairs for human-wise interactions can be generated using an off-the-shelf Large Language Model.
arXiv Detail & Related papers (2023-11-27T14:32:33Z) - Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps [100.72245315180433]
We present a reconfigurable data glove design to capture different modes of human hand-object interactions.
The glove operates in three modes for various downstream tasks with distinct features.
We evaluate the system's three modes by (i) recording hand gestures and associated forces, (ii) improving manipulation fluency in VR, and (iii) producing realistic simulation effects of various tool uses.
arXiv Detail & Related papers (2023-01-14T05:35:50Z) - Open-VICO: An Open-Source Gazebo Toolkit for Multi-Camera-based Skeleton
Tracking in Human-Robot Collaboration [0.0]
This work presents Open-VICO, an open-source toolkit to integrate virtual human models in Gazebo.
In particular, Open-VICO makes it possible to combine realistic human kinematic models, multi-camera vision setups, and human-tracking techniques in the same simulation environment.
arXiv Detail & Related papers (2022-03-28T13:21:32Z) - Self-supervised transfer learning of physiological representations from
free-living wearable data [12.863826659440026]
We present a novel self-supervised representation learning method using activity and heart rate (HR) signals without semantic labels.
We evaluate our model on the largest free-living combined-sensing dataset (comprising >280k hours of wrist accelerometer & wearable ECG data).
arXiv Detail & Related papers (2020-11-18T23:21:34Z) - Towards an Automatic Analysis of CHO-K1 Suspension Growth in
Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction at the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z) - Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.