Utilizing Task-Generic Motion Prior to Recover Full-Body Motion from
Very Sparse Signals
- URL: http://arxiv.org/abs/2308.15839v1
- Date: Wed, 30 Aug 2023 08:21:52 GMT
- Title: Utilizing Task-Generic Motion Prior to Recover Full-Body Motion from
Very Sparse Signals
- Authors: Myungjin Shin, Dohae Lee, In-Kwon Lee
- Abstract summary: We propose a method that utilizes information from a neural motion prior to improve the accuracy of reconstructed user's motions.
This is based on the premise that the ultimate goal of pose reconstruction is to reconstruct the motion, which is a series of poses.
- Score: 3.8079353598215757
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The most popular type of devices used to track a user's posture in a virtual
reality experience consists of a head-mounted display and two controllers held
in both hands. However, due to the limited number of tracking sensors (three in
total), faithfully recovering the user's full body is challenging, limiting
the potential for interactions among simulated user avatars within the virtual
world. Therefore, recent studies have attempted to reconstruct full-body poses
using neural networks that utilize previously learned human poses or accept a
series of past poses over a short period. In this paper, we propose a method
that utilizes information from a neural motion prior to improve the accuracy of
the reconstructed user's motions. Our approach aims to reconstruct the user's full-body
poses by predicting the latent representation of the user's overall motion from
limited input signals and integrating this information with tracking sensor
inputs. This is based on the premise that the ultimate goal of pose
reconstruction is to reconstruct the motion, which is a series of poses. Our
results show that this integration enables more accurate reconstruction of the
user's full-body motion, particularly enhancing the robustness of lower body
motion reconstruction from impoverished signals. Web:
https://mjsh34.github.io/mp-sspe/
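The pipeline the abstract describes can be sketched minimally: predict a latent motion representation from the sparse tracker signal, then fuse that latent code with the raw sensor input to decode a full-body pose. The sketch below uses random linear maps as stand-ins for the trained networks; all dimensions, names, and the tanh/concatenation choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative, not the paper's actual sizes)
SPARSE_DIM = 3 * 6    # 3 trackers (HMD + 2 controllers), 6-DoF each
LATENT_DIM = 16       # latent representation of the overall motion
POSE_DIM = 22 * 3     # full-body pose, e.g. 22 joint rotations

# Random linear maps standing in for learned networks
W_prior = rng.normal(size=(LATENT_DIM, SPARSE_DIM)) * 0.1           # sparse -> latent
W_dec = rng.normal(size=(POSE_DIM, LATENT_DIM + SPARSE_DIM)) * 0.1  # fused -> pose

def reconstruct_pose(sparse_signal):
    """Predict a latent motion code from the sparse trackers, then fuse
    it with the raw tracker signal to decode a full-body pose."""
    z = np.tanh(W_prior @ sparse_signal)        # latent motion representation
    fused = np.concatenate([z, sparse_signal])  # integrate latent + sensor input
    return W_dec @ fused                        # full-body pose estimate

sparse = rng.normal(size=SPARSE_DIM)
pose = reconstruct_pose(sparse)
print(pose.shape)  # (66,)
```

The key design point the abstract emphasizes is the fusion step: the decoder sees both the predicted motion-level latent and the frame-level sensor input, rather than the sensor input alone.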
Related papers
- Self-Avatar Animation in Virtual Reality: Impact of Motion Signals Artifacts on the Full-Body Pose Reconstruction [13.422686350235615]
We measure the impact of motion signal artifacts on the reconstruction of the articulated self-avatar's full-body pose.
We analyze the motion reconstruction errors using ground truth and 3D Cartesian coordinates estimated from YOLOv8 pose estimation.
arXiv Detail & Related papers (2024-04-29T12:02:06Z)
- SparsePoser: Real-time Full-body Motion Reconstruction from Sparse Data [1.494051815405093]
We introduce SparsePoser, a novel deep learning-based solution for reconstructing a full-body pose from sparse data.
Our system incorporates a convolutional-based autoencoder that synthesizes high-quality continuous human poses.
We show that our method outperforms state-of-the-art techniques using IMU sensors or 6-DoF tracking devices.
arXiv Detail & Related papers (2023-11-03T18:48:01Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- Physics-based Motion Retargeting from Sparse Inputs [73.94570049637717]
Commercial AR/VR products consist only of a headset and controllers, providing very limited sensor data of the user's pose.
We introduce a method to retarget motions in real-time from sparse human sensor data to characters of various morphologies.
We show that the avatar poses often match the user surprisingly well, despite having no sensor information of the lower body available.
arXiv Detail & Related papers (2023-07-04T21:57:05Z)
- Humans in 4D: Reconstructing and Tracking Humans with Transformers [72.50856500760352]
We present an approach to reconstruct humans and track them over time.
At the core of our approach, we propose a fully "transformerized" version of a network for human mesh recovery.
This network, HMR 2.0, advances the state of the art and shows the capability to analyze unusual poses that have in the past been difficult to reconstruct from single images.
arXiv Detail & Related papers (2023-05-31T17:59:52Z)
- Avatars Grow Legs: Generating Smooth Human Motion from Sparse Tracking Inputs with Diffusion Model [18.139630622759636]
We present AGRoL, a novel conditional diffusion model specifically designed to track full bodies given sparse upper-body tracking signals.
Our model is based on a simple multi-layer perceptron (MLP) architecture and a novel conditioning scheme for motion data.
Unlike common diffusion architectures, our compact architecture can run in real-time, making it suitable for online body-tracking applications.
arXiv Detail & Related papers (2023-04-17T19:35:13Z)
- MotionBERT: A Unified Perspective on Learning Human Motion Representations [46.67364057245364]
We present a unified perspective on tackling various human-centric video tasks by learning human motion representations from large-scale and heterogeneous data resources.
We propose a pretraining stage in which a motion encoder is trained to recover the underlying 3D motion from noisy partial 2D observations.
We implement the motion encoder with a Dual-stream Spatio-temporal Transformer (DSTformer) neural network.
arXiv Detail & Related papers (2022-10-12T19:46:25Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
- QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars [80.05743236282564]
Real-time tracking of human body motion is crucial for immersive experiences in AR/VR.
We present a reinforcement learning framework that takes in sparse signals from an HMD and two controllers.
We show that a single policy can be robust to diverse locomotion styles, different body sizes, and novel environments.
arXiv Detail & Related papers (2022-09-20T00:25:54Z)
- AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing [24.053096294334694]
We present AvatarPoser, the first learning-based method that predicts full-body poses in world coordinates using only motion input from the user's head and hands.
Our method builds on a Transformer encoder to extract deep features from the input signals and decouples global motion from the learned local joint orientations.
In our evaluation, AvatarPoser achieved new state-of-the-art results on large motion capture datasets.
arXiv Detail & Related papers (2022-07-27T20:52:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.