QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars
- URL: http://arxiv.org/abs/2209.09391v1
- Date: Tue, 20 Sep 2022 00:25:54 GMT
- Title: QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars
- Authors: Alexander Winkler, Jungdam Won, Yuting Ye
- Abstract summary: Real-time tracking of human body motion is crucial for immersive experiences in AR/VR.
We present a reinforcement learning framework that takes in sparse signals from an HMD and two controllers.
We show that a single policy can be robust to diverse locomotion styles, different body sizes, and novel environments.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-time tracking of human body motion is crucial for interactive and
immersive experiences in AR/VR. However, very limited sensor data about the
body is available from standalone wearable devices such as HMDs (Head Mounted
Devices) or AR glasses. In this work, we present a reinforcement learning
framework that takes in sparse signals from an HMD and two controllers, and
simulates plausible and physically valid full body motions. Using high quality
full body motion as dense supervision during training, a simple policy network
can learn to output appropriate torques for the character to balance, walk, and
jog, while closely following the input signals. Our results demonstrate
surprisingly similar leg motions to ground truth without any observations of
the lower body, even when the input is only the 6D transformations of the HMD.
We also show that a single policy can be robust to diverse locomotion styles,
different body sizes, and novel environments.
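The abstract describes a simple policy network that maps sparse 6D signals from the HMD and two controllers, together with the simulated character's state, to joint torques. Below is a minimal PyTorch sketch of that interface; the dimensions, the plain two-hidden-layer MLP, and all names (e.g. SparseSensorPolicy) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a sparse-sensor tracking policy: 6D device
# transforms + character state in, per-joint torques out. All sizes and
# names are assumptions for illustration.
import torch
import torch.nn as nn

class SparseSensorPolicy(nn.Module):
    def __init__(self, num_joints: int = 33, hidden: int = 256):
        super().__init__()
        # 3 tracked devices (HMD + two controllers), each encoded as
        # position (3) + a 6D rotation representation (6) = 9 values.
        sensor_dim = 3 * 9
        # Proprioceptive character state: joint angles + joint velocities.
        state_dim = 2 * num_joints
        self.net = nn.Sequential(
            nn.Linear(sensor_dim + state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_joints),  # one torque per actuated joint
        )

    def forward(self, sensors: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([sensors, state], dim=-1))

policy = SparseSensorPolicy()
torques = policy(torch.zeros(1, 27), torch.zeros(1, 66))  # -> shape (1, 33)
```

In the paper's setting such a policy is trained with reinforcement learning, using high-quality full-body motion capture as dense supervision; the sketch only illustrates the sparse-in, torques-out interface.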
Related papers
- Learning Multi-Modal Whole-Body Control for Real-World Humanoid Robots
The Masked Humanoid Controller (MHC) supports standing, walking, and mimicry of whole- and partial-body motions.
It imitates partially masked motions from a library of behaviors spanning standing, walking, optimized reference trajectories, re-targeted video clips, and human motion capture data.
We demonstrate sim-to-real transfer on the real-world Digit V3 humanoid robot.
arXiv Detail & Related papers (2024-07-30T09:10:24Z)
- Real-Time Simulated Avatar from Head-Mounted Sensors
We present SimXR, a method for controlling a simulated avatar from information (headset pose and cameras) obtained from AR/VR headsets.
To combine headset poses with camera images, we control a humanoid to track headset movement while analyzing input images to decide body movement.
When body parts are seen, the movements of hands and feet will be guided by the images; when unseen, the laws of physics guide the controller to generate plausible motion.
arXiv Detail & Related papers (2024-03-11T16:15:51Z)
- DivaTrack: Diverse Bodies and Motions from Acceleration-Enhanced Three-Point Trackers
Full-body avatar presence is crucial for immersive social and environmental interactions in digital reality.
Current devices only provide three six-degree-of-freedom (6-DoF) poses, one each from the headset and two controllers.
We propose a deep learning framework, DivaTrack, which outperforms existing methods when applied to diverse body sizes and activities.
arXiv Detail & Related papers (2024-02-14T14:46:03Z)
- Universal Humanoid Motion Representations for Physics-Based Control
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of the human motion in a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- HMD-NeMo: Online 3D Avatar Motion Generation From Sparse Observations
Head-Mounted Devices (HMDs) typically provide only a few input signals, such as the 6-DoF poses of the head and hands.
We propose HMD-NeMo, the first unified approach that generates plausible and accurate full-body motion even when the hands are only partially visible.
arXiv Detail & Related papers (2023-08-22T08:07:12Z)
- Physics-based Motion Retargeting from Sparse Inputs
Commercial AR/VR products consist only of a headset and controllers, providing very limited sensor data of the user's pose.
We introduce a method to retarget motions in real-time from sparse human sensor data to characters of various morphologies.
We show that the avatar poses often match the user surprisingly well, despite having no sensor information of the lower body available.
arXiv Detail & Related papers (2023-07-04T21:57:05Z)
- QuestEnvSim: Environment-Aware Simulated Motion Tracking from Sparse Sensors
We show that headset and controller poses alone suffice to generate realistic full-body poses, even in highly constrained environments.
We discuss three features crucial to the method's performance: the environment representation, the contact reward, and scene randomization.
arXiv Detail & Related papers (2023-06-09T04:40:38Z)
- Avatars Grow Legs: Generating Smooth Human Motion from Sparse Tracking Inputs with Diffusion Model
We present AGRoL, a novel conditional diffusion model specifically designed to track full bodies given sparse upper-body tracking signals.
Our model is based on a simple multi-layer perceptron (MLP) architecture and a novel conditioning scheme for motion data.
Unlike common diffusion architectures, our compact architecture can run in real-time, making it suitable for online body-tracking applications.
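(A hedged sketch of this kind of MLP denoiser appears after this list.)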
arXiv Detail & Related papers (2023-04-17T19:35:13Z)
- Human POSEitioning System (HPS): 3D Human Pose Estimation and Self-localization in Large Scenes from Body-Mounted Sensors
We introduce the Human POSEitioning System (HPS), a method to recover the full 3D pose of a human registered with a 3D scan of the surrounding environment.
We show that our optimization-based integration of body-mounted sensing and scene registration exploits the benefits of both, resulting in drift-free pose estimates.
HPS could be used for VR/AR applications where humans interact with the scene without requiring direct line of sight with an external camera.
arXiv Detail & Related papers (2021-03-31T17:58:31Z)
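AGRoL, listed above, is described as a compact MLP-based conditional diffusion model that recovers full-body motion from sparse upper-body tracking signals. The sketch below illustrates that idea under stated assumptions: the layer layout, dimensions, and names (e.g. MLPDenoiser) are hypothetical, not the paper's architecture.

```python
# Hedged sketch of an MLP diffusion denoiser conditioned on sparse
# upper-body tracking signals. Dimensions and layout are assumptions.
import torch
import torch.nn as nn

class MLPDenoiser(nn.Module):
    def __init__(self, motion_dim: int = 132, cond_dim: int = 54, hidden: int = 512):
        super().__init__()
        # Embed the scalar diffusion timestep so the net knows the noise level.
        self.time_embed = nn.Sequential(nn.Linear(1, hidden), nn.SiLU())
        self.net = nn.Sequential(
            nn.Linear(motion_dim + cond_dim + hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, motion_dim),  # predict the noise to subtract
        )

    def forward(self, noisy_motion, sparse_cond, t):
        temb = self.time_embed(t.float().unsqueeze(-1))
        return self.net(torch.cat([noisy_motion, sparse_cond, temb], dim=-1))

model = MLPDenoiser()
eps = model(torch.randn(1, 132), torch.randn(1, 54), torch.tensor([10]))  # (1, 132)
```

Because the backbone is a small MLP rather than a transformer, each denoising step is cheap, which is consistent with the real-time claim in the summary.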
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences of its use.