Physics-based Motion Retargeting from Sparse Inputs
- URL: http://arxiv.org/abs/2307.01938v1
- Date: Tue, 4 Jul 2023 21:57:05 GMT
- Title: Physics-based Motion Retargeting from Sparse Inputs
- Authors: Daniele Reda, Jungdam Won, Yuting Ye, Michiel van de Panne, Alexander
Winkler
- Abstract summary: Commercial AR/VR products consist only of a headset and controllers, providing very limited sensor data of the user's pose.
We introduce a method to retarget motions in real-time from sparse human sensor data to characters of various morphologies.
We show that the avatar poses often match the user surprisingly well, despite having no sensor information of the lower body available.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Avatars are important to create interactive and immersive experiences in
virtual worlds. One challenge in animating these characters to mimic a user's
motion is that commercial AR/VR products consist only of a headset and
controllers, providing very limited sensor data of the user's pose. Another
challenge is that an avatar might have a different skeleton structure than a
human and the mapping between them is unclear. In this work we address both of
these challenges. We introduce a method to retarget motions in real-time from
sparse human sensor data to characters of various morphologies. Our method uses
reinforcement learning to train a policy to control characters in a physics
simulator. We only require human motion capture data for training, without
relying on artist-generated animations for each avatar. This allows us to use
large motion capture datasets to train general policies that can track unseen
users from real and sparse data in real-time. We demonstrate the feasibility of
our approach on three characters with different skeleton structure: a dinosaur,
a mouse-like creature and a human. We show that the avatar poses often match
the user surprisingly well, despite having no sensor information of the lower
body available. We discuss and ablate the key components of our framework:
the kinematic retargeting step; the imitation, contact, and action rewards;
and our asymmetric actor-critic observations. We
further explore the robustness of our method in a variety of settings including
unbalancing, dancing and sports motions.
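The asymmetric actor-critic observations mentioned in the abstract can be sketched as follows: the actor receives only the sparse signals available at runtime (headset and two controllers), while the critic additionally observes privileged full-body state from the physics simulator during training. The dimensions, layer sizes, and observation layout below are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

SPARSE_OBS = 3 * 9   # assumed: position + 6D rotation for headset and two controllers
PRIV_OBS = 200       # assumed: full simulated-body state, available only in training
N_ACTIONS = 30       # assumed: PD targets for the character's actuated joints

def mlp(in_dim, out_dim, hidden=256):
    """Small fully connected network used for both actor and critic."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

# Actor: deployable at runtime with only headset + controller signals.
actor = mlp(SPARSE_OBS, N_ACTIONS)

# Critic: sees the sparse observations plus privileged simulator state,
# which is only needed during training.
critic = mlp(SPARSE_OBS + PRIV_OBS, 1)

sparse = torch.randn(8, SPARSE_OBS)        # batch of sensor observations
privileged = torch.randn(8, PRIV_OBS)      # full-body state from the simulator

actions = actor(sparse)                    # runtime path: no privileged state
values = critic(torch.cat([sparse, privileged], dim=-1))
```

The design point is that the value function can exploit information the policy will never see at deployment, which typically stabilizes training without changing what the runtime policy requires.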
Related papers
- AvatarGO: Zero-shot 4D Human-Object Interaction Generation and Animation [60.5897687447003]
AvatarGO is a novel framework designed to generate realistic 4D HOI scenes from textual inputs.
Our framework not only generates coherent compositional motions, but also exhibits greater robustness in handling issues.
As the first attempt to synthesize 4D avatars with object interactions, we hope AvatarGO could open new doors for human-centric 4D content creation.
arXiv Detail & Related papers (2024-10-09T17:58:56Z)
- SparsePoser: Real-time Full-body Motion Reconstruction from Sparse Data [1.494051815405093]
We introduce SparsePoser, a novel deep learning-based solution for reconstructing a full-body pose from sparse data.
Our system incorporates a convolutional-based autoencoder that synthesizes high-quality continuous human poses.
We show that our method outperforms state-of-the-art techniques using IMU sensors or 6-DoF tracking devices.
arXiv Detail & Related papers (2023-11-03T18:48:01Z)
- Pose-to-Motion: Cross-Domain Motion Retargeting with Pose Prior [48.104051952928465]
Current learning-based motion synthesis methods depend on extensive motion datasets.
Pose data, by contrast, is more accessible, since posed characters are easier to create and can even be extracted from images.
Our method generates plausible motions for characters that have only pose data by transferring motion from an existing motion capture dataset of another character.
arXiv Detail & Related papers (2023-10-31T08:13:00Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate a wide range of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- QuestEnvSim: Environment-Aware Simulated Motion Tracking from Sparse Sensors [69.75711933065378]
We show that headset and controller pose can generate realistic full-body poses even in highly constrained environments.
We discuss three features crucial to the method's performance: the environment representation, the contact reward, and scene randomization.
arXiv Detail & Related papers (2023-06-09T04:40:38Z)
- QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars [80.05743236282564]
Real-time tracking of human body motion is crucial for immersive experiences in AR/VR.
We present a reinforcement learning framework that takes in sparse signals from an HMD and two controllers.
We show that a single policy can be robust to diverse locomotion styles, different body sizes, and novel environments.
arXiv Detail & Related papers (2022-09-20T00:25:54Z)
- AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing [24.053096294334694]
We present AvatarPoser, the first learning-based method that predicts full-body poses in world coordinates using only motion input from the user's head and hands.
Our method builds on a Transformer encoder to extract deep features from the input signals and decouples global motion from the learned local joint orientations.
In our evaluation on large motion capture datasets, AvatarPoser achieved new state-of-the-art results.
arXiv Detail & Related papers (2022-07-27T20:52:39Z)
- S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling [103.65625425020129]
We represent the pedestrian's shape, pose and skinning weights as neural implicit functions that are directly learned from data.
We demonstrate the effectiveness of our approach on various datasets and show that our reconstructions outperform existing state-of-the-art methods.
arXiv Detail & Related papers (2021-01-17T02:16:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.