Pose-to-Motion: Cross-Domain Motion Retargeting with Pose Prior
- URL: http://arxiv.org/abs/2310.20249v1
- Date: Tue, 31 Oct 2023 08:13:00 GMT
- Title: Pose-to-Motion: Cross-Domain Motion Retargeting with Pose Prior
- Authors: Qingqing Zhao and Peizhuo Li and Wang Yifan and Olga Sorkine-Hornung
and Gordon Wetzstein
- Abstract summary: Current learning-based motion synthesis methods depend on extensive motion datasets.
Pose data is more accessible, since posed characters are easier to create and can even be extracted from images.
Our method generates plausible motions for characters that have only pose data by transferring motion from an existing motion capture dataset of another character.
- Score: 48.104051952928465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating believable motions for various characters has long been a goal in
computer graphics. Current learning-based motion synthesis methods depend on
extensive motion datasets, which are often challenging, if not impossible, to
obtain. On the other hand, pose data is more accessible, since static posed
characters are easier to create and can even be extracted from images using
recent advancements in computer vision. In this paper, we utilize this
alternative data source and introduce a neural motion synthesis approach
through retargeting. Our method generates plausible motions for characters that
have only pose data by transferring motion from an existing motion capture
dataset of another character, which can have drastically different skeletons.
Our experiments show that our method effectively combines the motion features
of the source character with the pose features of the target character, and
performs robustly with small or noisy pose data sets, ranging from a few
artist-created poses to noisy poses estimated directly from images.
Additionally, a user study indicated that a majority of participants found our
retargeted motion more enjoyable to watch, more lifelike in appearance, and less
prone to artifacts. Project page:
https://cyanzhao42.github.io/pose2motion
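The abstract above describes combining the motion features of the source character with the pose features of the target character, but gives no implementation details. Below is a minimal, hypothetical PyTorch sketch of one way such a pipeline could be wired together: a recurrent encoder over source motion, a pose autoencoder trained only on target poses that serves as a pose prior, and a retargeting head whose outputs are pulled toward the target's pose manifold. All module names, dimensions, and the prior-based loss term are illustrative assumptions, not the authors' architecture.
```python
# Hypothetical sketch (not the paper's code): source-motion encoder, target pose
# prior, and a retargeting head. Names, sizes, and losses are assumptions.
import torch
import torch.nn as nn

class MotionEncoder(nn.Module):
    """Encodes a source-character motion clip into per-frame latent features."""
    def __init__(self, src_dim, latent_dim=128):
        super().__init__()
        self.gru = nn.GRU(src_dim, latent_dim, batch_first=True)

    def forward(self, src_motion):            # (B, T, src_dim)
        feats, _ = self.gru(src_motion)       # (B, T, latent_dim)
        return feats

class PosePrior(nn.Module):
    """Autoencoder trained on static target poses; its reconstruction defines
    the manifold of plausible target-character poses."""
    def __init__(self, tgt_dim, code_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(tgt_dim, 256), nn.ReLU(), nn.Linear(256, code_dim))
        self.dec = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(), nn.Linear(256, tgt_dim))

    def forward(self, pose):
        return self.dec(self.enc(pose))

class Retargeter(nn.Module):
    """Maps per-frame motion features to target-character pose parameters."""
    def __init__(self, latent_dim, tgt_dim):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, tgt_dim))

    def forward(self, motion_feats):          # (B, T, latent_dim)
        return self.head(motion_feats)        # (B, T, tgt_dim)

def pose_prior_loss(pred_tgt_motion, pose_prior):
    # Keep every predicted frame close to the target pose manifold by penalizing
    # the pose prior's reconstruction residual (illustrative regularizer).
    recon = pose_prior(pred_tgt_motion)
    return ((recon - pred_tgt_motion) ** 2).mean()

# Example: transfer a 60-frame clip from a 24-joint source to a 30-joint target,
# with 4 rotation parameters per joint (illustrative sizes).
enc, prior, ret = MotionEncoder(24 * 4), PosePrior(30 * 4), Retargeter(128, 30 * 4)
src = torch.randn(1, 60, 24 * 4)
tgt_motion = ret(enc(src))
loss = pose_prior_loss(tgt_motion, prior)
```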
Related papers
- FreeMotion: MoCap-Free Human Motion Synthesis with Multimodal Large Language Models [19.09048969615117]
We explore open-set human motion synthesis using natural language instructions as user control signals based on MLLMs.
Our method can achieve general human motion synthesis for many downstream tasks.
arXiv Detail & Related papers (2024-06-15T21:10:37Z)
- Generating Human Interaction Motions in Scenes with Text Control [66.74298145999909]
We present TeSMo, a method for text-controlled scene-aware motion generation based on denoising diffusion models.
Our approach begins with pre-training a scene-agnostic text-to-motion diffusion model.
To facilitate training, we embed annotated navigation and interaction motions within scenes.
arXiv Detail & Related papers (2024-04-16T16:04:38Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- Physics-based Motion Retargeting from Sparse Inputs [73.94570049637717]
Commercial AR/VR products consist only of a headset and controllers, providing very limited sensor data of the user's pose.
We introduce a method to retarget motions in real-time from sparse human sensor data to characters of various morphologies.
We show that the avatar poses often match the user surprisingly well, despite having no sensor information of the lower body available.
arXiv Detail & Related papers (2023-07-04T21:57:05Z)
- Mutual Information-Based Temporal Difference Learning for Human Pose Estimation in Video [16.32910684198013]
We present a novel multi-frame human pose estimation framework, which employs temporal differences across frames to model dynamic contexts.
To be specific, we design multi-stage entangled learning sequences conditioned on multi-stage differences to derive informative motion representations.
This places us at rank No.1 in the Crowd Pose Estimation in Complex Events Challenge on the HiEve benchmark.
arXiv Detail & Related papers (2023-03-15T09:29:03Z)
- A Hierarchy-Aware Pose Representation for Deep Character Animation [2.47343886645587]
We present a robust pose representation for motion modeling, suitable for deep character animation.
Our representation is based on dual quaternions, mathematical abstractions with well-defined operations that simultaneously encode rotational and positional information (see the dual-quaternion sketch after this list).
We show that our representation overcomes common motion artifacts, and assess its performance compared to other popular representations.
arXiv Detail & Related papers (2021-11-27T14:33:24Z)
- High-Fidelity Neural Human Motion Transfer from Monocular Video [71.75576402562247]
Video-based human motion transfer creates video animations of humans following a source motion.
We present a new framework which performs high-fidelity and temporally-consistent human motion transfer with natural pose-dependent non-rigid deformations.
In the experimental results, we significantly outperform the state-of-the-art in terms of video realism.
arXiv Detail & Related papers (2020-12-20T16:54:38Z)
- Human Motion Transfer from Poses in the Wild [61.6016458288803]
We tackle the problem of human motion transfer, where we synthesize novel motion video for a target person that imitates the movement from a reference video.
It is a video-to-video translation task in which the estimated poses are used to bridge two domains.
We introduce a novel pose-to-video translation framework for generating high-quality videos that are temporally coherent even for in-the-wild pose sequences unseen during training.
arXiv Detail & Related papers (2020-04-07T05:59:53Z)
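As a follow-up to the dual-quaternion representation mentioned in the "A Hierarchy-Aware Pose Representation for Deep Character Animation" entry above, the sketch below shows the standard dual-quaternion construction in NumPy. This is general textbook math, not code from that paper: a rigid transform with rotation quaternion q_r and translation t is stored as the pair (q_r, q_d) with q_d = 0.5 * (0, t) ⊗ q_r, so rotation and translation live in one eight-number object, and transforms compose by dual-quaternion multiplication.
```python
# Standard dual-quaternion construction (general math, not code from the paper).
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    """Quaternion conjugate."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def dq_from_rt(q_r, t):
    """Encode rotation q_r and translation t as a dual quaternion (q_r, q_d)."""
    q_d = 0.5 * qmul(np.array([0.0, *t]), q_r)
    return q_r, q_d

def dq_mul(dq_a, dq_b):
    """Compose two rigid transforms: apply dq_b first, then dq_a."""
    r1, d1 = dq_a
    r2, d2 = dq_b
    return qmul(r1, r2), qmul(r1, d2) + qmul(d1, r2)

def dq_translation(dq):
    """Recover the translation vector from a unit dual quaternion."""
    r, d = dq
    return (2.0 * qmul(d, qconj(r)))[1:]

# Example: a 90-degree rotation about Z followed by a unit step along X,
# composed with itself; print the translation of the composed transform.
rot90z = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
bone = dq_from_rt(rot90z, np.array([1.0, 0.0, 0.0]))
print(dq_translation(dq_mul(bone, bone)))
```
Because both components are carried by one algebraic object with a single composition rule, rotations and translations are never interpolated or concatenated separately, which is the property such hierarchy-aware pose representations rely on.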