Hierarchical Style-based Networks for Motion Synthesis
- URL: http://arxiv.org/abs/2008.10162v1
- Date: Mon, 24 Aug 2020 02:11:02 GMT
- Title: Hierarchical Style-based Networks for Motion Synthesis
- Authors: Jingwei Xu, Huazhe Xu, Bingbing Ni, Xiaokang Yang, Xiaolong Wang,
Trevor Darrell
- Abstract summary: We propose a self-supervised method for generating long-range, diverse and plausible behaviors to achieve a specific goal location.
Our proposed method learns to model human motion by decomposing a long-range generation task in a hierarchical manner.
On a large-scale skeleton dataset, we show that the proposed method is able to synthesise long-range, diverse and plausible motion.
- Score: 150.226137503563
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating diverse and natural human motion is one of the long-standing goals
for creating intelligent characters in the animated world. In this paper, we
propose a self-supervised method for generating long-range, diverse and
plausible behaviors to achieve a specific goal location. Our proposed method
learns to model human motion by decomposing a long-range generation task
in a hierarchical manner. Given the starting and ending states, a memory bank
is used to retrieve motion references as source material for short-range clip
generation. We first propose to explicitly disentangle the provided motion
material into style and content counterparts via bi-linear transformation
modelling, where diverse synthesis is achieved by free-form combination of
these two components. The short-range clips are then connected to form a
long-range motion sequence. Without ground truth annotation, we propose a
parameterized bi-directional interpolation scheme to guarantee the physical
validity and visual naturalness of generated results. On a large-scale skeleton
dataset, we show that the proposed method is able to synthesise long-range,
diverse and plausible motion, which is also generalizable to unseen motion data
during testing. Moreover, we demonstrate that the generated sequences are useful as
subgoals for actual physical execution in the animated world.
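The abstract describes two mechanisms only in words: bilinear style/content disentanglement with free-form recombination, and parameterized bi-directional interpolation between a forward and a backward short-range prediction. The snippet below is a minimal, hypothetical NumPy sketch of both ideas, not the authors' implementation; the latent dimensions, the per-coordinate bilinear decoder, and the monotone blending schedule w = t**alpha are assumptions introduced here purely for illustration.

```python
# Minimal sketch (assumed shapes and names, not the paper's code) of:
# (1) bilinear style/content combination, (2) bi-directional blending.
import numpy as np

rng = np.random.default_rng(0)
D_STYLE, D_CONTENT, D_POSE = 8, 16, 24   # assumed latent / pose dimensions

# Bilinear "decoder": one D_STYLE x D_CONTENT matrix per pose coordinate,
# so each output coordinate k is the bilinear form style^T W_k content.
W = rng.normal(scale=0.1, size=(D_POSE, D_STYLE, D_CONTENT))

def bilinear_combine(style, content):
    """Combine a style code and a content code into a single pose vector."""
    return np.einsum('s,ksc,c->k', style, W, content)

# Free-form recombination: any style paired with any content yields a new pose.
style_a, style_b = rng.normal(size=(2, D_STYLE))
content_x = rng.normal(size=D_CONTENT)
pose_ax = bilinear_combine(style_a, content_x)   # content x rendered in style a
pose_bx = bilinear_combine(style_b, content_x)   # same content, different style

def bidirectional_blend(forward_clip, backward_clip, alpha=2.0):
    """Blend a forward prediction (from the start state) and a backward
    prediction (from the goal state) with a parameterized weight curve, so
    the result follows the start early on and the goal near the end."""
    T = forward_clip.shape[0]
    t = np.linspace(0.0, 1.0, T)[:, None]
    w = t ** alpha                    # assumed monotone schedule; alpha is a free parameter
    return (1.0 - w) * forward_clip + w * backward_clip

forward = rng.normal(size=(30, D_POSE))    # placeholder short-range predictions
backward = rng.normal(size=(30, D_POSE))
clip = bidirectional_blend(forward, backward, alpha=2.0)
print(pose_ax.shape, pose_bx.shape, clip.shape)   # (24,) (24,) (30, 24)
```

In this reading, long-range synthesis would chain such blended short-range clips between retrieved references; the actual parameterization of the interpolation is learned in the paper rather than fixed as above.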
Related papers
- Infinite Motion: Extended Motion Generation via Long Text Instructions [51.61117351997808]
"Infinite Motion" is a novel approach that leverages long text to extended motion generation.
Key innovation of our model is its ability to accept arbitrary lengths of text as input.
We incorporate the timestamp design for text which allows precise editing of local segments within the generated sequences.
arXiv Detail & Related papers (2024-07-11T12:33:56Z) - Hierarchical Generation of Human-Object Interactions with Diffusion
Probabilistic Models [71.64318025625833]
This paper presents a novel approach to generating the 3D motion of a human interacting with a target object.
Our framework first generates a set of milestones and then synthesizes the motion along them.
The experiments on the NSM, COUCH, and SAMP datasets show that our approach outperforms previous methods by a large margin in both quality and diversity.
arXiv Detail & Related papers (2023-10-03T17:50:23Z) - Task-Oriented Human-Object Interactions Generation with Implicit Neural
Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z) - Correspondence-free online human motion retargeting [1.7008985510992145]
We present a data-driven framework for unsupervised human motion retargeting that animates a target subject with the motion of a source subject.
Our method is correspondence-free, requiring neither correspondences between the source and target shapes nor temporal correspondences between different frames of the source motion.
This allows animating a target shape with arbitrary sequences of humans in motion, possibly captured using 4D acquisition platforms or consumer devices.
arXiv Detail & Related papers (2023-02-01T16:23:21Z) - MoDi: Unconditional Motion Synthesis from Diverse Data [51.676055380546494]
We present MoDi, an unconditional generative model that synthesizes diverse motions.
Our model is trained in a completely unsupervised setting from a diverse, unstructured and unlabeled motion dataset.
We show that despite the lack of any structure in the dataset, the latent space can be semantically clustered.
arXiv Detail & Related papers (2022-06-16T09:06:25Z) - Towards Diverse and Natural Scene-aware 3D Human Motion Synthesis [117.15586710830489]
We focus on the problem of synthesizing diverse scene-aware human motions under the guidance of target action sequences.
A hierarchical framework based on a factorized scheme is proposed, with each sub-module responsible for modeling one aspect.
Experimental results show that the proposed framework remarkably outperforms previous methods in terms of diversity and naturalness.
arXiv Detail & Related papers (2022-05-25T18:20:01Z) - GANimator: Neural Motion Synthesis from a Single Sequence [38.361579401046875]
We present GANimator, a generative model that learns to synthesize novel motions from a single, short motion sequence.
GANimator generates motions that resemble the core elements of the original motion, while simultaneously synthesizing novel and diverse movements.
We show a number of applications, including crowd simulation, key-frame editing, style transfer, and interactive control, which all learn from a single input sequence.
arXiv Detail & Related papers (2022-05-05T13:04:14Z) - LARNet: Latent Action Representation for Human Action Synthesis [3.3454373538792552]
We present LARNet, a novel end-to-end approach for generating human action videos.
We learn action dynamics in latent space, avoiding the need for a driving video during inference.
We evaluate the proposed approach on four real-world human action datasets.
arXiv Detail & Related papers (2021-10-21T05:04:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences of its use.