GANimator: Neural Motion Synthesis from a Single Sequence
- URL: http://arxiv.org/abs/2205.02625v1
- Date: Thu, 5 May 2022 13:04:14 GMT
- Title: GANimator: Neural Motion Synthesis from a Single Sequence
- Authors: Peizhuo Li, Kfir Aberman, Zihan Zhang, Rana Hanocka, Olga
Sorkine-Hornung
- Abstract summary: We present GANimator, a generative model that learns to synthesize novel motions from a single, short motion sequence.
GANimator generates motions that resemble the core elements of the original motion, while simultaneously synthesizing novel and diverse movements.
We show a number of applications, including crowd simulation, key-frame editing, style transfer, and interactive control, which all learn from a single input sequence.
- Score: 38.361579401046875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present GANimator, a generative model that learns to synthesize novel
motions from a single, short motion sequence. GANimator generates motions that
resemble the core elements of the original motion, while simultaneously
synthesizing novel and diverse movements. Existing data-driven techniques for
motion synthesis require a large motion dataset which contains the desired and
specific skeletal structure. By contrast, GANimator only requires training on a
single motion sequence, enabling novel motion synthesis for a variety of
skeletal structures, e.g., bipeds, quadrupeds, hexapeds, and more. Our framework
contains a series of generative and adversarial neural networks, each
responsible for generating motions at a specific frame rate. The framework
progressively learns to synthesize motion from random noise, enabling
hierarchical control over the generated motion content across varying levels of
detail. We show a number of applications, including crowd simulation, key-frame
editing, style transfer, and interactive control, which all learn from a single
input sequence. Code and data for this paper are at
https://peizhuoli.github.io/ganimator.
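The sketch below illustrates the kind of coarse-to-fine generator stack the abstract describes: a pyramid of small temporal generators, each refining the output of the previous, coarser level. It is a minimal, hypothetical example written for this note, not the authors' released code; the residual temporal-convolution generators, the upsampling factor of 2, and all layer sizes are assumptions made purely for illustration.

```python
# Minimal sketch of a coarse-to-fine motion GAN generator stack (illustrative
# only, not the GANimator implementation). Motion is assumed to be a tensor of
# shape (batch, channels, frames), where channels pack joint rotations and
# root displacement.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LevelGenerator(nn.Module):
    """One pyramid level: refines an upsampled coarse motion plus noise."""
    def __init__(self, channels, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv1d(hidden, channels, kernel_size=5, padding=2),
        )

    def forward(self, coarse, noise):
        # Residual refinement: predict a correction on top of the temporally
        # upsampled output of the previous, coarser level.
        return coarse + self.net(coarse + noise)

def synthesize(generators, noises, channels, base_frames=32):
    """Run the stack from pure noise at the coarsest level to full frame rate."""
    motion = torch.zeros(1, channels, base_frames)
    for level, (gen, noise) in enumerate(zip(generators, noises)):
        if level > 0:
            # Double the temporal resolution before each finer level
            # (the actual per-level frame rates are an assumption here).
            motion = F.interpolate(motion, scale_factor=2, mode="linear",
                                   align_corners=False)
        motion = gen(motion, noise[..., : motion.shape[-1]])
    return motion

# In a single-sequence setup, each level would be trained adversarially against
# a discriminator that sees the one training sequence downsampled to that
# level's frame rate.
```

In such a stack, the coarse levels govern the overall content of the generated motion while the finer levels add detail, which is the sense in which the abstract speaks of hierarchical control across varying levels of detail.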
Related papers
- Infinite Motion: Extended Motion Generation via Long Text Instructions [51.61117351997808]
"Infinite Motion" is a novel approach that leverages long text to extended motion generation.
Key innovation of our model is its ability to accept arbitrary lengths of text as input.
We incorporate the timestamp design for text which allows precise editing of local segments within the generated sequences.
arXiv Detail & Related papers (2024-07-11T12:33:56Z) - FreeMotion: A Unified Framework for Number-free Text-to-Motion Synthesis [65.85686550683806]
This paper reconsiders motion generation and proposes to unify single-person and multi-person motion via a conditional motion distribution.
Based on our framework, the current single-person motion spatial control method can be seamlessly integrated, achieving precise control of multi-person motion.
arXiv Detail & Related papers (2024-05-24T17:57:57Z) - Example-based Motion Synthesis via Generative Motion Matching [44.20519633463265]
We present GenMM, a generative model that "mines" as many diverse motions as possible from a single or a few example sequences.
GenMM inherits the training-free nature and the superior quality of the well-known Motion Matching method.
arXiv Detail & Related papers (2023-06-01T06:19:33Z) - NEURAL MARIONETTE: A Transformer-based Multi-action Human Motion
Synthesis System [51.43113919042621]
We present a neural network-based system for long-term, multi-action human motion synthesis.
The system can produce meaningful motions with smooth transitions from simple user input.
We also present a new dataset dedicated to the multi-action motion synthesis task.
arXiv Detail & Related papers (2022-09-27T07:10:20Z) - MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model [35.32967411186489]
MotionDiffuse is a diffusion model-based text-driven motion generation framework.
It excels at modeling complicated data distribution and generating vivid motion sequences.
It responds to fine-grained instructions on body parts and supports arbitrary-length motion synthesis with time-varied text prompts.
arXiv Detail & Related papers (2022-08-31T17:58:54Z) - Diverse Dance Synthesis via Keyframes with Transformer Controllers [10.23813069057791]
We propose a novel keyframe-based motion generation network based on multiple constraints, which can achieve diverse dance synthesis via learned knowledge.
The backbone of our network is a hierarchical RNN module composed of two long short-term memory (LSTM) units, in which the first LSTM is utilized to embed the posture information of the historical frames into a latent space.
Our framework contains two Transformer-based controllers, which are used to model the constraints of the root trajectory and the velocity factor respectively.
arXiv Detail & Related papers (2022-07-13T00:56:46Z) - MoDi: Unconditional Motion Synthesis from Diverse Data [51.676055380546494]
We present MoDi, an unconditional generative model that synthesizes diverse motions.
Our model is trained in a completely unsupervised setting from a diverse, unstructured and unlabeled motion dataset.
We show that despite the lack of any structure in the dataset, the latent space can be semantically clustered.
arXiv Detail & Related papers (2022-06-16T09:06:25Z) - NeMF: Neural Motion Fields for Kinematic Animation [6.570955948572252]
We express the vast motion space as a continuous function over time, hence the name Neural Motion Fields (NeMF).
We use a neural network to learn this function for miscellaneous sets of motions (a minimal sketch of this idea appears after this list).
We train our model on a diverse human motion dataset and a quadruped dataset to demonstrate its versatility.
arXiv Detail & Related papers (2022-06-04T05:53:27Z) - Hierarchical Style-based Networks for Motion Synthesis [150.226137503563]
We propose a self-supervised method for generating long-range, diverse and plausible behaviors to achieve a specific goal location.
Our proposed method learns to model human motion by decomposing a long-range generation task in a hierarchical manner.
On a large-scale skeleton dataset, we show that the proposed method is able to synthesize long-range, diverse and plausible motion.
arXiv Detail & Related papers (2020-08-24T02:11:02Z)