MoDi: Unconditional Motion Synthesis from Diverse Data
- URL: http://arxiv.org/abs/2206.08010v1
- Date: Thu, 16 Jun 2022 09:06:25 GMT
- Title: MoDi: Unconditional Motion Synthesis from Diverse Data
- Authors: Sigal Raab, Inbal Leibovitch, Peizhuo Li, Kfir Aberman, Olga
Sorkine-Hornung, Daniel Cohen-Or
- Abstract summary: We present MoDi, an unconditional generative model that synthesizes diverse motions.
Our model is trained in a completely unsupervised setting from a diverse, unstructured and unlabeled motion dataset.
We show that despite the lack of any structure in the dataset, the latent space can be semantically clustered.
- Score: 51.676055380546494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emergence of neural networks has revolutionized the field of motion
synthesis. Yet, learning to unconditionally synthesize motions from a given
distribution remains a challenging task, especially when the motions are highly
diverse. We present MoDi, an unconditional generative model that synthesizes
diverse motions. Our model is trained in a completely unsupervised setting from
a diverse, unstructured and unlabeled motion dataset and yields a well-behaved,
highly semantic latent space. The design of our model follows the prolific
architecture of StyleGAN and adapts two of its key technical components into
the motion domain: a set of style-codes injected into each level of the
generator hierarchy and a mapping function that learns and forms a disentangled
latent space. We show that despite the lack of any structure in the dataset,
the latent space can be semantically clustered, and facilitates semantic
editing and motion interpolation. In addition, we propose a technique to invert
unseen motions into the latent space, and demonstrate latent-based motion
editing operations that otherwise cannot be achieved by naive manipulation of
explicit motion representations. Our qualitative and quantitative experiments
show that our framework achieves state-of-the-art synthesis quality that can
follow the distribution of highly diverse motion datasets. Code and trained
models will be released at https://sigal-raab.github.io/MoDi.
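To make the two adapted StyleGAN components concrete, below is a minimal PyTorch sketch of the idea the abstract describes, not the authors' released implementation: a mapping network that transforms a Gaussian latent z into a disentangled latent w, and per-level style codes derived from w that modulate each level of a temporal-convolution generator. All names, dimensions, and layer choices (MappingNetwork, w_dim=128, 72 output channels for joint features, AdaIN-style modulation) are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MappingNetwork(nn.Module):
    # Learns to map a Gaussian latent z to a disentangled latent w.
    def __init__(self, z_dim=128, w_dim=128, n_layers=4):
        super().__init__()
        layers = []
        for i in range(n_layers):
            layers += [nn.Linear(z_dim if i == 0 else w_dim, w_dim),
                       nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

class StyledMotionBlock(nn.Module):
    # One generator level: a temporal convolution whose activations are
    # modulated by a style code computed from w (AdaIN-style injection).
    def __init__(self, in_ch, out_ch, w_dim=128):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1)
        self.style = nn.Linear(w_dim, out_ch * 2)  # per-channel scale and bias

    def forward(self, x, w):
        x = F.instance_norm(self.conv(x))
        scale, bias = self.style(w).chunk(2, dim=1)
        x = x * (1 + scale.unsqueeze(-1)) + bias.unsqueeze(-1)
        return F.leaky_relu(x, 0.2)

class MotionGenerator(nn.Module):
    # Grows a learned constant into a motion clip of shape
    # (batch, joint_features, frames), injecting a style code at every level.
    def __init__(self, w_dim=128, channels=(256, 128, 64), out_ch=72, frames0=16):
        super().__init__()
        self.const = nn.Parameter(torch.randn(1, channels[0], frames0))
        self.blocks = nn.ModuleList(
            StyledMotionBlock(channels[i], channels[i + 1], w_dim)
            for i in range(len(channels) - 1))
        self.to_motion = nn.Conv1d(channels[-1], out_ch, kernel_size=1)

    def forward(self, w):
        x = self.const.expand(w.shape[0], -1, -1)
        for block in self.blocks:
            x = F.interpolate(x, scale_factor=2, mode="linear", align_corners=False)
            x = block(x, w)  # the same w feeds a distinct style head per level
        return self.to_motion(x)

mapping, gen = MappingNetwork(), MotionGenerator()
z = torch.randn(4, 128)
motion = gen(mapping(z))  # shape (4, 72, 64): 4 clips, 72 features, 64 frames

# Latent inversion of an unseen motion, sketched as plain optimization over w.
# This is one generic strategy; the paper's actual inversion technique may differ.
target = torch.randn(1, 72, 64)  # stand-in for a real, preprocessed motion clip
w = mapping(torch.randn(1, 128)).detach().requires_grad_(True)
opt = torch.optim.Adam([w], lr=0.01)
for _ in range(200):
    opt.zero_grad()
    loss = F.mse_loss(gen(w), target)
    loss.backward()
    opt.step()

Once a motion is inverted to some w, latent-based operations such as interpolating between two inverted motions (w = t * w_a + (1 - t) * w_b) become straightforward, which is the kind of editing the abstract argues cannot be achieved by naive manipulation of explicit motion representations.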
Related papers
- Multi-Resolution Generative Modeling of Human Motion from Limited Data [3.5229503563299915]
We present a generative model that learns to synthesize human motion from limited training sequences.
The model adeptly captures human motion patterns by integrating skeletal convolution layers and a multi-scale architecture.
arXiv Detail & Related papers (2024-11-25T15:36:29Z)
- Motion-Oriented Compositional Neural Radiance Fields for Monocular Dynamic Human Modeling [10.914612535745789]
This paper introduces Motion-oriented Compositional Neural Radiance Fields (MoCo-NeRF), a framework designed to perform free-viewpoint rendering of monocular human videos.
arXiv Detail & Related papers (2024-07-16T17:59:01Z)
- A Unified Framework for Multimodal, Multi-Part Human Motion Synthesis [17.45562922442149]
We introduce a cohesive and scalable approach that consolidates multimodal (text, music, speech) and multi-part (hand, torso) human motion generation.
Our method frames the multimodal motion generation challenge as a token prediction task, drawing from specialized codebooks based on the modality of the control signal.
arXiv Detail & Related papers (2023-11-28T04:13:49Z)
- DiverseMotion: Towards Diverse Human Motion Generation via Discrete Diffusion [70.33381660741861]
We present DiverseMotion, a new approach for synthesizing high-quality human motions conditioned on textual descriptions.
We show that our DiverseMotion achieves state-of-the-art motion quality and competitive motion diversity.
arXiv Detail & Related papers (2023-09-04T05:43:48Z)
- MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis [73.52948992990191]
MoFusion is a new denoising-diffusion-based framework for high-quality conditional human motion synthesis.
We present ways to introduce well-known kinematic losses for motion plausibility within the motion diffusion framework.
We demonstrate the effectiveness of MoFusion compared to the state of the art on established benchmarks in the literature.
arXiv Detail & Related papers (2022-12-08T18:59:48Z)
- Executing your Commands via Motion Diffusion in Latent Space [51.64652463205012]
We propose a Motion Latent-based Diffusion model (MLD) to produce vivid motion sequences conforming to the given conditional inputs.
Our MLD achieves significant improvements over state-of-the-art methods across a wide range of human motion generation tasks.
arXiv Detail & Related papers (2022-12-08T03:07:00Z)
- NeMF: Neural Motion Fields for Kinematic Animation [6.570955948572252]
We express the vast motion space as a continuous function over time, hence the name Neural Motion Fields (NeMF).
We use a neural network to learn this function for miscellaneous sets of motions.
We train our model on a diverse human motion dataset and a quadruped dataset to demonstrate its versatility.
arXiv Detail & Related papers (2022-06-04T05:53:27Z)
- GANimator: Neural Motion Synthesis from a Single Sequence [38.361579401046875]
We present GANimator, a generative model that learns to synthesize novel motions from a single, short motion sequence.
GANimator generates motions that resemble the core elements of the original motion, while simultaneously synthesizing novel and diverse movements.
We show a number of applications, including crowd simulation, key-frame editing, style transfer, and interactive control, which all learn from a single input sequence.
arXiv Detail & Related papers (2022-05-05T13:04:14Z)
- Unsupervised Motion Representation Learning with Capsule Autoencoders [54.81628825371412]
Motion Capsule Autoencoder (MCAE) models motion in a two-level hierarchy.
MCAE is evaluated on a novel Trajectory20 motion dataset and various real-world skeleton-based human action datasets.
arXiv Detail & Related papers (2021-10-01T16:52:03Z)
- Hierarchical Style-based Networks for Motion Synthesis [150.226137503563]
We propose a self-supervised method for generating long-range, diverse and plausible behaviors to achieve a specific goal location.
Our method learns to model human motion by decomposing the long-range generation task in a hierarchical manner.
On a large-scale skeleton dataset, we show that the proposed method is able to synthesize long-range, diverse and plausible motion.
arXiv Detail & Related papers (2020-08-24T02:11:02Z)