Example-based Motion Synthesis via Generative Motion Matching
- URL: http://arxiv.org/abs/2306.00378v1
- Date: Thu, 1 Jun 2023 06:19:33 GMT
- Title: Example-based Motion Synthesis via Generative Motion Matching
- Authors: Weiyu Li, Xuelin Chen, Peizhuo Li, Olga Sorkine-Hornung, Baoquan Chen
- Abstract summary: We present GenMM, a generative model that "mines" as many diverse motions as possible from a single or few example sequences.
GenMM inherits the training-free nature and the superior quality of the well-known Motion Matching method.
- Score: 44.20519633463265
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present GenMM, a generative model that "mines" as many diverse motions as
possible from a single or few example sequences. In stark contrast to existing
data-driven methods, which typically require long offline training time, are
prone to visual artifacts, and tend to fail on large and complex skeletons,
GenMM inherits the training-free nature and the superior quality of the
well-known Motion Matching method. GenMM can synthesize a high-quality motion
within a fraction of a second, even with highly complex and large skeletal
structures. At the heart of our generative framework lies the generative motion
matching module, which utilizes the bidirectional visual similarity as a
generative cost function to motion matching, and operates in a multi-stage
framework to progressively refine a random guess using exemplar motion matches.
In addition to diverse motion generation, we show the versatility of our
generative framework by extending it to a number of scenarios that are not
possible with motion matching alone, including motion completion,
keyframe-guided generation, infinite looping, and motion reassembly. Code and data
for this paper are at https://wyysf-98.github.io/GenMM/
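The algorithmic core is easy to picture. A common formulation of the bidirectional similarity the abstract refers to (in the style of Simakov et al., 2008) scores a generation G against an example E over sets of temporal windows: every window of G should have a close match in E (coherence), and every window of E should be represented somewhere in G (completeness). The sketch below is a minimal, illustrative NumPy rendition of multi-stage nearest-neighbor motion matching, not the authors' implementation: the frame features, window length, stage schedule, and blend-by-averaging step are all simplifying assumptions.

```python
# Minimal sketch of multi-stage generative motion matching (NumPy only).
# All names, the window length `w`, and the stage schedule are illustrative
# assumptions, not GenMM's actual implementation.
import numpy as np

def extract_windows(motion, w):
    """All overlapping temporal windows of a motion of shape (T, D)."""
    return np.stack([motion[i:i + w].ravel() for i in range(len(motion) - w + 1)])

def bidirectional_similarity(gen, ex, w):
    """Coherence + completeness over window sets (Simakov et al. style)."""
    gw, ew = extract_windows(gen, w), extract_windows(ex, w)
    d = np.linalg.norm(gw[:, None, :] - ew[None, :, :], axis=2)  # pairwise dists
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def match_and_blend(guess, example, w):
    """Swap each window of `guess` for its nearest example window, then
    average the overlapping votes back into per-frame values."""
    ew = extract_windows(example, w)
    out = np.zeros_like(guess)
    counts = np.zeros((len(guess), 1))
    for i in range(len(guess) - w + 1):
        q = guess[i:i + w].ravel()
        best = ew[np.argmin(np.linalg.norm(ew - q, axis=1))].reshape(w, -1)
        out[i:i + w] += best
        counts[i:i + w] += 1
    return out / counts

def generative_motion_matching(example, n_frames, w=8, stages=4, iters=3, seed=0):
    """Refine a random guess coarse-to-fine against downsampled example copies."""
    rng = np.random.default_rng(seed)
    guess = rng.normal(size=(n_frames // 2 ** (stages - 1), example.shape[1]))
    for s in reversed(range(stages)):
        ex_s = example[::2 ** s]                      # coarser temporal copy
        for _ in range(iters):
            guess = match_and_blend(guess, ex_s, min(w, len(ex_s), len(guess)))
        if s > 0:
            guess = np.repeat(guess, 2, axis=0)       # upsample for next stage
    return guess[:n_frames]

# Toy "motion": 256 frames of 6 correlated joint channels.
example = np.sin(np.linspace(0, 12, 256))[:, None] * np.linspace(0.5, 1.5, 6)
novel = generative_motion_matching(example, n_frames=256)
print(novel.shape, bidirectional_similarity(novel, example, w=8))
```

In this coarse-to-fine scheme, the early (temporally downsampled) stages decide the global arrangement of example patches while later stages re-match at full frame rate to restore detail. Note that the plain nearest-neighbor step only pushes the coherence direction of the objective, so the metric here is reported for evaluation rather than directly optimized.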
Related papers
- MotionCraft: Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls [30.487510829107908]
We propose MotionCraft, a unified diffusion transformer that crafts whole-body motion with plug-and-play multimodal control.
Our framework employs a coarse-to-fine training strategy, starting with the first stage of text-to-motion semantic pre-training.
We introduce MC-Bench, the first available multimodal whole-body motion generation benchmark based on the unified SMPL-X format.
arXiv Detail & Related papers (2024-07-30T18:57:06Z)
- Large Motion Model for Unified Multi-Modal Motion Generation [50.56268006354396]
Large Motion Model (LMM) is a motion-centric, multi-modal framework that unifies mainstream motion generation tasks into a generalist model.
LMM tackles the challenges of unified motion generation from three principled aspects.
arXiv Detail & Related papers (2024-04-01T17:55:11Z)
- FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing [56.29102849106382]
FineMoGen is a diffusion-based motion generation and editing framework.
It can synthesize fine-grained motions, with spatio-temporal composition according to the user instructions.
FineMoGen further enables zero-shot motion editing capabilities with the aid of modern large language models.
arXiv Detail & Related papers (2023-12-22T16:56:02Z)
- Hierarchical Generation of Human-Object Interactions with Diffusion Probabilistic Models [71.64318025625833]
This paper presents a novel approach to generating the 3D motion of a human interacting with a target object.
Our framework first generates a set of milestones and then synthesizes the motion along them.
The experiments on the NSM, COUCH, and SAMP datasets show that our approach outperforms previous methods by a large margin in both quality and diversity.
arXiv Detail & Related papers (2023-10-03T17:50:23Z)
- MoDi: Unconditional Motion Synthesis from Diverse Data [51.676055380546494]
We present MoDi, an unconditional generative model that synthesizes diverse motions.
Our model is trained in a completely unsupervised setting from a diverse, unstructured and unlabeled motion dataset.
We show that despite the lack of any structure in the dataset, the latent space can be semantically clustered.
arXiv Detail & Related papers (2022-06-16T09:06:25Z)
- GANimator: Neural Motion Synthesis from a Single Sequence [38.361579401046875]
We present GANimator, a generative model that learns to synthesize novel motions from a single, short motion sequence.
GANimator generates motions that resemble the core elements of the original motion, while simultaneously synthesizing novel and diverse movements.
We show a number of applications, including crowd simulation, keyframe editing, style transfer, and interactive control, all learned from a single input sequence.
arXiv Detail & Related papers (2022-05-05T13:04:14Z)
- MUGL: Large Scale Multi Person Conditional Action Generation with Locomotion [9.30315673109153]
MUGL is a novel deep neural model for large-scale, diverse generation of single and multi-person pose-based action sequences with locomotion.
Our controllable approach enables variable-length generations customizable by action category, across more than 100 categories.
arXiv Detail & Related papers (2021-10-21T20:11:53Z)
- Hierarchical Style-based Networks for Motion Synthesis [150.226137503563]
We propose a self-supervised method for generating long-range, diverse and plausible behaviors to achieve a specific goal location.
Our proposed method learns to model human motion by decomposing the long-range generation task in a hierarchical manner.
On a large-scale skeleton dataset, we show that the proposed method is able to synthesize long-range, diverse and plausible motion.
arXiv Detail & Related papers (2020-08-24T02:11:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.