Action-conditioned On-demand Motion Generation
- URL: http://arxiv.org/abs/2207.08164v1
- Date: Sun, 17 Jul 2022 13:04:44 GMT
- Title: Action-conditioned On-demand Motion Generation
- Authors: Qiujing Lu, Yipeng Zhang, Mingjian Lu, Vwani Roychowdhury
- Abstract summary: We propose a novel framework, On-Demand MOtion Generation (ODMO), for generating realistic and diverse long-term 3D human motion sequences.
ODMO shows improvements over SOTA approaches on all traditional motion evaluation metrics when evaluated on three public datasets.
- Score: 11.45641608124365
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel framework, On-Demand MOtion Generation (ODMO), for
generating realistic and diverse long-term 3D human motion sequences
conditioned only on action types with an additional capability of
customization. ODMO shows improvements over SOTA approaches on all traditional
motion evaluation metrics when evaluated on three public datasets (HumanAct12,
UESTC, and MoCap). Furthermore, we provide both qualitative evaluations and
quantitative metrics demonstrating several first-known customization
capabilities afforded by our framework, including mode discovery,
interpolation, and trajectory customization. These capabilities significantly
widen the spectrum of potential applications of such motion generation models.
The novel on-demand generative capabilities are enabled by innovations in both
the encoder and decoder architectures: (i) Encoder: Utilizing contrastive
learning in low-dimensional latent space to create a hierarchical embedding of
motion sequences, where not only the codes of different action types form
different groups, but within an action type, codes of similar inherent patterns
(motion styles) cluster together, making them readily discoverable; (ii)
Decoder: Using a hierarchical decoding strategy where the motion trajectory is
reconstructed first and then used to reconstruct the whole motion sequence.
Such an architecture enables effective trajectory control. Our code is released
on the Github page: https://github.com/roychowdhuryresearch/ODMO
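
A minimal PyTorch-style sketch may help make the two architectural ideas above concrete: a contrastive objective that clusters low-dimensional latent codes by action type, and a hierarchical decoder that reconstructs the root trajectory before the full motion. All module names, dimensions, and the specific loss below are illustrative assumptions, not the authors' implementation; the actual code is in the linked GitHub repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MotionEncoder(nn.Module):
    """Encode a motion sequence (B, T, pose_dim) into a low-dimensional latent code."""

    def __init__(self, pose_dim: int, latent_dim: int = 16, hidden: int = 128):
        super().__init__()
        self.gru = nn.GRU(pose_dim, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent_dim)

    def forward(self, motion: torch.Tensor) -> torch.Tensor:
        _, h = self.gru(motion)          # h: (1, B, hidden), last hidden state
        return self.to_latent(h[-1])     # (B, latent_dim)


class HierarchicalDecoder(nn.Module):
    """Decode the root trajectory first, then condition the full pose sequence on it."""

    def __init__(self, pose_dim: int, latent_dim: int = 16, traj_dim: int = 3,
                 hidden: int = 128, seq_len: int = 60):
        super().__init__()
        self.seq_len = seq_len
        self.traj_head = nn.GRU(latent_dim, traj_dim, batch_first=True)
        self.pose_head = nn.GRU(latent_dim + traj_dim, hidden, batch_first=True)
        self.to_pose = nn.Linear(hidden, pose_dim)

    def forward(self, z: torch.Tensor):
        z_seq = z.unsqueeze(1).repeat(1, self.seq_len, 1)   # broadcast latent over time
        traj, _ = self.traj_head(z_seq)                     # (B, T, 3) root trajectory
        feats, _ = self.pose_head(torch.cat([z_seq, traj], dim=-1))
        return traj, self.to_pose(feats)                    # trajectory, then full motion


def contrastive_latent_loss(z: torch.Tensor, labels: torch.Tensor,
                            temperature: float = 0.1) -> torch.Tensor:
    """Pull latent codes of the same action type together, push others apart."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / temperature                           # pairwise cosine similarities
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, -1e9)                        # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float().masked_fill(eye, 0.0)
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp(min=1)).mean()
```

Under this reading, generation would amount to picking a latent code from a discovered cluster (or interpolating between codes) and passing it to the decoder; because the trajectory is produced as an explicit intermediate output, it can be inspected or constrained, which is what enables trajectory customization.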
Related papers
- Deciphering Movement: Unified Trajectory Generation Model for Multi-Agent [53.637837706712794]
We propose a Unified Trajectory Generation model, UniTraj, that processes arbitrary trajectories as masked inputs.
Specifically, we introduce a Ghost Spatial Masking (GSM) module embedded within a Transformer encoder for spatial feature extraction.
We curate and benchmark three practical sports game datasets, Basketball-U, Football-U, and Soccer-U, for evaluation.
arXiv Detail & Related papers (2024-05-27T22:15:23Z)
- Hierarchical Generation of Human-Object Interactions with Diffusion Probabilistic Models [71.64318025625833]
This paper presents a novel approach to generating the 3D motion of a human interacting with a target object.
Our framework first generates a set of milestones and then synthesizes the motion along them.
The experiments on the NSM, COUCH, and SAMP datasets show that our approach outperforms previous methods by a large margin in both quality and diversity.
arXiv Detail & Related papers (2023-10-03T17:50:23Z)
- NEURAL MARIONETTE: A Transformer-based Multi-action Human Motion Synthesis System [51.43113919042621]
We present a neural network-based system for long-term, multi-action human motion synthesis.
The system can produce meaningful motions with smooth transitions from simple user input.
We also present a new dataset dedicated to the multi-action motion synthesis task.
arXiv Detail & Related papers (2022-09-27T07:10:20Z)
- MoDi: Unconditional Motion Synthesis from Diverse Data [51.676055380546494]
We present MoDi, an unconditional generative model that synthesizes diverse motions.
Our model is trained in a completely unsupervised setting from a diverse, unstructured and unlabeled motion dataset.
We show that despite the lack of any structure in the dataset, the latent space can be semantically clustered.
arXiv Detail & Related papers (2022-06-16T09:06:25Z)
- MoCaNet: Motion Retargeting in-the-wild via Canonicalization Networks [77.56526918859345]
We present a novel framework that brings the 3D motion task from controlled environments to in-the-wild scenarios.
It is capable of retargeting body motion from a character in a 2D monocular video to a 3D character without using any motion capture system or 3D reconstruction procedure.
arXiv Detail & Related papers (2021-12-19T07:52:05Z)
- MUGL: Large Scale Multi Person Conditional Action Generation with Locomotion [9.30315673109153]
MUGL is a novel deep neural model for large-scale, diverse generation of single and multi-person pose-based action sequences with locomotion.
Our controllable approach enables variable-length generations customizable by action category, across more than 100 categories.
arXiv Detail & Related papers (2021-10-21T20:11:53Z)
- Unsupervised Motion Representation Learning with Capsule Autoencoders [54.81628825371412]
Motion Capsule Autoencoder (MCAE) models motion in a two-level hierarchy.
MCAE is evaluated on a novel Trajectory20 motion dataset and various real-world skeleton-based human action datasets.
arXiv Detail & Related papers (2021-10-01T16:52:03Z)
- Action-Conditioned 3D Human Motion Synthesis with Transformer VAE [44.523477804533364]
We tackle the problem of action-conditioned generation of realistic and diverse human motion sequences.
In contrast to methods that complete, or extend, motion sequences, this task does not require an initial pose or sequence.
We learn an action-aware latent representation for human motions by training a generative variational autoencoder.
arXiv Detail & Related papers (2021-04-12T17:40:27Z)