Action2Motion: Conditioned Generation of 3D Human Motions
- URL: http://arxiv.org/abs/2007.15240v1
- Date: Thu, 30 Jul 2020 05:29:59 GMT
- Title: Action2Motion: Conditioned Generation of 3D Human Motions
- Authors: Chuan Guo, Xinxin Zuo, Sen Wang, Shihao Zou, Qingyao Sun, Annan Deng,
Minglun Gong and Li Cheng
- Abstract summary: We aim to generate plausible human motion sequences in 3D.
Each sampled sequence faithfully resembles natural human body articulation dynamics.
A new 3D human motion dataset, HumanAct12, is also constructed.
- Score: 28.031644518303075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Action recognition is a relatively established task: given an input
sequence of human motion, the goal is to predict its action category. This
paper, on the other hand, considers a relatively new problem, which could be
thought of as an inverse of action recognition: given a prescribed action type,
we aim to generate plausible human motion sequences in 3D. Importantly, the set
of generated motions is expected to maintain its diversity so as to explore
the entire action-conditioned motion space; meanwhile, each sampled sequence
should faithfully resemble natural human body articulation dynamics. Motivated
by these objectives, we follow the physical laws of human kinematics by
adopting Lie algebra theory to represent natural human motions; we also propose
a temporal Variational Auto-Encoder (VAE) that encourages diverse sampling of
the motion space. A new 3D human motion dataset, HumanAct12, is also
constructed. Empirical experiments over three distinct human motion datasets
(including ours) demonstrate the effectiveness of our approach.
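As a concrete illustration of the Lie-algebra representation the abstract mentions, here is a minimal Python sketch (not the authors' implementation) that maps a per-joint axis-angle vector in so(3) to a rotation matrix in SO(3) via the exponential map (Rodrigues' formula); the 24-joint skeleton and the random pose are hypothetical placeholders.

```python
# Minimal sketch of a Lie-algebra pose representation: each joint's
# rotation is an axis-angle vector omega in so(3), mapped to SO(3)
# with the exponential map (Rodrigues' formula).
import numpy as np

def so3_exp(omega: np.ndarray) -> np.ndarray:
    """Map an axis-angle vector omega of shape (3,) to a 3x3 rotation matrix."""
    theta = np.linalg.norm(omega)
    if theta < 1e-8:                 # near-zero rotation: identity
        return np.eye(3)
    k = omega / theta                # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # skew-symmetric cross-product matrix
    # Rodrigues: R = I + sin(theta) * K + (1 - cos(theta)) * K^2
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# A pose is one axis-angle vector per joint (24 joints assumed here).
pose = 0.1 * np.random.randn(24, 3)
rotations = np.stack([so3_exp(w) for w in pose])  # (24, 3, 3)
```

In an Action2Motion-style pipeline, the temporal VAE would emit one such axis-angle vector per joint per frame, and forward kinematics along the skeleton would then recover the 3D joint positions.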
Related papers
- Sitcom-Crafter: A Plot-Driven Human Motion Generation System in 3D Scenes [83.55301458112672]
Sitcom-Crafter is a system for human motion generation in 3D space.
Central to the function generation modules is our novel 3D scene-aware human-human interaction module.
Augmentation modules encompass plot comprehension for command generation and motion synchronization for seamless integration of different motion types.
arXiv Detail & Related papers (2024-10-14T17:56:19Z)
- MoManifold: Learning to Measure 3D Human Motion via Decoupled Joint Acceleration Manifolds [20.83684434910106]
We present MoManifold, a novel human motion prior, which models plausible human motion in continuous high-dimensional motion space.
Specifically, we propose a novel decoupled joint acceleration to model human dynamics from existing limited motion data (a finite-difference sketch of joint acceleration appears after this list).
Extensive experiments demonstrate that MoManifold outperforms existing state-of-the-art methods as a prior in several downstream tasks.
arXiv Detail & Related papers (2024-09-01T15:00:16Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate (an implicit-function sketch appears after this list).
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z)
- Executing your Commands via Motion Diffusion in Latent Space [51.64652463205012]
We propose a Motion Latent-based Diffusion model (MLD) to produce vivid motion sequences conforming to the given conditional inputs.
Our MLD achieves significant improvements over state-of-the-art methods across a wide range of human motion generation tasks.
arXiv Detail & Related papers (2022-12-08T03:07:00Z)
- Towards Diverse and Natural Scene-aware 3D Human Motion Synthesis [117.15586710830489]
We focus on the problem of synthesizing diverse scene-aware human motions under the guidance of target action sequences.
Based on this factorized scheme, a hierarchical framework is proposed, with each sub-module responsible for modeling one aspect.
Experiment results show that the proposed framework remarkably outperforms previous methods in terms of diversity and naturalness.
arXiv Detail & Related papers (2022-05-25T18:20:01Z)
- Action2video: Generating Videos of Human 3D Actions [31.665831044217363]
We aim to tackle the interesting yet challenging problem of generating videos of diverse and natural human motions from prescribed action categories.
The key issue lies in the ability to synthesize multiple distinct motion sequences that are realistic in their visual appearances.
Action2motion generates plausible 3D pose sequences of a prescribed action category, which are processed and rendered by motion2video to form 2D videos.
arXiv Detail & Related papers (2021-11-12T20:20:37Z)
- Scene-aware Generative Network for Human Motion Synthesis [125.21079898942347]
We propose a new framework, with the interaction between the scene and the human motion taken into account.
Considering the uncertainty of human motion, we formulate this task as a generative task.
We derive a GAN based learning approach, with discriminators to enforce the compatibility between the human motion and the contextual scene.
arXiv Detail & Related papers (2021-05-31T09:05:50Z)
- 3D Human Motion Estimation via Motion Compression and Refinement [27.49664453166726]
We develop a technique for generating smooth and accurate 3D human pose and motion estimates from RGB video sequences.
Our method, which we call Motion Estimation via Variational Autoencoder (MEVA), decomposes a temporal sequence of human motion into a smooth motion representation.
arXiv Detail & Related papers (2020-08-09T19:02:29Z)
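For the MoManifold entry above, here is a minimal sketch, assuming joint positions sampled at a fixed frame rate, of the per-joint acceleration quantity it models, computed here by central finite differences; the paper's actual decoupled-acceleration formulation is more involved than this illustration.

```python
# Per-joint acceleration from a joint-position sequence via a
# second-order central difference: a_t = (x_{t+1} - 2 x_t + x_{t-1}) / dt^2
import numpy as np

def joint_acceleration(positions: np.ndarray, fps: float = 30.0) -> np.ndarray:
    """positions: (T, J, 3) joint trajectories -> (T-2, J, 3) accelerations."""
    dt = 1.0 / fps
    return (positions[2:] - 2.0 * positions[1:-1] + positions[:-2]) / dt**2

motion = np.random.randn(60, 22, 3)   # 2 s of motion; 22-joint skeleton assumed
accel = joint_acceleration(motion)    # (58, 22, 3)
```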
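And for the TOHO entry, a minimal sketch of a motion represented as an implicit function: a small MLP maps a continuous, normalized time coordinate to a pose vector, so the motion can be queried at any temporal resolution. The architecture, the 72-dimensional pose, and the absence of task/object conditioning are simplifying assumptions, not the paper's design.

```python
# Implicit motion representation: pose = f_theta(t) for continuous t in [0, 1].
import torch
import torch.nn as nn

class ImplicitMotion(nn.Module):
    """Map a continuous time coordinate to a pose vector."""
    def __init__(self, pose_dim: int = 72, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        return self.net(t.unsqueeze(-1))  # (..., pose_dim)

f = ImplicitMotion()
t = torch.linspace(0.0, 1.0, steps=120)  # query at any temporal resolution
poses = f(t)                             # (120, 72); network untrained here
```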