Dynamic Future Net: Diversified Human Motion Generation
- URL: http://arxiv.org/abs/2009.05109v1
- Date: Tue, 25 Aug 2020 02:31:41 GMT
- Title: Dynamic Future Net: Diversified Human Motion Generation
- Authors: Wenheng Chen, He Wang, Yi Yuan, Tianjia Shao, Kun Zhou
- Abstract summary: Human motion modelling is crucial in many areas such as computer graphics, vision and virtual reality.
We present Dynamic Future Net, a new deep learning model that explicitly focuses on the intrinsic stochasticity of human motion dynamics.
Our model can generate a large number of high-quality motions of arbitrary duration, with visually convincing variations in both space and time.
- Score: 31.987602940970888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human motion modelling is crucial in many areas such as computer graphics,
vision and virtual reality. Acquiring high-quality skeletal motions is
difficult due to the need for specialized equipment and laborious manual
post-processing, which necessitates maximizing the use of existing data to
synthesize new data. However, this is challenging due to the intrinsic
stochasticity of human motion dynamics, which manifests over both the short and
the long term. In the short term, there is strong randomness within a couple of
frames, e.g. one frame may be followed by multiple possible frames, leading to
different motion styles;
while in the long term, there are non-deterministic action transitions. In this
paper, we present Dynamic Future Net, a new deep learning model in which we
explicitly focus on the aforementioned motion stochasticity by constructing a
generative model with non-trivial capacity for modelling temporal stochasticity.
Given limited amounts of data, our model can generate a large number of
high-quality motions of arbitrary duration, with visually convincing
variations in both space and time. We evaluate our model on a wide range of
motions and compare it with the state-of-the-art methods. Both qualitative and
quantitative results show the superiority of our method in terms of its
robustness, versatility and high quality.
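The following is a minimal, hypothetical PyTorch sketch of the core idea described in the abstract: a sequence model with a per-frame latent variable, so that the same motion history can be continued into multiple plausible futures (short-term randomness) and rolled out to arbitrary length. It is an illustration only, not the Dynamic Future Net architecture; all module names, dimensions, and the variational-RNN-style structure are assumptions.

```python
# Hypothetical sketch of per-frame stochastic motion generation.
# NOT the authors' architecture: it only illustrates the idea that one pose
# can be followed by multiple plausible next poses, sampled from a
# state-conditioned per-step latent variable.
import torch
import torch.nn as nn


class StochasticMotionStep(nn.Module):
    def __init__(self, pose_dim=63, hidden_dim=256, latent_dim=32):
        super().__init__()
        # Recurrent state summarizes the motion history.
        self.rnn = nn.GRUCell(pose_dim + latent_dim, hidden_dim)
        # Prior over the per-frame latent, conditioned on the recurrent state.
        self.prior = nn.Linear(hidden_dim, 2 * latent_dim)
        # Decoder maps (state, latent) to a next-pose offset.
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim + latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, pose_dim),
        )

    def forward(self, pose, h):
        # Sampling here is where the short-term stochasticity lives:
        # different samples yield different plausible next frames
        # from the same history.
        mu, logvar = self.prior(h).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        next_pose = pose + self.decoder(torch.cat([h, z], dim=-1))
        h = self.rnn(torch.cat([next_pose, z], dim=-1), h)
        return next_pose, h


def rollout(model, init_pose, steps=120, hidden_dim=256):
    """Generate a motion of arbitrary length; re-running with the same
    init_pose gives different sequences because latents are resampled."""
    h = torch.zeros(init_pose.shape[0], hidden_dim)
    pose, frames = init_pose, []
    for _ in range(steps):
        pose, h = model(pose, h)
        frames.append(pose)
    return torch.stack(frames, dim=1)  # (batch, steps, pose_dim)


if __name__ == "__main__":
    model = StochasticMotionStep()
    seed_pose = torch.zeros(4, 63)       # 4 rollouts from the same seed pose
    motions = rollout(model, seed_pose)  # each differs due to latent sampling
    print(motions.shape)                 # torch.Size([4, 120, 63])
```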
Related papers
- IMUDiffusion: A Diffusion Model for Multivariate Time Series Synthetisation for Inertial Motion Capturing Systems [0.0]
We propose IMUDiffusion, a probabilistic diffusion model specifically designed for time series generation.
Our approach enables the generation of high-quality time series sequences which accurately capture the dynamics of human activities.
In some cases, we are able to improve the macro F1-score by almost 30%.
arXiv Detail & Related papers (2024-11-05T09:53:52Z) - Quo Vadis, Motion Generation? From Large Language Models to Large Motion Models [70.78051873517285]
We present MotionBase, the first million-level motion generation benchmark.
By leveraging this vast dataset, our large motion model demonstrates strong performance across a broad range of motions.
We introduce a novel 2D lookup-free approach for motion tokenization, which preserves motion information and expands codebook capacity.
arXiv Detail & Related papers (2024-10-04T10:48:54Z) - HUMOS: Human Motion Model Conditioned on Body Shape [54.20419874234214]
We introduce a new approach to develop a generative motion model based on body shape.
We show that it's possible to train this model using unpaired data.
The resulting model generates diverse, physically plausible, and dynamically stable human motions.
arXiv Detail & Related papers (2024-09-05T23:50:57Z) - Motion-Oriented Compositional Neural Radiance Fields for Monocular Dynamic Human Modeling [10.914612535745789]
This paper introduces Motion-oriented Compositional Neural Radiance Fields (MoCo-NeRF).
MoCo-NeRF is a framework designed to perform free-viewpoint rendering of monocular human videos.
arXiv Detail & Related papers (2024-07-16T17:59:01Z) - Large Motion Model for Unified Multi-Modal Motion Generation [50.56268006354396]
Large Motion Model (LMM) is a motion-centric, multi-modal framework that unifies mainstream motion generation tasks into a generalist model.
LMM tackles these challenges from three principled aspects.
arXiv Detail & Related papers (2024-04-01T17:55:11Z) - DROP: Dynamics Responses from Human Motion Prior and Projective Dynamics [21.00283279991885]
We introduce DROP, a novel framework for modeling Dynamics Responses of humans using generative mOtion prior and Projective dynamics.
We conduct extensive evaluations of our model across different motion tasks and various physical perturbations, demonstrating the scalability and diversity of responses.
arXiv Detail & Related papers (2023-09-24T20:25:59Z) - MoDi: Unconditional Motion Synthesis from Diverse Data [51.676055380546494]
We present MoDi, an unconditional generative model that synthesizes diverse motions.
Our model is trained in a completely unsupervised setting from a diverse, unstructured and unlabeled motion dataset.
We show that despite the lack of any structure in the dataset, the latent space can be semantically clustered.
arXiv Detail & Related papers (2022-06-16T09:06:25Z) - MoCo-Flow: Neural Motion Consensus Flow for Dynamic Humans in Stationary
Monocular Cameras [98.40768911788854]
We introduce MoCo-Flow, a representation that models the dynamic scene using a 4D continuous time-variant function.
At the heart of our work lies a novel optimization formulation, which is constrained by a motion consensus regularization on the motion flow.
We extensively evaluate MoCo-Flow on several datasets that contain human motions of varying complexity.
arXiv Detail & Related papers (2021-06-08T16:03:50Z) - Multi-frame sequence generator of 4D human body motion [0.0]
We propose a generative auto-encoder-based framework, which encodes global locomotion, including translation and rotation, and multi-frame temporal motion as a single latent space vector.
Our results validate the ability of the model to reconstruct 4D sequences of human morphology within a low error bound.
We also illustrate the benefits of the approach for 4D human motion prediction of future frames from initial human frames.
arXiv Detail & Related papers (2021-06-07T13:56:46Z) - Real-time Deep Dynamic Characters [95.5592405831368]
We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance.
We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing.
We show that our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, as well as video-realistic surface textures at a much higher level of detail than previous state of the art approaches.
arXiv Detail & Related papers (2021-05-04T23:28:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.