Dance In the Wild: Monocular Human Animation with Neural Dynamic
Appearance Synthesis
- URL: http://arxiv.org/abs/2111.05916v1
- Date: Wed, 10 Nov 2021 20:18:57 GMT
- Title: Dance In the Wild: Monocular Human Animation with Neural Dynamic
Appearance Synthesis
- Authors: Tuanfeng Y. Wang and Duygu Ceylan and Krishna Kumar Singh and Niloy J.
Mitra
- Abstract summary: We propose a video based synthesis method that tackles challenges and demonstrates high quality results for in-the-wild videos.
We introduce a novel motion signature that is used to modulate the generator weights to capture dynamic appearance changes.
We evaluate our method on a set of challenging videos and show that our approach achieves state-of-the-art performance both qualitatively and quantitatively.
- Score: 56.550999933048075
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Synthesizing dynamic appearances of humans in motion plays a central role in
applications such as AR/VR and video editing. While many recent methods have
been proposed to tackle this problem, handling loose garments with complex
textures and highly dynamic motion remains challenging. In this paper, we
propose a video-based appearance synthesis method that tackles such challenges
and demonstrates high-quality results for in-the-wild videos that have not been
shown before. Specifically, we adapt a StyleGAN-based architecture to the task
of person-specific, video-based motion retargeting. We introduce a novel motion
signature that is used to modulate the generator weights to capture dynamic
appearance changes, as well as to regularize the single-frame pose estimates to
improve temporal coherency. We evaluate our method on a set of challenging
videos and show that our approach achieves state-of-the-art performance both
qualitatively and quantitatively.
Related papers
- Diffusion Priors for Dynamic View Synthesis from Monocular Videos [59.42406064983643]
Dynamic novel view synthesis aims to capture the temporal evolution of visual content within videos.
We first finetune a pretrained RGB-D diffusion model on the video frames using a customization technique.
We distill the knowledge from the finetuned model to a 4D representation encompassing both dynamic and static Neural Radiance Fields.
arXiv Detail & Related papers (2024-01-10T23:26:41Z)
- LEO: Generative Latent Image Animator for Human Video Synthesis [42.925592662547814]
We propose a novel framework for human video synthesis, placing emphasis on spatio-temporal coherency.
Our key idea is to represent motion as a sequence of flow maps in the generation process, which inherently isolate motion from appearance.
We implement this idea via a flow-based image animator and a Latent Motion Diffusion Model (LMDM).
arXiv Detail & Related papers (2023-05-06T09:29:12Z)
- MoStGAN-V: Video Generation with Temporal Motion Styles [28.082294960744726]
Previous works attempt to generate videos of arbitrary length either in an autoregressive manner or by treating time as a continuous signal.
We argue that a single time-agnostic latent vector of a style-based generator is insufficient to model diverse and temporally consistent motions.
We introduce additional time-dependent motion styles to model diverse motion patterns.
arXiv Detail & Related papers (2023-04-05T22:47:12Z)
- DynIBaR: Neural Dynamic Image-Based Rendering [79.44655794967741]
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene.
We adopt a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views.
We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
arXiv Detail & Related papers (2022-11-20T20:57:02Z)
- Render In-between: Motion Guided Video Synthesis for Action Interpolation [53.43607872972194]
We propose a motion-guided frame-upsampling framework that is capable of producing realistic human motion and appearance.
A novel motion model is trained to infer the non-linear skeletal motion between frames by leveraging a large-scale motion-capture dataset.
Our pipeline requires only low-frame-rate videos and unpaired human motion data for training; high-frame-rate videos are not needed.
arXiv Detail & Related papers (2021-11-01T15:32:51Z)
- Dynamic View Synthesis from Dynamic Monocular Video [69.80425724448344]
We present an algorithm for generating views at arbitrary viewpoints and any input time step given a monocular video of a dynamic scene.
We show extensive quantitative and qualitative results of dynamic view synthesis from casually captured videos.
arXiv Detail & Related papers (2021-05-13T17:59:50Z)
- High-Fidelity Neural Human Motion Transfer from Monocular Video [71.75576402562247]
Video-based human motion transfer creates video animations of humans following a source motion.
We present a new framework which performs high-fidelity and temporally-consistent human motion transfer with natural pose-dependent non-rigid deformations.
In the experimental results, we significantly outperform the state-of-the-art in terms of video realism.
arXiv Detail & Related papers (2020-12-20T16:54:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.