Single-Shot Freestyle Dance Reenactment
- URL: http://arxiv.org/abs/2012.01158v2
- Date: Sun, 21 Mar 2021 14:11:57 GMT
- Title: Single-Shot Freestyle Dance Reenactment
- Authors: Oran Gafni, Oron Ashual, Lior Wolf
- Abstract summary: The task of motion transfer between a source dancer and a target person is a special case of the pose transfer problem.
We propose a novel method that can reanimate a single image by arbitrary video sequences, unseen during training.
- Score: 89.91619150027265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of motion transfer between a source dancer and a target person is a special case of the pose transfer problem, in which the target person changes their pose in accordance with the motions of the dancer. In this work, we propose a novel method that can reanimate a single image by arbitrary video sequences, unseen during training. The method combines three networks: (i) a segmentation-mapping network, (ii) a realistic frame-rendering network, and (iii) a face refinement network. By separating this task into three stages, we are able to attain a novel sequence of realistic frames, capturing natural motion and appearance. Our method obtains significantly better visual quality than previous methods and is able to animate diverse body types and appearances, which are captured in challenging poses, as shown in the experiments and supplementary video.
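To make the three-stage decomposition concrete, below is a minimal, hypothetical sketch of how the pipeline could be wired together in PyTorch. The class names, channel counts, and single-layer stubs are illustrative assumptions, not the paper's published implementation; each stub stands in for a full generator network.

```python
import torch
import torch.nn as nn

class SegMapNet(nn.Module):
    # Stage (i): predict a body-part segmentation for the driving pose,
    # conditioned on the source person's parsing map (hypothetical stub).
    def __init__(self, pose_ch=17, seg_ch=8):
        super().__init__()
        self.net = nn.Conv2d(pose_ch + seg_ch, seg_ch, 3, padding=1)

    def forward(self, pose, src_seg):
        return self.net(torch.cat([pose, src_seg], dim=1)).softmax(dim=1)

class FrameRenderNet(nn.Module):
    # Stage (ii): render an RGB frame from the predicted segmentation
    # plus the source appearance image (hypothetical stub).
    def __init__(self, seg_ch=8):
        super().__init__()
        self.net = nn.Conv2d(seg_ch + 3, 3, 3, padding=1)

    def forward(self, seg, src_img):
        return self.net(torch.cat([seg, src_img], dim=1)).tanh()

class FaceRefineNet(nn.Module):
    # Stage (iii): residually refine the rendered frame's face region
    # (applied to the whole frame here for simplicity).
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, frame):
        return frame + 0.1 * self.net(frame)

def reenact(src_img, src_seg, driving_poses):
    # Animate one source image with an arbitrary driving pose sequence.
    seg_net, render_net, face_net = SegMapNet(), FrameRenderNet(), FaceRefineNet()
    frames = []
    for pose in driving_poses:           # one pose map per driving frame
        seg = seg_net(pose, src_seg)     # (i)  target-pose segmentation
        rgb = render_net(seg, src_img)   # (ii) realistic frame rendering
        frames.append(face_net(rgb))     # (iii) face refinement
    return torch.stack(frames, dim=1)    # (B, T, 3, H, W)

# Illustrative shapes: batch 1, 17 pose channels, 8 parsing classes.
video = reenact(torch.randn(1, 3, 256, 256),
                torch.randn(1, 8, 256, 256),
                [torch.randn(1, 17, 256, 256) for _ in range(4)])
```

In practice, each stub would be a full encoder-decoder generator trained adversarially; the sketch only illustrates the data flow between the three stages.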
Related papers
- Replace Anyone in Videos [39.4019337319795]
We propose the ReplaceAnyone framework, which focuses on localizing and manipulating human motion in videos.
Specifically, we formulate this task as an image-conditioned pose-driven video inpainting paradigm.
We introduce diverse mask forms involving regular and irregular shapes to avoid shape leakage and allow granular local control.
arXiv Detail & Related papers (2024-09-30T03:27:33Z)
- Towards 4D Human Video Stylization [56.33756124829298]
We present a first step towards 4D (3D and time) human video stylization, which addresses style transfer, novel view synthesis and human animation.
We leverage Neural Radiance Fields (NeRFs) to represent videos, conducting stylization in the rendered feature space.
Our framework uniquely extends its capabilities to accommodate novel poses and viewpoints, making it a versatile tool for creative human video stylization.
arXiv Detail & Related papers (2023-12-07T08:58:33Z)
- Skeleton-free Pose Transfer for Stylized 3D Characters [53.33996932633865]
We present the first method that automatically transfers poses between stylized 3D characters without skeletal rigging.
We propose a novel pose transfer network that predicts the character skinning weights and deformation transformations jointly to articulate the target character to match the desired pose.
Our method is trained in a semi-supervised manner, absorbing all existing character data with paired/unpaired poses and stylized shapes.
arXiv Detail & Related papers (2022-07-28T20:05:57Z)
- Flow Guided Transformable Bottleneck Networks for Motion Retargeting [29.16125343915916]
Existing efforts leverage a long training video from each target person to train a subject-specific motion transfer model.
Few-shot motion transfer techniques, which only require one or a few images from a target, have recently drawn considerable attention.
Inspired by the Transformable Bottleneck Network, we propose an approach based on an implicit volumetric representation of the image content.
arXiv Detail & Related papers (2021-06-14T21:58:30Z)
- High-Fidelity Neural Human Motion Transfer from Monocular Video [71.75576402562247]
Video-based human motion transfer creates video animations of humans following a source motion.
We present a new framework which performs high-fidelity and temporally-consistent human motion transfer with natural pose-dependent non-rigid deformations.
In experiments, we significantly outperform the state of the art in terms of video realism.
arXiv Detail & Related papers (2020-12-20T16:54:38Z)
- Pose-Guided Human Animation from a Single Image in the Wild [83.86903892201656]
We present a new pose transfer method for synthesizing a human animation from a single image of a person controlled by a sequence of body poses.
Existing pose transfer methods exhibit significant visual artifacts when applied to a novel scene.
We design a compositional neural network that predicts the silhouette, garment labels, and textures.
We are able to synthesize human animations that can preserve the identity and appearance of the person in a temporally coherent way without any fine-tuning of the network on the testing scene.
arXiv Detail & Related papers (2020-12-07T15:38:29Z)
- Human Motion Transfer from Poses in the Wild [61.6016458288803]
We tackle the problem of human motion transfer, where we synthesize novel motion video for a target person that imitates the movement from a reference video.
It is a video-to-video translation task in which the estimated poses are used to bridge two domains.
We introduce a novel pose-to-video translation framework for generating high-quality videos that are temporally coherent even for in-the-wild pose sequences unseen during training.
arXiv Detail & Related papers (2020-04-07T05:59:53Z)
- Do As I Do: Transferring Human Motion and Appearance between Monocular Videos with Spatial and Temporal Constraints [8.784162652042959]
Marker-less human motion estimation and shape modeling from images in the wild bring this challenge to the fore.
We propose a unifying formulation for transferring appearance and human motion from monocular videos.
Our method is able to transfer both human motion and appearance, outperforming state-of-the-art methods.
arXiv Detail & Related papers (2020-01-08T16:39:16Z)
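Several of the entries above emphasize temporal coherence of the generated video. As a toy illustration of that recurring objective (not any specific paper's loss), one can penalize frame-to-frame differences of the generated sequence; real systems typically warp frames by estimated optical flow before differencing.

```python
import torch
import torch.nn.functional as F

def temporal_coherence_loss(frames: torch.Tensor) -> torch.Tensor:
    # frames: (B, T, C, H, W) generated video. A naive smoothness term
    # that penalizes abrupt changes between consecutive frames.
    return F.l1_loss(frames[:, 1:], frames[:, :-1])

loss = temporal_coherence_loss(torch.randn(2, 8, 3, 64, 64))
```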
This list is automatically generated from the titles and abstracts of the papers in this site.