Sparse to Dense Motion Transfer for Face Image Animation
- URL: http://arxiv.org/abs/2109.00471v2
- Date: Fri, 3 Sep 2021 04:05:08 GMT
- Title: Sparse to Dense Motion Transfer for Face Image Animation
- Authors: Ruiqi Zhao, Tianyi Wu and Guodong Guo
- Abstract summary: Given a source face image and a sequence of sparse face landmarks, our goal is to generate a video of the face imitating the motion of landmarks.
We develop an efficient and effective method for motion transfer from sparse landmarks to the face image.
- Score: 34.16015389505612
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Face image animation from a single image has achieved remarkable progress.
However, it remains challenging when only sparse landmarks are available as the
driving signal. Given a source face image and a sequence of sparse face
landmarks, our goal is to generate a video of the face imitating the motion of
landmarks. We develop an efficient and effective method for motion transfer
from sparse landmarks to the face image. We then combine global and local
motion estimation in a unified model to faithfully transfer the motion. The
model can learn to segment the moving foreground from the background and
generate not only global motion, such as rotation and translation of the face,
but also subtle local motion such as gaze changes. We further improve face
landmark detection on videos. With temporally better aligned landmark sequences
for training, our method can generate temporally coherent videos with higher
visual quality. Experiments suggest we achieve results comparable to the
state-of-the-art image-driven method on same-identity testing and better
results on cross-identity testing.
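To make the global/local decomposition concrete, here is a minimal PyTorch sketch of the idea: a global affine flow is fitted to the sparse landmarks by least squares, a residual local flow and a foreground mask come from hypothetical networks (`local_net` and `mask_net` are placeholders, not the authors' code), and the two motions are composed before warping.

```python
import torch
import torch.nn.functional as F

def global_affine_flow(src_lmk, drv_lmk, h, w):
    """Least-squares affine from driving->source landmarks, as a dense
    backward flow usable with grid_sample. Landmarks: (N, 2) in [-1, 1]."""
    ones = torch.ones(drv_lmk.shape[0], 1)
    A = torch.cat([drv_lmk, ones], dim=1)               # (N, 3)
    # Solve A @ M = src_lmk for M (3, 2): maps driving coords to source coords.
    M = torch.linalg.lstsq(A, src_lmk).solution
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2)  # (H*W, 2)
    grid1 = torch.cat([grid, torch.ones(h * w, 1)], dim=1)
    return (grid1 @ M).reshape(h, w, 2)                  # sampling locations

def animate(source, src_lmk, drv_lmk, local_net, mask_net):
    """source: (1, 3, H, W). local_net / mask_net are hypothetical modules
    predicting a residual flow (H, W, 2) and a foreground mask (1, 1, H, W)."""
    _, _, h, w = source.shape
    flow = global_affine_flow(src_lmk, drv_lmk, h, w)   # rotation/translation
    flow = flow + local_net(source, src_lmk, drv_lmk)   # subtle local motion
    warped = F.grid_sample(source, flow.unsqueeze(0), align_corners=True)
    mask = mask_net(source)                             # moving foreground
    return mask * warped + (1 - mask) * source          # static background kept
```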
Related papers
- G3FA: Geometry-guided GAN for Face Animation [14.488117084637631]
We introduce Geometry-guided GAN for Face Animation (G3FA) to address the lack of 3D geometric information in 2D face animation models.
Our novel approach empowers the face animation model to incorporate 3D information using only 2D images.
In our face reenactment model, we leverage 2D motion warping to capture motion dynamics.
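The 2D motion warping mentioned above is typically implemented as backward warping with a dense flow field; a generic sketch (not G3FA's actual geometry-guided pipeline) might look like:

```python
import torch
import torch.nn.functional as F

def warp_2d(image, flow):
    """Backward-warp an image with a dense 2D flow of pixel offsets.
    image: (B, C, H, W); flow: (B, 2, H, W)."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys]).float()                # (2, H, W)
    pos = base + flow                                   # sampling positions
    grid = torch.stack([2 * pos[:, 0] / (w - 1) - 1,    # normalize x to [-1, 1]
                        2 * pos[:, 1] / (h - 1) - 1],   # normalize y to [-1, 1]
                       dim=-1)                          # (B, H, W, 2)
    return F.grid_sample(image, grid, align_corners=True)
```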
arXiv Detail & Related papers (2024-08-23T13:13:24Z)
- Puppet-Master: Scaling Interactive Video Generation as a Motion Prior for Part-Level Dynamics [67.97235923372035]
We present Puppet-Master, an interactive video generative model that can serve as a motion prior for part-level dynamics.
At test time, given a single image and a sparse set of motion trajectories, Puppet-Master can synthesize a video depicting realistic part-level motion faithful to the given drag interactions.
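One plausible way to feed sparse drag trajectories to a video model is to rasterize them into per-frame conditioning maps; the encoding below is purely hypothetical and only illustrates the input structure, not Puppet-Master's actual conditioning.

```python
import torch

def drags_to_maps(drags, num_frames, h, w):
    """Rasterize sparse drags into per-frame conditioning maps (a hypothetical
    encoding). drags: list of ((x0, y0), (x1, y1)) pixel endpoints, assumed
    to lie inside the image."""
    maps = torch.zeros(num_frames, 2, h, w)   # channels: dx, dy to the target
    for (x0, y0), (x1, y1) in drags:
        for t in range(num_frames):
            a = t / max(num_frames - 1, 1)    # linear interpolation in time
            x = int(round(x0 + a * (x1 - x0)))
            y = int(round(y0 + a * (y1 - y0)))
            maps[t, 0, y, x] = x1 - x         # remaining horizontal offset
            maps[t, 1, y, x] = y1 - y         # remaining vertical offset
    return maps
```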
arXiv Detail & Related papers (2024-08-08T17:59:38Z)
- Reenact Anything: Semantic Video Motion Transfer Using Motion-Textual Inversion [9.134743677331517]
We propose to use a pre-trained image-to-video model to disentangle appearance from motion.
Our method, called motion-textual inversion, leverages our observation that image-to-video models extract appearance mainly from the (latent) image input.
By operating on an inflated motion-text embedding containing multiple text/image embedding tokens per frame, we achieve a high temporal motion granularity.
Our approach does not require spatial alignment between the motion reference video and target image, generalizes across various domains, and can be applied to various tasks.
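In outline, such an inversion optimizes a learnable per-frame token embedding against a frozen image-to-video model; the sketch below uses a plain reconstruction loss and a placeholder `i2v_model` in place of the real diffusion objective and API.

```python
import torch

# A minimal sketch of an inflated motion-text embedding: a learnable set of
# tokens per frame, optimized so a frozen image-to-video model reconstructs
# the motion reference video. `i2v_model`, `first_frame`, and
# `reference_video` are placeholders, not a real API or data.
num_frames, tokens_per_frame, dim = 16, 4, 768
motion_emb = torch.nn.Parameter(
    torch.randn(num_frames, tokens_per_frame, dim) * 0.02)
opt = torch.optim.Adam([motion_emb], lr=1e-3)

for step in range(1000):
    pred = i2v_model(first_frame, text_cond=motion_emb)   # placeholder call
    # A simple reconstruction loss stands in for the diffusion objective.
    loss = torch.nn.functional.mse_loss(pred, reference_video)
    opt.zero_grad(); loss.backward(); opt.step()
```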
arXiv Detail & Related papers (2024-08-01T10:55:20Z)
- Controllable Longer Image Animation with Diffusion Models [12.565739255499594]
We introduce an open-domain controllable image animation method using motion priors with video diffusion models.
Our method achieves precise control over the direction and speed of motion in the movable region by extracting the motion field information from videos.
We propose an efficient long-duration video generation method based on noise rescheduling, specifically tailored for image animation tasks.
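As a rough illustration of the "motion field information" used for direction and speed control above, the sketch below derives per-pixel speed, direction, and a crude movable-region mask from a dense flow; the paper's actual control representation may differ.

```python
import torch

def motion_controls(flow):
    """Turn a dense motion field into direction/speed control signals.
    flow: (2, H, W) pixel displacements."""
    speed = flow.norm(dim=0, keepdim=True)            # (1, H, W) magnitudes
    direction = flow / speed.clamp(min=1e-6)          # per-pixel unit vectors
    movable = (speed > speed.mean()).float()          # crude movable-region mask
    return speed, direction, movable
```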
arXiv Detail & Related papers (2024-05-27T16:08:00Z)
- Learning Motion Refinement for Unsupervised Face Animation [45.807582064277305]
Unsupervised face animation aims to generate a human face video based on the appearance of a source image, mimicking the motion from a driving video.
Existing methods typically adopt a prior-based motion model (e.g., the local affine motion model or the local thin-plate-spline motion model), which limits their ability to capture fine-grained motion.
In this work, we design a new unsupervised face animation approach that simultaneously learns coarse and fine motions.
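A minimal sketch of the coarse-to-fine idea under these assumptions: the prior-based coarse flow is upsampled and a learned residual is added on top (`refine_net` is a hypothetical module, not the paper's architecture).

```python
import torch
import torch.nn.functional as F

def refine_motion(coarse_flow, refine_net, feats):
    """Coarse-to-fine motion: upsample a prior-based coarse flow (e.g. from a
    local affine / thin-plate-spline model) and add a learned residual.
    coarse_flow: (B, 2, H, W) pixel displacements."""
    up = F.interpolate(coarse_flow, scale_factor=2, mode="bilinear",
                       align_corners=True) * 2        # rescale displacements
    residual = refine_net(feats, up)                  # fine local correction
    return up + residual
```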
arXiv Detail & Related papers (2023-10-21T05:52:25Z)
- Human MotionFormer: Transferring Human Motions with Vision Transformers [73.48118882676276]
Human motion transfer aims to transfer motion from a dynamic target person to a static source person for motion synthesis.
We propose Human MotionFormer, a hierarchical ViT framework that leverages global and local perceptions to capture large and subtle motion matching.
Experiments show that our Human MotionFormer sets the new state-of-the-art performance both qualitatively and quantitatively.
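The global/local matching can be pictured as cross-attention between driving and source tokens; a single-level sketch (shapes illustrative, not the paper's exact architecture):

```python
import torch

# Target (driving) tokens attend to source tokens at one hierarchy level; the
# real model stacks several such levels for global and local perception.
dim, heads = 256, 8
attn = torch.nn.MultiheadAttention(dim, heads, batch_first=True)

src_tokens = torch.randn(1, 196, dim)   # source image patches
tgt_tokens = torch.randn(1, 196, dim)   # driving (target motion) patches
matched, weights = attn(query=tgt_tokens, key=src_tokens, value=src_tokens)
# `weights` (1, 196, 196) is a soft correspondence usable for motion matching.
```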
arXiv Detail & Related papers (2023-02-22T11:42:44Z)
- Motion Transformer for Unsupervised Image Animation [37.35527776043379]
Image animation aims to animate a source image by using motion learned from a driving video.
Current state-of-the-art methods typically use convolutional neural networks (CNNs) to predict motion information.
We propose a new method, the motion transformer, which is the first attempt to build a motion estimator based on a vision transformer.
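For intuition only, a toy version of a transformer-based motion estimator: patch-embed the source/driving pair, encode with a transformer, and regress a coarse per-patch flow. This is a generic sketch, not the paper's model.

```python
import torch

class TinyMotionViT(torch.nn.Module):
    """Toy transformer motion estimator, purely illustrative."""
    def __init__(self, patch=16, dim=256):
        super().__init__()
        self.embed = torch.nn.Conv2d(6, dim, kernel_size=patch, stride=patch)
        layer = torch.nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = torch.nn.TransformerEncoder(layer, num_layers=4)
        self.to_flow = torch.nn.Linear(dim, 2)          # (dx, dy) per patch

    def forward(self, source, driving):                 # each (B, 3, H, W)
        x = self.embed(torch.cat([source, driving], dim=1))
        b, d, gh, gw = x.shape
        tokens = x.flatten(2).transpose(1, 2)           # (B, gh*gw, d)
        flow = self.to_flow(self.encoder(tokens))       # coarse per-patch flow
        return flow.transpose(1, 2).reshape(b, 2, gh, gw)
```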
arXiv Detail & Related papers (2022-09-28T12:04:58Z)
- Copy Motion From One to Another: Fake Motion Video Generation [53.676020148034034]
A compelling application of artificial intelligence is to generate a video of a target person performing arbitrary desired motion.
Current methods typically employ GANs with an L2 loss to assess the authenticity of the generated videos.
We propose a theoretically motivated Gromov-Wasserstein loss that facilitates learning the mapping from a pose to a foreground image.
Our method is able to generate realistic target person videos, faithfully copying complex motions from a source person.
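The intuition behind a Gromov-Wasserstein-style objective is to match relational structure across domains rather than raw coordinates. The simplified sketch below compares pairwise-distance matrices under an assumed one-to-one correspondence; the paper's loss additionally involves an optimal coupling.

```python
import torch

def gw_style_loss(pose_pts, image_pts):
    """Simplified Gromov-Wasserstein-style relational loss: penalize mismatch
    between the pairwise-distance structures of pose keypoints and
    corresponding generated-foreground points. pose_pts, image_pts: (N, 2)."""
    d_pose = torch.cdist(pose_pts, pose_pts)    # (N, N) intra-domain distances
    d_img = torch.cdist(image_pts, image_pts)
    return ((d_pose - d_img) ** 2).mean()
```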
arXiv Detail & Related papers (2022-05-03T08:45:22Z)
- Animating Pictures with Eulerian Motion Fields [90.30598913855216]
We present a fully automatic method for converting a still image into a realistic animated looping video.
We target scenes with continuous fluid motion, such as flowing water and billowing smoke.
We propose a novel video looping technique that flows features both forward and backward in time and then blends the results.
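A compressed sketch of the looping idea: warp the input both forward from the loop start and backward from the loop end using a static motion field, then blend so the two ends meet. It uses a constant-velocity approximation and backward warping in place of the paper's Euler integration and forward splatting.

```python
import torch
import torch.nn.functional as F

def warp(x, flow):
    """Backward-warp x (B, C, H, W) by a pixel-offset flow (B, 2, H, W)."""
    b, _, h, w = x.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pos = torch.stack([xs, ys]).float() + flow
    grid = torch.stack([2 * pos[:, 0] / (w - 1) - 1,
                        2 * pos[:, 1] / (h - 1) - 1], dim=-1)
    return F.grid_sample(x, grid, align_corners=True)

def looping_frame(feats, field, t, num_frames):
    """Blend features flowed forward from the loop start and backward from
    the loop end so frame 0 and frame num_frames coincide."""
    fwd = warp(feats, t * field)                        # flowed forward by t
    bwd = warp(feats, -(num_frames - t) * field)        # flowed back from end
    a = t / num_frames
    return (1 - a) * fwd + a * bwd
```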
arXiv Detail & Related papers (2020-11-30T18:59:06Z)
- First Order Motion Model for Image Animation [90.712718329677]
Image animation consists of generating a video sequence so that an object in a source image is animated according to the motion of a driving video.
Our framework addresses this problem without using any annotation or prior information about the specific object to animate.
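The "first order" in the title refers to a Taylor expansion of the motion around each learned keypoint: near a keypoint the backward mapping is approximately affine, as in this small sketch (shapes illustrative).

```python
import torch

def first_order_flow(grid, kp_src, kp_drv, jac_src, jac_drv):
    """First-order (Taylor) motion near one keypoint: map a driving-frame
    coordinate z to the source frame via
        z_src ~= p_src + J_src @ J_drv^{-1} @ (z - p_drv).
    grid: (H*W, 2) coordinates; kp_*: (2,); jac_*: (2, 2)."""
    J = jac_src @ torch.linalg.inv(jac_drv)
    return kp_src + (grid - kp_drv) @ J.T
```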
arXiv Detail & Related papers (2020-02-29T07:08:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.