Image Animation with Keypoint Mask
- URL: http://arxiv.org/abs/2112.10457v2
- Date: Tue, 21 Dec 2021 22:15:23 GMT
- Title: Image Animation with Keypoint Mask
- Authors: Or Toledano, Yanir Marmor, Dov Gertz
- Abstract summary: Motion transfer is the task of synthesizing future video frames of a single source image according to the motion from a given driving video.
In this work, we extract the structure from a keypoint heatmap, without an explicit motion representation.
The structures extracted from the image and the video are then used by a deep generator to warp the image according to the video.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motion transfer is the task of synthesizing future video frames of a single
source image according to the motion from a given driving video. Solving it
requires handling the challenging complexity of motion representation and the
unknown relations between the driving video and the source image. Despite its
difficulty, this problem has attracted great interest from researchers in
recent years, with gradual improvements. The goal is often framed as the
decoupling of motion and appearance, which may be solved by extracting the
motion from keypoint movement. We chose to tackle the generic, unsupervised
setting, where animation must be applied to any arbitrary object, without any
domain-specific model for the structure of the input. In this work, we extract
the structure from a keypoint heatmap, without an explicit motion
representation. The structures extracted from the image and the video are then
used by a deep generator to warp the image according to the video. We suggest
two variants of the structure, taken from different steps in the keypoint
module, and show superior qualitative pose and quantitative scores.
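The abstract's pipeline starts from a keypoint heatmap rather than an explicit motion representation. As a minimal illustration (a hypothetical sketch, not the paper's actual module), the standard soft-argmax operation below recovers a keypoint location as the expected coordinate under a softmax-normalized heatmap:

```python
import numpy as np

def softmax2d(heatmap, temperature=0.1):
    """Normalize a 2D heatmap into a probability map (numerically stable)."""
    h = heatmap / temperature
    h = h - h.max()          # shift for numerical stability
    p = np.exp(h)
    return p / p.sum()

def heatmap_to_keypoint(heatmap):
    """Soft-argmax: keypoint (x, y) as the expected location under the heatmap."""
    H, W = heatmap.shape
    p = softmax2d(heatmap)
    ys, xs = np.mgrid[0:H, 0:W]  # per-pixel row/column coordinate grids
    return float((p * xs).sum()), float((p * ys).sum())
```

A soft mask, in the spirit of the paper's title, could then be derived by thresholding the same normalized heatmap; the function names and the temperature value here are illustrative assumptions.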
Related papers
- Controllable Longer Image Animation with Diffusion Models [12.565739255499594]
We introduce an open-domain controllable image animation method using motion priors with video diffusion models.
Our method achieves precise control over the direction and speed of motion in the movable region by extracting the motion field information from videos.
We propose an efficient long-duration video generation method based on noise rescheduling, specifically tailored for image animation tasks.
arXiv Detail & Related papers (2024-05-27T16:08:00Z)
- VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models [58.93124686141781]
Video Motion Customization (VMC) is a novel one-shot tuning approach crafted to adapt temporal attention layers within video diffusion models.
Our approach introduces a novel motion distillation objective using residual vectors between consecutive frames as a motion reference.
We validate our method against state-of-the-art video generative models across diverse real-world motions and contexts.
arXiv Detail & Related papers (2023-12-01T06:50:11Z)
- Animation from Blur: Multi-modal Blur Decomposition with Motion Guidance [83.25826307000717]
We study the challenging problem of recovering detailed motion from a single motion-blurred image.
Existing solutions to this problem estimate a single image sequence without considering the motion ambiguity for each region.
In this paper, we explicitly account for such motion ambiguity, allowing us to generate multiple plausible solutions all in sharp detail.
arXiv Detail & Related papers (2022-07-20T18:05:53Z)
- QS-Craft: Learning to Quantize, Scrabble and Craft for Conditional Human Motion Animation [66.97112599818507]
This paper studies the task of conditional Human Motion Animation (cHMA).
Given a source image and a driving video, the model should animate the new frame sequence.
The key novelties come from the newly introduced three key steps: quantize, scrabble and craft.
arXiv Detail & Related papers (2022-03-22T11:34:40Z)
- NeuralDiff: Segmenting 3D objects that move in egocentric videos [92.95176458079047]
We study the problem of decomposing the observed 3D scene into a static background and a dynamic foreground.
This task is reminiscent of the classic background subtraction problem, but is significantly harder because all parts of the scene, static and dynamic, generate a large apparent motion.
In particular, we consider egocentric videos and further separate the dynamic component into objects and the actor that observes and moves them.
arXiv Detail & Related papers (2021-10-19T12:51:35Z)
- Motion Representations for Articulated Animation [34.54825980226596]
We propose novel motion representations for animating articulated objects consisting of distinct parts.
In a completely unsupervised manner, our method identifies object parts, tracks them in a driving video, and infers their motions by considering their principal axes.
Our model can animate a variety of objects, surpassing previous methods by a large margin on existing benchmarks.
arXiv Detail & Related papers (2021-04-22T18:53:56Z)
- Affine-modeled video extraction from a single motion blurred image [3.0080996413230667]
A motion-blurred image is the temporal average of multiple sharp frames over the exposure time.
In this work, we report a generalized video extraction method using affine motion modeling.
Experiments on both public datasets and real captured data validate the state-of-the-art performance of the reported technique.
arXiv Detail & Related papers (2021-04-08T13:59:14Z)
- Animating Pictures with Eulerian Motion Fields [90.30598913855216]
We present a fully automatic method for converting a still image into a realistic animated looping video.
We target scenes with continuous fluid motion, such as flowing water and billowing smoke.
We propose a novel video looping technique that flows features both forward and backward in time and then blends the results.
arXiv Detail & Related papers (2020-11-30T18:59:06Z)
- First Order Motion Model for Image Animation [90.712718329677]
Image animation consists of generating a video sequence so that an object in a source image is animated according to the motion of a driving video.
Our framework addresses this problem without using any annotation or prior information about the specific object to animate.
arXiv Detail & Related papers (2020-02-29T07:08:56Z)
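The "Affine-modeled video extraction" entry above rests on a simple observation: a motion-blurred image is the temporal average of multiple sharp frames over the exposure time. A tiny synthetic example (hypothetical data, not from any of the papers) makes this concrete:

```python
import numpy as np

# Hypothetical exposure: three sharp 1x5 frames of a bright dot
# moving one pixel to the right per frame.
frames = np.zeros((3, 1, 5))
for t in range(3):
    frames[t, 0, t + 1] = 1.0

# The blurred observation is the mean over the exposure time; the dot's
# energy is smeared along its trajectory: [0, 1/3, 1/3, 1/3, 0]
blurred = frames.mean(axis=0)
```

Video extraction from blur is the inverse problem: recovering the individual sharp frames (up to ordering ambiguity) from `blurred` alone.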
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.