Animating Pictures with Eulerian Motion Fields
- URL: http://arxiv.org/abs/2011.15128v1
- Date: Mon, 30 Nov 2020 18:59:06 GMT
- Title: Animating Pictures with Eulerian Motion Fields
- Authors: Aleksander Holynski, Brian Curless, Steven M. Seitz, Richard Szeliski
- Abstract summary: We show a fully automatic method for converting a still image into a realistic animated looping video.
We target scenes with continuous fluid motion, such as flowing water and billowing smoke.
We propose a novel video looping technique that flows features both forward and backward in time and then blends the results.
- Score: 90.30598913855216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we demonstrate a fully automatic method for converting a still
image into a realistic animated looping video. We target scenes with continuous
fluid motion, such as flowing water and billowing smoke. Our method relies on
the observation that this type of natural motion can be convincingly reproduced
from a static Eulerian motion description, i.e. a single, temporally constant
flow field that defines the immediate motion of a particle at a given 2D
location. We use an image-to-image translation network to encode motion priors
of natural scenes collected from online videos, so that for a new photo, we can
synthesize a corresponding motion field. The image is then animated using the
generated motion through a deep warping technique: pixels are encoded as deep
features, those features are warped via Eulerian motion, and the resulting
warped feature maps are decoded as images. In order to produce continuous,
seamlessly looping video textures, we propose a novel video looping technique
that flows features both forward and backward in time and then blends the
results. We demonstrate the effectiveness and robustness of our method by
applying it to a large collection of examples including beaches, waterfalls,
and flowing rivers.
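The abstract describes two concrete mechanisms: Euler integration of a single, temporally constant flow field to obtain each pixel's displacement at time t, and a looping scheme that warps both forward and backward in time and blends the results. The sketch below illustrates both in plain NumPy under our own simplifying assumptions (nearest-neighbour pixel splatting, linear blend weights); the paper itself warps learned deep features and decodes them with a network, so none of the function names below are the authors'.

```python
# Minimal NumPy sketch of (1) Euler integration of a static flow field and
# (2) looping by blending forward- and backward-warped results.
# Illustration only: the paper uses deep feature warping and softmax splatting.
import numpy as np

def integrate_flow(motion, steps):
    """Euler-integrate a constant per-frame flow `motion` (H, W, 2) for
    `steps` frames: F_{0->t}(p) = F_{0->t-1}(p) + M(p + F_{0->t-1}(p))."""
    h, w, _ = motion.shape
    disp = np.zeros_like(motion)
    ys, xs = np.mgrid[0:h, 0:w]
    for _ in range(steps):
        # Sample the static motion field at the displaced positions.
        sy = np.clip((ys + disp[..., 1]).round().astype(int), 0, h - 1)
        sx = np.clip((xs + disp[..., 0]).round().astype(int), 0, w - 1)
        disp = disp + motion[sy, sx]
    return disp

def splat(image, disp):
    """Crude forward warp: scatter each source pixel to p + disp(p)."""
    h, w, c = image.shape
    out = np.zeros((h, w, c), dtype=np.float64)
    weight = np.zeros((h, w, 1))
    ys, xs = np.mgrid[0:h, 0:w]
    ty = np.clip((ys + disp[..., 1]).round().astype(int), 0, h - 1)
    tx = np.clip((xs + disp[..., 0]).round().astype(int), 0, w - 1)
    np.add.at(out, (ty, tx), image)
    np.add.at(weight, (ty, tx), 1.0)
    return out / np.maximum(weight, 1e-8)

def looping_frame(image, motion, t, num_frames):
    """Blend a forward warp (t steps along +motion) with a backward warp
    (num_frames - t steps along -motion) so frame 0 and frame N match."""
    fwd = splat(image, integrate_flow(motion, t))
    bwd = splat(image, integrate_flow(-motion, num_frames - t))
    alpha = (num_frames - t) / num_frames   # weight of the forward branch
    return alpha * fwd + (1.0 - alpha) * bwd
```

Calling looping_frame(image, motion, t, num_frames) for t = 0 .. num_frames yields a sequence whose last frame coincides with its first, which is the seamless-looping property the abstract refers to.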
Related papers
- Reenact Anything: Semantic Video Motion Transfer Using Motion-Textual Inversion [9.134743677331517]
We propose using a pre-trained image-to-video model to disentangle appearance from motion.
Our method, called motion-textual inversion, leverages our observation that image-to-video models extract appearance mainly from the (latent) image input.
By operating on an inflated motion-text embedding containing multiple text/image embedding tokens per frame, we achieve a high temporal motion granularity.
Our approach does not require spatial alignment between the motion reference video and target image, generalizes across various domains, and can be applied to various tasks.
arXiv Detail & Related papers (2024-08-01T10:55:20Z)
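A minimal sketch of the "inflated motion-text embedding" idea from the Reenact Anything entry above: several learnable embedding tokens per frame are optimized against a motion reference clip while, as in textual inversion, the generator stays frozen. The ToyImageToVideo module, the tensor shapes, and the plain reconstruction loss are our assumptions for illustration; the actual method operates on a pre-trained image-to-video model.

```python
# Hedged sketch: optimize an inflated motion-text embedding with multiple
# tokens per frame against a reference video, keeping the generator frozen.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyImageToVideo(nn.Module):
    """Placeholder for a frozen image-to-video generator (not the paper's)."""
    def __init__(self, dim=64, channels=3, size=32):
        super().__init__()
        self.proj = nn.Linear(dim, channels * size * size)
        self.channels, self.size = channels, size

    def forward(self, image, motion_text_emb):
        # motion_text_emb: (frames, tokens_per_frame, dim) -> one row per frame
        f = motion_text_emb.shape[0]
        pooled = motion_text_emb.mean(dim=1)                    # (frames, dim)
        delta = self.proj(pooled).view(f, self.channels, self.size, self.size)
        return image.unsqueeze(0) + delta                       # (frames, C, H, W)

frames, tokens_per_frame, dim = 8, 4, 64
model = ToyImageToVideo(dim).eval()
for p in model.parameters():
    p.requires_grad_(False)                 # generator stays frozen

# The inflated embedding: multiple learnable tokens for every frame.
motion_emb = nn.Parameter(torch.randn(frames, tokens_per_frame, dim) * 0.02)
opt = torch.optim.Adam([motion_emb], lr=1e-2)

target_image = torch.rand(3, 32, 32)             # appearance from the image
reference_video = torch.rand(frames, 3, 32, 32)  # motion from this clip

for step in range(100):
    opt.zero_grad()
    pred = model(target_image, motion_emb)
    loss = F.mse_loss(pred, reference_video)     # assumed stand-in objective
    loss.backward()
    opt.step()
```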
- Controllable Longer Image Animation with Diffusion Models [12.565739255499594]
We introduce an open-domain controllable image animation method using motion priors with video diffusion models.
Our method achieves precise control over the direction and speed of motion in the movable region by extracting the motion field information from videos.
We propose an efficient long-duration video generation method based on noise reschedule specifically tailored for image animation tasks.
arXiv Detail & Related papers (2024-05-27T16:08:00Z)
- LivePhoto: Real Image Animation with Text-guided Motion Control [51.31418077586208]
This work presents a practical system, named LivePhoto, which allows users to animate an image of their interest with text descriptions.
We first establish a strong baseline that helps a well-learned text-to-image generator (i.e., Stable Diffusion) take an image as a further input.
We then equip the improved generator with a motion module for temporal modeling and propose a carefully designed training pipeline to better link texts and motions.
arXiv Detail & Related papers (2023-12-05T17:59:52Z)
- VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models [58.93124686141781]
Video Motion Customization (VMC) is a novel one-shot tuning approach crafted to adapt temporal attention layers within video diffusion models.
Our approach introduces a novel motion distillation objective using residual vectors between consecutive frames as a motion reference.
We validate our method against state-of-the-art video generative models across diverse real-world motions and contexts.
arXiv Detail & Related papers (2023-12-01T06:50:11Z)
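A hedged sketch of the motion-distillation objective described in the VMC entry above: the motion reference is the set of residual vectors between consecutive (latent) frames, and training pulls the model's predicted residuals toward them. The latent shapes, the helper names, and the cosine form of the alignment are assumptions made here for illustration, not the authors' exact objective.

```python
# Hedged sketch: align predicted frame-to-frame residuals with the residuals
# of a reference clip, which serve as the motion reference.
import torch
import torch.nn.functional as F

def frame_residuals(latents):
    """latents: (T, C, H, W) -> residual vectors between consecutive frames."""
    return latents[1:] - latents[:-1]                     # (T-1, C, H, W)

def motion_distillation_loss(pred_latents, ref_latents):
    """Compare residual directions with cosine similarity (scale-insensitive)."""
    pred_res = frame_residuals(pred_latents).flatten(1)   # (T-1, C*H*W)
    ref_res = frame_residuals(ref_latents).flatten(1)
    cos = F.cosine_similarity(pred_res, ref_res, dim=1)
    return (1.0 - cos).mean()

# Toy usage: 8-frame latent videos of shape (4, 16, 16).
ref = torch.randn(8, 4, 16, 16)
pred = torch.randn(8, 4, 16, 16, requires_grad=True)
loss = motion_distillation_loss(pred, ref)
loss.backward()
```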
- AnimateAnything: Fine-Grained Open Domain Image Animation with Motion Guidance [13.416296247896042]
We introduce an open domain image animation method that leverages the motion prior of video diffusion model.
Our approach introduces targeted motion area guidance and motion strength guidance, enabling precise control of the movable area and its motion speed.
We validate the effectiveness of our method through rigorous experiments on an open-domain dataset.
arXiv Detail & Related papers (2023-11-21T03:47:54Z)
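One plausible way to expose the "motion area guidance" and "motion strength guidance" from the AnimateAnything entry above is as extra conditioning channels: a binary mask marks the movable region and a scalar strength is broadcast to a map, both concatenated to the denoiser input. The concatenation scheme and all names below are our assumptions, not the paper's implementation.

```python
# Hedged sketch: build conditioning channels for a movable-area mask and a
# per-sample motion strength, riding along with the noisy video latent.
import torch

def build_motion_conditioning(latent, movable_mask, motion_strength):
    """latent: (B, C, T, H, W); movable_mask: (B, 1, H, W) in {0, 1};
    motion_strength: (B,) scalar speed control per sample."""
    b, _, t, h, w = latent.shape
    mask = movable_mask.unsqueeze(2).expand(b, 1, t, h, w)
    strength = motion_strength.view(b, 1, 1, 1, 1).expand(b, 1, t, h, w)
    # Extra channels are concatenated to the denoiser input.
    return torch.cat([latent, mask, strength], dim=1)

latent = torch.randn(2, 4, 16, 32, 32)
mask = (torch.rand(2, 1, 32, 32) > 0.5).float()   # user-specified movable area
strength = torch.tensor([0.2, 0.8])               # slow vs. fast motion
cond_input = build_motion_conditioning(latent, mask, strength)
print(cond_input.shape)  # torch.Size([2, 6, 16, 32, 32])
```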
- Generative Image Dynamics [80.70729090482575]
We present an approach to modeling an image-space prior on scene motion.
Our prior is learned from a collection of motion trajectories extracted from real video sequences.
arXiv Detail & Related papers (2023-09-14T17:54:01Z)
- Image Animation with Keypoint Mask [0.0]
Motion transfer is the task of synthesizing future video frames of a single source image according to the motion from a given driving video.
In this work, we extract the structure from a keypoint heatmap, without an explicit motion representation.
The structures extracted from the image and the video are then used by a deep generator to warp the image according to the video.
arXiv Detail & Related papers (2021-12-20T11:35:06Z)
- Controllable Animation of Fluid Elements in Still Images [9.194534529360691]
We propose a method to interactively control the animation of fluid elements in still images to generate cinemagraphs.
We represent the motion of such fluid elements in the image in the form of a constant 2D optical flow map.
We devise a novel UNet based architecture to autoregressively generate future frames using the refined optical flow map.
arXiv Detail & Related papers (2021-12-06T13:53:08Z)
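A small sketch of the autoregressive rollout described in the fluid-elements entry above: a constant 2D optical-flow map drives a network that predicts each future frame from the previous one. ToyFlowUNet stands in for the paper's UNet and is untrained, and in the actual method the flow map is first refined, so this only illustrates the data flow.

```python
# Hedged sketch: autoregressive frame generation conditioned on a constant
# 2D optical-flow map, with a toy stand-in for the UNet.
import torch
import torch.nn as nn

class ToyFlowUNet(nn.Module):
    """Stand-in: predicts the next frame from (previous frame, flow map)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Conv2d(channels + 2, channels, kernel_size=3, padding=1)

    def forward(self, prev_frame, flow):
        return self.net(torch.cat([prev_frame, flow], dim=1))

def animate(image, flow, num_frames, model):
    """Autoregressively roll out frames under a constant flow map."""
    frames, prev = [image], image
    for _ in range(num_frames - 1):
        prev = model(prev, flow)
        frames.append(prev)
    return torch.stack(frames, dim=1)              # (B, T, C, H, W)

model = ToyFlowUNet()
still = torch.rand(1, 3, 64, 64)                   # the input still image
flow = torch.zeros(1, 2, 64, 64); flow[:, 0] = 0.5 # constant rightward flow
video = animate(still, flow, num_frames=16, model=model)
print(video.shape)  # torch.Size([1, 16, 3, 64, 64])
```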
- Learning Fine-Grained Motion Embedding for Landscape Animation [140.57889994591494]
We propose a model named FGLA to generate high-quality and realistic videos by learning Fine-Grained motion embedding.
To train and evaluate on diverse time-lapse videos, we build the largest high-resolution Time-lapse video dataset with Diverse scenes.
Our method achieves relative improvements of 19% on LPIPS and 5.6% on FVD compared with state-of-the-art methods on our dataset.
arXiv Detail & Related papers (2021-09-06T02:47:11Z)
- First Order Motion Model for Image Animation [90.712718329677]
Image animation consists of generating a video sequence so that an object in a source image is animated according to the motion of a driving video.
Our framework addresses this problem without using any annotation or prior information about the specific object to animate.
arXiv Detail & Related papers (2020-02-29T07:08:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.