Controllable Animation of Fluid Elements in Still Images
- URL: http://arxiv.org/abs/2112.03051v3
- Date: Mon, 25 Sep 2023 05:52:17 GMT
- Title: Controllable Animation of Fluid Elements in Still Images
- Authors: Aniruddha Mahapatra and Kuldeep Kulkarni
- Abstract summary: We propose a method to interactively control the animation of fluid elements in still images to generate cinemagraphs.
We represent the motion of such fluid elements in the image in the form of a constant 2D optical flow map.
We devise a novel UNet-based architecture to autoregressively generate future frames using the refined optical flow map.
- Score: 9.194534529360691
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose a method to interactively control the animation of fluid elements
in still images to generate cinemagraphs. Specifically, we focus on the
animation of fluid elements such as water, smoke, and fire, which have the properties
of repeating textures and continuous fluid motion. Taking inspiration from
prior works, we represent the motion of such fluid elements in the image in the
form of a constant 2D optical flow map. To this end, we allow the user to
provide any number of arrow directions and their associated speeds along with a
mask of the regions the user wants to animate. The user-provided input arrow
directions, their corresponding speed values, and the mask are then converted
into a dense flow map representing a constant optical flow map (FD). We observe
that FD, obtained using simple exponential operations, can closely approximate
the plausible motion of elements in the image. We further refine the computed
dense optical flow map FD using a generative adversarial network (GAN) to
obtain a more realistic flow map. We devise a novel UNet-based architecture to
autoregressively generate future frames using the refined optical flow map by
forward-warping the input image features at different resolutions. We conduct
extensive experiments on a publicly available dataset and show that our method
is superior to the baselines in terms of qualitative and quantitative metrics.
In addition, we show the qualitative animations of the objects in directions
that did not exist in the training set and provide a way to synthesize videos
that otherwise would not exist in the real world.
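The abstract does not spell out the "simple exponential operations"; below is a minimal sketch of one way sparse arrow hints could be densified into a constant flow map with exponentially decaying distance weights, restricted to the user mask. The function name densify_hints, the Gaussian-style falloff, and the parameter sigma are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def densify_hints(arrows, mask, shape, sigma=50.0):
    """Spread sparse user arrows into a dense constant flow map F_D.

    A rough sketch (not the paper's exact formulation): each pixel's
    flow is a distance-weighted average of the user arrows, with
    exponentially decaying weights, zeroed outside the user mask.

    arrows: list of ((y, x), (dy, dx)) hint positions and velocities
    mask:   (H, W) bool array marking the region to animate
    shape:  (H, W)
    """
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float32)
    flow = np.zeros((H, W, 2), dtype=np.float32)
    weight_sum = np.zeros((H, W), dtype=np.float32)
    for (py, px), (dy, dx) in arrows:
        d2 = (ys - py) ** 2 + (xs - px) ** 2
        w = np.exp(-d2 / (2.0 * sigma ** 2))   # exponential falloff with distance
        flow[..., 0] += w * dy
        flow[..., 1] += w * dx
        weight_sum += w
    flow /= np.maximum(weight_sum, 1e-8)[..., None]
    flow[~mask] = 0.0                           # animate only the masked region
    return flow
```

In the paper, this dense map FD is then refined by a GAN before being used for synthesis; the sketch above only produces the initial hint-driven map.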
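Frame synthesis forward-warps the input image features with the refined flow. The paper does this with a learned UNet over features at multiple resolutions; the NumPy splat below is only a naive single-resolution illustration of forward warping, averaging where several source pixels land on the same target.

```python
import numpy as np

def forward_warp(feat, flow):
    """Naive forward warp (splat) of a float feature map by a flow field.

    feat: (H, W, C) float features, flow: (H, W, 2) displacement in pixels.
    A minimal sketch of the idea; the paper's learned, multi-resolution
    warping is not reproduced here.
    """
    H, W, C = feat.shape
    out = np.zeros_like(feat)
    hits = np.zeros((H, W), dtype=np.float32)
    ys, xs = np.mgrid[0:H, 0:W]
    ty = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    tx = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    np.add.at(out, (ty, tx), feat)           # scatter-add features to targets
    np.add.at(hits, (ty, tx), 1.0)
    out /= np.maximum(hits, 1.0)[..., None]  # average colliding splats
    return out

# Because the flow is constant, later frames can be sketched by warping
# with an accumulated displacement (roughly t * flow for frame t), where
# decode is a hypothetical feature-to-image decoder:
# frames = [decode(forward_warp(feat0, t * flow)) for t in range(1, T)]
```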
Related papers
- Motion-Aware Video Frame Interpolation [49.49668436390514] (2024-02-05)
We introduce a Motion-Aware Video Frame Interpolation (MA-VFI) network, which directly estimates intermediate optical flow from consecutive frames.
It not only extracts global semantic relationships and spatial details from input frames with different receptive fields, but also effectively reduces the required computational cost and complexity.
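MA-VFI's fusion network is not specified in this summary; as a generic, hedged sketch, intermediate flows can be used to backward-warp both inputs to the middle time step and blend them. Names and the uniform blend are illustrative.

```python
def interpolate_middle(frame0, frame1, f_t0, f_t1, warp):
    """Synthesize the middle frame from intermediate flows f_t0
    (middle -> frame0) and f_t1 (middle -> frame1). `warp` is any
    bilinear backward-warp helper; the 50/50 blend is the generic
    formulation, not MA-VFI's exact fusion."""
    return 0.5 * warp(frame0, f_t0) + 0.5 * warp(frame1, f_t1)
```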
- Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators [19.853978560075305] (2024-01-31)
Motion guidance is a technique that allows a user to specify dense, complex motion fields that indicate where each pixel in an image should move.
We demonstrate that our technique works on complex motions and produces high quality edits of real and generated images.
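The summary omits the mechanism; conceptually, diffusion sampling can be steered by the gradient of a flow-matching loss taken through a differentiable optical flow estimator. The sketch below is an assumption-laden outline (flow_net, the L1 loss, and the weighting are illustrative), not the paper's code.

```python
import torch

def guidance_grad(x_edit, x_src, target_flow, flow_net, weight=1.0):
    """One guidance step: push the current sample so the estimated flow
    from source to edit matches the user's target motion field.

    flow_net is assumed to be any differentiable optical flow estimator
    returning a (B, 2, H, W) flow tensor.
    """
    x = x_edit.detach().requires_grad_(True)
    pred_flow = flow_net(x_src, x)
    loss = weight * (pred_flow - target_flow).abs().mean()
    (grad,) = torch.autograd.grad(loss, x)
    return grad  # subtract (scaled) from the diffusion update
```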
- MoVideo: Motion-Aware Video Generation with Diffusion Models [97.03352319694795] (2023-11-19)
We propose a novel motion-aware generation (MoVideo) framework that takes motion into consideration from two aspects: video depth and optical flow.
MoVideo achieves state-of-the-art results in both text-to-video and image-to-video generation, showing promising prompt consistency, frame consistency and visual quality.
- Generative Image Dynamics [80.70729090482575] (2023-09-14)
We present an approach to modeling an image-space prior on scene motion.
Our prior is learned from a collection of motion trajectories extracted from real video sequences.
- MPI-Flow: Learning Realistic Optical Flow with Multiplane Images [18.310665144874775] (2023-09-13)
We investigate generating realistic optical flow datasets from real-world images.
To generate highly realistic new images, we construct a layered depth representation, known as multiplane images (MPI), from single-view images.
To ensure the realism of motion, we present an independent object motion module that can separate the camera and dynamic object motion in MPI.
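For reference, rendering an MPI reduces to back-to-front alpha compositing of its RGBA planes with the standard over operator; a minimal sketch follows (constructing the planes from a single image and reprojecting them to a novel view, the hard parts, are omitted).

```python
import numpy as np

def composite_mpi(planes):
    """Composite multiplane images back to front with the over operator.

    planes: list of (H, W, 4) float RGBA layers ordered far to near.
    """
    H, W, _ = planes[0].shape
    out = np.zeros((H, W, 3), dtype=np.float32)
    for p in planes:                      # far to near
        rgb, a = p[..., :3], p[..., 3:4]
        out = rgb * a + out * (1.0 - a)   # standard "over" compositing
    return out
```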
- Inferring Fluid Dynamics via Inverse Rendering [37.87293082992423] (2023-04-10)
Humans have a strong intuitive understanding of physical processes such as falling fluids from just a glimpse of a scene.
This work achieves such a photo-to-fluid reconstruction functionality learned from unannotated videos.
- Progressive Temporal Feature Alignment Network for Video Inpainting [51.26380898255555] (2021-04-08)
Video inpainting aims to fill spatio-temporal "corrupted" regions with plausible content.
Current methods achieve this goal through attention, flow-based warping, or 3D temporal convolution.
We propose 'Progressive Temporal Feature Alignment Network', which progressively enriches features extracted from the current frame with the warped feature from neighbouring frames.
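A crude, hedged sketch of the underlying idea: backward-warp a neighbouring frame's features with optical flow and use them to fill the current frame's corrupted regions. The paper's learned, progressive alignment and validity handling are not reproduced; names are illustrative.

```python
import numpy as np

def backward_warp(feat, flow):
    """Bilinear backward warp: sample neighbour features at x + flow(x)."""
    H, W, C = feat.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float32)
    sy = np.clip(ys + flow[..., 0], 0, H - 1)
    sx = np.clip(xs + flow[..., 1], 0, W - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = (sy - y0)[..., None], (sx - x0)[..., None]
    return ((feat[y0, x0] * (1 - wy) + feat[y1, x0] * wy) * (1 - wx)
            + (feat[y0, x1] * (1 - wy) + feat[y1, x1] * wy) * wx)

def enrich(cur_feat, nbr_feat, flow, hole_mask):
    """Fill holes in the current frame's features with warped neighbour
    features; a crude stand-in for learned progressive alignment."""
    warped = backward_warp(nbr_feat, flow)
    return np.where(hole_mask[..., None], warped, cur_feat)
```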
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057] (2021-03-04)
Motion blur in an image is of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
- Animating Pictures with Eulerian Motion Fields [90.30598913855216] (2020-11-30)
We present a fully automatic method for converting a still image into a realistic animated looping video.
We target scenes with continuous fluid motion, such as flowing water and billowing smoke.
We propose a novel video looping technique that flows features both forward and backward in time and then blends the results.
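A rough sketch of that symmetric blending, assuming a constant Eulerian flow field and a forward-warp (splatting) helper like the one sketched under the main abstract; the linear blend weights are illustrative, not the paper's exact weighting.

```python
def looping_frame(feat0, flow, t, N, forward_warp):
    """Blend features warped forward from the loop start and backward
    from the loop end so that frame N matches frame 0.

    flow: constant Eulerian motion field; forward_warp: any splatting
    function. At t = 0 the output is feat0, and at t = N it is again
    feat0, so the video loops seamlessly.
    """
    ahead = forward_warp(feat0, t * flow)         # pushed forward from frame 0
    behind = forward_warp(feat0, (t - N) * flow)  # pulled back from the loop end
    alpha = t / float(N)
    return (1.0 - alpha) * ahead + alpha * behind
```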