Image2Gif: Generating Continuous Realistic Animations with Warping NODEs
- URL: http://arxiv.org/abs/2205.04519v1
- Date: Mon, 9 May 2022 18:39:47 GMT
- Title: Image2Gif: Generating Continuous Realistic Animations with Warping NODEs
- Authors: Jurijs Nazarovs, Zhichun Huang
- Abstract summary: We propose a new framework, Warping Neural ODE, for generating a smooth animation (video frame interpolation) in a continuous manner.
This allows us to achieve the smoothness and the realism of an animation with infinitely small time steps between the frames.
We show the application of our work in generating an animation given two frames, in different training settings, including Generative Adversarial Network (GAN) and with $L_2$ loss.
- Score: 0.8218964199015377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating smooth animations from a limited number of sequential observations
has a number of applications in vision. For example, it can be used to increase
the number of frames per second, or to generate a new trajectory based only on the
first and last frames, e.g., the motion of facial expressions. Although the observed
data (frames) are discrete, the problem of generating a new trajectory is a
continuous one. In addition, to be perceptually realistic, the domain of the image
should not change drastically along the trajectory of changes. In this paper,
we propose a new framework, Warping Neural ODE, for generating a smooth
animation (video frame interpolation) in a continuous manner, given two
("farther apart") frames, denoting the start and the end of the animation. The
key feature of our framework is the use of a continuous spatial transformation
of the image based on a vector field derived from a system of differential
equations. This allows us to achieve the smoothness and realism of an
animation with infinitely small time steps between the frames. We show the
application of our work in generating an animation given two frames, in
different training settings, including Generative Adversarial Network (GAN) and
with $L_2$ loss.
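To make the idea concrete, below is a minimal sketch of the warping idea, not the authors' implementation: it assumes PyTorch, stands in a fixed-step Euler loop for a proper ODE solver, and uses a small CNN (here called VectorFieldNet) to predict the vector field that displaces the sampling grid over time. The names warp_ode and identity_grid and all hyperparameters are illustrative choices; the $L_2$ term in the toy loop could be swapped for a GAN critic, matching the two training settings mentioned in the abstract.

```python
# Minimal, illustrative sketch (assumed: PyTorch; a forward-Euler loop stands in
# for an adaptive ODE solver; the network architecture is a toy choice, not the
# one from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F


class VectorFieldNet(nn.Module):
    """Toy CNN predicting a 2-channel velocity field (dx, dy) from the current warped frame."""

    def __init__(self, channels: int = 3, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1),
            nn.Tanh(),
            nn.Conv2d(hidden, 2, 3, padding=1),
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.net(img)


def identity_grid(n: int, h: int, w: int, device) -> torch.Tensor:
    """Sampling grid in [-1, 1]^2 with shape (n, h, w, 2), as expected by F.grid_sample."""
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=device),
        torch.linspace(-1, 1, w, device=device),
        indexing="ij",
    )
    return torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, h, w, 2).clone()


def warp_ode(frame0: torch.Tensor, field_net: nn.Module, n_steps: int = 16) -> list:
    """Integrate d(grid)/dt = v(warped frame) with Euler steps; return all intermediate warps."""
    n, _, h, w = frame0.shape
    grid = identity_grid(n, h, w, frame0.device)
    dt = 1.0 / n_steps
    frames = [frame0]
    for _ in range(n_steps):
        warped = F.grid_sample(frame0, grid, align_corners=True)
        velocity = field_net(warped).permute(0, 2, 3, 1)  # (n, h, w, 2)
        grid = grid + dt * velocity                        # one Euler step
        frames.append(F.grid_sample(frame0, grid, align_corners=True))
    return frames


# Toy training loop: push the final warped frame toward the end frame with an L2 loss
# (a GAN critic on the generated frames would be the alternative training setting).
field_net = VectorFieldNet()
frame_start, frame_end = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
opt = torch.optim.Adam(field_net.parameters(), lr=1e-3)
for _ in range(5):
    frames = warp_ode(frame_start, field_net)
    loss = F.mse_loss(frames[-1], frame_end)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the grid evolves through many small integration steps, every intermediate warp in `frames` is itself a plausible in-between frame, which is what gives the continuous-time formulation its smoothness.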
Related papers
- Framer: Interactive Frame Interpolation [73.06734414930227]
Framer targets producing smoothly transitioning frames between two images as per user creativity.
Our approach supports customizing the transition process by tailoring the trajectory of some selected keypoints.
It is noteworthy that our system also offers an "autopilot" mode, where we introduce a module to estimate the keypoints and the trajectory automatically.
arXiv Detail & Related papers (2024-10-24T17:59:51Z) - UniAnimate: Taming Unified Video Diffusion Models for Consistent Human Image Animation [53.16986875759286]
We present a UniAnimate framework to enable efficient and long-term human video generation.
We map the reference image along with the posture guidance and noise video into a common feature space.
We also propose a unified noise input that supports random noised input as well as first frame conditioned input.
arXiv Detail & Related papers (2024-06-03T10:51:10Z) - AnimateZero: Video Diffusion Models are Zero-Shot Image Animators [63.938509879469024]
We propose AnimateZero to unveil the pre-trained text-to-video diffusion model, i.e., AnimateDiff.
For appearance control, we borrow intermediate latents and their features from the text-to-image (T2I) generation.
For temporal control, we replace the global temporal attention of the original T2V model with our proposed positional-corrected window attention.
arXiv Detail & Related papers (2023-12-06T13:39:35Z) - MagicAnimate: Temporally Consistent Human Image Animation using
Diffusion Model [74.84435399451573]
This paper studies the human image animation task, which aims to generate a video of a certain reference identity following a particular motion sequence.
Existing animation works typically employ the frame-warping technique to animate the reference image towards the target motion.
We introduce MagicAnimate, a diffusion-based framework that aims at enhancing temporal consistency, preserving reference image faithfully, and improving animation fidelity.
arXiv Detail & Related papers (2023-11-27T18:32:31Z) - AnimateAnything: Fine-Grained Open Domain Image Animation with Motion
Guidance [13.416296247896042]
We introduce an open domain image animation method that leverages the motion prior of video diffusion model.
Our approach introduces targeted motion area guidance and motion strength guidance, enabling precise control of the movable area and its motion speed.
We validate the effectiveness of our method through rigorous experiments on an open-domain dataset.
arXiv Detail & Related papers (2023-11-21T03:47:54Z) - Regenerating Arbitrary Video Sequences with Distillation Path-Finding [6.687073794084539]
This paper presents an interactive framework to generate new sequences according to the users' preference on the starting frame.
To achieve this effectively, we first learn the feature correlation on the frameset of the given video through a proposed network called RSFNet.
Then, we develop a novel path-finding algorithm, SDPF, which formulates the knowledge of motion directions of the source video to estimate the smooth and plausible sequences.
arXiv Detail & Related papers (2023-11-13T09:05:30Z) - TTVFI: Learning Trajectory-Aware Transformer for Video Frame
Interpolation [50.49396123016185]
Video frame interpolation (VFI) aims to synthesize an intermediate frame between two consecutive frames.
We propose a novel Trajectory-aware Transformer for Video Frame Interpolation (TTVFI).
Our method outperforms other state-of-the-art methods in four widely-used VFI benchmarks.
arXiv Detail & Related papers (2022-07-19T03:37:49Z) - Wassersplines for Stylized Neural Animation [36.43240177060714]
Much of computer-generated animation is created by manipulating meshes with rigs.
We introduce Wassersplines, a novel inference method for animating unstructured densities.
We demonstrate our tool on various problems to produce temporally-coherent animations without meshing or rigging.
arXiv Detail & Related papers (2022-01-28T05:36:02Z) - Deep Animation Video Interpolation in the Wild [115.24454577119432]
In this work, we formally define and study the animation video interpolation problem for the first time.
We propose an effective framework, AnimeInterp, with two dedicated modules in a coarse-to-fine manner.
Notably, AnimeInterp shows favorable perceptual quality and robustness for animation scenarios in the wild.
arXiv Detail & Related papers (2021-04-06T13:26:49Z)