Enhanced Deep Animation Video Interpolation
- URL: http://arxiv.org/abs/2206.12657v1
- Date: Sat, 25 Jun 2022 14:00:48 GMT
- Title: Enhanced Deep Animation Video Interpolation
- Authors: Wang Shen, Cheng Ming, Wenbo Bao, Guangtao Zhai, Li Chen, Zhiyong Gao
- Abstract summary: Existing learning-based frame interpolation algorithms extract consecutive frames from high-speed natural videos to train the model.
Compared to natural videos, cartoon videos usually have a low frame rate.
We present AutoFI, a method to automatically render training data for deep animation video interpolation.
- Score: 47.7046169124373
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Existing learning-based frame interpolation algorithms extract consecutive
frames from high-speed natural videos to train the model. Compared to natural
videos, cartoon videos usually have a low frame rate. Besides, the motion
between consecutive cartoon frames is typically nonlinear, which breaks the
linear motion assumption of interpolation algorithms. Thus, it is unsuitable
to generate a training set directly from cartoon videos. To better adapt
frame interpolation algorithms from natural video to animation video, we present
AutoFI, a simple and effective method to automatically render training data for
deep animation video interpolation. AutoFI takes a layered architecture to
render synthetic data, which ensures the assumption of linear motion.
Experimental results show that AutoFI performs favorably in training both DAIN
and ANIN. However, most frame interpolation algorithms will still fail in
error-prone areas, such as fast motion or large occlusion. Besides AutoFI, we
also propose a plug-and-play sketch-based post-processing module, named SktFI,
to manually refine the final results using user-provided sketches. With AutoFI
and SktFI, the interpolated animation frames show high perceptual quality.
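The layered rendering idea behind AutoFI can be made concrete with a small sketch. The snippet below is an illustrative assumption, not the authors' implementation: a single foreground layer is translated linearly over a static background, so the rendered middle frame is an exact ground-truth interpolation target that satisfies the linear motion assumption (the `composite` and `render_triplet` helpers and the random stand-in layers are hypothetical).

```python
# Minimal sketch of layered, linear-motion data rendering (assumed setup,
# not AutoFI's actual code): composite a foreground layer over a static
# background at linearly interpolated positions, so the middle frame is an
# exact ground-truth target for frame interpolation training.
import numpy as np

def composite(background, sprite, alpha, offset):
    """Paste a sprite with an alpha mask onto the background at (y, x)."""
    out = background.copy()
    h, w = sprite.shape[:2]
    y, x = offset
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = alpha * sprite + (1.0 - alpha) * region
    return out

def render_triplet(background, sprite, alpha, start_xy, end_xy):
    """Render (frame0, frame_mid, frame1) with the sprite moving linearly."""
    frames = []
    for t in (0.0, 0.5, 1.0):  # position is a linear blend of the endpoints
        y = round((1 - t) * start_xy[0] + t * end_xy[0])
        x = round((1 - t) * start_xy[1] + t * end_xy[1])
        frames.append(composite(background, sprite, alpha, (y, x)))
    return frames  # frame_mid is the supervision target at t = 0.5

# Toy usage with random arrays standing in for rendered cartoon layers.
bg = np.random.rand(256, 256, 3)
fg = np.random.rand(32, 32, 3)
mask = np.ones((32, 32, 1))  # fully opaque foreground for simplicity
f0, f_mid, f1 = render_triplet(bg, fg, mask, start_xy=(40, 40), end_xy=(80, 120))
```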
Related papers
- TTVFI: Learning Trajectory-Aware Transformer for Video Frame
Interpolation [50.49396123016185]
Video frame interpolation (VFI) aims to synthesize an intermediate frame between two consecutive frames.
We propose a novel Trajectory-aware Transformer for Video Frame Interpolation (TTVFI).
Our method outperforms other state-of-the-art methods in four widely-used VFI benchmarks.
arXiv Detail & Related papers (2022-07-19T03:37:49Z) - Unsupervised Video Interpolation by Learning Multilayered 2.5D Motion
Fields [75.81417944207806]
This paper presents a self-supervised approach to video frame interpolation that requires only a single video.
We parameterize the video motions by solving an ordinary differential equation (ODE) defined on a time-varying motion field.
This implicit neural representation learns the video as a space-time continuum, allowing frame interpolation at any temporal resolution.
arXiv Detail & Related papers (2022-04-21T06:17:05Z) - Video Frame Interpolation without Temporal Priors [91.04877640089053]
Video frame interpolation aims to synthesize non-existent intermediate frames in a video sequence.
The temporal priors of videos, i.e. frames per second (FPS) and frame exposure time, may vary across different camera sensors.
We devise a novel optical flow refinement strategy for better synthesizing results.
arXiv Detail & Related papers (2021-12-02T12:13:56Z) - Render In-between: Motion Guided Video Synthesis for Action
Interpolation [53.43607872972194]
We propose a motion-guided frame-upsampling framework that is capable of producing realistic human motion and appearance.
A novel motion model is trained to infer the non-linear skeletal motion between frames by leveraging a large-scale motion-capture dataset.
Our pipeline only requires low-frame-rate videos and unpaired human motion data but does not require high-frame-rate videos for training.
arXiv Detail & Related papers (2021-11-01T15:32:51Z) - Deep Animation Video Interpolation in the Wild [115.24454577119432]
In this work, we formally define and study the animation video interpolation problem for the first time.
We propose an effective framework, AnimeInterp, with two dedicated modules in a coarse-to-fine manner.
Notably, AnimeInterp shows favorable perceptual quality and robustness for animation scenarios in the wild.
arXiv Detail & Related papers (2021-04-06T13:26:49Z) - Texture-aware Video Frame Interpolation [0.0]
We study the impact of video textures on video frame synthesis, and propose a novel framework where, given an algorithm, separate models are trained on different textures.
Our study shows that video texture has a significant impact on the performance of frame interpolation models, and it is beneficial to have separate models specifically adapted to these texture classes, instead of training a single model that tries to learn generic motion.
arXiv Detail & Related papers (2021-02-26T14:46:56Z)