Meta-Interpolation: Time-Arbitrary Frame Interpolation via Dual Meta-Learning
- URL: http://arxiv.org/abs/2207.13670v1
- Date: Wed, 27 Jul 2022 17:36:23 GMT
- Title: Meta-Interpolation: Time-Arbitrary Frame Interpolation via Dual Meta-Learning
- Authors: Shixing Yu, Yiyang Ma, Wenhan Yang, Wei Xiang, Jiaying Liu
- Abstract summary: We consider processing different time-steps with adaptively generated convolutional kernels in a unified way with the help of meta-learning.
We develop a dual meta-learned frame interpolation framework to synthesize intermediate frames with the guidance of context information and optical flow.
- Score: 65.85319901760478
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing video frame interpolation methods can only interpolate the frame at
a given intermediate time-step, e.g., 1/2. In this paper, we aim to explore a
more generalized kind of video frame interpolation, namely interpolation at an
arbitrary time-step. To this end, we consider processing different time-steps with
adaptively generated convolutional kernels in a unified way with the help of
meta-learning. Specifically, we develop a dual meta-learned frame interpolation
framework to synthesize intermediate frames with the guidance of context
information and optical flow as well as taking the time-step as side
information. First, a content-aware meta-learned flow refinement module is
built to improve the accuracy of the optical flow estimation based on the
down-sampled version of the input frames. Second, with the refined optical flow
and the time-step as the input, a motion-aware meta-learned frame interpolation
module generates the convolutional kernels for every pixel used in the
convolution operations on the feature map of the coarse warped version of the
input frames to generate the predicted frame. Extensive qualitative and
quantitative evaluations, as well as ablation studies, demonstrate that, by
introducing meta-learning into our framework in such a well-designed way, our
method not only achieves performance superior to state-of-the-art frame
interpolation approaches but also gains the extended capacity to support
interpolation at an arbitrary time-step.
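As a rough illustration of the core mechanism, the sketch below shows a hypernetwork-style head that takes motion features and the time-step t and produces a normalized K x K kernel for every pixel, which is then applied to the warped feature map. This is a minimal sketch of time-conditioned adaptive convolution under assumed module and tensor names (TimeConditionedKernelHead, warped_feat, motion_feat); it is not the authors' released architecture.

```python
# Minimal sketch, PyTorch. Names and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeConditionedKernelHead(nn.Module):
    """Predicts a K x K kernel per pixel from motion features and the
    time-step t, then applies it to the warped feature map."""
    def __init__(self, feat_ch: int, k: int = 3):
        super().__init__()
        self.k = k
        # +1 input channel for the broadcast time-step map.
        self.kernel_net = nn.Sequential(
            nn.Conv2d(feat_ch + 1, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, k * k, 3, padding=1),
        )

    def forward(self, warped_feat, motion_feat, t):
        b, c, h, w = warped_feat.shape
        t_map = warped_feat.new_full((b, 1, h, w), float(t))
        # Per-pixel kernels, normalized so each kernel sums to one.
        kernels = self.kernel_net(torch.cat([motion_feat, t_map], dim=1))
        kernels = F.softmax(kernels, dim=1)                    # (B, K*K, H, W)
        # Gather K x K neighborhoods of the warped features and blend them
        # with the predicted per-pixel kernels.
        patches = F.unfold(warped_feat, self.k, padding=self.k // 2)
        patches = patches.view(b, c, self.k * self.k, h, w)
        return (patches * kernels.unsqueeze(1)).sum(dim=2)     # (B, C, H, W)

# Toy usage: interpolate features at an arbitrary time-step t = 0.3.
head = TimeConditionedKernelHead(feat_ch=32, k=3)
feat = torch.randn(1, 32, 64, 64)
out = head(feat, feat, t=0.3)
```

Conditioning the kernel generator on t is what lets a single network serve any intermediate time-step rather than a fixed one such as 1/2.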
Related papers
- OCAI: Improving Optical Flow Estimation by Occlusion and Consistency Aware Interpolation [55.676358801492114]
We propose OCAI, a method that supports robust frame interpolation by generating intermediate video frames alongside optical flows in between.
Our evaluations demonstrate superior quality and enhanced optical flow accuracy on established benchmarks such as Sintel and KITTI.
arXiv Detail & Related papers (2024-03-26T20:23:48Z)
- Motion-Aware Video Frame Interpolation [49.49668436390514]
We introduce a Motion-Aware Video Frame Interpolation (MA-VFI) network, which directly estimates intermediate optical flow from consecutive frames.
It not only extracts global semantic relationships and spatial details from input frames with different receptive fields, but also effectively reduces the required computational cost and complexity.
arXiv Detail & Related papers (2024-02-05T11:00:14Z)
- IDO-VFI: Identifying Dynamics via Optical Flow Guidance for Video Frame Interpolation with Events [14.098949778274733]
Event cameras are ideal for capturing inter-frame dynamics with their extremely high temporal resolution.
We propose an event-and-frame-based video frame interpolation method named IDO-VFI that assigns varying amounts of computation for different sub-regions.
Our proposed method maintains high-quality performance while reducing computation time and computational effort by 10% and 17%, respectively, on the Vimeo90K dataset.
arXiv Detail & Related papers (2023-05-17T13:22:21Z)
- Asymmetric Bilateral Motion Estimation for Video Frame Interpolation [50.44508853885882]
We propose a novel video frame interpolation algorithm based on asymmetric bilateral motion estimation (ABME).
First, we predict symmetric bilateral motion fields to interpolate an anchor frame.
Second, we estimate asymmetric bilateral motion fields from the anchor frame to the input frames.
Third, we use the asymmetric fields to warp the input frames backward and reconstruct the intermediate frame, as sketched below.
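The third step relies on backward warping, i.e., sampling each input frame at locations given by a flow field defined on the target frame. A minimal sketch of that operation follows; the function name backward_warp and the flow convention are illustrative assumptions, not ABME's released code.

```python
# Minimal backward-warping sketch (PyTorch); `flow` maps target-frame pixels
# to source-frame locations in pixel units.
import torch
import torch.nn.functional as F

def backward_warp(src: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Sample `src` (B, C, H, W) at positions displaced by `flow` (B, 2, H, W)."""
    b, _, h, w = src.shape
    # Base pixel grid in absolute coordinates.
    ys, xs = torch.meshgrid(torch.arange(h, device=src.device),
                            torch.arange(w, device=src.device), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0)   # (1, 2, H, W)
    coords = grid + flow                                        # follow the flow
    # Normalize to [-1, 1] as required by grid_sample.
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    norm_grid = torch.stack((gx, gy), dim=-1)                   # (B, H, W, 2)
    return F.grid_sample(src, norm_grid, align_corners=True)
```

With flows estimated from the anchor frame toward each input frame, the two backward-warped frames can then be blended to reconstruct the intermediate frame.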
arXiv Detail & Related papers (2021-08-15T21:11:35Z)
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
- EA-Net: Edge-Aware Network for Flow-based Video Frame Interpolation [101.75999290175412]
We propose to reduce image blur and obtain clear object shapes by preserving edges in the interpolated frames.
The proposed Edge-Aware Network (EA-Net) integrates edge information into the frame interpolation task.
Three edge-aware mechanisms are developed to emphasize frame edges when estimating flow maps.
arXiv Detail & Related papers (2021-05-17T08:44:34Z)
- Video Frame Interpolation via Structure-Motion based Iterative Fusion [19.499969588931414]
We propose a structure-motion based iterative fusion method for video frame interpolation.
Inspired by the observation that audiences have different visual preferences for foreground and background objects, we propose, for the first time, to use saliency masks in the evaluation process of video frame interpolation.
arXiv Detail & Related papers (2021-05-11T22:11:17Z)
- Video Frame Interpolation via Generalized Deformable Convolution [18.357839820102683]
Video frame interpolation aims at synthesizing intermediate frames from nearby source frames while maintaining spatial and temporal consistencies.
Existing deep-learning-based video frame interpolation methods can be divided into two categories: flow-based methods and kernel-based methods.
A novel mechanism named generalized deformable convolution is proposed, which can effectively learn motion in a data-driven manner and freely select sampling points in space-time (see the sketch after this list).
arXiv Detail & Related papers (2020-08-24T20:00:39Z)
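To make the "freely select sampling points in space-time" idea concrete, here is a minimal sketch that predicts one learned spatial offset into each of the two source frames plus blending weights, then gathers and mixes the samples. The class name SpaceTimeSampler and its interface are illustrative assumptions; the actual generalized deformable convolution uses many sampling points per pixel and is trained end to end with the rest of the network.

```python
# Minimal sketch of data-driven space-time sampling (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpaceTimeSampler(nn.Module):
    """Predicts, per output pixel, one spatial offset into each source frame
    plus blending weights, then gathers and blends the two samples."""
    def __init__(self, in_ch: int):
        super().__init__()
        # 2 offsets (x, y) per frame for 2 frames + 2 blending weights.
        self.head = nn.Conv2d(in_ch, 2 * 2 + 2, 3, padding=1)

    def forward(self, frame0, frame1, feat):
        b, _, h, w = frame0.shape
        pred = self.head(feat)
        offsets = pred[:, :4].reshape(b, 2, 2, h, w)       # (B, frame, xy, H, W)
        weights = F.softmax(pred[:, 4:], dim=1)            # (B, 2, H, W)
        ys, xs = torch.meshgrid(torch.arange(h, device=feat.device),
                                torch.arange(w, device=feat.device),
                                indexing="ij")
        base = torch.stack((xs, ys), dim=0).float()        # (2, H, W)
        out = 0.0
        for i, frame in enumerate((frame0, frame1)):
            coords = base + offsets[:, i]                  # learned sampling points
            gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
            gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
            grid = torch.stack((gx, gy), dim=-1)           # (B, H, W, 2)
            sample = F.grid_sample(frame, grid, align_corners=True)
            out = out + weights[:, i:i + 1] * sample
        return out
```

Because the offsets are predicted from features rather than fixed to a regular grid, the sampling locations adapt to the motion in the data, which is the essence of the deformable-convolution family of interpolation methods.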