Frame Interpolation for Dynamic Scenes with Implicit Flow Encoding
- URL: http://arxiv.org/abs/2209.13284v1
- Date: Tue, 27 Sep 2022 10:00:05 GMT
- Title: Frame Interpolation for Dynamic Scenes with Implicit Flow Encoding
- Authors: Pedro Figueirêdo, Avinash Paliwal, Nima Khademi Kalantari
- Abstract summary: We propose an algorithm to interpolate between a pair of images of a dynamic scene.
We take advantage of existing optical flow methods that are highly robust to variations in illumination.
Our approach produces significantly better results than state-of-the-art frame interpolation algorithms.
- Score: 10.445563506186307
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose an algorithm to interpolate between a
pair of images of a dynamic scene. While significant progress in frame
interpolation has been made in recent years, current approaches cannot handle
images with brightness and illumination changes, which are common even when
the images are captured a short time apart. We propose to address this problem
by taking advantage of existing optical flow methods, which are highly robust
to variations in illumination. Specifically, using the bidirectional flows
estimated by an existing pre-trained flow network, we predict the flows from
an intermediate frame to the two input images. To do this, we propose to
encode the bidirectional flows into a coordinate-based network, powered by a
hypernetwork, to obtain a continuous representation of the flow across time.
Once we obtain the estimated flows, we use them within an existing blending
network to obtain the final intermediate frame. Through extensive experiments,
we demonstrate that our approach produces significantly better results than
state-of-the-art frame interpolation algorithms.
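To make the flow-encoding step concrete, here is a minimal sketch of a coordinate-based MLP whose weights are produced by a hypernetwork conditioned on time. Everything below is an illustrative assumption rather than the paper's implementation: the layer sizes, the fitting loss, and the simplification of supervising the network only at t = 0 and t = 1 with the two bidirectional flows (the paper instead predicts flows from the intermediate frame to the two inputs). The `flow_0to1` / `flow_1to0` tensors stand in for the outputs of any pre-trained flow estimator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

COORD_DIM, HIDDEN, FLOW_DIM = 2, 64, 2  # hypothetical sizes

class HyperFlowNet(nn.Module):
    """Hypernetwork: maps a time t in [0, 1] to the weights of a small
    coordinate-based MLP that predicts a 2D flow vector per pixel."""
    def __init__(self):
        super().__init__()
        self.sizes = [HIDDEN * COORD_DIM, HIDDEN, FLOW_DIM * HIDDEN, FLOW_DIM]
        self.hyper = nn.Sequential(
            nn.Linear(1, 128), nn.ReLU(), nn.Linear(128, sum(self.sizes))
        )

    def forward(self, t, coords):
        # t: shape (1,); coords: shape (N, 2), normalized pixel coordinates.
        w1, b1, w2, b2 = torch.split(self.hyper(t), self.sizes)
        h = torch.relu(coords @ w1.view(COORD_DIM, HIDDEN) + b1)
        return h @ w2.view(HIDDEN, FLOW_DIM) + b2  # (N, 2): flow at time t

# Stand-ins for pixel coordinates and the two pre-estimated bidirectional flows.
coords = torch.rand(1024, COORD_DIM)
flow_0to1, flow_1to0 = torch.randn(1024, 2), torch.randn(1024, 2)

model = HyperFlowNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):  # fit the continuous representation to the two flows
    loss = (F.mse_loss(model(torch.tensor([0.0]), coords), flow_0to1)
            + F.mse_loss(model(torch.tensor([1.0]), coords), flow_1to0))
    opt.zero_grad(); loss.backward(); opt.step()

# Because t is a continuous input to the hypernetwork, the flow field can be
# queried at any intermediate time and the result passed to a blending network.
flow_half = model(torch.tensor([0.5]), coords)
```

The hypernetwork is what makes the representation continuous across time: a single query at any t yields a full flow field, rather than requiring a separate network per time step.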
Related papers
- OCAI: Improving Optical Flow Estimation by Occlusion and Consistency Aware Interpolation [55.676358801492114]
We propose OCAI, a method that supports robust frame interpolation by generating intermediate video frames alongside optical flows in between.
Our evaluations demonstrate superior quality and enhanced optical flow accuracy on established benchmarks such as Sintel and KITTI.
arXiv Detail & Related papers (2024-03-26T20:23:48Z)
- Motion-Aware Video Frame Interpolation [49.49668436390514]
We introduce a Motion-Aware Video Frame Interpolation (MA-VFI) network, which directly estimates intermediate optical flow from consecutive frames.
It not only extracts global semantic relationships and spatial details from input frames with different receptive fields, but also effectively reduces the required computational cost and complexity.
arXiv Detail & Related papers (2024-02-05T11:00:14Z)
- Dynamic Frame Interpolation in Wavelet Domain [57.25341639095404]
Video frame interpolation is an important low-level computer vision task that can increase the frame rate for a more fluent visual experience.
Existing methods have achieved great success by employing advanced motion models and synthesis networks.
WaveletVFI can reduce computation by up to 40% while maintaining similar accuracy, making it more efficient than other state-of-the-art methods.
arXiv Detail & Related papers (2023-09-07T06:41:15Z)
- Video Interpolation by Event-driven Anisotropic Adjustment of Optical Flow [11.914613556594725]
We propose A2OF, an end-to-end training method for video frame interpolation with event-driven Anisotropic Adjustment of Optical Flows.
Specifically, we use events to generate optical flow distribution masks for the intermediate optical flow, which can model the complicated motion between two frames; a rough sketch follows this entry.
arXiv Detail & Related papers (2022-08-19T02:31:33Z)
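For intuition, the sketch below shows how a per-pixel, event-derived mask could replace the uniform time weight in the classical linear-motion flow combination (the quadratic weighting used by Super SloMo-style methods). The names `event_mask`, `flow_fw`, `flow_bw` and the exact weighting are illustrative assumptions, not the actual A2OF formulation.

```python
import torch

H, W = 64, 64  # hypothetical resolution
flow_fw = torch.randn(2, H, W)    # stand-in: estimated flow, frame 0 -> frame 1
flow_bw = torch.randn(2, H, W)    # stand-in: estimated flow, frame 1 -> frame 0
event_mask = torch.rand(1, H, W)  # stand-in: per-pixel motion weight from events

# Uniform linear-motion model: one scalar t weights both flows everywhere.
t = 0.5
flow_t_to0 = -t * (1 - t) * flow_fw + t * t * flow_bw
flow_t_to1 = (1 - t) ** 2 * flow_fw - t * (1 - t) * flow_bw

# Anisotropic adjustment (sketched): the event-derived mask acts as a
# per-pixel "effective time", so fast- and slow-moving regions are no
# longer forced to share one global weight.
w = event_mask
flow_t_to0_aniso = -w * (1 - w) * flow_fw + w * w * flow_bw
flow_t_to1_aniso = (1 - w) ** 2 * flow_fw - w * (1 - w) * flow_bw
```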
- Meta-Interpolation: Time-Arbitrary Frame Interpolation via Dual Meta-Learning [65.85319901760478]
We consider processing different time-steps with adaptively generated convolutional kernels in a unified way with the help of meta-learning.
We develop a dual meta-learned frame interpolation framework to synthesize intermediate frames with the guidance of context information and optical flow.
arXiv Detail & Related papers (2022-07-27T17:36:23Z)
- USegScene: Unsupervised Learning of Depth, Optical Flow and Ego-Motion with Semantic Guidance and Coupled Networks [31.600708674008384]
USegScene is a framework for semantically guided unsupervised learning of depth, optical flow and ego-motion estimation for stereo camera images.
We present results on the popular KITTI dataset and show that our approach outperforms other methods by a large margin.
arXiv Detail & Related papers (2022-07-15T13:25:47Z)
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.