Video Interpolation by Event-driven Anisotropic Adjustment of Optical
Flow
- URL: http://arxiv.org/abs/2208.09127v1
- Date: Fri, 19 Aug 2022 02:31:33 GMT
- Title: Video Interpolation by Event-driven Anisotropic Adjustment of Optical
Flow
- Authors: Song Wu, Kaichao You, Weihua He, Chen Yang, Yang Tian, Yaoyuan Wang,
Ziyang Zhang, Jianxing Liao
- Abstract summary: We propose an end-to-end training method A^2OF for video frame interpolation with event-driven Anisotropic Adjustment of Optical Flows.
Specifically, we use events to generate optical flow distribution masks for the intermediate optical flow, which can model the complicated motion between two frames.
- Score: 11.914613556594725
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Video frame interpolation is a challenging task due to the ever-changing
real-world scene. Previous methods often calculate the bi-directional optical
flows and then predict the intermediate optical flows under the linear motion
assumptions, leading to isotropic intermediate flow generation. Follow-up
research obtained anisotropic adjustment by estimating higher-order motion
information from extra frames. Because such methods still rest on predefined
motion assumptions, they struggle to model the complicated motion of real
scenes. In this paper, we
propose an end-to-end training method A^2OF for video frame interpolation with
event-driven Anisotropic Adjustment of Optical Flows. Specifically, we use
events to generate optical flow distribution masks for the intermediate optical
flow, which can model the complicated motion between two frames. Our proposed
method outperforms previous methods in video frame interpolation, raising
supervised event-based video interpolation to a new level.
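To make the distinction concrete, here is a minimal PyTorch sketch (an illustration under stated assumptions, not the authors' implementation): the linear motion assumption scales the full 0-to-1 flow by a single scalar t at every pixel, while an event-derived per-pixel weight mask, which A^2OF would predict from event data, lets each region report a different fraction of completed motion. The function names and mask shape are hypothetical; the mask-prediction network and event representation are omitted.

    import torch

    def linear_intermediate_flow(flow_0to1: torch.Tensor, t: float) -> torch.Tensor:
        # Isotropic baseline: every pixel is assumed to have covered the
        # same fraction t of its total motion by the intermediate time.
        return t * flow_0to1

    def masked_intermediate_flow(flow_0to1: torch.Tensor,
                                 weight_mask: torch.Tensor) -> torch.Tensor:
        # Anisotropic variant (assumed form): a per-pixel mask with shape
        # [B, 1, H, W] and values in [0, 1], here imagined to come from an
        # event-conditioned network, replaces the single scalar t.
        return weight_mask * flow_0to1

    # Toy check: a constant mask equal to t recovers the linear baseline.
    flow = torch.randn(1, 2, 4, 4)          # [B, 2, H, W] flow field
    t = 0.5
    mask = torch.full((1, 1, 4, 4), t)      # broadcasts over flow channels
    assert torch.allclose(linear_intermediate_flow(flow, t),
                          masked_intermediate_flow(flow, mask))

Pixels that accelerate or decelerate within the interval would receive mask values above or below t, which is the anisotropy the abstract describes.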
Related papers
- Generalizable Implicit Motion Modeling for Video Frame Interpolation [51.966062283735596]
Motion is critical in flow-based Video Frame Interpolation (VFI).
We introduce Generalizable Implicit Motion Modeling (GIMM), a novel and effective approach to motion modeling for VFI.
Our GIMM can be easily integrated with existing flow-based VFI works by supplying accurately modeled motion.
arXiv Detail & Related papers (2024-07-11T17:13:15Z)
- Motion-aware Latent Diffusion Models for Video Frame Interpolation [51.78737270917301]
Motion estimation between neighboring frames plays a crucial role in avoiding motion ambiguity.
We propose a novel diffusion framework, motion-aware latent diffusion models (MADiff).
Our method achieves state-of-the-art performance, significantly outperforming existing approaches.
arXiv Detail & Related papers (2024-04-21T05:09:56Z)
- Motion-Aware Video Frame Interpolation [49.49668436390514]
We introduce a Motion-Aware Video Frame Interpolation (MA-VFI) network, which directly estimates intermediate optical flow from consecutive frames.
It not only extracts global semantic relationships and spatial details from input frames with different receptive fields, but also effectively reduces the required computational cost and complexity.
arXiv Detail & Related papers (2024-02-05T11:00:14Z)
- IDO-VFI: Identifying Dynamics via Optical Flow Guidance for Video Frame Interpolation with Events [14.098949778274733]
Event cameras are ideal for capturing inter-frame dynamics with their extremely high temporal resolution.
We propose an event-and-frame-based video frame interpolation method named IDO-VFI that assigns varying amounts of computation to different sub-regions.
Our proposed method maintains high-quality performance while reducing computation time and computational effort by 10% and 17%, respectively, on the Vimeo90K dataset.
arXiv Detail & Related papers (2023-05-17T13:22:21Z)
- Frame Interpolation for Dynamic Scenes with Implicit Flow Encoding [10.445563506186307]
We propose an algorithm to interpolate between a pair of images of a dynamic scene.
We take advantage of existing optical flow methods, which are highly robust to variations in illumination.
Our approach is able to produce significantly better results than state-of-the-art frame blending algorithms.
arXiv Detail & Related papers (2022-09-27T10:00:05Z)
- Meta-Interpolation: Time-Arbitrary Frame Interpolation via Dual Meta-Learning [65.85319901760478]
We consider processing different time-steps with adaptively generated convolutional kernels in a unified way with the help of meta-learning.
We develop a dual meta-learned frame interpolation framework to synthesize intermediate frames with the guidance of context information and optical flow.
arXiv Detail & Related papers (2022-07-27T17:36:23Z)
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
- FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation [97.99012124785177]
FLAVR is a flexible and efficient architecture that uses 3D space-time convolutions to enable end-to-end learning and inference for video frame interpolation.
We demonstrate that FLAVR can serve as a useful self-supervised pretext task for action recognition, optical flow estimation, and motion magnification.
arXiv Detail & Related papers (2020-12-15T18:59:30Z)
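As a rough illustration of the flow-free idea named in the FLAVR entry above, the sketch below maps a stack of four consecutive frames directly to one intermediate frame with 3D space-time convolutions. It is a toy stand-in under assumed shapes, not the published architecture (FLAVR's U-Net backbone and channel gating are omitted, and all names are made up).

    import torch
    import torch.nn as nn

    class TinySpaceTimeInterp(nn.Module):
        # Maps [B, 3, T=4, H, W] input frames to an intermediate frame
        # [B, 3, H, W] without computing optical flow anywhere.
        def __init__(self, channels: int = 16):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv3d(3, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
            # A kernel spanning all 4 time steps collapses the time axis.
            self.head = nn.Conv3d(channels, 3, kernel_size=(4, 3, 3),
                                  padding=(0, 1, 1))

        def forward(self, frames: torch.Tensor) -> torch.Tensor:
            return self.head(self.body(frames)).squeeze(2)

    model = TinySpaceTimeInterp()
    frames = torch.rand(1, 3, 4, 64, 64)    # four consecutive RGB frames
    mid = model(frames)
    print(mid.shape)                        # torch.Size([1, 3, 64, 64])

Because the whole mapping is a single network, the same trunk could in principle be reused as a pretext model for downstream tasks such as the action recognition and motion magnification mentioned above.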