Motion-Aware Video Frame Interpolation
- URL: http://arxiv.org/abs/2402.02892v1
- Date: Mon, 5 Feb 2024 11:00:14 GMT
- Title: Motion-Aware Video Frame Interpolation
- Authors: Pengfei Han, Fuhua Zhang, Bin Zhao, and Xuelong Li
- Abstract summary: We introduce a Motion-Aware Video Frame Interpolation (MA-VFI) network, which directly estimates intermediate optical flow from consecutive frames.
It not only extracts global semantic relationships and spatial details from input frames with different receptive fields, but also effectively reduces the required computational cost and complexity.
- Score: 49.49668436390514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video frame interpolation methods aim to synthesize new frames
between existing ones in order to increase a video's frame rate.
However, current methods are prone to image blurring and spurious artifacts in
challenging scenarios involving occlusions and discontinuous motion. Moreover,
they typically rely on optical flow estimation, which adds complexity to
modeling and computational costs. To address these issues, we introduce a
Motion-Aware Video Frame Interpolation (MA-VFI) network, which directly
estimates intermediate optical flow from consecutive frames by introducing a
novel hierarchical pyramid module. It not only extracts global semantic
relationships and spatial details from input frames with different receptive
fields, enabling the model to capture intricate motion patterns, but also
effectively reduces the required computational cost and complexity.
Subsequently, a cross-scale motion structure is presented to estimate and
refine intermediate flow maps by the extracted features. This approach
facilitates the interplay between input frame features and flow maps during the
frame interpolation process and markedly improves the accuracy of the
intermediate flow estimates. Finally, a loss function centered on the
intermediate flow is designed to guide its prediction, further refining the
accuracy of the intermediate flow maps.
Experiments demonstrate that MA-VFI surpasses several representative VFI
methods across various datasets, improving efficiency while maintaining
competitive accuracy.
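
To make the pipeline concrete, here is a minimal, illustrative PyTorch sketch of the general pattern the abstract describes: pyramid features with growing receptive fields, direct estimation of the two intermediate flows (t->0 and t->1), and backward warping of both inputs toward the middle timestamp. Every module name, channel width, and the fixed 0.5/0.5 blend are assumptions for exposition, not the authors' implementation; MA-VFI's cross-scale refinement and flow-centered loss would sit on top of such a skeleton.

```python
# Illustrative sketch of a flow-based interpolation pipeline in the spirit of
# MA-VFI (not the authors' code): pyramid features -> intermediate flow ->
# backward warping. Module names, channel widths, and the single-level flow
# head are assumptions for exposition.
import torch
import torch.nn as nn
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Sample `frame` at locations displaced by `flow` (B, 2, H, W), (x, y) order."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame.device)   # (2, H, W)
    coords = grid.unsqueeze(0) + flow                              # (B, 2, H, W)
    # Normalize coordinates to [-1, 1] for grid_sample.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid_n = torch.stack((coords_x, coords_y), dim=-1)             # (B, H, W, 2)
    return F.grid_sample(frame, grid_n, align_corners=True)

class PyramidEncoder(nn.Module):
    """Features at several scales, i.e. several effective receptive fields."""
    def __init__(self, ch=(32, 64, 96)):
        super().__init__()
        self.stages = nn.ModuleList()
        prev = 3
        for c in ch:
            self.stages.append(nn.Sequential(
                nn.Conv2d(prev, c, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True)))
            prev = c
    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats  # receptive field grows with depth

class IntermediateFlowNet(nn.Module):
    """Predict flows t->0 and t->1 directly from the coarsest joint features."""
    def __init__(self, c=96):
        super().__init__()
        self.head = nn.Conv2d(2 * c, 4, 3, padding=1)  # two 2-channel flows
    def forward(self, f0, f1, out_hw):
        flows = self.head(torch.cat((f0, f1), dim=1))
        flows = F.interpolate(flows, size=out_hw, mode="bilinear",
                              align_corners=False)
        scale = out_hw[0] / f0.shape[-2]  # rescale flow magnitudes after upsampling
        return flows[:, :2] * scale, flows[:, 2:] * scale

def interpolate_middle_frame(i0, i1, encoder, flow_net):
    f0, f1 = encoder(i0)[-1], encoder(i1)[-1]
    flow_t0, flow_t1 = flow_net(f0, f1, i0.shape[-2:])
    # Warp both inputs toward t = 0.5 and blend; a learned occlusion mask
    # would normally replace this fixed 0.5/0.5 blend.
    return 0.5 * backward_warp(i0, flow_t0) + 0.5 * backward_warp(i1, flow_t1)
```

With 2x downsampling per stage, the coarsest features here sit at 1/8 resolution, so the flow head sees a large receptive field cheaply; the paper's intermediate-flow loss would supervise `flow_t0`/`flow_t1` directly in such a setup.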
Related papers
- Event-based Video Frame Interpolation with Edge Guided Motion Refinement [28.331148083668857]
We introduce an end-to-end E-VFI learning method to efficiently utilize edge features from event signals for motion flow and warping enhancement.
Our method incorporates an Edge Guided Attentive (EGA) module, which rectifies estimated video motion through attentive aggregation.
Experiments on both synthetic and real datasets show the effectiveness of the proposed approach.
arXiv Detail & Related papers (2024-04-28T12:13:34Z)
- Motion-aware Latent Diffusion Models for Video Frame Interpolation [51.78737270917301]
Motion estimation between neighboring frames plays a crucial role in avoiding motion ambiguity.
We propose a novel diffusion framework, motion-aware latent diffusion models (MADiff)
Our method achieves state-of-the-art performance, significantly outperforming existing approaches.
arXiv Detail & Related papers (2024-04-21T05:09:56Z)
- IDO-VFI: Identifying Dynamics via Optical Flow Guidance for Video Frame Interpolation with Events [14.098949778274733]
Event cameras are ideal for capturing inter-frame dynamics with their extremely high temporal resolution.
We propose an event-and-frame-based video frame interpolation method named IDO-VFI that assigns varying amounts of computation to different sub-regions.
Our proposed method maintains high-quality performance while reducing computation time and computational effort by 10% and 17%, respectively, on the Vimeo90K dataset.
arXiv Detail & Related papers (2023-05-17T13:22:21Z)
- Meta-Interpolation: Time-Arbitrary Frame Interpolation via Dual Meta-Learning [65.85319901760478]
We consider processing different time-steps with adaptively generated convolutional kernels in a unified way with the help of meta-learning.
We develop a dual meta-learned frame interpolation framework to synthesize intermediate frames with the guidance of context information and optical flow.
arXiv Detail & Related papers (2022-07-27T17:36:23Z)
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an improvement of up to 5.21 dB in PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
- FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation [97.99012124785177]
FLAVR is a flexible and efficient architecture that uses 3D space-time convolutions to enable end-to-end learning and inference for video frame interpolation.
We demonstrate that FLAVR can serve as a useful self-supervised pretext task for action recognition, optical flow estimation, and motion magnification.
arXiv Detail & Related papers (2020-12-15T18:59:30Z)
- Video Frame Interpolation via Generalized Deformable Convolution [18.357839820102683]
Video frame interpolation aims at synthesizing intermediate frames from nearby source frames while maintaining spatial and temporal consistencies.
Existing deep learning-based video frame interpolation methods can be divided into two categories: flow-based methods and kernel-based methods.
A novel mechanism named generalized deformable convolution is proposed, which can effectively learn motion in a data-driven manner and freely select sampling points in space-time.
arXiv Detail & Related papers (2020-08-24T20:00:39Z)
- All at Once: Temporally Adaptive Multi-Frame Interpolation with Advanced Motion Modeling [52.425236515695914]
State-of-the-art methods are iterative solutions that interpolate one frame at a time.
This work introduces a true multi-frame interpolator.
It utilizes a pyramidal style network in the temporal domain to complete the multi-frame task in one shot (see the sketch after this list).
arXiv Detail & Related papers (2020-07-23T02:34:39Z)
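
The last entry above describes completing multi-frame interpolation in a single pass. Below is a rough, hypothetical PyTorch sketch of that general idea only — one network emitting all N intermediate frames at once instead of iterating a two-frame model. It deliberately omits the paper's temporal pyramid and advanced motion modeling; every layer name and width is an assumption.

```python
# Rough sketch of one-shot multi-frame interpolation (illustrative only, not
# the "All at Once" architecture): a single forward pass maps two input
# frames to N intermediate frames, instead of calling a two-frame model N times.
import torch
import torch.nn as nn

class OneShotMultiFrameNet(nn.Module):
    def __init__(self, n_frames=7, ch=64):
        super().__init__()
        self.n_frames = n_frames
        self.encoder = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        # One shared decoder emits all N frames in a single pass.
        self.decoder = nn.Conv2d(ch, 3 * n_frames, 3, padding=1)

    def forward(self, i0, i1):
        feats = self.encoder(torch.cat((i0, i1), dim=1))
        out = self.decoder(feats)                   # (B, 3N, H, W)
        b, _, h, w = out.shape
        return out.view(b, self.n_frames, 3, h, w)  # (B, N, 3, H, W)

# Usage: seven intermediate frames from one pass over a pair of inputs.
net = OneShotMultiFrameNet()
i0 = torch.rand(1, 3, 64, 64)
i1 = torch.rand(1, 3, 64, 64)
frames = net(i0, i1)
print(frames.shape)  # torch.Size([1, 7, 3, 64, 64])
```

The design point the entry makes is the output interface: predicting all timestamps jointly lets the network share motion computation across frames, whereas iterative two-frame methods redo it per frame.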