EA-Net: Edge-Aware Network for Flow-based Video Frame Interpolation
- URL: http://arxiv.org/abs/2105.07673v1
- Date: Mon, 17 May 2021 08:44:34 GMT
- Title: EA-Net: Edge-Aware Network for Flow-based Video Frame Interpolation
- Authors: Bin Zhao and Xuelong Li
- Abstract summary: We propose to reduce image blur and obtain clear object shapes by preserving the edges in the interpolated frames.
The proposed Edge-Aware Network (EA-Net) integrates edge information into the frame interpolation task.
Three edge-aware mechanisms are developed to emphasize the frame edges in estimating flow maps.
- Score: 101.75999290175412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video frame interpolation can up-convert the frame rate and enhance the video
quality. In recent years, although video frame interpolation has achieved great
success, image blur usually occurs at object boundaries owing to large motion.
This has been a long-standing problem and has not yet been well addressed. In
this paper, we propose to reduce image blur and obtain clear object shapes by
preserving the edges in the interpolated frames. To this end, the
proposed Edge-Aware Network (EA-Net) integrates the edge information into the
frame interpolation task. It follows an end-to-end architecture and can be
separated into two stages, \emph{i.e.}, edge-guided flow estimation and
edge-protected frame synthesis. Specifically, in the flow estimation stage,
three edge-aware mechanisms are developed to emphasize the frame edges when
estimating flow maps, so that the edge maps serve as auxiliary information that
guides and improves the flow accuracy. In the frame synthesis stage, a flow
refinement module is designed to refine the flow maps, and an attention module
adaptively attends to the bidirectional flow maps when synthesizing the
intermediate frames. Furthermore, frame and edge discriminators are adopted for
adversarial training, so as to enhance the realism and clarity of the
synthesized frames. Experiments on three benchmarks, Vimeo90k and UCF101 for
single-frame interpolation and Adobe240-fps for multi-frame interpolation, have
demonstrated the superiority of the proposed EA-Net for the video frame
interpolation task.
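To make the two-stage design concrete, the following PyTorch sketch illustrates the overall idea in a minimal form: edge maps extracted from the input frames are concatenated with the frames as auxiliary input to a bidirectional flow estimator, and the warped frames are then fused with a learned per-pixel attention map to synthesize the intermediate frame. The Sobel edge extractor, the module names (EdgeGuidedFlow, AttentiveSynthesis), and the layer sizes are illustrative assumptions only; they do not reproduce the paper's three edge-aware mechanisms, flow refinement module, or adversarial training with frame and edge discriminators.

```python
# Minimal, illustrative sketch of an edge-aware two-stage interpolation pipeline.
# All module names, shapes, and layer choices are assumptions for illustration,
# not the authors' released EA-Net implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def sobel_edges(frame):
    """Sobel gradient magnitude as a simple edge map (assumption: EA-Net may
    use a different or learned edge extractor)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=frame.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gray = frame.mean(dim=1, keepdim=True)            # B x 1 x H x W
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def backward_warp(frame, flow):
    """Warp `frame` with a dense flow field via bilinear grid sampling."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=frame.device),
                            torch.arange(w, device=frame.device), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow
    gx = 2.0 * grid[:, 0] / (w - 1) - 1.0
    gy = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(frame, torch.stack((gx, gy), dim=-1), align_corners=True)

class EdgeGuidedFlow(nn.Module):
    """Stage 1: estimate bidirectional flows from frames concatenated with
    their edge maps (one simple way to use edges as auxiliary guidance)."""
    def __init__(self, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * (3 + 1), feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 4, 3, padding=1))          # 2 flows x (dx, dy)

    def forward(self, f0, f1):
        x = torch.cat([f0, sobel_edges(f0), f1, sobel_edges(f1)], dim=1)
        flow = self.net(x)
        return flow[:, :2], flow[:, 2:]                # flow_t->0, flow_t->1

class AttentiveSynthesis(nn.Module):
    """Stage 2: warp both inputs toward time t and fuse them with a per-pixel
    attention (soft occlusion) map."""
    def __init__(self, feat=32):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(6, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, f0, f1, flow_t0, flow_t1):
        w0 = backward_warp(f0, flow_t0)
        w1 = backward_warp(f1, flow_t1)
        a = self.attn(torch.cat([w0, w1], dim=1))      # B x 1 x H x W
        return a * w0 + (1.0 - a) * w1                 # intermediate frame

if __name__ == "__main__":
    f0, f1 = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    flow_t0, flow_t1 = EdgeGuidedFlow()(f0, f1)
    frame_t = AttentiveSynthesis()(f0, f1, flow_t0, flow_t1)
    print(frame_t.shape)                               # torch.Size([1, 3, 64, 64])
```

In such a setup, the attention map acts as a soft occlusion mask that decides per pixel which warped frame to trust, which is precisely where edge preservation matters most for avoiding blur at object boundaries.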
Related papers
- Event-based Video Frame Interpolation with Edge Guided Motion Refinement [28.331148083668857]
We introduce an end-to-end E-VFI learning method to efficiently utilize edge features from event signals for motion flow and warping enhancement.
Our method incorporates an Edge Guided Attentive (EGA) module, which rectifies estimated video motion through attentive aggregation.
Experiments on both synthetic and real datasets show the effectiveness of the proposed approach.
arXiv Detail & Related papers (2024-04-28T12:13:34Z) - Motion-Aware Video Frame Interpolation [49.49668436390514]
We introduce a Motion-Aware Video Frame Interpolation (MA-VFI) network, which directly estimates intermediate optical flow from consecutive frames.
It not only extracts global semantic relationships and spatial details from input frames with different receptive fields, but also effectively reduces the required computational cost and complexity.
arXiv Detail & Related papers (2024-02-05T11:00:14Z) - Meta-Interpolation: Time-Arbitrary Frame Interpolation via Dual
Meta-Learning [65.85319901760478]
We consider processing different time-steps with adaptively generated convolutional kernels in a unified way with the help of meta-learning.
We develop a dual meta-learned frame interpolation framework to synthesize intermediate frames with the guidance of context information and optical flow.
arXiv Detail & Related papers (2022-07-27T17:36:23Z) - TTVFI: Learning Trajectory-Aware Transformer for Video Frame
Interpolation [50.49396123016185]
Video frame interpolation (VFI) aims to synthesize an intermediate frame between two consecutive frames.
We propose a novel Trajectory-aware Transformer for Video Frame Interpolation (TTVFI).
Our method outperforms other state-of-the-art methods on four widely-used VFI benchmarks.
arXiv Detail & Related papers (2022-07-19T03:37:49Z) - Cross-Attention Transformer for Video Interpolation [3.5317804902980527]
TAIN (Transformers and Attention for video INterpolation) aims to interpolate an intermediate frame given two consecutive image frames around it.
We first present a novel visual transformer module, named Cross-Similarity (CS), to globally aggregate input image features with a similar appearance to those of the predicted frame.
To account for occlusions in the CS features, we propose an Image Attention (IA) module to allow the network to focus on CS features from one frame over those of the other.
arXiv Detail & Related papers (2022-07-08T21:38:54Z) - TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z) - FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation [97.99012124785177]
FLAVR is a flexible and efficient architecture that uses 3D space-time convolutions to enable end-to-end learning and inference for video frame interpolation.
We demonstrate that FLAVR can serve as a useful self-supervised pretext task for action recognition, optical flow estimation, and motion magnification.
arXiv Detail & Related papers (2020-12-15T18:59:30Z) - ALANET: Adaptive Latent Attention Network for Joint Video Deblurring and
Interpolation [38.52446103418748]
We introduce a novel architecture, Adaptive Latent Attention Network (ALANET), which synthesizes sharp high frame-rate videos.
We employ a combination of self-attention and cross-attention modules between consecutive frames in the latent space to generate an optimized representation for each frame.
Our method performs favorably against various state-of-the-art approaches, even though we tackle a much more difficult problem.
arXiv Detail & Related papers (2020-08-31T21:11:53Z)