Video Frame Interpolation without Temporal Priors
- URL: http://arxiv.org/abs/2112.01161v1
- Date: Thu, 2 Dec 2021 12:13:56 GMT
- Title: Video Frame Interpolation without Temporal Priors
- Authors: Youjian Zhang, Chaoyue Wang, Dacheng Tao
- Abstract summary: Video frame interpolation aims to synthesize non-existent intermediate frames in a video sequence.
The temporal priors of videos, i.e. frames per second (FPS) and frame exposure time, may vary across camera sensors.
We devise a novel optical flow refinement strategy for better interpolation results.
- Score: 91.04877640089053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video frame interpolation, which aims to synthesize non-existent intermediate
frames in a video sequence, is an important research topic in computer vision.
Existing video frame interpolation methods have achieved remarkable results
under specific assumptions, such as instant or known exposure time. However, in
complicated real-world situations, the temporal priors of videos, i.e. frames
per second (FPS) and frame exposure time, may vary across camera
sensors. When test videos are taken under different exposure settings from
training ones, the interpolated frames will suffer significant misalignment
problems. In this work, we solve the video frame interpolation problem in a
general situation, where input frames can be acquired under uncertain exposure
(and interval) time. Unlike previous methods that can only be applied to a
specific temporal prior, we derive a general curvilinear motion trajectory
formula from four consecutive sharp frames or two consecutive blurry frames
without temporal priors. Moreover, utilizing constraints within adjacent motion
trajectories, we devise a novel optical flow refinement strategy for better
interpolation results. Finally, experiments demonstrate that one well-trained
model is enough for synthesizing high-quality slow-motion videos under
complicated real-world situations. Code is available at
https://github.com/yjzhang96/UTI-VFI.
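To make the trajectory idea concrete, below is a minimal sketch of fitting a per-pixel quadratic (curvilinear) motion model from optical flows between consecutive frames and evaluating it at an arbitrary intermediate time. This illustrates the general idea only and is not the paper's actual derivation (which copes with unknown exposure and interval times); the function names, the NumPy least-squares formulation, and the assumed frame timestamps are all hypothetical.

```python
import numpy as np

def fit_quadratic_trajectory(flows, times):
    """Fit a per-pixel quadratic motion model x(t) = v*t + 0.5*a*t**2
    (displacement relative to the reference frame at t = 0) by least
    squares, given optical flows from the reference frame to the
    frames observed at `times`.

    flows: (K, H, W, 2) array of flow fields; times: (K,) timestamps.
    """
    t = np.asarray(times, dtype=np.float64)
    A = np.stack([t, 0.5 * t ** 2], axis=1)       # (K, 2) design matrix
    K, H, W, _ = flows.shape
    b = flows.reshape(K, -1)                      # (K, H*W*2) targets
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)  # per-pixel (v, a)
    v, a = coef.reshape(2, H, W, 2)
    return v, a

def displacement_at(v, a, tau):
    """Evaluate the fitted trajectory at an intermediate time tau."""
    return v * tau + 0.5 * a * tau ** 2

# Example: frame 1 of four consecutive frames is the reference (t = 0);
# the flows to frames 0, 2, and 3 are assumed to sit at t = -1, 1, 2.
flows = np.zeros((3, 64, 64, 2))  # would come from an optical-flow network
v, a = fit_quadratic_trajectory(flows, times=[-1.0, 1.0, 2.0])
disp = displacement_at(v, a, tau=0.5)  # motion toward a midpoint frame
```

In the paper's setting the frame timestamps are themselves unknown, so the trajectory formula must be derived from the flows alone; the fixed `times` used here are purely for illustration.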
Related papers
- Joint Video Multi-Frame Interpolation and Deblurring under Unknown
Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task - joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
arXiv Detail & Related papers (2023-03-27T09:43:42Z) - E-VFIA : Event-Based Video Frame Interpolation with Attention [8.93294761619288]
We propose event-based video frame interpolation with attention (E-VFIA), a lightweight kernel-based method.
E-VFIA fuses event information with standard video frames by deformable convolutions to generate high quality interpolated frames.
The proposed method represents events with high temporal resolution and uses a multi-head self-attention mechanism to better encode event-based information.
arXiv Detail & Related papers (2022-09-19T21:40:32Z) - TTVFI: Learning Trajectory-Aware Transformer for Video Frame
Interpolation [50.49396123016185]
Video frame interpolation (VFI) aims to synthesize an intermediate frame between two consecutive frames.
We propose a novel Trajectory-aware Transformer for Video Frame Interpolation (TTVFI).
Our method outperforms other state-of-the-art methods on four widely-used VFI benchmarks.
arXiv Detail & Related papers (2022-07-19T03:37:49Z) - TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show up to a 5.21 dB improvement in PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z) - Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z) - FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation [97.99012124785177]
FLAVR is a flexible and efficient architecture that uses 3D space-time convolutions to enable end-to-end learning and inference for video frame interpolation (a minimal sketch of this idea appears after this list).
We demonstrate that FLAVR can serve as a useful self-supervised pretext task for action recognition, optical flow estimation, and motion magnification.
arXiv Detail & Related papers (2020-12-15T18:59:30Z) - ALANET: Adaptive Latent Attention Network for Joint Video Deblurring and
Interpolation [38.52446103418748]
We introduce a novel architecture, Adaptive Latent Attention Network (ALANET), which synthesizes sharp high frame-rate videos.
We employ a combination of self-attention and cross-attention modules between consecutive frames in the latent space to generate an optimized representation for each frame.
Our method performs favorably against various state-of-the-art approaches, even though we tackle a much more difficult problem.
arXiv Detail & Related papers (2020-08-31T21:11:53Z) - Across Scales & Across Dimensions: Temporal Super-Resolution using Deep
Internal Learning [11.658606722158517]
We train a video-specific CNN on examples extracted directly from the low-framerate input video.
Our method exploits the strong recurrence of small space-time patches inside a single video sequence.
The higher spatial resolution of video frames provides strong examples of how to increase the temporal resolution of that video.
arXiv Detail & Related papers (2020-03-19T15:53:01Z)
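As noted in the FLAVR entry above, here is a minimal sketch of the flow-free, 3D space-time convolution idea: the kernel spans the temporal axis as well as the spatial axes, so motion across the input frames is modeled implicitly rather than through explicit optical flow. The block structure, channel count, and number of frames are illustrative assumptions, not FLAVR's actual architecture.

```python
import torch
import torch.nn as nn

class SpaceTimeBlock(nn.Module):
    """One 3D space-time convolution block: the 3x3x3 kernel mixes
    information across time (T) as well as space (H, W)."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (B, C, T, H, W) -- features of T stacked input frames
        return self.act(self.conv(x))

# Four input frames, already embedded to 64 channels; a full model would
# stack such blocks and decode the result into the interpolated frame.
feats = torch.randn(1, 64, 4, 128, 128)
out = SpaceTimeBlock(64)(feats)
print(out.shape)  # torch.Size([1, 64, 4, 128, 128])
```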