TimeLens: Event-based Video Frame Interpolation
- URL: http://arxiv.org/abs/2106.07286v1
- Date: Mon, 14 Jun 2021 10:33:47 GMT
- Title: TimeLens: Event-based Video Frame Interpolation
- Authors: Stepan Tulyakov, Daniel Gehrig, Stamatios Georgoulis, Julius Erbach,
Mathias Gehrig, Yuanyou Li, Davide Scaramuzza
- Abstract summary: We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
- Score: 54.28139783383213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: State-of-the-art frame interpolation methods generate intermediate frames by
inferring object motions in the image from consecutive key-frames. In the
absence of additional information, first-order approximations, i.e. optical
flow, must be used, but this choice restricts the types of motions that can be
modeled, leading to errors in highly dynamic scenarios. Event cameras are novel
sensors that address this limitation by providing auxiliary visual information
in the blind-time between frames. They asynchronously measure per-pixel
brightness changes and do this with high temporal resolution and low latency.
Event-based frame interpolation methods typically adopt a synthesis-based
approach, where predicted frame residuals are directly applied to the
key-frames. However, while these approaches can capture non-linear motions, they
suffer from ghosting and perform poorly in low-texture regions with few events.
Thus, synthesis-based and flow-based approaches are complementary. In this
work, we introduce Time Lens, a novel method that
leverages the advantages of both. We extensively evaluate our method on three
synthetic and two real benchmarks where we show an up to 5.21 dB improvement in
terms of PSNR over state-of-the-art frame-based and event-based methods.
Finally, we release a new large-scale dataset in highly dynamic scenarios,
aimed at pushing the limits of existing methods.
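
To make the synthesis/flow complementarity described above concrete, the following PyTorch-style sketch combines a synthesis branch (predicting the latent frame directly from the key-frames and events) with a warping branch (event-guided optical flow plus backward warping), and blends the per-pixel hypotheses with a learned soft mask. This is a minimal illustration only: the module names, layer sizes, and the assumption that events are pre-binned into a 5-channel voxel grid are ours, not the architecture released with the paper.

```python
# Illustrative sketch only: a two-branch, event-guided interpolator in the
# spirit of the abstract above. Names, layer sizes, and the event encoding
# (a 5-bin voxel grid) are assumptions, not the Time Lens implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def backward_warp(frame, flow):
    """Warp a key-frame with a dense flow field given in pixels."""
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device, dtype=frame.dtype),
        torch.arange(w, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0) + flow.permute(0, 2, 3, 1)
    # Normalize sampling coordinates to [-1, 1] as required by grid_sample.
    gx = 2.0 * grid[..., 0] / (w - 1) - 1.0
    gy = 2.0 * grid[..., 1] / (h - 1) - 1.0
    return F.grid_sample(frame, torch.stack((gx, gy), dim=-1), align_corners=True)


class EventGuidedInterpolator(nn.Module):
    """Synthesis branch: predicts the latent frame directly (handles non-linear
    motion, but may ghost). Warping branch: event-guided flow plus backward
    warping (sharp where texture exists). A fusion head blends the hypotheses."""

    def __init__(self, event_bins=5, feat=32):
        super().__init__()
        in_ch = 3 + 3 + event_bins  # two RGB key-frames + event voxel grid
        self.synthesis = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 3, 3, padding=1),
        )
        self.flow_net = nn.Sequential(  # one (dx, dy) field per key-frame
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 4, 3, padding=1),
        )
        self.fusion = nn.Sequential(  # per-pixel soft weights over 3 hypotheses
            nn.Conv2d(3 * 3, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 3, 3, padding=1),
        )

    def forward(self, frame0, frame1, event_voxels):
        x = torch.cat([frame0, frame1, event_voxels], dim=1)
        synth = self.synthesis(x)
        flows = self.flow_net(x)
        warp0 = backward_warp(frame0, flows[:, :2])
        warp1 = backward_warp(frame1, flows[:, 2:])
        w = torch.softmax(self.fusion(torch.cat([synth, warp0, warp1], dim=1)), dim=1)
        # Blend the three per-pixel hypotheses with the learned weights.
        return w[:, 0:1] * synth + w[:, 1:2] * warp0 + w[:, 2:3] * warp1
```

Given B×3×H×W key-frames and a B×5×H×W event tensor, this module returns a B×3×H×W estimate of the intermediate frame; for the actual multi-branch design and training procedure, refer to the paper itself.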
Related papers
- Motion-prior Contrast Maximization for Dense Continuous-Time Motion Estimation [34.529280562470746]
We introduce a novel self-supervised loss combining the Contrast Maximization framework with a non-linear motion prior in the form of pixel-level trajectories.
Its effectiveness is demonstrated in two scenarios: in dense continuous-time motion estimation, our method improves the zero-shot performance of a synthetically trained model by 29%.
arXiv Detail & Related papers (2024-07-15T15:18:28Z) - Motion-Aware Video Frame Interpolation [49.49668436390514]
We introduce a Motion-Aware Video Frame Interpolation (MA-VFI) network, which directly estimates intermediate optical flow from consecutive frames.
It not only extracts global semantic relationships and spatial details from input frames with different receptive fields, but also effectively reduces the required computational cost and complexity.
arXiv Detail & Related papers (2024-02-05T11:00:14Z) - IDO-VFI: Identifying Dynamics via Optical Flow Guidance for Video Frame
Interpolation with Events [14.098949778274733]
Event cameras are ideal for capturing inter-frame dynamics with their extremely high temporal resolution.
We propose an event-and-frame-based video frame interpolation method named IDO-VFI that assigns varying amounts of computation to different sub-regions.
Our proposed method maintains high-quality performance while reducing computation time and computational effort by 10% and 17%, respectively, on the Vimeo90K dataset.
arXiv Detail & Related papers (2023-05-17T13:22:21Z) - Time Lens++: Event-based Frame Interpolation with Parametric Non-linear
Flow and Multi-scale Fusion [47.57998625129672]
We introduce multi-scale feature-level fusion and compute one-shot non-linear inter-frame motion from events and images.
We show that our method improves the reconstruction quality by up to 0.2 dB in terms of PSNR and up to 15% in LPIPS score.
arXiv Detail & Related papers (2022-03-31T17:14:58Z) - Video Frame Interpolation without Temporal Priors [91.04877640089053]
Video frame interpolation aims to synthesize non-existent intermediate frames in a video sequence.
The temporal priors of videos, i.e. frames per second (FPS) and frame exposure time, may vary across different camera sensors.
We devise a novel optical flow refinement strategy for better synthesis results.
arXiv Detail & Related papers (2021-12-02T12:13:56Z) - Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z) - FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation [97.99012124785177]
FLAVR is a flexible and efficient architecture that uses 3D space-time convolutions to enable end-to-end learning and inference for video frame interpolation.
We demonstrate that FLAVR can serve as a useful self-supervised pretext task for action recognition, optical flow estimation, and motion magnification.
arXiv Detail & Related papers (2020-12-15T18:59:30Z) - All at Once: Temporally Adaptive Multi-Frame Interpolation with Advanced
Motion Modeling [52.425236515695914]
State-of-the-art methods are iterative solutions that interpolate one frame at a time.
This work introduces a true multi-frame interpolator.
It utilizes a pyramidal style network in the temporal domain to complete the multi-frame task in one-shot.
arXiv Detail & Related papers (2020-07-23T02:34:39Z)