Video frame interpolation for high dynamic range sequences captured with
dual-exposure sensors
- URL: http://arxiv.org/abs/2206.09485v3
- Date: Wed, 31 May 2023 13:58:01 GMT
- Title: Video frame interpolation for high dynamic range sequences captured with
dual-exposure sensors
- Authors: Uğur Çoğalan, Mojtaba Bemana, Hans-Peter Seidel, Karol Myszkowski
- Abstract summary: Video frame interpolation (VFI) enables many important applications that might involve the temporal domain.
One of the key challenges is handling high dynamic range scenes in the presence of complex motion.
- Score: 24.086089662881044
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Video frame interpolation (VFI) enables many important applications that
might involve the temporal domain, such as slow motion playback, or the spatial
domain, such as stop motion sequences. We are focusing on the former task,
where one of the key challenges is handling high dynamic range (HDR) scenes in
the presence of complex motion. To this end, we explore possible advantages of
dual-exposure sensors that readily provide sharp short and blurry long
exposures that are spatially registered and whose ends are temporally aligned.
This way, motion blur registers temporally continuous information on the scene
motion that, combined with the sharp reference, enables more precise motion
sampling within a single camera shot. We demonstrate that this facilitates a
more complex motion reconstruction in the VFI task, as well as HDR frame
reconstruction that so far has been considered only for the originally captured
frames, not in-between interpolated frames. We design a neural network trained
in these tasks that clearly outperforms existing solutions. We also propose a
metric for scene motion complexity that provides important insights into the
performance of VFI methods at test time.
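To make the capture model above concrete, here is a minimal sketch of how one might simulate such a dual-exposure pair from a high-frame-rate burst; the function name, the exposure_ratio parameter, and the simple averaging model are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def simulate_dual_exposure(burst, exposure_ratio=4):
    """Build a (short, long) exposure pair from a high-FPS burst.

    burst: array of shape (N, H, W) or (N, H, W, C) holding
    linear-intensity frames, with N == exposure_ratio.
    (Illustrative assumption, not the paper's data pipeline.)
    """
    burst = np.asarray(burst, dtype=np.float32)
    assert burst.shape[0] == exposure_ratio
    # Long exposure: the temporal mean over the whole window, so its
    # motion blur traces the scene motion continuously in time.
    long_exposure = burst.mean(axis=0)
    # Short exposure: the last frame of the window, a sharp reference
    # whose end is temporally aligned with the long exposure's end.
    short_exposure = burst[-1]
    return short_exposure, long_exposure
```

On a real dual-exposure sensor the two exposures are typically captured on the same chip (e.g., with row-interleaved exposure settings), so the pair is spatially registered by construction; a simulation like the above is only a stand-in for generating training data.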
Related papers
- CMTA: Cross-Modal Temporal Alignment for Event-guided Video Deblurring [44.30048301161034]
Video deblurring aims to enhance the quality of restored results in motion-blurred videos by gathering information from adjacent video frames.
We propose two modules: 1) Intra-frame feature enhancement operates within the exposure time of a single blurred frame, and 2) Inter-frame temporal feature alignment gathers valuable long-range temporal information to target frames.
We demonstrate that our proposed methods outperform state-of-the-art frame-based and event-based motion deblurring methods through extensive experiments conducted on both synthetic and real-world deblurring datasets.
arXiv Detail & Related papers (2024-08-27T10:09:17Z) - Motion-aware Latent Diffusion Models for Video Frame Interpolation [51.78737270917301]
Motion estimation between neighboring frames plays a crucial role in avoiding motion ambiguity.
We propose a novel diffusion framework, motion-aware latent diffusion models (MADiff)
Our method achieves state-of-the-art performance, significantly outperforming existing approaches.
arXiv Detail & Related papers (2024-04-21T05:09:56Z) - Out of the Room: Generalizing Event-Based Dynamic Motion Segmentation
for Complex Scenes [10.936350433952668]
Rapid and reliable identification of dynamic scene parts, also known as motion segmentation, is a key challenge for mobile sensors.
Event cameras have the potential to overcome these limitations, but corresponding methods have only been demonstrated in smaller-scale indoor environments.
This work presents an event-based method for class-agnostic motion segmentation that can successfully be deployed across complex large-scale outdoor environments too.
arXiv Detail & Related papers (2024-03-07T14:59:34Z) - Event-Based Motion Magnification [28.057537257958963]
We propose a dual-camera system consisting of an event camera and a conventional RGB camera for video motion magnification.
This innovative combination enables a broad and cost-effective amplification of high-frequency motions.
We demonstrate the effectiveness and accuracy of our dual-camera system and network, offering a cost-effective and flexible solution for motion detection and magnification.
arXiv Detail & Related papers (2024-02-19T08:59:58Z) - Joint Video Multi-Frame Interpolation and Deblurring under Unknown
Exposure Time [101.91824315554682]
In this work, we aim at a more realistic and challenging task: joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
arXiv Detail & Related papers (2023-03-27T09:43:42Z) - TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z) - Zooming SlowMo: An Efficient One-Stage Framework for Space-Time Video
Super-Resolution [100.11355888909102]
Space-time video super-resolution aims at generating a high-resolution (HR) slow-motion video from a low-resolution (LR) and low frame rate (LFR) video sequence.
We present a one-stage space-time video super-resolution framework, which can directly reconstruct an HR slow-motion video sequence from an input LR and LFR video.
arXiv Detail & Related papers (2021-04-15T17:59:23Z) - Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z) - Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video
Super-Resolution [95.26202278535543]
A simple solution is to split it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR). However, temporal interpolation and spatial super-resolution are intra-related in this task.
We propose a one-stage space-time video super-resolution framework, which directly synthesizes an HR slow-motion video from an LFR, LR video.
arXiv Detail & Related papers (2020-02-26T16:59:48Z)