DeepHS-HDRVideo: Deep High Speed High Dynamic Range Video Reconstruction
- URL: http://arxiv.org/abs/2210.04429v1
- Date: Mon, 10 Oct 2022 04:27:45 GMT
- Title: DeepHS-HDRVideo: Deep High Speed High Dynamic Range Video Reconstruction
- Authors: Zeeshan Khan, Parth Shettiwar, Mukul Khanna, Shanmuganathan Raman
- Abstract summary: We propose to align the input LDR frames using a pre-trained video frame interpolation network.
This results in better alignment of LDR frames, since we circumvent the error-prone exposure matching step.
We also present the first method to generate high FPS HDR videos.
- Score: 23.341594337637545
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to hardware constraints, standard off-the-shelf digital cameras suffer
from low dynamic range (LDR) and low frames-per-second (FPS) outputs. Previous
works in high dynamic range (HDR) video reconstruction use sequences of
alternating-exposure LDR frames as input and align the neighbouring frames
using optical-flow-based networks. However, these methods often produce motion
artifacts in challenging situations, because the alternating-exposure frames
must first be exposure-matched before optical-flow alignment can be applied;
over-saturation and noise in the LDR frames therefore lead to inaccurate
alignment. To this end, we propose to align the input LDR frames using a
pre-trained video frame interpolation network. This yields better alignment of
the LDR frames, since we circumvent the error-prone exposure matching step and
directly generate the intermediate missing frames from same-exposure inputs.
Furthermore, it allows us to generate high-FPS HDR videos by recursively
interpolating the intermediate frames. Through this work, we propose the use of
video frame interpolation for HDR video reconstruction and present the first
method to generate high-FPS HDR videos. Experimental results demonstrate the
efficacy of the proposed framework against optical-flow-based alignment
methods, with an absolute improvement of 2.4 dB in PSNR on standard HDR video
datasets [1], [2], and we further benchmark our method for high-FPS HDR video
generation.
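To make the alignment-by-interpolation idea concrete, below is a minimal PyTorch-style sketch under stated assumptions: `InterpNet` and `FuseNet` are hypothetical placeholders for the pre-trained frame interpolation network and the HDR merging network (the paper's actual learned architectures are not reproduced here).

```python
import torch
import torch.nn as nn

class InterpNet(nn.Module):
    """Placeholder interpolator: predicts the frame midway between two
    frames of the SAME exposure (a plain average, for illustration only)."""
    def forward(self, f0, f1):
        return 0.5 * (f0 + f1)

class FuseNet(nn.Module):
    """Placeholder HDR merge: combines an aligned low/high exposure pair."""
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1)

    def forward(self, low, high):
        return self.net(torch.cat([low, high], dim=1))

interp, fuse = InterpNet(), FuseNet()

# Alternating-exposure LDR inputs (NCHW): low at t=0, high at t=1, low at t=2.
L0, H1, L2 = (torch.rand(1, 3, 64, 64) for _ in range(3))

# 1) Alignment without exposure matching: synthesize the missing LOW exposure
#    at t=1 by interpolating its same-exposure neighbours at t=0 and t=2.
L1_hat = interp(L0, L2)

# 2) Fuse the aligned low/high pair into an HDR frame at t=1.
hdr_1 = fuse(L1_hat, H1)

# 3) High-FPS output: recursively interpolate between consecutive HDR frames,
#    doubling the frame rate on every pass.
def double_fps(frames, net, passes=1):
    for _ in range(passes):
        out = []
        for a, b in zip(frames[:-1], frames[1:]):
            out += [a, net(a, b)]
        frames = out + [frames[-1]]
    return frames

hdr_2 = hdr_1.detach()  # stand-in for the next reconstructed HDR frame
high_fps = double_fps([hdr_1, hdr_2], interp, passes=2)  # 2 frames -> 5 frames
```

The recursion in `double_fps` is what enables arbitrary frame-rate upscaling: k passes turn N frames into (N - 1) * 2**k + 1 frames.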
Related papers
- Exposure Completing for Temporally Consistent Neural High Dynamic Range Video Rendering [17.430726543786943]
We propose a novel paradigm to render HDR frames via completing the absent exposure information.
Our approach involves interpolating neighbor LDR frames in the time dimension to reconstruct LDR frames for the absent exposures.
This benefits the fusing process for HDR results, reducing noise and ghosting artifacts and thereby improving temporal consistency.
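A toy sketch of this exposure-completion idea follows, assuming a two-exposure alternating sequence and any callable interpolator `interp(f0, f1)`; the helper name and the nearest-neighbour fallback at the sequence borders are illustrative assumptions, not the paper's exact procedure.

```python
def complete_exposures(frames, exposures, interp):
    """frames[i] was captured with exposures[i] in {'low', 'high'}.
    Returns, for each time step i, a dict holding both exposures at i."""
    stacks = []
    for i, e in enumerate(exposures):
        missing = 'high' if e == 'low' else 'low'
        others = [j for j, ej in enumerate(exposures) if ej == missing]
        left = max((j for j in others if j < i), default=None)
        right = min((j for j in others if j > i), default=None)
        if left is None or right is None:  # border: reuse the nearest frame
            synth = frames[right if left is None else left]
        else:  # interior: interpolate the absent exposure in time
            synth = interp(frames[left], frames[right])
        stacks.append({e: frames[i], missing: synth})
    return stacks

# Scalar "frames" and an averaging interpolator keep the demo self-contained.
stacks = complete_exposures([0.1, 0.9, 0.2, 0.8],
                            ['low', 'high', 'low', 'high'],
                            interp=lambda a, b: 0.5 * (a + b))
```

Each completed stack then holds an aligned exposure pair per time step, which is what makes the subsequent fusion less prone to ghosting.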
arXiv Detail & Related papers (2024-07-18T09:13:08Z)
- LAN-HDR: Luminance-based Alignment Network for High Dynamic Range Video Reconstruction [20.911738532410766]
We propose an end-to-end HDR video composition framework, which aligns LDR frames in feature space and then merges aligned features into an HDR frame.
In training, we adopt a temporal loss, in addition to frame reconstruction losses, to enhance temporal consistency and thus reduce flickering.
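As a reference point for what such a temporal loss can look like, here is a minimal sketch that matches the frame-to-frame differences of the prediction to those of the ground truth; this exact formulation and the 0.5 weight are assumptions for illustration, not LAN-HDR's published loss.

```python
import torch
import torch.nn.functional as F

def temporal_loss(pred, ref):
    """pred, ref: (N, T, C, H, W) predicted and ground-truth HDR videos."""
    d_pred = pred[:, 1:] - pred[:, :-1]  # temporal gradients of the prediction
    d_ref = ref[:, 1:] - ref[:, :-1]     # temporal gradients of the reference
    return F.l1_loss(d_pred, d_ref)      # flicker shows up as a mismatch here

pred = torch.rand(2, 5, 3, 32, 32, requires_grad=True)
ref = torch.rand(2, 5, 3, 32, 32)
total = F.l1_loss(pred, ref) + 0.5 * temporal_loss(pred, ref)  # assumed weight
total.backward()
```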
arXiv Detail & Related papers (2023-08-22T01:43:00Z)
- Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task: joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
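For orientation, here is a compact sketch of a supervised contrastive loss over discrete exposure classes (Khosla et al.-style SupCon), in the spirit of the "variant of supervised contrastive learning" mentioned above; the temperature, the shapes, and the random embeddings are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sup_con(z, labels, tau=0.1):
    """z: (N, D) embeddings of blurred frames; labels: (N,) exposure class."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                               # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float('-inf'))     # exclude self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)     # avoid -inf * 0 below
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # average log-probability of same-exposure pairs for each anchor
    loss = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()

z = torch.randn(8, 128)                          # from a hypothetical encoder
labels = torch.tensor([0, 1, 0, 1, 2, 2, 0, 1])  # exposure-time classes
print(sup_con(z, labels))
```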
arXiv Detail & Related papers (2023-03-27T09:43:42Z)
- SJ-HD^2R: Selective Joint High Dynamic Range and Denoising Imaging for Dynamic Scenes [17.867412310873732]
Ghosting artifacts, motion blur, and low fidelity in highlights are the main challenges in high dynamic range (HDR) imaging.
We propose a joint HDR and denoising pipeline, containing two sub-networks.
We create the first joint HDR and denoising benchmark dataset.
arXiv Detail & Related papers (2022-06-20T07:49:56Z)
- Context-Aware Video Reconstruction for Rolling Shutter Cameras [52.28710992548282]
In this paper, we propose a context-aware GS video reconstruction architecture.
We first estimate the bilateral motion field so that the pixels of the two RS frames are warped to a common GS frame.
Then, a refinement scheme is proposed to guide the GS frame synthesis along with bilateral occlusion masks to produce high-fidelity GS video frames.
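The elementary operation underneath this kind of motion-field warping is backward warping via `grid_sample`; the sketch below demonstrates it with a zero flow (an identity warp). The bilateral motion estimation that would actually predict the flow is not reproduced here.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """img: (N, C, H, W); flow: (N, 2, H, W) in pixels, target -> source."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
    grid = torch.stack((xs, ys), dim=0).float()       # (2, H, W), channel 0 = x
    coords = grid.unsqueeze(0) + flow                 # per-pixel sample points
    # normalise to [-1, 1]; grid_sample expects (N, H, W, 2) ordered as (x, y)
    coords[:, 0] = 2 * coords[:, 0] / (w - 1) - 1
    coords[:, 1] = 2 * coords[:, 1] / (h - 1) - 1
    return F.grid_sample(img, coords.permute(0, 2, 3, 1), align_corners=True)

rs_frame = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)                      # zero flow: identity warp
gs_frame = warp(rs_frame, flow)
assert torch.allclose(gs_frame, rs_frame, atol=1e-5)
```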
arXiv Detail & Related papers (2022-05-25T17:05:47Z)
- Snapshot HDR Video Construction Using Coded Mask [25.12198906401246]
This study utilizes 3D-CNNs to perform joint demosaicking, denoising, and HDR video reconstruction from coded LDR video.
The obtained results are promising and could lead to affordable HDR video capture using conventional cameras.
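To make the data flow concrete, here is a deliberately tiny 3D-CNN mapping a single-channel coded LDR clip to a 3-channel HDR clip; the study's actual network is far deeper and is trained to demosaick, denoise, and reconstruct jointly, so treat this purely as a shape-level sketch.

```python
import torch
import torch.nn as nn

# (N, C, T, H, W) in and out: 3D convolutions mix space and time.
net = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 3, kernel_size=3, padding=1),  # 3-channel HDR output
)
coded_ldr = torch.rand(1, 1, 8, 64, 64)  # mosaicked, per-pixel-coded LDR clip
hdr_clip = net(coded_ldr)                # -> (1, 3, 8, 64, 64)
```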
arXiv Detail & Related papers (2021-12-05T09:32:11Z)
- HDRVideo-GAN: Deep Generative HDR Video Reconstruction [19.837271879354184]
We propose an end-to-end GAN-based framework for HDR video reconstruction from LDR sequences with alternating exposures.
We first extract clean LDR frames from noisy alternating-exposure LDR video using a denoising network trained in a self-supervised setting.
We then align the neighboring alternating-exposure frames to a reference frame and reconstruct high-quality HDR frames in a complete adversarial setting.
arXiv Detail & Related papers (2021-10-22T14:02:03Z)
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show up to a 5.21 dB improvement in PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
- Zooming SlowMo: An Efficient One-Stage Framework for Space-Time Video Super-Resolution [100.11355888909102]
Space-time video super-resolution aims at generating a high-resolution (HR) slow-motion video from a low-resolution (LR) and low frame rate (LFR) video sequence.
We present a one-stage space-time video super-resolution framework, which can directly reconstruct an HR slow-motion video sequence from an input LR and LFR video.
arXiv Detail & Related papers (2021-04-15T17:59:23Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
- Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution [95.26202278535543]
A simple solution is to split it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR).
However, temporal interpolation and spatial super-resolution are intra-related in this task.
We propose a one-stage space-time video super-resolution framework, which directly synthesizes an HR slow-motion video from an LFR, LR video.
arXiv Detail & Related papers (2020-02-26T16:59:48Z)