HDR Video Reconstruction: A Coarse-to-fine Network and A Real-world
Benchmark Dataset
- URL: http://arxiv.org/abs/2103.14943v1
- Date: Sat, 27 Mar 2021 16:40:05 GMT
- Title: HDR Video Reconstruction: A Coarse-to-fine Network and A Real-world
Benchmark Dataset
- Authors: Guanying Chen, Chaofeng Chen, Shi Guo, Zhetong Liang, Kwan-Yee K.
Wong, Lei Zhang
- Abstract summary: We introduce a coarse-to-fine deep learning framework for HDR video reconstruction.
Firstly, we perform coarse alignment and pixel blending in the image space to estimate the coarse HDR video.
Secondly, we conduct more sophisticated alignment and temporal fusion in the feature space of the coarse HDR video to produce better reconstruction.
- Score: 30.249052175655606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High dynamic range (HDR) video reconstruction from sequences captured with
alternating exposures is a very challenging problem. Existing methods often
align the low dynamic range (LDR) input sequence in the image space using optical
flow, and then merge the aligned images to produce the HDR output. However,
accurate alignment and fusion in the image space are difficult due to the
missing details in the over-exposed regions and noise in the under-exposed
regions, resulting in unpleasant ghosting artifacts. To enable more accurate
alignment and HDR fusion, we introduce a coarse-to-fine deep learning framework
for HDR video reconstruction. Firstly, we perform coarse alignment and pixel
blending in the image space to estimate the coarse HDR video. Secondly, we
conduct more sophisticated alignment and temporal fusion in the feature space
of the coarse HDR video to produce better reconstruction. Considering that
there is no publicly available dataset for quantitative and comprehensive
evaluation of HDR video reconstruction methods, we collect such a benchmark
dataset, which contains 97 sequences of static scenes and 184 testing pairs
of dynamic scenes. Extensive experiments show that our method outperforms
previous state-of-the-art methods. Our dataset, code and model will be made
publicly available.
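To make the two-stage pipeline concrete, the following is a minimal PyTorch-style sketch of the data flow described above. The module names, the plain convolutions standing in for the alignment and fusion networks, and the gamma/exposure linearization are assumptions for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

GAMMA = 2.2  # assumed gamma for linearizing the LDR frames

def ldr_to_linear(ldr, exposure_time):
    # Map an LDR frame into the linear HDR domain: L = I^gamma / t.
    return ldr.clamp(0.0, 1.0) ** GAMMA / exposure_time

class CoarseToFineHDR(nn.Module):
    def __init__(self, feat_ch=64):
        super().__init__()
        # Stage 1: coarse alignment and pixel blending in the image space (placeholder conv).
        self.image_blend = nn.Conv2d(3 * 3, 3, kernel_size=3, padding=1)
        # Stage 2: alignment and temporal fusion in the feature space of the coarse HDR video.
        self.extract = nn.Conv2d(3, feat_ch, kernel_size=3, padding=1)
        self.temporal_fuse = nn.Conv2d(3 * feat_ch, feat_ch, kernel_size=3, padding=1)
        self.reconstruct = nn.Conv2d(feat_ch, 3, kernel_size=3, padding=1)

    def forward(self, ldr_frames, exposure_times):
        # ldr_frames: three neighbouring frames (prev, ref, next), each (B, 3, H, W),
        # captured with the alternating exposures listed in exposure_times.
        linear = [ldr_to_linear(f, t) for f, t in zip(ldr_frames, exposure_times)]
        # Stage 1: blend the (nominally flow-aligned) linear frames into a coarse HDR frame.
        coarse_hdr = self.image_blend(torch.cat(linear, dim=1))
        # Stage 2: refine in feature space; the real model fuses features from several
        # coarse HDR frames, here the same frame stands in for its neighbours.
        feats = [self.extract(coarse_hdr) for _ in range(3)]
        fused = self.temporal_fuse(torch.cat(feats, dim=1))
        return self.reconstruct(fused)

# Usage (shapes only): three alternating-exposure LDR frames -> one refined HDR frame.
# model = CoarseToFineHDR()
# hdr = model([prev_frame, ref_frame, next_frame], exposure_times=[0.125, 0.5, 0.125])
```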
Related papers
- A Cycle Ride to HDR: Semantics Aware Self-Supervised Framework for Unpaired LDR-to-HDR Image Translation [0.0]
Low Dynamic Range (LDR) to High Dynamic Range (HDR) image translation is an important computer vision problem.
Most current state-of-the-art methods require high-quality paired LDR-HDR datasets for model training.
We propose a modified cycle-consistent adversarial architecture and utilize unpaired LDR-HDR datasets for training.
arXiv Detail & Related papers (2024-10-19T11:11:58Z)
- Diffusion-Promoted HDR Video Reconstruction [45.73396977607666]
High dynamic range (HDR) video reconstruction aims to generate HDR videos from low dynamic range (LDR) frames captured with alternating exposures.
Most existing works solely rely on the regression-based paradigm, leading to adverse effects such as ghosting artifacts and missing details in saturated regions.
We propose a diffusion-promoted method for HDR video reconstruction, termed HDR-V-Diff, which incorporates a diffusion model to capture the HDR distribution.
arXiv Detail & Related papers (2024-06-12T13:38:10Z)
- HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting [76.5908492298286]
Existing HDR NVS methods are mainly based on NeRF.
They suffer from long training time and slow inference speed.
We propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS).
arXiv Detail & Related papers (2024-05-24T00:46:58Z)
- Towards Real-World HDR Video Reconstruction: A Large-Scale Benchmark Dataset and A Two-Stage Alignment Network [16.39592423564326]
Existing methods are mostly trained on synthetic datasets, which perform poorly in real scenes.
We present Real-HDRV, a large-scale real-world benchmark dataset for HDR video reconstruction.
arXiv Detail & Related papers (2024-04-30T23:29:26Z)
- Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry from Sparse Low Dynamic Range Panoramic Images [82.1477261107279]
We propose irradiance fields from sparse LDR panoramic images to increase the observation count for faithful geometry recovery.
Experiments demonstrate that the irradiance fields outperform state-of-the-art methods on both geometry recovery and HDR reconstruction.
arXiv Detail & Related papers (2023-12-26T08:10:22Z)
- Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
SelfHDR is a self-supervised reconstruction method that only requires dynamic multi-exposure images during training.
SelfHDR achieves superior results against state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z)
- RawHDR: High Dynamic Range Image Reconstruction from a Single Raw Image [36.17182977927645]
High dynamic range (HDR) images capture many more intensity levels than standard ones.
Current methods predominantly generate HDR images from 8-bit low dynamic range (LDR) sRGB images that have been degraded by the camera processing pipeline.
Unlike existing methods, the core idea of this work is to incorporate more informative Raw sensor data to generate HDR images.
arXiv Detail & Related papers (2023-09-05T07:58:21Z)
- GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
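The exposure projection behind this idea can be sketched in a few lines: a generated HDR image is scaled by a sampled exposure, clipped to simulate sensor saturation, and gamma-encoded before being compared with real LDR photos by the discriminator. The clip-and-gamma camera model and the sampling range below are illustrative assumptions, not GlowGAN's actual camera model.

```python
import torch

def project_to_ldr(hdr, exposure, gamma=2.2):
    # hdr: linear-domain image (B, 3, H, W); exposure: scalar scale sampled per image.
    exposed = hdr * exposure              # simulate the chosen exposure
    clipped = exposed.clamp(0.0, 1.0)     # saturate over-exposed highlights
    return clipped ** (1.0 / gamma)       # gamma-encode to an LDR-like range

# In a GAN training step (generator G and discriminator D are placeholders):
# exposure = float(torch.empty(1).uniform_(0.25, 4.0))
# fake_ldr = project_to_ldr(G(z), exposure)
# d_loss = adversarial_loss(D(real_ldr), D(fake_ldr))
```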
arXiv Detail & Related papers (2022-11-22T15:42:08Z)
- HDRVideo-GAN: Deep Generative HDR Video Reconstruction [19.837271879354184]
We propose an end-to-end GAN-based framework for HDR video reconstruction from LDR sequences with alternating exposures.
We first extract clean LDR frames from the noisy LDR video with alternating exposures, using a denoising network trained in a self-supervised setting.
We then align the neighboring alternating-exposure frames to a reference frame and reconstruct high-quality HDR frames in a fully adversarial setting.
arXiv Detail & Related papers (2021-10-22T14:02:03Z)
- HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z)