HDRUNet: Single Image HDR Reconstruction with Denoising and
Dequantization
- URL: http://arxiv.org/abs/2105.13084v1
- Date: Thu, 27 May 2021 12:12:34 GMT
- Title: HDRUNet: Single Image HDR Reconstruction with Denoising and
Dequantization
- Authors: Xiangyu Chen, Yihao Liu, Zhengwen Zhang, Yu Qiao and Chao Dong
- Abstract summary: We propose a novel learning-based approach using a spatially dynamic encoder-decoder network, HDRUNet, to learn an end-to-end mapping for single image HDR reconstruction.
Our method achieves state-of-the-art performance in quantitative comparisons and visual quality.
- Score: 39.82945546614887
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most consumer-grade digital cameras can only capture a limited range of
luminance in real-world scenes due to sensor constraints. Besides, noise and
quantization errors are often introduced in the imaging process. In order to
obtain high dynamic range (HDR) images with excellent visual quality, the most
common solution is to combine multiple images with different exposures.
However, it is not always feasible to obtain multiple images of the same scene
and most HDR reconstruction methods ignore the noise and quantization loss. In
this work, we propose a novel learning-based approach using a spatially dynamic
encoder-decoder network, HDRUNet, to learn an end-to-end mapping for single
image HDR reconstruction with denoising and dequantization. The network
consists of a UNet-style base network to make full use of the hierarchical
multi-scale information, a condition network to perform pattern-specific
modulation and a weighting network for selectively retaining information.
Moreover, we propose a Tanh_L1 loss function to balance the impact of
over-exposed values and well-exposed values on the network learning. Our method
achieves state-of-the-art performance in quantitative comparisons and visual
quality. The proposed HDRUNet model won second place in the single-frame track
of the NTIRE 2021 High Dynamic Range Challenge.
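The Tanh_L1 loss mentioned in the abstract can be sketched in a few lines: both the prediction and the ground truth are compressed through tanh before the L1 distance is taken, so extreme over-exposed HDR values no longer dominate the gradient. This is a minimal NumPy illustration of that idea; the use of a mean reduction here is an assumption for the sketch, not necessarily the paper's exact formulation.

```python
import numpy as np

def tanh_l1_loss(pred, target):
    """Sketch of a Tanh_L1 loss: compress prediction and ground truth
    with tanh before the L1 distance, so very large over-exposed values
    contribute about as much as well-exposed ones.

    Mean reduction is an assumption made for this illustration."""
    return np.mean(np.abs(np.tanh(pred) - np.tanh(target)))
```

Because tanh saturates near 1, a large error in an over-exposed region (e.g. predicting 100 where the truth is 200) yields almost no loss, while the same absolute error in the well-exposed range is penalized normally, which is the balancing effect the abstract describes.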
Related papers
- Intrinsic Single-Image HDR Reconstruction [0.6554326244334868]
We introduce a physically-inspired remodeling of the HDR reconstruction problem in the intrinsic domain.
We show that dividing the problem into two simpler sub-tasks improves performance in a wide variety of photographs.
arXiv Detail & Related papers (2024-09-20T17:56:51Z) - Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z) - Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
DNNs still generate ghosting artifacts when LDR images have saturation and large motion.
We formulate the HDR deghosting problem as an image generation that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z) - Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in
Dynamic Scenes [58.66427721308464]
Self is a self-supervised reconstruction method that only requires dynamic multi-exposure images during training.
Self achieves superior results against the state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z) - SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked
Autoencoders [97.64072440883392]
We propose a novel semi-supervised approach to realize few-shot HDR imaging via two stages of training, called SSHDR.
Unlike previous methods, which directly recover content and remove ghosts simultaneously and therefore struggle to reach an optimum, SSHDR separates these objectives across its two training stages.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z) - Single-Image HDR Reconstruction by Multi-Exposure Generation [8.656080193351581]
We propose a weakly supervised learning method that inverts the physical image formation process for HDR reconstruction.
Our neural network can invert the camera response to reconstruct pixel irradiance before synthesizing multiple exposures.
Our experiments show that our proposed model can effectively reconstruct HDR images.
arXiv Detail & Related papers (2022-10-28T05:12:56Z) - Deep Progressive Feature Aggregation Network for High Dynamic Range
Imaging [24.94466716276423]
We propose a deep progressive feature aggregation network for improving HDR imaging quality in dynamic scenes.
Our method implicitly samples high-correspondence features and aggregates them in a coarse-to-fine manner for alignment.
Experiments show that our proposed method can achieve state-of-the-art performance under different scenes.
arXiv Detail & Related papers (2022-08-04T04:37:35Z) - SJ-HD^2R: Selective Joint High Dynamic Range and Denoising Imaging for
Dynamic Scenes [17.867412310873732]
Ghosting artifacts, motion blur, and low fidelity in highlights are the main challenges in High Dynamic Range (HDR) imaging.
We propose a joint HDR and denoising pipeline, containing two sub-networks.
We create the first joint HDR and denoising benchmark dataset.
arXiv Detail & Related papers (2022-06-20T07:49:56Z) - Single-Image HDR Reconstruction by Learning to Reverse the Camera
Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
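The three-stage HDR-to-LDR formation model that the last related paper describes (clipping, camera response, quantization) can be sketched as a short forward simulation. The gamma curve below is a hypothetical stand-in for a measured camera response function, and the parameter defaults are assumptions for illustration only.

```python
import numpy as np

def hdr_to_ldr(hdr, gamma=2.2, bits=8):
    """Simplified sketch of a three-stage HDR-to-LDR formation model:
    (1) dynamic range clipping, (2) a non-linear camera response
    (a hypothetical gamma curve stands in for a real, measured CRF),
    (3) quantization to 2**bits levels."""
    clipped = np.clip(hdr, 0.0, 1.0)        # (1) dynamic range clipping
    crf = clipped ** (1.0 / gamma)          # (2) non-linear camera response
    levels = 2 ** bits - 1
    return np.round(crf * levels) / levels  # (3) quantization
```

Single-image HDR reconstruction methods such as HDRUNet effectively learn to invert this lossy chain, which is why denoising and dequantization appear as explicit sub-problems.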
This list is automatically generated from the titles and abstracts of the papers in this site.