Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline
- URL: http://arxiv.org/abs/2004.01179v1
- Date: Thu, 2 Apr 2020 17:59:04 GMT
- Title: Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline
- Authors: Yu-Lun Liu, Wei-Sheng Lai, Yu-Sheng Chen, Yi-Lung Kao, Ming-Hsuan Yang, Yung-Yu Chuang, and Jia-Bin Huang
- Abstract summary: We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
- Score: 100.5353614588565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recovering a high dynamic range (HDR) image from a single low dynamic range
(LDR) input image is challenging due to missing details in under-/over-exposed
regions caused by quantization and saturation of camera sensors. In contrast to
existing learning-based methods, our core idea is to incorporate the domain
knowledge of the LDR image formation pipeline into our model. We model the
HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2)
non-linear mapping from a camera response function, and (3) quantization. We
then propose to learn three specialized CNNs to reverse these steps. By
decomposing the problem into specific sub-tasks, we impose effective physical
constraints to facilitate the training of individual sub-networks. Finally, we
jointly fine-tune the entire model end-to-end to reduce error accumulation.
With extensive quantitative and qualitative experiments on diverse image
datasets, we demonstrate that the proposed method performs favorably against
state-of-the-art single-image HDR reconstruction algorithms.
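The three-stage formation model described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the camera response function (CRF) is modeled here as a simple gamma curve for clarity, whereas the paper handles general CRFs.

```python
import numpy as np

def hdr_to_ldr(hdr, gamma=1.0 / 2.2, bits=8):
    """Sketch of the three-stage HDR-to-LDR formation model.

    The CRF is assumed to be a gamma curve here for illustration only.
    """
    # (1) Dynamic range clipping: saturate irradiance above the sensor range.
    clipped = np.clip(hdr, 0.0, 1.0)
    # (2) Non-linear mapping by the camera response function.
    mapped = clipped ** gamma
    # (3) Quantization to a fixed bit depth (e.g. 8-bit).
    levels = 2 ** bits - 1
    return np.round(mapped * levels) / levels

# Scene irradiance may exceed the sensor range (values > 1.0 get clipped).
ldr = hdr_to_ldr(np.array([0.001, 0.25, 0.9, 1.7]))
```

The proposed method learns three specialized CNNs to invert these steps in reverse order: dequantization, CRF inversion (linearization), and finally hallucinating the content lost to clipping in over-exposed regions.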
Related papers
- Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z)
- Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
However, DNNs still generate ghosting artifacts when the LDR images contain saturation and large motion.
We formulate the HDR deghosting problem as an image generation that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z)
- Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
Self is a self-supervised reconstruction method that only requires dynamic multi-exposure images during training.
Self achieves superior results against the state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z)
- Single Image LDR to HDR Conversion using Conditional Diffusion [18.466814193413487]
Digital imaging aims to replicate realistic scenes, but Low Dynamic Range (LDR) cameras cannot represent the wide dynamic range of real scenes.
This paper presents a deep learning-based approach for recovering intricate details from shadows and highlights.
We incorporate a deep autoencoder in the proposed framework to enhance the quality of the latent representation of the LDR image used for conditioning.
arXiv Detail & Related papers (2023-07-06T07:19:47Z)
- SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked Autoencoders [97.64072440883392]
We propose a novel semi-supervised approach to realize few-shot HDR imaging via two stages of training, called SSHDR.
Unlike previous methods, which directly recover content and remove ghosts simultaneously and thus struggle to reach an optimum, SSHDR separates these objectives across its two training stages.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z)
- Single-Image HDR Reconstruction by Multi-Exposure Generation [8.656080193351581]
We propose a weakly supervised learning method that inverts the physical image formation process for HDR reconstruction.
Our neural network can invert the camera response to reconstruct pixel irradiance before synthesizing multiple exposures.
Our experiments show that our proposed model can effectively reconstruct HDR images.
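The invert-then-resynthesize idea above can be sketched as follows. This is a hypothetical illustration, not that paper's implementation: the camera response is assumed to be a simple gamma curve, and the exposure ratios are arbitrary example values.

```python
import numpy as np

def synthesize_exposures(ldr, exposure_ratios=(0.25, 1.0, 4.0), gamma=2.2):
    """Invert an assumed gamma CRF to recover relative pixel irradiance,
    then re-expose and re-apply the CRF to synthesize an exposure stack."""
    # Invert the camera response: LDR -> linear irradiance (up to scale).
    irradiance = np.clip(ldr, 0.0, 1.0) ** gamma
    stack = []
    for r in exposure_ratios:
        # Scale irradiance by the exposure ratio, clip, re-apply the CRF.
        stack.append(np.clip(irradiance * r, 0.0, 1.0) ** (1.0 / gamma))
    return stack

# Ratio 1.0 reproduces the input; smaller/larger ratios give darker/brighter exposures.
stack = synthesize_exposures(np.array([0.1, 0.5, 0.9]))
```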
arXiv Detail & Related papers (2022-10-28T05:12:56Z)
- Rank-Enhanced Low-Dimensional Convolution Set for Hyperspectral Image Denoising [50.039949798156826]
This paper tackles the challenging problem of hyperspectral (HS) image denoising.
We propose a rank-enhanced low-dimensional convolution set (Re-ConvSet).
We then incorporate Re-ConvSet into the widely-used U-Net architecture to construct an HS image denoising method.
arXiv Detail & Related papers (2022-07-09T13:35:12Z)
- Unpaired Learning for High Dynamic Range Image Tone Mapping [3.867363075280544]
We describe a new tone-mapping approach guided by the distinct goal of producing low dynamic range (LDR) renditions.
This goal enables the use of an unpaired adversarial training based on unrelated sets of HDR and LDR images.
arXiv Detail & Related papers (2021-10-30T09:58:55Z)
- HDRUNet: Single Image HDR Reconstruction with Denoising and Dequantization [39.82945546614887]
We propose a novel learning-based approach using a spatially dynamic encoder-decoder network, HDRUNet, to learn an end-to-end mapping for single image HDR reconstruction.
Our method achieves the state-of-the-art performance in quantitative comparisons and visual quality.
arXiv Detail & Related papers (2021-05-27T12:12:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.