A Two-stage Deep Network for High Dynamic Range Image Reconstruction
- URL: http://arxiv.org/abs/2104.09386v1
- Date: Mon, 19 Apr 2021 15:19:17 GMT
- Title: A Two-stage Deep Network for High Dynamic Range Image Reconstruction
- Authors: SMA Sharif, Rizwan Ali Naqvi, Mithun Biswas, and Kim Sungjun
- Abstract summary: This study tackles the challenges of single-shot LDR to HDR mapping by proposing a novel two-stage deep network.
Notably, our proposed method aims to reconstruct an HDR image without knowing hardware information, including camera response function (CRF) and exposure settings.
- Score: 0.883717274344425
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mapping a single exposure low dynamic range (LDR) image into a high dynamic
range (HDR) is considered among the most strenuous image to image translation
tasks due to exposure-related missing information. This study tackles the
challenges of single-shot LDR to HDR mapping by proposing a novel two-stage
deep network. Notably, our proposed method aims to reconstruct an HDR image
without knowing hardware information, including camera response function (CRF)
and exposure settings. Therefore, we aim to perform image enhancement tasks such as denoising and exposure correction in the first stage. Additionally, the
second stage of our deep network learns tone mapping and bit-expansion from a
convex set of data samples. The qualitative and quantitative comparisons
demonstrate that the proposed method can outperform existing LDR-to-HDR methods with a marginal difference. Apart from that, we collected an LDR image
dataset incorporating different camera systems. The evaluation with our
collected real-world LDR images illustrates that the proposed method can
reconstruct plausible HDR images without presenting any visual artefacts. Code
available: https://github.com/sharif-apu/twostageHDR_NTIRE21.
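For context on the two-stage design described in the abstract, the following is a minimal PyTorch-style sketch, not the authors' released code: the first stage enhances the LDR input (denoising, exposure correction) and the second stage maps the enhanced LDR to a non-negative HDR estimate (tone mapping and bit-expansion). All module names, depths, and channel widths are illustrative assumptions; see the linked repository for the actual implementation.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # 3x3 convolution followed by ReLU, shared by both stages
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.ReLU(inplace=True))

class EnhancementStage(nn.Module):
    # Stage 1: predicts a denoised, exposure-corrected LDR image.
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, ch), conv_block(ch, ch),
                                  nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, ldr):
        # residual prediction keeps the output close to the input LDR
        return torch.clamp(ldr + self.body(ldr), 0.0, 1.0)

class HDRStage(nn.Module):
    # Stage 2: expands the enhanced LDR to a non-negative HDR estimate.
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, ch), conv_block(ch, ch),
                                  nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, enhanced_ldr):
        return torch.relu(self.body(enhanced_ldr))

class TwoStageHDR(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = EnhancementStage()
        self.stage2 = HDRStage()

    def forward(self, ldr):
        enhanced = self.stage1(ldr)
        return enhanced, self.stage2(enhanced)

# Usage: an LDR batch scaled to [0, 1] yields an enhanced LDR and an HDR estimate.
enhanced, hdr = TwoStageHDR()(torch.rand(1, 3, 256, 256))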
Related papers
- A Cycle Ride to HDR: Semantics Aware Self-Supervised Framework for Unpaired LDR-to-HDR Image Translation [0.0]
Low Dynamic Range (LDR) to High Dynamic Range (HDR) image translation is an important computer vision problem.
Most current state-of-the-art methods require high-quality paired LDR-HDR datasets for model training.
We propose a modified cycle-consistent adversarial architecture and utilize unpaired LDR-HDR datasets for training.
arXiv Detail & Related papers (2024-10-19T11:11:58Z)
- HistoHDR-Net: Histogram Equalization for Single LDR to HDR Image Translation [12.45632443397018]
High Dynamic Range (HDR) imaging aims to replicate the high visual quality and clarity of real-world scenes.
The literature offers various data-driven methods for HDR image reconstruction from Low Dynamic Range (LDR) counterparts.
A common limitation of these approaches is missing details in regions of the reconstructed HDR images.
We propose a simple and effective method, HistoHDR-Net, to recover the fine details.
arXiv Detail & Related papers (2024-02-08T20:14:46Z)
- Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
The proposed self-supervised reconstruction method only requires dynamic multi-exposure images during training.
It achieves superior results against state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z)
- SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked Autoencoders [97.64072440883392]
We propose a novel semi-supervised approach to realize few-shot HDR imaging via two stages of training, called SSHDR.
Unlike previous methods, which directly recover content and remove ghosts simultaneously and thus struggle to reach an optimum, SSHDR decouples these tasks across its two training stages.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z)
- GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
arXiv Detail & Related papers (2022-11-22T15:42:08Z)
- HDR-cGAN: Single LDR to HDR Image Translation using Conditional GAN [24.299931323012757]
Low Dynamic Range (LDR) cameras are incapable of representing the wide dynamic range of the real-world scene.
We propose a deep learning based approach to recover details in the saturated areas while reconstructing the HDR image.
We present a novel conditional GAN (cGAN) based framework trained in an end-to-end fashion over the HDR-REAL and HDR-SYNTH datasets.
arXiv Detail & Related papers (2021-10-04T18:50:35Z)
- Luminance Attentive Networks for HDR Image and Panorama Reconstruction [37.364335148790005]
Reconstructing a high dynamic range (HDR) image from a single low dynamic range (LDR) image is an ill-posed problem.
This paper proposes a luminance attentive network named LANet for HDR reconstruction from a single LDR image.
arXiv Detail & Related papers (2021-09-14T13:44:34Z)
- NTIRE 2021 Challenge on High Dynamic Range Imaging: Dataset, Methods and Results [56.932867490888015]
This paper reviews the first challenge on high-dynamic range imaging that was part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2021.
The challenge aims at estimating an HDR image from one or multiple low-dynamic range (LDR) observations, which might suffer from under- or over-exposed regions and different sources of noise.
arXiv Detail & Related papers (2021-06-02T19:45:16Z)
- HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z)
- Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) a non-linear mapping from a camera response function, and (3) quantization (see the sketch after this list).
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
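Both the GlowGAN and reverse-camera-pipeline entries above rely on a forward HDR-to-LDR formation model. The following is a minimal Python sketch of such a model, assuming a simple gamma-curve camera response and 8-bit quantization; the actual response functions and bit depths used by those papers may differ.

import numpy as np

def hdr_to_ldr(hdr, exposure=1.0, gamma=2.2, bits=8):
    """Project a linear HDR image (non-negative floats) onto an LDR image."""
    exposed = hdr * exposure                      # simulate a chosen exposure
    clipped = np.clip(exposed, 0.0, 1.0)          # (1) dynamic range clipping
    responded = clipped ** (1.0 / gamma)          # (2) non-linear camera response (gamma assumed)
    levels = 2 ** bits - 1
    return np.round(responded * levels) / levels  # (3) quantization to 2^bits levels

# Usage: the same synthetic radiance map rendered at two exposures loses detail
# in different regions, which is the missing information an LDR-to-HDR network
# must recover.
hdr = np.random.rand(64, 64, 3) * 10.0
under_exposed = hdr_to_ldr(hdr, exposure=0.1)
over_exposed = hdr_to_ldr(hdr, exposure=2.0)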