Learning Continuous Exposure Value Representations for Single-Image HDR
Reconstruction
- URL: http://arxiv.org/abs/2309.03900v1
- Date: Thu, 7 Sep 2023 17:59:03 GMT
- Title: Learning Continuous Exposure Value Representations for Single-Image HDR
Reconstruction
- Authors: Su-Kai Chen, Hung-Lin Yen, Yu-Lun Liu, Min-Hung Chen, Hou-Ning Hu,
Wen-Hsiao Peng, Yen-Yu Lin
- Abstract summary: LDR stack-based methods are used for single-image HDR reconstruction, generating an HDR image from a deep learning-generated LDR stack.
Current methods generate the stack with predetermined exposure values (EVs), which may limit the quality of HDR reconstruction.
We propose the continuous exposure value representation (CEVR), which uses an implicit function to generate LDR images with arbitrary EVs.
- Score: 23.930923461672894
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning is commonly used to reconstruct HDR images from LDR images. LDR
stack-based methods are used for single-image HDR reconstruction, generating an
HDR image from a deep learning-generated LDR stack. However, current methods
generate the stack with predetermined exposure values (EVs), which may limit
the quality of HDR reconstruction. To address this, we propose the continuous
exposure value representation (CEVR), which uses an implicit function to
generate LDR images with arbitrary EVs, including those unseen during training.
Our approach generates a continuous stack with more images containing diverse
EVs, significantly improving HDR reconstruction. We use a cycle training
strategy to supervise the model in generating continuous EV LDR images without
corresponding ground truths. Our CEVR model outperforms existing methods, as
demonstrated by experimental results.
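As a point of reference for what "arbitrary EVs" means, here is a minimal, purely physics-based sketch, not the paper's learned CEVR model: re-exposing an LDR image by an arbitrary (including fractional) number of stops via inverse-gamma linearization and radiance scaling. The `gamma=2.2` response curve is an assumption; a learned implicit function is needed precisely where this naive baseline fails, e.g. in clipped highlights.

```python
import numpy as np

def shift_exposure(ldr, delta_ev, gamma=2.2):
    """Naively re-expose an LDR image by delta_ev stops.

    Linearize with an inverse gamma curve (stand-in for a real
    camera response), scale radiance by 2**delta_ev, then re-apply
    the gamma curve and clip back to [0, 1].
    """
    linear = np.clip(ldr, 0.0, 1.0) ** gamma   # approximate linear radiance
    linear = linear * (2.0 ** delta_ev)        # one stop = a factor of 2
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

# A "continuous stack": LDR images at arbitrary, non-integer EVs.
base = np.full((4, 4, 3), 0.5)
stack = {ev: shift_exposure(base, ev) for ev in (-1.5, -0.5, 0.5, 1.5)}
```

Note that this baseline can only rescale the information already present in the input; regions saturated in the source image stay uninformative at any EV, which is where a learned model has room to improve.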
Related papers
- A Cycle Ride to HDR: Semantics Aware Self-Supervised Framework for Unpaired LDR-to-HDR Image Translation [0.0]
Low Dynamic Range (LDR) to High Dynamic Range (HDR) image translation is an important computer vision problem.
Most current state-of-the-art methods require high-quality paired LDR-HDR datasets for model training.
We propose a modified cycle-consistent adversarial architecture and utilize unpaired LDR-HDR datasets for training.
arXiv Detail & Related papers (2024-10-19T11:11:58Z)
- Exposure Diffusion: HDR Image Generation by Consistent LDR denoising [29.45922922270381]
We seek inspiration from the HDR image capture literature that traditionally fuses sets of LDR images, called "brackets", to produce a single HDR image.
We operate multiple denoising processes to generate multiple LDR brackets that together form a valid HDR result.
arXiv Detail & Related papers (2024-05-23T08:24:22Z)
- Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z)
- Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry from Sparse Low Dynamic Range Panoramic Images [82.1477261107279]
We propose the irradiance fields from sparse LDR panoramic images to increase the observation counts for faithful geometry recovery.
Experiments demonstrate that the irradiance fields outperform state-of-the-art methods on both geometry recovery and HDR reconstruction.
arXiv Detail & Related papers (2023-12-26T08:10:22Z)
- Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
Self is a self-supervised reconstruction method that only requires dynamic multi-exposure images during training.
Self achieves superior results against the state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z)
- GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
arXiv Detail & Related papers (2022-11-22T15:42:08Z)
- Single-Image HDR Reconstruction by Multi-Exposure Generation [8.656080193351581]
We propose a weakly supervised learning method that inverts the physical image formation process for HDR reconstruction.
Our neural network can invert the camera response to reconstruct pixel irradiance before synthesizing multiple exposures.
Our experiments show that our proposed model can effectively reconstruct HDR images.
arXiv Detail & Related papers (2022-10-28T05:12:56Z)
- HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z)
- Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) a non-linear mapping from a camera response function, and (3) quantization.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
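The three-stage HDR-to-LDR formation pipeline named in the last entry (dynamic range clipping, a non-linear camera response, quantization) can be sketched as an illustrative simulation. A simple gamma curve stands in for a real calibrated camera response function; this is an assumption for illustration, not the paper's method.

```python
import numpy as np

def hdr_to_ldr(irradiance, exposure=1.0, gamma=2.2, bits=8):
    """Simulate a three-stage LDR formation pipeline:
    (1) dynamic range clipping, (2) a non-linear camera response
    (a gamma curve stands in for the real CRF), (3) quantization.
    """
    exposed = irradiance * exposure              # scale scene irradiance
    clipped = np.clip(exposed, 0.0, 1.0)         # (1) clipping
    responded = clipped ** (1.0 / gamma)         # (2) camera response
    levels = 2 ** bits - 1
    return np.round(responded * levels) / levels  # (3) quantization
```

Reconstruction methods in this family learn to invert these stages; clipping is the lossy one, since values above the sensor's range map to the same saturated output regardless of the true irradiance.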
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.