FlexHDR: Modelling Alignment and Exposure Uncertainties for Flexible HDR
Imaging
- URL: http://arxiv.org/abs/2201.02625v1
- Date: Fri, 7 Jan 2022 14:27:17 GMT
- Title: FlexHDR: Modelling Alignment and Exposure Uncertainties for Flexible HDR
Imaging
- Authors: Sibi Catley-Chandar, Thomas Tanay, Lucas Vandroux, Aleš Leonardis, Gregory Slabaugh, Eduardo Pérez-Pellitero
- Abstract summary: We present a new HDR imaging technique that models alignment and exposure uncertainties to produce high quality HDR results.
We introduce a strategy that learns to jointly align and assess the alignment and exposure reliability using an HDR-aware, uncertainty-driven attention map.
Experimental results show our method can produce better quality HDR images with up to 0.8dB PSNR improvement over the state-of-the-art.
- Score: 0.9185931275245008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High dynamic range (HDR) imaging is of fundamental importance in modern
digital photography pipelines and is used to produce a high-quality photograph
with well exposed regions despite varying illumination across the image. This
is typically achieved by merging multiple low dynamic range (LDR) images taken
at different exposures. However, over-exposed regions and misalignment errors
due to poorly compensated motion result in artefacts such as ghosting. In this
paper, we present a new HDR imaging technique that specifically models
alignment and exposure uncertainties to produce high quality HDR results. We
introduce a strategy that learns to jointly align and assess the alignment and
exposure reliability using an HDR-aware, uncertainty-driven attention map that
robustly merges the frames into a single high quality HDR image. Further, we
introduce a progressive, multi-stage image fusion approach that can flexibly
merge any number of LDR images in a permutation-invariant manner. Experimental
results show our method can produce better quality HDR images with up to 0.8dB
PSNR improvement over the state-of-the-art, and subjective improvements in terms
of better detail, colours, and fewer artefacts.
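To make the merging step described in the abstract more concrete, here is a minimal sketch of an exposure-weighted, permutation-invariant fusion of an arbitrary number of LDR frames. It is not the authors' FlexHDR network: a hand-crafted reliability weight stands in for the learned, HDR-aware, uncertainty-driven attention map, and all function and parameter names are illustrative assumptions.

```python
import numpy as np

def linearize(ldr, exposure_time, gamma=2.2):
    """Undo gamma and exposure time to move an LDR frame into linear radiance.

    Assumes a simple gamma camera response; a real pipeline would use a
    calibrated CRF. `ldr` is float in [0, 1], shape (H, W, 3).
    """
    return (ldr ** gamma) / exposure_time

def reliability(ldr, eps=1e-3):
    """Hand-crafted stand-in for the learned, uncertainty-driven attention map.

    Pixels near 0 (noisy) or 1 (saturated) get low weight; mid-tones get high
    weight. Returns a per-pixel weight map with the same shape as `ldr`.
    """
    hat = 1.0 - (2.0 * ldr - 1.0) ** 2   # peaks at 0.5, falls to 0 at 0 and 1
    return np.clip(hat, eps, None)

def merge_hdr(ldr_frames, exposure_times):
    """Permutation-invariant fusion: a normalized weighted sum over frames.

    Reordering the (frame, exposure) pairs does not change the result because
    summation is commutative.
    """
    num = np.zeros_like(ldr_frames[0], dtype=np.float64)
    den = np.zeros_like(ldr_frames[0], dtype=np.float64)
    for ldr, t in zip(ldr_frames, exposure_times):
        w = reliability(ldr)
        num += w * linearize(ldr, t)
        den += w
    return num / den

# Toy usage: three synthetic exposures of the same scene.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 4.0, size=(32, 32, 3))          # linear radiance
times = [0.25, 1.0, 4.0]
frames = [np.clip((scene * t) ** (1 / 2.2), 0.0, 1.0) for t in times]
hdr = merge_hdr(frames, times)
```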
Related papers
- HDRT: Infrared Capture for HDR Imaging [8.208995723545502]
We propose a new approach, High Dynamic Range Thermal (HDRT), for HDR acquisition using a separate, commonly available, thermal infrared (IR) sensor.
We propose a novel deep neural method (HDRTNet) which combines IR and SDR content to generate HDR images.
We show substantial quantitative and qualitative quality improvements on both over- and under-exposed images, showing that our approach is robust to capturing in multiple different lighting conditions.
arXiv Detail & Related papers (2024-06-08T13:43:44Z)
- Exposure Diffusion: HDR Image Generation by Consistent LDR denoising [29.45922922270381]
We seek inspiration from the HDR image capture literature that traditionally fuses sets of LDR images, called "brackets", to produce a single HDR image.
We operate multiple denoising processes to generate multiple LDR brackets that together form a valid HDR result.
arXiv Detail & Related papers (2024-05-23T08:24:22Z)
- Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in the HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z)
- HistoHDR-Net: Histogram Equalization for Single LDR to HDR Image Translation [12.45632443397018]
High Dynamic Range (HDR) imaging aims to replicate the high visual quality and clarity of real-world scenes.
The literature offers various data-driven methods for HDR image reconstruction from Low Dynamic Range (LDR) counterparts.
A common limitation of these approaches is missing details in regions of the reconstructed HDR images.
We propose a simple and effective method, HistoHDR-Net, to recover the fine details.
arXiv Detail & Related papers (2024-02-08T20:14:46Z)
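Histogram equalization, named in the HistoHDR-Net title above, spreads the intensity histogram of an LDR input so that compressed tonal regions regain contrast. The sketch below is plain global histogram equalization on a single channel, offered only as background for the term; it is not the HistoHDR-Net pipeline, and the function names are assumptions.

```python
import numpy as np

def equalize_histogram(channel, num_bins=256):
    """Global histogram equalization of one image channel.

    `channel` is a float array in [0, 1]. The cumulative distribution function
    (CDF) of the intensities becomes the new tone mapping, so frequent
    intensity levels get stretched apart and rare ones are compressed.
    """
    hist, bin_edges = np.histogram(channel, bins=num_bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                                   # normalize to [0, 1]
    # Map every pixel through the CDF via linear interpolation.
    bin_centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    return np.interp(channel, bin_centers, cdf)

# Toy usage: a low-contrast ramp crowded into [0.4, 0.6] regains a wide range.
lowc = np.linspace(0.4, 0.6, 64).reshape(8, 8)
eq = equalize_histogram(lowc)
```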
- Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
DNNs still generate ghosting artifacts when LDR images have saturation and large motion.
We formulate HDR deghosting as an image generation task that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z)
- Perceptual Assessment and Optimization of HDR Image Rendering [25.72195917050074]
High dynamic range (HDR) rendering can faithfully reproduce the wide luminance ranges in natural scenes.
Existing quality models are mostly designed for low dynamic range (LDR) images, and do not align well with human perception of HDR image quality.
We propose a family of HDR quality metrics, in which the key step is employing a simple inverse display model to decompose an HDR image into a stack of LDR images with varying exposures.
arXiv Detail & Related papers (2023-10-19T16:32:18Z)
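A minimal sketch of the decomposition step described above: a simple inverse display model re-exposes the HDR image at several virtual exposures and clips to the displayable range, producing a stack of LDR images over which an off-the-shelf LDR quality metric could then be evaluated and pooled. The display model, exposure choices, and names here are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def inverse_display_stack(hdr, exposures=(0.25, 1.0, 4.0), gamma=2.2):
    """Decompose an HDR image into a stack of LDR renderings.

    Each virtual exposure scales the linear HDR values, clips them to the
    displayable range [0, 1], and applies a display gamma. `hdr` has shape
    (H, W, 3) in linear units.
    """
    stack = []
    for e in exposures:
        ldr = np.clip(hdr * e, 0.0, 1.0) ** (1.0 / gamma)
        stack.append(ldr)
    return stack

def pooled_quality(hdr_ref, hdr_test, ldr_metric):
    """Score an HDR test image against a reference by averaging an LDR metric
    over corresponding exposure slices of the two stacks."""
    ref_stack = inverse_display_stack(hdr_ref)
    test_stack = inverse_display_stack(hdr_test)
    return np.mean([ldr_metric(r, t) for r, t in zip(ref_stack, test_stack)])

# Toy usage with per-exposure PSNR standing in for a real LDR quality model.
def psnr(a, b, eps=1e-12):
    return 10.0 * np.log10(1.0 / (np.mean((a - b) ** 2) + eps))

rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 8.0, size=(16, 16, 3))
test = np.clip(ref + rng.normal(0.0, 0.05, size=ref.shape), 0.0, None)
score = pooled_quality(ref, test, psnr)
```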
- Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
SelfHDR is a self-supervised HDR reconstruction method that only requires dynamic multi-exposure images during training.
SelfHDR achieves superior results against state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z)
- SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked Autoencoders [97.64072440883392]
We propose SSHDR, a novel semi-supervised approach that realizes few-shot HDR imaging via two stages of training.
Unlike previous methods, which directly recover content and remove ghosts simultaneously and are therefore hard to optimize, SSHDR decouples the two tasks across its training stages.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z)
- GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
arXiv Detail & Related papers (2022-11-22T15:42:08Z)
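The projection step the GlowGAN summary describes, mapping a generated HDR image to LDR under a sampled exposure before it is compared with real LDR photographs, could look roughly like the sketch below. The camera model (scale, clip, gamma, quantize) and parameter ranges are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def project_to_ldr(hdr, log2_exposure, gamma=2.2, levels=256):
    """Render a linear HDR image to an 8-bit-style LDR image at one exposure.

    Scaling by 2**log2_exposure mimics choosing a shutter speed, clipping
    models sensor saturation, gamma encodes for display, and quantization
    mimics 8-bit storage.
    """
    exposed = hdr * (2.0 ** log2_exposure)
    clipped = np.clip(exposed, 0.0, 1.0)
    encoded = clipped ** (1.0 / gamma)
    return np.round(encoded * (levels - 1)) / (levels - 1)

def random_ldr_views(hdr, n_views, rng, ev_range=(-3.0, 3.0)):
    """Sample several exposures; in GAN training these rendered fakes would be
    judged against real in-the-wild LDR photographs by the discriminator."""
    evs = rng.uniform(*ev_range, size=n_views)
    return [project_to_ldr(hdr, ev) for ev in evs]

# Toy usage on a synthetic HDR patch.
rng = np.random.default_rng(0)
fake_hdr = rng.uniform(0.0, 16.0, size=(32, 32, 3))
ldr_batch = random_ldr_views(fake_hdr, n_views=4, rng=rng)
```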
- Deep Snapshot HDR Imaging Using Multi-Exposure Color Filter Array [14.5106375775521]
We introduce the idea of luminance normalization that simultaneously enables effective loss and input data normalization.
Experimental results using two public HDR image datasets demonstrate that our framework outperforms other snapshot methods.
arXiv Detail & Related papers (2020-11-20T06:31:37Z)
- HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.