Bracket Diffusion: HDR Image Generation by Consistent LDR Denoising
- URL: http://arxiv.org/abs/2405.14304v2
- Date: Tue, 18 Mar 2025 14:54:28 GMT
- Title: Bracket Diffusion: HDR Image Generation by Consistent LDR Denoising
- Authors: Mojtaba Bemana, Thomas Leimkühler, Karol Myszkowski, Hans-Peter Seidel, Tobias Ritschel
- Abstract summary: We demonstrate generating HDR images using the concerted action of multiple black-box, pre-trained LDR image diffusion models. We operate multiple denoising processes to generate multiple LDR brackets that together form a valid HDR result. We demonstrate state-of-the-art unconditional and conditional restoration-type (LDR2HDR) generative modeling results, yet in HDR.
- Score: 29.45922922270381
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We demonstrate generating HDR images using the concerted action of multiple black-box, pre-trained LDR image diffusion models. Relying on pre-trained LDR generative diffusion models is vital as, first, there is no sufficiently large HDR image dataset available to re-train them, and, second, even if there were, re-training such models is impossible for most compute budgets. Instead, we seek inspiration from the HDR image capture literature that traditionally fuses sets of LDR images, called "exposure brackets", to produce a single HDR image. We operate multiple denoising processes to generate multiple LDR brackets that together form a valid HDR result. The key to making this work is to introduce a consistency term into the diffusion process to couple the brackets such that they agree across the exposure range they share while accounting for possible differences due to the quantization error. We demonstrate state-of-the-art unconditional and conditional or restoration-type (LDR2HDR) generative modeling results, yet in HDR.
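The coupling idea in the abstract can be sketched in toy form: back-project each LDR bracket to linear radiance, pull the brackets toward agreement where their exposure ranges overlap, and merge the result with a classic weighted bracket fusion. The sketch below is not the authors' implementation; the validity masks, the blend weight, and the hat-shaped fusion weights are illustrative assumptions, and a simple noisy-bracket simulation stands in for the actual diffusion denoiser.

```python
import numpy as np

# Illustrative sketch only (not the paper's code): couple several LDR
# "denoising" trajectories with a consistency term, then merge them into HDR.
rng = np.random.default_rng(0)

EXPOSURES = [0.25, 1.0, 4.0]  # relative exposure times of the brackets

def to_ldr(radiance, exposure):
    """Expose and clip linear radiance into a displayable LDR range [0, 1]."""
    return np.clip(radiance * exposure, 0.0, 1.0)

def consistency_step(brackets, exposures, weight=0.5):
    """Pull each bracket toward the exposure-normalized mean radiance on
    pixels that are well exposed (unclipped) in more than one bracket.
    Thresholds and blend weight are arbitrary illustrative choices."""
    radiances = [b / e for b, e in zip(brackets, exposures)]
    valid = [(b > 0.01) & (b < 0.99) for b in brackets]
    counts = np.sum(valid, axis=0)
    mean_rad = sum(r * v for r, v in zip(radiances, valid)) / np.maximum(counts, 1)
    out = []
    for b, e, v in zip(brackets, exposures, valid):
        shared = v & (counts > 1)  # pixels this bracket shares with others
        target = to_ldr(mean_rad, e)
        out.append(np.where(shared, (1 - weight) * b + weight * target, b))
    return out

def merge_hdr(brackets, exposures):
    """Classic weighted exposure-bracket fusion into one HDR radiance map,
    with hat-shaped weights favoring mid-tone (well-exposed) pixels."""
    weights = [np.clip(1.0 - np.abs(2.0 * b - 1.0), 1e-3, None) for b in brackets]
    num = sum(w * (b / e) for w, b, e in zip(weights, brackets, exposures))
    return num / sum(weights)

# Toy stand-in for the diffusion process: noisy brackets of a known scene,
# repeatedly coupled by the consistency step.
scene = rng.uniform(0.0, 2.0, size=(8, 8))  # ground-truth linear radiance
brackets = [np.clip(to_ldr(scene, e) + 0.1 * rng.normal(size=scene.shape), 0.0, 1.0)
            for e in EXPOSURES]
for _ in range(20):
    brackets = consistency_step(brackets, EXPOSURES)

hdr = merge_hdr(brackets, EXPOSURES)
```

In the actual method the consistency term acts inside the diffusion sampling loop of each black-box LDR model; here it is applied to static images only to make the coupling-then-fusion structure concrete.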
Related papers
- Diffusion-Promoted HDR Video Reconstruction [45.73396977607666]
High dynamic range (HDR) video reconstruction aims to generate HDR videos from low dynamic range (LDR) frames captured with alternating exposures.
Most existing works solely rely on the regression-based paradigm, leading to adverse effects such as ghosting artifacts and missing details in saturated regions.
We propose a diffusion-promoted method for HDR video reconstruction, termed HDR-V-Diff, which incorporates a diffusion model to capture the HDR distribution.
arXiv Detail & Related papers (2024-06-12T13:38:10Z)
- HDRT: Infrared Capture for HDR Imaging [8.208995723545502]
We propose a new approach, High Dynamic Range Thermal (HDRT), for HDR acquisition using a separate, commonly available, thermal infrared (IR) sensor.
We propose a novel deep neural method (HDRTNet) which combines IR and SDR content to generate HDR images.
We show substantial quantitative and qualitative quality improvements on both over- and under-exposed images, showing that our approach is robust to capturing in multiple different lighting conditions.
arXiv Detail & Related papers (2024-06-08T13:43:44Z)
- Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in the HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z)
- Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing deep neural network (DNN) techniques.
DNNs still generate ghosting artifacts when LDR images have saturation and large motion.
We formulate the HDR deghosting problem as an image generation task that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z)
- Learning Continuous Exposure Value Representations for Single-Image HDR Reconstruction [23.930923461672894]
LDR stack-based methods are used for single-image HDR reconstruction, generating an HDR image from a deep learning-generated LDR stack.
Current methods generate the stack with predetermined exposure values (EVs), which may limit the quality of HDR reconstruction.
We propose the continuous exposure value representation (CEVR), which uses an implicit function to generate LDR images with arbitrary EVs.
arXiv Detail & Related papers (2023-09-07T17:59:03Z)
- GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
arXiv Detail & Related papers (2022-11-22T15:42:08Z)
- FlexHDR: Modelling Alignment and Exposure Uncertainties for Flexible HDR Imaging [0.9185931275245008]
We present a new HDR imaging technique that models alignment and exposure uncertainties to produce high quality HDR results.
We introduce a strategy that learns to jointly align and assess the alignment and exposure reliability using an HDR-aware, uncertainty-driven attention map.
Experimental results show our method can produce better quality HDR images, with up to 0.8 dB PSNR improvement over the state-of-the-art.
arXiv Detail & Related papers (2022-01-07T14:27:17Z)
- A Two-stage Deep Network for High Dynamic Range Image Reconstruction [0.883717274344425]
This study tackles the challenges of single-shot LDR to HDR mapping by proposing a novel two-stage deep network.
Notably, our proposed method aims to reconstruct an HDR image without knowing hardware information, including camera response function (CRF) and exposure settings.
arXiv Detail & Related papers (2021-04-19T15:19:17Z)
- Beyond Visual Attractiveness: Physically Plausible Single Image HDR Reconstruction for Spherical Panoramas [60.24132321381606]
We introduce the physical illuminance constraints to our single-shot HDR reconstruction framework.
Our method can generate HDRs which are not only visually appealing but also physically plausible.
arXiv Detail & Related papers (2021-03-24T01:51:19Z)
- HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.