HDR Reconstruction Boosting with Training-Free and Exposure-Consistent Diffusion
- URL: http://arxiv.org/abs/2602.19706v1
- Date: Mon, 23 Feb 2026 10:57:22 GMT
- Title: HDR Reconstruction Boosting with Training-Free and Exposure-Consistent Diffusion
- Authors: Yo-Tin Lin, Su-Kai Chen, Hou-Ning Hu, Yen-Yu Lin, Yu-Lun Liu
- Abstract summary: We present a training-free approach that enhances existing indirect and direct HDR reconstruction methods. Our method combines text-guided diffusion models with SDEdit refinement to generate plausible content in over-exposed areas.
- Score: 18.322610131900696
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Single LDR to HDR reconstruction remains challenging for over-exposed regions where traditional methods often fail due to complete information loss. We present a training-free approach that enhances existing indirect and direct HDR reconstruction methods through diffusion-based inpainting. Our method combines text-guided diffusion models with SDEdit refinement to generate plausible content in over-exposed areas while maintaining consistency across multi-exposure LDR images. Unlike previous approaches requiring extensive training, our method seamlessly integrates with existing HDR reconstruction techniques through an iterative compensation mechanism that ensures luminance coherence across multiple exposures. We demonstrate significant improvements in both perceptual quality and quantitative metrics on standard HDR datasets and in-the-wild captures. Results show that our method effectively recovers natural details in challenging scenarios while preserving the advantages of existing HDR reconstruction pipelines. Project page: https://github.com/EusdenLin/HDR-Reconstruction-Boosting
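The abstract describes masking over-exposed regions, inpainting them, and an iterative compensation mechanism that keeps luminance coherent across exposures. A minimal sketch of that masking and compensation idea (the thresholds and function names are illustrative, not from the paper, and the actual diffusion inpainting step is omitted):

```python
import numpy as np

def overexposed_mask(ldr, threshold=0.95):
    """Mark pixels where all channels are near saturation,
    i.e. regions with complete information loss."""
    return (ldr >= threshold).all(axis=-1)

def compensate_luminance(inpainted, reference, mask):
    """Rescale inpainted content so its mean luminance inside the mask
    matches a reference exposure (simplified compensation step)."""
    lum_in = inpainted[mask].mean()
    lum_ref = reference[mask].mean()
    out = inpainted.copy()
    out[mask] *= lum_ref / max(lum_in, 1e-8)
    return out
```

In the paper's pipeline this compensation would be applied iteratively across the multi-exposure LDR stack; the sketch shows only a single pass against one reference exposure.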
Related papers
- GMODiff: One-Step Gain Map Refinement with Diffusion Priors for HDR Reconstruction [48.881484713994496]
We introduce GMODiff, a gain map-driven one-step diffusion framework for multi-exposure HDR reconstruction.
Our GMODiff performs favorably against several state-of-the-art methods and is 100× faster than previous LDM-based methods.
arXiv Detail & Related papers (2025-12-18T09:50:25Z) - HDR Image Reconstruction using an Unsupervised Fusion Model [0.0]
High Dynamic Range (HDR) imaging aims to reproduce the wide range of brightness levels present in natural scenes.
We propose a deep learning-based multi-exposure fusion approach for HDR image generation.
arXiv Detail & Related papers (2025-10-21T17:43:22Z) - Boosting HDR Image Reconstruction via Semantic Knowledge Transfer [45.738735520776004]
Leveraging scene-specific semantic priors offers a promising solution for restoring heavily degraded regions.
These priors are typically extracted from sRGB Standard Dynamic Range (SDR) images.
We propose a general framework that transfers semantic knowledge derived from the SDR domain via self-distillation to boost existing HDR reconstruction methods.
arXiv Detail & Related papers (2025-03-19T16:01:27Z) - Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z) - HistoHDR-Net: Histogram Equalization for Single LDR to HDR Image Translation [12.45632443397018]
High Dynamic Range (HDR) imaging aims to replicate the high visual quality and clarity of real-world scenes.
The literature offers various data-driven methods for HDR image reconstruction from Low Dynamic Range (LDR) counterparts.
A common limitation of these approaches is missing details in regions of the reconstructed HDR images.
We propose a simple and effective method, HistoHDR-Net, to recover the fine details.
arXiv Detail & Related papers (2024-02-08T20:14:46Z) - Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
SelfHDR is a self-supervised reconstruction method that requires only dynamic multi-exposure images during training.
SelfHDR achieves superior results against state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z) - Learning Continuous Exposure Value Representations for Single-Image HDR Reconstruction [23.930923461672894]
LDR stack-based methods are used for single-image HDR reconstruction, generating an HDR image from a deep learning-generated LDR stack.
Current methods generate the stack with predetermined exposure values (EVs), which may limit the quality of HDR reconstruction.
We propose the continuous exposure value representation (CEVR), which uses an implicit function to generate LDR images with arbitrary EVs.
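The continuous-EV idea above generalizes the classical exposure relation: scale linear radiance by a power of two of the EV, then apply a tone curve. A minimal sketch of that underlying relation (CEVR replaces the fixed gamma curve here with a learned implicit function conditioned on a continuous EV; this is not the paper's model):

```python
import numpy as np

def ldr_at_ev(hdr, ev, gamma=2.2):
    """Render an LDR image from linear HDR radiance at an arbitrary
    exposure value: scale by 2**ev, apply a gamma curve, then clip."""
    exposed = hdr * (2.0 ** ev)
    return np.clip(exposed ** (1.0 / gamma), 0.0, 1.0)
```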
arXiv Detail & Related papers (2023-09-07T17:59:03Z) - HDR Reconstruction from Bracketed Exposures and Events [12.565039752529797]
Reconstruction of high-quality HDR images is at the core of modern computational photography.
We present a multi-modal end-to-end learning-based HDR imaging system that fuses bracketed images and events in the feature domain.
Our framework exploits the higher temporal resolution of events by sub-sampling the input event streams using a sliding window.
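Sub-sampling an event stream with a sliding window, as described above, amounts to grouping timestamped events into overlapping temporal bins. An illustrative sketch (the window/stride parameters and list-based representation are assumptions, not from the paper):

```python
def sliding_windows(event_timestamps, window, stride):
    """Group a sorted event stream into temporal windows of length
    `window`, advancing by `stride`. Returns one index list per window."""
    if not event_timestamps:
        return []
    t_end = event_timestamps[-1]
    windows = []
    start = event_timestamps[0]
    while start <= t_end:
        idx = [i for i, t in enumerate(event_timestamps)
               if start <= t < start + window]
        windows.append(idx)
        start += stride
    return windows
```

With `stride < window` the windows overlap, which is how such schemes preserve the events' high temporal resolution while keeping each window's event count manageable.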
arXiv Detail & Related papers (2022-03-28T15:04:41Z) - Beyond Visual Attractiveness: Physically Plausible Single Image HDR Reconstruction for Spherical Panoramas [60.24132321381606]
We introduce the physical illuminance constraints to our single-shot HDR reconstruction framework.
Our method can generate HDRs which are not only visually appealing but also physically plausible.
arXiv Detail & Related papers (2021-03-24T01:51:19Z) - HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z) - Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
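The three-stage formation pipeline named in that last summary can be sketched directly. A simplified version, assuming a gamma curve as a stand-in for the real camera response function (the paper learns to invert each stage; this only shows the forward model):

```python
import numpy as np

def hdr_to_ldr(hdr, exposure=1.0, gamma=2.2, bits=8):
    """Simplified HDR-to-LDR formation pipeline:
    (1) dynamic range clipping, (2) non-linear camera response
    (gamma curve as a stand-in for the real CRF), (3) quantization."""
    clipped = np.clip(hdr * exposure, 0.0, 1.0)   # (1) clipping
    mapped = clipped ** (1.0 / gamma)             # (2) camera response
    levels = 2 ** bits - 1
    return np.round(mapped * levels) / levels     # (3) quantization
```

Stage (1) discards everything above the sensor's saturation point, which is exactly the information loss that single-image HDR reconstruction methods must hallucinate back.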
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.