iHDR: Iterative HDR Imaging with Arbitrary Number of Exposures
- URL: http://arxiv.org/abs/2505.22971v1
- Date: Thu, 29 May 2025 01:20:31 GMT
- Title: iHDR: Iterative HDR Imaging with Arbitrary Number of Exposures
- Authors: Yu Yuan, Yiheng Chi, Xingguang Zhang, Stanley Chan
- Abstract summary: High dynamic range (HDR) imaging aims to obtain a high-quality HDR image by fusing information from multiple low dynamic range (LDR) images. Our framework comprises a ghost-free Dual-input HDR fusion network (DiHDR) and a physics-based domain mapping network (ToneNet). DiHDR estimates an intermediate HDR image, while ToneNet maps it back to the nonlinear domain so it can serve as the reference for the next pairwise fusion.
- Score: 1.9686770963118383
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: High dynamic range (HDR) imaging aims to obtain a high-quality HDR image by fusing information from multiple low dynamic range (LDR) images. Numerous learning-based HDR imaging methods have been proposed to achieve this for static and dynamic scenes. However, their architectures are mostly tailored for a fixed number (e.g., three) of inputs and, therefore, cannot apply directly to situations beyond the pre-defined limited scope. To address this issue, we propose a novel framework, iHDR, for iterative fusion, which comprises a ghost-free Dual-input HDR fusion network (DiHDR) and a physics-based domain mapping network (ToneNet). DiHDR leverages a pair of inputs to estimate an intermediate HDR image, while ToneNet maps it back to the nonlinear domain and serves as the reference input for the next pairwise fusion. This process is iteratively executed until all input frames are utilized. Qualitative and quantitative experiments demonstrate the effectiveness of the proposed method as compared to existing state-of-the-art HDR deghosting approaches given flexible numbers of input frames.
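The iterative scheme described in the abstract reduces to a simple loop over exposures. The sketch below is a minimal Python illustration of that loop under assumed interfaces; the callables `dihdr` and `tonenet`, their signatures, and the frame ordering are hypothetical stand-ins for the paper's DiHDR and ToneNet networks, not the authors' released code.

```python
def iterative_hdr_fusion(ldr_frames, dihdr, tonenet):
    """Fuse an arbitrary number of LDR exposures two at a time.

    ldr_frames: list of LDR images (assumed here to be ordered, e.g. by exposure).
    dihdr:      callable(reference_ldr, other_ldr) -> intermediate HDR estimate.
    tonenet:    callable(hdr) -> nonlinear-domain image usable as the next reference.
    """
    if len(ldr_frames) < 2:
        raise ValueError("need at least two exposures to fuse")

    reference = ldr_frames[0]
    hdr = None
    for frame in ldr_frames[1:]:
        # Pairwise, ghost-free fusion of the current reference with the next frame.
        hdr = dihdr(reference, frame)
        # Map the intermediate HDR back to the nonlinear domain so it can act
        # as the reference input for the next pairwise fusion.
        reference = tonenet(hdr)
    # After all frames have been consumed, the last intermediate estimate is the result.
    return hdr
```

Because each iteration consumes exactly one additional frame, the same loop handles any number of exposures, which is the flexibility the fixed-input architectures cited in the abstract lack.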
Related papers
- LEDiff: Latent Exposure Diffusion for HDR Generation [11.669442066168244]
LEDiff is a method that enables HDR content generation in a generative model through latent-space exposure fusion. It also functions as an LDR-to-HDR converter, expanding the dynamic range of existing low dynamic range images.
arXiv Detail & Related papers (2024-12-19T02:15:55Z) - LLM-HDR: Bridging LLM-based Perception and Self-Supervision for Unpaired LDR-to-HDR Image Reconstruction [10.957314050894652]
The paper proposes a method that integrates the perception of Large Language Models (LLMs) into a modified semantic artifact-consistent adversarial architecture. The method achieves state-of-the-art performance across several benchmark datasets and reconstructs high-quality HDR images.
arXiv Detail & Related papers (2024-10-19T11:11:58Z) - HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting [76.5908492298286]
Existing HDR NVS methods are mainly based on NeRF.
They suffer from long training time and slow inference speed.
We propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS).
arXiv Detail & Related papers (2024-05-24T00:46:58Z) - Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced into the HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z) - HistoHDR-Net: Histogram Equalization for Single LDR to HDR Image
Translation [12.45632443397018]
High Dynamic Range (HDR) imaging aims to replicate the high visual quality and clarity of real-world scenes.
The literature offers various data-driven methods for HDR image reconstruction from Low Dynamic Range (LDR) counterparts.
A common limitation of these approaches is missing details in regions of the reconstructed HDR images.
We propose a simple and effective method, HistoHDR-Net, to recover the fine details.
arXiv Detail & Related papers (2024-02-08T20:14:46Z) - Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in
Dynamic Scenes [58.66427721308464]
SelfHDR is a self-supervised reconstruction method that requires only dynamic multi-exposure images during training.
SelfHDR achieves superior results against the state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z) - SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked
Autoencoders [97.64072440883392]
We propose a novel semi-supervised approach to realize few-shot HDR imaging via two stages of training, called SSHDR.
Unlike previous methods, which directly recover content and remove ghosts simultaneously and are therefore hard to optimize, SSHDR decomposes the problem into two training stages.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z) - Self-supervised HDR Imaging from Motion and Exposure Cues [14.57046548797279]
We propose a novel self-supervised approach for learnable HDR estimation that alleviates the need for HDR ground-truth labels.
Experimental results show that the HDR models trained using our proposed self-supervision approach achieve performance competitive with those trained under full supervision.
arXiv Detail & Related papers (2022-03-23T10:22:03Z) - NTIRE 2021 Challenge on High Dynamic Range Imaging: Dataset, Methods and
Results [56.932867490888015]
This paper reviews the first challenge on high-dynamic range imaging that was part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2021.
The challenge aims at estimating an HDR image from one or multiple low dynamic range (LDR) observations, which might suffer from under- or over-exposed regions and different sources of noise.
arXiv Detail & Related papers (2021-06-02T19:45:16Z) - A Two-stage Deep Network for High Dynamic Range Image Reconstruction [0.883717274344425]
This study tackles the challenges of single-shot LDR to HDR mapping by proposing a novel two-stage deep network.
Notably, our proposed method aims to reconstruct an HDR image without knowing hardware information, including camera response function (CRF) and exposure settings.
arXiv Detail & Related papers (2021-04-19T15:19:17Z) - HDR Video Reconstruction: A Coarse-to-fine Network and A Real-world
Benchmark Dataset [30.249052175655606]
We introduce a coarse-to-fine deep learning framework for HDR video reconstruction.
Firstly, we perform coarse alignment and pixel blending in the image space to estimate the coarse HDR video.
Secondly, we conduct more sophisticated alignment and temporal fusion in the feature space of the coarse HDR video to produce better reconstruction.
arXiv Detail & Related papers (2021-03-27T16:40:05Z) - HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with
Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z)