UltraLED: Learning to See Everything in Ultra-High Dynamic Range Scenes
- URL: http://arxiv.org/abs/2510.07741v1
- Date: Thu, 09 Oct 2025 03:29:39 GMT
- Title: UltraLED: Learning to See Everything in Ultra-High Dynamic Range Scenes
- Authors: Yuang Meng, Xin Jin, Lina Lei, Chun-Le Guo, Chongyi Li
- Abstract summary: Ultra-high dynamic range scenes exhibit significant exposure disparities between bright and dark regions. RGB-based bracketing methods can capture details at both ends using short-long exposure pairs, but are susceptible to misalignment and ghosting artifacts. In this study, we rely solely on a single short-exposure frame, which inherently avoids ghosting and motion blur.
- Score: 57.974462689166394
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Ultra-high dynamic range (UHDR) scenes exhibit significant exposure disparities between bright and dark regions. Such conditions are commonly encountered in nighttime scenes with light sources. Even with standard exposure settings, a bimodal intensity distribution with boundary peaks often emerges, making it difficult to preserve both highlight and shadow details simultaneously. RGB-based bracketing methods can capture details at both ends using short-long exposure pairs, but are susceptible to misalignment and ghosting artifacts. We found that a short-exposure image already retains sufficient highlight detail. The main challenge of UHDR reconstruction lies in denoising and recovering information in dark regions. In comparison to the RGB images, RAW images, thanks to their higher bit depth and more predictable noise characteristics, offer greater potential for addressing this challenge. This raises a key question: can we learn to see everything in UHDR scenes using only a single short-exposure RAW image? In this study, we rely solely on a single short-exposure frame, which inherently avoids ghosting and motion blur, making it particularly robust in dynamic scenes. To achieve that, we introduce UltraLED, a two-stage framework that performs exposure correction via a ratio map to balance dynamic range, followed by a brightness-aware RAW denoiser to enhance detail recovery in dark regions. To support this setting, we design a 9-stop bracketing pipeline to synthesize realistic UHDR images and contribute a corresponding dataset based on diverse scenes, using only the shortest exposure as input for reconstruction. Extensive experiments show that UltraLED significantly outperforms existing single-frame approaches. Our code and dataset are made publicly available at https://srameo.github.io/projects/ultraled.
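The abstract outlines a two-stage pipeline: a ratio map first balances the dynamic range of the single short-exposure RAW frame, and a brightness-aware RAW denoiser then recovers detail in the dark regions. The sketch below illustrates that data flow only; the tiny module architectures, the packed-Bayer input format, and all names (`RatioMapNet`, `BrightnessAwareDenoiser`, `reconstruct_uhdr`) are illustrative assumptions, not the released UltraLED code.

```python
# Minimal conceptual sketch of a two-stage single-frame UHDR pipeline,
# following the abstract's description. Module designs are placeholders.
import torch
import torch.nn as nn


class RatioMapNet(nn.Module):
    """Stage 1 (assumed form): predict a per-pixel exposure ratio map
    that brightens dark regions of the short-exposure RAW frame."""

    def __init__(self, channels=4):  # 4 = packed Bayer (RGGB) planes, an assumption
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1), nn.Softplus(),  # keep ratio >= 0
        )

    def forward(self, raw):
        return self.body(raw)


class BrightnessAwareDenoiser(nn.Module):
    """Stage 2 (assumed form): denoise the exposure-corrected RAW,
    conditioned on the ratio map so noisy dark regions get stronger treatment."""

    def __init__(self, channels=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels * 2, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, corrected, ratio):
        return self.body(torch.cat([corrected, ratio], dim=1))


def reconstruct_uhdr(short_raw, stage1, stage2):
    """Single short-exposure RAW in, denoised exposure-balanced RAW out."""
    ratio = stage1(short_raw)         # per-pixel gain map
    corrected = short_raw * ratio     # balance the dynamic range
    return stage2(corrected, ratio)   # recover detail in dark regions


if __name__ == "__main__":
    x = torch.rand(1, 4, 128, 128)    # packed short-exposure RAW patch
    out = reconstruct_uhdr(x, RatioMapNet(), BrightnessAwareDenoiser())
    print(out.shape)                  # torch.Size([1, 4, 128, 128])
```

The key design point suggested by the abstract is that exposure correction happens before denoising, so the denoiser sees (and can be conditioned on) how strongly each region was amplified.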
Related papers
- Dark-EvGS: Event Camera as an Eye for Radiance Field in the Dark [51.68144172958247]
We propose Dark-EvGS, the first event-assisted 3D GS framework that enables the reconstruction of bright frames from arbitrary viewpoints. Our method achieves better results than existing methods, enabling radiance field reconstruction under challenging low-light conditions.
arXiv Detail & Related papers (2025-07-16T05:54:33Z) - UltraFusion: Ultra High Dynamic Imaging using Exposure Fusion [16.915597001287964]
Capturing high dynamic range scenes is one of the most important issues in camera design. We propose UltraFusion, the first exposure fusion technique that can merge inputs with a 9-stop difference. Our approach outperforms HDR-Transformer on the latest HDR benchmarks.
arXiv Detail & Related papers (2025-01-20T14:45:07Z) - Dynamic EventNeRF: Reconstructing General Dynamic Scenes from Multi-view RGB and Event Streams [69.65147723239153]
Volumetric reconstruction of dynamic scenes is an important problem in computer vision. It is especially challenging in poor lighting and with fast motion. We propose the first method to spatiotemporally reconstruct a scene from sparse multi-view event streams and sparse RGB frames.
arXiv Detail & Related papers (2024-12-09T18:56:18Z) - Exposure Completing for Temporally Consistent Neural High Dynamic Range Video Rendering [17.430726543786943]
We propose a novel paradigm to render HDR frames via completing the absent exposure information.
Our approach involves interpolating neighbor LDR frames in the time dimension to reconstruct LDR frames for the absent exposures.
This benefits the fusion process for HDR results, reducing noise and ghosting artifacts and thereby improving temporal consistency.
arXiv Detail & Related papers (2024-07-18T09:13:08Z) - Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
SelfHDR is a self-supervised HDR reconstruction method that only requires dynamic multi-exposure images during training.
SelfHDR achieves superior results against state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z) - RawHDR: High Dynamic Range Image Reconstruction from a Single Raw Image [36.17182977927645]
High dynamic range (HDR) images capture far more intensity levels than standard ones.
Current methods predominantly generate HDR images from 8-bit low dynamic range (LDR) sRGB images that have been degraded by the camera processing pipeline.
Unlike existing methods, the core idea of this work is to incorporate more informative Raw sensor data to generate HDR images.
arXiv Detail & Related papers (2023-09-05T07:58:21Z) - Online Overexposed Pixels Hallucination in Videos with Adaptive Reference Frame Selection [90.35085487641773]
Low dynamic range (LDR) cameras cannot deal with wide dynamic range inputs, frequently leading to local overexposure issues.
We present a learning-based system to reduce these artifacts without resorting to complex processing mechanisms.
arXiv Detail & Related papers (2023-08-29T17:40:57Z) - Progressively Optimized Local Radiance Fields for Robust View Synthesis [76.55036080270347]
We present an algorithm for reconstructing the radiance field of a large-scale scene from a single casually captured video.
For handling unknown poses, we jointly estimate the camera poses with radiance field in a progressive manner.
For handling large unbounded scenes, we dynamically allocate new local radiance fields trained with frames within a temporal window.
arXiv Detail & Related papers (2023-03-24T04:03:55Z) - Multi-Exposure HDR Composition by Gated Swin Transformer [8.619880437958525]
This paper provides a novel multi-exposure fusion model based on Swin Transformer.
We exploit long-distance contextual dependencies in the exposure-space pyramid via the self-attention mechanism.
Experiments show that our model achieves accuracy on par with current top-performing multi-exposure HDR imaging models.
arXiv Detail & Related papers (2023-03-15T15:38:43Z) - HDR-cGAN: Single LDR to HDR Image Translation using Conditional GAN [24.299931323012757]
Low Dynamic Range (LDR) cameras are incapable of representing the wide dynamic range of real-world scenes.
We propose a deep learning based approach to recover details in the saturated areas while reconstructing the HDR image.
We present a novel conditional GAN (cGAN) based framework trained in an end-to-end fashion over the HDR-REAL and HDR-SYNTH datasets.
arXiv Detail & Related papers (2021-10-04T18:50:35Z)