Reconstructing 3D Scenes in Native High Dynamic Range
- URL: http://arxiv.org/abs/2511.12895v1
- Date: Mon, 17 Nov 2025 02:33:31 GMT
- Title: Reconstructing 3D Scenes in Native High Dynamic Range
- Authors: Kaixuan Zhang, Minxian Li, Mingwu Ren, Jiankang Deng, Xiatian Zhu,
- Abstract summary: We present the first method for 3D scene reconstruction that directly models native HDR observations. We propose Native High Dynamic Range 3D Gaussian Splatting (NH-3DGS), which preserves the full dynamic range throughout the reconstruction pipeline. We demonstrate on both synthetic and real multi-view HDR datasets that NH-3DGS significantly outperforms existing methods in reconstruction quality and dynamic range preservation.
- Score: 82.90064638813185
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: High Dynamic Range (HDR) imaging is essential for professional digital media creation, e.g., filmmaking, virtual production, and photorealistic rendering. However, 3D scene reconstruction has primarily focused on Low Dynamic Range (LDR) data, limiting its applicability to professional workflows. Existing approaches that reconstruct HDR scenes from LDR observations rely on multi-exposure fusion or inverse tone-mapping, which increase capture complexity and depend on synthetic supervision. With the recent emergence of cameras that directly capture native HDR data in a single exposure, we present the first method for 3D scene reconstruction that directly models native HDR observations. We propose Native High Dynamic Range 3D Gaussian Splatting (NH-3DGS), which preserves the full dynamic range throughout the reconstruction pipeline. Our key technical contribution is a novel luminance-chromaticity decomposition of the color representation that enables direct optimization from native HDR camera data. We demonstrate on both synthetic and real multi-view HDR datasets that NH-3DGS significantly outperforms existing methods in reconstruction quality and dynamic range preservation, enabling professional-grade 3D reconstruction directly from native HDR captures. Code and datasets will be made available.
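The abstract names a luminance-chromaticity decomposition as the key contribution but does not specify its formulation. The sketch below shows one conventional way such a split can be performed, using standard Rec.709 luma weights and a log-luminance channel; the function names, the weights, and the log parameterization are all assumptions for illustration, not the paper's published method.

```python
import numpy as np

# Hypothetical sketch of a luminance-chromaticity split for linear HDR RGB.
# Rec.709 luma weights are a stand-in; NH-3DGS's exact formulation is not
# given in the abstract.
REC709 = np.array([0.2126, 0.7152, 0.0722])

def decompose(rgb_hdr):
    """Split linear HDR RGB into (log-luminance, chromaticity)."""
    eps = 1e-8
    luminance = rgb_hdr @ REC709                      # scalar per pixel
    chroma = rgb_hdr / (luminance[..., None] + eps)   # luminance-normalized color
    return np.log(luminance + eps), chroma

def recompose(log_lum, chroma):
    """Invert the decomposition back to linear HDR RGB."""
    return np.exp(log_lum)[..., None] * chroma

# Two example pixels: one very bright, one in deep shadow.
pixels = np.array([[12.0, 3.0, 0.5],
                   [0.02, 0.04, 0.09]])
log_lum, chroma = decompose(pixels)
restored = recompose(log_lum, chroma)
```

Optimizing a log-luminance channel separately from chromaticity is a common way to keep very bright and very dark regions numerically well-behaved, which is plausibly why such a split helps with native HDR supervision.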
Related papers
- Dynamic Novel View Synthesis in High Dynamic Range [78.72910306733607]
Current methods primarily focus on static scenes, implicitly assuming all scene elements remain stationary and non-living. We introduce HDR-4DGS, a Gaussian Splatting-based architecture featuring an innovative dynamic tone-mapping module. Experiments demonstrate that HDR-4DGS surpasses existing state-of-the-art methods in both quantitative performance and visual fidelity.
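Several of these papers build on tone-mapping, which compresses HDR radiance into the LDR display range. As background only, here is a minimal sketch of the classic global Reinhard operator, a standard baseline that dynamic tone-mapping modules generalize; the function name and exposure parameter are illustrative, and this is not the HDR-4DGS module itself.

```python
import numpy as np

# Minimal sketch of global Reinhard tone-mapping: L / (1 + L) compresses
# unbounded HDR radiance into [0, 1) while preserving order.
def reinhard_tonemap(hdr: np.ndarray, exposure: float = 1.0) -> np.ndarray:
    """Map linear HDR values into [0, 1) via L / (1 + L)."""
    scaled = np.maximum(hdr * exposure, 0.0)  # clamp negatives from noise
    return scaled / (1.0 + scaled)

radiance = np.array([0.0, 1.0, 100.0])  # linear HDR luminances
ldr = reinhard_tonemap(radiance)        # monotone, bounded below 1.0
```

The key property is that arbitrarily large radiances map below 1.0 without hard clipping, which is why tone-mapping (rather than truncation) is used when deriving LDR supervision from HDR scenes.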
arXiv Detail & Related papers (2025-09-26T04:29:22Z) - High Dynamic Range Novel View Synthesis with Single Exposure [43.50001955428593]
High Dynamic Range Novel View Synthesis (HDR-NVS) aims to establish a 3D HDR scene model from Low Dynamic Range (LDR) imagery. For the first time, only single-exposure LDR images are available during training.
arXiv Detail & Related papers (2025-05-02T12:04:38Z) - S2R-HDR: A Large-Scale Rendered Dataset for HDR Fusion [4.684215759472536]
S2R-HDR is the first large-scale high-quality synthetic dataset for HDR fusion, with 24,000 HDR samples. We design a diverse set of realistic HDR scenes that encompass various dynamic elements, motion types, high dynamic range scenes, and lighting. We introduce S2R-Adapter, a domain adaptation method designed to bridge the gap between synthetic and real-world data.
arXiv Detail & Related papers (2025-04-10T11:39:56Z) - LEDiff: Latent Exposure Diffusion for HDR Generation [11.669442066168244]
LEDiff is a method that equips a generative model with HDR content generation through latent-space exposure fusion. It also functions as an LDR-to-HDR converter, expanding the dynamic range of existing low dynamic range images.
arXiv Detail & Related papers (2024-12-19T02:15:55Z) - HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting [76.5908492298286]
Existing HDR NVS methods are mainly based on NeRF.
They suffer from long training time and slow inference speed.
We propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS).
arXiv Detail & Related papers (2024-05-24T00:46:58Z) - Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in the HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z) - Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
SelfHDR is a self-supervised reconstruction method that only requires dynamic multi-exposure images during training.
SelfHDR achieves superior results against the state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z) - RawHDR: High Dynamic Range Image Reconstruction from a Single Raw Image [36.17182977927645]
High dynamic range (HDR) images capture many more intensity levels than standard ones.
Current methods predominantly generate HDR images from 8-bit low dynamic range (LDR) sRGB images that have been degraded by the camera processing pipeline.
Unlike existing methods, the core idea of this work is to incorporate more informative Raw sensor data to generate HDR images.
arXiv Detail & Related papers (2023-09-05T07:58:21Z) - Efficient HDR Reconstruction from Real-World Raw Images [16.54071503000866]
High-definition screens on edge devices stimulate a strong demand for efficient high dynamic range (HDR) algorithms.
Many existing HDR methods either deliver unsatisfactory results or consume too much computational and memory resources.
In this work, we discover an excellent opportunity to reconstruct HDR directly from raw images and investigate novel neural network structures.
arXiv Detail & Related papers (2023-06-17T10:10:15Z) - HDR-cGAN: Single LDR to HDR Image Translation using Conditional GAN [24.299931323012757]
Low Dynamic Range (LDR) cameras are incapable of representing the wide dynamic range of the real-world scene.
We propose a deep learning based approach to recover details in the saturated areas while reconstructing the HDR image.
We present a novel conditional GAN (cGAN) based framework trained in an end-to-end fashion over the HDR-REAL and HDR-SYNTH datasets.
arXiv Detail & Related papers (2021-10-04T18:50:35Z) - HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.