PanDORA: Casual HDR Radiance Acquisition for Indoor Scenes
- URL: http://arxiv.org/abs/2407.06150v2
- Date: Sat, 04 Oct 2025 00:52:10 GMT
- Title: PanDORA: Casual HDR Radiance Acquisition for Indoor Scenes
- Authors: Mohammad Reza Karimi Dastjerdi, Dominique Tanguay-Gaudreau, Frédéric Fortier-Chouinard, Yannick Hold-Geoffroy, Claude Demers, Nima Kalantari, Jean-François Lalonde
- Abstract summary: We introduce PanDORA: PANoramic Dual-Observer Radiance Acquisition, a system designed for the casual, high-quality capture of indoor environments. Our approach uses two 360° cameras mounted on a portable monopod to simultaneously record two panoramic 360° videos. The resulting video data is processed by a proposed two-stage NeRF-based algorithm, including an algorithm for the fine alignment of the fast- and well-exposed frames.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most novel view synthesis methods-including Neural Radiance Fields (NeRF)-struggle to capture the true high dynamic range (HDR) radiance of scenes. This is primarily due to their dependence on low dynamic range (LDR) images from conventional cameras. Exposure bracketing techniques aim to address this challenge, but they introduce a considerable time burden during the acquisition process. In this work, we introduce PanDORA: PANoramic Dual-Observer Radiance Acquisition, a system designed for the casual, high-quality HDR capture of indoor environments. Our approach uses two 360° cameras mounted on a portable monopod to simultaneously record two panoramic 360° videos: one with standard exposure and another at fast shutter speed. The resulting video data is processed by a proposed two-stage NeRF-based algorithm, including an algorithm for the fine alignment of the fast- and well-exposed frames, generating non-saturated HDR radiance maps. Compared to existing methods on a novel dataset of real indoor scenes captured with our apparatus and including HDR ground truth lighting, PanDORA achieves superior visual fidelity and provides a scalable solution for capturing real environments in HDR.
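The core idea of fusing a well-exposed and a fast-shutter capture into non-saturated HDR radiance can be illustrated with a classic weighted exposure merge. The sketch below is not PanDORA's NeRF-based algorithm; it is a minimal single-pixel-wise fusion, assuming both frames are already aligned and linearized, with known exposure times:

```python
import numpy as np

def merge_exposures(ldr_well, ldr_fast, t_well, t_fast, eps=1e-6):
    """Fuse a well-exposed and a fast-shutter LDR frame (values in [0, 1],
    assumed linearized and aligned) into an HDR radiance estimate.

    Each pixel's radiance estimate is ldr / t for its exposure time t;
    a triangle weight down-weights clipped (near 0 or 1) pixels.
    """
    def weight(x):
        # Triangle weighting: trust mid-tones, distrust shadows/highlights.
        return np.clip(1.0 - np.abs(2.0 * x - 1.0), 0.0, 1.0)

    w1, w2 = weight(ldr_well), weight(ldr_fast)
    # Saturated pixels in the well-exposed frame defer to the fast frame.
    w1 = np.where(ldr_well >= 1.0 - eps, 0.0, w1)
    num = w1 * (ldr_well / t_well) + w2 * (ldr_fast / t_fast)
    den = w1 + w2
    # Where both weights vanish, fall back to the fast-frame radiance.
    return np.where(den > eps, num / np.maximum(den, eps), ldr_fast / t_fast)
```

In a bright region the well-exposed frame clips to 1.0 and contributes zero weight, so the fast-shutter frame alone supplies the radiance, which is exactly the role the second camera plays in the dual-observer setup.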
Related papers
- CasualHDRSplat: Robust High Dynamic Range 3D Gaussian Splatting from Casually Captured Videos [15.52886867095313]
Photo-realistic novel view rendering from multi-view images, such as neural radiance fields (NeRF) and 3D Gaussian Splatting (3DGS), has garnered widespread attention due to its superior performance.
CasualHDRSplat contains a unified differentiable physical imaging model which applies a continuous-time trajectory constraint to the imaging process.
Experiments demonstrate that our approach outperforms existing methods in terms of robustness and quality.
arXiv Detail & Related papers (2025-04-24T16:42:37Z) - Exposure Completing for Temporally Consistent Neural High Dynamic Range Video Rendering [17.430726543786943]
We propose a novel paradigm to render HDR frames via completing the absent exposure information.
Our approach involves interpolating neighbor LDR frames in the time dimension to reconstruct LDR frames for the absent exposures.
This benefits the fusing process for HDR results, reducing noise and ghosting artifacts therefore improving temporal consistency.
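The exposure-completion idea above can be sketched in a few lines. The function below is a deliberately simple stand-in for the paper's learned interpolation network, assuming an alternating-exposure video where the nearest frames captured at the absent exposure bracket time t:

```python
import numpy as np

def complete_absent_exposure(prev_same_exp, next_same_exp, t, t_prev, t_next):
    """Synthesize the LDR frame for an exposure that was not captured at
    time t by interpolating the nearest frames that *were* captured at
    that exposure (at times t_prev < t < t_next).

    A linear blend only illustrates the data flow (neighboring
    same-exposure frames in, absent-exposure frame out); the actual
    method uses learned, motion-aware interpolation.
    """
    alpha = (t - t_prev) / (t_next - t_prev)  # temporal position in [0, 1]
    return (1.0 - alpha) * prev_same_exp + alpha * next_same_exp
```

Once every time step has a full set of exposures, standard multi-exposure fusion can produce the HDR frame, which is what reduces the ghosting the summary mentions.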
arXiv Detail & Related papers (2024-07-18T09:13:08Z) - Cinematic Gaussians: Real-Time HDR Radiance Fields with Depth of Field [23.92087253022495]
Radiance field methods represent the state of the art in reconstructing complex scenes from multi-view photos.
Their reliance on a pinhole camera model, assuming all scene elements are in focus in the input images, presents practical challenges and complicates refocusing during novel-view synthesis.
We present a lightweight analytical model based on 3D Gaussian Splatting that utilizes multi-view LDR images with varying exposure times, apertures, and focus distances as input to reconstruct a high-dynamic-range scene.
arXiv Detail & Related papers (2024-06-11T15:00:24Z) - HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting [76.5908492298286]
Existing HDR NVS methods are mainly based on NeRF.
They suffer from long training time and slow inference speed.
We propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS).
arXiv Detail & Related papers (2024-05-24T00:46:58Z) - Event-based Asynchronous HDR Imaging by Temporal Incident Light Modulation [54.64335350932855]
We propose a Pixel-Asynchronous HDR imaging system, based on key insights into the challenges in HDR imaging.
Our proposed Asyn system integrates the Dynamic Vision Sensors (DVS) with a set of LCD panels.
The LCD panels modulate the irradiance incident upon the DVS by altering their transparency, thereby triggering the pixel-independent event streams.
arXiv Detail & Related papers (2024-03-14T13:45:09Z) - Fast High Dynamic Range Radiance Fields for Dynamic Scenes [39.3304365600248]
We propose a dynamic HDR NeRF framework, named HDR-HexPlane, which can learn 3D scenes from dynamic 2D images captured with various exposures.
With the proposed model, high-quality novel-view images at any time point can be rendered with any desired exposure.
arXiv Detail & Related papers (2024-01-11T17:15:16Z) - Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry from Sparse Low Dynamic Range Panoramic Images [82.1477261107279]
We propose the irradiance fields from sparse LDR panoramic images to increase the observation counts for faithful geometry recovery.
Experiments demonstrate that the irradiance fields outperform state-of-the-art methods on both geometry recovery and HDR reconstruction.
arXiv Detail & Related papers (2023-12-26T08:10:22Z) - GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
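GlowGAN's key mechanism, projecting a candidate HDR image to LDR under a sampled exposure before it reaches the discriminator, can be sketched as follows. The gamma value and clipping model here are illustrative assumptions, not the paper's exact camera model:

```python
import numpy as np

def project_to_ldr(hdr, exposure, gamma=2.2):
    """Project an HDR image (linear radiance, >= 0) to LDR under a given
    exposure: scale, clip to [0, 1], then gamma-encode.

    The unsupervised training signal comes from requiring that generated
    HDR images, pushed through this projection at varied exposures, look
    indistinguishable from real LDR photographs.
    """
    scaled = np.clip(hdr * exposure, 0.0, 1.0)
    return scaled ** (1.0 / gamma)
```

Because clipping discards information, a single LDR never constrains the full range; it is the family of projections across many exposures that pins down the underlying HDR distribution.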
arXiv Detail & Related papers (2022-11-22T15:42:08Z) - Casual Indoor HDR Radiance Capture from Omnidirectional Images [6.558757117312684]
We present PanoHDR-NeRF, a novel pipeline to casually capture a plausible full HDR radiance field of a large indoor scene.
The resulting PanoHDR-NeRF pipeline can estimate full HDR panoramas from any location of the scene.
arXiv Detail & Related papers (2022-08-16T18:45:27Z) - HDR-Plenoxels: Self-Calibrating High Dynamic Range Radiance Fields [15.32264927462068]
We learn a plenoptic function of 3D HDR radiance fields, geometry information, and varying camera settings inherent in 2D low dynamic range (LDR) images.
Our voxel-based volume rendering pipeline reconstructs HDR radiance fields with only multi-view LDR images.
Our experiments show that HDR-Plenoxels can express detail and high-quality HDR novel views from only LDR images with various cameras.
arXiv Detail & Related papers (2022-08-14T06:12:22Z) - StyleLight: HDR Panorama Generation for Lighting Estimation and Editing [98.20167223076756]
We present a new lighting estimation and editing framework to generate high-dynamic-range (HDR) indoor panorama lighting from a single limited field-of-view (LFOV) image.
Our framework achieves superior performance over state-of-the-art methods on indoor lighting estimation.
arXiv Detail & Related papers (2022-07-29T17:58:58Z) - Self-supervised HDR Imaging from Motion and Exposure Cues [14.57046548797279]
We propose a novel self-supervised approach for learnable HDR estimation that alleviates the need for HDR ground-truth labels.
Experimental results show that the HDR models trained using our proposed self-supervision approach achieve performance competitive with those trained under full supervision.
arXiv Detail & Related papers (2022-03-23T10:22:03Z) - Multi-Bracket High Dynamic Range Imaging with Event Cameras [46.81570594990517]
We propose the first multi-bracket HDR pipeline combining a standard camera with an event camera.
Our results show better overall robustness when using events, with improvements in PSNR by up to 5dB on synthetic data and up to 0.7dB on real-world data.
arXiv Detail & Related papers (2022-03-13T11:10:47Z) - Neural Radiance Fields for Outdoor Scene Relighting [70.97747511934705]
We present NeRF-OSR, the first approach for outdoor scene relighting based on neural radiance fields.
In contrast to the prior art, our technique allows simultaneous editing of both scene illumination and camera viewpoint.
It also includes a dedicated network for shadow reproduction, which is crucial for high-quality outdoor scene relighting.
arXiv Detail & Related papers (2021-12-09T18:59:56Z) - HDR-NeRF: High Dynamic Range Neural Radiance Fields [70.80920996881113]
We present High Dynamic Range Neural Radiance Fields (HDR-NeRF) to recover an HDR radiance field from a set of low dynamic range (LDR) views with different exposures.
We are able to generate both novel HDR views and novel LDR views under different exposures.
arXiv Detail & Related papers (2021-11-29T11:06:39Z) - Luminance Attentive Networks for HDR Image and Panorama Reconstruction [37.364335148790005]
It is difficult to reconstruct a high dynamic range (HDR) image from a single low dynamic range (LDR) image, as this is an ill-posed problem.
This paper proposes a luminance attentive network named LANet for HDR reconstruction from a single LDR image.
arXiv Detail & Related papers (2021-09-14T13:44:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.