HDR-Plenoxels: Self-Calibrating High Dynamic Range Radiance Fields
- URL: http://arxiv.org/abs/2208.06787v1
- Date: Sun, 14 Aug 2022 06:12:22 GMT
- Title: HDR-Plenoxels: Self-Calibrating High Dynamic Range Radiance Fields
- Authors: Kim Jun-Seong, Kim Yu-Ji, Moon Ye-Bin, Tae-Hyun Oh
- Abstract summary: We learn a plenoptic function of 3D HDR radiance fields, geometry information, and varying camera settings inherent in 2D low dynamic range (LDR) images.
Our voxel-based volume rendering pipeline reconstructs HDR radiance fields with only multi-view LDR images.
Our experiments show that HDR-Plenoxels can express detail and high-quality HDR novel views from only LDR images with various cameras.
- Score: 15.32264927462068
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose high dynamic range (HDR) radiance fields, HDR-Plenoxels, that
learn a plenoptic function of 3D HDR radiance fields, geometry information, and
varying camera settings inherent in 2D low dynamic range (LDR) images. Our
voxel-based volume rendering pipeline reconstructs HDR radiance fields with
only multi-view LDR images taken from varying camera settings in an end-to-end
manner and has a fast convergence speed. To deal with various cameras in
real-world scenarios, we introduce a tone mapping module that models the
digital in-camera imaging pipeline (ISP) and disentangles radiometric settings.
Our tone mapping module allows us to render by controlling the radiometric
settings of each novel view. Finally, we build a multi-view dataset with
varying camera conditions, which fits our problem setting. Our experiments show
that HDR-Plenoxels can express detailed and high-quality HDR novel views from
only LDR images captured with various cameras.
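The per-view radiometric model described in the abstract can be pictured as a small differentiable tone-mapping module applied to the rendered HDR radiance before the photometric loss against each LDR training image. The sketch below is a hypothetical illustration under simple assumptions (per-view white-balance gains followed by a monotone piecewise-linear camera response curve); the class name ToneMapper, the knot count, and the interpolation scheme are our own choices, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToneMapper(nn.Module):
    """Per-view white balance + monotone piecewise-linear CRF (illustrative only)."""

    def __init__(self, num_views: int, num_knots: int = 16):
        super().__init__()
        # One set of radiometric parameters per training camera/view.
        self.log_wb = nn.Parameter(torch.zeros(num_views, 3))              # log RGB white-balance gains
        self.crf_deltas = nn.Parameter(torch.zeros(num_views, num_knots))  # CRF increments

    def forward(self, hdr_rgb: torch.Tensor, view_idx: torch.Tensor) -> torch.Tensor:
        # hdr_rgb: (N, 3) linear radiance from volume rendering; view_idx: (N,) camera indices.
        x = hdr_rgb * torch.exp(self.log_wb[view_idx])         # apply per-view white balance
        steps = F.softplus(self.crf_deltas[view_idx])          # positive increments, (N, K)
        crf = torch.cumsum(steps, dim=-1)
        crf = crf / crf[:, -1:].clamp(min=1e-8)                # monotone response curve ending at 1
        t = x.clamp(0.0, 1.0) * (crf.shape[-1] - 1)            # fractional knot positions, (N, 3)
        lo = t.floor().long().clamp(max=crf.shape[-1] - 2)
        frac = t - lo.float()
        # Evaluate the curve by linear interpolation between neighboring knots.
        ldr = torch.gather(crf, 1, lo) * (1 - frac) + torch.gather(crf, 1, lo + 1) * frac
        return ldr                                             # LDR color in [0, 1], (N, 3)
```

Because the gains and response curve are indexed per view, rendering a novel view under different radiometric settings amounts to selecting or editing a different row of these parameters, which is one way to realize the controllable rendering the abstract describes.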
Related papers
- Cinematic Gaussians: Real-Time HDR Radiance Fields with Depth of Field [23.92087253022495]
Radiance field methods represent the state of the art in reconstructing complex scenes from multi-view photos.
Their reliance on a pinhole camera model, assuming all scene elements are in focus in the input images, presents practical challenges and complicates refocusing during novel-view synthesis.
We present a lightweight analytical approach based on 3D Gaussian Splatting that takes multi-view LDR images with varying exposure times, aperture radii, and focus distances as input to reconstruct a high-dynamic-range scene.
arXiv Detail & Related papers (2024-06-11T15:00:24Z) - HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting [76.5908492298286]
Existing HDR NVS methods are mainly based on NeRF.
They suffer from long training time and slow inference speed.
We propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS).
arXiv Detail & Related papers (2024-05-24T00:46:58Z) - Event-based Asynchronous HDR Imaging by Temporal Incident Light Modulation [54.64335350932855]
We propose a Pixel-Asynchronous HDR imaging system, based on key insights into the challenges in HDR imaging.
Our proposed asynchronous HDR system integrates Dynamic Vision Sensors (DVS) with a set of LCD panels.
The LCD panels modulate the irradiance incident upon the DVS by altering their transparency, thereby triggering the pixel-independent event streams.
arXiv Detail & Related papers (2024-03-14T13:45:09Z) - Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry
from Sparse Low Dynamic Range Panoramic Images [82.1477261107279]
We propose irradiance fields estimated from sparse LDR panoramic images to increase the observation count for faithful geometry recovery.
Experiments demonstrate that the irradiance fields outperform state-of-the-art methods on both geometry recovery and HDR reconstruction.
arXiv Detail & Related papers (2023-12-26T08:10:22Z) - GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images (a sketch of such an exposure projection appears after this list).
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
arXiv Detail & Related papers (2022-11-22T15:42:08Z) - Casual Indoor HDR Radiance Capture from Omnidirectional Images [6.558757117312684]
We present PanoHDR-NeRF, a novel pipeline to casually capture a plausible full HDR radiance field of a large indoor scene.
The resulting PanoHDR-NeRF pipeline can estimate full HDR panoramas from any location of the scene.
arXiv Detail & Related papers (2022-08-16T18:45:27Z) - StyleLight: HDR Panorama Generation for Lighting Estimation and Editing [98.20167223076756]
We present a new lighting estimation and editing framework to generate high-dynamic-range (HDR) indoor panorama lighting from a single limited field-of-view (LFOV) image.
Our framework achieves superior performance over state-of-the-art methods on indoor lighting estimation.
arXiv Detail & Related papers (2022-07-29T17:58:58Z) - Self-supervised HDR Imaging from Motion and Exposure Cues [14.57046548797279]
We propose a novel self-supervised approach for learnable HDR estimation that alleviates the need for HDR ground-truth labels.
Experimental results show that the HDR models trained using our proposed self-supervision approach achieve performance competitive with those trained under full supervision.
arXiv Detail & Related papers (2022-03-23T10:22:03Z) - HDR-NeRF: High Dynamic Range Neural Radiance Fields [70.80920996881113]
We present High Dynamic Range Neural Radiance Fields (HDR-NeRF) to recover an HDR radiance field from a set of low dynamic range (LDR) views with different exposures.
We are able to generate both novel HDR views and novel LDR views under different exposures.
arXiv Detail & Related papers (2021-11-29T11:06:39Z) - Luminance Attentive Networks for HDR Image and Panorama Reconstruction [37.364335148790005]
Reconstructing a high dynamic range (HDR) image from a low dynamic range (LDR) image is an ill-posed problem.
This paper proposes a luminance attentive network named LANet for HDR reconstruction from a single LDR image.
arXiv Detail & Related papers (2021-09-14T13:44:34Z)
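Following up on the GlowGAN entry above: the exposure projection it relies on can be sketched as a differentiable clip-and-gamma camera model applied to a generated HDR image before it reaches the discriminator. The camera model, the log-uniform exposure range, and the function name below are illustrative assumptions, not GlowGAN's actual camera pipeline.

```python
import torch

def project_hdr_to_ldr(hdr: torch.Tensor,
                       log_exposure_range=(-4.0, 4.0),
                       gamma: float = 2.2) -> torch.Tensor:
    """hdr: (B, 3, H, W) linear-radiance images; returns LDR images in [0, 1]."""
    b = hdr.shape[0]
    # Draw one exposure per image, log-uniform over the given range.
    log_exp = torch.empty(b, 1, 1, 1, device=hdr.device).uniform_(*log_exposure_range)
    exposed = hdr * torch.exp(log_exp)    # scale scene radiance by the sampled exposure
    clipped = exposed.clamp(0.0, 1.0)     # simulate sensor saturation / clipping
    return clipped.pow(1.0 / gamma)       # simple gamma tone curve

# In adversarial training, the discriminator would compare
# project_hdr_to_ldr(generator(z)) against real in-the-wild LDR photographs.
```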