Cinematic Gaussians: Real-Time HDR Radiance Fields with Depth of Field
- URL: http://arxiv.org/abs/2406.07329v3
- Date: Fri, 23 Aug 2024 22:56:57 GMT
- Title: Cinematic Gaussians: Real-Time HDR Radiance Fields with Depth of Field
- Authors: Chao Wang, Krzysztof Wolski, Bernhard Kerbl, Ana Serrano, Mojtaba Bemana, Hans-Peter Seidel, Karol Myszkowski, Thomas Leimkühler
- Abstract summary: Radiance field methods represent the state of the art in reconstructing complex scenes from multi-view photos.
Their reliance on a pinhole camera model, assuming all scene elements are in focus in the input images, presents practical challenges and complicates refocusing during novel-view synthesis.
We present a lightweight method based on 3D Gaussian Splatting that takes multi-view LDR images with varying exposure times, apertures, and focus distances as input to reconstruct a high-dynamic-range scene.
- Score: 23.92087253022495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Radiance field methods represent the state of the art in reconstructing complex scenes from multi-view photos. However, these reconstructions often suffer from one or both of the following limitations: First, they typically represent scenes in low dynamic range (LDR), which restricts their use to evenly lit environments and hinders immersive viewing experiences. Secondly, their reliance on a pinhole camera model, assuming all scene elements are in focus in the input images, presents practical challenges and complicates refocusing during novel-view synthesis. Addressing these limitations, we present a lightweight method based on 3D Gaussian Splatting that utilizes multi-view LDR images of a scene with varying exposure times, apertures, and focus distances as input to reconstruct a high-dynamic-range (HDR) radiance field. By incorporating analytical convolutions of Gaussians based on a thin-lens camera model as well as a tonemapping module, our reconstructions enable the rendering of HDR content with flexible refocusing capabilities. We demonstrate that our combined treatment of HDR and depth of field facilitates real-time cinematic rendering, outperforming the state of the art.
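The analytical-convolution idea can be sketched compactly. Under a thin-lens model, a point at depth d, with the lens focused at distance s_f, produces a circle of confusion on the sensor; approximating that disc by an isotropic Gaussian lets depth of field be applied analytically, because convolving two Gaussians simply adds their covariances. A minimal NumPy sketch of this idea (not the authors' code; the function names and the sigma-equals-half-diameter approximation are illustrative):

```python
import numpy as np

def coc_sigma(depth, focus_dist, focal_len, aperture, px_per_mm):
    """Thin-lens circle of confusion as a Gaussian sigma, in pixels.

    depth, focus_dist, focal_len, and aperture share one unit (e.g. mm);
    aperture is the lens diameter; px_per_mm converts sensor mm to pixels.
    """
    # Standard thin-lens CoC diameter on the sensor plane.
    coc = aperture * (focal_len / (focus_dist - focal_len)) \
          * np.abs(depth - focus_dist) / depth
    # Approximate the CoC disc by a Gaussian with sigma ~ half the diameter.
    return 0.5 * coc * px_per_mm

def defocus_cov(cov2d, depth, **lens):
    """Blur a splatted 2D Gaussian analytically: covariances add under
    convolution, so depth of field costs one 2x2 addition per splat."""
    sigma = coc_sigma(depth, **lens)
    return cov2d + sigma**2 * np.eye(2)
```

Because aperture and focus distance enter only through sigma, refocusing at render time amounts to recomputing one scalar per splat rather than performing an image-space blur, which is what makes real-time depth of field plausible.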
Related papers
- Exposure Completing for Temporally Consistent Neural High Dynamic Range Video Rendering [17.430726543786943]
We propose a novel paradigm to render HDR frames via completing the absent exposure information.
Our approach involves interpolating neighbor LDR frames in the time dimension to reconstruct LDR frames for the absent exposures.
This benefits the fusion process for HDR results, reducing noise and ghosting artifacts and thereby improving temporal consistency.
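The fusion step being improved here is, at heart, the classic weighted merge of linearized exposures; a minimal sketch under that assumption (the function and weighting choices are illustrative, not the paper's):

```python
import numpy as np

def fuse_exposures(ldr_frames, exposure_times, eps=1e-6):
    """Merge linearized LDR frames (values in [0, 1]) into an HDR estimate."""
    num = np.zeros_like(ldr_frames[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(ldr_frames, exposure_times):
        # Hat weighting: trust mid-tones, down-weight clipped pixels.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t  # each frame votes for radiance = value / time
        den += w
    return num / (den + eps)
```

Interpolating the absent exposures before this merge gives every pixel at least one well-exposed vote, which is why noise and ghosting drop.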
arXiv Detail & Related papers (2024-07-18T09:13:08Z)
- Radiance Fields from Photons [18.15183252935672]
We introduce quanta radiance fields, a class of neural radiance fields that are trained at the granularity of individual photons using single-photon cameras (SPCs).
We demonstrate, both via simulations and a prototype SPC hardware, high-fidelity reconstructions under high-speed motion, in low light, and for extreme dynamic range settings.
arXiv Detail & Related papers (2024-07-12T16:06:51Z)
- IRIS: Inverse Rendering of Indoor Scenes from Low Dynamic Range Images [32.83096814910201]
We present a method that recovers the physically based material properties and lighting of a scene from multi-view, low-dynamic-range (LDR) images.
Our method outperforms existing methods taking LDR images as input, and allows for highly realistic relighting and object insertion.
arXiv Detail & Related papers (2024-01-23T18:59:56Z)
- Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry from Sparse Low Dynamic Range Panoramic Images [82.1477261107279]
We propose irradiance fields estimated from sparse LDR panoramic images to increase the observation count for faithful geometry recovery.
Experiments demonstrate that the irradiance fields outperform state-of-the-art methods on both geometry recovery and HDR reconstruction.
arXiv Detail & Related papers (2023-12-26T08:10:22Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Progressively Optimized Local Radiance Fields for Robust View Synthesis [76.55036080270347]
We present an algorithm for reconstructing the radiance field of a large-scale scene from a single casually captured video.
For handling unknown poses, we jointly estimate the camera poses and the radiance field in a progressive manner.
For handling large unbounded scenes, we dynamically allocate new local radiance fields trained with frames within a temporal window.
arXiv Detail & Related papers (2023-03-24T04:03:55Z)
- GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
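The unsupervised trick is the projection operator: a candidate HDR image is exposed, clipped, and gamma-encoded before the discriminator sees it, so only LDR photos are ever needed as training data. A minimal PyTorch sketch of such a projection (the exposure range and gamma are illustrative assumptions, not values from the paper):

```python
import torch

def project_to_ldr(hdr, log_exposure_range=(-4.0, 2.0), gamma=2.2):
    """Expose, clip, and gamma-encode linear HDR radiance of shape (B, C, H, W)."""
    b = hdr.shape[0]
    # One random exposure per image; exp() keeps it positive.
    log_e = torch.empty(b, 1, 1, 1, device=hdr.device).uniform_(*log_exposure_range)
    exposed = hdr * log_e.exp()
    # Clamping simulates sensor saturation; gamma mimics the camera curve.
    return exposed.clamp(0.0, 1.0) ** (1.0 / gamma)
```

A generated HDR image is plausible exactly when its random projections are indistinguishable from real LDR photographs.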
arXiv Detail & Related papers (2022-11-22T15:42:08Z)
- Self-supervised HDR Imaging from Motion and Exposure Cues [14.57046548797279]
We propose a novel self-supervised approach for learnable HDR estimation that alleviates the need for HDR ground-truth labels.
Experimental results show that the HDR models trained using our proposed self-supervision approach achieve performance competitive with those trained under full supervision.
arXiv Detail & Related papers (2022-03-23T10:22:03Z)
- HDR-NeRF: High Dynamic Range Neural Radiance Fields [70.80920996881113]
We present High Dynamic Range Neural Radiance Fields (HDR-NeRF) to recover an HDR radiance field from a set of low dynamic range (LDR) views with different exposures.
We are able to generate both novel HDR views and novel LDR views under different exposures.
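The mapping from HDR radiance plus exposure to an LDR pixel is a learned camera response; a tiny sketch in that spirit (the layer sizes and log-domain parameterization are illustrative, not a reproduction of the paper's model):

```python
import torch
import torch.nn as nn

class ToneMapper(nn.Module):
    """Learned camera response: log radiance + log exposure -> LDR in [0, 1]."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, radiance, exposure_time):
        # Exposure acts additively in the log domain: ln(E * t) = ln E + ln t.
        x = torch.log(radiance.clamp_min(1e-6)) + torch.log(exposure_time)
        return self.mlp(x.unsqueeze(-1)).squeeze(-1)
```

Supervising this LDR output against the captured views keeps the underlying radiance field in HDR; rendering with a different exposure then yields novel LDR views directly.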
arXiv Detail & Related papers (2021-11-29T11:06:39Z)
- HDR-cGAN: Single LDR to HDR Image Translation using Conditional GAN [24.299931323012757]
Low Dynamic Range (LDR) cameras are incapable of representing the wide dynamic range of real-world scenes.
We propose a deep learning based approach to recover details in the saturated areas while reconstructing the HDR image.
We present a novel conditional GAN (cGAN) based framework trained in an end-to-end fashion over the HDR-REAL and HDR-SYNTH datasets.
arXiv Detail & Related papers (2021-10-04T18:50:35Z)
- HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z)