Radiance Fields from Photons
- URL: http://arxiv.org/abs/2407.09386v1
- Date: Fri, 12 Jul 2024 16:06:51 GMT
- Title: Radiance Fields from Photons
- Authors: Sacha Jungerman, Mohit Gupta
- Abstract summary: We introduce quanta radiance fields, a class of neural radiance fields that are trained at the granularity of individual photons using single-photon cameras (SPCs).
We demonstrate, both via simulations and a prototype SPC hardware, high-fidelity reconstructions under high-speed motion, in low light, and for extreme dynamic range settings.
- Score: 18.15183252935672
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural radiance fields, or NeRFs, have become the de facto approach for high-quality view synthesis from a collection of images captured from multiple viewpoints. However, many issues remain when capturing images in the wild under challenging conditions such as low light, high dynamic range, or rapid motion, leading to smeared reconstructions with noticeable artifacts. In this work, we introduce quanta radiance fields, a novel class of neural radiance fields that are trained at the granularity of individual photons using single-photon cameras (SPCs). We develop theory and practical computational techniques for building radiance fields and estimating dense camera poses from unconventional, stochastic, and high-speed binary frame sequences captured by SPCs. We demonstrate, both via simulations and an SPC hardware prototype, high-fidelity reconstructions under high-speed motion, in low light, and for extreme dynamic range settings.
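For context, an SPC binary frame follows a simple Bernoulli detection model: each pixel fires with probability 1 - exp(-η·φ·τ), where φ is the incident photon flux, η the quantum efficiency, and τ the per-frame exposure time. A minimal sketch of this forward model and the standard maximum-likelihood flux inversion (function names and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def simulate_binary_frames(flux, n_frames, eta=0.4, tau=1e-4, seed=None):
    """Simulate SPC binary frames: per frame, a pixel fires (1) with
    probability 1 - exp(-eta * flux * tau) (Bernoulli photon detection)."""
    rng = np.random.default_rng(seed)
    p = 1.0 - np.exp(-eta * flux * tau)
    return (rng.random((n_frames,) + flux.shape) < p).astype(np.uint8)

def mle_flux(frames, eta=0.4, tau=1e-4):
    """Maximum-likelihood flux from the per-pixel firing rate:
    flux_hat = -ln(1 - rate) / (eta * tau)."""
    rate = np.clip(frames.mean(axis=0), 0.0, 1.0 - 1e-6)  # avoid log(0)
    return -np.log(1.0 - rate) / (eta * tau)

flux = np.array([[500.0, 5000.0], [50000.0, 200000.0]])  # photons/second
frames = simulate_binary_frames(flux, n_frames=20000, seed=0)
est = mle_flux(frames)  # approaches `flux` as n_frames grows
```

As the frame count grows the estimate converges to the true flux; the paper instead trains the radiance field directly on the raw binary stream rather than pre-merging frames this way.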
Related papers
- BRDF-NeRF: Neural Radiance Fields with Optical Satellite Images and BRDF Modelling [0.0]
We introduce BRDF-NeRF, which incorporates the physically-based semi-empirical Rahman-Pinty-Verstraete (RPV) BRDF model.
BRDF-NeRF successfully synthesizes novel views from unseen angles and generates high-quality digital surface models.
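For reference, the RPV model combines a modified-Minnaert term, a Henyey-Greenstein phase function, and a hot-spot factor. A sketch of one common parameterization (the parameter values are placeholders, and details may differ from the exact variant used in BRDF-NeRF):

```python
import numpy as np

def rpv_brdf(theta_s, theta_v, phi, rho0=0.1, k=0.7, theta_hg=-0.2):
    """One common form of the Rahman-Pinty-Verstraete (RPV) BRDF.
    theta_s/theta_v: sun/view zenith angles (rad), phi: relative azimuth.
    rho0, k, theta_hg are illustrative placeholder parameters."""
    mu_s, mu_v = np.cos(theta_s), np.cos(theta_v)
    # Modified-Minnaert term: bowl (k < 1) vs. bell (k > 1) angular shape
    minnaert = (mu_s * mu_v * (mu_s + mu_v)) ** (k - 1.0)
    # Phase angle between illumination and viewing directions
    cos_g = mu_s * mu_v + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi)
    # Henyey-Greenstein phase function (theta_hg < 0: backscattering)
    f_hg = (1 - theta_hg**2) / (1 + 2 * theta_hg * cos_g + theta_hg**2) ** 1.5
    # Hot-spot factor boosting reflectance near exact backscatter
    g_geo = np.sqrt(np.tan(theta_s)**2 + np.tan(theta_v)**2
                    - 2 * np.tan(theta_s) * np.tan(theta_v) * np.cos(phi))
    return rho0 * minnaert * f_hg * (1 + (1 - rho0) / (1 + g_geo))
```

Like a physical BRDF should be, this form is reciprocal: swapping the sun and view zenith angles leaves the value unchanged.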
arXiv Detail & Related papers (2024-09-18T14:28:52Z) - Cinematic Gaussians: Real-Time HDR Radiance Fields with Depth of Field [23.92087253022495]
Radiance field methods represent the state of the art in reconstructing complex scenes from multi-view photos.
Their reliance on a pinhole camera model, assuming all scene elements are in focus in the input images, presents practical challenges and complicates refocusing during novel-view synthesis.
We present a lightweight method based on 3D Gaussian Splatting that takes multi-view LDR images captured with varying exposure times, apertures, and focus distances as input to reconstruct a high-dynamic-range scene.
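The varying-aperture, varying-focus captures described above are governed by the thin-lens circle of confusion; a minimal sketch (standard optics, not code from the paper):

```python
def circle_of_confusion(depth, focus_dist, focal_len, aperture_diam):
    """Thin-lens blur-disk diameter on the sensor for a point at `depth`
    when the lens is focused at `focus_dist` (all lengths in meters)."""
    m = focal_len / (focus_dist - focal_len)   # magnification of focus plane
    return aperture_diam * m * abs(depth - focus_dist) / depth

# A 50 mm lens at f/2 (25 mm aperture) focused at 2 m:
blur_far = circle_of_confusion(10.0, 2.0, 0.05, 0.025)   # background point
blur_focus = circle_of_confusion(2.0, 2.0, 0.05, 0.025)  # exactly in focus
```

Points at the focus distance map to zero blur; everything nearer or farther blurs in proportion to the aperture diameter, which is what varying the aperture across captures exposes to the reconstruction.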
arXiv Detail & Related papers (2024-06-11T15:00:24Z) - Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry from Sparse Low Dynamic Range Panoramic Images [82.1477261107279]
We propose irradiance fields from sparse LDR panoramic images, which increase the observation counts for faithful geometry recovery.
Experiments demonstrate that the irradiance fields outperform state-of-the-art methods on both geometry recovery and HDR reconstruction.
arXiv Detail & Related papers (2023-12-26T08:10:22Z) - LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
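The low-/high-frequency split described above can be illustrated with a simple blur-based decomposition (a generic stand-in, not LDM-ISP's actual latent-space machinery):

```python
import numpy as np

def box_blur(img, size=5):
    """Simple box filter used as a low-pass stand-in."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def decompose(img, size=5):
    """Split an image into low-frequency content and a high-frequency
    residual, so that img == low + high exactly."""
    low = box_blur(img, size)
    return low, img - low

img = np.arange(64, dtype=float).reshape(8, 8)  # toy image
low, high = decompose(img)
```

The decomposition is exact by construction, so the two components can be processed by different stages (generation vs. detail preservation) and recombined without loss.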
arXiv Detail & Related papers (2023-12-02T04:31:51Z) - Panoramas from Photons [22.437940699523082]
We present a method capable of estimating extreme scene motion under challenging conditions, such as low light or high dynamic range.
Our method relies on grouping and aggregating frames after-the-fact, in a stratified manner.
We demonstrate the creation of high-quality panoramas under fast motion and extremely low light, and super-resolution results using a custom single-photon camera prototype.
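The grouping-and-aggregating idea can be sketched as averaging binary frames within strata of increasing size, trading motion blur against SNR (the group sizes here are arbitrary, and the real stratification also involves alignment):

```python
import numpy as np

def stratified_aggregate(frames, group_sizes=(10, 100, 1000)):
    """Average a binary frame sequence at several temporal scales:
    small groups stay sharp but noisy; large groups are clean but blurry."""
    levels = {}
    for g in group_sizes:
        n = (len(frames) // g) * g                  # drop ragged tail
        grouped = frames[:n].reshape(-1, g, *frames.shape[1:])
        levels[g] = grouped.mean(axis=1)            # one image per group
    return levels

rng = np.random.default_rng(0)
frames = (rng.random((1000, 4, 4)) < 0.3).astype(np.uint8)  # toy SPC stream
levels = stratified_aggregate(frames)
```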
arXiv Detail & Related papers (2023-09-07T16:07:31Z) - TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
TensoRF is a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z) - Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
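Tracing secondary rays against an explicit mesh boils down to ray/triangle queries; a sketch of a shadow test using the standard Möller-Trumbore intersection (a generic illustration, not the paper's implementation):

```python
import numpy as np

def ray_hits_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection test, the kind of
    query used to trace secondary (shadow) rays against a mesh."""
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(direc, e2)
    det = np.dot(e1, pvec)
    if abs(det) < eps:
        return False                       # ray parallel to triangle
    inv = 1.0 / det
    tvec = orig - v0
    u = np.dot(tvec, pvec) * inv
    if u < 0.0 or u > 1.0:
        return False                       # outside first barycentric bound
    qvec = np.cross(tvec, e1)
    v = np.dot(direc, qvec) * inv
    if v < 0.0 or u + v > 1.0:
        return False                       # outside the triangle
    return np.dot(e2, qvec) * inv > eps    # hit strictly in front of origin

def in_shadow(point, light, triangles):
    """A surface point is shadowed if any triangle blocks the light ray."""
    d = light - point
    return any(ray_hits_triangle(point, d, *tri) for tri in triangles)
```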
arXiv Detail & Related papers (2023-04-06T17:51:54Z) - Progressively Optimized Local Radiance Fields for Robust View Synthesis [76.55036080270347]
We present an algorithm for reconstructing the radiance field of a large-scale scene from a single casually captured video.
For handling unknown poses, we jointly estimate the camera poses and the radiance field in a progressive manner.
For handling large unbounded scenes, we dynamically allocate new local radiance fields trained with frames within a temporal window.
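The dynamic-allocation idea can be sketched as spawning a new local field whenever the camera strays beyond the current field's extent (the trigger radius and bookkeeping here are invented placeholders):

```python
import numpy as np

class LocalFieldAllocator:
    """Spawn a new local radiance field whenever the camera moves more
    than `radius` from the center of the current field (a toy policy)."""
    def __init__(self, radius=0.95):
        self.radius = radius
        self.fields = []                    # list of (center, frame indices)

    def add_frame(self, idx, cam_pos):
        cam_pos = np.asarray(cam_pos, dtype=float)
        if (not self.fields
                or np.linalg.norm(cam_pos - self.fields[-1][0]) > self.radius):
            self.fields.append((cam_pos, []))   # allocate a new local field
        self.fields[-1][1].append(idx)

alloc = LocalFieldAllocator(radius=0.95)
for i, x in enumerate(np.linspace(0.0, 3.0, 31)):  # camera walks along x
    alloc.add_frame(i, [x, 0.0, 0.0])
```

Each local field only ever sees frames near its own center, which keeps memory bounded for arbitrarily long trajectories.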
arXiv Detail & Related papers (2023-03-24T04:03:55Z) - High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
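The classic exposure-bracket merge that such burst pipelines generalize can be sketched as a weighted average of per-frame radiance estimates (the weighting scheme and clipping threshold here are illustrative):

```python
import numpy as np

def merge_hdr(raws, exposures, clip=0.98):
    """Merge linear raw frames shot at different exposure times into one
    HDR radiance map: divide out exposure, downweight clipped pixels."""
    acc = np.zeros_like(raws[0], dtype=float)
    wsum = np.zeros_like(acc)
    for raw, t in zip(raws, exposures):
        w = np.where(raw < clip, raw + 1e-3, 0.0)  # ignore saturated pixels
        acc += w * (raw / t)                       # radiance from this frame
        wsum += w
    return acc / np.maximum(wsum, 1e-12)

radiance = np.array([0.2, 2.0, 20.0])              # ground-truth radiance
exposures = [0.4, 0.04, 0.004]                     # bracketed shutter times
raws = [np.clip(radiance * t, 0.0, 1.0) for t in exposures]
hdr = merge_hdr(raws, exposures)                   # recovers `radiance`
```

Each pixel's radiance is recovered from whichever exposures left it unsaturated, which is why brackets extend dynamic range beyond any single frame.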
arXiv Detail & Related papers (2022-07-29T13:31:28Z) - Photon-Starved Scene Inference using Single Photon Cameras [14.121328731553868]
We propose photon scale-space, a collection of high-SNR images spanning a wide range of photons-per-pixel (PPP) levels.
We develop training techniques that push images with different illumination levels closer to each other in feature representation space.
Based on the proposed approach, we demonstrate, via simulations and real experiments with a SPAD camera, high-performance on various inference tasks.
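A photon scale-space can be sketched by averaging geometrically growing prefixes of a binary frame stream, producing images at increasing PPP (the scale choices are arbitrary):

```python
import numpy as np

def photon_scale_space(frames, scales=(8, 64, 512)):
    """Build images at several photons-per-pixel (PPP) levels by
    averaging geometrically growing prefixes of the binary stream."""
    return {s: frames[:s].mean(axis=0) for s in scales}

rng = np.random.default_rng(1)
frames = (rng.random((512, 4, 4)) < 0.25).astype(np.uint8)  # toy SPC stream
pyramid = photon_scale_space(frames)
```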
arXiv Detail & Related papers (2021-07-23T02:27:03Z) - Quanta Burst Photography [15.722085082004934]
Single-photon avalanche diodes (SPADs) are an emerging sensor technology capable of detecting individual incident photons.
We present quanta burst photography, a computational photography technique that leverages SPCs as passive imaging devices for photography in challenging conditions.
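The core align-then-merge step of burst photography can be sketched with known integer shifts (the hard part the paper addresses is estimating such motion from noisy binary data; here the shifts are simply given):

```python
import numpy as np

def align_and_merge(frames, shifts):
    """Undo a known integer shift per binary frame, then average.
    np.roll wraps at borders, which is fine for this toy example."""
    acc = np.zeros(frames.shape[1:], dtype=float)
    for f, (dy, dx) in zip(frames, shifts):
        acc += np.roll(np.roll(f, -dy, axis=0), -dx, axis=1)
    return acc / len(frames)

rng = np.random.default_rng(2)
ones = np.ones((8, 8))
scene = np.where(np.arange(8)[None, :] < 4, 0.1, 0.9) * ones  # dark | bright
shifts = [(0, i % 3) for i in range(300)]          # known camera motion
frames = np.array([
    (rng.random((8, 8)) < np.roll(scene, dx, axis=1)).astype(np.uint8)
    for _, dx in shifts
])
merged = align_and_merge(frames, shifts)           # averages back to `scene`
```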
arXiv Detail & Related papers (2020-06-21T16:20:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.