Neural Point Light Fields
- URL: http://arxiv.org/abs/2112.01473v1
- Date: Thu, 2 Dec 2021 18:20:10 GMT
- Title: Neural Point Light Fields
- Authors: Julian Ost, Issam Laradji, Alejandro Newell, Yuval Bahat, Felix Heide
- Abstract summary: We introduce Neural Point Light Fields that represent scenes implicitly with a light field living on a sparse point cloud.
These point light fields are a function of the ray direction and local point feature neighborhood, allowing us to interpolate the light field conditioned on training images without dense object coverage and parallax.
- Score: 80.98651520818785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Neural Point Light Fields that represent scenes implicitly with
a light field living on a sparse point cloud. Combining differentiable volume
rendering with learned implicit density representations has made it possible to
synthesize photo-realistic images for novel views of small scenes. As neural
volumetric rendering methods require dense sampling of the underlying
functional scene representation, with hundreds of samples along each ray cast
through the volume, they are fundamentally limited to small scenes with the
same objects projected to hundreds of training views. Promoting sparse point
clouds to neural implicit light fields allows us to represent large scenes
effectively with only a single implicit sampling operation per ray. These point
light fields are a function of the ray direction and local point feature
neighborhood, allowing us to interpolate the light field conditioned on
training images without dense object coverage and parallax. We assess the proposed
method for novel view synthesis on large driving scenarios, where we synthesize
realistic unseen views that existing implicit approaches fail to represent. We
validate that Neural Point Light Fields make it possible to predict videos
along unseen trajectories previously only feasible to generate by explicitly
modeling the scene.
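To make the single-evaluation-per-ray idea concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' released code): each ray gathers learned features from its k nearest points in the sparse cloud, aggregates them with a simple inverse-distance weighting (standing in for the paper's learned aggregation), and a small MLP maps the aggregated feature plus the ray direction to a color in one forward pass, with no volumetric sampling along the ray. All class and parameter names here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class PointLightFieldSketch(nn.Module):
    """Illustrative sketch: one light-field evaluation per ray, conditioned on
    the ray direction and a local neighborhood of learned point features."""

    def __init__(self, num_points: int, feat_dim: int = 64, k: int = 8):
        super().__init__()
        self.k = k
        # Learnable feature vector attached to each point of the sparse cloud.
        self.point_feats = nn.Parameter(0.01 * torch.randn(num_points, feat_dim))
        # Small MLP: aggregated neighborhood feature + ray direction -> RGB.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 3), nn.Sigmoid(),
        )

    def forward(self, points, rays_o, rays_d):
        # points: (N, 3) sparse cloud; rays_o/rays_d: (R, 3), rays_d unit-norm.
        rel = points[None, :, :] - rays_o[:, None, :]            # (R, N, 3)
        t = (rel * rays_d[:, None, :]).sum(-1, keepdim=True)     # (R, N, 1)
        perp = rel - t * rays_d[:, None, :]                      # (R, N, 3)
        dist = perp.norm(dim=-1)                                 # point-to-ray distance, (R, N)
        # k nearest points per ray define the local feature neighborhood.
        knn_dist, knn_idx = dist.topk(self.k, dim=-1, largest=False)
        feats = self.point_feats[knn_idx]                        # (R, k, feat_dim)
        # Inverse-distance weights stand in for the paper's learned aggregation.
        w = 1.0 / (knn_dist + 1e-6)
        w = w / w.sum(dim=-1, keepdim=True)
        agg = (w.unsqueeze(-1) * feats).sum(dim=1)               # (R, feat_dim)
        # Single implicit evaluation per ray: no sampling along the ray.
        return self.mlp(torch.cat([agg, rays_d], dim=-1))        # (R, 3) RGB in [0, 1]


# Toy usage: 5k random points, 16 random rays.
points = torch.rand(5000, 3)
rays_o = torch.zeros(16, 3)
rays_d = nn.functional.normalize(torch.randn(16, 3), dim=-1)
rgb = PointLightFieldSketch(num_points=5000)(points, rays_o, rays_d)  # -> (16, 3)
```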
Related papers
- Sampling for View Synthesis: From Local Light Field Fusion to Neural Radiance Fields and Beyond [27.339452004523082]
Local light field fusion proposes an algorithm for practical view synthesis from an irregular grid of sampled views.
We achieve the perceptual quality of Nyquist rate view sampling while using up to 4000x fewer views.
We reprise some of the recent results on sparse and even single image view synthesis.
arXiv Detail & Related papers (2024-08-08T16:56:03Z)
- Adaptive Shells for Efficient Neural Radiance Field Rendering [92.18962730460842]
We propose a neural radiance formulation that smoothly transitions between volumetric and surface-based rendering.
Our approach enables efficient rendering at very high fidelity.
We also demonstrate that the extracted envelope enables downstream applications such as animation and simulation.
arXiv Detail & Related papers (2023-11-16T18:58:55Z)
- PDF: Point Diffusion Implicit Function for Large-scale Scene Neural Representation [24.751481680565803]
We propose the Point Diffusion Implicit Function (PDF) for large-scale scene neural representation.
The core of our method is a large-scale point cloud super-resolution diffusion module.
Region sampling based on Mip-NeRF 360 is employed to model the background representation.
arXiv Detail & Related papers (2023-11-03T08:19:47Z)
- Pointersect: Neural Rendering with Cloud-Ray Intersection [30.485621062087585]
We propose a novel method that renders point clouds as if they are surfaces.
The proposed method is differentiable and requires no scene-specific optimization.
arXiv Detail & Related papers (2023-04-24T18:36:49Z)
- Unsupervised Discovery and Composition of Object Light Fields [57.198174741004095]
We propose to represent objects in an object-centric, compositional scene representation as light fields.
We propose a novel light field compositor module that enables reconstructing the global light field from a set of object-centric light fields.
arXiv Detail & Related papers (2022-05-08T17:50:35Z)
- Learning Neural Transmittance for Efficient Rendering of Reflectance Fields [43.24427791156121]
We propose a novel method based on precomputed Neural Transmittance Functions to accelerate rendering of neural reflectance fields.
Results on real and synthetic scenes demonstrate an almost two-orders-of-magnitude speedup for rendering under environment maps with minimal accuracy loss.
arXiv Detail & Related papers (2021-10-25T21:12:25Z)
- Object-Centric Neural Scene Rendering [19.687759175741824]
We present a method for composing photorealistic scenes from captured images of objects.
Our work builds upon neural radiance fields (NeRFs), which implicitly model the volumetric density and directionally-emitted radiance of a scene.
We learn object-centric neural scattering functions (OSFs), a representation that models per-object light transport implicitly using a lighting- and view-dependent neural network.
arXiv Detail & Related papers (2020-12-15T18:55:02Z)
- Space-time Neural Irradiance Fields for Free-Viewpoint Video [54.436478702701244]
We present a method that learns a neural irradiance field for dynamic scenes from a single video.
Our learned representation enables free-view rendering of the input video.
arXiv Detail & Related papers (2020-11-25T18:59:28Z)
- Neural Scene Graphs for Dynamic Scenes [57.65413768984925]
We present the first neural rendering method that decomposes dynamic scenes into scene graphs.
We learn implicitly encoded scenes, combined with a jointly learned latent representation to describe objects with a single implicit function.
arXiv Detail & Related papers (2020-11-20T12:37:10Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.