Unsupervised Discovery and Composition of Object Light Fields
- URL: http://arxiv.org/abs/2205.03923v2
- Date: Sat, 15 Jul 2023 21:30:46 GMT
- Title: Unsupervised Discovery and Composition of Object Light Fields
- Authors: Cameron Smith, Hong-Xing Yu, Sergey Zakharov, Fredo Durand, Joshua B. Tenenbaum, Jiajun Wu, Vincent Sitzmann
- Abstract summary: We propose to represent objects in an object-centric, compositional scene representation as light fields.
We propose a novel light field compositor module that enables reconstructing the global light field from a set of object-centric light fields.
- Score: 57.198174741004095
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural scene representations, both continuous and discrete, have recently
emerged as a powerful new paradigm for 3D scene understanding. Recent efforts
have tackled unsupervised discovery of object-centric neural scene
representations. However, the high cost of ray-marching, exacerbated by the
fact that each object representation has to be ray-marched separately, leads to
insufficiently sampled radiance fields and thus noisy renderings, poor
framerates, and high memory and time complexity during training and rendering.
Here, we propose to represent objects in an object-centric, compositional scene
representation as light fields. We propose a novel light field compositor
module that enables reconstructing the global light field from a set of
object-centric light fields. Dubbed Compositional Object Light Fields (COLF),
our method enables unsupervised learning of object-centric neural scene
representations, state-of-the-art reconstruction and novel view synthesis
performance on standard datasets, and rendering and training speeds orders
of magnitude faster than existing 3D approaches.
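The abstract describes the compositor only at a high level. Below is a minimal sketch of the idea, assuming a 6D (e.g. Plücker-style) ray parametrization, small per-object MLPs, and softmax-weighted blending of per-object colors; these are illustrative choices, not the paper's exact architecture. The key property it shows is that each ray needs one forward pass per object, with no ray marching.

```python
import torch
import torch.nn as nn

class ObjectLightField(nn.Module):
    """Hypothetical per-object light field: maps a ray to (rgb, visibility logit)."""
    def __init__(self, ray_dim=6, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ray_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),          # 3 color channels + 1 visibility logit
        )

    def forward(self, rays):               # rays: (N, 6), e.g. Plücker coordinates
        out = self.net(rays)
        return out[..., :3], out[..., 3]   # rgb, logit

class LightFieldCompositor(nn.Module):
    """Blends per-object colors into a global light field, one evaluation per ray."""
    def forward(self, rgbs, logits):       # rgbs: (K, N, 3), logits: (K, N)
        weights = torch.softmax(logits, dim=0)      # which object "wins" each ray
        return (weights.unsqueeze(-1) * rgbs).sum(dim=0)

# Toy usage: 2 objects, 1024 rays.
objects = [ObjectLightField() for _ in range(2)]
rays = torch.randn(1024, 6)
rgbs, logits = zip(*(obj(rays) for obj in objects))
colors = LightFieldCompositor()(torch.stack(rgbs), torch.stack(logits))  # (1024, 3)
```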
Related papers
- Slot-guided Volumetric Object Radiance Fields [13.996432950674045]
We present a novel framework for 3D object-centric representation learning.
Our approach effectively decomposes complex scenes into individual objects from a single image in an unsupervised fashion.
arXiv Detail & Related papers (2024-01-04T12:52:48Z)
- LANe: Lighting-Aware Neural Fields for Compositional Scene Synthesis [65.20672798704128]
We present Lighting-Aware Neural Field (LANe) for compositional synthesis of driving scenes.
We learn a scene representation that disentangles the static background and transient elements into a world-NeRF and class-specific object-NeRFs.
We demonstrate the performance of our model on a synthetic dataset of diverse lighting conditions rendered with the CARLA simulator.
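As a rough illustration of the world-NeRF plus class-specific object-NeRF decomposition described above, the sketch below composites per-sample densities and colors from a background field and several object fields. The additive density compositing and the lighting-code interface are assumptions made for illustration, not LANe's actual formulation.

```python
import torch

def composite_fields(world_nerf, object_nerfs, points, view_dirs, light_code):
    """Composite a world-NeRF with object-NeRFs at shared sample points.

    Each field maps (points, view_dirs[, light_code]) -> (density, rgb).
    Densities are summed and colors are density-weighted.
    """
    sigma_w, rgb_w = world_nerf(points, view_dirs)
    sigmas, rgbs = [sigma_w], [rgb_w]
    for obj in object_nerfs:
        sigma_o, rgb_o = obj(points, view_dirs, light_code)  # lighting-conditioned (assumed interface)
        sigmas.append(sigma_o)
        rgbs.append(rgb_o)
    sigmas = torch.stack(sigmas)                              # (1+K, N)
    rgbs = torch.stack(rgbs)                                  # (1+K, N, 3)
    total = sigmas.sum(dim=0).clamp(min=1e-8)
    rgb = (sigmas.unsqueeze(-1) * rgbs).sum(dim=0) / total.unsqueeze(-1)
    return total, rgb                                         # composite density and color per sample

# Toy usage with stand-in fields.
dummy = lambda p, d, *a: (torch.rand(p.shape[0]), torch.rand(p.shape[0], 3))
density, color = composite_fields(dummy, [dummy, dummy],
                                  torch.randn(64, 3), torch.randn(64, 3), torch.randn(16))
```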
arXiv Detail & Related papers (2023-04-06T17:59:25Z)
- Object Scene Representation Transformer [56.40544849442227]
We introduce Object Scene Representation Transformer (OSRT), a 3D-centric model in which individual object representations naturally emerge through novel view synthesis.
OSRT scales to significantly more complex scenes with larger diversity of objects and backgrounds than existing methods.
It is multiple orders of magnitude faster at compositional rendering thanks to its light field parametrization and the novel Slot Mixer decoder.
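A loose sketch of a Slot Mixer-style decoder over a light field parametrization is given below: each query ray attends over the object slots, the slots are mixed by those attention weights, and a small MLP decodes the mixed slot plus the ray embedding into a color. Layer sizes, the ray embedding, and the attention form are illustrative assumptions rather than OSRT's exact design.

```python
import torch
import torch.nn as nn

class SlotMixerSketch(nn.Module):
    """Sketch of a slot-mixing decoder: rays query slots via attention, then decode to RGB."""
    def __init__(self, slot_dim=64, ray_dim=6, hidden=128):
        super().__init__()
        self.ray_embed = nn.Linear(ray_dim, slot_dim)
        self.to_rgb = nn.Sequential(
            nn.Linear(2 * slot_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, slots, rays):                 # slots: (K, D), rays: (N, ray_dim)
        q = self.ray_embed(rays)                    # (N, D) per-ray queries
        attn = torch.softmax(q @ slots.t() / slots.shape[-1] ** 0.5, dim=-1)  # (N, K)
        mixed = attn @ slots                        # (N, D) per-ray mixed slot
        rgb = self.to_rgb(torch.cat([mixed, q], dim=-1))
        return rgb, attn                            # attn doubles as a soft per-object mask

# Toy usage: 5 slots, 1024 rays.
decoder = SlotMixerSketch()
colors, masks = decoder(torch.randn(5, 64), torch.randn(1024, 6))
```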
arXiv Detail & Related papers (2022-06-14T15:40:47Z)
- Neural Point Light Fields [80.98651520818785]
We introduce Neural Point Light Fields that represent scenes implicitly with a light field living on a sparse point cloud.
These point light fields are a function of the ray direction and the local point feature neighborhood, allowing us to interpolate the light field conditioned on training images without dense object coverage and parallax.
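To make the idea of a light field conditioned on a local point neighborhood concrete, here is a minimal sketch that pools the features of the k points nearest to a query ray and decodes a color together with the ray direction. The k-NN gather and mean pooling are assumptions for illustration, not the paper's aggregation scheme.

```python
import torch
import torch.nn as nn

def point_to_ray_distance(points, origin, direction):
    """Perpendicular distance from each point to a ray (direction assumed unit-norm)."""
    v = points - origin                              # (P, 3)
    t = (v * direction).sum(-1, keepdim=True)        # projection length along the ray
    return (v - t * direction).norm(dim=-1)          # (P,)

class PointLightFieldSketch(nn.Module):
    """Sketch of a point-based light field: pool features of the k nearest points, predict RGB."""
    def __init__(self, feat_dim=32, k=8, hidden=128):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, points, feats, origin, direction):
        d = point_to_ray_distance(points, origin, direction)
        idx = d.topk(self.k, largest=False).indices  # indices of the k closest points
        neighborhood = feats[idx].mean(dim=0)        # (feat_dim,) pooled local features
        return self.mlp(torch.cat([neighborhood, direction]))

# Toy usage: 2048 points with learned features, one query ray.
points, feats = torch.randn(2048, 3), torch.randn(2048, 32)
field = PointLightFieldSketch()
rgb = field(points, feats, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
```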
arXiv Detail & Related papers (2021-12-02T18:20:10Z)
- Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering [42.37007176376849]
We present a novel neural scene rendering system, which learns an object-compositional neural radiance field and produces realistic renderings of cluttered, real-world scenes.
To keep training stable in heavily cluttered scenes, we propose a scene-guided training strategy that resolves the 3D space ambiguity in occluded regions and learns sharp boundaries for each object.
arXiv Detail & Related papers (2021-09-04T11:37:18Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
- Object-Centric Neural Scene Rendering [19.687759175741824]
We present a method for composing photorealistic scenes from captured images of objects.
Our work builds upon neural radiance fields (NeRFs), which implicitly model the volumetric density and directionally-emitted radiance of a scene.
We learn object-centric neural scattering functions (OSFs), a representation that models per-object light transport implicitly using a lighting- and view-dependent neural network.
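A toy sketch of such a lighting- and view-dependent per-object field is shown below: an MLP maps a point in the object's frame plus a light direction and a view direction to a density and a scattered-radiance value. The input encoding and the density-plus-scattering output are illustrative assumptions rather than the OSF paper's exact parametrization.

```python
import torch
import torch.nn as nn

class ScatteringFunctionSketch(nn.Module):
    """Sketch of a per-object, lighting- and view-dependent scattering field."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 3 + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4))                      # density + RGB scattering

    def forward(self, x, light_dir, view_dir):         # all (N, 3), object coordinates
        out = self.net(torch.cat([x, light_dir, view_dir], dim=-1))
        sigma = torch.relu(out[..., :1])               # nonnegative density
        rho = torch.sigmoid(out[..., 1:])              # fraction of light scattered toward the view
        return sigma, rho

# Toy usage: evaluate one object's scattering at 256 sample points.
osf = ScatteringFunctionSketch()
sigma, rho = osf(torch.randn(256, 3), torch.randn(256, 3), torch.randn(256, 3))
```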
arXiv Detail & Related papers (2020-12-15T18:55:02Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
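The sketch below illustrates the general recipe of ray marching a field that returns density, normal, and reflectance at each sample, using a plain Lambertian shading term under a single point light as a stand-in for the paper's physically-based shading model; the stratified sampling and compositing follow standard volume rendering.

```python
import torch

def march_reflectance_field(field, origin, direction, light_pos,
                            n_samples=64, near=0.1, far=4.0):
    """Simplified differentiable ray march over a reflectance field.

    `field(x)` is assumed to return (density, normal, albedo) at points x; the
    shading here is Lambertian under one point light, a deliberate simplification.
    """
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction                       # (S, 3) samples along the ray
    sigma, normal, albedo = field(pts)                           # (S,), (S, 3), (S, 3)

    to_light = torch.nn.functional.normalize(light_pos - pts, dim=-1)
    shaded = albedo * (normal * to_light).sum(-1, keepdim=True).clamp(min=0.0)

    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma * delta)                      # per-sample opacity
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans                                      # standard volume-rendering weights
    return (weights[:, None] * shaded).sum(dim=0)                # final RGB for this ray

# Toy usage with a stand-in field.
def toy_field(x):
    return (torch.rand(x.shape[0]),
            torch.nn.functional.normalize(torch.randn_like(x), dim=-1),
            torch.rand_like(x))

rgb = march_reflectance_field(toy_field, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]),
                              light_pos=torch.tensor([1.0, 1.0, 0.0]))
```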
arXiv Detail & Related papers (2020-08-09T22:04:36Z)