ReLight My NeRF: A Dataset for Novel View Synthesis and Relighting of
Real World Objects
- URL: http://arxiv.org/abs/2304.10448v1
- Date: Thu, 20 Apr 2023 16:43:58 GMT
- Title: ReLight My NeRF: A Dataset for Novel View Synthesis and Relighting of
Real World Objects
- Authors: Marco Toschi, Riccardo De Matteo, Riccardo Spezialetti, Daniele De
Gregorio, Luigi Di Stefano, Samuele Salti
- Abstract summary: We introduce a novel dataset, dubbed ReNe (Relighting NeRF), framing real-world objects under one-light-at-a-time (OLAT) conditions.
We release a total of 20 scenes depicting a variety of objects with complex geometry and challenging materials.
Each scene includes 2000 images, acquired from 50 different points of view under 40 different OLAT conditions.
- Score: 14.526827265012045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we focus on the problem of rendering novel views from a Neural
Radiance Field (NeRF) under unobserved light conditions. To this end, we
introduce a novel dataset, dubbed ReNe (Relighting NeRF), framing real-world
objects under one-light-at-a-time (OLAT) conditions, annotated with accurate
ground-truth camera and light poses. Our acquisition pipeline leverages two
robotic arms holding, respectively, a camera and an omni-directional point-wise
light source. We release a total of 20 scenes depicting a variety of objects
with complex geometry and challenging materials. Each scene includes 2000
images, acquired from 50 different points of view under 40 different OLAT
conditions. By leveraging the dataset, we perform an ablation study on the
relighting capability of variants of the vanilla NeRF architecture and identify
a lightweight architecture that can render novel views of an object under novel
light conditions, which we use to establish a non-trivial baseline for the
dataset. Dataset and benchmark are available at
https://eyecan-ai.github.io/rene.
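As a rough sketch of the kind of lightweight variant such an ablation points to, the snippet below extends a vanilla NeRF MLP so that the color head is conditioned on the light position in addition to the view direction, while density stays light-independent. This is a minimal illustration under stated assumptions, not the architecture actually benchmarked in the paper; the class name, layer sizes, and encoding frequencies are hypothetical.

```python
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs):
    """Standard NeRF sinusoidal encoding: x -> [x, sin(2^k x), cos(2^k x)]."""
    out = [x]
    for k in range(n_freqs):
        out += [torch.sin((2.0 ** k) * x), torch.cos((2.0 ** k) * x)]
    return torch.cat(out, dim=-1)

class LightConditionedNeRF(nn.Module):
    """Hypothetical NeRF variant: density depends only on position, while the
    color head also sees the encoded view direction AND light position, so the
    same geometry can be rendered under novel OLAT conditions."""

    def __init__(self, hidden=128, pos_freqs=10, dir_freqs=4):
        super().__init__()
        pos_dim = 3 * (1 + 2 * pos_freqs)   # 63 for 10 frequencies
        dir_dim = 3 * (1 + 2 * dir_freqs)   # 27 for 4 frequencies
        self.pos_freqs, self.dir_freqs = pos_freqs, dir_freqs
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)   # light-independent density
        self.rgb_head = nn.Sequential(           # light-dependent appearance
            nn.Linear(hidden + 2 * dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir, light_pos):
        h = self.trunk(positional_encoding(xyz, self.pos_freqs))
        sigma = torch.relu(self.sigma_head(h))
        cond = torch.cat([h,
                          positional_encoding(view_dir, self.dir_freqs),
                          positional_encoding(light_pos, self.dir_freqs)],
                         dim=-1)
        return self.rgb_head(cond), sigma
```

Keeping the density head blind to the light position pins the geometry across all 40 OLAT conditions of a scene and lets only the radiance vary with lighting, which is the usual design choice for relightable radiance fields.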
Related papers
- Objects With Lighting: A Real-World Dataset for Evaluating Reconstruction and Rendering for Object Relighting [16.938779241290735]
Reconstructing an object from photos and placing it virtually in a new environment goes beyond the standard novel view synthesis task.
This work presents a real-world dataset for evaluating the reconstruction and rendering of objects for relighting.
arXiv Detail & Related papers (2024-01-17T11:02:52Z)
- Neural Relighting with Subsurface Scattering by Learning the Radiance Transfer Gradient [73.52585139592398]
We propose a novel framework for learning the radiance transfer field via volume rendering.
We will make our code and a novel light stage dataset of objects with subsurface scattering effects publicly available.
arXiv Detail & Related papers (2023-06-15T17:56:04Z)
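For readers unfamiliar with the "via volume rendering" part of the entry above, the sketch below shows the standard NeRF-style quadrature that such methods build on: per-sample colors are accumulated along a ray, weighted by opacity and transmittance. This is the generic compositing step only, not the paper's radiance-transfer-gradient formulation.

```python
import torch

def composite_along_ray(rgb, sigma, deltas):
    """Standard NeRF volume-rendering quadrature.

    rgb:    (n_samples, 3) per-sample radiance
    sigma:  (n_samples,)   per-sample density
    deltas: (n_samples,)   distances between consecutive samples
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)              # opacity per segment
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)     # running transmittance
    trans = torch.cat([torch.ones_like(trans[:1]),        # shift so T_i only
                       trans[:-1]])                       # uses samples j < i
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)            # final pixel color
```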
- LANe: Lighting-Aware Neural Fields for Compositional Scene Synthesis [65.20672798704128]
We present Lighting-Aware Neural Field (LANe) for compositional synthesis of driving scenes.
We learn a scene representation that disentangles the static background and transient elements into a world-NeRF and class-specific object-NeRFs.
We demonstrate the performance of our model on a synthetic dataset of diverse lighting conditions rendered with the CARLA simulator.
arXiv Detail & Related papers (2023-04-06T17:59:25Z)
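A common way to realize the world-NeRF plus object-NeRFs split described above (a sketch of the general compositing technique, not LANe's exact formulation) is to query every field at the same sample points, sum the densities, and density-weight the colors; the callables in `fields` are hypothetical stand-ins for the individual networks.

```python
import torch

def compose_fields(fields, xyz, view_dir):
    """Merge several radiance fields at the same sample points by summing
    densities and density-weighting colors. `fields` is a list of callables
    (hypothetical) mapping (xyz, view_dir) -> (rgb, sigma), e.g. one
    world-NeRF plus per-object NeRFs with points mapped into object frames.
    """
    rgbs, sigmas = zip(*[f(xyz, view_dir) for f in fields])
    sigmas = torch.stack(sigmas)                      # (n_fields, n_pts, 1)
    rgbs = torch.stack(rgbs)                          # (n_fields, n_pts, 3)
    sigma_total = sigmas.sum(dim=0)
    weights = sigmas / sigma_total.clamp(min=1e-10)   # per-field contribution
    rgb_total = (weights * rgbs).sum(dim=0)
    return rgb_total, sigma_total
```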
- A Large-Scale Outdoor Multi-modal Dataset and Benchmark for Novel View Synthesis and Implicit Scene Reconstruction [26.122654478946227]
Neural Radiance Fields (NeRF) have achieved impressive results in single-object scene reconstruction and novel view synthesis.
However, there is no unified outdoor scene dataset for large-scale NeRF evaluation, due to expensive data acquisition and calibration costs.
In this paper, we propose a large-scale outdoor multi-modal dataset, OMMO dataset, containing complex land objects and scenes with calibrated images, point clouds and prompt annotations.
arXiv Detail & Related papers (2023-01-17T10:15:32Z)
- SPARF: Neural Radiance Fields from Sparse and Noisy Poses [58.528358231885846]
We introduce Sparse Pose Adjusting Radiance Field (SPARF) to address the challenge of novel-view synthesis.
Our approach exploits multi-view geometry constraints in order to jointly learn the NeRF and refine the camera poses.
arXiv Detail & Related papers (2022-11-21T18:57:47Z)
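The "jointly learn the NeRF and refine the camera poses" idea above boils down to making a per-image pose correction a learnable parameter optimized alongside the network weights. The sketch below shows that generic pattern, not SPARF's specific multi-view-consistency objectives; all names are hypothetical.

```python
import torch
import torch.nn as nn

class LearnablePoses(nn.Module):
    """Per-image 6-DoF corrections applied on top of noisy initial poses."""

    def __init__(self, n_images):
        super().__init__()
        # Small rotation (axis-angle) and translation offsets, init at zero.
        self.rot = nn.Parameter(torch.zeros(n_images, 3))
        self.trans = nn.Parameter(torch.zeros(n_images, 3))

    def forward(self, idx, R_init, t_init):
        # First-order approximation of the rotation update: R ~ (I + [w]_x) R.
        w = self.rot[idx]
        skew = torch.zeros(3, 3)
        skew[0, 1], skew[0, 2] = -w[2], w[1]
        skew[1, 0], skew[1, 2] = w[2], -w[0]
        skew[2, 0], skew[2, 1] = -w[1], w[0]
        R = (torch.eye(3) + skew) @ R_init
        t = t_init + self.trans[idx]
        return R, t

# Joint optimization: one optimizer over NeRF weights AND pose corrections.
# `nerf` and `render` are assumed to exist elsewhere (hypothetical names):
# optimizer = torch.optim.Adam(
#     list(nerf.parameters()) + list(poses.parameters()), lr=5e-4)
# loss = ((render(nerf, *poses(i, R0[i], t0[i])) - images[i]) ** 2).mean()
```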
- RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis [104.53930611219654]
We present a large-scale synthetic dataset for novel view synthesis consisting of 300k images rendered from nearly 2000 complex scenes.
The dataset is orders of magnitude larger than existing synthetic datasets for novel view synthesis.
Using 4 distinct sources of high-quality 3D meshes, the scenes of our dataset exhibit challenging variations in camera views, lighting, shape, materials, and textures.
arXiv Detail & Related papers (2022-05-14T13:15:32Z)
- Unsupervised Discovery and Composition of Object Light Fields [57.198174741004095]
We propose to represent objects in an object-centric, compositional scene representation as light fields.
We propose a novel light field compositor module that enables reconstructing the global light field from a set of object-centric light fields.
arXiv Detail & Related papers (2022-05-08T17:50:35Z)
- NeLF: Practical Novel View Synthesis with Neural Light Field [93.41020940730915]
We present a practical and robust deep learning solution for the novel view synthesis of complex scenes.
In our approach, a continuous scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color.
Our method achieves state-of-the-art novel view synthesis results while maintaining an interactive frame rate.
arXiv Detail & Related papers (2021-05-15T01:20:30Z)
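The representation described in the NeLF entry above, a set of rays each mapped to a color, replaces NeRF's many samples per ray with a single network evaluation per pixel, which is what makes interactive frame rates plausible. Below is a minimal sketch using Plücker ray coordinates; the parameterization and architecture are illustrative assumptions, not NeLF's actual network.

```python
import torch
import torch.nn as nn

class NeuralLightField(nn.Module):
    """Map a ray directly to a color: one network evaluation per pixel,
    instead of the many samples per ray a NeRF needs. Sketch only; the
    architecture and ray parameterization are illustrative assumptions."""

    def __init__(self, hidden=256, n_layers=6):
        super().__init__()
        layers, in_dim = [], 6                       # Plücker ray coordinates
        for _ in range(n_layers):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers += [nn.Linear(hidden, 3), nn.Sigmoid()]
        self.mlp = nn.Sequential(*layers)

    def forward(self, origin, direction):
        d = nn.functional.normalize(direction, dim=-1)
        m = torch.cross(origin, d, dim=-1)           # moment: origin x dir
        return self.mlp(torch.cat([d, m], dim=-1))   # (d, m) is Plücker
```

Because rendering a pixel costs exactly one forward pass, throughput is bounded by the MLP size rather than by the number of samples along each ray.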
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.