Objects With Lighting: A Real-World Dataset for Evaluating Reconstruction and Rendering for Object Relighting
- URL: http://arxiv.org/abs/2401.09126v2
- Date: Sat, 13 Apr 2024 16:43:01 GMT
- Title: Objects With Lighting: A Real-World Dataset for Evaluating Reconstruction and Rendering for Object Relighting
- Authors: Benjamin Ummenhofer, Sanskar Agrawal, Rene Sepulveda, Yixing Lao, Kai Zhang, Tianhang Cheng, Stephan Richter, Shenlong Wang, German Ros
- Abstract summary: Reconstructing an object from photos and placing it virtually in a new environment goes beyond the standard novel view synthesis task.
This work presents a real-world dataset for measuring the reconstruction and rendering of objects for relighting.
- Score: 16.938779241290735
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reconstructing an object from photos and placing it virtually in a new environment goes beyond the standard novel view synthesis task: the appearance of the object must adapt not only to the novel viewpoint but also to the new lighting conditions. Yet evaluations of inverse rendering methods rely on novel view synthesis data or on simplistic synthetic datasets for quantitative analysis. This work presents a real-world dataset for measuring the reconstruction and rendering of objects for relighting. To this end, we capture the environment lighting and ground truth images of the same objects in multiple environments, which allows us to reconstruct the objects from images taken in one environment and to quantify the quality of the rendered views under the unseen lighting environments. Further, we introduce a simple baseline composed of off-the-shelf methods, test several state-of-the-art methods on the relighting task, and show that novel view synthesis is not a reliable proxy for measuring relighting performance. Code and dataset are available at https://github.com/isl-org/objects-with-lighting .
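The evaluation described in the abstract ultimately reduces to comparing a view rendered under an unseen lighting environment against the captured ground-truth photo. A minimal sketch of one such comparison using PSNR is shown below; the dataset's actual metric suite and image handling may differ, and the array names are illustrative only.

```python
import numpy as np

def psnr(rendered: np.ndarray, ground_truth: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((rendered.astype(np.float64) - ground_truth.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a rendering that deviates uniformly from the ground truth.
gt = np.full((4, 4, 3), 0.5)
render = gt + 0.01  # uniform error of 0.01 per channel -> MSE = 1e-4
print(round(psnr(render, gt), 1))  # -> 40.0 (dB)
```

In the benchmark setting, `gt` would be the photo of the object captured in the held-out environment and `render` the relit view produced by the reconstruction method under test.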
Related papers
- Relighting from a Single Image: Datasets and Deep Intrinsic-based Architecture [0.7499722271664147]
Single image scene relighting aims to generate a realistic new version of an input image so that it appears to be illuminated by a new target light condition.
We propose two new datasets: a synthetic dataset with the ground truth of intrinsic components and a real dataset collected under laboratory conditions.
Our method outperforms state-of-the-art methods on both existing datasets and our newly developed datasets.
arXiv Detail & Related papers (2024-09-27T14:15:02Z)
- Sparse multi-view hand-object reconstruction for unseen environments [31.604141859402187]
We train our model on a synthetic hand-object dataset and evaluate directly on a real world recorded hand-object dataset with unseen objects.
We show that while reconstruction of unseen hands and objects from RGB is challenging, additional views can help improve the reconstruction quality.
arXiv Detail & Related papers (2024-05-02T15:01:25Z)
- Neural Microfacet Fields for Inverse Rendering [54.15870869037466]
We present a method for recovering materials, geometry, and environment illumination from images of a scene.
Our method uses a microfacet reflectance model within a volumetric setting by treating each sample along the ray as a (potentially non-opaque) surface.
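The summary above describes treating each sample along a ray as a potentially non-opaque surface inside a volumetric renderer. A hedged sketch of the standard front-to-back alpha compositing that this implies is given below; this is not the paper's actual implementation, and the sample opacities and colors are invented for illustration.

```python
import numpy as np

def composite_along_ray(alphas: np.ndarray, colors: np.ndarray) -> np.ndarray:
    """Front-to-back alpha compositing of per-sample surfaces along one ray.

    alphas: (N,) opacity of each sample in [0, 1]
    colors: (N, 3) RGB reflected by each sample (e.g. from a microfacet BRDF)
    """
    # Transmittance: fraction of light surviving all samples in front of sample i.
    transmittance = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])
    weights = transmittance * alphas  # contribution of each sample to the pixel
    return (weights[:, None] * colors).sum(axis=0)

# Two samples: a half-opaque red surface in front of an opaque blue one.
alphas = np.array([0.5, 1.0])
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])
print(composite_along_ray(alphas, colors))  # -> [0.5 0.  0.5]
```

The weights sum to at most 1, so a fully opaque first sample would occlude everything behind it, which is what makes per-sample surface models compatible with volumetric integration.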
arXiv Detail & Related papers (2023-03-31T05:38:13Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Discovering Objects that Can Move [55.743225595012966]
We study the problem of object discovery -- separating objects from the background without manual labels.
Existing approaches utilize appearance cues, such as color, texture, and location, to group pixels into object-like regions.
We choose to focus on dynamic objects -- entities that can move independently in the world.
arXiv Detail & Related papers (2022-03-18T21:13:56Z)
- NeROIC: Neural Rendering of Objects from Online Image Collections [42.02832046768925]
We present a novel method to acquire object representations from online image collections, capturing high-quality geometry and material properties of arbitrary objects.
This enables various object-centric rendering applications such as novel-view synthesis, relighting, and harmonized background composition.
arXiv Detail & Related papers (2022-01-07T16:45:15Z)
- Salient Objects in Clutter [130.63976772770368]
This paper identifies and addresses a serious design bias of existing salient object detection (SOD) datasets.
This design bias has led to a saturation in performance for state-of-the-art SOD models when evaluated on existing datasets.
We propose a new high-quality dataset and update the previous saliency benchmark.
arXiv Detail & Related papers (2021-05-07T03:49:26Z)
- Unsupervised Learning of 3D Object Categories from Videos in the Wild [75.09720013151247]
We focus on learning a model from multiple views of a large collection of object instances.
We propose a new neural network design, called warp-conditioned ray embedding (WCR), which significantly improves reconstruction.
Our evaluation demonstrates performance improvements over several deep monocular reconstruction baselines on existing benchmarks.
arXiv Detail & Related papers (2021-03-30T17:57:01Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
- Exploring Image Enhancement for Salient Object Detection in Low Light Images [27.61080096436953]
We propose an image enhancement approach to facilitate the salient object detection in low light images.
The proposed model embeds the physical lighting model into the deep neural network to describe the degradation of low light images.
We construct a low-light image dataset with pixel-level human-labeled ground-truth annotations and report promising results.
arXiv Detail & Related papers (2020-07-31T15:09:03Z)
- IllumiNet: Transferring Illumination from Planar Surfaces to Virtual Objects in Augmented Reality [38.83696624634213]
This paper presents a learning-based illumination estimation method for virtual objects in real environments.
Given a single RGB image, our method directly infers the relit virtual object by transferring the illumination features extracted from planar surfaces in the scene to the desired geometries.
arXiv Detail & Related papers (2020-07-12T13:11:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.