Learning Relighting and Intrinsic Decomposition in Neural Radiance Fields
- URL: http://arxiv.org/abs/2406.11077v1
- Date: Sun, 16 Jun 2024 21:10:37 GMT
- Title: Learning Relighting and Intrinsic Decomposition in Neural Radiance Fields
- Authors: Yixiong Yang, Shilin Hu, Haoyu Wu, Ramon Baldrich, Dimitris Samaras, Maria Vanrell
- Abstract summary: Our research introduces a method that combines relighting with intrinsic decomposition.
By leveraging light variations in scenes to generate pseudo labels, our method provides guidance for intrinsic decomposition.
We validate our method on both synthetic and real-world datasets, achieving convincing results.
- Score: 21.057216351934688
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of extracting intrinsic components, such as reflectance and shading, from neural radiance fields is of growing interest. However, current methods largely focus on synthetic scenes and isolated objects, overlooking the complexities of real scenes with backgrounds. To address this gap, our research introduces a method that combines relighting with intrinsic decomposition. By leveraging light variations in scenes to generate pseudo labels, our method provides guidance for intrinsic decomposition without requiring ground truth data. Our method, grounded in physical constraints, ensures robustness across diverse scene types and reduces the reliance on pre-trained models or hand-crafted priors. We validate our method on both synthetic and real-world datasets, achieving convincing results. Furthermore, the applicability of our method to image editing tasks demonstrates promising outcomes.
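The abstract does not spell out how the pseudo labels are generated; as a rough illustration of the physical constraint it alludes to, the sketch below assumes the Lambertian intrinsic model I = R * S, with a reflectance R shared across lighting conditions and approximately achromatic shading S. The function name and the median-chromaticity heuristic are illustrative assumptions for this sketch, not the authors' method.

```python
import numpy as np

def pseudo_labels_from_light_variation(images):
    """Hypothetical sketch: derive pseudo reflectance/shading labels from
    several images of the same scene under different illuminations,
    assuming I = R * S with a shared reflectance R and grayscale shading S.
    `images` is a list of float arrays of shape (H, W, 3) in [0, 1]."""
    eps = 1e-6
    stack = np.stack(images, axis=0)                    # (N, H, W, 3)

    # Shading is assumed achromatic, so chromaticity (color normalized by
    # intensity) depends only on reflectance and should agree across lights.
    intensity = stack.mean(axis=-1, keepdims=True)      # (N, H, W, 1)
    chroma = stack / np.maximum(intensity, eps)         # (N, H, W, 3)

    # Pseudo reflectance: median chromaticity over lighting conditions,
    # which suppresses illumination-dependent variation.
    pseudo_reflectance = np.median(chroma, axis=0)      # (H, W, 3)

    # Pseudo shading per light: residual intensity once the shared
    # reflectance is divided out.
    pseudo_shading = stack / np.maximum(pseudo_reflectance, eps)
    pseudo_shading = pseudo_shading.mean(axis=-1)       # (N, H, W)

    return pseudo_reflectance, pseudo_shading
```

In this simplified view, pseudo labels of this kind could supervise a decomposition network without ground-truth reflectance or shading, which is the role the abstract assigns to its light-variation pseudo labels.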
Related papers
- Relighting from a Single Image: Datasets and Deep Intrinsic-based Architecture [0.7499722271664147]
Single image scene relighting aims to generate a realistic new version of an input image so that it appears to be illuminated by a new target light condition.
We propose two new datasets: a synthetic dataset with the ground truth of intrinsic components and a real dataset collected under laboratory conditions.
Our method outperforms the state-of-the-art methods in performance, as tested on both existing datasets and our newly developed datasets.
arXiv Detail & Related papers (2024-09-27T14:15:02Z) - Perceptual Artifacts Localization for Image Synthesis Tasks [59.638307505334076]
We introduce a novel dataset comprising 10,168 generated images, each annotated with per-pixel perceptual artifact labels.
A segmentation model, trained on our proposed dataset, effectively localizes artifacts across a range of tasks.
We propose an innovative zoom-in inpainting pipeline that seamlessly rectifies perceptual artifacts in the generated images.
arXiv Detail & Related papers (2023-10-09T10:22:08Z) - Diffusion-based Holistic Texture Rectification and Synthesis [26.144666226217062]
Traditional texture synthesis approaches focus on generating textures from pristine samples.
We propose a framework that synthesizes holistic textures from degraded samples in natural images.
arXiv Detail & Related papers (2023-09-26T08:44:46Z) - Neural Relighting with Subsurface Scattering by Learning the Radiance
Transfer Gradient [73.52585139592398]
We propose a novel framework for learning the radiance transfer field via volume rendering.
We will make our code and a novel light stage dataset of objects with subsurface scattering effects publicly available.
arXiv Detail & Related papers (2023-06-15T17:56:04Z) - TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
It extends TensoRF, a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z) - IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable
Novel View Synthesis [90.03590032170169]
We present intrinsic neural radiance fields, dubbed IntrinsicNeRF, which introduce intrinsic decomposition into the NeRF-based neural rendering method.
Our experiments and editing samples on both object-specific/room-scale scenes and synthetic/real-world data demonstrate that we can obtain consistent intrinsic decomposition results.
arXiv Detail & Related papers (2022-10-02T22:45:11Z) - Neural Radiance Transfer Fields for Relightable Novel-view Synthesis
with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z) - DIB-R++: Learning to Predict Lighting and Material with a Hybrid
Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIBR++, a hybrid differentiable renderer that supports photorealistic effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIBR++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z) - Partially fake it till you make it: mixing real and fake thermal images
for improved object detection [29.13557322147509]
We show the performance of the proposed system in the context of object detection in thermal videos.
Our single-modality detector achieves state-of-the-art results on the FLIR ADAS dataset.
arXiv Detail & Related papers (2021-06-25T12:56:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.