SIRfyN: Single Image Relighting from your Neighbors
- URL: http://arxiv.org/abs/2112.04497v1
- Date: Wed, 8 Dec 2021 17:05:57 GMT
- Title: SIRfyN: Single Image Relighting from your Neighbors
- Authors: D.A. Forsyth, Anand Bhattad, Pranav Asthana, Yuanyi Zhong, Yuxiong Wang
- Abstract summary: We show how to relight a scene depicted in a single image, such that (a) the overall shading has changed and (b) the resulting image looks like a natural image of that scene.
- Score: 14.601975066158394
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We show how to relight a scene, depicted in a single image, such that (a) the
overall shading has changed and (b) the resulting image looks like a natural
image of that scene. Applications for such a procedure include generating
training data and building authoring environments. Naive methods for doing this
fail. One reason is that shading and albedo are quite strongly related; for
example, sharp boundaries in shading tend to appear at depth discontinuities,
which are usually apparent in albedo. The same scene can be lit in different ways,
and established theory shows the different lightings form a cone (the
illumination cone). Novel theory shows that one can use similar scenes to
estimate the different lightings that apply to a given scene, with bounded
expected error. Our method exploits this theory to estimate a representation of
the available lighting fields in the form of imputed generators of the
illumination cone. Our procedure does not require expensive "inverse graphics"
datasets, and sees no ground truth data of any kind.
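The cone structure suggests a simple relighting recipe once generators are in hand: scale each imputed shading generator by a nonnegative weight, sum, and multiply by albedo. The sketch below is a minimal illustration of that recipe under an assumed Lambertian image = albedo x shading model; the arrays and the `relight` helper are hypothetical, not the paper's implementation.

```python
import numpy as np

def relight(albedo, generators, weights):
    """Compose a new shading as a point in the illumination cone.

    albedo:     (H, W, 3) reflectance image.
    generators: (K, H, W) stack of imputed shading generators.
    weights:    length-K nonnegative coefficients; any such choice
                stays inside the cone, so the result is a valid lighting.
    """
    weights = np.asarray(weights, dtype=np.float64)
    if np.any(weights < 0):
        raise ValueError("cone membership requires nonnegative weights")
    shading = np.tensordot(weights, generators, axes=1)  # (H, W) combined shading
    return albedo * shading[..., None]                   # scale each color channel

# Hypothetical usage: "steer" light around the scene by resampling weights.
rng = np.random.default_rng(0)
albedo = rng.random((64, 64, 3))
generators = rng.random((4, 64, 64))
relit = relight(albedo, generators, rng.random(4))
```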
Qualitative evaluation suggests the method can erase and restore soft indoor
shadows, and can "steer" light around a scene. We offer a summary quantitative
evaluation of the method with a novel application of the FID. An extension of
the FID allows per-generated-image evaluation. Furthermore, we offer
qualitative evaluation with a user study, and show that our method produces
images that can successfully be used for data augmentation.
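For context, the standard FID compares Gaussian fits to deep-feature embeddings of real and generated image sets; one natural per-image extension scores a single generated image's features against the real-feature Gaussian. The sketch below shows both, using numpy/scipy; the Mahalanobis-style per-image score is an assumption about how such an extension could look, not necessarily the paper's exact construction.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_gen):
    """Frechet Inception Distance between two (N, D) feature sets."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g).real  # sqrtm may return tiny imaginary parts
    return float(((mu_r - mu_g) ** 2).sum()
                 + np.trace(cov_r + cov_g - 2.0 * covmean))

def per_image_score(feat, feats_real, eps=1e-6):
    """Hypothetical per-image score: squared Mahalanobis distance of one
    generated image's feature vector from the real-feature Gaussian."""
    mu = feats_real.mean(axis=0)
    cov = np.cov(feats_real, rowvar=False) + eps * np.eye(feat.shape[0])
    diff = feat - mu
    return float(diff @ np.linalg.solve(cov, diff))
```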
Related papers
- Neural Gaffer: Relighting Any Object via Diffusion [43.87941408722868]
We propose a novel end-to-end 2D relighting diffusion model, called Neural Gaffer.
Our model takes a single image of any object and can synthesize an accurate, high-quality relit image under any novel lighting condition.
We evaluate our model on both synthetic and in-the-wild Internet imagery and demonstrate its advantages in terms of generalization and accuracy.
arXiv Detail & Related papers (2024-06-11T17:50:15Z)
- Latent Intrinsics Emerge from Training to Relight [21.766083733177652]
This paper describes a relighting method that is entirely data-driven, where intrinsics and lighting are each represented as latent variables.
We show that albedo can be recovered from our latent intrinsics without using any example albedos, and that the albedos recovered are competitive with SOTA methods.
arXiv Detail & Related papers (2024-05-31T17:59:12Z)
- SIRe-IR: Inverse Rendering for BRDF Reconstruction with Shadow and Illumination Removal in High-Illuminance Scenes [51.50157919750782]
We present SIRe-IR, an implicit neural inverse rendering approach that decomposes the scene into an environment map, albedo, and roughness.
By accurately modeling the indirect radiance field, normal, visibility, and direct light simultaneously, we are able to remove both shadows and indirect illumination.
Even in the presence of intense illumination, our method recovers high-quality albedo and roughness with no shadow interference.
arXiv Detail & Related papers (2023-10-19T10:44:23Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be supervised solely on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Spatio-Temporal Outdoor Lighting Aggregation on Image Sequences using Transformer Networks [23.6427456783115]
In this work, we focus on outdoor lighting estimation by aggregating individual noisy estimates from images.
Recent work based on deep neural networks has shown promising results for single-image lighting estimation, but lacks robustness.
We tackle this problem by combining lighting estimates from several image views sampled in the angular and temporal domains of an image sequence.
arXiv Detail & Related papers (2022-02-18T14:11:16Z)
- Neural Radiance Fields for Outdoor Scene Relighting [70.97747511934705]
We present NeRF-OSR, the first approach for outdoor scene relighting based on neural radiance fields.
In contrast to the prior art, our technique allows simultaneous editing of both scene illumination and camera viewpoint.
It also includes a dedicated network for shadow reproduction, which is crucial for high-quality outdoor scene relighting.
arXiv Detail & Related papers (2021-12-09T18:59:56Z)
- Self-supervised Outdoor Scene Relighting [92.20785788740407]
We propose a self-supervised approach for relighting.
Our approach is trained only on corpora of images collected from the internet, without any user supervision.
Results show the ability of our technique to produce photo-realistic and physically plausible results that generalize to unseen scenes.
arXiv Detail & Related papers (2021-07-07T09:46:19Z)
- Relighting Images in the Wild with a Self-Supervised Siamese Auto-Encoder [62.580345486483886]
We propose a self-supervised method for relighting single-view images in the wild.
The method is based on an auto-encoder which deconstructs an image into two separate encodings.
We train our model on large-scale datasets such as YouTube 8M and CelebA.
arXiv Detail & Related papers (2020-12-11T16:08:50Z)
- Intrinsic Image Decomposition using Paradigms [0.0]
The best modern intrinsic image methods learn a map from image to albedo using rendered models and human judgements.
This paper describes a method that learns intrinsic image decomposition without seeing WHDR annotations, rendered data, or ground truth data.
arXiv Detail & Related papers (2020-11-20T17:10:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.