Physically-Based Editing of Indoor Scene Lighting from a Single Image
- URL: http://arxiv.org/abs/2205.09343v1
- Date: Thu, 19 May 2022 06:44:37 GMT
- Title: Physically-Based Editing of Indoor Scene Lighting from a Single Image
- Authors: Zhengqin Li, Jia Shi, Sai Bi, Rui Zhu, Kalyan Sunkavalli, Miloš Hašan, Zexiang Xu, Ravi Ramamoorthi, Manmohan Chandraker
- Abstract summary: We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
- Score: 106.60252793395104
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a method to edit complex indoor lighting from a single image with
its predicted depth and light source segmentation masks. This is an extremely
challenging problem that requires modeling complex light transport, and
disentangling HDR lighting from material and geometry with only a partial LDR
observation of the scene. We tackle this problem using two novel components: 1)
a holistic scene reconstruction method that estimates scene reflectance and
parametric 3D lighting, and 2) a neural rendering framework that re-renders the
scene from our predictions. We use physically-based indoor light
representations that allow for intuitive editing, and infer both visible and
invisible light sources. Our neural rendering framework combines
physically-based direct illumination and shadow rendering with deep networks to
approximate global illumination. It can capture challenging lighting effects,
such as soft shadows, directional lighting, specular materials, and
interreflections. Previous single image inverse rendering methods usually
entangle scene lighting and geometry and only support applications like object
insertion. Instead, by combining parametric 3D lighting estimation with neural
scene rendering, we demonstrate the first automatic method to achieve full
scene relighting, including light source insertion, removal, and replacement,
from a single image. All source code and data will be publicly released.
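As a rough illustration of the re-rendering split the abstract describes (physically-based direct lighting and shadows, plus a network that approximates global illumination), here is a minimal Lambertian sketch. It is not the authors' implementation; `shadow_fn` and `indirect_fn` are hypothetical stand-ins for the paper's shadow renderer and indirect-lighting network.

```python
# Toy sketch of the re-rendering split implied by the abstract:
#   image = sum_over_lights(direct * shadow) + learned_indirect
# None of these names come from the paper; they are stand-ins.
import numpy as np

def lambertian_direct(albedo, normals, points, light_pos, light_rgb):
    """Physically-based direct term for one point light (inverse-square falloff)."""
    to_light = light_pos - points                              # (H, W, 3)
    dist2 = np.sum(to_light ** 2, axis=-1, keepdims=True)
    wi = to_light / np.sqrt(dist2)
    cos = np.clip(np.sum(normals * wi, axis=-1, keepdims=True), 0.0, None)
    return albedo / np.pi * light_rgb * cos / dist2

def relight(albedo, normals, points, lights, shadow_fn, indirect_fn):
    """Re-render after editing `lights` (insert/remove/replace entries).
    shadow_fn stands in for the physically-based shadow renderer and
    indirect_fn for the global-illumination network."""
    direct = np.zeros_like(albedo)
    for light_pos, light_rgb in lights:
        vis = shadow_fn(points, light_pos)                     # soft shadows
        direct += vis * lambertian_direct(albedo, normals, points,
                                          light_pos, light_rgb)
    return direct + indirect_fn(direct, albedo)                # add learned GI

# Minimal usage with dummy stand-ins for the learned components:
H = W = 4
albedo  = np.full((H, W, 3), 0.5)
normals = np.tile(np.array([0.0, 0.0, 1.0]), (H, W, 1))
xs, ys  = np.meshgrid(np.linspace(-1, 1, W), np.linspace(-1, 1, H))
points  = np.dstack([xs, ys, np.zeros((H, W))])
img = relight(albedo, normals, points,
              lights=[(np.array([0.0, 0.0, 2.0]), np.array([30.0, 30.0, 27.0]))],
              shadow_fn=lambda p, lp: 1.0,          # unoccluded toy scene
              indirect_fn=lambda d, a: 0.1 * a * d.mean())
```

In this framing, light source insertion, removal, and replacement all reduce to editing the `lights` list before calling `relight`, mirroring the editing operations the abstract claims.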
Related papers
- All-frequency Full-body Human Image Relighting [1.529342790344802]
Relighting of human images enables post-photography editing of lighting effects in portraits.
The current mainstream approach uses neural networks to approximate lighting effects without explicitly accounting for the principles of physical shading.
We propose a two-stage relighting method that can reproduce physically-based shadows and shading from low to high frequencies.
arXiv Detail & Related papers (2024-11-01T04:45:48Z)
- IllumiNeRF: 3D Relighting Without Inverse Rendering [25.642960820693947]
We show how to relight each input image using an image diffusion model conditioned on target environment lighting and estimated object geometry.
We reconstruct a Neural Radiance Field (NeRF) with these relit images, from which we render novel views under the target lighting.
We demonstrate that this strategy is surprisingly competitive and achieves state-of-the-art results on multiple relighting benchmarks.
arXiv Detail & Related papers (2024-06-10T17:59:59Z)
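A schematic of the three-step IllumiNeRF recipe summarized above, with hypothetical callables standing in for the diffusion relighter and the NeRF fit/render steps (none of these names come from the paper):

```python
# Sketch of the IllumiNeRF pipeline: relight inputs, fit a NeRF, render.
# The three callables are placeholders, not a real API.
def illuminerf(images, poses, target_env, geometry,
               relight_diffusion, fit_nerf, render_nerf, novel_poses):
    # 1) Relight every input view with a diffusion model conditioned on the
    #    target environment lighting and the estimated object geometry.
    relit = [relight_diffusion(img, target_env, geometry) for img in images]
    # 2) Reconstruct a NeRF from the relit images (no inverse rendering step).
    nerf = fit_nerf(relit, poses)
    # 3) Render novel views that appear lit by the target environment.
    return [render_nerf(nerf, p) for p in novel_poses]
```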
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
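The primary/secondary ray split in the entry above can be sketched as follows; the neural-field and mesh interfaces are invented for illustration, not the paper's API.

```python
# Sketch of the hybrid design: neural field for primary rays, explicit mesh
# (extracted from the field) for secondary shadow rays.
import numpy as np

def shade_pixel(ray_o, ray_d, neural_field, mesh, sun_dir, sun_rgb):
    # Primary ray: the neural field supplies hit point, normal and diffuse BRDF.
    point, normal, brdf = neural_field(ray_o, ray_d)      # hypothetical query
    # Secondary (shadow) ray: cast against the explicit mesh, which makes
    # higher-order effects such as cast shadows cheap to evaluate.
    visible = 0.0 if mesh.intersects(point, sun_dir) else 1.0
    cos = max(float(normal @ sun_dir), 0.0)
    return brdf * sun_rgb * cos * visible                 # HDR sun contribution
```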
- WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
arXiv Detail & Related papers (2023-03-24T17:59:56Z)
- Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that models 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z)
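One plausible reading of the joint-prediction setup above is a shared encoder with separate heads for albedo, normals, depth, and a coarse spatially-varying lighting volume. The layout below is a guess for illustration; channel counts and the lighting parameterization are assumptions, not the paper's architecture.

```python
# Toy PyTorch layout for joint albedo/normal/depth/lighting prediction.
import torch
import torch.nn as nn

class JointInverseNet(nn.Module):
    def __init__(self, feat=64, light_ch=24, depth_cells=8):
        super().__init__()
        self.depth_cells = depth_cells
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        def head(out_ch):
            return nn.Conv2d(feat, out_ch, 3, padding=1)
        self.albedo, self.normal, self.depth = head(3), head(3), head(1)
        # Lighting as a coarse volume: `light_ch` coefficients per voxel
        # (e.g. spherical Gaussian parameters, assumed here) and
        # `depth_cells` cells along the view ray, so lighting varies in 3D
        # rather than being a single environment map.
        self.light = head(light_ch * depth_cells)

    def forward(self, img):
        f = self.encoder(img)
        b, _, h, w = f.shape
        light_vol = self.light(f).view(b, -1, self.depth_cells, h, w)
        return self.albedo(f), self.normal(f), self.depth(f), light_vol

preds = JointInverseNet()(torch.rand(1, 3, 64, 64))  # albedo, normal, depth, light volume
```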
- Self-supervised Outdoor Scene Relighting [92.20785788740407]
We propose a self-supervised approach for relighting.
Our approach is trained only on corpora of images collected from the internet, without any user supervision.
Results show the ability of our technique to produce photo-realistic and physically plausible results that generalize to unseen scenes.
arXiv Detail & Related papers (2021-07-07T09:46:19Z)
- Free-viewpoint Indoor Neural Relighting from Multi-view Stereo [5.306819482496464]
We introduce a neural relighting algorithm for captured indoor scenes that allows interactive free-viewpoint navigation.
Our method allows illumination to be changed synthetically, while coherently rendering cast shadows and complex glossy materials.
arXiv Detail & Related papers (2021-06-24T20:09:40Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
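The volume-rendering idea in the Neural Reflectance Fields entry above can be sketched as a ray-marching loop that shades each sample with reflectance queried from the field. `field` is a hypothetical callable; the paper additionally handles full BRDFs and transmittance along the light ray.

```python
# Compact ray marcher in the spirit of Neural Reflectance Fields: accumulate
# radiance using density, normal and reflectance queried from a field.
import numpy as np

def march(ray_o, ray_d, light_pos, field, n_steps=64, t_far=4.0):
    ts = np.linspace(0.0, t_far, n_steps)
    dt = ts[1] - ts[0]
    radiance, transmittance = np.zeros(3), 1.0
    for t in ts:
        p = ray_o + t * ray_d
        sigma, normal, albedo = field(p)       # density + reflectance props
        wi = light_pos - p
        wi /= np.linalg.norm(wi)
        # Simple diffuse shading at the sample; a real implementation would
        # evaluate the full BRDF and light visibility here.
        color = albedo * max(float(normal @ wi), 0.0)
        alpha = 1.0 - np.exp(-sigma * dt)
        radiance += transmittance * alpha * color
        transmittance *= 1.0 - alpha
    return radiance
```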
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.