Spatially-Varying Outdoor Lighting Estimation from Intrinsics
- URL: http://arxiv.org/abs/2104.04160v1
- Date: Fri, 9 Apr 2021 02:28:54 GMT
- Title: Spatially-Varying Outdoor Lighting Estimation from Intrinsics
- Authors: Yongjie Zhu, Yinda Zhang, Si Li, Boxin Shi
- Abstract summary: We present SOLID-Net, a neural network for spatially-varying outdoor lighting estimation.
We generate spatially-varying local lighting environment maps by combining a global sky environment map with warped image information.
Experiments on both synthetic and real datasets show that SOLID-Net significantly outperforms previous methods.
- Score: 66.04683041837784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present SOLID-Net, a neural network for spatially-varying outdoor lighting
estimation from a single outdoor image for any 2D pixel location. Previous work
has used a unified sky environment map to represent outdoor lighting. Instead,
we generate spatially-varying local lighting environment maps by combining a
global sky environment map with warped image information according to geometric
information estimated from intrinsics. As no outdoor dataset with image and
local lighting ground truth is readily available, we introduce the SOLID-Img
dataset with physically-based rendered images and their corresponding intrinsic
and lighting information. We train a deep neural network to regress intrinsic
cues with physically-based constraints and use them to perform global and local
lighting estimation. Experiments on both synthetic and real datasets show that
SOLID-Net significantly outperforms previous methods.
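The abstract describes composing a global sky environment map with warped local image content into per-pixel lighting environment maps. A minimal sketch of that composition idea is below; the function name, array shapes, and the simple visibility-mask blend are illustrative assumptions, not SOLID-Net's actual architecture:

```python
import numpy as np

def local_env_map(global_sky, warped_local, sky_visibility):
    """Blend a global sky environment map with warped local image content
    to form a per-pixel local lighting environment map (illustrative only):
      global_sky:     (H, W, 3) HDR sky environment map
      warped_local:   (H, W, 3) scene appearance warped into the same
                      panoramic parameterization using estimated geometry
      sky_visibility: (H, W, 1) mask, 1 where the sky is directly visible
    """
    return sky_visibility * global_sky + (1.0 - sky_visibility) * warped_local

# Toy example: upper half of the panorama sees the sky, lower half is
# occluded by nearby geometry and takes the warped local appearance.
sky = np.full((4, 8, 3), 2.0)        # bright HDR sky radiance
local = np.full((4, 8, 3), 0.3)      # darker local surface appearance
mask = np.zeros((4, 8, 1))
mask[:2] = 1.0
env = local_env_map(sky, local, mask)
```

The blend makes the spatial variation explicit: each 2D pixel location gets its own environment map because the warped local content and visibility differ per location.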
Related papers
- SplitNeRF: Split Sum Approximation Neural Field for Joint Geometry,
Illumination, and Material Estimation [65.99344783327054]
We present a novel approach for digitizing real-world objects by estimating their geometry, material properties, and lighting.
Our method incorporates into Neural Radiance Field (NeRF) pipelines the split sum approximation used with image-based lighting for real-time physically-based rendering.
Our method is capable of attaining state-of-the-art relighting quality after only $\sim$1 hour of training on a single NVIDIA A100 GPU.
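For context, the split sum approximation referenced here comes from real-time image-based lighting; it factors the specular reflectance integral into two independently precomputable sums (this is the standard statement of the technique, not SplitNeRF's specific formulation):

$$
\int_{\Omega} L_i(\omega)\, f(\omega, v)\,(n \cdot \omega)\, d\omega
\;\approx\;
\left( \frac{1}{N} \sum_{k=1}^{N} L_i(\omega_k) \right)
\left( \frac{1}{N} \sum_{k=1}^{N} \frac{f(\omega_k, v)\,(n \cdot \omega_k)}{p(\omega_k)} \right)
$$

The first factor is precomputed as a prefiltered environment map, and the second as an environment BRDF lookup table, which is what makes the approximation cheap enough for real-time rendering.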
arXiv Detail & Related papers (2023-11-28T10:36:36Z) - Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z) - Neural Light Field Estimation for Street Scenes with Differentiable
Virtual Object Insertion [129.52943959497665]
Existing works on outdoor lighting estimation typically simplify the scene lighting into an environment map.
We propose a neural approach that estimates the 5D HDR light field from a single image.
We show the benefits of our AR object insertion in an autonomous driving application.
arXiv Detail & Related papers (2022-08-19T17:59:16Z) - Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z) - Spatially and color consistent environment lighting estimation using
deep neural networks for mixed reality [1.1470070927586016]
This paper presents a CNN-based model to estimate complex lighting for mixed reality environments.
We propose a new CNN architecture that inputs an RGB image and recognizes, in real-time, the environment lighting.
We show in experiments that the CNN architecture can predict the environment lighting with an average mean squared error (MSE) of 7.85e-04 on SH lighting coefficients.
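The reported metric is a mean squared error over predicted versus ground-truth spherical-harmonics (SH) lighting coefficients. A sketch of how such a metric is computed is below; the 9x3 coefficient shape (2nd-order SH times RGB) and the toy values are assumptions for illustration:

```python
import numpy as np

def sh_mse(pred, gt):
    """Mean squared error between two sets of SH lighting coefficients."""
    return float(np.mean((pred - gt) ** 2))

# Hypothetical 2nd-order SH lighting: 9 coefficients per RGB channel.
gt = np.zeros((9, 3))
gt[0] = 1.0                 # ambient-only (band-0) ground-truth lighting
pred = gt + 0.01            # prediction with a small uniform error
err = sh_mse(pred, gt)      # (0.01)^2 averaged over all coefficients
```

With a uniform per-coefficient error of 0.01, the MSE is simply 0.01 squared, i.e. 1e-4, on the same order as the error reported in the paper.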
arXiv Detail & Related papers (2021-08-17T23:03:55Z) - PX-NET: Simple and Efficient Pixel-Wise Training of Photometric Stereo
Networks [26.958763133729846]
Retrieving accurate 3D reconstructions of objects from the way they reflect light is a very challenging task in computer vision.
We propose a novel pixel-wise training procedure for normal prediction by replacing the training data (observation maps) of globally rendered images with independent per-pixel generated data.
Our network, PX-NET, achieves state-of-the-art performance among pixel-wise methods on synthetic datasets.
arXiv Detail & Related papers (2020-08-11T18:03:13Z) - Object-based Illumination Estimation with Rendering-aware Neural
Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent to the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z) - Deep Lighting Environment Map Estimation from Spherical Panoramas [0.0]
We present a data-driven model that estimates an HDR lighting environment map from a single LDR monocular spherical panorama.
We exploit the availability of surface geometry to employ image-based relighting as a data generator and supervision mechanism.
arXiv Detail & Related papers (2020-05-16T14:23:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.