EMLight: Lighting Estimation via Spherical Distribution Approximation
- URL: http://arxiv.org/abs/2012.11116v1
- Date: Mon, 21 Dec 2020 04:54:08 GMT
- Title: EMLight: Lighting Estimation via Spherical Distribution Approximation
- Authors: Fangneng Zhan, Changgong Zhang, Yingchen Yu, Yuan Chang, Shijian Lu,
Feiying Ma, Xuansong Xie
- Abstract summary: We propose an illumination estimation framework that leverages a regression network and a neural projector for accurate illumination estimation.
We decompose the illumination map into spherical light distribution, light intensity and the ambient term.
Under the guidance of the predicted spherical distribution, light intensity and ambient term, the neural projector synthesizes panoramic illumination maps with realistic light frequency.
- Score: 33.26530733479459
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Illumination estimation from a single image is critical in 3D rendering and
it has been investigated extensively in the computer vision and computer
graphics research communities. However, existing works estimate illumination
by either regressing light parameters or generating illumination maps, which
are often hard to optimize or tend to produce inaccurate predictions.
We propose Earth Mover Light (EMLight), an illumination estimation framework
that leverages a regression network and a neural projector for accurate
illumination estimation. We decompose the illumination map into spherical light
distribution, light intensity and the ambient term, and define the illumination
estimation as a parameter regression task for the three illumination
components. Motivated by the Earth Mover distance, we design a novel spherical
mover's loss that guides the network to regress light distribution parameters
accurately by exploiting the subtleties of the spherical distribution. Under the
guidance of the predicted spherical distribution, light intensity and ambient
term, the neural projector synthesizes panoramic illumination maps with
realistic light frequency. Extensive experiments show that EMLight achieves
accurate illumination estimation and the generated relighting in 3D object
embedding exhibits superior plausibility and fidelity as compared with
state-of-the-art methods.
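The spherical mover's loss described above builds on the Earth Mover (Wasserstein) distance between light distributions defined over anchor points on the unit sphere. A minimal sketch of such a loss, assuming entropy-regularized Sinkhorn iteration as the EMD approximation and a Fibonacci lattice for the anchors (both are illustrative choices, not necessarily the paper's exact formulation):

```python
import numpy as np

def fibonacci_sphere(n):
    # Quasi-uniform anchor points on the unit sphere (Fibonacci lattice).
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def spherical_mover_loss(p, q, anchors, reg=0.05, iters=200):
    """Entropy-regularized Earth Mover distance (Sinkhorn) between two
    distributions p, q over fixed spherical anchors; the transport cost
    between anchors is their geodesic (great-circle) distance."""
    cos = np.clip(anchors @ anchors.T, -1.0, 1.0)
    C = np.arccos(cos)                  # geodesic distance matrix
    K = np.exp(-C / reg)                # Gibbs kernel
    u = np.ones_like(p)
    for _ in range(iters):              # Sinkhorn fixed-point iterations
        v = q / (K.T @ u)
        u = p / (K @ v)
    P = u[:, None] * K * v[None, :]     # approximate transport plan
    return float(np.sum(P * C))         # expected transport cost
```

Because the cost is geodesic, mass moved to a nearby anchor is penalized far less than mass moved across the sphere, which is what lets such a loss exploit the spatial structure of the spherical distribution rather than comparing anchor weights independently.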
Related papers
- MixLight: Borrowing the Best of both Spherical Harmonics and Gaussian Models [69.39388799906409]
Existing works estimate illumination by generating illumination maps or regressing illumination parameters.
This paper presents MixLight, a joint model that utilizes the complementary characteristics of SH and SG to achieve a more complete illumination representation.
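The complementary characteristics of SH and SG can be illustrated with a toy mixed radiance model: low-order spherical harmonics capture smooth ambient lighting, while spherical Gaussian lobes capture concentrated light sources. The function names and the order-1 SH truncation below are illustrative assumptions, not MixLight's actual implementation:

```python
import numpy as np

def sg_radiance(v, axis, sharpness, amplitude):
    # Spherical Gaussian lobe evaluated at unit direction(s) v:
    # peaks at `axis`, falls off with `sharpness`.
    return amplitude * np.exp(sharpness * (v @ axis - 1.0))

def sh_l1_radiance(v, coeffs):
    # Real spherical harmonics up to order 1 (4 coefficients):
    # constant band plus the three linear bands (y, z, x).
    c0 = 0.2820947917738781            # sqrt(1 / (4*pi))
    c1 = 0.4886025119029199            # sqrt(3 / (4*pi))
    basis = np.stack([np.full(v.shape[0], c0),
                      c1 * v[:, 1], c1 * v[:, 2], c1 * v[:, 0]], axis=1)
    return basis @ coeffs

def mixed_radiance(v, sh_coeffs, lobes):
    # Joint representation: smooth SH ambient term plus sharp SG lobes.
    out = sh_l1_radiance(v, sh_coeffs)
    for axis, sharpness, amplitude in lobes:
        out += sg_radiance(v, axis, sharpness, amplitude)
    return out
```

SH alone needs many bands to represent a sharp source, and SG alone wastes lobes on diffuse ambient light; summing the two lets each representation cover the regime it handles well.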
arXiv Detail & Related papers (2024-04-19T10:17:10Z)
- GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization [62.13932669494098]
This paper presents a 3D Gaussian Inverse Rendering (GIR) method, employing 3D Gaussian representations to factorize the scene into material properties, light, and geometry.
We compute the normal of each 3D Gaussian using the shortest eigenvector, with a directional masking scheme forcing accurate normal estimation without external supervision.
We adopt an efficient voxel-based indirect illumination tracing scheme that stores direction-aware outgoing radiance in each 3D Gaussian to disentangle secondary illumination for approximating multi-bounce light transport.
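The shortest-eigenvector normal mentioned above can be sketched directly: for a flattened 3D Gaussian, the covariance eigenvector with the smallest eigenvalue approximates the surface normal. The viewer-facing sign flip below is a common convention standing in for GIR's directional masking scheme, which is more involved:

```python
import numpy as np

def gaussian_normal(cov, view_dir):
    """Normal of a flattened 3D Gaussian: the eigenvector of its
    covariance with the smallest eigenvalue (the shortest axis),
    oriented to face the viewer. `view_dir` points from camera to
    splat; the flip is a simplification of directional masking."""
    w, V = np.linalg.eigh(cov)      # eigenvalues in ascending order
    n = V[:, 0]                     # shortest axis = surface normal
    if n @ view_dir > 0:            # flip toward the camera
        n = -n
    return n
```

For a disk-like Gaussian (two large eigenvalues, one small), the shortest principal axis is perpendicular to the disk, which is exactly the surface normal a splat-based surface representation needs.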
arXiv Detail & Related papers (2023-12-08T16:05:15Z)
- Physics-based Indirect Illumination for Inverse Rendering [70.27534648770057]
We present a physics-based inverse rendering method that learns the illumination, geometry, and materials of a scene from posed multi-view RGB images.
As a side product, our physics-based inverse rendering model also facilitates flexible and realistic material editing as well as relighting.
arXiv Detail & Related papers (2022-12-09T07:33:49Z)
- A CNN Based Approach for the Point-Light Photometric Stereo Problem [26.958763133729846]
We propose a CNN-based approach capable of handling realistic assumptions by leveraging recent improvements of deep neural networks for far-field Photometric Stereo.
Our approach outperforms the state-of-the-art on the DiLiGenT real world dataset.
To measure the performance of our approach on near-field point-light source PS data, we introduce LUCES, the first real-world dataset for 'near-fieLd point light soUrCe photomEtric Stereo'.
arXiv Detail & Related papers (2022-10-10T12:57:12Z)
- Sparse Needlets for Lighting Estimation with Spherical Transport Loss [89.52531416604774]
NeedleLight is a new lighting estimation model that represents illumination with needlets and allows lighting estimation in both frequency domain and spatial domain jointly.
Extensive experiments show that NeedleLight achieves superior lighting estimation consistently across multiple evaluation metrics as compared with state-of-the-art methods.
arXiv Detail & Related papers (2021-06-24T15:19:42Z)
- GMLight: Lighting Estimation via Geometric Distribution Approximation [86.95367898017358]
This paper presents a lighting estimation framework that employs a regression network and a generative projector for effective illumination estimation.
We parameterize illumination scenes in terms of the geometric light distribution, light intensity, ambient term, and auxiliary depth, and estimate them as a pure regression task.
With the estimated lighting parameters, the generative projector synthesizes panoramic illumination maps with realistic appearance and frequency.
arXiv Detail & Related papers (2021-02-20T03:31:52Z)
- Deep Lighting Environment Map Estimation from Spherical Panoramas [0.0]
We present a data-driven model that estimates an HDR lighting environment map from a single LDR monocular spherical panorama.
We exploit the availability of surface geometry to employ image-based relighting as a data generator and supervision mechanism.
arXiv Detail & Related papers (2020-05-16T14:23:05Z)
- Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination [84.00096195633793]
We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair.
Our model is trained without any ground truth 3D data and only requires a held-out perspective view near the input stereo pair and a spherical panorama taken within each scene as supervision.
We demonstrate that our method can predict consistent spatially-varying lighting that is convincing enough to plausibly relight and insert highly specular virtual objects into real images.
arXiv Detail & Related papers (2020-03-18T17:46:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.