LightOctree: Lightweight 3D Spatially-Coherent Indoor Lighting Estimation
- URL: http://arxiv.org/abs/2404.03925v1
- Date: Fri, 5 Apr 2024 07:15:06 GMT
- Title: LightOctree: Lightweight 3D Spatially-Coherent Indoor Lighting Estimation
- Authors: Xuecan Wang, Shibang Xiao, Xiaohui Liang
- Abstract summary: We present a lightweight solution for estimating spatially-coherent indoor lighting from a single RGB image.
We introduce a unified, voxel octree-based illumination estimation framework to produce 3D spatially-coherent lighting.
- Score: 4.079873017864992
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a lightweight solution for estimating spatially-coherent indoor lighting from a single RGB image. Previous methods for estimating illumination using volumetric representations have overlooked the sparse distribution of light sources in space, necessitating substantial memory and computational resources for achieving high-quality results. We introduce a unified, voxel octree-based illumination estimation framework to produce 3D spatially-coherent lighting. Additionally, a differentiable voxel octree cone tracing rendering layer is proposed to eliminate regular volumetric representation throughout the entire process and ensure the retention of features across different frequency domains. This reduction significantly decreases spatial usage and required floating-point operations without substantially compromising precision. Experimental results demonstrate that our approach achieves high-quality coherent estimation with minimal cost compared to previous methods.
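The abstract ships no reference code; the sketch below is only a minimal illustration of the core idea, assuming a sparse voxel octree whose nodes store pre-filtered radiance and occupancy, traced front-to-back with a footprint that widens along the cone. Names such as `OctreeNode` and `cone_trace` are illustrative, not the authors' API.

```python
import numpy as np

class OctreeNode:
    """Sparse voxel octree node holding pre-filtered RGB radiance and occupancy."""
    def __init__(self, center, half_size, radiance=(0.0, 0.0, 0.0), alpha=0.0):
        self.center = np.asarray(center, dtype=float)
        self.half_size = float(half_size)        # half of the node's edge length
        self.radiance = np.array(radiance, dtype=float)
        self.alpha = float(alpha)                # occupancy in [0, 1]
        self.children = [None] * 8               # sparse: empty regions stay None

    def lookup(self, p, footprint):
        """Descend until the node is no larger than the query footprint (a mip level)."""
        if 2.0 * self.half_size <= footprint or all(c is None for c in self.children):
            return self
        idx = sum(1 << k for k in range(3) if p[k] >= self.center[k])
        child = self.children[idx]
        return child.lookup(p, footprint) if child is not None else self

def cone_trace(root, origin, direction, aperture=0.577, max_dist=4.0):
    """Front-to-back accumulation along one cone. The sampling footprint grows
    with distance, so far samples read coarser (pre-filtered) octree levels."""
    direction = direction / np.linalg.norm(direction)
    color, transmittance, t = np.zeros(3), 1.0, 0.05
    while t < max_dist and transmittance > 1e-3:
        footprint = max(2.0 * aperture * t, 0.05)       # cone diameter at distance t
        node = root.lookup(origin + t * direction, footprint)
        color += transmittance * node.alpha * node.radiance
        transmittance *= 1.0 - node.alpha
        t += 0.5 * footprint                            # step scales with footprint
    return color
```

Because coarser octree levels hold pre-averaged radiance, one lookup per step replaces many rays, and empty space costs no storage; this is where the memory and floating-point savings the abstract claims come from.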
Related papers
- Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering [62.92985004295714]
We present a method that avoids approximations that introduce bias into the renderings and, more importantly, the gradients used for optimization.
We show that by removing these biases our approach improves the generality of radiance cache based inverse rendering, as well as increasing quality in the presence of challenging light transport effects such as specular reflections.
arXiv Detail & Related papers (2024-09-09T17:59:57Z)
- MixLight: Borrowing the Best of both Spherical Harmonics and Gaussian Models [69.39388799906409]
Existing works estimate illumination by generating illumination maps or regressing illumination parameters.
This paper presents MixLight, a joint model that utilizes the complementary characteristics of SH and SG to achieve a more complete illumination representation.
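MixLight's exact parameterization is not given in this summary; the sketch below only illustrates why the two bases are complementary, evaluating environment radiance as a low-order SH term (smooth, diffuse light) plus a sum of spherical Gaussian lobes (sharp sources). The coefficient shapes and the `sg_lobes` format are assumptions.

```python
import numpy as np

def sh_basis_l2(d):
    """Real spherical-harmonics basis up to order 2 (9 terms) at unit direction d."""
    x, y, z = d
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z, 0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def radiance(d, sh_coeffs, sg_lobes):
    """Environment radiance toward direction d: a smooth SH term plus sharp
    spherical-Gaussian lobes.  sh_coeffs: (9, 3) RGB coefficients;
    sg_lobes: iterable of (unit axis mu, sharpness lam, RGB amplitude a)."""
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    out = sh_basis_l2(d) @ sh_coeffs                     # low-frequency part (SH)
    for mu, lam, a in sg_lobes:                          # high-frequency part (SG)
        out = out + np.asarray(a) * np.exp(lam * (np.dot(mu, d) - 1.0))
    return out
```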
arXiv Detail & Related papers (2024-04-19T10:17:10Z)
- GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization [62.13932669494098]
This paper presents a 3D Gaussian Inverse Rendering (GIR) method, employing 3D Gaussian representations to factorize the scene into material properties, light, and geometry.
We compute the normal of each 3D Gaussian as the eigenvector associated with its smallest eigenvalue, with a directional masking scheme enforcing accurate normal estimation without external supervision.
We adopt an efficient voxel-based indirect illumination tracing scheme that stores direction-aware outgoing radiance in each 3D Gaussian to disentangle secondary illumination for approximating multi-bounce light transport.
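The shortest-eigenvector step is easy to make concrete: for a Gaussian with covariance Sigma, the eigenvector of the smallest eigenvalue is the axis along which the splat is thinnest. A minimal numpy sketch, with a simple flip toward the viewer standing in for GIR's more involved directional masking:

```python
import numpy as np

def gaussian_normal(cov, view_dir):
    """Normal of a 3D Gaussian: the eigenvector of the smallest eigenvalue of
    its covariance, i.e. the axis along which the splat is thinnest."""
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending; columns = eigenvectors
    n = eigvecs[:, 0]                        # shortest principal axis
    if np.dot(n, view_dir) > 0.0:            # crude stand-in for directional masking:
        n = -n                               # orient the normal toward the viewer
    return n

# A pancake-shaped Gaussian flattened along z, seen from above:
cov = np.diag([1.0, 1.0, 0.01])
print(gaussian_normal(cov, view_dir=np.array([0.0, 0.0, -1.0])))  # ~ [0, 0, 1]
```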
arXiv Detail & Related papers (2023-12-08T16:05:15Z)
- High-Resolution Volumetric Reconstruction for Clothed Humans [27.900514732877827]
We present a novel method for reconstructing clothed humans from a sparse set of RGB images (e.g., 1 to 6 views).
Our method reduces the mean point-to-surface (P2S) error of state-of-the-art methods by more than 50%, achieving approximately 2mm accuracy at a volume resolution of 512.
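For context on the metric: point-to-surface (P2S) error is usually reported as the mean distance from ground-truth surface points to the reconstructed surface. A rough sketch that approximates it with nearest-neighbor distances to densely sampled reconstruction points (an exact implementation would use point-to-triangle distances):

```python
import numpy as np
from scipy.spatial import cKDTree

def p2s_error(gt_points, recon_points):
    """Mean point-to-surface error, approximated by the distance from each
    ground-truth surface point to its nearest sample on the reconstruction."""
    dists, _ = cKDTree(recon_points).query(gt_points)
    return float(dists.mean())

# Sanity check: identical point sets give zero error.
pts = np.random.rand(1000, 3)
assert p2s_error(pts, pts) == 0.0
```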
arXiv Detail & Related papers (2023-07-25T06:37:50Z)
- Towards Scalable Multi-View Reconstruction of Geometry and Materials [27.660389147094715]
We propose a novel method for joint recovery of camera pose, object geometry and spatially-varying Bidirectional Reflectance Distribution Function (svBRDF) of 3D scenes.
The inputs are high-resolution RGBD images captured by a mobile, hand-held capture system with point lights for active illumination.
arXiv Detail & Related papers (2023-06-06T15:07:39Z)
- Sparse Needlets for Lighting Estimation with Spherical Transport Loss [89.52531416604774]
NeedleLight is a new lighting estimation model that represents illumination with needlets, enabling lighting estimation jointly in the frequency and spatial domains.
Extensive experiments show that NeedleLight achieves superior lighting estimation consistently across multiple evaluation metrics as compared with state-of-the-art methods.
arXiv Detail & Related papers (2021-06-24T15:19:42Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps keeps the model compact and efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
- EMLight: Lighting Estimation via Spherical Distribution Approximation [33.26530733479459]
We propose a framework that leverages a regression network and a neural projector for accurate illumination estimation.
We decompose the illumination map into spherical light distribution, light intensity and the ambient term.
Under the guidance of the predicted spherical distribution, light intensity and ambient term, the neural projector synthesizes panoramic illumination maps with realistic light frequency.
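As a rough analytic stand-in for the neural projector, the sketch below rebuilds a panoramic map from the three predicted factors: anchor weights splatted as spherical Gaussians (the spherical light distribution), a global intensity scale, and a constant ambient term. The resolution, sharpness, and parameter shapes are assumptions; EMLight's learned projector replaces this closed-form splat.

```python
import numpy as np

def reconstruct_envmap(anchors, weights, intensity, ambient,
                       height=64, sharpness=40.0):
    """Panoramic illumination map from the three predicted factors:
    anchor weights splatted as spherical Gaussians (the light distribution),
    a global intensity scale, and a constant ambient term."""
    width = 2 * height
    v, u = np.meshgrid((np.arange(height) + 0.5) / height,
                       (np.arange(width) + 0.5) / width, indexing="ij")
    theta, phi = v * np.pi, u * 2.0 * np.pi
    dirs = np.stack([np.sin(theta) * np.cos(phi),       # (H, W, 3) unit directions
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)
    env = np.zeros((height, width, 3)) + ambient        # ambient term
    for mu, w in zip(anchors, weights):                 # one Gaussian per anchor point
        lobe = np.exp(sharpness * (dirs @ mu - 1.0))    # peaks at direction mu
        env += intensity * np.asarray(w) * lobe[..., None]
    return env
```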
arXiv Detail & Related papers (2020-12-21T04:54:08Z)
- PointAR: Efficient Lighting Estimation for Mobile Augmented Reality [7.58114840374767]
We propose an efficient lighting estimation pipeline suitable for running on modern mobile devices.
PointAR takes a single RGB-D image captured from the mobile camera and a 2D location in that image, and estimates 2nd order spherical harmonics coefficients.
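Concretely, "2nd order spherical harmonics coefficients" means a 9 x 3 block of numbers (9 basis functions x RGB). The sketch below shows the conventional relationship between such coefficients and an environment map, projecting an equirectangular map onto the order-2 real SH basis; PointAR regresses the same quantity directly from an RGB-D image instead:

```python
import numpy as np

def project_to_sh2(envmap):
    """Project an equirectangular RGB environment map onto the order-2 real SH
    basis, yielding a (9, 3) coefficient block."""
    H, W, _ = envmap.shape
    v, u = np.meshgrid((np.arange(H) + 0.5) / H,
                       (np.arange(W) + 0.5) / W, indexing="ij")
    theta, phi = v * np.pi, u * 2.0 * np.pi
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    basis = np.stack([0.282095 * np.ones_like(x),                    # (9, H, W)
                      0.488603 * y, 0.488603 * z, 0.488603 * x,
                      1.092548 * x * y, 1.092548 * y * z,
                      0.315392 * (3.0 * z * z - 1.0),
                      1.092548 * x * z, 0.546274 * (x * x - y * y)])
    d_omega = (np.pi / H) * (2.0 * np.pi / W) * np.sin(theta)        # texel solid angle
    return np.einsum("khw,hwc,hw->kc", basis, envmap, d_omega)

coeffs = project_to_sh2(np.ones((32, 64, 3)))    # constant white environment
print(coeffs[0])                                  # ~ [3.545, 3.545, 3.545] = sqrt(4*pi)
```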
arXiv Detail & Related papers (2020-03-30T19:13:26Z)
- Spatial-Spectral Residual Network for Hyperspectral Image Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and spectral separable 3D convolutions to extract spatial and spectral information, which not only reduces memory usage and computational cost, but also makes the network easier to train; a sketch of the factorization follows.
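A minimal PyTorch sketch of that factorization, assuming a 1x3x3 spatial kernel followed by a 3x1x1 spectral kernel in place of a dense 3x3x3 one (the layer arrangement and ReLU placement are illustrative, not SSRNet's exact unit):

```python
import torch
import torch.nn as nn

class SeparableConv3d(nn.Module):
    """Factorized 3D convolution for hyperspectral volumes laid out as
    (N, C, bands, H, W): a 1x3x3 kernel mixes spatial neighbours, then a
    3x1x1 kernel mixes adjacent bands, replacing a dense 3x3x3 kernel
    (9 + 3 weights per channel pair instead of 27)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.spectral = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))

    def forward(self, x):
        return self.spectral(torch.relu(self.spatial(x)))

x = torch.randn(1, 1, 31, 64, 64)                 # a 31-band hyperspectral patch
print(SeparableConv3d(1, 16)(x).shape)            # torch.Size([1, 16, 31, 64, 64])
```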
arXiv Detail & Related papers (2020-01-14T03:34:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.