Geometry-Aware Global Feature Aggregation for Real-Time Indirect Illumination
- URL: http://arxiv.org/abs/2508.08826v2
- Date: Wed, 15 Oct 2025 04:06:55 GMT
- Title: Geometry-Aware Global Feature Aggregation for Real-Time Indirect Illumination
- Authors: Meng Gai, Guoping Wang, Sheng Li
- Abstract summary: We present a learning-based estimator to predict diffuse indirect illumination in screen space. It is combined with direct illumination to synthesize globally-illuminated high dynamic range results. Our approach excels at handling complex lighting such as varying-colored lighting and environment lighting.
- Score: 16.592953713673506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-time rendering with global illumination is crucial to affording the user a realistic experience in virtual environments. We present a learning-based estimator that predicts diffuse indirect illumination in screen space, which is then combined with direct illumination to synthesize globally-illuminated high dynamic range (HDR) results. Our approach tackles the challenge of capturing long-range/long-distance indirect illumination with neural networks and generalizes to complex lighting and scenarios. Viewing the neural network as a solver of the rendering equation, we present a novel network architecture to predict indirect illumination. Our network is equipped with a modified attention mechanism that aggregates global information guided by spatial geometry features, as well as a monochromatic design that encodes each color channel individually. We conducted extensive evaluations, and the experimental results demonstrate the superiority of our method over previous learning-based techniques. Our approach excels at handling complex lighting such as varying-colored lighting and environment lighting. It successfully captures distant indirect illumination and simulates the interreflections between textured surfaces well (i.e., color bleeding effects); it can also effectively handle new scenes that are not present in the training dataset.
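The abstract describes an attention mechanism whose weights are guided by spatial geometry features. The paper's actual architecture is not given here, so the following is only a minimal sketch of the general idea, assuming per-pixel geometry descriptors (e.g. normals, depth, position) and a plain dot-product affinity; the function name and shapes are hypothetical.

```python
import numpy as np

def geometry_guided_attention(features, geometry, temperature=1.0):
    """Hypothetical sketch (not the paper's exact architecture):
    attention weights are derived from pairwise geometry affinity,
    so pixels with similar geometry contribute more to the
    globally aggregated feature at each pixel.

    features: (N, C) screen-space shading features for N pixels
    geometry: (N, G) geometry descriptors (normal, depth, position)
    """
    # Pairwise geometry affinity via dot products of descriptors.
    affinity = geometry @ geometry.T / temperature          # (N, N)
    # Softmax over keys turns affinities into attention weights.
    affinity -= affinity.max(axis=1, keepdims=True)         # numerical stability
    weights = np.exp(affinity)
    weights /= weights.sum(axis=1, keepdims=True)
    # Each pixel gathers features from all others, weighted by
    # geometric similarity -- a form of global aggregation.
    return weights @ features                               # (N, C)
```

When all pixels share identical geometry, the weights degenerate to a uniform average, so the geometry term is what lets the aggregation favor geometrically related regions.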
Related papers
- Relightable Holoported Characters: Capturing and Relighting Dynamic Human Performance from Sparse Views [82.15089065452081]
We present Relightable Holoported Characters (RHC), a person-specific method for free-view rendering and relighting of full-body and highly dynamic humans. Our transformer-based RelightNet predicts relit appearance within a single network pass, avoiding costly OLAT-basis capture and generation. Experiments demonstrate our method's superior visual fidelity and lighting reproduction compared to state-of-the-art approaches.
arXiv Detail & Related papers (2025-11-29T00:17:34Z) - A Generalizable Light Transport 3D Embedding for Global Illumination [30.088406137167997]
We propose a generalizable 3D light transport embedding that approximates global illumination directly from 3D scene configurations. A scalable transformer models global point-to-point interactions to encode these features into neural primitives. We demonstrate results on diffuse global illumination prediction across diverse indoor scenes with varying layouts, geometry, and materials.
arXiv Detail & Related papers (2025-10-21T00:29:09Z) - LuxDiT: Lighting Estimation with Video Diffusion Transformer [66.60450792095901]
Estimating scene lighting from a single image or video remains a longstanding challenge in computer vision and graphics. We propose LuxDiT, a novel data-driven approach that fine-tunes a video diffusion transformer to generate HDR environment maps conditioned on visual input.
arXiv Detail & Related papers (2025-09-03T19:59:20Z) - DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our experimental results achieve state-of-the-art performance on both synthetic data and real-world data tracking.
arXiv Detail & Related papers (2023-11-30T21:34:44Z) - Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because the exponential operation introduces high computational complexity, we propose to use a Taylor series to approximate gamma correction.
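The abstract only states that a Taylor series replaces the costly exponentiation in gamma correction; the paper's exact formulation is not given here. One common way to realize this, sketched below under that assumption, is to rewrite x**gamma as exp(gamma * ln(x)) and expand exp with its Taylor series, so the curve is evaluated with only adds and multiplies. The function name and term count are hypothetical.

```python
import math

def gamma_taylor(x, gamma, terms=8):
    """Sketch (not necessarily the paper's formulation): approximate
    x**gamma via the Taylor expansion of exp(u) at u = gamma * ln(x),
    avoiding a direct power/exponential evaluation. Assumes x > 0."""
    u = gamma * math.log(x)
    result, term = 1.0, 1.0
    for k in range(1, terms):
        term *= u / k           # builds u**k / k! incrementally
        result += term
    return result
```

More terms tighten the approximation; for typical display gammas and normalized intensities in (0, 1], around a dozen terms is already close to the exact power.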
arXiv Detail & Related papers (2023-08-16T08:46:51Z) - Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z) - RelightableHands: Efficient Neural Relighting of Articulated Hand Models [46.60594572471557]
We present the first neural relighting approach for rendering high-fidelity personalized hands that can be animated in real-time under novel illumination.
Our approach adopts a teacher-student framework, where the teacher learns appearance under a single point light from images captured in a light-stage.
Using images rendered by the teacher model as training data, an efficient student model directly predicts appearance under natural illuminations in real-time.
arXiv Detail & Related papers (2023-02-09T18:59:48Z) - Learning-based Inverse Rendering of Complex Indoor Scenes with Differentiable Monte Carlo Raytracing [27.96634370355241]
This work presents an end-to-end, learning-based inverse rendering framework incorporating differentiable Monte Carlo raytracing with importance sampling.
The framework takes a single image as input to jointly recover the underlying geometry, spatially-varying lighting, and photorealistic materials.
arXiv Detail & Related papers (2022-11-06T03:34:26Z) - Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z) - Spatially and color consistent environment lighting estimation using deep neural networks for mixed reality [1.1470070927586016]
This paper presents a CNN-based model to estimate complex lighting for mixed reality environments.
We propose a new CNN architecture that inputs an RGB image and recognizes, in real-time, the environment lighting.
We show in experiments that the CNN architecture can predict the environment lighting with an average mean squared error (MSE) of 7.85e-04 when comparing spherical harmonics (SH) lighting coefficients.
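The reported MSE is computed over spherical-harmonics lighting coefficients. A minimal sketch of that metric, assuming the common layout of one coefficient per SH basis function and color channel (e.g. 9 x 3 for 3-band SH; the exact band count used in the paper is not stated here):

```python
import numpy as np

def sh_mse(pred_coeffs, gt_coeffs):
    """Mean squared error between predicted and ground-truth
    spherical-harmonics lighting coefficients. Shapes are assumed
    (num_basis, 3), e.g. (9, 3) for 3-band SH -- an assumption,
    not a detail taken from the paper."""
    pred = np.asarray(pred_coeffs, dtype=np.float64)
    gt = np.asarray(gt_coeffs, dtype=np.float64)
    return float(np.mean((pred - gt) ** 2))
```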
arXiv Detail & Related papers (2021-08-17T23:03:55Z) - Spatially-Varying Outdoor Lighting Estimation from Intrinsics [66.04683041837784]
We present SOLID-Net, a neural network for spatially-varying outdoor lighting estimation.
We generate spatially-varying local lighting environment maps by combining global sky environment map with warped image information.
Experiments on both synthetic and real datasets show that SOLID-Net significantly outperforms previous methods.
arXiv Detail & Related papers (2021-04-09T02:28:54Z) - PX-NET: Simple and Efficient Pixel-Wise Training of Photometric Stereo Networks [26.958763133729846]
Retrieving accurate 3D reconstructions of objects from the way they reflect light is a very challenging task in computer vision.
We propose a novel pixel-wise training procedure for normal prediction by replacing the training data (observation maps) of globally rendered images with independent per-pixel generated data.
Our network, PX-NET, achieves the state-of-the-art performance compared to other pixelwise methods on synthetic datasets.
arXiv Detail & Related papers (2020-08-11T18:03:13Z) - Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent to the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.