Vertex Features for Neural Global Illumination
- URL: http://arxiv.org/abs/2508.07852v1
- Date: Mon, 11 Aug 2025 11:10:19 GMT
- Title: Vertex Features for Neural Global Illumination
- Authors: Rui Su, Honghao Dong, Haojie Jin, Yisong Chen, Guoping Wang, Sheng Li
- Abstract summary: We present neural vertex features, a generalized formulation of learnable representation for neural rendering tasks involving explicit mesh surfaces. We validate our neural representation across diverse neural rendering tasks, with a specific emphasis on neural radiosity.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research on learnable neural representations has been widely adopted in the field of 3D scene reconstruction and neural rendering applications. However, traditional feature grid representations often suffer from substantial memory footprint, posing a significant bottleneck for modern parallel computing hardware. In this paper, we present neural vertex features, a generalized formulation of learnable representation for neural rendering tasks involving explicit mesh surfaces. Instead of uniformly distributing neural features throughout 3D space, our method stores learnable features directly at mesh vertices, leveraging the underlying geometry as a compact and structured representation for neural processing. This not only optimizes memory efficiency, but also improves feature representation by aligning compactly with the surface using task-specific geometric priors. We validate our neural representation across diverse neural rendering tasks, with a specific emphasis on neural radiosity. Experimental results demonstrate that our method reduces memory consumption to only one-fifth (or even less) of grid-based representations, while maintaining comparable rendering quality and lowering inference overhead.
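As a concrete illustration of the representation the abstract describes, here is a minimal PyTorch sketch of per-vertex learnable features gathered by barycentric interpolation and decoded by a small MLP. The class name, feature dimensions, and decoder architecture are our assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of neural vertex features (our assumptions, not the
# authors' code): each mesh vertex carries a learnable feature vector;
# a surface point is featurized by barycentric interpolation over the
# triangle containing it, then decoded by a small MLP.
import torch
import torch.nn as nn

class VertexFeatureField(nn.Module):
    def __init__(self, num_vertices: int, feat_dim: int = 16, out_dim: int = 3):
        super().__init__()
        # One learnable feature per vertex -- memory scales with the mesh,
        # not with a dense 3D feature grid.
        self.features = nn.Parameter(torch.randn(num_vertices, feat_dim) * 0.1)
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, out_dim),
        )

    def forward(self, tri_indices: torch.Tensor, bary: torch.Tensor):
        # tri_indices: (B, 3) vertex ids of the hit triangle
        # bary:        (B, 3) barycentric coordinates of the hit point
        tri_feats = self.features[tri_indices]                     # (B, 3, F)
        point_feat = (bary.unsqueeze(-1) * tri_feats).sum(dim=1)   # (B, F)
        return self.decoder(point_feat)

# Usage: decode features for two ray hits on a toy mesh with 8 vertices.
field = VertexFeatureField(num_vertices=8)
tris = torch.tensor([[0, 1, 2], [2, 3, 4]])
bary = torch.tensor([[0.2, 0.3, 0.5], [1.0, 0.0, 0.0]])
radiance = field(tris, bary)  # (2, 3)
```

Because the features live on the surface rather than filling 3D space, the memory cost tracks the vertex count, which is the source of the claimed one-fifth-or-less footprint relative to grid-based representations.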
Related papers
- Splat the Net: Radiance Fields with Splattable Neural Primitives [64.84677516748998]
Splattable neural primitives reconcile the expressivity of neural models with the efficiency of primitive-based splatting. Our representation supports integration along view rays without the need for costly ray marching.
arXiv Detail & Related papers (2025-10-09T17:31:11Z)
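The key claim in the entry above is integration along view rays without marching. As a generic illustration (our own toy, not the paper's primitive), the line integral of an isotropic 3D Gaussian along a ray has a closed form, so a primitive's contribution to a pixel needs no stepping:

```python
# Toy example of closed-form ray integration (not the paper's primitive):
# integral over t of exp(-|o + t*d - mu|^2 / (2 sigma^2)) with |d| = 1.
import math
import numpy as np

def gaussian_ray_integral(o, d, mu, sigma):
    v = mu - o
    t_par = float(np.dot(v, d))                # closest approach along the ray
    d_perp2 = float(np.dot(v, v)) - t_par**2   # squared perpendicular distance
    # Completing the square reduces the integral to a 1D Gaussian integral.
    return sigma * math.sqrt(2.0 * math.pi) * math.exp(-d_perp2 / (2.0 * sigma**2))

o = np.array([0.0, 0.0, 0.0])
d = np.array([0.0, 0.0, 1.0])                  # unit direction
print(gaussian_ray_integral(o, d, mu=np.array([0.1, 0.0, 2.0]), sigma=0.3))
```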
- Marching Neurons: Accurate Surface Extraction for Neural Implicit Shapes [14.372634421912094]
We introduce a novel approach for analytically extracting surfaces from neural implicit functions. Our method operates in parallel and can navigate large neural architectures. The resulting meshes faithfully capture the full geometric information from the network without ad-hoc spatial discretization.
arXiv Detail & Related papers (2025-09-25T11:06:42Z)
- Optimizing 3D Geometry Reconstruction from Implicit Neural Representations [2.3940819037450987]
Implicit neural representations have emerged as a powerful tool in learning 3D geometry.
We present a novel approach that both reduces computational expenses and enhances the capture of fine details.
arXiv Detail & Related papers (2024-10-16T16:36:23Z)
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
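The interface described in the N-BVH entry above, a network answering arbitrary ray queries, might look like the following sketch; the architecture, encoding, and output heads are our guesses, not the published design:

```python
# Sketch of a neural ray-query oracle in the spirit of the entry above
# (architecture and encodings are our assumptions): a network maps a ray
# to visibility, depth, and appearance attributes in one forward pass.
import torch
import torch.nn as nn

class NeuralRayOracle(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 5),  # 1 visibility logit + 1 depth + 3 albedo
        )

    def forward(self, origin: torch.Tensor, direction: torch.Tensor):
        x = self.net(torch.cat([origin, direction], dim=-1))
        visibility = torch.sigmoid(x[..., 0])   # probability the ray hits
        depth = torch.relu(x[..., 1])           # distance to the hit point
        albedo = torch.sigmoid(x[..., 2:5])     # appearance at the hit
        return visibility, depth, albedo

oracle = NeuralRayOracle()
o = torch.zeros(4, 3)
d = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
vis, depth, albedo = oracle(o, d)
```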
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Explicifying Neural Implicit Fields for Efficient Dynamic Human Avatar Modeling via a Neural Explicit Surface [10.604108229704336]
Implicit neural fields have advantages over traditional explicit representations in modeling dynamic 3D content.
The paper proposes utilizing Neural Explicit Surface (NES) to explicitly represent implicit neural fields.
NES performs similarly to previous 3D approaches, with greatly improved rendering speed and reduced memory cost.
arXiv Detail & Related papers (2023-08-07T07:17:18Z)
- Masked Wavelet Representation for Compact Neural Radiance Fields [5.279919461008267]
Using a multi-layer perceptron to represent a 3D scene or object requires enormous computational resources and time.
We present a method to reduce the size without compromising the advantages of having additional data structures.
With our proposed mask and compression pipeline, we achieved state-of-the-art performance within a memory budget of 2 MB.
arXiv Detail & Related papers (2022-12-18T11:43:32Z)
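Our reading of the entry above is that a learned binary mask selects which wavelet coefficients to keep. A minimal sketch of that idea follows, using a single-level 2D Haar transform and a straight-through estimator; the class, shapes, and training details are our assumptions and the actual pipeline differs:

```python
# Sketch of masked wavelet coefficients (our reading of the abstract):
# learn Haar-domain coefficients plus a mask, keep only masked entries,
# and reconstruct the dense grid with one inverse Haar step.
import torch
import torch.nn as nn

class MaskedHaarGrid(nn.Module):
    def __init__(self, h: int = 32, w: int = 32):
        super().__init__()
        # Single-level Haar subbands (LL, LH, HL, HH) and mask logits.
        self.coeffs = nn.Parameter(torch.randn(4, h // 2, w // 2) * 0.1)
        self.mask_logits = nn.Parameter(torch.zeros(4, h // 2, w // 2))

    def forward(self):
        soft = torch.sigmoid(self.mask_logits)
        hard = (soft > 0.5).float()
        mask = hard + soft - soft.detach()   # straight-through estimator
        ll, lh, hl, hh = self.coeffs * mask
        # Inverse single-level Haar: recombine subbands into 2x2 blocks.
        a = (ll + lh + hl + hh) / 2
        b = (ll - lh + hl - hh) / 2
        c = (ll + lh - hl - hh) / 2
        d = (ll - lh - hl + hh) / 2
        top = torch.stack([a, b], dim=-1).flatten(-2)          # interleave cols
        bot = torch.stack([c, d], dim=-1).flatten(-2)
        grid = torch.stack([top, bot], dim=-2).flatten(-3, -2)  # interleave rows
        return grid, hard.mean()             # grid plus kept-coefficient ratio

grid, density = MaskedHaarGrid()()
```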
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem, which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
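The entry above names a vector-quantized auto-decoder; here is a minimal sketch of that quantization step with a straight-through gradient. Codebook size, shapes, and the soft-assignment scheme are our assumptions for illustration:

```python
# Minimal sketch of a vector-quantized auto-decoder step (shapes and
# codebook size are our assumptions): each grid cell stores assignment
# logits over a small codebook; the forward pass snaps to the nearest
# entry while a straight-through estimator keeps training end-to-end.
import torch
import torch.nn as nn

class VQFeatureGrid(nn.Module):
    def __init__(self, num_cells: int = 1024, codebook_size: int = 64, dim: int = 8):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(codebook_size, dim) * 0.1)
        # Storing an index per cell instead of a full feature vector is
        # where the compression comes from.
        self.logits = nn.Parameter(torch.zeros(num_cells, codebook_size))

    def forward(self, cell_ids: torch.Tensor) -> torch.Tensor:
        logits = self.logits[cell_ids]                        # (B, K)
        soft = torch.softmax(logits, dim=-1) @ self.codebook  # (B, D)
        hard = self.codebook[logits.argmax(dim=-1)]           # (B, D)
        return hard + soft - soft.detach()                    # straight-through

grid = VQFeatureGrid()
feats = grid(torch.tensor([0, 1, 2]))  # quantized features for three cells
```

At inference only the integer indices and the codebook need to be stored, which is how a dictionary method can cut feature-grid memory by large factors.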
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views; however, reconstruction quality degrades for larger, more complex scenes or sparser inputs. This is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
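The idea in the entry above, adding monocular depth and normal cues as extra constraints beside the ambiguous RGB loss, could be sketched as follows; the loss weights and the least-squares scale-and-shift alignment are our simplifications:

```python
# Sketch of supervising with monocular cues beside the RGB loss (weights
# and the alignment scheme are our simplifications): predicted depth is
# aligned to the monocular prior up to scale and shift, since monocular
# depth estimators are only accurate up to those factors.
import torch

def monocular_cue_loss(rgb, rgb_gt, depth, depth_mono, normal, normal_mono,
                       w_depth=0.1, w_normal=0.05):
    rgb_loss = (rgb - rgb_gt).abs().mean()
    # Least-squares scale s and shift t so that s*depth + t ~ depth_mono.
    x = torch.stack([depth, torch.ones_like(depth)], dim=-1)     # (N, 2)
    sol = torch.linalg.lstsq(x, depth_mono.unsqueeze(-1)).solution
    depth_loss = (x @ sol - depth_mono.unsqueeze(-1)).squeeze(-1).pow(2).mean()
    # Penalize both distance and angle between predicted and prior normals.
    cos = (normal * normal_mono).sum(dim=-1).clamp(-1, 1)
    normal_loss = (normal - normal_mono).abs().sum(dim=-1).mean() + (1 - cos).mean()
    return rgb_loss + w_depth * depth_loss + w_normal * normal_loss
```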
- Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering [60.02806355570514]
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to the hundreds of evaluations per ray required by ray-marching or volumetric renderers.
arXiv Detail & Related papers (2021-06-04T17:54:49Z)
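To make the single-evaluation contrast concrete, here is a sketch of rendering from a light-field-style network: each ray is mapped to a 6D Plücker coordinate and one MLP call returns its color. The network size and layer layout are our assumptions, not the published LFN architecture:

```python
# Sketch of single-evaluation rendering: one MLP call per ray on its
# Plucker coordinates, versus the hundreds of samples a ray marcher
# would take. (Network size is our assumption.)
import torch
import torch.nn as nn

class LightFieldNet(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),   # RGB in [0, 1]
        )

    def forward(self, origin: torch.Tensor, direction: torch.Tensor):
        d = torch.nn.functional.normalize(direction, dim=-1)
        moment = torch.cross(origin, d, dim=-1)   # Plucker moment o x d
        return self.net(torch.cat([d, moment], dim=-1))  # one call per ray

lfn = LightFieldNet()
colors = lfn(torch.zeros(4, 3), torch.randn(4, 3))  # (4, 3) RGB, one pass
```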