Real-Time Neural BRDF with Spherically Distributed Primitives
- URL: http://arxiv.org/abs/2310.08332v1
- Date: Thu, 12 Oct 2023 13:46:36 GMT
- Title: Real-Time Neural BRDF with Spherically Distributed Primitives
- Authors: Yishun Dou, Zhong Zheng, Qiaoqiao Jin, Bingbing Ni, Yugang Chen and
Junxiang Ke
- Abstract summary: We propose a novel neural BRDF offering highly versatile material representation with very light memory and neural computation cost.
Results show that our system achieves real-time rendering with a wide variety of appearances.
- Score: 35.09149879060455
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel, compact, and efficient neural BRDF that offers
highly versatile material representation with very light memory and neural
computation cost, towards achieving real-time rendering. The results in
Figure 1, rendered at full HD resolution on a current desktop machine, show
that our system achieves real-time rendering across a wide variety of
appearances. This is achieved through two designs. On the one hand,
noting that bidirectional reflectance is distributed in a very sparse
high-dimensional subspace, we propose to project the BRDF into two
low-dimensional components, i.e., two hemisphere feature-grids for incoming and
outgoing directions, respectively. On the other hand, learnable neural
reflectance primitives are distributed on our highly tailored spherical surface
grid, which offers informative features for each component and replaces the
conventional heavy feature-learning network with a much smaller one, leading to
very fast evaluation. These primitives are stored centrally in a codebook and
can be shared across multiple grids, and even across materials, via the
low-cost indices stored in material-specific spherical surface grids. Our
material-agnostic neural BRDF provides a unified framework that can represent a
variety of materials in a consistent manner. Comprehensive experimental results
on measured BRDF compression, Monte Carlo simulated BRDF acceleration, and
extension to spatially varying effects demonstrate the superior quality and
generalizability of the proposed scheme.
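The two designs above can be illustrated with a minimal sketch: a material-specific spherical surface grid stores cheap integer indices into a shared codebook of primitive features, one grid per hemisphere (incoming and outgoing directions), and a small decoder maps the fetched features to reflectance. All names, sizes, the equirectangular cell layout, and the random-weight decoder are illustrative assumptions, not the authors' implementation (the paper uses a specially tailored spherical grid and trained weights).

```python
import numpy as np

rng = np.random.default_rng(0)

N_PRIMS, FEAT_DIM = 256, 8    # shared codebook: 256 primitives, 8-D features
GRID_RES = 16                 # per-hemisphere grid resolution (theta x phi)

# Codebook of primitive features, shareable across grids and materials.
codebook = rng.standard_normal((N_PRIMS, FEAT_DIM))
# Material-specific grids hold only low-cost indices into the codebook.
grid_in = rng.integers(0, N_PRIMS, (GRID_RES, GRID_RES))
grid_out = rng.integers(0, N_PRIMS, (GRID_RES, GRID_RES))

def cell_index(direction):
    """Map a unit direction on the upper hemisphere to a grid cell
    (naive equirectangular layout; the paper's grid is highly tailored)."""
    x, y, z = direction
    theta = np.arccos(np.clip(z, 0.0, 1.0))       # polar angle in [0, pi/2]
    phi = np.arctan2(y, x) % (2.0 * np.pi)        # azimuth in [0, 2*pi)
    i = min(int(theta / (np.pi / 2.0) * GRID_RES), GRID_RES - 1)
    j = min(int(phi / (2.0 * np.pi) * GRID_RES), GRID_RES - 1)
    return i, j

# Tiny two-layer decoder standing in for the lightweight network
# (random weights here, learned in the actual system).
W1 = rng.standard_normal((2 * FEAT_DIM, 16)) * 0.1
W2 = rng.standard_normal((16, 3)) * 0.1

def eval_brdf(wi, wo):
    """Fetch one primitive feature per hemisphere grid and decode to RGB."""
    f_in = codebook[grid_in[cell_index(wi)]]      # incoming-direction feature
    f_out = codebook[grid_out[cell_index(wo)]]    # outgoing-direction feature
    h = np.maximum(np.concatenate([f_in, f_out]) @ W1, 0.0)  # ReLU
    return h @ W2                                 # 3-channel reflectance

wi = np.array([0.0, 0.0, 1.0])                    # normal incidence
wo = np.array([0.5, 0.0, np.sqrt(0.75)])          # 30 degrees off normal
print(eval_brdf(wi, wo).shape)                    # (3,)
```

Per query, the cost is two table lookups plus one very small matrix product, which is why storing indices rather than full features in the per-material grids keeps both memory and evaluation time low.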
Related papers
- RISE-SDF: a Relightable Information-Shared Signed Distance Field for Glossy Object Inverse Rendering [26.988572852463815]
In this paper, we propose a novel end-to-end relightable neural inverse rendering system.
Our experiments demonstrate that our algorithm achieves state-of-the-art performance in inverse rendering and relighting.
arXiv Detail & Related papers (2024-09-30T09:42:10Z)
- NeuS-PIR: Learning Relightable Neural Surface using Pre-Integrated Rendering [23.482941494283978]
This paper presents a method, namely NeuS-PIR, for recovering relightable neural surfaces from multi-view images or video.
Unlike methods based on NeRF and discrete meshes, our method utilizes implicit neural surface representation to reconstruct high-quality geometry.
Our method enables advanced applications such as relighting, which can be seamlessly integrated with modern graphics engines.
arXiv Detail & Related papers (2023-06-13T09:02:57Z)
- Fast Monocular Scene Reconstruction with Global-Sparse Local-Dense Grids [84.90863397388776]
We propose to directly use a signed distance function (SDF) in sparse voxel block grids for fast and accurate scene reconstruction without MLPs.
Our globally sparse and locally dense data structure exploits surfaces' spatial sparsity, enables cache-friendly queries, and allows direct extensions to multi-modal data.
Experiments show that our approach is 10x faster in training and 100x faster in rendering while achieving comparable accuracy to state-of-the-art neural implicit methods.
arXiv Detail & Related papers (2023-05-22T16:50:19Z)
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
- Neural BRDFs: Representation and Operations [25.94375378662899]
Bidirectional reflectance distribution functions (BRDFs) are pervasively used in computer graphics to produce realistic physically-based appearance.
We present a form of "Neural BRDF algebra", and focus on both representation and operations of BRDFs at the same time.
arXiv Detail & Related papers (2021-11-06T03:50:02Z)
- Neural BRDF Representation and Importance Sampling [79.84316447473873]
We present a compact neural network-based representation of reflectance BRDF data.
We encode BRDFs as lightweight networks, and propose a training scheme with adaptive angular sampling.
We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real-world datasets.
arXiv Detail & Related papers (2021-02-11T12:00:24Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.