Neural BRDFs: Representation and Operations
- URL: http://arxiv.org/abs/2111.03797v1
- Date: Sat, 6 Nov 2021 03:50:02 GMT
- Title: Neural BRDFs: Representation and Operations
- Authors: Jiahui Fan, Beibei Wang, Miloš Hašan, Jian Yang, and Ling-Qi Yan
- Abstract summary: Bidirectional reflectance distribution functions (BRDFs) are pervasively used in computer graphics to produce realistic physically-based appearance.
We present a form of "Neural BRDF algebra", and focus on both representation and operations of BRDFs at the same time.
- Score: 25.94375378662899
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bidirectional reflectance distribution functions (BRDFs) are pervasively used
in computer graphics to produce realistic physically-based appearance. In
recent years, several works explored using neural networks to represent BRDFs,
taking advantage of neural networks' high compression rate and their ability to
fit highly complex functions. However, once represented, a BRDF is fixed and
therefore lacks the flexibility to take part in follow-up operations. In
this paper, we present a form of "Neural BRDF algebra", and focus on both
representation and operations of BRDFs at the same time. We propose a
representation neural network to compress BRDFs into latent vectors, which is
able to represent BRDFs accurately. We further propose several operations that
can be applied solely in the latent space, such as layering and interpolation.
Spatial variation is straightforward to achieve by using textures of latent
vectors. Furthermore, our representation can be efficiently evaluated and
sampled, providing a competitive solution to more expensive Monte Carlo
layering approaches.
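To make the latent-space idea concrete, below is a minimal NumPy sketch of the interface such a representation implies: a decoder maps a latent code plus a direction pair to reflectance, so an operation like interpolation reduces to arithmetic on latent vectors. All names, dimensions, and the random placeholder weights are hypothetical illustrations, not the paper's trained model.

```python
import numpy as np

# Minimal sketch (not the paper's trained model): a decoder MLP maps a
# latent code plus a direction pair (wi, wo) to a reflectance value, so
# BRDF operations like interpolation reduce to latent-vector arithmetic.

rng = np.random.default_rng(0)
LATENT_DIM, HIDDEN = 32, 64  # hypothetical sizes

# Random placeholder weights; in a real system these would be trained.
W1 = rng.normal(0, 0.1, (LATENT_DIM + 6, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, 3))  # RGB reflectance
b2 = np.zeros(3)

def decode_brdf(z, wi, wo):
    """Evaluate the neural BRDF: latent code + (wi, wo) -> RGB reflectance."""
    x = np.concatenate([z, wi, wo])
    h = np.maximum(W1.T @ x + b1, 0.0)     # ReLU hidden layer
    return np.maximum(W2.T @ h + b2, 0.0)  # clamp to non-negative reflectance

def interpolate(z_a, z_b, t):
    """Latent-space interpolation: blends two materials without retraining."""
    return (1.0 - t) * z_a + t * z_b

z_gold, z_paint = rng.normal(size=LATENT_DIM), rng.normal(size=LATENT_DIM)
z_mix = interpolate(z_gold, z_paint, 0.5)
wi = wo = np.array([0.0, 0.0, 1.0])        # normal incidence
print(decode_brdf(z_mix, wi, wo))
```

Interpolation needs no extra network; an operation like layering would instead be a further learned mapping from two latent codes to one, which this sketch omits.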
Related papers
- VDNA-PR: Using General Dataset Representations for Robust Sequential Visual Place Recognition [17.393105901701098]
This paper adapts a general dataset representation technique to produce robust Visual Place Recognition (VPR) descriptors.
Our experiments show that our representation achieves better robustness than current solutions under severe domain shifts away from the training data distribution.
arXiv Detail & Related papers (2024-03-14T01:30:28Z)
- Real-Time Neural BRDF with Spherically Distributed Primitives [35.09149879060455]
We propose a novel neural BRDF offering a highly versatile material representation, yet with very light memory and neural computation costs.
Results show that our system achieves real-time rendering with a wide variety of appearances.
arXiv Detail & Related papers (2023-10-12T13:46:36Z)
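The summary above does not specify the primitive type, so the sketch below illustrates the general "spherically distributed primitives" idea with spherical-Gaussian lobes, a common choice in graphics; the lobe count, parameters, and the absence of any network are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch: reflectance toward a direction is approximated by a sum
# of spherical-Gaussian lobes scattered over the sphere. In a real system
# the lobe parameters would come from a small trained network.

rng = np.random.default_rng(1)
N_LOBES = 16  # hypothetical lobe count

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

centers = random_unit_vectors(N_LOBES)            # lobe axes on the sphere
sharpness = rng.uniform(5.0, 50.0, N_LOBES)       # per-lobe concentration
amplitude = rng.uniform(0.0, 1.0, (N_LOBES, 3))   # per-lobe RGB weight

def eval_lobes(wo):
    """Reflectance toward wo as a weighted sum of spherical Gaussians."""
    cos_sim = centers @ wo                  # dot(axis, wo) for every lobe
    g = np.exp(sharpness * (cos_sim - 1.0)) # spherical-Gaussian kernel
    return g @ amplitude                    # weighted RGB sum

print(eval_lobes(np.array([0.0, 0.0, 1.0])))
```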
- Polynomial Neural Fields for Subband Decomposition and Manipulation [78.2401411189246]
We propose a new class of neural fields called polynomial neural fields (PNFs).
The key advantage of a PNF is that it can represent a signal as a composition of manipulable and interpretable components without losing the merits of neural fields.
We empirically demonstrate that Fourier PNFs enable signal manipulation applications such as texture transfer and scale-space.
arXiv Detail & Related papers (2023-02-09T18:59:04Z)
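As a toy illustration of the subband idea (not the PNF architecture itself), the sketch below builds a signal from two explicit Fourier bands and edits one band independently; PNFs learn such components as neural fields, whereas the fixed sinusoids here are stand-ins.

```python
import numpy as np

# Toy illustration of subband decomposition: the signal is an explicit sum
# of band-limited components, so per-band edits are trivial.

t = np.linspace(0.0, 1.0, 512)
low = np.sin(2 * np.pi * 2 * t)           # coarse, low-frequency subband
high = 0.3 * np.sin(2 * np.pi * 40 * t)   # fine, high-frequency subband
signal = low + high

# Manipulation is per-band: boost the fine detail, keep the coarse shape.
edited = low + 2.0 * high
print(f"max change from editing the high band: {np.abs(edited - signal).max():.3f}")
```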
- Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multimedia data via multi-layer perceptrons.
Existing works manipulate such continuous representations by processing their discretized instances.
We propose an implicit neural signal processing network, dubbed INSP-Net, via differential operators on INRs.
arXiv Detail & Related papers (2022-10-17T06:29:07Z)
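A minimal sketch of what a differential operator on an INR can mean: for a tiny sine-activated MLP, the exact derivative follows from the chain rule, so no discretized sample grid is needed. The weights are random placeholders, and the closed-form rule stands in for INSP-Net's learned operators.

```python
import numpy as np

# The INR f(x) = w2 . sin(w1 * x + b1) is continuous, so the derivative
# operator can be applied to the representation itself, in closed form.

rng = np.random.default_rng(2)
w1, b1 = rng.normal(size=8), rng.normal(size=8)
w2 = rng.normal(size=8)

def inr(x):
    """The continuous signal: a one-hidden-layer sine MLP."""
    return float(w2 @ np.sin(w1 * x + b1))

def inr_derivative(x):
    """Exact d/dx via the chain rule: w2 . (cos(w1 x + b1) * w1)."""
    return float(w2 @ (np.cos(w1 * x + b1) * w1))

x0, eps = 0.3, 1e-5
fd = (inr(x0 + eps) - inr(x0 - eps)) / (2 * eps)  # finite-difference check
print(f"analytic {inr_derivative(x0):.6f} vs finite-diff {fd:.6f}")
```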
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
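The sketch below shows the storage arithmetic behind vector-quantizing a feature grid: each grid vector is replaced by the index of its nearest codebook entry. It uses plain nearest-neighbor assignment on random stand-in data, whereas the paper learns the codebook end to end through an auto-decoder; all sizes are hypothetical.

```python
import numpy as np

# Storage drops from N x D floats to N small integers plus a K x D codebook.

rng = np.random.default_rng(3)
N, D, K = 1024, 16, 64              # grid cells, feature dim, codebook size
grid = rng.normal(size=(N, D))      # stand-in for a trained feature grid
codebook = rng.normal(size=(K, D))  # stand-in for a learned codebook

# Assign each grid vector to its nearest code (squared Euclidean distance).
d2 = ((grid[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
indices = d2.argmin(axis=1)         # this is all that must be stored per cell
reconstructed = codebook[indices]   # decode: one lookup per cell

# K = 64 < 256, so each index fits in one uint8 byte vs. D float32 values.
ratio = (N * D * 4) / (N * 1 + K * D * 4)
mse = ((grid - reconstructed) ** 2).mean()
print(f"compression ~{ratio:.0f}x, reconstruction MSE {mse:.3f}")
```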
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is end-to-end learned.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
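As a rough, generic sketch of learned routing (not the paper's typed attention mechanism), the code below scores each module per input and mixes module outputs by those scores, keeping the whole route differentiable.

```python
import numpy as np

# Generic soft routing: a router scores each "function" (module) for each
# input, and the layer output is the score-weighted mix of module outputs.

rng = np.random.default_rng(4)
D, N_MODULES = 8, 3
modules = [rng.normal(0, 0.3, (D, D)) for _ in range(N_MODULES)]  # toy functions
router = rng.normal(0, 0.3, (D, N_MODULES))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def routed_layer(x):
    weights = softmax(x @ router)                     # per-module affinity
    outputs = np.stack([np.tanh(m @ x) for m in modules])
    return weights @ outputs                          # soft, learnable route

x = rng.normal(size=D)
print(routed_layer(x))
```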
- Towards Evaluating and Training Verifiably Robust Neural Networks [81.39994285743555]
We study the relationship between IBP and CROWN, and prove that CROWN is always tighter than IBP when choosing appropriate bounding lines.
We propose a relaxed version of CROWN, linear bound propagation (LBP), that can be used to verify large networks to obtain lower verified errors.
arXiv Detail & Related papers (2021-04-01T13:03:48Z)
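Interval bound propagation (IBP), the baseline the paper tightens, is simple enough to sketch: propagate an elementwise box through an affine layer by splitting the weights by sign, then through ReLU monotonically. CROWN and LBP tighten these bounds with linear relaxations, which the sketch omits; the weights and input are arbitrary stand-ins.

```python
import numpy as np

# IBP: given elementwise input bounds [lo, hi], compute sound output
# bounds for an affine layer followed by ReLU.

rng = np.random.default_rng(5)
W, b = rng.normal(size=(4, 3)), rng.normal(size=4)

def ibp_affine(lo, hi, W, b):
    """Bounds of W x + b over the box [lo, hi] (split W by sign)."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps bounds to bounds directly."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

x = np.array([0.5, -0.2, 0.1])
eps = 0.1                                  # L-inf perturbation radius
lo, hi = ibp_affine(x - eps, x + eps, W, b)
lo, hi = ibp_relu(lo, hi)
print(lo, hi)                              # certified output box
```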
- Neural BRDF Representation and Importance Sampling [79.84316447473873]
We present a compact neural network-based representation of reflectance BRDF data.
We encode BRDFs as lightweight networks, and propose a training scheme with adaptive angular sampling.
We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real-world datasets.
arXiv Detail & Related papers (2021-02-11T12:00:24Z)
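The summary does not spell out the adaptive angular sampling scheme, so the sketch below shows one standard way to concentrate angular samples where a glossy BRDF changes fastest: drawing directions from a power-cosine (Phong-like) density around a lobe axis. This is a stand-in, not necessarily the paper's scheme.

```python
import numpy as np

# Power-cosine sampling: directions around +z with pdf proportional to
# cos(theta)^exponent, via standard inverse-CDF sampling. A high exponent
# clusters training samples tightly around the specular direction.

rng = np.random.default_rng(6)

def sample_power_cosine_lobe(n, exponent):
    u1, u2 = rng.random(n), rng.random(n)
    cos_t = u1 ** (1.0 / (exponent + 1.0))
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    phi = 2.0 * np.pi * u2
    return np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)

dirs = sample_power_cosine_lobe(10000, exponent=100.0)  # tight specular cluster
print(f"mean cos(theta) = {dirs[:, 2].mean():.3f}")     # near 1 => clustered
```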
- Invertible Neural BRDF for Object Inverse Rendering [27.86441556552318]
We introduce a novel neural network-based BRDF model and a Bayesian framework for object inverse rendering.
We experimentally validate the accuracy of the invertible neural BRDF model on a large amount of measured data.
Results show new ways in which deep neural networks can help solve challenging inverse problems.
arXiv Detail & Related papers (2020-08-10T11:27:01Z)
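To illustrate the invertibility ingredient (not the paper's specific architecture), the sketch below implements an additive coupling layer, which is bijective by construction and inverts exactly; this is the kind of property a Bayesian inverse-rendering framework can exploit. The small network inside it uses placeholder weights.

```python
import numpy as np

# Additive coupling (NICE-style): half the variables pass through
# unchanged, the other half receive a shift computed from the first half,
# so the inverse is the same computation with subtraction.

rng = np.random.default_rng(7)
W = rng.normal(0, 0.5, (2, 2))

def shift(x):  # arbitrary inner network; it need not itself be invertible
    return np.tanh(W @ x)

def couple_forward(x):
    x1, x2 = x[:2], x[2:]
    return np.concatenate([x1, x2 + shift(x1)])

def couple_inverse(y):
    y1, y2 = y[:2], y[2:]
    return np.concatenate([y1, y2 - shift(y1)])

x = rng.normal(size=4)
assert np.allclose(couple_inverse(couple_forward(x)), x)  # exact round trip
print("round-trip ok")
```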
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.