Compact Neural Graphics Primitives with Learned Hash Probing
- URL: http://arxiv.org/abs/2312.17241v1
- Date: Thu, 28 Dec 2023 18:58:45 GMT
- Title: Compact Neural Graphics Primitives with Learned Hash Probing
- Authors: Towaki Takikawa, Thomas Müller, Merlin Nimier-David, Alex Evans, Sanja Fidler, Alec Jacobson, Alexander Keller
- Abstract summary: We show that a hash table with learned probes has neither disadvantage, resulting in a favorable combination of size and speed.
Inference is faster than unprobed hash tables at equal quality while training is only 1.2-2.6x slower.
- Score: 100.07267906666293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural graphics primitives are faster and achieve higher quality when their
neural networks are augmented by spatial data structures that hold trainable
features arranged in a grid. However, existing feature grids either come with a
large memory footprint (dense or factorized grids, trees, and hash tables) or
slow performance (index learning and vector quantization). In this paper, we
show that a hash table with learned probes has neither disadvantage, resulting
in a favorable combination of size and speed. Inference is faster than unprobed
hash tables at equal quality while training is only 1.2-2.6x slower,
significantly outperforming prior index learning approaches. We arrive at this
formulation by casting all feature grids into a common framework: they each
correspond to a lookup function that indexes into a table of feature vectors.
In this framework, the lookup functions of existing data structures can be
combined by simple arithmetic combinations of their indices, resulting in
Pareto optimal compression and speed.
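To make the "common framework" concrete, below is a minimal NumPy sketch of a feature lookup whose index is a simple arithmetic combination of a spatial hash and a learned probe offset. The table sizes, probe range, and variable names are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal NumPy sketch of a feature lookup with learned hash probing.
# Sizes and names (PROBE_RANGE, probe_logits, ...) are illustrative
# assumptions, not the paper's exact formulation.
import numpy as np

TABLE_SIZE = 2**14        # number of feature vectors in the hash table
FEATURE_DIM = 2           # feature channels per entry
PROBE_RANGE = 8           # candidate slots a learned probe can choose from
PRIMES = [1, 2654435761, 805459861]   # per-dimension hashing primes

feature_table = np.random.randn(TABLE_SIZE, FEATURE_DIM).astype(np.float32)
# Learned per-bucket probe logits; after training, argmax selects the offset.
probe_logits = np.zeros((TABLE_SIZE, PROBE_RANGE), dtype=np.float32)

def spatial_hash(cell) -> int:
    """XOR-of-primes hash of an integer grid coordinate (64-bit wraparound)."""
    h = 0
    for c, p in zip(cell, PRIMES):
        h ^= (int(c) * p) & 0xFFFFFFFFFFFFFFFF
    return h

def lookup(cell) -> np.ndarray:
    """Final index = base hash index plus a learned probe offset (mod table size)."""
    base = spatial_hash(cell) % TABLE_SIZE
    offset = int(np.argmax(probe_logits[base]))   # fixed after training
    return feature_table[(base + offset) % TABLE_SIZE]

print(lookup([17, 3, 250]))   # feature vector for grid cell (17, 3, 250)
```

Because the probe only adds a small offset to an otherwise ordinary hash lookup, inference stays close to the speed of an unprobed hash table, which is consistent with the abstract's claim.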
Related papers
- Neural Topological Ordering for Computation Graphs [23.225391263047364]
We propose an end-to-end machine learning based approach for topological ordering using an encoder-decoder framework.
We show that our model outperforms, or is on par with, several topological ordering baselines while being significantly faster on synthetic graphs with up to 2k nodes.
arXiv Detail & Related papers (2022-07-13T00:12:02Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
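As a rough illustration of the vector-quantized auto-decoder idea (codebook size, shapes, and names below are assumptions, not the paper's configuration): each grid cell holds logits over a small shared codebook during training and collapses to a single hard index after quantization, so only a few bits per cell need to be stored.

```python
# Hedged sketch of a vector-quantized auto-decoder lookup; sizes and
# variable names are illustrative assumptions.
import numpy as np

CODEBOOK_SIZE = 64        # dictionary of shared feature vectors
FEATURE_DIM = 4
NUM_CELLS = 1024          # grid cells, each owning logits over the codebook

codebook = np.random.randn(CODEBOOK_SIZE, FEATURE_DIM).astype(np.float32)
cell_logits = np.random.randn(NUM_CELLS, CODEBOOK_SIZE).astype(np.float32)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def feature_train(cell: int) -> np.ndarray:
    """Training: soft, differentiable mixture over the codebook."""
    return softmax(cell_logits[cell]) @ codebook

def feature_infer(cell: int) -> np.ndarray:
    """Inference: hard index, i.e. log2(CODEBOOK_SIZE) bits of storage per cell."""
    return codebook[int(np.argmax(cell_logits[cell]))]

print(feature_train(0), feature_infer(0))
```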
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Instant Neural Graphics Primitives with a Multiresolution Hash Encoding [67.33850633281803]
We present a versatile new input encoding that permits the use of a smaller network without sacrificing quality.
A small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through gradient descent.
We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds.
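A minimal sketch of a multiresolution hash encoding follows; for brevity it uses a single nearest-cell lookup per level rather than interpolating the surrounding grid corners, and the resolutions and table sizes are assumed values.

```python
# Simplified multiresolution hash encoding: one nearest-cell lookup per
# level (the full method interpolates grid corners). Sizes are assumptions.
import numpy as np

NUM_LEVELS = 4
TABLE_SIZE = 2**12
FEATURE_DIM = 2
PRIMES = [1, 2654435761, 805459861]
tables = [np.random.randn(TABLE_SIZE, FEATURE_DIM).astype(np.float32)
          for _ in range(NUM_LEVELS)]

def encode(x) -> np.ndarray:
    """Concatenate hashed features from geometrically growing resolutions."""
    feats = []
    for level, table in enumerate(tables):
        res = 16 * 2**level                        # grid resolution at this level
        cell = np.floor(np.asarray(x) * res).astype(np.int64)
        h = 0
        for c, p in zip(cell, PRIMES):
            h ^= int(c) * p
        feats.append(table[h % TABLE_SIZE])
    return np.concatenate(feats)                   # input to a small MLP

print(encode([0.3, 0.7, 0.1]).shape)               # (NUM_LEVELS * FEATURE_DIM,)
```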
arXiv Detail & Related papers (2022-01-16T07:22:47Z)
- Convergent Boosted Smoothing for Modeling Graph Data with Tabular Node Features [46.052312251801]
We propose a framework for iterating boosting with graph propagation steps.
Our approach is anchored in a principled meta loss function.
Across a variety of non-iid graph datasets, our method achieves comparable or superior performance.
arXiv Detail & Related papers (2021-10-26T04:53:12Z)
- Towards Efficient Graph Convolutional Networks for Point Cloud Handling [181.59146413326056]
We aim at improving the computational efficiency of graph convolutional networks (GCNs) for learning on point clouds.
A series of experiments show that optimized networks have reduced computational complexity, decreased memory consumption, and accelerated inference speed.
arXiv Detail & Related papers (2021-04-12T17:59:16Z)
- Ramanujan Bipartite Graph Products for Efficient Block Sparse Neural Networks [2.4235475271758076]
We propose a framework for generating structured multi-level block-sparse neural networks using the theory of graph products.
We also propose to use products of Ramanujan graphs, which give the best connectivity for a given level of sparsity.
We benchmark our approach on the image classification task over the CIFAR dataset using VGG19 and WideResnet-40-4 networks.
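As a loose illustration only (this does not construct Ramanujan graphs, and the matrices below are made-up examples): taking a product of two small bipartite graphs, here as a Kronecker product of their biadjacency matrices, yields a larger structured block-sparse connectivity mask for a weight matrix.

```python
# Illustrative only: a graph product (here, the Kronecker product of two
# biadjacency matrices) induces a structured block-sparse weight mask.
# This does NOT construct Ramanujan graphs; matrices are made-up examples.
import numpy as np

A = np.array([[1, 0, 1],          # biadjacency of a small sparse bipartite graph
              [0, 1, 1],
              [1, 1, 0]])
B = np.ones((4, 4), dtype=int)    # dense block pattern

mask = np.kron(A, B)              # 12x12 block-sparse connectivity mask
weights = np.random.randn(*mask.shape) * mask   # zero out pruned blocks

print(f"block-sparsity: {1 - mask.mean():.2f}")  # fraction of zeroed entries
```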
arXiv Detail & Related papers (2020-06-24T05:08:17Z)
- Online Sequential Extreme Learning Machines: Features Combined From Hundreds of Midlayers [0.0]
In this paper, we develop a hierarchical online sequential learning algorithm (H-OS-ELM).
The algorithm can learn chunk by chunk with fixed or varying block size.
arXiv Detail & Related papers (2020-06-12T00:50:04Z)
- Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z)
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)