Variable Bitrate Neural Fields
- URL: http://arxiv.org/abs/2206.07707v1
- Date: Wed, 15 Jun 2022 17:58:34 GMT
- Title: Variable Bitrate Neural Fields
- Authors: Towaki Takikawa, Alex Evans, Jonathan Tremblay, Thomas Müller, Morgan McGuire, Alec Jacobson and Sanja Fidler
- Abstract summary: We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
- Score: 75.24672452527795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural approximations of scalar and vector fields, such as signed distance
functions and radiance fields, have emerged as accurate, high-quality
representations. State-of-the-art results are obtained by conditioning a neural
approximation with a lookup from trainable feature grids that take on part of
the learning task and allow for smaller, more efficient neural networks.
Unfortunately, these feature grids usually come at the cost of significantly
increased memory consumption compared to stand-alone neural network models. We
present a dictionary method for compressing such feature grids, reducing their
memory consumption by up to 100x and permitting a multiresolution
representation which can be useful for out-of-core streaming. We formulate the
dictionary optimization as a vector-quantized auto-decoder problem which lets
us learn end-to-end discrete neural representations in a space where no direct
supervision is available and with dynamic topology and structure. Our source
code will be available at https://github.com/nv-tlabs/vqad.
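A minimal PyTorch sketch of the vector-quantized auto-decoder idea from the abstract: each grid cell stores trainable logits over a small shared codebook, a soft (softmax-weighted) codebook lookup keeps training end-to-end differentiable, and at inference each cell collapses to a hard index. All class and parameter names here are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VQFeatureGrid(nn.Module):
    """Sketch of a vector-quantized feature grid (names and sizes assumed).

    Instead of storing a feature_dim-vector per grid cell, each cell holds
    trainable logits over a shared codebook. Training uses a softmax-weighted
    lookup so gradients reach both the logits and the codebook even though
    no direct supervision on the indices is available.
    """

    def __init__(self, num_cells: int, codebook_size: int = 64, feature_dim: int = 8):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(codebook_size, feature_dim))
        self.logits = nn.Parameter(torch.zeros(num_cells, codebook_size))

    def forward(self, cell_ids: torch.Tensor, hard: bool = False) -> torch.Tensor:
        logits = self.logits[cell_ids]                   # (B, codebook_size)
        if hard:
            # Inference: each cell is just an integer index into the codebook.
            return self.codebook[logits.argmax(dim=-1)]  # (B, feature_dim)
        weights = F.softmax(logits, dim=-1)              # soft, differentiable lookup
        return weights @ self.codebook                   # (B, feature_dim)

grid = VQFeatureGrid(num_cells=32**3)
features = grid(torch.randint(0, 32**3, (4096,)))  # feed into a small MLP decoder
```

After training, only the integer indices and the small codebook need to be stored, so each cell costs log2(codebook_size) bits rather than feature_dim floats; savings of that shape are what the up-to-100x figure refers to.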
Related papers
- RelChaNet: Neural Network Feature Selection using Relative Change Scores [0.0]
We introduce RelChaNet, a novel and lightweight feature selection algorithm that uses neuron pruning and regrowth in the input layer of a dense neural network.
Our approach generally outperforms the current state-of-the-art methods, and in particular improves the average accuracy by 2% on the MNIST dataset.
arXiv Detail & Related papers (2024-10-03T09:56:39Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Generalizable Neural Fields as Partially Observed Neural Processes [16.202109517569145]
We propose a new paradigm that views the large-scale training of neural representations as a part of a partially-observed neural process framework.
We demonstrate that this approach outperforms both state-of-the-art gradient-based meta-learning approaches and hypernetwork approaches.
arXiv Detail & Related papers (2023-09-13T01:22:16Z)
- Masked Wavelet Representation for Compact Neural Radiance Fields [5.279919461008267]
Using a multi-layer perceptron to represent a 3D scene or object requires enormous computational resources and time.
We present a method to reduce the model size without sacrificing the advantages of having additional data structures.
With our proposed mask and compression pipeline, we achieved state-of-the-art performance within a memory budget of 2 MB.
arXiv Detail & Related papers (2022-12-18T11:43:32Z)
- OLLA: Decreasing the Memory Usage of Neural Networks by Optimizing the Lifetime and Location of Arrays [6.418232942455968]
OLLA is an algorithm that optimizes the lifetime and memory location of the tensors used to train neural networks (the underlying allocation problem is sketched in code after this list).
We present several techniques to simplify the encoding of the problem, and enable our approach to scale to the size of state-of-the-art neural networks.
arXiv Detail & Related papers (2022-10-24T02:39:13Z)
- Gluing Neural Networks Symbolically Through Hyperdimensional Computing [8.209945970790741]
We explore the notion of using binary hypervectors to encode the final, classifying output signals of neural networks (a generic sketch of the idea follows this list).
This allows multiple neural networks to work together to solve a problem with little additional overhead.
We find that this matches or outperforms the state of the art.
arXiv Detail & Related papers (2022-05-31T04:44:02Z)
- COIN++: Data Agnostic Neural Compression [55.27113889737545]
COIN++ is a neural compression framework that seamlessly handles a wide range of data modalities.
We demonstrate the effectiveness of our method by compressing various data modalities.
arXiv Detail & Related papers (2022-01-30T20:12:04Z)
- Instant Neural Graphics Primitives with a Multiresolution Hash Encoding [67.33850633281803]
We present a versatile new input encoding that permits the use of a smaller network without sacrificing quality.
A small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through gradient descent (see the sketch after this list).
We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds.
arXiv Detail & Related papers (2022-01-16T07:22:47Z)
- Meta-Learning Sparse Implicit Neural Representations [69.15490627853629]
Implicit neural representations are a promising new avenue for representing general signals.
Current approaches are difficult to scale to a large number of signals or a large dataset.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
arXiv Detail & Related papers (2021-10-27T18:02:53Z)
- Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
arXiv Detail & Related papers (2020-06-08T05:15:17Z)
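On the OLLA entry above: OLLA itself encodes tensor lifetimes and addresses as a joint integer program and presents simplifications to make the solver scale; the sketch below only illustrates the underlying array-location problem with a simple first-fit heuristic, so the heuristic and all names are assumptions, not OLLA's method.

```python
from dataclasses import dataclass

@dataclass
class Array:
    name: str
    size: int   # bytes
    start: int  # first step alive
    end: int    # last step alive (inclusive)

def first_fit_offsets(arrays: list[Array]) -> dict[str, int]:
    """Place lifetime-overlapping arrays at non-overlapping offsets."""
    placed: list[tuple[Array, int]] = []
    offsets: dict[str, int] = {}
    for a in sorted(arrays, key=lambda a: -a.size):  # largest first
        # Address ranges of arrays whose lifetimes overlap a's lifetime.
        conflicts = sorted(
            (off, off + b.size) for b, off in placed
            if not (b.end < a.start or a.end < b.start)
        )
        offset = 0
        for lo, hi in conflicts:          # slide past occupied ranges
            if offset + a.size <= lo:     # found a gap big enough
                break
            offset = max(offset, hi)
        offsets[a.name] = offset
        placed.append((a, offset))
    return offsets

arrays = [Array("act1", 1024, 0, 3), Array("act2", 2048, 2, 5), Array("grad", 1024, 4, 6)]
offsets = first_fit_offsets(arrays)
peak = max(offsets[a.name] + a.size for a in arrays)  # 3072 here, vs. 4096 naively
```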
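On the hyperdimensional-computing entry above: a generic sketch of encoding each network's class scores as a hypervector and bundling several networks' outputs. It uses bipolar {-1, +1} vectors and a nearest-class-vector readout, which is a standard HDC recipe; the paper's exact binary encoding may differ.

```python
import numpy as np

D = 10_000  # hypervector dimensionality; high dimension makes random HVs quasi-orthogonal
rng = np.random.default_rng(0)
CLASS_HV = rng.choice((-1, 1), size=(10, D))  # one random bipolar HV per class

def encode(probs: np.ndarray) -> np.ndarray:
    """Encode one network's class scores as a score-weighted bundle of class HVs."""
    return probs @ CLASS_HV                    # (10,) @ (10, D) -> (D,)

def glue(*encodings: np.ndarray) -> int:
    """Bundle several networks' encodings and decode by the nearest class HV."""
    bundle = np.sign(np.sum(encodings, axis=0))  # elementwise majority vote
    return int(np.argmax(CLASS_HV @ bundle))     # dot product ranks the classes

# Two hypothetical networks that softly agree on class 3 of a 10-class problem:
net_a = np.eye(10)[3] * 0.6 + 0.4 / 10
net_b = np.eye(10)[3] * 0.5 + 0.5 / 10
assert glue(encode(net_a), encode(net_b)) == 3
```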
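On the multiresolution hash encoding entry above: a compact 2D sketch of the lookup. Each level hashes quantized coordinates into a small table of trainable feature vectors, and the per-level features are concatenated for a small MLP to consume. The hash primes follow the commonly used convention; nearest-corner lookup stands in for the corner interpolation a real implementation would do, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class HashEncoding2D(nn.Module):
    """Sketch of a multiresolution hash encoding (2D, nearest-corner lookup)."""

    PRIMES = (1, 2654435761)  # spatial-hash multipliers, one per axis

    def __init__(self, levels: int = 8, table_size: int = 2**14, feat_dim: int = 2):
        super().__init__()
        self.table_size = table_size
        self.resolutions = [16 * 2**i for i in range(levels)]  # coarse to fine
        self.tables = nn.ParameterList(
            [nn.Parameter(1e-4 * torch.randn(table_size, feat_dim)) for _ in range(levels)]
        )

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        """xy in [0, 1]^2, shape (B, 2); returns (B, levels * feat_dim)."""
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            ij = (xy * res).long()  # quantize to this level's grid
            h = (ij[:, 0] * self.PRIMES[0]) ^ (ij[:, 1] * self.PRIMES[1])
            feats.append(table[h % self.table_size])  # trainable feature lookup
        return torch.cat(feats, dim=-1)

enc = HashEncoding2D()
out = enc(torch.rand(1024, 2))  # (1024, 16); optimized jointly with the MLP
```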