D'OH: Decoder-Only random Hypernetworks for Implicit Neural Representations
- URL: http://arxiv.org/abs/2403.19163v1
- Date: Thu, 28 Mar 2024 06:18:12 GMT
- Title: D'OH: Decoder-Only random Hypernetworks for Implicit Neural Representations
- Authors: Cameron Gordon, Lachlan Ewen MacDonald, Hemanth Saratchandran, Simon Lucey
- Abstract summary: We explore the hypothesis that additional compression can be achieved by leveraging the redundancies that exist between layers.
We propose to use a novel run-time decoder-only hypernetwork that uses no offline training data.
By directly changing the dimension of a latent code to approximate a target implicit neural architecture, we provide a natural way to vary the memory footprint of neural representations.
- Score: 24.57801400001629
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep implicit functions have been found to be an effective tool for efficiently encoding all manner of natural signals. Their attractiveness stems from their ability to compactly represent signals with little to no off-line training data. Instead, they leverage the implicit bias of deep networks to decouple hidden redundancies within the signal. In this paper, we explore the hypothesis that additional compression can be achieved by leveraging the redundancies that exist between layers. We propose to use a novel run-time decoder-only hypernetwork - that uses no offline training data - to better model this cross-layer parameter redundancy. Previous applications of hyper-networks with deep implicit functions have applied feed-forward encoder/decoder frameworks that rely on large offline datasets that do not generalize beyond the signals they were trained on. We instead present a strategy for the initialization of run-time deep implicit functions for single-instance signals through a Decoder-Only randomly projected Hypernetwork (D'OH). By directly changing the dimension of a latent code to approximate a target implicit neural architecture, we provide a natural way to vary the memory footprint of neural representations without the costly need for neural architecture search on a space of alternative low-rate structures.
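For intuition, here is a minimal PyTorch sketch of the decoder-only idea: a single trainable latent code is pushed through fixed random projections to generate every layer's weights, so shrinking the latent dimension shrinks the stored representation. The class names, Gaussian projections, and SIREN-style sine activation are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class DOHLayer(nn.Module):
    """One MLP layer whose weights are decoded from a shared latent code
    through a fixed random projection (illustrative sketch)."""
    def __init__(self, latent_dim, in_dim, out_dim):
        super().__init__()
        n_params = out_dim * in_dim + out_dim
        # Fixed, non-trainable random projection from latent to layer parameters.
        self.register_buffer("proj", torch.randn(n_params, latent_dim) / latent_dim**0.5)
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, x, z, activate=True):
        params = self.proj @ z                             # decode latent -> weights and bias
        w = params[: self.out_dim * self.in_dim].view(self.out_dim, self.in_dim)
        b = params[self.out_dim * self.in_dim:]
        out = x @ w.t() + b
        return torch.sin(out) if activate else out         # SIREN-style activation (assumed)

class DOHNet(nn.Module):
    """Only the latent code is optimized; the random projections can be
    regenerated from a seed, so the stored footprint is essentially the latent."""
    def __init__(self, latent_dim=64, hidden=128, depth=3):
        super().__init__()
        self.z = nn.Parameter(torch.randn(latent_dim))     # random init so gradients are nonzero
        dims = [2] + [hidden] * depth + [3]                # e.g. 2D pixel coords -> RGB
        self.net = nn.ModuleList(
            DOHLayer(latent_dim, i, o) for i, o in zip(dims[:-1], dims[1:])
        )

    def forward(self, coords):
        h = coords
        for i, layer in enumerate(self.net):
            h = layer(h, self.z, activate=i < len(self.net) - 1)
        return h

# Usage sketch: fit the latent to one image by regressing coords -> colors.
rgb = DOHNet()(torch.rand(1024, 2))
```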
Related papers
- "Lossless" Compression of Deep Neural Networks: A High-dimensional
Neural Tangent Kernel Approach [49.744093838327615]
We provide a novel compression approach to wide and fully-connected emphdeep neural nets.
Experiments on both synthetic and real-world data are conducted to support the advantages of the proposed compression scheme.
arXiv Detail & Related papers (2024-03-01T03:46:28Z)
- Traversing Between Modes in Function Space for Fast Ensembling [15.145136272169946]
"Bridge" is a lightweight network that takes minimal features from the original network and predicts outputs for the low-loss subspace without forward passes through the original network.
We empirically demonstrate that such bridge networks can indeed be trained and that they significantly reduce inference costs.
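A hedged sketch of what such a bridge might look like: a small head that maps an intermediate backbone feature, together with a coordinate t on the low-loss curve, to the logits of the ensemble member at t. The inputs, shapes, and averaging scheme below are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class Bridge(nn.Module):
    """Lightweight head: intermediate feature + curve coordinate t -> logits
    the ensemble member at t would produce (no full forward pass needed)."""
    def __init__(self, feat_dim, num_classes, hidden=64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, num_classes)
        )

    def forward(self, feats, t):
        t = torch.full((feats.size(0), 1), float(t), dtype=feats.dtype, device=feats.device)
        return self.head(torch.cat([feats, t], dim=1))

def ensemble_logits(feats, base_logits, bridge, ts=(0.25, 0.5, 0.75)):
    # Average the base model's logits with bridge predictions at several curve
    # points; the expensive backbone runs only once.
    preds = [base_logits] + [bridge(feats, t) for t in ts]
    return torch.stack(preds).mean(0)
```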
arXiv Detail & Related papers (2023-06-20T05:52:26Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder built on hash coding is adopted to help the network capture high-frequency details.
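A minimal sketch of such a field is below: 3D coordinates pass through a multi-resolution hash encoding into a small MLP that outputs a non-negative attenuation coefficient. The table sizes, the simple spatial hash (no trilinear interpolation), and the Softplus output are assumptions for illustration, not NAF's exact configuration.

```python
import torch
import torch.nn as nn

class HashEncoding(nn.Module):
    """Simplified multi-resolution hash encoding (nearest-voxel lookup only)."""
    def __init__(self, levels=8, table_size=2**14, feat_dim=2, base_res=16):
        super().__init__()
        self.tables = nn.ModuleList(nn.Embedding(table_size, feat_dim) for _ in range(levels))
        self.res = [base_res * 2**l for l in range(levels)]
        self.table_size = table_size

    def forward(self, xyz):                          # xyz in [0, 1]^3, shape (N, 3)
        primes = torch.tensor([1, 2654435761, 805459861], device=xyz.device)
        feats = []
        for table, r in zip(self.tables, self.res):
            idx = (xyz * r).long()                   # voxel index at this resolution
            h = (idx * primes).sum(-1) % self.table_size   # simple spatial hash (assumed)
            feats.append(table(h))
        return torch.cat(feats, dim=-1)

class NAF(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = HashEncoding()
        self.mlp = nn.Sequential(
            nn.Linear(8 * 2, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Softplus(),         # attenuation coefficient >= 0
        )

    def forward(self, xyz):
        return self.mlp(self.enc(xyz))

# Usage sketch: query attenuation along rays and compare to measured projections.
mu = NAF()(torch.rand(1024, 3))
```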
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
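The mechanism can be sketched as follows: each grid cell stores trainable logits over a shared codebook rather than its own feature vector, and a straight-through softmax makes the discrete lookup differentiable end-to-end. Names and shapes below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class VQGrid(nn.Module):
    """Vector-quantized auto-decoder sketch: the stored grid compresses to
    about log2(codebook_size) bits per cell plus one shared codebook."""
    def __init__(self, num_cells, codebook_size=64, feat_dim=16):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_cells, codebook_size))
        self.codebook = nn.Parameter(torch.randn(codebook_size, feat_dim))

    def forward(self, cell_idx):
        soft = torch.softmax(self.logits[cell_idx], dim=-1)   # differentiable assignment
        hard = nn.functional.one_hot(soft.argmax(-1), soft.size(-1)).type_as(soft)
        assign = hard + soft - soft.detach()                  # straight-through estimator
        return assign @ self.codebook                         # decoded feature per cell

# Usage sketch: decode features for a batch of queried grid cells.
feats = VQGrid(num_cells=32**3)(torch.randint(0, 32**3, (4096,)))
```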
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Gluing Neural Networks Symbolically Through Hyperdimensional Computing [8.209945970790741]
We explore the notion of using binary hypervectors to encode the final, classifying output signals of neural networks.
This allows multiple neural networks to work together to solve a problem, with little additional overhead.
We find that this outperforms the state of the art, or is on a par with it, while using very little overhead.
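As a rough illustration of the idea, several networks' outputs can be encoded as hypervectors, bundled elementwise, and decoded by similarity. The simplification below (one random bipolar hypervector per class, majority-vote bundling) is an assumption; the paper encodes the networks' output signals themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, CLASSES = 10_000, 10                      # hypervector dimensionality (assumed)

# One random bipolar codebook hypervector per class, shared by all networks.
class_hvs = rng.choice([-1, 1], size=(CLASSES, DIM))

def encode(logits):
    """Encode a network's output as the hypervector of its predicted class."""
    return class_hvs[int(np.argmax(logits))]

def glue(all_logits):
    """Bundle per-network hypervectors (elementwise majority vote) and decode
    by similarity against the class codebook."""
    bundle = np.sign(np.sum([encode(l) for l in all_logits], axis=0))
    return int(np.argmax(class_hvs @ bundle))  # proportional to cosine similarity

# e.g. three networks voting on one input; the majority class (1) wins:
print(glue([np.array([0.1, 2.0] + [0] * 8),
            np.array([1.5, 0.2] + [0] * 8),
            np.array([0.0, 3.0] + [0] * 8)]))
```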
arXiv Detail & Related papers (2022-05-31T04:44:02Z)
- a novel attention-based network for fast salient object detection [14.246237737452105]
In current salient object detection networks, the most popular design is the U-shaped structure.
We propose a new deep convolution network architecture with three contributions.
Results demonstrate that the proposed method can compress the model to roughly one third of its original size with almost no loss of accuracy.
arXiv Detail & Related papers (2021-12-20T12:30:20Z)
- Meta-Learning Sparse Implicit Neural Representations [69.15490627853629]
Implicit neural representations are a promising new avenue for representing general signals.
Current approaches are difficult to scale to a large number of signals or to large datasets.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
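A hedged sketch of the general recipe follows, assuming magnitude pruning of a meta-learned initialization and a MAML-style inner loop (the paper's exact procedure may differ); it needs a recent PyTorch for torch.func.

```python
import torch
import torch.nn as nn

def magnitude_prune(params, sparsity=0.9):
    """Zero out the smallest-magnitude entries of each weight matrix."""
    for k, v in params.items():
        if v.dim() > 1:                             # prune weights, keep biases dense
            k_th = max(1, int(sparsity * v.numel()))
            thresh = v.abs().flatten().kthvalue(k_th).values
            params[k] = v * (v.abs() > thresh)
    return params

def inner_adapt(model, params, coords, target, steps=2, lr=1e-2):
    """A few gradient steps fitting the sparse initialization to one signal,
    in the style of a MAML inner loop (outer meta-training omitted)."""
    for _ in range(steps):
        pred = torch.func.functional_call(model, params, (coords,))
        loss = ((pred - target) ** 2).mean()
        grads = torch.autograd.grad(loss, list(params.values()))
        params = {k: v - lr * g for (k, v), g in zip(params.items(), grads)}
    return params

# Hypothetical usage with a toy coordinate MLP (2D coords -> RGB):
model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))
sparse_init = magnitude_prune(dict(model.named_parameters()))
coords, target = torch.rand(256, 2), torch.rand(256, 3)
fitted = inner_adapt(model, sparse_init, coords, target)
```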
arXiv Detail & Related papers (2021-10-27T18:02:53Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
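Concretely, an M-bit integer weight w can be written as w = sum_i 2^(i-1) s_i + (2^M - 1)/2 with each s_i in {-1, +1}, so a quantized linear layer splits into M binary branches (each amenable to fast binary kernels) plus a constant correction. The NumPy sketch below illustrates and checks this identity; the paper's actual kernels and scaling may differ.

```python
import numpy as np

def decompose(w_q, bits):
    """Split integer weights w_q in [0, 2**bits) into {-1, +1} branches:
    w_q = sum_i 2**(i-1) * s_i + (2**bits - 1) / 2, with s_i in {-1, +1}."""
    branches = []
    for i in range(bits):
        b = (w_q >> i) & 1                     # i-th bit in {0, 1}
        branches.append(2 * b - 1)             # map to {-1, +1}
    return branches

def quantized_matmul(x, w_q, bits):
    branches = decompose(w_q, bits)
    offset = (2**bits - 1) / 2
    acc = sum(2.0 ** (i - 1) * (x @ s) for i, s in enumerate(branches))
    return acc + offset * x.sum(axis=1, keepdims=True) * np.ones(w_q.shape[1])

# Sanity check against the direct integer matmul:
x = np.random.randn(4, 8)
w_q = np.random.randint(0, 16, size=(8, 5))    # 4-bit weights
assert np.allclose(quantized_matmul(x, w_q, 4), x @ w_q)
```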
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
- Supervised Learning with First-to-Spike Decoding in Multilayer Spiking Neural Networks [0.0]
We propose a new supervised learning method that can train multilayer spiking neural networks to solve classification problems.
The proposed learning rule supports multiple spikes fired by hidden neurons, yet remains stable by relying on first-spike responses generated by a deterministic output layer.
We also explore several distinct spike-based encoding strategies in order to form compact representations of input data.
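The decoding rule itself is easy to illustrate: simulate leaky integrate-and-fire output neurons and report the first one to cross threshold. The dynamics and constants below are illustrative assumptions, not the paper's learning rule.

```python
import numpy as np

def first_to_spike(input_current, threshold=1.0, leak=0.9, steps=50):
    """Decode a class as the index of the first output neuron to spike."""
    v = np.zeros(input_current.shape[0])       # membrane potentials, one per class
    for t in range(steps):
        v = leak * v + input_current           # leaky integration of input drive
        fired = np.nonzero(v >= threshold)[0]
        if fired.size:                         # deterministic first-spike readout
            return int(fired[v[fired].argmax()]), t
    return None, steps                         # no neuron spiked in the window

# Stronger drive to class 2 makes it spike first:
print(first_to_spike(np.array([0.02, 0.05, 0.12, 0.01])))
```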
arXiv Detail & Related papers (2020-08-16T15:34:48Z)
- Resolution Adaptive Networks for Efficient Inference [53.04907454606711]
We propose a novel Resolution Adaptive Network (RANet), which is inspired by the intuition that low-resolution representations are sufficient for classifying "easy" inputs.
In RANet, the input images are first routed to a lightweight sub-network that efficiently extracts low-resolution representations.
High-resolution paths in the network maintain the capability to recognize the "hard" samples.
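A minimal sketch of the routing logic, assuming a single-image batch, a placeholder confidence threshold, and generic low- and high-resolution sub-networks (RANet's actual multi-scale architecture with several exits is more elaborate):

```python
import torch
import torch.nn as nn

class ResolutionAdaptive(nn.Module):
    """Try a cheap low-resolution sub-network first and exit early when its
    prediction is confident; only "hard" inputs pay for the full-resolution path."""
    def __init__(self, low_net, high_net, threshold=0.9):
        super().__init__()
        self.low_net, self.high_net, self.threshold = low_net, high_net, threshold

    @torch.no_grad()
    def forward(self, image):                     # image: (1, C, H, W)
        small = nn.functional.interpolate(image, scale_factor=0.25)
        logits = self.low_net(small)
        conf = logits.softmax(-1).max(-1).values
        if conf.item() >= self.threshold:         # "easy" input: early exit
            return logits
        return self.high_net(image)               # "hard" input: full resolution

# e.g. model = ResolutionAdaptive(small_cnn, big_cnn); logits = model(image)
```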
arXiv Detail & Related papers (2020-03-16T16:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.