Meta-Learning Sparse Implicit Neural Representations
- URL: http://arxiv.org/abs/2110.14678v1
- Date: Wed, 27 Oct 2021 18:02:53 GMT
- Title: Meta-Learning Sparse Implicit Neural Representations
- Authors: Jaeho Lee, Jihoon Tack, Namhoon Lee, Jinwoo Shin
- Abstract summary: Implicit neural representations are a promising new avenue for representing general signals.
The current approach is difficult to scale to a large number of signals or a whole dataset.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
- Score: 69.15490627853629
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit neural representations are a promising new avenue for representing
general signals by learning a continuous function that, parameterized as a
neural network, maps the domain of a signal to its codomain; the mapping from
spatial coordinates of an image to its pixel values, for example. Capable of
conveying fine details of a high-dimensional signal independently of the
resolution of its domain, implicit neural representations offer many advantages
over conventional discrete representations. However, the current approach is
difficult to scale to a large number of signals or to a full dataset, since
learning a neural representation -- which is parameter-heavy by itself -- for
each signal individually requires substantial memory and computation. To
address this issue, we propose to combine a meta-learning approach with network
compression under a sparsity constraint, yielding a well-initialized sparse
parameterization that quickly adapts to represent unseen signals during
subsequent training. We empirically demonstrate
that meta-learned sparse neural representations achieve a much smaller loss
than dense meta-learned models with the same number of parameters, when trained
to fit each signal using the same number of optimization steps.
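To make the pipeline described in the abstract more concrete, the sketch below fits a coordinate-based INR to an image-like signal, prunes it to a sparse mask, and takes MAML-style inner steps on only the kept weights so that a meta-optimizer can learn a sparse initialization. This is a minimal sketch, not the authors' released code: the SIREN-style sine activations, the one-shot global magnitude-pruning rule, the `keep_ratio`, the number of inner steps, and all learning rates are illustrative assumptions, and it relies on PyTorch's `torch.func.functional_call` for the functional inner loop.

```python
# Minimal sketch (not the authors' code) of meta-learning a sparse INR initialization.
# Assumptions for illustration: SIREN-style sine activations, one-shot global magnitude
# pruning, 3 sparse inner steps per signal, and the learning rates below.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call


class SineLayer(nn.Module):
    """Linear layer followed by a sine activation (SIREN-style)."""
    def __init__(self, in_dim, out_dim, w0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.w0 = w0

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))


class INR(nn.Module):
    """Maps 2D coordinates in [-1, 1]^2 to RGB values."""
    def __init__(self, hidden=256, depth=4):
        super().__init__()
        layers = [SineLayer(2, hidden)] + [SineLayer(hidden, hidden) for _ in range(depth - 1)]
        self.body = nn.Sequential(*layers)
        self.head = nn.Linear(hidden, 3)

    def forward(self, coords):
        return self.head(self.body(coords))


def magnitude_masks(model, keep_ratio=0.3):
    """One-shot global magnitude pruning: keep the largest `keep_ratio` of all weights."""
    flat = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(flat, 1.0 - keep_ratio)
    return [(p.detach().abs() >= threshold).float() for p in model.parameters()]


def sparse_inner_adapt(model, masks, coords, pixels, steps=3, lr=1e-2):
    """Adapt only the kept (unmasked) weights to one signal; return the final loss.

    Gradients flow back to the shared initialization, so the returned loss can be
    used as a MAML-style meta-objective.
    """
    names = [n for n, _ in model.named_parameters()]
    params = [p.clone() for p in model.parameters()]
    for _ in range(steps):
        loss = F.mse_loss(functional_call(model, dict(zip(names, params)), coords), pixels)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - lr * m * g for p, m, g in zip(params, masks, grads)]
    return F.mse_loss(functional_call(model, dict(zip(names, params)), coords), pixels)


# Toy meta-training step on one synthetic "signal" (random RGB values on a 32x32 grid).
side = 32
ys, xs = torch.meshgrid(torch.linspace(-1, 1, side), torch.linspace(-1, 1, side), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
pixels = torch.rand(side * side, 3)

model = INR()
masks = magnitude_masks(model)
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-4)

meta_loss = sparse_inner_adapt(model, masks, coords, pixels)  # loss after sparse adaptation
meta_opt.zero_grad()
meta_loss.backward()   # meta-gradient w.r.t. the shared sparse initialization
meta_opt.step()
```

In practice the meta-training step above would be repeated over a batch of training signals, and the sparse mask would be fixed before (or refined during) meta-training; both choices are left out here for brevity.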
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without huge computational overhead.
We demonstrate our approach on various image- and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- Generalizable Neural Fields as Partially Observed Neural Processes [16.202109517569145]
We propose a new paradigm that views the large-scale training of neural representations as a part of a partially-observed neural process framework.
We demonstrate that this approach outperforms both state-of-the-art gradient-based meta-learning approaches and hypernetwork approaches.
arXiv Detail & Related papers (2023-09-13T01:22:16Z)
- Random Weight Factorization Improves the Training of Continuous Neural Representations [1.911678487931003]
Continuous neural representations have emerged as a powerful and flexible alternative to classical discretized representations of signals.
We propose random weight factorization as a simple drop-in replacement for parameterizing and initializing conventional linear layers.
We show how this factorization alters the underlying loss landscape and effectively enables each neuron in the network to learn using its own self-adaptive learning rate (a minimal sketch of this factorization appears after the list below).
arXiv Detail & Related papers (2022-10-03T23:48:48Z)
- Sobolev Training for Implicit Neural Representations with Approximated Image Derivatives [12.71676484494428]
Implicit Neural Representations (INRs) parameterized by neural networks have emerged as a powerful tool to represent different kinds of signals.
We propose a training paradigm for INRs whose target output is image pixels, encoding image derivatives in addition to image values in the neural network.
We show how this training paradigm can be leveraged to solve typical INR problems such as image regression and inverse rendering.
arXiv Detail & Related papers (2022-07-21T10:12:41Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- MINER: Multiscale Implicit Neural Representations [43.36327238440042]
We introduce a new neural signal representation designed for the efficient high-resolution representation of large-scale signals.
The key innovation in our multiscale implicit neural representation (MINER) is an internal representation via a Laplacian pyramid.
We demonstrate that it requires fewer than 25% of the parameters, 33% of the memory footprint, and 10% of the time of competing techniques such as ACORN to reach the same representation error.
arXiv Detail & Related papers (2022-02-07T21:49:33Z)
- Learned Initializations for Optimizing Coordinate-Based Neural Representations [47.408295381897815]
Coordinate-based neural representations have shown significant promise as an alternative to discrete, array-based representations.
We propose applying standard meta-learning algorithms to learn the initial weight parameters for these fully-connected networks.
We explore these benefits across a variety of tasks, including representing 2D images, reconstructing CT scans, and recovering 3D shapes and scenes from 2D image observations.
arXiv Detail & Related papers (2020-12-03T18:59:52Z)
- MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
arXiv Detail & Related papers (2020-06-17T05:14:53Z)
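To make the "Random Weight Factorization" entry above more concrete, here is a minimal sketch (not the paper's reference implementation) of a drop-in factorized linear layer: each weight matrix is re-parameterized as W = diag(s) * V with a learnable per-neuron scale s, so gradient descent on (s, V) effectively gives every output neuron its own adaptive step size. The log-normal initialization of the scale and the `scale_std` value are assumptions for illustration.

```python
# Minimal sketch of random weight factorization: weight = diag(scale) @ V.
# The scale initialization (log-normal around 1, std `scale_std`) is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FactorizedLinear(nn.Module):
    """Drop-in replacement for nn.Linear with a learnable per-neuron scale."""
    def __init__(self, in_features, out_features, scale_std=0.1):
        super().__init__()
        base = nn.Linear(in_features, out_features)            # reuse the standard init
        scale = torch.exp(scale_std * torch.randn(out_features))
        self.scale = nn.Parameter(scale)
        # Divide the base weight by the random scale so diag(scale) @ V matches it at init.
        self.V = nn.Parameter(base.weight.detach() / scale.unsqueeze(1))
        self.bias = nn.Parameter(base.bias.detach())

    def forward(self, x):
        weight = self.scale.unsqueeze(1) * self.V               # equals diag(scale) @ V
        return F.linear(x, weight, self.bias)


# Usage: swap FactorizedLinear for nn.Linear inside a coordinate MLP.
layer = FactorizedLinear(2, 64)
out = layer(torch.rand(8, 2))                                    # shape (8, 64)
```

Because diag(scale) @ V reproduces the standard initialization exactly, this layer behaves like nn.Linear at step 0; only the training dynamics change.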
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.