Generalizable Neural Fields as Partially Observed Neural Processes
- URL: http://arxiv.org/abs/2309.06660v1
- Date: Wed, 13 Sep 2023 01:22:16 GMT
- Title: Generalizable Neural Fields as Partially Observed Neural Processes
- Authors: Jeffrey Gu, Kuan-Chieh Wang, Serena Yeung
- Abstract summary: We propose a new paradigm that views the large-scale training of neural representations as a part of a partially-observed neural process framework.
We demonstrate that this approach outperforms both state-of-the-art gradient-based meta-learning approaches and hypernetwork approaches.
- Score: 16.202109517569145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural fields, which represent signals as a function parameterized by a
neural network, are a promising alternative to traditional discrete vector or
grid-based representations. Compared to discrete representations, neural
representations scale well with increasing resolution, are continuous, and can
be many times differentiable. However, given a dataset of signals that we
would like to represent, having to optimize a separate neural field for each
signal is inefficient, and cannot capitalize on shared information or
structures among signals. Existing generalization methods view this as a
meta-learning problem and employ gradient-based meta-learning to learn an
initialization which is then fine-tuned with test-time optimization, or learn
hypernetworks to produce the weights of a neural field. We instead propose a
new paradigm that views the large-scale training of neural representations as a
part of a partially-observed neural process framework, and leverage neural
process algorithms to solve this task. We demonstrate that this approach
outperforms both state-of-the-art gradient-based meta-learning approaches and
hypernetwork approaches.
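As a rough sketch of the proposed viewpoint (illustrative assumptions throughout, not the authors' architecture), the snippet below conditions a coordinate-based neural field on a latent code produced by a permutation-invariant encoder over the partially observed (coordinate, value) pairs, in the spirit of a conditional neural process:

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Permutation-invariant encoder: maps observed (coordinate, value) pairs
    to one latent code by mean-pooling per-pair features (CNP-style)."""
    def __init__(self, coord_dim=2, value_dim=1, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(coord_dim + value_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))

    def forward(self, coords, values):
        return self.net(torch.cat([coords, values], dim=-1)).mean(dim=0)

class ConditionedNeuralField(nn.Module):
    """Coordinate MLP whose prediction is conditioned on the latent code."""
    def __init__(self, coord_dim=2, value_dim=1, latent_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(coord_dim + latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, value_dim))

    def forward(self, coords, z):
        z = z.expand(coords.shape[0], -1)
        return self.net(torch.cat([coords, z], dim=-1))

# One training step on one signal: encode a partially observed context subset,
# then regress all sampled points of the signal.
encoder, field = ContextEncoder(), ConditionedNeuralField()
opt = torch.optim.Adam(list(encoder.parameters()) + list(field.parameters()), lr=1e-4)
coords, values = torch.rand(1024, 2), torch.rand(1024, 1)  # toy 2-D signal samples
ctx = torch.randperm(1024)[:256]                           # the "partial observation"
z = encoder(coords[ctx], values[ctx])
loss = torch.mean((field(coords, z) - values) ** 2)
opt.zero_grad()
loss.backward()
opt.step()
```

Training such a conditioned field across many signals amortizes fitting: a new signal is represented by encoding whatever points of it are observed, with no per-signal test-time optimization.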
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
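As a minimal illustration of what "computational graphs of parameters" can mean in practice (an assumed layout, not the paper's code): one node per neuron with its bias as a node feature, and one directed edge per weight.

```python
import torch
import torch.nn as nn

def mlp_to_graph(mlp):
    """Lay out an MLP as a graph: one node per neuron (bias as node feature),
    one directed edge per weight (weight value as edge feature)."""
    linears = [m for m in mlp if isinstance(m, nn.Linear)]
    sizes = [linears[0].in_features] + [m.out_features for m in linears]
    offsets = [0]
    for s in sizes[:-1]:
        offsets.append(offsets[-1] + s)      # index of each layer's first node
    node_feat = torch.zeros(sum(sizes), 1)   # input nodes keep a zero feature
    src, dst, w = [], [], []
    for layer, m in enumerate(linears):
        node_feat[offsets[layer + 1]:offsets[layer + 1] + m.out_features, 0] = m.bias.detach()
        for j in range(m.out_features):
            for i in range(m.in_features):
                src.append(offsets[layer] + i)
                dst.append(offsets[layer + 1] + j)
                w.append(float(m.weight[j, i]))
    return node_feat, torch.tensor([src, dst]), torch.tensor(w)

mlp = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
nodes, edge_index, edge_weight = mlp_to_graph(mlp)
print(nodes.shape, edge_index.shape, edge_weight.shape)  # 11 nodes, 24 weighted edges
```

A graph neural network operating on such a representation sees the network's parameters in a form that does not depend on how the neurons happen to be ordered.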
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Random Weight Factorization Improves the Training of Continuous Neural Representations [1.911678487931003]
Continuous neural representations have emerged as a powerful and flexible alternative to classical discretized representations of signals.
We propose random weight factorization as a simple drop-in replacement for parameterizing and initializing conventional linear layers.
We show how this factorization alters the underlying loss landscape and effectively enables each neuron in the network to learn using its own self-adaptive learning rate.
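A minimal sketch of a factorized linear layer in this spirit (initialization details here are illustrative assumptions, not the paper's exact recipe):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorizedLinear(nn.Module):
    """Drop-in replacement for nn.Linear whose weight matrix is stored as
    diag(s) @ V: each output neuron's weight row gets its own trainable scale."""
    def __init__(self, in_features, out_features):
        super().__init__()
        base = nn.Linear(in_features, out_features)  # standard initialization
        self.s = nn.Parameter(torch.exp(0.1 * torch.randn(out_features, 1)))  # random per-neuron scales
        self.v = nn.Parameter(base.weight.detach() / self.s.detach())         # so s * v matches base init
        self.bias = nn.Parameter(base.bias.detach())

    def forward(self, x):
        return F.linear(x, self.s * self.v, self.bias)

layer = FactorizedLinear(2, 256)
print(layer(torch.rand(16, 2)).shape)  # torch.Size([16, 256])
```

Because the scale s_k multiplies every weight in neuron k's row, its gradient rescales that neuron's whole update, which is one way to read the "self-adaptive learning rate" effect described above.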
arXiv Detail & Related papers (2022-10-03T23:48:48Z)
- Gaussian Process Surrogate Models for Neural Networks [6.8304779077042515]
In science and engineering, modeling is a methodology used to understand complex systems whose internal processes are opaque.
We construct a class of surrogate models for neural networks using Gaussian processes.
We demonstrate our approach captures existing phenomena related to the spectral bias of neural networks, and then show that our surrogate models can be used to solve practical problems.
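One concrete (and deliberately generic) reading of "surrogate model", not the paper's specific construction: fit a Gaussian process to input/output samples of a trained network and query the GP posterior in place of the network.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.5, variance=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-4):
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf_kernel(x_test, x_train) @ np.linalg.solve(K, y_train)

# Stand-in for a trained network f: R -> R; the GP surrogate is fit to its samples.
f = lambda x: np.tanh(3.0 * x)
x_train = np.random.uniform(-1, 1, size=(64, 1))
x_test = np.linspace(-1, 1, 5)[:, None]
print(gp_posterior_mean(x_train, f(x_train), x_test).ravel())  # surrogate predictions
print(f(x_test).ravel())                                       # network outputs
```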
arXiv Detail & Related papers (2022-08-11T20:17:02Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
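A toy sketch of the vector-quantization idea (illustrative shapes and training relaxation, not the paper's implementation): each grid cell stores logits over a small shared codebook, so at inference only an integer index per cell needs to be kept.

```python
import torch
import torch.nn as nn

class VQFeatureGrid(nn.Module):
    """Compressed feature grid: cells index a shared codebook instead of storing
    their own feature vectors. Trained with a softmax relaxation; at inference
    the argmax index (a few bits per cell) suffices."""
    def __init__(self, num_cells=4096, codebook_size=64, feat_dim=16):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(codebook_size, feat_dim))
        self.logits = nn.Parameter(torch.zeros(num_cells, codebook_size))

    def forward(self, cell_ids, hard=False):
        if hard:  # inference: pure table lookup from the stored integer indices
            return self.codebook[self.logits[cell_ids].argmax(dim=-1)]
        # training: differentiable soft assignment over the codebook
        return torch.softmax(self.logits[cell_ids], dim=-1) @ self.codebook

grid = VQFeatureGrid()
feats_soft = grid(torch.randint(0, 4096, (8,)))  # (8, 16) during training
feats_hard = grid(torch.arange(8), hard=True)    # (8, 16) at inference
```

The memory saving comes from storing log2(codebook_size) bits per cell instead of feat_dim floats, which is how large reductions in grid memory become possible.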
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Meta-Learning Sparse Implicit Neural Representations [69.15490627853629]
Implicit neural representations are a promising new avenue for representing general signals.
The current approach is difficult to scale to a large number of signals or to a large dataset.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
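One way the sparsity could look in code (a rough, assumed sketch using plain magnitude pruning, not the paper's procedure): keep only the largest-magnitude weights of an implicit-representation MLP and record the resulting masks, which a meta-learner could then share across signals.

```python
import torch
import torch.nn as nn

def magnitude_prune(model, keep_ratio=0.2):
    """Zero out all but the largest-magnitude weights and return binary masks."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:  # skip biases
            continue
        k = max(1, int(keep_ratio * p.numel()))
        threshold = p.detach().abs().flatten().topk(k).values.min()
        masks[name] = (p.detach().abs() >= threshold).float()
        p.data *= masks[name]
    return masks

inr = nn.Sequential(nn.Linear(2, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 3))
masks = magnitude_prune(inr, keep_ratio=0.2)
print({name: int(m.sum()) for name, m in masks.items()})  # surviving weights per layer
```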
arXiv Detail & Related papers (2021-10-27T18:02:53Z)
- MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
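A compact sketch of that recipe (MAML-style, with illustrative hyperparameters and random stand-in data; requires torch>=2.0 for torch.func): the inner loop specializes a shared initialization to one shape's SDF samples, and the outer loop updates the initialization through the adaptation.

```python
import torch
import torch.nn as nn

sdf_net = nn.Sequential(nn.Linear(3, 256), nn.ReLU(),
                        nn.Linear(256, 256), nn.ReLU(),
                        nn.Linear(256, 1))
meta_opt = torch.optim.Adam(sdf_net.parameters(), lr=1e-4)
inner_lr, inner_steps = 1e-2, 3

def inner_adapt(points, sdf_values):
    """A few SGD steps on one shape, keeping the graph so the outer loop can
    differentiate through the adaptation (second-order MAML)."""
    fast = dict(sdf_net.named_parameters())
    for _ in range(inner_steps):
        pred = torch.func.functional_call(sdf_net, fast, (points,))
        loss = ((pred - sdf_values) ** 2).mean()
        grads = torch.autograd.grad(loss, list(fast.values()), create_graph=True)
        fast = {k: v - inner_lr * g for (k, v), g in zip(fast.items(), grads)}
    return fast

# One meta-training step over a small batch of shapes.
meta_loss = 0.0
for _ in range(4):
    pts, sdf = torch.rand(512, 3), torch.rand(512, 1)
    fast = inner_adapt(pts[:256], sdf[:256])            # adapt on a support split
    pred = torch.func.functional_call(sdf_net, fast, (pts[256:],))
    meta_loss = meta_loss + ((pred - sdf[256:]) ** 2).mean()
meta_opt.zero_grad()
meta_loss.backward()
meta_opt.step()
```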
arXiv Detail & Related papers (2020-06-17T05:14:53Z)