Versatile Neural Processes for Learning Implicit Neural Representations
- URL: http://arxiv.org/abs/2301.08883v1
- Date: Sat, 21 Jan 2023 04:08:46 GMT
- Title: Versatile Neural Processes for Learning Implicit Neural Representations
- Authors: Zongyu Guo, Cuiling Lan, Zhizheng Zhang, Zhibo Chen, Yan Lu
- Abstract summary: We propose Versatile Neural Processes (VNP), which largely increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer and informative context tokens, relieving the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
- Score: 57.090658265140384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Representing a signal as a continuous function parameterized by neural
network (a.k.a. Implicit Neural Representations, INRs) has attracted increasing
attention in recent years. Neural Processes (NPs), which model the
distributions over functions conditioned on partial observations (context set),
provide a practical solution for fast inference of continuous functions.
However, existing NP architectures suffer from inferior modeling capability for
complex signals. In this paper, we propose an efficient NP framework dubbed
Versatile Neural Processes (VNP), which largely increases the capability of
approximating functions. Specifically, we introduce a bottleneck encoder that
produces fewer and informative context tokens, relieving the high computational
cost while providing high modeling capability. At the decoder side, we
hierarchically learn multiple global latent variables that jointly model the
global structure and the uncertainty of a function, enabling our model to
capture the distribution of complex signals. We demonstrate the effectiveness
of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
Particularly, our method shows promise in learning accurate INRs w.r.t. a 3D
scene without further finetuning.
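To make the two components named in the abstract more concrete, the following is a minimal PyTorch-style sketch: a bottleneck encoder that cross-attends from a small set of learned tokens to the context set (yielding fewer, informative context tokens), and a pair of hierarchically sampled global latent variables conditioning a coordinate decoder. All module names, dimensions and wiring choices below are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only: every name, size and design choice below is an
# assumption inferred from the abstract, not the authors' code.
import torch
import torch.nn as nn


class BottleneckEncoder(nn.Module):
    """Compress a variable-size context set into a few learned tokens via
    cross-attention, so later attention cost scales with num_tokens, not N_ctx."""

    def __init__(self, dim=128, num_tokens=16, num_heads=4):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(num_tokens, dim))  # learned queries
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, context):                        # context: (B, N_ctx, dim)
        queries = self.tokens.expand(context.size(0), -1, -1)
        out, _ = self.attn(queries, context, context)  # (B, num_tokens, dim)
        return out


class HierarchicalLatents(nn.Module):
    """Two global latent variables sampled one after the other, a stand-in for
    the paper's hierarchy of global latents for structure and uncertainty."""

    def __init__(self, dim=128, z_dim=64):
        super().__init__()
        self.to_z1 = nn.Linear(dim, 2 * z_dim)           # mean / log-variance of z1
        self.to_z2 = nn.Linear(dim + z_dim, 2 * z_dim)   # z2 is conditioned on z1

    @staticmethod
    def _sample(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, tokens):                        # tokens: (B, T, dim)
        pooled = tokens.mean(dim=1)
        z1 = self._sample(self.to_z1(pooled))
        z2 = self._sample(self.to_z2(torch.cat([pooled, z1], dim=-1)))
        return torch.cat([z1, z2], dim=-1)            # (B, 2 * z_dim)


class VNPSketch(nn.Module):
    """Predict signal values at target coordinates, conditioned on the
    bottleneck tokens and the sampled global latents (point estimates only;
    a real NP decoder would also output a predictive variance)."""

    def __init__(self, coord_dim=2, value_dim=3, dim=128, z_dim=64):
        super().__init__()
        self.embed = nn.Linear(coord_dim + value_dim, dim)   # (coordinate, value) pairs
        self.encoder = BottleneckEncoder(dim)
        self.latents = HierarchicalLatents(dim, z_dim)
        self.decoder = nn.Sequential(
            nn.Linear(coord_dim + 2 * z_dim, dim), nn.ReLU(), nn.Linear(dim, value_dim)
        )

    def forward(self, ctx_xy, tgt_x):   # ctx_xy: (B, N_ctx, coord_dim+value_dim)
        tokens = self.encoder(self.embed(ctx_xy))
        z = self.latents(tokens)
        z = z.unsqueeze(1).expand(-1, tgt_x.size(1), -1)
        return self.decoder(torch.cat([tgt_x, z], dim=-1))   # (B, N_tgt, value_dim)


# usage: 200 observed pixels -> predictions at 500 query coordinates
pred = VNPSketch()(torch.randn(4, 200, 5), torch.rand(4, 500, 2))   # (4, 500, 3)
```

Because the decoder and latent path only ever see the small set of bottleneck tokens, the cost of conditioning no longer grows with the size of the context set, which matches the efficiency argument in the abstract.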
Related papers
- Implicit Neural Representations with Fourier Kolmogorov-Arnold Networks [4.499833362998488]
Implicit neural representations (INRs) use neural networks to provide continuous and resolution-independent representations of complex signals.
The proposed FKAN utilizes learnable activation functions modeled as Fourier series in the first layer to effectively control and learn the task-specific frequency components.
Experimental results show that our proposed FKAN model outperforms three state-of-the-art baseline schemes.
arXiv Detail & Related papers (2024-09-14T05:53:33Z)
- Convolutional Conditional Neural Processes [6.532867867011488]
This thesis advances neural processes in three ways.
ConvNPs improve data efficiency by building in a symmetry called translation equivariance.
GNPs directly parametrise dependencies in the predictions of a neural process.
AR CNPs train a neural process without any modifications to the model or training procedure and, at test time, roll out the model in an autoregressive fashion.
arXiv Detail & Related papers (2024-08-18T19:53:38Z)
- Attention Beats Linear for Fast Implicit Neural Representation Generation [13.203243059083533]
We propose Attention-based Localized INR (ANR) composed of a localized attention layer (LAL) and a global representation vector.
With instance-specific representation and instance-agnostic ANR parameters, the target signals are well reconstructed as a continuous function.
arXiv Detail & Related papers (2024-07-22T03:52:18Z)
- Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multi-media data via multi-layer perceptrons.
Existing works manipulate such continuous representations via processing on their discretized instance.
We propose an implicit neural signal processing network, dubbed INSP-Net, via differential operators on INR.
arXiv Detail & Related papers (2022-10-17T06:29:07Z)
- Neural Implicit Dictionary via Mixture-of-Expert Training [111.08941206369508]
We present a generic INR framework that achieves both data and training efficiency by learning a Neural Implicit Dictionary (NID).
Our NID assembles a group of coordinate-based subnetworks which are tuned to span the desired function space.
Our experiments show that NID speeds up the reconstruction of 2D images or 3D scenes by 2 orders of magnitude while requiring up to 98% less input data.
arXiv Detail & Related papers (2022-07-08T05:07:19Z)
- Neural Diffusion Processes [12.744250155946503]
We propose Neural Diffusion Processes (NDPs), a novel approach that learns to sample from a rich distribution over functions through its finite marginals.
We empirically show that NDPs can capture functional distributions close to the true Bayesian posterior.
NDPs enable a variety of downstream tasks, including regression, implicit hyperparameter marginalisation, non-Gaussian posterior prediction and global optimisation.
arXiv Detail & Related papers (2022-06-08T16:13:04Z)
- CDiNN - Convex Difference Neural Networks [0.8122270502556374]
Neural networks with ReLU activation functions have been shown to be universal function approximators, but they learn function mappings as non-smooth functions.
A newer architecture, Input Convex Neural Networks (ICNNs), learns the output as a convex function of the inputs.
arXiv Detail & Related papers (2021-03-31T17:31:16Z)
- UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions upon each event.
arXiv Detail & Related papers (2020-07-28T09:31:56Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Deep Polynomial Neural Networks [77.70761658507507]
$\Pi$-Nets are a new class of function approximators based on polynomial expansions.
$\Pi$-Nets produce state-of-the-art results in three challenging tasks, i.e. image generation, face verification and 3D mesh representation learning; a minimal sketch of the polynomial-expansion idea appears after this list.
arXiv Detail & Related papers (2020-06-20T16:23:32Z)
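As a companion to the $\Pi$-Nets entry above, here is a minimal sketch of a polynomial-expansion block in which the output is a polynomial of the input built by repeated Hadamard products; the recursion, widths and degree below are illustrative assumptions rather than the paper's exact parameterisation.

```python
# Illustrative "Pi-Net"-style polynomial expansion block (assumed details).
# The recursion x_k = (U_k z) * x_{k-1} + x_{k-1} raises the polynomial degree
# of x in the input z by one per step.
import torch
import torch.nn as nn


class PolynomialBlock(nn.Module):
    def __init__(self, in_dim=64, hidden=128, out_dim=10, degree=3):
        super().__init__()
        self.first = nn.Linear(in_dim, hidden)                    # degree-1 term
        self.branches = nn.ModuleList(
            nn.Linear(in_dim, hidden) for _ in range(degree - 1)
        )
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, z):                 # z: (B, in_dim)
        x = self.first(z)
        for branch in self.branches:
            x = branch(z) * x + x         # Hadamard product bumps the degree
        return self.head(x)               # output is a degree-`degree` polynomial of z


# usage
y = PolynomialBlock()(torch.randn(8, 64))    # (8, 10)
```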