NP-PROV: Neural Processes with Position-Relevant-Only Variances
- URL: http://arxiv.org/abs/2007.00767v1
- Date: Mon, 15 Jun 2020 06:11:21 GMT
- Title: NP-PROV: Neural Processes with Position-Relevant-Only Variances
- Authors: Xuesong Wang, Lina Yao, Xianzhi Wang, Feiping Nie
- Abstract summary: We present a new member named Neural Processes with Position-Relevant-Only Variances (NP-PROV).
NP-PROV hypothesizes that a target point close to a context point has small uncertainty, regardless of the function value at that position.
Our evaluation on synthetic and real-world datasets reveals that NP-PROV can achieve state-of-the-art likelihood while retaining a bounded variance.
- Score: 113.20013269514327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Process (NP) families encode distributions over functions into a latent representation, given context data, and decode the posterior mean and variance at unknown locations. Since the mean and variance are derived from the same latent space, NPs may fail on out-of-domain tasks where fluctuations in function values amplify the model uncertainty. We present a new member named Neural Processes with Position-Relevant-Only Variances (NP-PROV). NP-PROV hypothesizes that a target point close to a context point has small uncertainty, regardless of the function value at that position. The resulting approach derives the mean and variance separately: the mean from a function-value-related space and the variance from a position-related-only latent space. Our evaluation on synthetic and real-world datasets reveals that NP-PROV achieves state-of-the-art likelihood while retaining a bounded variance when drifts exist in the function values.
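To make the position-relevant-only idea concrete, here is a minimal, hypothetical PyTorch sketch (the function names, the RBF closeness measure, and the kernel-smoothed mean are illustrative stand-ins, not the paper's architecture): the mean is read off a branch that sees the context function values, while the variance depends only on how far each target position sits from the context positions, so it stays bounded when the function values drift.

```python
import torch

def position_only_variance(x_context, x_target, lengthscale=1.0):
    # Variance depends only on how close a target position is to the nearest
    # context position, never on the observed y values, so it stays bounded
    # even when the function values drift out of the training domain.
    d2 = torch.cdist(x_target, x_context).pow(2)                          # (T, C)
    closeness = torch.exp(-d2 / (2 * lengthscale ** 2)).max(dim=1).values # (T,)
    return (1.0 - closeness).unsqueeze(-1)                                # (T, 1), in [0, 1]

def kernel_mean(x_context, y_context, x_target, lengthscale=1.0):
    # Stand-in for the function-value-related branch: a kernel-weighted
    # average of the context y values.
    w = torch.softmax(-torch.cdist(x_target, x_context).pow(2)
                      / (2 * lengthscale ** 2), dim=1)                    # (T, C)
    return w @ y_context                                                  # (T, 1)

xc = torch.tensor([[0.0], [1.0], [2.0]])
yc = torch.tensor([[0.0], [1.0], [4.0]])
xt = torch.tensor([[0.5], [5.0]])           # one near target, one far target
mu, sigma2 = kernel_mean(xc, yc, xt), position_only_variance(xc, xt)
print(mu.squeeze(), sigma2.squeeze())       # the far target keeps a bounded variance near 1
```

In this toy version a target far from every context point gets a variance close to 1 no matter how large the nearby y values are, which is the behavior the abstract describes.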
Related papers
- Martingale Posterior Neural Processes [14.913697718688931]
A Neural Process (NP) estimates a process implicitly defined with neural networks given a stream of data.
We take a different approach based on the martingale posterior, a recently developed alternative to Bayesian inference.
We show that the uncertainty in the generated future data actually corresponds to the uncertainty of the implicitly defined Bayesian posteriors.
arXiv Detail & Related papers (2023-04-19T05:58:18Z)
- Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which largely increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer but more informative context tokens, relieving the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
arXiv Detail & Related papers (2023-01-21T04:08:46Z)
- Transformer Neural Processes: Uncertainty-Aware Meta Learning Via Sequence Modeling [26.377099481072992]
We propose Transformer Neural Processes (TNPs) for uncertainty-aware meta learning.
We learn TNPs via an autoregressive likelihood-based objective and instantiate it with a novel transformer-based architecture (a toy sketch of this autoregressive factorization appears after this list).
We show that TNPs achieve state-of-the-art performance on various benchmark problems.
arXiv Detail & Related papers (2022-07-09T02:28:58Z)
- Global Convolutional Neural Processes [52.85558458799716]
We build a member, the Global Convolutional Neural Process (GBCoNP), that achieves the SOTA log-likelihood among latent NPFs.
It designs a global uncertainty representation p(z), an aggregation over a discretized input space (a toy sketch of such a gridded representation appears after this list).
The learnt prior is analyzed on a variety of scenarios, including 1D, 2D, and a newly proposed spatial-temporal COVID dataset.
arXiv Detail & Related papers (2021-09-02T03:32:50Z)
- Federated Functional Gradient Boosting [75.06942944563572]
We study functional minimization in Federated Learning.
For both of the proposed algorithms, FFGB.C and FFGB.L, the radii of convergence shrink to zero as the client feature distributions become more homogeneous.
arXiv Detail & Related papers (2021-03-11T21:49:19Z)
- Probabilistic Numeric Convolutional Neural Networks [80.42120128330411]
Continuous input signals like images and time series that are irregularly sampled or have missing values are challenging for existing deep learning methods.
We propose Probabilistic Numeric Convolutional Neural Networks, which represent features as Gaussian processes (GPs).
We then define a convolutional layer as the evolution of a PDE defined on this GP, followed by a nonlinearity.
In experiments we show that our approach yields a $3\times$ reduction of error from the previous state of the art on the SuperPixel-MNIST dataset and competitive performance on the medical time series dataset PhysioNet 2012.
arXiv Detail & Related papers (2020-10-21T10:08:21Z)
- Doubly Stochastic Variational Inference for Neural Processes with Hierarchical Latent Variables [37.43541345780632]
We present a new variant of the Neural Process (NP) model that we call the Doubly Stochastic Variational Neural Process (DSVNP).
This model combines a global latent variable and local latent variables for prediction (a toy sketch of this combination appears after this list). We evaluate this model in several experiments, and our results demonstrate competitive prediction performance in multi-output regression and uncertainty estimation in classification.
arXiv Detail & Related papers (2020-08-21T13:32:12Z)
- Bootstrapping Neural Processes [114.97111530885093]
Neural Processes (NPs) implicitly define a broad class of processes with neural networks.
NPs still rely on the assumption that uncertainty in a process is modeled by a single latent variable.
We propose the Bootstrapping Neural Process (BNP), a novel extension of the NP family using the bootstrap (a minimal sketch of the context-resampling idea appears after this list).
arXiv Detail & Related papers (2020-08-07T02:23:34Z)
- Meta-Learning Stationary Stochastic Process Prediction with Convolutional Neural Processes [32.02612871707347]
We propose ConvNP, which endows Neural Processes (NPs) with translation equivariance and extends convolutional conditional NPs to allow for dependencies in the predictive distribution.
We demonstrate the strong performance and generalization capabilities of ConvNPs on 1D regression, image completion, and various tasks with real-world spatio-temporal data.
arXiv Detail & Related papers (2020-07-02T18:25:27Z)
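For the Transformer Neural Processes entry above, the sketch below only illustrates the autoregressive likelihood factorization in its simplest form; toy_predictor, the fixed variance, and all shapes are assumptions, and the actual TNP replaces the explicit loop with a transformer over the joint context/target sequence.

```python
import torch
from torch.distributions import Normal

def toy_predictor(xc, yc, xt, ls=1.0):
    # Stand-in for any NP predictive head: kernel-smoothed mean, fixed variance.
    w = torch.softmax(-torch.cdist(xt, xc).pow(2) / (2 * ls ** 2), dim=1)
    return w @ yc, torch.full((xt.shape[0], 1), 0.1)

def autoregressive_log_likelihood(xc, yc, x_target, y_target, predict_fn=toy_predictor):
    # TNP-style factorization:
    #   p(y_1:T | x_1:T, D_c) = prod_t p(y_t | x_t, D_c, x_<t, y_<t)
    log_lik = torch.tensor(0.0)
    for t in range(x_target.shape[0]):
        xt, yt = x_target[t:t + 1], y_target[t:t + 1]
        mu, var = predict_fn(xc, yc, xt)
        log_lik = log_lik + Normal(mu, var.clamp_min(1e-6).sqrt()).log_prob(yt).sum()
        # The just-scored target joins the conditioning set for later targets.
        xc, yc = torch.cat([xc, xt]), torch.cat([yc, yt])
    return log_lik

xc = torch.tensor([[0.0], [1.0]]); yc = torch.tensor([[0.0], [1.0]])
xt = torch.tensor([[0.5], [1.5]]); yt = torch.tensor([[0.4], [1.6]])
print(autoregressive_log_likelihood(xc, yc, xt, yt))
```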
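For the Global Convolutional Neural Processes entry, this hypothetical sketch shows one way a functional representation can be aggregated on a discretized input space: an RBF set-convolution of the context onto a uniform 1D grid, followed by a simple pooled summary. The grid size, lengthscale, and mean-pooling are illustrative choices, not the paper's design.

```python
import torch

def set_conv_to_grid(x_context, y_context, grid, lengthscale=0.1):
    # Smear the off-the-grid context set onto a uniform grid with an RBF kernel,
    # producing a density channel and a kernel-weighted signal channel.
    w = torch.exp(-torch.cdist(grid, x_context).pow(2) / (2 * lengthscale ** 2))  # (G, C)
    density = w.sum(dim=1, keepdim=True)                                          # (G, 1)
    signal = (w @ y_context) / density.clamp_min(1e-6)                            # (G, 1)
    return torch.cat([density, signal], dim=1)                                    # (G, 2)

grid = torch.linspace(-2.0, 2.0, 64).unsqueeze(-1)        # discretized input space
xc = torch.tensor([[0.0], [1.0]]); yc = torch.tensor([[0.5], [-1.0]])
rep = set_conv_to_grid(xc, yc, grid)                       # gridded functional representation
z_summary = rep.mean(dim=0)                                # toy global aggregation over the grid
```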
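For the Doubly Stochastic Variational Inference entry, the following sketch combines one global latent sampled from the whole context with a local latent sampled per target point, in the spirit of the hierarchical latent variables described above. The linear encoders, the decoder, and the latent sizes are placeholder assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

d_z, d_u = 8, 8                              # hypothetical latent sizes, 1-D inputs/outputs
enc_global = nn.Linear(2, 2 * d_z)           # context summary -> (mu_z, logvar_z)
enc_local = nn.Linear(1 + d_z, 2 * d_u)      # (x_t, z) -> (mu_u, logvar_u)
decoder = nn.Linear(1 + d_z + d_u, 2)        # (x_t, z, u_t) -> (mu_y, logvar_y)

def predict(x_context, y_context, x_target):
    # Global latent z: one sample shared by every target point.
    r = torch.cat([x_context, y_context], dim=-1).mean(dim=0, keepdim=True)   # (1, 2)
    mu_z, logvar_z = enc_global(r).chunk(2, dim=-1)
    z = Normal(mu_z, (0.5 * logvar_z).exp()).rsample().expand(x_target.shape[0], -1)
    # Local latent u_t: one sample per target, conditioned on z and x_t.
    mu_u, logvar_u = enc_local(torch.cat([x_target, z], dim=-1)).chunk(2, dim=-1)
    u = Normal(mu_u, (0.5 * logvar_u).exp()).rsample()
    mu_y, logvar_y = decoder(torch.cat([x_target, z, u], dim=-1)).chunk(2, dim=-1)
    return mu_y, logvar_y.exp()

mu, var = predict(torch.rand(3, 1), torch.rand(3, 1), torch.rand(5, 1))   # (5, 1) mean, variance
```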
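For the Bootstrapping Neural Processes entry, the sketch below shows the context-resampling idea: bootstrap the context set, run an NP-style predictor on each resample, and read uncertainty off the spread of the predictions. Any predictor with the (x_context, y_context, x_target) interface works here; the toy_predictor from the Transformer Neural Processes sketch above is one option.

```python
import torch

def bootstrap_predict(x_context, y_context, x_target, predict_fn, n_boot=20):
    # Resample the context set with replacement and run the base predictor on
    # each resample; the spread across resamples expresses functional
    # uncertainty instead of a single global latent variable.
    n = x_context.shape[0]
    means = []
    for _ in range(n_boot):
        idx = torch.randint(0, n, (n,))                           # bootstrap context indices
        mu, _ = predict_fn(x_context[idx], y_context[idx], x_target)
        means.append(mu)
    means = torch.stack(means)                                     # (n_boot, T, 1)
    return means.mean(dim=0), means.var(dim=0)

# Example with the toy_predictor defined in the earlier sketch:
# mu, var = bootstrap_predict(xc, yc, xt, toy_predictor)
```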