Bootstrapping Neural Processes
- URL: http://arxiv.org/abs/2008.02956v2
- Date: Tue, 27 Oct 2020 04:06:35 GMT
- Title: Bootstrapping Neural Processes
- Authors: Juho Lee, Yoonho Lee, Jungtaek Kim, Eunho Yang, Sung Ju Hwang, Yee
Whye Teh
- Abstract summary: Neural Processes (NPs) implicitly define a broad class of processes with neural networks.
NPs still rely on an assumption that uncertainty in processes is modeled by a single latent variable.
We propose the Bootstrapping Neural Process (BNP), a novel extension of the NP family using the bootstrap.
- Score: 114.97111530885093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unlike traditional statistical modeling, in which a user typically
hand-specifies a prior, Neural Processes (NPs) implicitly define a broad class of
stochastic processes with neural networks. Given a data stream, an NP learns a
stochastic process that best describes the data. While this "data-driven" way
of learning stochastic processes has proven capable of handling various types of
data, NPs still rely on the assumption that uncertainty in stochastic processes
is modeled by a single latent variable, which potentially limits flexibility.
To this end, we propose the Bootstrapping Neural Process (BNP), a novel
extension of the NP family using the bootstrap. The bootstrap is a classical
data-driven technique for estimating uncertainty, which allows BNP to learn the
stochasticity in NPs without assuming a particular form. We demonstrate the
efficacy of BNP on various types of data and its robustness in the presence of
model-data mismatch.
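To make the bootstrap idea concrete, below is a minimal sketch of the classical paired bootstrap applied to a context set: resample the context pairs with replacement, re-predict, and read uncertainty off the spread of the ensemble. The fit_predict interface and the k-NN regressor are hypothetical stand-ins for an NP-style model; this illustrates the classical technique the paper builds on, not the BNP architecture itself.

```python
import numpy as np

def bootstrap_predict(context_x, context_y, target_x, fit_predict, n_boot=50):
    """Paired bootstrap over a context set.

    fit_predict(cx, cy, tx) is any regressor that conditions on a context
    set and predicts at tx; here it is a hypothetical stand-in for one
    forward pass of an NP-style model.
    """
    n = len(context_x)
    preds = []
    for _ in range(n_boot):
        idx = np.random.randint(0, n, size=n)       # resample pairs with replacement
        preds.append(fit_predict(context_x[idx], context_y[idx], target_x))
    preds = np.stack(preds)                         # (n_boot, n_target)
    return preds.mean(axis=0), preds.std(axis=0)    # ensemble mean and spread

# Toy usage with a k-NN "model" standing in for the NP.
def knn_predict(cx, cy, tx, k=3):
    d = np.abs(tx[:, None] - cx[None, :])           # pairwise distances
    nn = np.argsort(d, axis=1)[:, :k]               # k nearest context points
    return cy[nn].mean(axis=1)

rng = np.random.default_rng(0)
cx = rng.uniform(-2, 2, 30)
cy = np.sin(cx) + 0.1 * rng.standard_normal(30)
tx = np.linspace(-2, 2, 50)
mean, std = bootstrap_predict(cx, cy, tx, knn_predict)
```

Unlike a single latent variable, the spread of the bootstrap ensemble places no parametric assumption on the form of the uncertainty, which is the flexibility the abstract refers to.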
Related papers
- In-Context In-Context Learning with Transformer Neural Processes [50.57807892496024]
We develop the in-context in-context learning pseudo-token TNP (ICICL-TNP).
The ICICL-TNP is capable of conditioning on both sets of datapoints and sets of datasets, enabling it to perform in-context in-context learning.
We demonstrate the importance of in-context in-context learning and the effectiveness of the ICICL-TNP in a number of experiments.
arXiv Detail & Related papers (2024-06-19T12:26:36Z)
- Martingale Posterior Neural Processes [14.913697718688931]
Given a stream of data, a Neural Process (NP) estimates a stochastic process implicitly defined with neural networks.
We take a different approach based on the martingale posterior, a recently developed alternative to Bayesian inference.
We show that the uncertainty in the generated future data actually corresponds to the uncertainty of the implicitly defined Bayesian posteriors (a toy numerical sketch of the idea follows below).
arXiv Detail & Related papers (2023-04-19T05:58:18Z)
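As a rough numerical illustration of the martingale posterior idea mentioned above: uncertainty comes from repeatedly forward-sampling imagined future data from a predictive and recomputing the estimate of interest on the augmented data. The Pólya-urn-style empirical predictive below is a classical toy stand-in; the paper's approach generates future data with the NP itself.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=20)    # observed data

# Martingale-posterior-style uncertainty for the mean (toy version):
# forward-sample imagined future observations from a simple predictive,
# recompute the estimate on observed + imagined data, and repeat.
estimates = []
for _ in range(500):
    pool = list(data)
    for _ in range(200):
        pool.append(pool[rng.integers(len(pool))])  # Polya-urn empirical predictive
    estimates.append(np.mean(pool))

# The spread of the recomputed estimates plays the role of posterior uncertainty.
print(np.mean(estimates), np.std(estimates))
```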
- Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which greatly increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer, more informative context tokens, relieving the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
arXiv Detail & Related papers (2023-01-21T04:08:46Z)
- Latent Bottlenecked Attentive Neural Processes [71.18817592128207]
We present Latent Bottlenecked Attentive Neural Processes (LBANPs)
LBANPs have a querying computational complexity that is independent of the number of context datapoints (a sketch of this idea follows below).
We show LBANPs achieve results competitive with the state-of-the-art on meta-regression, image completion, and contextual multi-armed bandits.
arXiv Detail & Related papers (2022-11-15T19:21:41Z)
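A minimal numpy sketch of the latent-bottleneck querying described in the LBANP entry above (the single-head attention and random weights are simplifications; the real architecture uses learned projections and multiple layers): the context is compressed once into a fixed number of latent summaries, so each subsequent query touches only the latents, at a cost independent of the context size.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, keys, values):
    # Single-head dot-product attention: softmax(Q K^T / sqrt(d)) V.
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    return softmax(scores, axis=-1) @ values

d, L, N = 16, 8, 10_000                 # feature dim, num latents, num context points
rng = np.random.default_rng(0)
context = rng.standard_normal((N, d))   # encoded (x, y) context tokens
latents = rng.standard_normal((L, d))   # fixed-size latent array (learned in practice)

# One-time encoding: compress N context tokens into L latent summaries, cost O(N * L).
summary = cross_attend(latents, context, context)

# Querying: each target attends only to the L summaries, cost O(L) per query,
# independent of the context size N.
targets = rng.standard_normal((5, d))
out = cross_attend(targets, summary, summary)
print(out.shape)                        # (5, 16)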
- Conditional Neural Processes for Molecules [0.0]
Neural processes (NPs) are models for transfer learning with properties reminiscent of Gaussian Processes (GPs).
This paper applies the conditional neural process (CNP) to DOCKSTRING, a dataset of docking scores for benchmarking ML models.
CNPs show competitive performance in few-shot learning tasks relative to supervised learning baselines common in QSAR modelling, as well as to an alternative transfer learning model based on pre-training and refining neural network regressors.
arXiv Detail & Related papers (2022-10-17T16:10:12Z)
- Transformer Neural Processes: Uncertainty-Aware Meta Learning Via Sequence Modeling [26.377099481072992]
We propose Transformer Neural Processes (TNPs) for uncertainty-aware meta learning.
We learn TNPs via an autoregressive likelihood-based objective and instantiate it with a novel transformer-based architecture (a sketch of the token masking idea follows below).
We show that TNPs achieve state-of-the-art performance on various benchmark problems.
arXiv Detail & Related papers (2022-07-09T02:28:58Z)
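A sketch of the token-masking idea behind the autoregressive objective mentioned in the TNP entry above, assuming the common layout of context tokens followed by target tokens; this mask construction is an illustration, not the paper's exact implementation.

```python
import numpy as np

def tnp_style_mask(n_ctx, n_tgt, autoregressive=True):
    """Boolean attention mask (True = may attend) for a sequence of
    n_ctx context tokens followed by n_tgt target tokens.

    Context tokens attend to all context tokens; target tokens attend
    to the context and, in the autoregressive variant, to earlier targets.
    """
    n = n_ctx + n_tgt
    mask = np.zeros((n, n), dtype=bool)
    mask[:, :n_ctx] = True                          # everyone sees the context
    if autoregressive:
        for i in range(n_tgt):                      # target i sees targets 0..i
            mask[n_ctx + i, n_ctx:n_ctx + i + 1] = True
    else:
        idx = np.arange(n_tgt)
        mask[n_ctx + idx, n_ctx + idx] = True       # each target sees only itself
    return mask

print(tnp_style_mask(3, 2).astype(int))
```

Under this layout the autoregressive likelihood factorizes over targets, with each target's prediction conditioned on the context and previously observed targets.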
- NP-Match: When Neural Processes meet Semi-Supervised Learning [133.009621275051]
Semi-supervised learning (SSL) has been widely explored in recent years, and it is an effective way of leveraging unlabeled data to reduce the reliance on labeled data.
In this work, we adapt neural processes (NPs) to the semi-supervised image classification task, resulting in a new method named NP-Match.
arXiv Detail & Related papers (2022-07-03T15:24:31Z)
- Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data points (a toy sketch of the distribution-over-ODEs idea follows below).
arXiv Detail & Related papers (2021-03-23T09:32:06Z)
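A toy sketch of the NDP premise that a latent variable induces a distribution over ODEs and hence over trajectories; the linear-plus-sinusoid dynamics below are a hypothetical stand-in for the neural network an NDP would learn.

```python
import numpy as np

def sample_trajectory(y0, ts, z):
    """Euler-integrate dy/dt = f(y, t; z) for one latent sample z.

    f is a toy stand-in for the learned network; the latent z induces
    a distribution over ODEs, hence over trajectories.
    """
    def f(y, t):
        return z[0] * y + z[1] * np.sin(t)          # hypothetical toy dynamics
    ys = [y0]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        ys.append(ys[-1] + (t1 - t0) * f(ys[-1], t0))
    return np.array(ys)

rng = np.random.default_rng(0)
ts = np.linspace(0.0, 5.0, 100)
# Each latent sample defines a different ODE; the ensemble of trajectories
# is a sample from the induced distribution over dynamics.
trajs = [sample_trajectory(1.0, ts, rng.standard_normal(2)) for _ in range(10)]
```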
- The Gaussian Neural Process [39.81327564209865]
We provide a rigorous analysis of the standard maximum-likelihood objective used to train conditional NPs.
We propose a new member of the Neural Process family called the Gaussian Neural Process (GNP), which models predictive correlations, incorporates translation equivariance, provides universal approximation guarantees, and demonstrates encouraging performance (the contrast with mean-field predictions is sketched below).
arXiv Detail & Related papers (2021-01-10T19:15:27Z)
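The GNP entry above hinges on modeling predictive correlations. A minimal contrast, using toy numbers: mean-field NP-style predictions factorize across target points, while a GNP-style prediction carries a full covariance, so samples are correlated across targets.

```python
import numpy as np

rng = np.random.default_rng(0)
mean = np.zeros(3)

# Mean-field prediction: independent Gaussians per target point.
var = np.ones(3)
mf_samples = mean + np.sqrt(var) * rng.standard_normal((5, 3))

# Correlated prediction: a full covariance ties target points together,
# so sampled functions are coherent across targets.
cov = np.array([[1.0, 0.8, 0.5],
                [0.8, 1.0, 0.8],
                [0.5, 0.8, 1.0]])
corr_samples = rng.multivariate_normal(mean, cov, size=5)
```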
- Message Passing Neural Processes [3.0969191504482247]
We introduce Message Passing Neural Processes (MPNPs), which explicitly make use of relational structure within the model.
MPNPs thrive at lower sampling rates on existing benchmarks and on newly proposed CA and Cora-Branched tasks.
We report strong generalisation over density-based CA rulesets and significant gains in challenging arbitrary-labelling and few-shot learning setups.
arXiv Detail & Related papers (2020-09-29T09:40:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.