Amortised Inference in Neural Networks for Small-Scale Probabilistic
Meta-Learning
- URL: http://arxiv.org/abs/2310.15786v1
- Date: Tue, 24 Oct 2023 12:34:25 GMT
- Title: Amortised Inference in Neural Networks for Small-Scale Probabilistic
Meta-Learning
- Authors: Matthew Ashman, Tommy Rochussen and Adrian Weller
- Abstract summary: A global inducing point variational approximation for Bayesian neural networks (BNNs) is based on using a set of inducing inputs to construct a series of conditional distributions.
Our key insight is that these inducing inputs can be replaced by the actual data, such that the variational distribution consists of a set of approximate likelihoods for each datapoint.
By training this inference network across related datasets, we can meta-learn Bayesian inference over task-specific BNNs.
- Score: 41.85464593920907
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The global inducing point variational approximation for BNNs is based on
using a set of inducing inputs to construct a series of conditional
distributions that accurately approximate the conditionals of the true
posterior distribution. Our key insight is that these inducing inputs can be
replaced by the actual data, such that the variational distribution consists of
a set of approximate likelihoods for each datapoint. This structure lends
itself to amortised inference, in which the parameters of each approximate
likelihood are obtained by passing each datapoint through a meta-model known as
the inference network. By training this inference network across related
datasets, we can meta-learn Bayesian inference over task-specific BNNs.
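The structure described above, a prior multiplied by one amortised approximate-likelihood factor per datapoint, can be made concrete with a small sketch. The Python example below is illustrative only and is not the authors' code: it assumes a fixed feature map and a Gaussian last layer, so that the product of the prior and the per-datapoint factors produced by a toy inference network yields a closed-form Gaussian posterior. The names features, infer_net, the pseudo-targets z_n, and the precisions lam_n are assumptions made for the illustration.

```python
# Illustrative sketch only (not the authors' code): amortised inference for a
# Bayesian last layer. A toy "inference network" maps each datapoint (x_n, y_n)
# to the parameters of a per-datapoint Gaussian approximate likelihood
# t_n(w) = N(z_n; phi(x_n)^T w, lam_n^{-1}); the variational posterior is the
# prior times the product of these factors, which is Gaussian in closed form.
import numpy as np

rng = np.random.default_rng(0)
feature_dim = 8      # dimension of the (fixed) feature map phi(x)
prior_prec = 1.0     # isotropic Gaussian prior p(w) = N(0, prior_prec^{-1} I)

def features(x):
    """Stand-in feature map phi(x); in a BNN these would be earlier layers."""
    return np.tanh(x)

def infer_net(x, y, W1, W2):
    """Toy inference network: maps (x_n, y_n) to a pseudo-target z_n and a
    positive precision lam_n parameterising the approximate likelihood."""
    h = np.tanh(W1 @ np.concatenate([x, [y]]))
    z, log_lam = W2 @ h
    return z, np.exp(log_lam)

def amortised_posterior(X, Y, W1, W2):
    """Combine the prior with every per-datapoint approximate likelihood;
    for Gaussian factors this gives q(w) = N(mean, cov) in closed form."""
    prec = prior_prec * np.eye(feature_dim)
    mean_times_prec = np.zeros(feature_dim)
    for x, y in zip(X, Y):
        phi = features(x)
        z, lam = infer_net(x, y, W1, W2)
        prec += lam * np.outer(phi, phi)       # natural-parameter accumulation
        mean_times_prec += lam * z * phi
    cov = np.linalg.inv(prec)
    return cov @ mean_times_prec, cov

# Toy usage: W1, W2 would be meta-learned by maximising the ELBO across many
# related datasets (tasks), giving amortised Bayesian inference for new tasks.
W1 = rng.normal(scale=0.3, size=(16, feature_dim + 1))
W2 = rng.normal(scale=0.3, size=(2, 16))
X = rng.normal(size=(20, feature_dim))
Y = X @ rng.normal(size=feature_dim) + 0.1 * rng.normal(size=20)
mean, cov = amortised_posterior(X, Y, W1, W2)
print(mean.shape, cov.shape)                   # (8,) (8, 8)
```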
Related papers
- A Note on Bayesian Networks with Latent Root Variables [56.86503578982023]
We show that the marginal distribution over the remaining, manifest, variables also factorises as a Bayesian network, which we call empirical.
A dataset of observations of the manifest variables allows us to quantify the parameters of the empirical Bayesian net.
arXiv Detail & Related papers (2024-02-26T23:53:34Z)
- Distributed Variational Inference for Online Supervised Learning [15.038649101409804]
This paper develops a scalable distributed probabilistic inference algorithm.
It applies to continuous variables, intractable posteriors and large-scale real-time data in sensor networks.
arXiv Detail & Related papers (2023-09-05T22:33:02Z)
- Federated Learning as Variational Inference: A Scalable Expectation Propagation Approach [66.9033666087719]
This paper extends the inference view and describes a variational inference formulation of federated learning.
We apply FedEP to standard federated learning benchmarks and find that it outperforms strong baselines in terms of both convergence speed and accuracy.
arXiv Detail & Related papers (2023-02-08T17:58:11Z)
- Memory-Based Meta-Learning on Non-Stationary Distributions [29.443692147512742]
Memory-based meta-learning is a technique for approximating Bayes-optimal predictors.
We show that memory-based neural models, including Transformers, LSTMs, and RNNs, can learn to accurately approximate known Bayes-optimal algorithms.
arXiv Detail & Related papers (2023-02-06T19:08:59Z)
- Bayesian Structure Learning with Generative Flow Networks [85.84396514570373]
In Bayesian structure learning, we are interested in inferring a distribution over the directed acyclic graph (DAG) from data.
Recently, a class of probabilistic models, called Generative Flow Networks (GFlowNets), have been introduced as a general framework for generative modeling.
We show that our approach, called DAG-GFlowNet, provides an accurate approximation of the posterior over DAGs.
arXiv Detail & Related papers (2022-02-28T15:53:10Z)
- Decomposing neural networks as mappings of correlation functions [57.52754806616669]
We study the mapping between probability distributions implemented by a deep feed-forward network.
We identify essential statistics in the data, as well as different information representations that can be used by neural networks.
arXiv Detail & Related papers (2022-02-10T09:30:31Z)
- Kalman Bayesian Neural Networks for Closed-form Online Learning [5.220940151628734]
We propose a novel approach for BNN learning via closed-form Bayesian inference.
The calculation of the predictive distribution of the output and the update of the weight distribution are treated as Bayesian filtering and smoothing problems.
This allows closed-form expressions for training the network's parameters in a sequential/online fashion without gradient descent.
arXiv Detail & Related papers (2021-10-03T07:29:57Z)
- Adaptive Conformal Inference Under Distribution Shift [0.0]
We develop methods for forming prediction sets in an online setting where the data generating distribution is allowed to vary over time in an unknown fashion.
Our framework builds on ideas from conformal inference to provide a general wrapper that can be combined with any black box method.
We test our method, adaptive conformal inference, on two real-world datasets and find that its predictions are robust to visible and significant distribution shifts; a minimal sketch of the adaptive update is given after this list.
arXiv Detail & Related papers (2021-06-01T01:37:32Z)
- The Bayesian Method of Tensor Networks [1.7894377200944511]
We study the Bayesian framework of the Tensor Network from two perspectives.
We examine its Bayesian properties by visualizing the parameters of the model and the decision boundaries on a two-dimensional synthetic dataset.
arXiv Detail & Related papers (2021-01-01T14:59:15Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
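As referenced in the adaptive conformal inference entry above, the core of that method is a simple online adjustment of the miscoverage level around any black-box predictor. The sketch below is illustrative Python only, not code from that paper: the point predictor and the residual-quantile interval are assumptions; the update alpha_{t+1} = alpha_t + gamma * (alpha_target - err_t) reflects the adaptive rule that line of work is built around.

```python
# Illustrative sketch only (not code from that paper): adaptive conformal
# inference wraps a black-box predictor and adjusts the miscoverage level
# alpha_t online so long-run coverage tracks the target under distribution
# shift.
import numpy as np

rng = np.random.default_rng(1)
alpha_target = 0.1   # desired long-run miscoverage
gamma = 0.01         # step size of the online update
alpha_t = alpha_target

def predict(x):
    """Stand-in black-box point predictor; any model can be wrapped."""
    return 2.0 * x

residuals = []       # past absolute residuals, used as conformity scores
covered = []

for t in range(1, 1001):
    x_t = rng.normal()
    y_t = 2.0 * x_t + rng.normal(scale=0.5 + 0.001 * t)  # drifting noise
    y_hat = predict(x_t)

    if residuals:
        # Prediction set: interval around y_hat whose half-width is the
        # (1 - alpha_t) empirical quantile of past absolute residuals.
        level = min(max(1.0 - alpha_t, 0.0), 1.0)
        q = np.quantile(residuals, level)
        err_t = float(abs(y_t - y_hat) > q)  # 1 if y_t misses the set
    else:
        err_t = 1.0

    # Core adaptive update: misses shrink alpha_t (wider future sets),
    # covered points raise it (narrower sets).
    alpha_t += gamma * (alpha_target - err_t)

    residuals.append(abs(y_t - y_hat))
    covered.append(1.0 - err_t)

print("empirical coverage:", np.mean(covered))
```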
This list is automatically generated from the titles and abstracts of the papers in this site.