GFlowOut: Dropout with Generative Flow Networks
- URL: http://arxiv.org/abs/2210.12928v3
- Date: Sat, 24 Jun 2023 02:53:49 GMT
- Title: GFlowOut: Dropout with Generative Flow Networks
- Authors: Dianbo Liu, Moksh Jain, Bonaventure Dossou, Qianli Shen, Salem Lahlou,
Anirudh Goyal, Nikolay Malkin, Chris Emezue, Dinghuai Zhang, Nadhir Hassen,
Xu Ji, Kenji Kawaguchi, Yoshua Bengio
- Abstract summary: Monte Carlo Dropout has been widely used as a relatively cheap way to perform approximate inference.
Recent works show that the dropout mask can be viewed as a latent variable, which can be inferred with variational inference.
GFlowOut leverages the recently proposed probabilistic framework of Generative Flow Networks (GFlowNets) to learn the posterior distribution over dropout masks.
- Score: 76.59535235717631
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian inference offers principled tools to tackle many critical problems
with modern neural networks such as poor calibration and generalization, and
data inefficiency. However, scaling Bayesian inference to large architectures
is challenging and requires restrictive approximations. Monte Carlo Dropout has
been widely used as a relatively cheap way for approximate inference and to
estimate uncertainty with deep neural networks. Traditionally, the dropout mask
is sampled independently from a fixed distribution. Recent works show that the
dropout mask can be viewed as a latent variable, which can be inferred with
variational inference. These methods face two important challenges: (a) the
posterior distribution over masks can be highly multi-modal which can be
difficult to approximate with standard variational inference and (b) it is not
trivial to fully utilize sample-dependent information and correlation among
dropout masks to improve posterior estimation. In this work, we propose
GFlowOut to address these issues. GFlowOut leverages the recently proposed
probabilistic framework of Generative Flow Networks (GFlowNets) to learn the
posterior distribution over dropout masks. We empirically demonstrate that
GFlowOut results in predictive distributions that generalize better to
out-of-distribution data, and provide uncertainty estimates which lead to
better performance in downstream tasks.
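For readers who want a concrete picture, the following is a minimal PyTorch sketch of the core idea of sample-dependent dropout masks combined with MC-Dropout-style uncertainty estimation. It is not the authors' implementation: the mask network, architecture, and hyperparameters are illustrative assumptions, and the GFlowNet objective that GFlowOut uses to train the mask distribution is omitted.

```python
# Minimal sketch (PyTorch), not the GFlowOut implementation: a classifier whose
# hidden-layer dropout masks are sampled from an input-conditioned Bernoulli
# distribution (sample-dependent masks), with uncertainty estimated by
# averaging several stochastic forward passes, as in MC Dropout.
import torch
import torch.nn as nn

class LearnedDropoutClassifier(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, n_classes)
        # Hypothetical mask network: maps each input to per-unit keep
        # probabilities for the hidden layer (sample-dependent masks).
        self.mask_net = nn.Sequential(nn.Linear(in_dim, hidden), nn.Sigmoid())

    def forward(self, x):
        keep_prob = self.mask_net(x)          # (B, hidden), values in (0, 1)
        mask = torch.bernoulli(keep_prob)     # stochastic binary dropout mask
        h = torch.relu(self.fc1(x)) * mask    # drop hidden units per sample
        return self.fc2(h), mask, keep_prob

@torch.no_grad()
def mc_predict(model, x, n_samples=20):
    """MC-Dropout-style prediction: average softmax over resampled masks."""
    probs = torch.stack([torch.softmax(model(x)[0], dim=-1)
                         for _ in range(n_samples)])
    return probs.mean(0), probs.var(0)        # predictive mean and variance
```

Note that the Bernoulli sampling step is not differentiable; GFlowOut sidesteps this by treating mask generation as a sampling process trained with a GFlowNet objective, rather than backpropagating through the sampling step as a standard variational approach would attempt.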
Related papers
- Flexible Heteroscedastic Count Regression with Deep Double Poisson Networks [4.58556584533865]
We propose the Deep Double Poisson Network (DDPN) to produce accurate, input-conditional uncertainty representations.
DDPN vastly outperforms existing discrete models.
It can be applied to a variety of count regression datasets.
arXiv Detail & Related papers (2024-06-13T16:02:03Z)
- Favour: FAst Variance Operator for Uncertainty Rating [0.034530027457862]
Bayesian Neural Networks (BNN) have emerged as a crucial approach for interpreting ML predictions.
By sampling from the posterior distribution, data scientists may estimate the uncertainty of an inference.
Previous work proposed propagating the first and second moments of the posterior directly through the network.
On its own, this method is even slower than sampling, so the propagated variance needs to be approximated.
Our contribution is a more principled variance propagation framework.
arXiv Detail & Related papers (2023-11-21T22:53:20Z)
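To ground the moment-propagation idea in the Favour summary above, here is a minimal NumPy sketch of the standard closed form for pushing a mean and diagonal variance through a linear layer with deterministic weights. It is not the Favour operator; handling weight uncertainty and nonlinearities efficiently is where that paper's contribution lies.

```python
# Minimal sketch (NumPy), assuming deterministic weights and an input whose
# components are independent (diagonal covariance). Textbook closed form for
# propagating the first two moments through y = W x + b.
import numpy as np

def propagate_linear(mean_x, var_x, W, b):
    """Return mean and diagonal variance of y = W x + b."""
    mean_y = W @ mean_x + b      # E[y] = W E[x] + b
    var_y = (W ** 2) @ var_x     # Var[y_i] = sum_j W_ij^2 Var[x_j]
    return mean_y, var_y

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
b = np.zeros(3)
mean_x, var_x = rng.normal(size=5), np.full(5, 0.1)
print(propagate_linear(mean_x, var_x, W, b))
```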
- Local Search GFlowNets [85.0053493167887]
Generative Flow Networks (GFlowNets) are amortized sampling methods that learn a distribution over discrete objects proportional to their rewards.
GFlowNets exhibit a remarkable ability to generate diverse samples, yet occasionally struggle to consistently produce samples with high rewards due to over-exploration of a wide sample space.
This paper proposes to train GFlowNets with local search, which focuses on exploiting high-reward regions of the sample space to resolve this issue.
arXiv Detail & Related papers (2023-10-04T10:27:17Z)
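As an illustration of the exploitation step described in the Local Search GFlowNets summary above, the sketch below runs generic greedy local search on a binary vector under a toy black-box reward. The reward and the bit-flip move are illustrative assumptions; the paper's actual procedure backtracks and reconstructs samples with the learned GFlowNet policy, which this sketch does not implement.

```python
# Minimal sketch (pure Python): greedy local search over binary vectors under
# a toy reward. Start from a sampled candidate and accept single-bit flips
# that increase the reward, i.e., exploit the high-reward neighborhood.
import random

def toy_reward(x):
    # Hypothetical reward: prefer many ones, penalize adjacent pairs of ones.
    return sum(x) - 0.5 * sum(a and b for a, b in zip(x, x[1:]))

def local_search(x, reward, n_steps=100):
    best, best_r = list(x), reward(x)
    for _ in range(n_steps):
        i = random.randrange(len(best))
        cand = list(best)
        cand[i] = 1 - cand[i]          # flip one bit (local move)
        r = reward(cand)
        if r > best_r:                 # greedy accept
            best, best_r = cand, r
    return best, best_r

x0 = [random.randint(0, 1) for _ in range(16)]  # e.g., a sampled candidate
print(local_search(x0, toy_reward))
```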
- Improved uncertainty quantification for neural networks with Bayesian last layer [0.0]
Uncertainty quantification is an important task in machine learning.
We present a reformulation of the log-marginal likelihood of a neural network with a Bayesian last layer (BLL), which allows for efficient training using backpropagation.
arXiv Detail & Related papers (2023-02-21T20:23:56Z)
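Once the feature extractor is fixed, a Bayesian last layer reduces to conjugate Bayesian linear regression on the learned features. The sketch below shows that standard closed form in NumPy under an isotropic Gaussian prior and known noise precision; it is not the paper's reformulated log-marginal likelihood, and alpha and beta are illustrative values.

```python
# Minimal sketch (NumPy), assuming fixed features Phi (n x d), targets y (n,),
# prior precision alpha, and noise precision beta. Standard conjugate Bayesian
# linear regression for the last layer.
import numpy as np

def bll_posterior(Phi, y, alpha=1.0, beta=25.0):
    d = Phi.shape[1]
    S_inv = alpha * np.eye(d) + beta * Phi.T @ Phi   # posterior precision
    S = np.linalg.inv(S_inv)                         # posterior covariance
    m = beta * S @ Phi.T @ y                         # posterior mean
    return m, S

def bll_predict(phi_star, m, S, beta=25.0):
    mean = phi_star @ m
    var = 1.0 / beta + phi_star @ S @ phi_star       # predictive variance
    return mean, var
```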
- Variational Inference on the Final-Layer Output of Neural Networks [3.146069168382982]
This paper proposes to combine the advantages of both approaches by performing Variational Inference in the Final-layer Output space (VIFO).
We use neural networks to learn the mean and the variance of the probabilistic output.
Experiments show that VIFO provides a good tradeoff in terms of run time and uncertainty quantification, especially for out-of-distribution data.
arXiv Detail & Related papers (2023-02-05T16:19:01Z)
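A minimal PyTorch sketch of the "variational inference on the final-layer output" idea under simple assumptions: a Gaussian variational distribution in output space with a standard-normal prior, a reparameterized sample for the prediction, and a KL penalty. The head architecture, prior, and KL weight are illustrative choices, not the paper's exact setup.

```python
# Minimal sketch (PyTorch), assuming a Gaussian variational posterior over the
# final-layer output with a N(0, I) prior. Illustrative only.
import torch
import torch.nn as nn

class VIFOHead(nn.Module):
    def __init__(self, in_dim=128, out_dim=10):
        super().__init__()
        self.mean = nn.Linear(in_dim, out_dim)
        self.log_var = nn.Linear(in_dim, out_dim)

    def forward(self, h):
        mu, log_var = self.mean(h), self.log_var(h)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterized sample
        # KL( N(mu, sigma^2) || N(0, 1) ), summed over output dimensions
        kl = 0.5 * (torch.exp(log_var) + mu ** 2 - 1.0 - log_var).sum(-1)
        return z, kl

head = VIFOHead()
h = torch.randn(4, 128)                      # features from some backbone
z, kl = head(h)
# Illustrative training loss: likelihood term plus weighted KL regularizer.
loss = nn.functional.cross_entropy(z, torch.randint(0, 10, (4,))) + 1e-3 * kl.mean()
```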
- Likelihood-Free Inference with Generative Neural Networks via Scoring Rule Minimization [0.0]
Inference methods yield posterior approximations for simulator models with intractable likelihood.
Many works trained neural networks to approximate either the intractable likelihood or the posterior directly.
Here, we propose to approximate the posterior with generative networks trained by Scoring Rule minimization.
arXiv Detail & Related papers (2022-05-31T13:32:55Z)
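The training signal in this line of work is a proper scoring rule evaluated on samples from the generative network. Below is a minimal PyTorch sketch of a Monte Carlo estimate of one common choice, the energy score; the generator outputs are stood in by random tensors, and the paper's exact rules and estimators may differ.

```python
# Minimal sketch (PyTorch): Monte Carlo estimate of the energy score,
# ES(P, y) = E||X - y|| - 0.5 E||X - X'||, computed from samples drawn from a
# conditional generative network. Minimizing this proper scoring rule over
# observations is one instance of scoring-rule minimization.
import torch

def energy_score(samples, y):
    """samples: (m, d) draws from the model; y: (d,) observation."""
    m = samples.shape[0]
    term1 = torch.cdist(samples, y.unsqueeze(0)).mean()   # E||X - y||
    pair = torch.cdist(samples, samples)                  # pairwise ||x_i - x_j||
    term2 = pair.sum() / (m * (m - 1))                    # off-diagonal mean, E||X - X'||
    return term1 - 0.5 * term2

samples = torch.randn(32, 5, requires_grad=True)  # stand-in for generator outputs
y = torch.randn(5)
loss = energy_score(samples, y)
loss.backward()                                   # gradients flow to the generator
```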
- Transformers Can Do Bayesian Inference [56.99390658880008]
We present Prior-Data Fitted Networks (PFNs).
PFNs leverage in-context learning and large-scale machine learning techniques to approximate a large set of posteriors.
We demonstrate that PFNs can near-perfectly mimic Gaussian processes and also enable efficient Bayesian inference for intractable problems.
arXiv Detail & Related papers (2021-12-20T13:07:39Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
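The regularizer described in the summary above can be written as a small auxiliary loss: on inputs where confidence is unjustified, pull the predictive distribution toward the label prior. In the sketch below, heavily perturbed inputs are a crude stand-in for those regions and the prior is taken to be uniform; identifying and weighting the overconfident regions is the paper's contribution and is not reproduced here.

```python
# Minimal sketch (PyTorch), assuming a classifier that returns logits and a
# uniform label prior. Auxiliary loss: KL(prior || p(y|x_aug)) on perturbed
# inputs, which raises predictive entropy toward the prior there.
import torch
import torch.nn.functional as F

def prior_matching_loss(model, x, noise_scale=2.0):
    x_aug = x + noise_scale * torch.randn_like(x)   # crude off-manifold proxy
    log_p = F.log_softmax(model(x_aug), dim=-1)
    prior = torch.full_like(log_p, 1.0 / log_p.shape[-1])
    # KL(prior || model) = sum_k prior_k * (log prior_k - log p_k)
    return (prior * (prior.log() - log_p)).sum(-1).mean()

# usage: total = F.cross_entropy(model(x), y) + lam * prior_matching_loss(model, x)
```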
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)