Non-Volatile Memory Accelerated Posterior Estimation
- URL: http://arxiv.org/abs/2202.10522v1
- Date: Mon, 21 Feb 2022 20:25:57 GMT
- Title: Non-Volatile Memory Accelerated Posterior Estimation
- Authors: Andrew Wood, Moshik Hershcovitch, Daniel Waddington, Sarel Cohen,
Peter Chin
- Abstract summary: Current machine learning models use only a single learnable parameter combination when making predictions.
We show that through the use of high-capacity persistent storage, models whose posterior distributions were previously too large to approximate become feasible.
- Score: 3.4256231429537936
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian inference allows machine learning models to express uncertainty.
Current machine learning models use only a single learnable parameter
combination when making predictions, and as a result are highly overconfident
when their predictions are wrong. To use multiple learnable parameter
combinations efficiently, the combinations must be sampled from the posterior
distribution. Unfortunately, computing the posterior directly is infeasible, so
researchers often approximate it with a well-known distribution such as a
Gaussian. In this paper, we show that with high-capacity persistent storage,
models whose posterior distributions were previously too large to approximate
become feasible, leading to improved predictions in downstream tasks.
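As a rough illustration of the core idea, the sketch below keeps a large set of posterior parameter samples on persistent storage via a NumPy memmap and streams them through a Bayesian model average at prediction time, so the sample set never has to fit in RAM. The softmax-regression model, shapes, and file name are illustrative assumptions, not the paper's actual system.
```python
import numpy as np

N_SAMPLES, DIM, N_CLASSES = 2_000, 256, 10

# Persist posterior parameter samples to high-capacity storage (SSD/NVMe).
store = np.memmap("posterior_samples.dat", dtype=np.float32, mode="w+",
                  shape=(N_SAMPLES, DIM, N_CLASSES))
store[:] = np.random.randn(N_SAMPLES, DIM, N_CLASSES).astype(np.float32)
store.flush()

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def posterior_predictive(x, batch=256):
    """Bayesian model average over posterior samples streamed from disk."""
    total = np.zeros(N_CLASSES)
    for i in range(0, N_SAMPLES, batch):
        theta = np.asarray(store[i:i + batch])   # (b, DIM, N_CLASSES)
        total += softmax(x @ theta).sum(axis=0)  # one prediction per sample
    return total / N_SAMPLES

x = np.random.randn(DIM).astype(np.float32)
print(posterior_predictive(x))   # averaged class probabilities
```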
Related papers
- Random features models: a way to study the success of naive imputation [0.0]
Constant (naive) imputation is still widely used in practice, as it is an easy first technique for dealing with missing data.
Recent works suggest that the bias it induces is low in the context of high-dimensional linear predictors.
This paper confirms the intuition that the bias is negligible and that, surprisingly, naive imputation also remains relevant in very low dimensions.
arXiv Detail & Related papers (2024-02-06T09:37:06Z)
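A toy sketch of the setting this entry studies, not the paper's experiment: features missing completely at random are filled with a constant (zero) before fitting a ridge regression, and in a high-dimensional regime the resulting predictor remains competitive. The dimensions, missingness rate, and regularization strength are made up for illustration.
```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 300                      # high-dimensional regime: d comparable to n
beta = rng.normal(size=d) / np.sqrt(d)
X = rng.normal(size=(n, d))
y = X @ beta + 0.1 * rng.normal(size=n)

mask = rng.random((n, d)) < 0.2      # 20% of entries missing completely at random
X_naive = np.where(mask, 0.0, X)     # constant (zero) imputation

# Ridge regression on the naively imputed design matrix.
lam = 1.0
beta_hat = np.linalg.solve(X_naive.T @ X_naive + lam * np.eye(d), X_naive.T @ y)

# Evaluate on fresh data, imputing the same way at test time.
X_te = rng.normal(size=(2000, d))
mask_te = rng.random((2000, d)) < 0.2
pred = np.where(mask_te, 0.0, X_te) @ beta_hat
print("test MSE:", np.mean((pred - X_te @ beta) ** 2))
```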
- Variational Prediction [95.00085314353436]
We present a technique for learning a variational approximation to the posterior predictive distribution using a variational bound.
This approach can provide good predictive distributions without test time marginalization costs.
arXiv Detail & Related papers (2023-07-14T18:19:31Z)
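The paper's contribution is a variational bound; the sketch below only illustrates the payoff the summary mentions, avoiding test-time marginalization, by moment-matching the Monte Carlo posterior predictive of a toy Bayesian linear model with a single reusable Gaussian. All numbers are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(1)
thetas = rng.normal(loc=2.0, scale=0.3, size=5000)  # stand-in posterior samples
sigma = 0.1                                         # likelihood noise std

def mc_predictive(x):
    """Posterior predictive via marginalization: one pass over every sample."""
    means = thetas * x
    return means.mean(), means.var() + sigma ** 2   # mixture mean and variance

# A single Gaussian q(y|x) fit once and reused at test time; for this linear
# model, moment matching gives the best Gaussian approximation.
w_mean, w_var = thetas.mean(), thetas.var()
def q_predictive(x):
    return w_mean * x, w_var * x ** 2 + sigma ** 2

x = 1.7
print(mc_predictive(x))   # cost grows with the number of posterior samples
print(q_predictive(x))    # constant time, nearly identical moments
```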
- Learning Sample Difficulty from Pre-trained Models for Reliable Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z)
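A hedged sketch of what difficulty-aware entropy regularization could look like: a pre-trained model's confidence in the true label serves as a per-sample difficulty score that scales an entropy bonus. The exact weighting scheme is an assumption, not the paper's formula.
```python
import numpy as np

def entropy(p, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=-1)

def difficulty_aware_loss(p_model, p_pretrained, y, gamma=0.5):
    """Cross-entropy plus an entropy bonus scaled by per-sample difficulty."""
    idx = np.arange(len(y))
    ce = -np.log(p_model[idx, y] + 1e-12)
    # Difficulty: how unlikely the true label looks to the pre-trained model.
    difficulty = 1.0 - p_pretrained[idx, y]
    # Encourage higher predictive entropy (less confidence) on hard samples.
    return np.mean(ce - gamma * difficulty * entropy(p_model))

p_model = np.array([[0.90, 0.05, 0.05], [0.60, 0.30, 0.10]])
p_pre = np.array([[0.80, 0.10, 0.10], [0.34, 0.33, 0.33]])  # 2nd sample is hard
print(difficulty_aware_loss(p_model, p_pre, y=np.array([0, 0])))
```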
- Performative Prediction with Neural Networks [24.880495520422]
Performative prediction is a framework for learning models that influence the data they intend to predict.
Standard convergence results for finding a performatively stable classifier with the method of repeated risk minimization assume that the data distribution is Lipschitz continuous with respect to the model's parameters.
In this work, we instead assume that the data distribution is Lipschitz continuous with respect to the model's predictions, a more natural assumption for performative systems.
arXiv Detail & Related papers (2023-04-14T01:12:48Z)
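A minimal sketch of repeated risk minimization in a performative setting, assuming a toy linear response mechanism in which outcomes drift toward the deployed model's predictions; the mechanism and constants are illustrative, not from the paper.
```python
import numpy as np

rng = np.random.default_rng(2)
theta = 0.0                          # scalar model: predict y_hat = theta * x
for step in range(50):
    x = rng.normal(size=2000)
    # The population reacts to the deployed predictions: outcomes drift toward
    # what the current model predicts (distribution Lipschitz in predictions).
    y = 1.5 * x + 0.3 * (theta * x) + 0.1 * rng.normal(size=2000)
    theta_new = (x @ y) / (x @ x)    # repeated risk minimization: refit
    if abs(theta_new - theta) < 1e-2:
        theta = theta_new
        break
    theta = theta_new
print(f"performatively stable theta: {theta:.3f}")  # about 1.5 / (1 - 0.3)
```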
- Flat Seeking Bayesian Neural Networks [32.61417343756841]
We develop the theory, Bayesian setting, and variational inference approach for the sharpness-aware posterior.
Specifically, both the models sampled from our sharpness-aware posterior and the optimal approximate posterior estimating it have better flatness.
We conduct experiments by leveraging the sharpness-aware posterior with state-of-the-art Bayesian Neural Networks.
arXiv Detail & Related papers (2023-02-06T11:40:44Z)
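A hedged sketch of the sharpness-aware ingredient only: the loss is evaluated at a SAM-style worst-case perturbation of the weights, so sharp minima are penalized while flat ones are barely affected. The toy two-parameter loss stands in for the paper's variational objective.
```python
import numpy as np

def loss(w):
    # Toy loss with a sharp minimum near +1 and a flat minimum near -1.
    return np.minimum(50 * (w - 1.0) ** 2, 0.5 * (w + 1.0) ** 2).sum()

def grad(w, h=1e-5):
    # Numerical gradient, for brevity.
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = h
        g[i] = (loss(w + e) - loss(w - e)) / (2 * h)
    return g

def sharpness_aware_loss(w, rho=0.1):
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # SAM's first-order ascent step
    return loss(w + eps)                         # worst-case loss nearby

w_sharp = np.array([1.05, 1.00])   # sits in the sharp basin
w_flat = np.array([-1.05, -1.00])  # sits in the flat basin
print(loss(w_sharp), sharpness_aware_loss(w_sharp))  # an order of magnitude worse
print(loss(w_flat), sharpness_aware_loss(w_flat))    # nearly unchanged
```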
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models on inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
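A hedged sketch of imputation-free training: missing entries are masked to zero so they contribute nothing to the forward pass or the gradient, and per-feature importance weights rescale the surviving gradients. In the paper those weights are tuned by RL; here they are fixed placeholders.
```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 400, 8
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true
observed = rng.random((n, d)) > 0.3    # ~30% of the entries are missing
X_masked = np.where(observed, X, 0.0)  # masked out, never imputed with estimates

w = np.zeros(d)
importance = np.ones(d)                # placeholder for the RL-tuned weights
for _ in range(500):
    err = X_masked @ w - y
    g = X_masked.T @ err / n           # zeroed entries pass no gradient back
    w -= 0.1 * importance * g
print("weight error:", np.round(w - w_true, 2))  # approximately recovers w_true
```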
- Latent Gaussian Model Boosting [0.0]
Tree-boosting shows excellent predictive accuracy on many data sets.
We obtain increased predictive accuracy compared to existing approaches in both simulated and real-world data experiments.
arXiv Detail & Related papers (2021-05-19T07:36:30Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
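One standard estimator consistent with this summary is the Laplace approximation to the log marginal likelihood; whether this matches the paper's exact scalable variant is an assumption on our part.
```latex
% Laplace approximation to the evidence of model M with D parameters,
% MAP estimate \theta_*, and Hessian H of the negative log joint at \theta_*.
\log p(\mathcal{D} \mid \mathcal{M})
  \approx \log p(\mathcal{D} \mid \theta_*, \mathcal{M})
        + \log p(\theta_* \mid \mathcal{M})
        + \frac{D}{2}\log 2\pi
        - \frac{1}{2}\log\det \mathbf{H}
```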
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
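A hedged sketch of the recipe in this summary: on "prior augmented" inputs far from the data, the model's predictive distribution is pulled toward the label prior with a KL penalty. The augmentation and weighting are illustrative assumptions.
```python
import numpy as np

def kl(p, q, eps=1e-12):
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def calibrated_loss(p_train, y, p_aug, label_prior, lam=0.5):
    """Cross-entropy on real data plus a pull toward the label prior on
    augmented points where confidence is unjustified."""
    idx = np.arange(len(y))
    ce = -np.mean(np.log(p_train[idx, y] + 1e-12))
    return ce + lam * np.mean(kl(label_prior, p_aug))

label_prior = np.array([0.5, 0.3, 0.2])    # empirical label frequencies
p_train = np.array([[0.80, 0.10, 0.10]])   # prediction on a real training point
p_aug = np.array([[0.97, 0.02, 0.01]])     # overconfident off-manifold prediction
print(calibrated_loss(p_train, np.array([0]), p_aug, label_prior))
```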
- Curse of Small Sample Size in Forecasting of the Active Cases in COVID-19 Outbreak [0.0]
During the COVID-19 pandemic, a massive number of attempts have been made to predict the number of cases and other future trends of the pandemic.
However, they fail to reliably predict the medium- and long-term evolution of fundamental features of the outbreak within acceptable accuracy.
This paper gives an explanation for the failure of machine learning models in this particular forecasting problem.
arXiv Detail & Related papers (2020-11-06T23:13:34Z)
- Ambiguity in Sequential Data: Predicting Uncertain Futures with Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data.
We also introduce a novel metric for ambiguous problems, which is better suited to account for uncertainties.
arXiv Detail & Related papers (2020-03-10T09:15:42Z)
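A hedged sketch of the (relaxed) winner-takes-all meta-loss underlying Multiple Hypothesis Prediction, which this paper extends to sequential data: only the hypothesis closest to the observed future receives most of the gradient. Shapes and the relaxation constant are illustrative assumptions.
```python
import numpy as np

def mhp_loss(hypotheses, target, eps=0.05):
    """hypotheses: (K, T, D) candidate futures; target: (T, D) observed future."""
    per_hyp = np.mean((hypotheses - target) ** 2, axis=(1, 2))  # loss per future
    k_best = np.argmin(per_hyp)
    # Relaxed winner-takes-all: most weight on the closest hypothesis, a little
    # on the rest so unused hypotheses do not collapse.
    weights = np.full(len(per_hyp), eps / (len(per_hyp) - 1))
    weights[k_best] = 1.0 - eps
    return np.sum(weights * per_hyp)

rng = np.random.default_rng(4)
hyps = rng.normal(size=(4, 10, 2))   # K=4 candidate futures, 10 steps, 2-D state
future = rng.normal(size=(10, 2))
print(mhp_loss(hyps, future))
```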