Loss-calibrated expectation propagation for approximate Bayesian
decision-making
- URL: http://arxiv.org/abs/2201.03128v1
- Date: Mon, 10 Jan 2022 01:42:28 GMT
- Title: Loss-calibrated expectation propagation for approximate Bayesian
decision-making
- Authors: Michael J. Morais, Jonathan W. Pillow
- Abstract summary: We introduce loss-calibrated expectation propagation (Loss-EP), a loss-calibrated variant of expectation propagation.
We show how this asymmetry can have dramatic consequences on what information is "useful" to capture in an approximation.
- Score: 24.975981795360845
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Approximate Bayesian inference methods provide a powerful suite of tools for
finding approximations to intractable posterior distributions. However, machine
learning applications typically involve selecting actions, which -- in a
Bayesian setting -- depend on the posterior distribution only via its
contribution to expected utility. A growing body of work on loss-calibrated
approximate inference methods has therefore sought to develop posterior
approximations sensitive to the influence of the utility function. Here we
introduce loss-calibrated expectation propagation (Loss-EP), a loss-calibrated
variant of expectation propagation. This method resembles standard EP with an
additional factor that "tilts" the posterior towards higher-utility decisions.
We show applications to Gaussian process classification under binary utility
functions with asymmetric penalties on False Negative and False Positive
errors, and show how this asymmetry can have dramatic consequences on what
information is "useful" to capture in an approximation.
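The asymmetric-penalty setting in the abstract can be made concrete with a small sketch (function name and cost values are illustrative, not from the paper): with costs c_FP and c_FN for the two error types, the Bayes-optimal action thresholds the posterior predictive probability at c_FP / (c_FP + c_FN) rather than at 0.5.

```python
# Illustrative sketch of Bayes-optimal binary decisions under an
# asymmetric utility, as in the GP-classification example the
# abstract describes (names and numbers are hypothetical).

def bayes_decision(p_positive, cost_fp, cost_fn):
    """Predict 1 iff the expected cost of predicting 1 is lower.

    Expected cost of predicting 1: (1 - p) * cost_fp
    Expected cost of predicting 0: p * cost_fn
    """
    return 1 if p_positive * cost_fn > (1.0 - p_positive) * cost_fp else 0

# With false negatives 9x as costly as false positives, the
# decision threshold drops from 0.5 to 0.1.
print(bayes_decision(0.2, cost_fp=1.0, cost_fn=9.0))  # 1
print(bayes_decision(0.2, cost_fp=1.0, cost_fn=1.0))  # 0
```

This is why a loss-calibrated approximation can prefer different posterior information than a loss-agnostic one: only accuracy near the shifted threshold matters for the decision.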
Related papers
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
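The reject option this paradigm builds on can be sketched generically (Chow's rule; the paper's density-ratio formulation is not reproduced here, and all names are illustrative):

```python
# Generic reject-option classifier: abstain whenever the probability
# of error at the argmax class exceeds the cost of rejecting.

def predict_with_rejection(probs, rejection_cost):
    """Return the argmax class, or None (abstain) when the
    probability of error exceeds the rejection cost."""
    best = max(range(len(probs)), key=lambda k: probs[k])
    if 1.0 - probs[best] > rejection_cost:
        return None  # abstain
    return best

print(predict_with_rejection([0.4, 0.35, 0.25], rejection_cost=0.3))  # None
print(predict_with_rejection([0.9, 0.05, 0.05], rejection_cost=0.3))  # 0
```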
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Doubly Robust Inference in Causal Latent Factor Models [12.116813197164047]
This article introduces a new estimator of average treatment effects under unobserved confounding in modern data-rich environments featuring large numbers of units and outcomes.
We derive finite-sample weighting guarantees, and show that the error of the new estimator converges to a mean-zero Gaussian distribution at a parametric rate.
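The "doubly robust" idea can be illustrated with the classical AIPW (augmented inverse-propensity weighting) estimator of the average treatment effect; this is a generic sketch, not the paper's latent-factor estimator, and the toy data are invented:

```python
# Generic AIPW estimator: consistent if either the propensity model e
# or the outcome models (mu1, mu0) are correct -- hence "doubly robust".

def aipw_ate(y, t, e, mu1, mu0):
    """y: outcomes, t: 0/1 treatment indicators, e: estimated
    propensity scores, mu1/mu0: estimated outcome models under
    treatment / control."""
    total = 0.0
    for yi, ti, ei, m1, m0 in zip(y, t, e, mu1, mu0):
        total += (m1 - m0
                  + ti * (yi - m1) / ei
                  - (1 - ti) * (yi - m0) / (1 - ei))
    return total / len(y)

# Toy data where the outcome models are exact, so the estimate equals
# the true effect of 1.0.
y = [1.0, 0.0, 2.0, 1.0]
t = [1, 0, 1, 0]
e = [0.5, 0.5, 0.5, 0.5]
mu1 = [1.0, 1.0, 2.0, 2.0]
mu0 = [0.0, 0.0, 1.0, 1.0]
print(aipw_ate(y, t, e, mu1, mu0))  # 1.0
```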
arXiv Detail & Related papers (2024-02-18T17:13:46Z)
- Uncertainty Quantification via Stable Distribution Propagation [60.065272548502]
We propose a new approach for propagating stable probability distributions through neural networks.
Our method is based on local linearization, which we show to be an optimal approximation in terms of total variation distance for the ReLU non-linearity.
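A standalone sketch of the core idea, not the paper's full method: propagating a Gaussian N(mu, var) through ReLU by linearizing the non-linearity at the input mean, so the output remains Gaussian.

```python
# Propagate a Gaussian through ReLU via local linearization at the
# input mean: the mean passes through ReLU, the variance is scaled by
# the squared slope of the linearization.

def relu_linearized_gaussian(mu, var):
    slope = 1.0 if mu > 0 else 0.0   # ReLU derivative at the mean
    out_mu = max(mu, 0.0)            # ReLU applied to the mean
    out_var = var * slope * slope    # variance through the linear map
    return out_mu, out_var

print(relu_linearized_gaussian(2.0, 0.5))   # (2.0, 0.5)
print(relu_linearized_gaussian(-1.0, 0.5))  # (0.0, 0.0)
```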
arXiv Detail & Related papers (2024-02-13T09:40:19Z)
- Variational Prediction [95.00085314353436]
We present a technique for learning a variational approximation to the posterior predictive distribution using a variational bound.
This approach can provide good predictive distributions without test time marginalization costs.
arXiv Detail & Related papers (2023-07-14T18:19:31Z)
- Robust Gaussian Process Regression with Huber Likelihood [2.7184224088243365]
We propose a robust process model in the Gaussian process framework with the likelihood of observed data expressed as the Huber probability distribution.
The proposed model employs weights based on projection statistics to scale residuals and bound the influence of vertical outliers and bad leverage points on the latent function estimates.
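The Huber likelihood bounds the influence of large residuals. A minimal sketch of the standard Huber weight used in IRLS-style robust regression (the paper's projection-statistics weighting is not reproduced here; the default `delta` is a conventional tuning value):

```python
# Standard Huber weight: residuals within delta get full weight,
# larger residuals are down-weighted so a single outlier has
# bounded influence on the fit.

def huber_weight(residual, delta=1.345):
    r = abs(residual)
    return 1.0 if r <= delta else delta / r

print(huber_weight(0.5))                  # 1.0 (inlier, full weight)
print(round(huber_weight(13.45), 3))      # 0.1 (outlier, down-weighted)
```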
arXiv Detail & Related papers (2023-01-19T02:59:33Z)
- Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence [77.06203118175335]
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
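How IS de-biases an approximation can be sketched with self-normalized importance sampling: draw from the approximation q and reweight by the unnormalized target density. The toy target and proposal below are illustrative choices, not taken from the paper.

```python
import math
import random

# Self-normalized importance sampling: estimate a posterior mean under
# the target p from samples drawn from an approximation q.

def snis_mean(log_p_tilde, log_q, sample_q, n=50_000, seed=0):
    rng = random.Random(seed)
    xs = [sample_q(rng) for _ in range(n)]
    log_w = [log_p_tilde(x) - log_q(x) for x in xs]
    m = max(log_w)                            # log-sum-exp stabilization
    w = [math.exp(lw - m) for lw in log_w]
    return sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)

# Target N(1, 1) and proposal N(0, 4), both unnormalized: constants
# cancel under self-normalization.
log_p = lambda x: -0.5 * (x - 1.0) ** 2
log_q = lambda x: -x * x / 8.0
est = snis_mean(log_p, log_q, lambda rng: rng.gauss(0.0, 2.0))
print(est)  # close to the true target mean of 1.0
```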
arXiv Detail & Related papers (2021-06-30T11:00:24Z)
- Adaptive Sampling for Estimating Distributions: A Bayesian Upper Confidence Bound Approach [30.76846526324949]
A Bayesian variant of the existing upper confidence bound (UCB) based approaches is proposed.
The effectiveness of this strategy is discussed using data obtained from a seroprevalence survey in Los Angeles county.
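A generic sketch of a Bayesian UCB index for Bernoulli quantities (not the paper's algorithm): track a Beta(a, b) posterior per arm and sample wherever the posterior upper bound is largest. The bound here uses a normal approximation to the Beta quantile for self-containment.

```python
import math

# Bayesian UCB index: posterior mean plus a multiple of the posterior
# standard deviation of a Beta(a, b) distribution.

def bayes_ucb_index(a, b, c=2.0):
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean + c * math.sqrt(var)

def pick_arm(posteriors):
    return max(range(len(posteriors)),
               key=lambda i: bayes_ucb_index(*posteriors[i]))

# A barely-explored arm (Beta(1, 1)) beats a well-explored arm with the
# same posterior mean, because its upper bound is much wider.
print(pick_arm([(50, 50), (1, 1)]))  # 1
```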
arXiv Detail & Related papers (2020-12-08T00:53:34Z)
- Understanding Variational Inference in Function-Space [20.940162027560408]
We highlight some advantages and limitations of employing the Kullback-Leibler divergence in this setting.
We propose (featurized) Bayesian linear regression as a benchmark for function-space inference methods that directly measures approximation quality.
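Bayesian linear regression works as a benchmark because its posterior is available in closed form. A sketch for a single weight (prior w ~ N(0, 1/alpha), noise precision beta; the hyperparameter values are illustrative, and the multivariate case replaces scalars with matrices):

```python
# Closed-form posterior over the slope w in y = w*x + noise,
# with prior w ~ N(0, 1/alpha) and noise precision beta.

def blr_posterior_1d(x, y, alpha=1.0, beta=25.0):
    precision = alpha + beta * sum(xi * xi for xi in x)
    mean = beta * sum(xi * yi for xi, yi in zip(x, y)) / precision
    return mean, 1.0 / precision

mean, var = blr_posterior_1d([0.0, 1.0, 2.0, 3.0], [0.0, 2.0, 4.0, 6.0])
print(round(mean, 2))  # 1.99: shrunk slightly below the noise-free slope 2.0
```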
arXiv Detail & Related papers (2020-11-18T17:42:01Z)
- Empirical Strategy for Stretching Probability Distribution in Neural-network-based Regression [5.35308390309106]
In regression analysis under artificial neural networks, the prediction performance depends on determining the appropriate weights between layers.
We proposed weighted empirical stretching (WES) as a novel loss function to increase the overlap area of the two distributions.
The improved results in RMSE for the extreme domain are expected to be utilized for prediction of abnormal events in non-linear complex systems.
arXiv Detail & Related papers (2020-09-08T06:08:14Z)
- A maximum-entropy approach to off-policy evaluation in average-reward MDPs [54.967872716145656]
This work focuses on off-policy evaluation (OPE) with function approximation in infinite-horizon undiscounted Markov decision processes (MDPs).
We provide the first finite-sample OPE error bound, extending existing results beyond the episodic and discounted cases.
We show that this results in an exponential-family distribution whose sufficient statistics are the features, paralleling maximum-entropy approaches in supervised learning.
arXiv Detail & Related papers (2020-06-17T18:13:37Z)
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
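What "approximate Bayesian marginalization" via ensembles amounts to at prediction time can be sketched minimally (stand-in probability vectors, not the paper's models): average the predictive distributions of independently trained models, not their parameters.

```python
# Ensemble predictive distribution: the per-class average of each
# member's predicted probability vector.

def ensemble_predict(member_probs):
    n = len(member_probs)
    return [sum(p[j] for p in member_probs) / n
            for j in range(len(member_probs[0]))]

out = ensemble_predict([[0.9, 0.1], [0.6, 0.4], [0.3, 0.7]])
print([round(v, 1) for v in out])  # [0.6, 0.4]
```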
arXiv Detail & Related papers (2020-02-20T15:13:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information above (including all listed details) and is not responsible for any consequences of its use.