Parametrization invariant interpretation of priors and posteriors
- URL: http://arxiv.org/abs/2105.08304v1
- Date: Tue, 18 May 2021 06:45:05 GMT
- Title: Parametrization invariant interpretation of priors and posteriors
- Authors: Jesus Cerquides
- Abstract summary: We move away from the idea that "a prior distribution establishes a probability distribution over the parameters of our model" to the idea that "a prior distribution establishes a probability distribution over probability distributions".
Under this mindset, any distribution over probability distributions should be "intrinsic", that is, invariant to the specific parametrization which is selected for the manifold.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we leverage probability theory on Riemannian manifolds to rethink
the interpretation of priors and posteriors in Bayesian inference. The main
mindshift is to move away from the idea that "a prior distribution establishes
a probability distribution over the parameters of our model" to the idea that
"a prior distribution establishes a probability distribution over probability
distributions". To do that we assume that our probabilistic model is a
Riemannian manifold with the Fisher metric. Under this mindset, any
distribution over probability distributions should be "intrinsic", that is,
invariant to the specific parametrization which is selected for the manifold.
We exemplify our ideas through a simple analysis of distributions over the
manifold of Bernoulli distributions.
One of the major shortcomings of maximum a posteriori estimates is that they
depend on the parametrization. Based on the understanding developed here, we
can define a maximum a posteriori estimate that is independent of the
parametrization.
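The parametrization dependence of ordinary MAP estimates can be illustrated numerically for the Bernoulli case discussed in the abstract. The sketch below is our own illustration, not the paper's construction: it compares the MAP under a uniform prior on the parameter theta, the MAP obtained when the same prior is maximized in the log-odds parametrization, and an "intrinsic" estimate obtained by measuring the posterior density against the Fisher volume element (our assumption of how a parametrization-invariant estimate could be defined; the function name is ours).

```python
import numpy as np

def bernoulli_map_estimates(k, n, grid_size=200001):
    """Compare MAP estimates for a Bernoulli parameter given k successes
    in n trials, under two parametrizations, plus an intrinsic variant."""
    theta = np.linspace(1e-6, 1 - 1e-6, grid_size)
    log_lik = k * np.log(theta) + (n - k) * np.log1p(-theta)

    # (1) Uniform prior on theta: the MAP coincides with the MLE k/n.
    map_theta = theta[np.argmax(log_lik)]

    # (2) Same uniform-on-theta prior, but the posterior density is
    # maximized in the log-odds parametrization eta = log(theta/(1-theta)).
    # The Jacobian |dtheta/deta| = theta*(1-theta) shifts the maximum
    # to (k+1)/(n+2): the estimate depends on the parametrization.
    log_post_eta = log_lik + np.log(theta) + np.log1p(-theta)
    map_eta = theta[np.argmax(log_post_eta)]

    # (3) Intrinsic variant: maximize the posterior density relative to the
    # Riemannian volume element sqrt(det g). For Bernoulli the Fisher metric
    # is g(theta) = 1/(theta*(1-theta)), so the extra factor is
    # sqrt(theta*(1-theta)); the resulting maximizer, (k+1/2)/(n+1),
    # is unchanged under any smooth reparametrization.
    log_post_intrinsic = log_lik + 0.5 * (np.log(theta) + np.log1p(-theta))
    map_intrinsic = theta[np.argmax(log_post_intrinsic)]

    return map_theta, map_eta, map_intrinsic
```

For example, with 7 successes in 10 trials, the three estimates are approximately 0.700, 0.667, and 0.682: the first two differ even though they encode the same prior belief, while the third is the one that transforms consistently.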
Related papers
- Differentiable Annealed Importance Sampling Minimizes The Symmetrized Kullback-Leibler Divergence Between Initial and Target Distribution [10.067421338825545]
We show that DAIS minimizes the symmetrized Kullback-Leibler divergence between the initial and target distributions.
DAIS can be seen as a form of variational inference (VI), since its initial distribution is a parametric fit to an intractable target distribution.
arXiv Detail & Related papers (2024-05-23T17:55:09Z)
- Variational Prediction [95.00085314353436]
We present a technique for learning a variational approximation to the posterior predictive distribution using a variational bound.
This approach can provide good predictive distributions without test-time marginalization costs.
arXiv Detail & Related papers (2023-07-14T18:19:31Z)
- Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over distributions' properties, such as parameters, symmetry and modality, yields a family of flexible distributions.
We empirically validate our approach by utilizing our proposed distributions within a variational autoencoder and a latent space network model.
arXiv Detail & Related papers (2022-04-20T21:25:21Z)
- Personalized Trajectory Prediction via Distribution Discrimination [78.69458579657189]
Trajectory prediction is confronted with the dilemma of capturing the multi-modal nature of future dynamics.
We present a distribution discrimination (DisDis) method to predict personalized motion patterns.
Our method can be integrated with existing multi-modal predictive models as a plug-and-play module.
arXiv Detail & Related papers (2021-07-29T17:42:12Z)
- Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence [77.06203118175335]
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
arXiv Detail & Related papers (2021-06-30T11:00:24Z)
- Convergence Rates of Empirical Bayes Posterior Distributions: A Variational Perspective [20.51199643121034]
We study the convergence rates of empirical Bayes posterior distributions for nonparametric and high-dimensional inference.
We show that the empirical Bayes posterior distribution induced by the maximum marginal likelihood estimator can be regarded as a variational approximation to a hierarchical Bayes posterior distribution.
arXiv Detail & Related papers (2020-09-08T19:35:27Z)
- Kullback-Leibler divergence between quantum distributions, and its upper-bound [1.2183405753834562]
This work presents an upper bound on the value that the Kullback-Leibler (KL) divergence can reach for a class of probability distributions called quantum distributions (QD).
Deriving an upper bound for the entropic divergence is shown to be possible under the condition that the compared distributions are quantum distributions over the same quantum value, which makes them comparable.
arXiv Detail & Related papers (2020-08-13T14:42:13Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
- Distributionally Robust Bayesian Quadrature Optimization [60.383252534861136]
We study BQO under distributional uncertainty, where the underlying probability distribution is unknown except for a limited set of its i.i.d. samples.
A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set.
We propose a novel posterior sampling based algorithm, namely distributionally robust BQO (DRBQO), for this purpose.
arXiv Detail & Related papers (2020-01-19T12:00:33Z)
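For reference, the symmetrized Kullback-Leibler divergence mentioned in the DAIS entry above is, for discrete distributions, simply KL(p||q) + KL(q||p). A minimal sketch (the function names are ours, and p and q are assumed to be finite probability vectors over the same support):

```python
import numpy as np

def kl(p, q):
    """KL(p || q) for discrete distributions, with 0*log(0/q) taken as 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(np.where(p > 0, p * np.log(p / q), 0.0)))

def symmetrized_kl(p, q):
    """Symmetrized KL divergence: KL(p||q) + KL(q||p)."""
    return kl(p, q) + kl(q, p)
```

Unlike plain KL, the symmetrized version treats both arguments identically, which is what makes it a natural objective when neither distribution plays a privileged role.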
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.