On the safe use of prior densities for Bayesian model selection
- URL: http://arxiv.org/abs/2206.05210v1
- Date: Fri, 10 Jun 2022 16:17:48 GMT
- Title: On the safe use of prior densities for Bayesian model selection
- Authors: F. Llorente, L. Martino, E. Curbelo, J. Lopez-Santiago, D. Delgado
- Abstract summary: We discuss the issue of prior sensitivity of the marginal likelihood and its role in model selection.
We also comment on the use of uninformative priors, which are very common choices in practice.
One of the illustrative numerical examples involves a real-world application on exoplanet detection.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The application of Bayesian inference for the purpose of model
selection is very popular nowadays. In this framework, models are compared
through their marginal likelihoods, or through their quotients, called Bayes
factors. However, marginal likelihoods depend on the prior choice. For model
selection, even diffuse priors can actually be very informative, unlike in the
parameter estimation problem. Furthermore, when the prior is improper, the
marginal likelihood of the corresponding model is undetermined. In this work,
we discuss the issue of prior sensitivity of the marginal likelihood and its
role in model selection. We also comment on the use of uninformative priors,
which are very common choices in practice. Several practical suggestions are
given, and many possible solutions proposed in the literature for designing
objective priors for model selection are described. Some of them also allow
the use of improper priors. The connection between the marginal likelihood
approach and the well-known information criteria is also presented. We
describe the main issues and possible solutions through illustrative numerical
examples, and we also provide related code. One of the examples involves a
real-world application on exoplanet detection.
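To make the prior-sensitivity issue concrete, here is a minimal Python sketch (not taken from the code accompanying the paper) for a conjugate Gaussian toy model: data y_i ~ N(theta, sigma^2) with sigma known, a point-null model M0 with theta = 0, and an alternative M1 with a zero-mean Gaussian prior of scale tau on theta. In this setting the marginal likelihoods, and hence the Bayes factor, are available in closed form, so the dependence on the prior scale can be read off directly; the particular values of sigma, n and tau below are arbitrary choices for illustration.

```python
# Sketch: prior sensitivity of the Bayes factor in a conjugate Gaussian model.
# Data: y_i ~ N(theta, sigma^2), sigma known.
# M0: theta = 0 (no free parameter).  M1: theta ~ N(0, tau^2) (prior scale tau).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma, n, true_theta = 1.0, 50, 0.3   # hypothetical settings for the demo
y = rng.normal(true_theta, sigma, size=n)
ybar = y.mean()

def log_bf10(tau):
    """Log Bayes factor of M1 (theta ~ N(0, tau^2)) against M0 (theta = 0).

    For this conjugate model, p(y | M1) / p(y | M0) reduces to the ratio of
    the densities of the sample mean: N(ybar | 0, sigma^2/n + tau^2) over
    N(ybar | 0, sigma^2/n).
    """
    log_m1 = norm.logpdf(ybar, loc=0.0, scale=np.sqrt(sigma**2 / n + tau**2))
    log_m0 = norm.logpdf(ybar, loc=0.0, scale=np.sqrt(sigma**2 / n))
    return log_m1 - log_m0

for tau in [0.1, 1.0, 10.0, 100.0, 1e4]:
    print(f"tau = {tau:10.1f}   log BF_10 = {log_bf10(tau):8.3f}")
# As tau grows, log BF_10 decreases without bound: an ever more diffuse prior
# makes the data look ever less probable under M1, so model selection favors
# M0 regardless of the data (Jeffreys-Lindley behavior). For estimating theta
# under M1, by contrast, the posterior barely changes once tau is large.
```

Making the prior on theta more diffuse keeps shrinking log BF_10, which is the sense in which even diffuse priors are very informative for model selection; in the limit of a flat improper prior, the marginal likelihood of M1 involves an arbitrary constant, so the Bayes factor is not determined at all.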
Related papers
- Large Language Models Are Not Robust Multiple Choice Selectors [117.72712117510953]
Multiple choice questions (MCQs) serve as a common yet important task format in the evaluation of large language models (LLMs).
This work shows that modern LLMs are vulnerable to option position changes due to their inherent "selection bias".
We propose a label-free, inference-time debiasing method, called PriDe, which separates the model's prior bias for option IDs from the overall prediction distribution.
arXiv Detail & Related papers (2023-09-07T17:44:56Z)
- Bivariate Causal Discovery using Bayesian Model Selection [11.726586969589]
We show how to incorporate causal assumptions within the Bayesian framework.
This enables us to construct models with realistic assumptions.
We then outperform previous methods on a wide range of benchmark datasets.
arXiv Detail & Related papers (2023-06-05T14:51:05Z)
- The Choice of Noninformative Priors for Thompson Sampling in Multiparameter Bandit Models [56.31310344616837]
Thompson sampling (TS) has been known for its outstanding empirical performance supported by theoretical guarantees across various reward models.
This study explores the impact of selecting noninformative priors, offering insights into the performance of TS when dealing with new models that lack theoretical understanding.
arXiv Detail & Related papers (2023-02-28T08:42:42Z)
- In Search of Insights, Not Magic Bullets: Towards Demystification of the Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z)
- Bayesian Model Selection, the Marginal Likelihood, and Generalization [49.19092837058752]
We first revisit the appealing properties of the marginal likelihood for learning constraints and hypothesis testing.
We show how marginal likelihood can be negatively correlated with generalization, with implications for neural architecture search.
We also re-examine the connection between the marginal likelihood and PAC-Bayes bounds and use this connection to further elucidate the shortcomings of the marginal likelihood for model selection.
arXiv Detail & Related papers (2022-02-23T18:38:16Z)
- An exact counterfactual-example-based approach to tree-ensemble models interpretability [0.0]
High-performance models do not exhibit the necessary transparency to make their decisions fully understandable.
We could derive an exact geometrical characterisation of their decision regions in the form of a collection of multidimensional intervals.
An adaptation to reasoning on regression problems is also envisaged.
arXiv Detail & Related papers (2021-05-31T09:32:46Z)
- Probabilistic Metric Learning with Adaptive Margin for Top-K Recommendation [40.80017379274105]
We develop a distance-based recommendation model with several novel aspects.
The proposed model outperforms the best existing models by 4-22% in terms of recall@K on Top-K recommendation.
arXiv Detail & Related papers (2021-01-13T03:11:04Z)
- Marginal likelihood computation for model selection and hypothesis testing: an extensive review [66.37504201165159]
This article provides a comprehensive study of the state-of-the-art of the topic.
We highlight limitations, benefits, connections and differences among the different techniques.
Problems and possible solutions with the use of improper priors are also described.
arXiv Detail & Related papers (2020-05-17T18:31:58Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)