Local Bayesian Dirichlet mixing of imperfect models
- URL: http://arxiv.org/abs/2311.01596v1
- Date: Thu, 2 Nov 2023 21:02:40 GMT
- Title: Local Bayesian Dirichlet mixing of imperfect models
- Authors: Vojtech Kejzlar, Léo Neufcourt, Witold Nazarewicz
- Abstract summary: We study the ability of Bayesian model averaging and mixing techniques to mine nuclear masses.
We show that the global and local mixtures of models reach excellent performance on both prediction accuracy and uncertainty quantification.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To improve the predictability of complex computational models in
experimentally unknown domains, we propose a Bayesian statistical machine
learning framework utilizing the Dirichlet distribution that combines results
of several imperfect models. This framework can be viewed as an extension of
Bayesian stacking. To illustrate the method, we study the ability of Bayesian
model averaging and mixing techniques to mine nuclear masses. We show that the
global and local mixtures of models reach excellent performance on both
prediction accuracy and uncertainty quantification and are preferable to
classical Bayesian model averaging. Additionally, our statistical analysis
indicates that improving model predictions through mixing, rather than through
mixing of corrected models, leads to more robust extrapolations.
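
As a rough, hypothetical illustration of the idea (not the authors' implementation), the sketch below mixes three toy imperfect models with weights drawn from a Dirichlet prior and reweights those prior draws by a Gaussian likelihood on observed data; the model forms, noise scale, and importance-sampling shortcut are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: three imperfect models predicting y on a common grid x.
x = np.linspace(0.0, 1.0, 50)
y_true = np.sin(2 * np.pi * x)
preds = np.stack([
    y_true + 0.3 * (x - 0.5),              # model 1: linear bias
    y_true + 0.2 * np.cos(3 * np.pi * x),  # model 2: oscillatory bias
    y_true - 0.25 * x**2,                  # model 3: quadratic bias
])
y_obs = y_true + rng.normal(0.0, 0.1, x.size)  # noisy observations

# Dirichlet prior on the mixture weights, Gaussian likelihood for the
# mixed prediction; posterior approximated by importance sampling.
K, S, sigma = preds.shape[0], 5000, 0.1
w = rng.dirichlet(np.ones(K), size=S)      # (S, K) prior weight draws
mix = w @ preds                            # (S, n) mixed predictions
loglik = -0.5 * np.sum((mix - y_obs) ** 2, axis=1) / sigma**2
imp = np.exp(loglik - loglik.max())
imp /= imp.sum()                           # normalized importance weights

print("posterior mean mixture weights:", np.round(imp @ w, 3))
```

In the paper the mixture weights can also vary locally with the input; this sketch covers only the global variant.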
Related papers
- Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM) which can be viewed as a gradient boosting algorithm combining score matching.
We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy.
Our model outperforms existing models in both accuracy and inference time.
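
The following toy sketch illustrates one general ingredient, fitting a score with a gradient-boosted regressor via denoising score matching; it is not the paper's SSM algorithm, and the 1-D Gaussian data and noise scale are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# 1-D data from N(2, 1); after adding noise of scale sigma, the smoothed
# density is N(2, 1 + sigma^2) with score -(x - 2) / (1 + sigma^2).
x = rng.normal(2.0, 1.0, size=4000)
sigma = 0.5
x_noisy = x + rng.normal(0.0, sigma, size=x.size)

# Denoising score matching target: (x - x_noisy) / sigma^2, regressed
# on the noisy points with a gradient-boosted model.
target = (x - x_noisy) / sigma**2
gbm = GradientBoostingRegressor(n_estimators=200, max_depth=2)
gbm.fit(x_noisy.reshape(-1, 1), target)

grid = np.linspace(0.0, 4.0, 5).reshape(-1, 1)
print("estimated score:", np.round(gbm.predict(grid), 2))
print("true score     :", np.round(-(grid.ravel() - 2.0) / (1 + sigma**2), 2))
```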
arXiv Detail & Related papers (2024-11-02T07:06:53Z)
- Model orthogonalization and Bayesian forecast mixing via Principal Component Analysis [0.0]
In many cases, the models used in the mixing process are similar.
The existence of such similar, or even redundant, models during the multimodeling process can result in misinterpretation of results and deterioration of predictive performance.
We show that by adding model orthogonalization to the proposed Bayesian Model Combination framework, one can arrive at better prediction accuracy and reach excellent uncertainty quantification performance.
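
A minimal sketch of the orthogonalization idea, assuming PCA/SVD over the stacked model predictions; the toy "redundant models" and the 99% variance cutoff are hypothetical choices, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Predictions of four models at n points; models 1-3 are nearly redundant.
n = 100
base = rng.normal(size=n)
preds = np.stack([
    base + 0.05 * rng.normal(size=n),
    base + 0.05 * rng.normal(size=n),
    base + 0.05 * rng.normal(size=n),
    rng.normal(size=n),                 # a genuinely different model
])

# PCA via SVD of the centered prediction matrix: the principal
# components act as orthogonal "effective models".
centered = preds - preds.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained per component:", np.round(explained, 3))

# Keep components covering ~99% of variance as the orthogonalized set.
k = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1
ortho_models = Vt[:k] * s[:k, None]
print(f"reduced {preds.shape[0]} models to {k} orthogonal components")
```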
arXiv Detail & Related papers (2024-05-17T15:01:29Z) - BayesBlend: Easy Model Blending using Pseudo-Bayesian Model Averaging, Stacking and Hierarchical Stacking in Python [0.0]
We introduce the BayesBlend Python package to estimate weights and blend multiple (Bayesian) models' predictive distributions.
BayesBlend implements pseudo-Bayesian model averaging, stacking and, uniquely, hierarchical Bayesian stacking to estimate model weights.
We demonstrate the usage of BayesBlend with examples of insurance loss modeling.
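
As a hedged sketch of one ingredient (pseudo-Bayesian model averaging, where weights are proportional to exponentiated elpd estimates), not the BayesBlend API itself; the elpd values below are made up.

```python
import numpy as np

def pseudo_bma_weights(elpd):
    """Pseudo-Bayesian model averaging: weights proportional to the
    exponentiated elpd (e.g., LOO-CV) estimates. A generic sketch,
    not the BayesBlend API."""
    elpd = np.asarray(elpd, dtype=float)
    w = np.exp(elpd - elpd.max())   # subtract the max to stabilize exp
    return w / w.sum()

# Hypothetical elpd_loo estimates for three candidate models.
print(np.round(pseudo_bma_weights([-530.2, -528.9, -541.7]), 3))
```

Stacking, by contrast, optimizes the weights jointly against held-out predictive performance rather than weighting each model by its own elpd.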
arXiv Detail & Related papers (2024-04-30T19:15:33Z) - Fusion of Gaussian Processes Predictions with Monte Carlo Sampling [61.31380086717422]
In science and engineering, we often work with models designed for accurate prediction of variables of interest.
Recognizing that these models are approximations of reality, it becomes desirable to apply multiple models to the same data and integrate their outcomes.
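
A minimal sketch of the general approach, assuming two scikit-learn GPs as the candidate models and equal-weight pooling of their posterior samples; the kernels, weights, and data are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern

rng = np.random.default_rng(3)

# Common training data, two GPs with different kernels (two "models").
X = rng.uniform(0, 5, size=(30, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 30)
gps = [GaussianProcessRegressor(kernel=RBF(), alpha=1e-2).fit(X, y),
       GaussianProcessRegressor(kernel=Matern(nu=1.5), alpha=1e-2).fit(X, y)]

# Monte Carlo fusion: draw posterior samples from each GP at test inputs
# and pool them (equal weights here) into one predictive ensemble.
X_test = np.linspace(0, 5, 20).reshape(-1, 1)
draws = np.vstack([gp.sample_y(X_test, n_samples=500, random_state=0).T
                   for gp in gps])          # (1000, 20) pooled samples
print("fused mean:", np.round(draws.mean(axis=0)[:5], 2))
print("fused std :", np.round(draws.std(axis=0)[:5], 2))
```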
arXiv Detail & Related papers (2024-03-03T04:21:21Z) - Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
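
A small sketch of measuring predictive churn between two near-optimal models, taken here as the fraction of test points on which their predictions disagree; the models and data are stand-ins, not the paper's Rashomon-set construction.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = X[:1500], X[1500:], y[:1500], y[1500:]

# Two models with similar accuracy (a stand-in for the Rashomon set).
m1 = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
m2 = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("accuracies:", m1.score(X_te, y_te), m2.score(X_te, y_te))

# Predictive churn: fraction of test points where the models disagree.
churn = np.mean(m1.predict(X_te) != m2.predict(X_te))
print("churn between the two models:", churn)
```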
arXiv Detail & Related papers (2024-02-12T16:15:25Z) - Finite mixture of skewed sub-Gaussian stable distributions [0.0]
The proposed model includes finite mixtures of normal and skew-normal distributions as special cases.
It can be used as a powerful model for robust model-based clustering.
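
The normal special case is easy to demonstrate; the sketch below fits a plain Gaussian mixture for model-based clustering (the paper's skewed sub-Gaussian stable components would replace the Gaussian ones).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Two overlapping clusters; the Gaussian mixture is the normal special
# case of the skewed sub-Gaussian stable mixture discussed in the paper.
X = np.vstack([rng.normal(0, 1.0, size=(200, 2)),
               rng.normal(4, 1.5, size=(200, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print("mixing proportions:", np.round(gmm.weights_, 2))
print("component means:\n", np.round(gmm.means_, 2))
```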
arXiv Detail & Related papers (2022-05-27T15:51:41Z) - Latent Gaussian Model Boosting [0.0]
Tree-boosting shows excellent predictive accuracy on many data sets.
We obtain increased predictive accuracy compared to existing approaches in both simulated and real-world data experiments.
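
A crude backfitting sketch of the combination, alternating a boosted-trees fit for the fixed effects with a shrunken per-group mean for the latent Gaussian (random) effects; this stand-in is not the paper's estimation algorithm, and the shrinkage constant is an assumption.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)

# Grouped data: y = f(x) + b_group + noise, with latent Gaussian effects b.
n, n_groups = 1000, 20
X = rng.uniform(-2, 2, size=(n, 1))
groups = rng.integers(0, n_groups, size=n)
b = rng.normal(0, 1.0, size=n_groups)
y = np.sin(3 * X).ravel() + b[groups] + rng.normal(0, 0.2, n)

# Crude backfitting: alternate a boosted-trees fit for f with a
# shrunken per-group residual mean for the random effects.
b_hat = np.zeros(n_groups)
for _ in range(5):
    gbm = GradientBoostingRegressor(max_depth=3).fit(X, y - b_hat[groups])
    resid = y - gbm.predict(X)
    for g in range(n_groups):
        m = groups == g
        # Shrinkage toward 0 mimics the Gaussian prior on b_g.
        b_hat[g] = resid[m].sum() / (m.sum() + 1.0)

print("corr(b, b_hat):", np.round(np.corrcoef(b, b_hat)[0, 1], 2))
```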
arXiv Detail & Related papers (2021-05-19T07:36:30Z) - Characterizing Fairness Over the Set of Good Models Under Selective
Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
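
A minimal sketch, assuming the "set of good models" is approximated by classifiers within 1% of the best accuracy and the disparity is the gap in positive prediction rates between two groups; the protected attribute here is synthetic, and the paper's tractable algorithms are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, n_features=8, random_state=0)
group = (X[:, 0] > 0).astype(int)   # hypothetical protected attribute

# A crude "set of good models": logistic regressions at several
# regularization strengths within 1% accuracy of the best one.
models = [LogisticRegression(C=c, max_iter=2000).fit(X, y)
          for c in (0.01, 0.1, 1.0, 10.0)]
accs = np.array([m.score(X, y) for m in models])
good = [m for m, a in zip(models, accs) if a >= accs.max() - 0.01]

# Group-level disparity: difference in positive prediction rates.
disparities = [m.predict(X)[group == 1].mean()
               - m.predict(X)[group == 0].mean() for m in good]
print("disparity range over good models:",
      round(min(disparities), 3), "to", round(max(disparities), 3))
```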
arXiv Detail & Related papers (2021-01-02T02:11:37Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
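
As a loose, deterministic stand-in for the paper's pipeline (which uses deep generative models), the sketch below restricts stage 1 to two PCA factors and lets a stage-2 network recover nonlinear structure that the restricted reconstruction drops; every modeling choice here is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

# Two ground-truth factors plus features that depend on them nonlinearly.
z1, z2 = rng.normal(size=(2, 2000))
X = np.column_stack([3 * z1, 3 * z2, z1 * z2, z1**2, np.tanh(z1) * z2])
X += 0.05 * rng.normal(size=X.shape)

# Stage 1: deliberately restricted 2-factor model (PCA as a stand-in
# for a penalized disentangling autoencoder) -> lossy reconstruction.
stage1 = PCA(n_components=2).fit(X)
recon1 = stage1.inverse_transform(stage1.transform(X))

# Stage 2: a second model maps the coarse reconstruction back to the
# data, learning the structure that stage 1 dropped.
stage2 = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000,
                      random_state=0).fit(recon1, X)
recon2 = stage2.predict(recon1)

print("stage-1 reconstruction MSE:", round(float(np.mean((X - recon1) ** 2)), 4))
print("stage-2 reconstruction MSE:", round(float(np.mean((X - recon2) ** 2)), 4))
```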
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Efficient Ensemble Model Generation for Uncertainty Estimation with
Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
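
A small sketch of ensemble-based pixel-wise uncertainty, assuming the ensemble members' per-pixel class probabilities are simply averaged and uncertainty is the predictive entropy; the layer-selection generation scheme and the paper's uncertainty loss are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical ensemble of M segmentation models: per-pixel class
# probabilities over C classes on an H x W image.
M, C, H, W = 5, 3, 4, 4
logits = rng.normal(size=(M, C, H, W))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Ensemble prediction: average the per-model probabilities.
mean_probs = probs.mean(axis=0)
seg = mean_probs.argmax(axis=0)            # (H, W) label map

# Pixel-wise uncertainty: predictive entropy of the averaged distribution.
entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=0)
print("segmentation:\n", seg)
print("pixel-wise entropy:\n", np.round(entropy, 2))
```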
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.