Bayesian Boosting for Linear Mixed Models
- URL: http://arxiv.org/abs/2106.04862v1
- Date: Wed, 9 Jun 2021 07:40:00 GMT
- Title: Bayesian Boosting for Linear Mixed Models
- Authors: Boyao Zhang, Colin Griesbach, Cora Kim, Nadia Müller-Voggel,
Elisabeth Bergherr
- Abstract summary: We propose a new inference method, "BayesBoost", that combines boosting and Bayesian inference for linear mixed models.
The new method overcomes the shortcomings of Bayesian inference in giving precise and unambiguous guidelines for covariate selection.
The effectiveness of the new approach is demonstrated via simulation and in a data example from the field of neurophysiology.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Boosting methods are widely used in statistical learning to deal with
high-dimensional data due to their variable selection feature. However, those
methods lack straightforward ways to construct estimators for the precision of
the parameters such as variance or confidence interval, which can be achieved
by conventional statistical methods like Bayesian inference. In this paper, we
propose a new inference method, "BayesBoost", that combines boosting and
Bayesian inference for linear mixed models. On the one hand, the new method
makes uncertainty estimation for the random effects possible; on the other
hand, it overcomes the shortcomings of Bayesian inference in giving precise
and unambiguous guidelines for the selection of covariates by benefiting from
boosting techniques. The implementation of Bayesian inference introduces
randomness into model selection criteria such as the conditional AIC (cAIC),
so we also propose a cAIC-based model selection criterion that focuses on
stabilized regions instead of the global minimum. The effectiveness of the new
approach is demonstrated via simulation and in a data example from the field
of neurophysiology, focusing on the mechanisms in the brain while listening
to unpleasant sounds.
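To make the idea concrete, below is a minimal, self-contained sketch (not the authors' implementation) of the two alternating steps the abstract describes: a component-wise L2 boosting update for the fixed effects, followed by a Gibbs-style draw of the random intercepts from their Gaussian full conditional. All names are hypothetical, the variance parameters sigma2 and tau2 are held fixed for brevity (the paper also estimates them), and the model is reduced to a random-intercept design.

```python
import numpy as np

def bayes_boost_sketch(y, X, groups, n_iter=200, nu=0.1,
                       sigma2=1.0, tau2=1.0, seed=0):
    """Toy BayesBoost-style loop: boosting for fixed effects beta,
    Bayesian draws for the random intercepts b."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    n_groups = groups.max() + 1
    b = np.zeros(n_groups)
    b_draws = []
    for _ in range(n_iter):
        # Boosting step: component-wise least squares on the current residual;
        # only the best-fitting covariate gets a small update (variable selection).
        r = y - X @ beta - b[groups]
        best = (np.inf, 0, 0.0)
        for j in range(p):
            xj = X[:, j]
            c = (xj @ r) / (xj @ xj)
            sse = np.sum((r - c * xj) ** 2)
            if sse < best[0]:
                best = (sse, j, c)
        beta[best[1]] += nu * best[2]
        # Bayesian step: draw random intercepts from their Gaussian full
        # conditional given the current fixed-effect fit (sigma2 and tau2
        # are fixed here; the paper samples the variance parameters too).
        r_fix = y - X @ beta
        for g in range(n_groups):
            idx = groups == g
            prec = idx.sum() / sigma2 + 1.0 / tau2
            mean = r_fix[idx].sum() / sigma2 / prec
            b[g] = rng.normal(mean, np.sqrt(1.0 / prec))
        b_draws.append(b.copy())
    return beta, np.array(b_draws)

# toy data: 5 subjects, 3 covariates, only X[:, 0] is informative
rng = np.random.default_rng(42)
groups = np.repeat(np.arange(5), 40)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + rng.normal(0, 1, 5)[groups] + rng.normal(size=200)
beta_hat, b_draws = bayes_boost_sketch(y, X, groups)
print(beta_hat.round(2))                    # sparse fixed-effect estimates
print(b_draws[100:].std(axis=0).round(2))   # random-effect uncertainty
```

The retained draws of b quantify random-effect uncertainty, which plain boosting does not provide; per the abstract, the stopping iteration is additionally chosen from a region where the cAIC has stabilized rather than at its noisy global minimum.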
Related papers
- Noise-Aware Differentially Private Variational Inference [5.4619385369457225]
Differential privacy (DP) provides robust privacy guarantees for statistical inference, but the noise it injects can lead to unreliable results and biases in downstream applications.
We propose a novel method for noise-aware approximate Bayesian inference based on gradient variational inference.
We also propose a more accurate evaluation method for noise-aware posteriors.
arXiv Detail & Related papers (2024-10-25T08:18:49Z)
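The core idea of noise-aware inference, treating the injected DP noise as part of the likelihood instead of ignoring it, can be illustrated with a conjugate toy example. The paper itself uses gradient-based variational inference; everything below is a hypothetical sketch with a known noise variance:

```python
import numpy as np

rng = np.random.default_rng(1)
n, s2, theta_true = 100, 1.0, 0.7
x = rng.normal(theta_true, np.sqrt(s2), n)

dp_var = 0.05   # variance of the Gaussian-mechanism noise, known to the analyst
m_dp = x.mean() + rng.normal(0.0, np.sqrt(dp_var))  # privatized mean (clipping omitted)

v0 = 10.0       # N(0, v0) prior on the unknown mean theta
def normal_posterior(obs, obs_var):
    prec = 1.0 / v0 + 1.0 / obs_var
    return (obs / obs_var) / prec, 1.0 / prec       # posterior mean, variance

print("naive :", normal_posterior(m_dp, s2 / n))           # ignores DP noise: overconfident
print("aware :", normal_posterior(m_dp, s2 / n + dp_var))  # models the noise: wider, honest
```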
- Achieving Well-Informed Decision-Making in Drug Discovery: A Comprehensive Calibration Study using Neural Network-Based Structure-Activity Models [4.619907534483781]
Computational models that predict drug-target interactions are valuable tools to accelerate the development of new therapeutic agents.
However, such models can be poorly calibrated, which results in unreliable uncertainty estimates.
We show that combining post hoc calibration methods with well-performing uncertainty quantification approaches can boost model accuracy and calibration.
arXiv Detail & Related papers (2024-07-19T10:29:00Z)
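As a hedged illustration of the post hoc calibration step mentioned above, the sketch below implements temperature scaling, one standard post hoc method; whether the paper uses exactly this method is not stated in the summary, and the data here are synthetic:

```python
import numpy as np

def nll(logits, labels, T):
    """Binary negative log-likelihood of temperature-scaled logits."""
    p = 1.0 / (1.0 + np.exp(-logits / T))
    eps = 1e-12
    return -np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))

def fit_temperature(val_logits, val_labels):
    """Post hoc calibration: choose the single scalar T that minimizes the
    validation NLL; predicted rankings (hence accuracy) are unchanged."""
    grid = np.linspace(0.25, 5.0, 200)
    return grid[np.argmin([nll(val_logits, val_labels, T) for T in grid])]

# synthetic overconfident classifier: the optimal T is about 1.6 here
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 5000)
logits = 5.0 * (2 * labels - 1) + rng.normal(0.0, 4.0, 5000)
print(fit_temperature(logits, labels))   # noticeably larger than 1
```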
- From Conformal Predictions to Confidence Regions [1.4272411349249627]
We introduce CCR, which employs a combination of conformal prediction intervals for the model outputs to establish confidence regions for model parameters.
We present coverage guarantees that hold under minimal assumptions on the noise and are valid in the finite-sample regime.
Our approach is applicable to both split conformal predictions and black-box methodologies including full or cross-conformal approaches.
arXiv Detail & Related papers (2024-05-28T21:33:12Z)
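For readers unfamiliar with the building block CCR starts from, here is a minimal sketch of a standard split conformal interval for regression; how CCR combines such intervals into parameter confidence regions is specific to the paper and not reproduced here:

```python
import numpy as np

def split_conformal_interval(cal_residuals, y_hat_new, alpha=0.1):
    """Split conformal prediction: the finite-sample-corrected (1 - alpha)
    quantile of absolute calibration residuals gives intervals with marginal
    coverage >= 1 - alpha under exchangeability."""
    n = len(cal_residuals)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(np.abs(cal_residuals), level)
    return y_hat_new - q, y_hat_new + q
```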
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to incorporate a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines, allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
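The generic pattern of such a calibration term can be sketched as follows: replace the hard "does the truth fall in the credible interval" indicator with a sigmoid relaxation so the penalty is differentiable. This is an illustrative stand-in, not the paper's exact formulation, and all names (soft_coverage, calibrated_loss) are hypothetical:

```python
import numpy as np

def soft_coverage(y, lo, hi, tau=0.05):
    """Relaxed indicator of y landing inside [lo, hi]: a product of steep
    sigmoids instead of a hard 0/1 step, so gradients can flow through."""
    s = lambda t: 1.0 / (1.0 + np.exp(-t / tau))
    return np.mean(s(y - lo) * s(hi - y))

def calibrated_loss(fit_nll, y, lo, hi, target=0.9, lam=1.0):
    """Training objective = fit term + penalty for deviating from the
    desired coverage of the predicted (lo, hi) intervals."""
    return fit_nll + lam * (soft_coverage(y, lo, hi) - target) ** 2
```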
- Towards Better Certified Segmentation via Diffusion Models [62.21617614504225]
Segmentation models can be vulnerable to adversarial perturbations, which hinders their use in critical decision systems like healthcare or autonomous driving.
Recently, randomized smoothing has been proposed to certify segmentation predictions by adding Gaussian noise to the input to obtain theoretical guarantees.
In this paper, we address the problem of certifying segmentation prediction using a combination of randomized smoothing and diffusion models.
arXiv Detail & Related papers (2023-06-16T16:30:39Z)
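A minimal sketch of the randomized-smoothing part is given below: per-pixel majority voting over Gaussian-noised copies of the input. The paper's contribution of first denoising the noisy copies with a diffusion model is omitted, and `model` is a placeholder for any per-pixel classifier:

```python
import numpy as np

def smoothed_segmentation(model, image, n_classes, sigma=0.25,
                          n_samples=100, rng=None):
    """Randomized smoothing for segmentation: label Gaussian-noised copies of
    the input and take a per-pixel majority vote; certification radii would
    be derived from the vote margins (not shown)."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    votes = np.zeros((h, w, n_classes), dtype=int)
    rows, cols = np.arange(h)[:, None], np.arange(w)[None, :]
    for _ in range(n_samples):
        # the paper first denoises the noisy copy with a diffusion model (omitted)
        labels = model(image + rng.normal(0.0, sigma, image.shape))
        votes[rows, cols, labels] += 1
    return votes.argmax(axis=-1)

# toy per-pixel "classifier": threshold the image into two classes
image = np.random.default_rng(0).random((8, 8))
seg = smoothed_segmentation(lambda im: (im > 0.5).astype(int), image, n_classes=2)
```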
- Variational Inference for Bayesian Neural Networks under Model and Parameter Uncertainty [12.211659310564425]
We apply the concept of model uncertainty as a framework for structural learning in BNNs.
We suggest an adaptation of a scalable variational inference approach with reparametrization of marginal inclusion probabilities.
arXiv Detail & Related papers (2023-05-01T16:38:17Z)
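Reparametrizing marginal inclusion probabilities is commonly done with a binary Concrete (Gumbel-softmax) relaxation; the sketch below shows that standard construction, which may differ from the paper's exact scheme:

```python
import numpy as np

def relaxed_inclusion(logit_alpha, tau=0.5, rng=None):
    """Binary Concrete (Gumbel-softmax) relaxation of a Bernoulli inclusion
    indicator: a differentiable surrogate for 'is this weight in the model?',
    letting marginal inclusion probabilities be trained by backpropagation."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(1e-6, 1 - 1e-6)
    logistic_noise = np.log(u) - np.log(1.0 - u)
    return 1.0 / (1.0 + np.exp(-(logit_alpha + logistic_noise) / tau))
```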
- Error-based Knockoffs Inference for Controlled Feature Selection [49.99321384855201]
We propose an error-based knockoff inference method by integrating the knockoff features, the error-based feature importance statistics, and the stepdown procedure.
The proposed inference procedure does not require specifying a regression model and can handle feature selection with theoretical guarantees.
arXiv Detail & Related papers (2022-03-09T01:55:59Z)
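For orientation, the sketch below shows the standard knockoff+ selection rule that such procedures build on; the paper's novelty lies in the error-based importance statistics plugged into `stat_real` and `stat_knockoff`, which are not reproduced here:

```python
import numpy as np

def knockoff_plus_select(stat_real, stat_knockoff, q=0.1):
    """Generic knockoff+ filter: W_j contrasts each feature's importance with
    its knockoff copy's; the data-dependent threshold controls the FDR at
    level q."""
    W = stat_real - stat_knockoff
    for t in np.sort(np.abs(W[W != 0])):
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return np.flatnonzero(W >= t)
    return np.array([], dtype=int)   # no threshold achieves the target FDR
```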
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for resolving such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be generated efficiently by using a layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
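A common way to turn such an ensemble into pixel-wise uncertainty, shown here only as a generic sketch (the paper's pixel-wise uncertainty loss is a training-time construct and is not reproduced), is the per-pixel predictive entropy of the averaged softmax outputs:

```python
import numpy as np

def pixelwise_predictive_entropy(prob_maps):
    """Per-pixel predictive entropy of ensemble softmax outputs with shape
    (members, H, W, classes); high-entropy pixels mark member disagreement."""
    p = prob_maps.mean(axis=0)                       # ensemble-averaged probabilities
    return -(p * np.log(p + 1e-12)).sum(axis=-1)     # entropy map of shape (H, W)
```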
This list is automatically generated from the titles and abstracts of the papers in this site.