Hierarchical Embedded Bayesian Additive Regression Trees
- URL: http://arxiv.org/abs/2204.07207v2
- Date: Mon, 24 Apr 2023 14:08:48 GMT
- Title: Hierarchical Embedded Bayesian Additive Regression Trees
- Authors: Bruna Wundervald, Andrew Parnell, Katarina Domijan
- Abstract summary: HE-BART allows for random effects to be included at the terminal node level of a set of regression trees.
Using simulated and real-world examples, we demonstrate that HE-BART yields superior predictions for many of the standard mixed effects models' example data sets.
In a future version of this paper, we will outline its use in larger, more advanced data sets and structures.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a simple yet powerful extension of Bayesian Additive Regression
Trees which we name Hierarchical Embedded BART (HE-BART). The model allows for
random effects to be included at the terminal node level of a set of regression
trees, making HE-BART a non-parametric alternative to mixed effects models
which avoids the need for the user to specify the structure of the random
effects in the model, whilst maintaining the prediction and uncertainty
calibration properties of standard BART. Using simulated and real-world
examples, we demonstrate that this new extension yields superior predictions
for many of the standard mixed effects models' example data sets, and yet still
provides consistent estimates of the random effect variances. In a future
version of this paper, we will outline its use in larger, more advanced data sets and structures.
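To make the structure concrete, below is a minimal numpy sketch of the core HE-BART idea, under our own simplifying assumptions (a single stump instead of a sampled sum of trees, fixed rather than sampled parameters): each terminal node holds a node-level mean plus per-group offsets drawn around it, so the random effects live inside the trees, and an unseen group simply falls back to the node mean.

```python
# Illustrative sketch only, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
n_groups, n = 5, 200
x = rng.uniform(size=n)
group = rng.integers(n_groups, size=n)

# One stump with a split at x = 0.5, giving two terminal nodes.
mu = np.array([-1.0, 1.0])                # node-level means
tau_node = 0.5                            # sd of group offsets within a node
offsets = rng.normal(0.0, tau_node, size=(2, n_groups))  # per node, per group

leaf = (x > 0.5).astype(int)
y = mu[leaf] + offsets[leaf, group] + rng.normal(0.0, 0.1, size=n)

def predict(x_new, g_new):
    """Node mean plus the group's offset; unseen groups get the mean alone."""
    node = int(x_new > 0.5)
    return mu[node] + (offsets[node, g_new] if g_new < n_groups else 0.0)

print(predict(0.7, 2), predict(0.2, 4))
```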
Related papers
- Forecasting with Hyper-Trees [50.72190208487953]
Hyper-Trees are designed to learn the parameters of time series models.
By relating the parameters of a target time series model to features, Hyper-Trees also address the issue of parameter non-stationarity.
In this novel approach, the trees first generate informative representations from the input features, which a shallow network then maps to the target model parameters.
arXiv Detail & Related papers (2024-05-13T15:22:15Z)
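As a loose illustration of the Hyper-Trees pipeline described above (not the paper's joint training), here is a sketch in which a forest's leaf indices serve as the learned representation and a shallow network maps them to the coefficient of a hypothetical AR(1) target model; the data, targets, and two-stage fitting are all our assumptions.

```python
# Hypothetical two-stage stand-in for the trees -> shallow net -> parameters idea.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))                  # exogenous features
phi = np.tanh(X[:, 0])                         # "true" AR(1) coefficient

forest = RandomForestRegressor(n_estimators=20, max_depth=3, random_state=1)
forest.fit(X, phi)                             # grow trees (any target works for a sketch)
leaves = forest.apply(X)                       # (n, n_estimators) leaf indices
Z = OneHotEncoder(handle_unknown="ignore").fit_transform(leaves)

head = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
head.fit(Z, phi)                               # shallow net -> model parameter
print(head.predict(Z[:3]))                     # predicted AR(1) coefficients
```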
- Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
arXiv Detail & Related papers (2024-02-12T16:15:25Z)
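A minimal sketch of the churn quantity this summary refers to, with function and variable names of our own choosing: the fraction of inputs on which two near-optimal models disagree.

```python
import numpy as np

def churn(preds_a: np.ndarray, preds_b: np.ndarray) -> float:
    """Fraction of examples whose predicted label flips between two models."""
    return float(np.mean(preds_a != preds_b))

# Two classifiers with similar accuracy can still flip many decisions:
a = np.array([1, 0, 1, 1, 0, 1])
b = np.array([1, 1, 1, 0, 0, 1])
print(churn(a, b))  # 0.333... -> a third of users would see a changed outcome
```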
- ChiroDiff: Modelling chirographic data with Diffusion Models [132.5223191478268]
We introduce the model class of Denoising Diffusion Probabilistic Models (DDPMs) for chirographic data.
Our model, named "ChiroDiff", is non-autoregressive: it learns to capture holistic concepts and therefore remains resilient to higher temporal sampling rates.
arXiv Detail & Related papers (2023-04-07T15:17:48Z)
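For reference, this is the standard DDPM forward (noising) process applied to a toy 2-D pen trajectory, the kind of chirographic sequence ChiroDiff models; the schedule and shapes are illustrative, and none of this is the ChiroDiff code.

```python
import numpy as np

rng = np.random.default_rng(2)
x0 = np.cumsum(rng.normal(size=(64, 2)), axis=0)   # a fake pen trajectory

T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I)."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x_mid, x_end = q_sample(x0, 500), q_sample(x0, 999)  # progressively noisier
print(x_mid.std(), x_end.std())
```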
- GP-BART: a novel Bayesian additive regression trees approach using Gaussian processes [1.03590082373586]
The GP-BART model is an extension of BART that addresses BART's limitation of piecewise-constant prediction surfaces by assuming GP priors for the predictions of each terminal node among all trees.
The model's effectiveness is demonstrated through applications to simulated and real-world data, surpassing the performance of traditional modeling approaches in various scenarios.
arXiv Detail & Related papers (2022-04-05T11:18:44Z)
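A small numpy sketch of the mechanism this summary describes, under an assumed kernel and toy data: within a single terminal node, a GP posterior mean replaces the usual constant node prediction.

```python
import numpy as np

rng = np.random.default_rng(3)
x_node = np.sort(rng.uniform(size=15))         # inputs falling in one node
y_node = np.sin(3 * x_node) + rng.normal(0, 0.1, 15)

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1-D input sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

K = rbf(x_node, x_node) + 0.1**2 * np.eye(15)  # prior covariance + noise
x_star = np.linspace(0, 1, 5)
mean_star = rbf(x_star, x_node) @ np.linalg.solve(K, y_node)  # GP posterior mean
print(mean_star)  # smooth within-node predictions instead of one constant
```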
- Distributional Gradient Boosting Machines [77.34726150561087]
Our framework models the full conditional distribution of the response, rather than only its mean, and is built on XGBoost and LightGBM.
We show that our framework achieves state-of-the-art forecast accuracy.
arXiv Detail & Related papers (2022-04-02T06:32:19Z)
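The summary is terse, so here is a simplified two-stage stand-in (not the paper's method) for distributional boosting with scikit-learn: one boosted model for the conditional mean and a second for the log squared residuals, which yields a per-observation Gaussian scale.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(500, 1))
y = X[:, 0] + rng.normal(0, 0.1 + 0.5 * np.abs(X[:, 0]))  # heteroskedastic noise

mean_model = GradientBoostingRegressor().fit(X, y)
resid2 = (y - mean_model.predict(X)) ** 2
scale_model = GradientBoostingRegressor().fit(X, np.log(resid2 + 1e-8))

mu = mean_model.predict(X[:3])
sigma = np.sqrt(np.exp(scale_model.predict(X[:3])))
print(mu, sigma)   # a full Gaussian N(mu, sigma^2) per observation
```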
- On Uncertainty Estimation by Tree-based Surrogate Models in Sequential Model-based Optimization [13.52611859628841]
We revisit various ensembles of randomized trees to investigate their behavior from the perspective of prediction uncertainty estimation.
We propose a new way of building an ensemble of randomized trees, referred to as the BwO forest, in which bagging with oversampling is used to construct the bootstrapped samples.
Experimental results demonstrate the validity and good performance of BwO forest over existing tree-based models in various circumstances.
arXiv Detail & Related papers (2022-02-22T04:50:37Z)
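Our reading of the BwO construction, sketched with assumed details (oversampling factor, tree settings): draw bootstrap samples larger than the data set, fit one tree per sample, and read predictive uncertainty off the spread of the ensemble.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(100, 1))
y = np.sin(6 * X[:, 0]) + rng.normal(0, 0.1, 100)

trees = []
for _ in range(50):
    idx = rng.integers(0, 100, size=200)       # oversampled bootstrap (2x data size)
    trees.append(DecisionTreeRegressor().fit(X[idx], y[idx]))

X_new = np.array([[0.25], [0.9]])
preds = np.stack([t.predict(X_new) for t in trees])
print(preds.mean(axis=0), preds.std(axis=0))   # mean prediction + uncertainty
```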
- Generalized Bayesian Additive Regression Trees Models: Beyond Conditional Conjugacy [2.969705152497174]
In this article, we greatly expand the domain of applicability of BART to arbitrary generalized BART models.
Our algorithm requires only that the user be able to compute the likelihood and (optionally) its gradient and Fisher information.
The potential applications are very broad; we consider examples in survival analysis, structured heteroskedastic regression, and gamma shape regression.
arXiv Detail & Related papers (2022-02-20T22:52:07Z)
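The summary describes a user-facing contract more than an algorithm, so here is a hedged sketch of what such an interface could look like; the class and field names are invented for illustration.

```python
# Hypothetical interface: the user supplies a log-likelihood and, optionally,
# its gradient and Fisher information, as the summary describes.
from dataclasses import dataclass
from typing import Callable, Optional
import numpy as np

@dataclass
class GeneralizedBARTModel:
    log_lik: Callable[[np.ndarray, np.ndarray], float]      # (y, f) -> float
    grad: Optional[Callable[[np.ndarray, np.ndarray], np.ndarray]] = None
    fisher_info: Optional[Callable[[np.ndarray, np.ndarray], np.ndarray]] = None

# Example: Poisson regression with log link, f = log-rate from the trees.
poisson = GeneralizedBARTModel(
    log_lik=lambda y, f: float(np.sum(y * f - np.exp(f))),
    grad=lambda y, f: y - np.exp(f),
    fisher_info=lambda y, f: np.exp(f),
)
print(poisson.log_lik(np.array([2.0, 0.0]), np.array([0.5, -1.0])))
```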
- Accounting for shared covariates in semi-parametric Bayesian additive regression trees [0.0]
We propose some extensions to semi-parametric models based on Bayesian additive regression trees (BART).
The main novelty in our approach lies in the way we change the tree-generation moves in BART to deal with the bias that arises when covariates are shared between the tree and parametric components of the model.
We show competitive performance when compared to regression models, alternative formulations of semi-parametric BART, and other tree-based methods.
arXiv Detail & Related papers (2021-08-17T13:58:44Z)
- Inference in Bayesian Additive Vector Autoregressive Tree Models [0.0]
We propose combining vector autoregressive (VAR) models with Bayesian additive regression tree (BART) models.
The resulting BAVART model is capable of capturing arbitrary non-linear relations without much input from the researcher.
We apply our model to two datasets: the US term structure of interest rates and the Eurozone economy.
arXiv Detail & Related papers (2020-06-29T19:37:09Z)
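A rough stand-in for the BAVART idea (a random forest substitutes for BART here): fit each equation of the VAR on one lag of all series with a tree ensemble, so non-linear dynamics are picked up without researcher input. The lag length and data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
Y = rng.normal(size=(200, 2))                      # two macro series
for t in range(1, 200):                            # inject non-linear VAR(1) dynamics
    Y[t] += np.tanh(Y[t - 1] @ np.array([[0.5, 0.2], [-0.1, 0.4]]))

X_lag, Y_now = Y[:-1], Y[1:]                       # one lag of both series
models = [RandomForestRegressor(n_estimators=100, random_state=6)
          .fit(X_lag, Y_now[:, k]) for k in range(2)]

y_hat = np.array([m.predict(Y[-1:])[0] for m in models])
print(y_hat)                                       # one-step-ahead forecast
```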
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We evaluate a method we call prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
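One common way to implement prediction-time batch normalization in PyTorch, matching our reading of the summary (not necessarily the authors' exact code): keep the model in eval mode but flip only the BatchNorm layers back to train mode, so they normalize with the statistics of the incoming test batch instead of the training-time running averages.

```python
import torch
import torch.nn as nn

def enable_prediction_time_bn(model: nn.Module) -> nn.Module:
    model.eval()                                   # dropout etc. stay off
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.train()                              # use current batch statistics
    return model

model = nn.Sequential(nn.Linear(8, 16), nn.BatchNorm1d(16), nn.ReLU(),
                      nn.Linear(16, 2))
model = enable_prediction_time_bn(model)
with torch.no_grad():
    out = model(torch.randn(32, 8))                # needs a real batch (> 1 example)
print(out.shape)
```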
- Particle-Gibbs Sampling For Bayesian Feature Allocation Models [77.57285768500225]
Most widely used MCMC strategies rely on an element-wise Gibbs update of the feature allocation matrix.
We have developed a Gibbs sampler that can update an entire row of the feature allocation matrix in a single move. However, this sampler is impractical for models with a large number of features, as its computational complexity scales exponentially in the number of features.
We therefore develop a Particle Gibbs sampler that targets the same distribution as the row-wise Gibbs updates, but whose computational complexity grows only linearly in the number of features.
arXiv Detail & Related papers (2020-01-25T22:11:51Z)
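To see why the exact row-wise update is exponential, consider this sketch: updating one row of a binary feature-allocation matrix with K features means scoring all 2^K candidate rows and sampling one in proportion to its weight. The scoring function below is a placeholder for a model-specific likelihood; it is this enumeration that the Particle Gibbs sampler avoids.

```python
import itertools
import numpy as np

rng = np.random.default_rng(7)
K = 10                                          # number of features

def log_weight(row: np.ndarray) -> float:
    """Placeholder for log prior + log likelihood of this row assignment."""
    return -0.5 * float(row.sum())              # toy: sparsity-favoring score

rows = np.array(list(itertools.product([0, 1], repeat=K)))   # 2^K candidates
logw = np.array([log_weight(r) for r in rows])
p = np.exp(logw - logw.max()); p /= p.sum()
new_row = rows[rng.choice(len(rows), p=p)]      # exact row-wise Gibbs draw
print(len(rows), new_row)                       # 1024 states already at K=10
```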
This list is automatically generated from the titles and abstracts of the papers in this site.