Generalized Bayesian Additive Regression Trees Models: Beyond
Conditional Conjugacy
- URL: http://arxiv.org/abs/2202.09924v1
- Date: Sun, 20 Feb 2022 22:52:07 GMT
- Title: Generalized Bayesian Additive Regression Trees Models: Beyond
Conditional Conjugacy
- Authors: Antonio R. Linero
- Abstract summary: In this article, we greatly expand the domain of applicability of BART to arbitrary generalized BART models.
Our algorithm requires only that the user be able to compute the likelihood and (optionally) its gradient and Fisher information.
The potential applications are very broad; we consider examples in survival analysis, structured heteroskedastic regression, and gamma shape regression.
- Score: 2.969705152497174
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian additive regression trees have seen increased interest in recent
years due to their ability to combine machine learning techniques with
principled uncertainty quantification. The Bayesian backfitting algorithm used
to fit BART models, however, limits their application to a small class of
models for which conditional conjugacy exists. In this article, we greatly
expand the domain of applicability of BART to arbitrary \emph{generalized BART}
models by introducing a very simple, tuning-parameter-free, reversible jump
Markov chain Monte Carlo algorithm. Our algorithm requires only that the user
be able to compute the likelihood and (optionally) its gradient and Fisher
information. The potential applications are very broad; we consider examples in
survival analysis, structured heteroskedastic regression, and gamma shape
regression.
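To make the stated interface concrete, the sketch below shows roughly what a user-supplied likelihood, gradient, and Fisher information might look like for a simplified gamma shape model in which the tree ensemble outputs eta(x) = log(shape) and the rate is fixed at 1. This parameterization and these function names are illustrative assumptions for exposition only, not the paper's actual API.

```python
# Illustrative sketch (not the paper's code): the three quantities a
# generalized BART sampler is said to need, written for a simplified
# gamma shape model with eta = log(shape) and the rate fixed at 1.
import numpy as np
from scipy.special import gammaln, digamma, polygamma

def log_likelihood(eta, y):
    """Per-observation log density of Gamma(shape=exp(eta), rate=1)."""
    alpha = np.exp(eta)
    return (alpha - 1.0) * np.log(y) - y - gammaln(alpha)

def grad_log_likelihood(eta, y):
    """Derivative of the log density with respect to eta."""
    alpha = np.exp(eta)
    return alpha * (np.log(y) - digamma(alpha))

def fisher_information(eta):
    """Expected information for eta: alpha^2 * trigamma(alpha)."""
    alpha = np.exp(eta)
    return alpha ** 2 * polygamma(1, alpha)

# Sanity check on simulated data: the average gradient at the true eta
# should be close to zero.
rng = np.random.default_rng(0)
eta_true = 0.5
y = rng.gamma(shape=np.exp(eta_true), scale=1.0, size=5000)
print(log_likelihood(eta_true, y).sum())
print(grad_log_likelihood(eta_true, y).mean())
print(fisher_information(eta_true))
```

In principle, a gradient- and information-aware proposal for the leaf parameters could be built from these three functions; the actual reversible jump moves are described in the paper.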
Related papers
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - What learning algorithm is in-context learning? Investigations with
linear models [87.91612418166464]
We investigate the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly.
We show that trained in-context learners closely match the predictors computed by gradient descent, ridge regression, and exact least-squares regression.
We present preliminary evidence that in-context learners share algorithmic features with these predictors.
arXiv Detail & Related papers (2022-11-28T18:59:51Z) - Rethinking Log Odds: Linear Probability Modelling and Expert Advice in
Interpretable Machine Learning [8.831954614241234]
We introduce a family of interpretable machine learning models, with two broad additions: Linearised Additive Models (LAMs) and SubscaleHedge.
LAMs replace the ubiquitous logistic link function in Generalised Additive Models (GAMs), while SubscaleHedge is an expert advice algorithm for combining base models trained on subsets of features called subscales.
arXiv Detail & Related papers (2022-11-11T17:21:57Z) - SoftBart: Soft Bayesian Additive Regression Trees [2.969705152497174]
This paper introduces the SoftBart package for fitting the Soft BART algorithm of Linero and Yang.
A major goal of this package has been to facilitate the inclusion of BART in larger models.
I show both how to use this package for standard prediction tasks and how to embed BART models in larger models.
arXiv Detail & Related papers (2022-10-28T19:25:45Z) - GP-BART: a novel Bayesian additive regression trees approach using
Gaussian processes [1.03590082373586]
The GP-BART model is an extension of BART that addresses the lack of smoothness in standard BART predictions by assuming GP priors for the predictions of each terminal node among all trees.
The model's effectiveness is demonstrated through applications to simulated and real-world data, surpassing the performance of traditional modeling approaches in various scenarios.
arXiv Detail & Related papers (2022-04-05T11:18:44Z) - A cautionary tale on fitting decision trees to data from additive
models: generalization lower bounds [9.546094657606178]
We study the generalization performance of decision trees with respect to different generative regression models.
This allows us to elicit their inductive bias, that is, the assumptions the algorithms make (or do not make) to generalize to new data.
We prove a sharp squared error generalization lower bound for a large class of decision tree algorithms fitted to sparse additive models.
arXiv Detail & Related papers (2021-10-18T21:22:40Z) - Flexible Model Aggregation for Quantile Regression [92.63075261170302]
Quantile regression is a fundamental problem in statistical learning motivated by a need to quantify uncertainty in predictions.
We investigate methods for aggregating any number of conditional quantile models.
All of the models we consider in this paper can be fit using modern deep learning toolkits.
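As background for what "conditional quantile models" optimize, the following is a minimal sketch of the standard pinball (quantile) loss together with a naive convex combination of several models' quantile predictions. The simple averaging scheme is a placeholder for illustration, not the aggregation method proposed in the paper.

```python
# Illustrative sketch only: the pinball (quantile) loss used to fit and
# evaluate conditional quantile models, plus a naive weighted average of
# several models' quantile predictions.
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Average quantile loss at level tau."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

def aggregate_quantiles(predictions, weights=None):
    """Convex combination of per-model quantile predictions (models x n)."""
    predictions = np.asarray(predictions)
    if weights is None:
        weights = np.full(predictions.shape[0], 1.0 / predictions.shape[0])
    return weights @ predictions

y = np.array([1.0, 2.0, 3.0])
preds = [np.array([0.8, 2.1, 2.9]), np.array([1.2, 1.8, 3.3])]
combined = aggregate_quantiles(preds)
print(pinball_loss(y, combined, tau=0.5))
```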
arXiv Detail & Related papers (2021-02-26T23:21:16Z) - Goal-directed Generation of Discrete Structures with Conditional
Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions that evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z) - Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
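For readers unfamiliar with the method of multiplicative weight updates that the Klivans-Meka approach builds on, here is a minimal sketch of a generic Hedge-style update. It illustrates only the general update pattern, not the authors' graphical-model learning algorithm or its $O(mp^2)$ runtime.

```python
# Generic multiplicative-weights (Hedge) update over a panel of "experts",
# shown only to illustrate the update pattern the paper builds on.
import numpy as np

def hedge(losses, eta=0.5):
    """Run Hedge over a (T x K) matrix of per-round expert losses in [0, 1]."""
    T, K = losses.shape
    weights = np.full(K, 1.0 / K)
    total_loss = 0.0
    for t in range(T):
        total_loss += weights @ losses[t]    # expected loss this round
        weights *= np.exp(-eta * losses[t])  # multiplicative update
        weights /= weights.sum()             # renormalize to a distribution
    return weights, total_loss

rng = np.random.default_rng(1)
losses = rng.uniform(size=(100, 5))
final_weights, cum_loss = hedge(losses)
print(final_weights, cum_loss)
```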
arXiv Detail & Related papers (2020-02-20T10:50:58Z) - Stochastic tree ensembles for regularized nonlinear regression [0.913755431537592]
This paper develops a novel tree ensemble method for nonlinear regression, which we refer to as XBART.
By combining regularization and search strategies from Bayesian modeling with computationally efficient techniques, the new method attains state-of-the-art performance.
arXiv Detail & Related papers (2020-02-09T14:37:02Z) - Particle-Gibbs Sampling For Bayesian Feature Allocation Models [77.57285768500225]
Most widely used MCMC strategies rely on an element-wise Gibbs update of the feature allocation matrix.
We have developed a Gibbs sampler that can update an entire row of the feature allocation matrix in a single move.
However, this sampler is impractical for models with a large number of features, as its computational complexity scales exponentially in the number of features.
We therefore develop a Particle Gibbs sampler that targets the same distribution as the row-wise Gibbs updates but has computational complexity that grows only linearly in the number of features.
arXiv Detail & Related papers (2020-01-25T22:11:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.