Bayesian Additive Regression Trees with Model Trees
- URL: http://arxiv.org/abs/2006.07493v5
- Date: Wed, 10 Mar 2021 16:20:03 GMT
- Title: Bayesian Additive Regression Trees with Model Trees
- Authors: Estevão B. Prado, Rafael A. Moral and Andrew C. Parnell
- Abstract summary: We introduce an extension of BART, called Model Trees BART (MOTR-BART)
MOTR-BART considers piecewise linear functions at node levels instead of piecewise constants.
In our approach, local linearities are captured more efficiently and fewer trees are required to achieve equal or better performance than BART.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian Additive Regression Trees (BART) is a tree-based machine learning
method that has been successfully applied to regression and classification
problems. BART assumes regularisation priors on a set of trees that work as
weak learners and is very flexible for predicting in the presence of
non-linearity and high-order interactions. In this paper, we introduce an
extension of BART, called Model Trees BART (MOTR-BART), that considers
piecewise linear functions at node levels instead of piecewise constants. In
MOTR-BART, each terminal node estimates a linear predictor from the covariates
that have been used as split variables in the corresponding tree, rather than
a single constant value. In our approach, local
linearities are captured more efficiently and fewer trees are required to
achieve equal or better performance than BART. Via simulation studies and real
data applications, we compare MOTR-BART to its main competitors. R code for
MOTR-BART implementation is available at https://github.com/ebprado/MOTR-BART.
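To make the node-level linear predictors concrete, below is a minimal Python sketch of the idea for a single tree (the paper's own implementation is the R code linked above). The function names are hypothetical, and plain least squares stands in for the posterior sampling of coefficients under regularisation priors that MOTR-BART actually performs.

```python
# Minimal sketch of a MOTR-BART-style terminal node: instead of a single
# constant (as in BART), the leaf carries a linear predictor restricted to
# the covariates used as split variables in its tree.
# fit_leaf/predict_leaf are hypothetical names; least squares is a stand-in
# for the paper's Bayesian posterior sampling of the coefficients.
import numpy as np

def fit_leaf(X_leaf, y_leaf, split_vars):
    # Design matrix: intercept plus the tree's split variables only.
    Z = np.column_stack([np.ones(len(y_leaf)), X_leaf[:, split_vars]])
    beta, *_ = np.linalg.lstsq(Z, y_leaf, rcond=None)
    return beta  # intercept and one slope per split variable

def predict_leaf(X_new, beta, split_vars):
    Z = np.column_stack([np.ones(len(X_new)), X_new[:, split_vars]])
    return Z @ beta

# Toy usage: a leaf whose tree split on covariates 0 and 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = 1.5 + 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.1, size=50)
beta = fit_leaf(X, y, split_vars=[0, 2])
print(predict_leaf(X[:3], beta, split_vars=[0, 2]))
```

Because each leaf carries a local linear fit rather than a constant, fewer trees are typically needed to capture smooth local trends, which is the efficiency gain the abstract refers to.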
Related papers
- Neural Graph Pattern Machine [50.78679002846741]
We propose the Neural Graph Pattern Machine (GPM), a framework designed to learn directly from graph patterns.
GPM efficiently extracts and encodes substructures while identifying the most relevant ones for downstream tasks.
arXiv Detail & Related papers (2025-01-30T20:37:47Z) - An Automatic Graph Construction Framework based on Large Language Models for Recommendation [49.51799417575638]
We introduce AutoGraph, an automatic graph construction framework based on large language models for recommendation.
LLMs infer user preferences and item knowledge, which are encoded as semantic vectors.
Latent factors are incorporated as extra nodes to link the user/item nodes, resulting in a graph with in-depth global-view semantics.
arXiv Detail & Related papers (2024-12-24T07:51:29Z) - SPaR: Self-Play with Tree-Search Refinement to Improve Instruction-Following in Large Language Models [88.29990536278167]
We introduce SPaR, a self-play framework integrating tree-search self-refinement to yield valid and comparable preference pairs free from distractions.
Our experiments show that a LLaMA3-8B model, trained over three iterations guided by SPaR, surpasses GPT-4-Turbo on the IFEval benchmark without losing general capabilities.
arXiv Detail & Related papers (2024-12-16T09:47:43Z) - Oblique Bayesian additive regression trees [0.5356944479760104]
Current implementations of Bayesian Additive Regression Trees (BART) are based on axis-aligned decision rules.
We develop an oblique version of BART that leverages a data-adaptive decision rule.
We systematically compare our oblique BART to axis-aligned BART and other tree ensemble methods, finding that oblique BART is competitive with, and sometimes much better than, those methods; a brief sketch contrasting axis-aligned and oblique splits appears after this list.
arXiv Detail & Related papers (2024-11-13T18:29:58Z) - On the Gaussian process limit of Bayesian Additive Regression Trees [0.0]
Bayesian Additive Regression Trees (BART) is a nonparametric Bayesian regression technique of rising fame.
In the limit of infinite trees, it becomes equivalent to Gaussian process (GP) regression.
This study opens new ways to understand and develop BART and GP regression.
arXiv Detail & Related papers (2024-10-26T23:18:33Z) - ASBART: Accelerated Soft Bayes Additive Regression Trees [8.476756500467689]
Soft BART improves both practically and theoretically on existing Bayesian sum-of-trees models.
Compared to BART, however, it takes roughly 20 times longer to complete the computation under the default settings.
We propose a variant of Soft BART named Accelerated Soft BART (ASBART).
arXiv Detail & Related papers (2023-10-21T11:27:42Z) - flexBART: Flexible Bayesian regression trees with categorical predictors [0.6577148087211809]
Most implementations of Bayesian additive regression trees (BART) one-hot encode categorical predictors, replacing each one with several binary indicators.
We re-implement BART with regression trees whose decision rules can send multiple levels of a categorical predictor to either branch of a decision node; a short sketch of such subset splits appears after this list.
Our re-implementation, which is available in the flexBART package, often yields improved out-of-sample predictive performance and scales better to larger datasets.
arXiv Detail & Related papers (2022-11-08T18:52:37Z) - Lookback for Learning to Branch [77.32867454769936]
Bipartite Graph Neural Networks (GNNs) have been shown to be an important component of deep learning based Mixed-Integer Linear Program (MILP) solvers.
Recent works have demonstrated the effectiveness of such GNNs in replacing the branching (variable selection) in branch-and-bound (B&B) solvers.
arXiv Detail & Related papers (2022-06-30T02:33:32Z) - GP-BART: a novel Bayesian additive regression trees approach using Gaussian processes [1.03590082373586]
The GP-BART model is an extension of BART that addresses the piecewise-constant limitation of standard BART by assuming GP priors for the predictions of each terminal node among all trees.
The model's effectiveness is demonstrated through applications to simulated and real-world data, surpassing the performance of traditional modeling approaches in various scenarios.
arXiv Detail & Related papers (2022-04-05T11:18:44Z) - Momentum Pseudo-Labeling for Semi-Supervised Speech Recognition [55.362258027878966]
We present momentum pseudo-labeling (MPL) as a simple yet effective strategy for semi-supervised speech recognition.
MPL consists of a pair of online and offline models that interact and learn from each other, inspired by the mean teacher method.
The experimental results demonstrate that MPL effectively improves over the base model and is scalable to different semi-supervised scenarios.
arXiv Detail & Related papers (2021-06-16T16:24:55Z) - Improved Branch and Bound for Neural Network Verification via Lagrangian Decomposition [161.09660864941603]
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks.
We present a novel activation-based branching strategy and a BaB framework, named Branch and Dual Network Bound (BaDNB).
BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial properties.
arXiv Detail & Related papers (2021-04-14T09:22:42Z)
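As an aside on the oblique BART entry above, the difference between the two families of decision rules fits in a few lines. This is a hypothetical Python sketch, not code from any listed paper: an axis-aligned rule thresholds one covariate, while an oblique rule thresholds a linear combination of covariates.

```python
# Hypothetical illustration of the two split families: standard BART uses
# axis-aligned rules (one covariate per split), oblique BART uses rules on
# linear combinations of covariates.
import numpy as np

def axis_aligned_split(X, j, c):
    # Send an observation left when covariate j falls below threshold c.
    return X[:, j] < c

def oblique_split(X, w, c):
    # Send an observation left when the projection w'x falls below c.
    return X @ w < c

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
print(axis_aligned_split(X, j=0, c=0.0))                      # rule on x0 alone
print(oblique_split(X, w=np.array([1.0, -0.5, 0.0]), c=0.0))  # rule on x0 - 0.5*x1
```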
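Likewise, for the flexBART entry, a hedged sketch of subset splits for categorical predictors (function names invented for illustration): a one-hot encoding only permits "level k versus the rest" splits, whereas a subset rule can send any group of levels down the same branch.

```python
# Hypothetical illustration of categorical decision rules. One-hot encoding
# restricts splits to a single level versus the rest; a flexBART-style rule
# routes an arbitrary subset of levels to the left branch.
import numpy as np

def one_hot_split(levels, k):
    # Only observations at level k go left.
    return levels == k

def subset_split(levels, left_levels):
    # Any observation whose level is in the chosen subset goes left.
    return np.isin(levels, list(left_levels))

levels = np.array(["A", "B", "C", "D", "A", "C"])
print(one_hot_split(levels, "A"))        # [ True False False False  True False]
print(subset_split(levels, {"A", "C"}))  # [ True False  True False  True  True]
```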
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.