ASBART: Accelerated Soft Bayes Additive Regression Trees
- URL: http://arxiv.org/abs/2310.13975v1
- Date: Sat, 21 Oct 2023 11:27:42 GMT
- Title: ASBART: Accelerated Soft Bayes Additive Regression Trees
- Authors: Hao Ran and Yang Bai
- Abstract summary: Soft BART improves both practically and theoretically on existing Bayesian sum-of-trees models.
Compared to BART, it takes roughly 20 times as long to complete the computation with the default settings.
We propose a variant of BART named accelerated Soft BART (ASBART).
- Score: 8.476756500467689
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian additive regression trees (BART) is a nonparametric regression
model which has gained widespread popularity in recent years due to its flexibility
and high estimation accuracy. Soft BART, one variant of BART, improves both
practically and theoretically on existing Bayesian sum-of-trees models. One
bottleneck for Soft BART is the slow speed of its long MCMC loop: compared to
BART, it takes roughly 20 times as long to complete the computation with the
default settings. We propose a variant of BART named accelerated Soft
BART (ASBART). Simulation studies show that the new method is about 10 times
faster than Soft BART with comparable accuracy. Our code is open-source and
available at https://github.com/richael008/XSBART.
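For context, the "soft" in Soft BART refers to replacing BART's hard decision-tree splits with smooth logistic gates, so every observation receives a weighted combination of all leaves. Below is a minimal sketch of that gating idea with a single soft split; it illustrates why each MCMC iteration is costlier than in BART, and is not the ASBART algorithm itself (the one-split tree and the bandwidth name `tau` are simplifications).

```python
import numpy as np

def gate(x, cut, tau):
    # Logistic gating: probability of routing x to the right child.
    # As tau -> 0 this recovers BART's hard split 1[x > cut].
    return 1.0 / (1.0 + np.exp(-(x - cut) / tau))

def soft_stump_predict(x, cut, tau, mu_left, mu_right):
    """One-split 'soft tree': every observation mixes both leaves.

    This is the cost driver: each datum contributes with nonzero
    weight to every leaf, unlike a hard tree where it hits one leaf.
    """
    p_right = gate(x, cut, tau)
    return (1.0 - p_right) * mu_left + p_right * mu_right

# Example: predictions blend smoothly across the cutpoint at 0.
x = np.linspace(-2.0, 2.0, 5)
print(soft_stump_predict(x, cut=0.0, tau=0.1, mu_left=-1.0, mu_right=1.0))
```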
Related papers
- Very fast Bayesian Additive Regression Trees on GPU [0.0]
I present a GPU-enabled implementation of BART, faster by up to 200x relative to a single CPU core, making BART competitive in running time with XGBoost.
This implementation is available in the Python package bartz.
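A possible usage sketch follows. The entry-point name and array layout below are assumptions based on the package's stated goal of mirroring the R BART interface; consult the bartz documentation for the actual API before relying on this.

```python
import numpy as np
# Hypothetical usage; `gbart` is assumed to mirror the R BART
# package's interface. Check the bartz docs for the real entry point.
from bartz.BART import gbart

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # 100 observations, 5 predictors
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

fit = gbart(X.T, y)                      # assumption: predictors-by-observations layout, as in R
y_hat = fit.yhat_train.mean(axis=0)      # assumption: posterior draws of in-sample fits
```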
arXiv Detail & Related papers (2024-10-30T17:29:03Z)
- On the Gaussian process limit of Bayesian Additive Regression Trees [0.0]
Bayesian Additive Regression Trees (BART) is a nonparametric Bayesian regression technique of rising fame.
In the limit of infinite trees, it becomes equivalent to Gaussian process (GP) regression.
This study opens new ways to understand and develop BART and GP regression.
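A one-line sketch of why the limit holds, under the standard BART prior with m i.i.d. trees and leaf values drawn N(0, σ_μ²/m) (notation assumed here for illustration, not taken from the paper):

```latex
f(x) = \sum_{t=1}^{m} g\bigl(x;\, T_t, M_t\bigr), \qquad
\operatorname{Cov}\bigl(f(x), f(x')\bigr)
  = \sigma_\mu^{2}\,\Pr\bigl[x \text{ and } x' \text{ fall in the same leaf}\bigr]
```

By the central limit theorem, as m → ∞ the prior on f converges to a Gaussian process whose covariance kernel is this same-leaf probability under the tree prior.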
arXiv Detail & Related papers (2024-10-26T23:18:33Z) - SoftBart: Soft Bayesian Additive Regression Trees [2.969705152497174]
This paper introduces the SoftBart package for fitting the Soft BART algorithm of Linero and Yang.
A major goal of this package has been to facilitate the inclusion of BART in larger models.
I show both how to use this package for standard prediction tasks and how to embed BART models in larger models.
arXiv Detail & Related papers (2022-10-28T19:25:45Z) - A Mixing Time Lower Bound for a Simplified Version of BART [5.149859291357858]
We provide the first lower bound on the mixing time for a simplified version of BART.
Inspired by this new connection between the mixing time and the number of data points, we perform rigorous simulations on BART.
We show qualitatively that BART's mixing time increases with the number of data points.
arXiv Detail & Related papers (2022-10-17T18:45:36Z) - BARTScore: Evaluating Generated Text as Text Generation [89.50052670307434]
We conceptualize the evaluation of generated text as a text generation problem, modeled using pre-trained sequence-to-sequence models.
We operationalize this idea using BART, an encoder-decoder based pre-trained model.
We propose a metric BARTScore with a number of variants that can be flexibly applied to evaluation of text from different perspectives.
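The core computation is simply the conditional log-likelihood that a sequence-to-sequence model assigns to one text given another. Below is a minimal sketch of that idea using Hugging Face transformers; it is not the official BARTScore code, and the checkpoint choice is an assumption.

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

name = "facebook/bart-large-cnn"  # assumption: any seq2seq BART checkpoint works here
tok = BartTokenizer.from_pretrained(name)
model = BartForConditionalGeneration.from_pretrained(name).eval()

def bart_score(source: str, target: str) -> float:
    """Average per-token log-likelihood of `target` given `source`."""
    src = tok(source, return_tensors="pt", truncation=True)
    tgt = tok(target, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**src, labels=tgt.input_ids)
    return -out.loss.item()  # out.loss is the mean token cross-entropy

print(bart_score("The cat sat on the mat.", "A cat is sitting on a mat."))
```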
arXiv Detail & Related papers (2021-03-11T11:56:54Z)
- Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Verification [151.62491805851107]
We develop $\beta$-CROWN, a bound propagation based verifier that can fully encode per-neuron splits.
$\beta$-CROWN is close to three orders of magnitude faster than LP-based BaB methods for robustness verification.
By terminating BaB early, our method can also be used for incomplete verification.
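$\beta$-CROWN itself couples linear relaxation with per-neuron split constraints and is too involved to sketch here. As a flavor of what "bound propagation" means, below is plain interval bound propagation (IBP) through one affine+ReLU layer; this is a much looser baseline technique, not $\beta$-CROWN.

```python
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Propagate elementwise input bounds [lo, hi] through x -> W @ x + b."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius  # worst-case spread of the affine map
    return c - r, c + r

def relu_bounds(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to endpoints.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Example: output bounds of a 2-unit layer for inputs in [-0.1, 0.1]^3.
W = np.random.default_rng(0).normal(size=(2, 3))
b = np.zeros(2)
lo, hi = affine_bounds(W, b, -0.1 * np.ones(3), 0.1 * np.ones(3))
print(relu_bounds(lo, hi))
```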
arXiv Detail & Related papers (2021-03-11T11:56:54Z)
- Double Forward Propagation for Memorized Batch Normalization [68.34268180871416]
Batch Normalization (BN) has been a standard component in designing deep neural networks (DNNs).
We propose a memorized batch normalization (MBN) which considers multiple recent batches to obtain more accurate and robust statistics.
Compared to related methods, the proposed MBN exhibits consistent behaviors in both training and inference.
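A rough sketch of the idea follows, assuming plain averaging of the k most recent batch statistics; the paper's actual weighting and correction terms may differ.

```python
from collections import deque
import numpy as np

class MemorizedBatchNorm:
    """Sketch: normalize with statistics pooled over the k most recent batches."""

    def __init__(self, k=5, eps=1e-5):
        self.means = deque(maxlen=k)
        self.vars = deque(maxlen=k)
        self.eps = eps

    def __call__(self, x):  # x: (batch, features)
        self.means.append(x.mean(axis=0))
        self.vars.append(x.var(axis=0))
        # Pooling several batches smooths the noisy per-batch estimates
        # that vanilla BN uses, at the cost of slightly stale statistics.
        mu = np.mean(self.means, axis=0)
        var = np.mean(self.vars, axis=0)
        return (x - mu) / np.sqrt(var + self.eps)

mbn = MemorizedBatchNorm(k=3)
rng = np.random.default_rng(0)
for _ in range(3):
    print(mbn(rng.normal(size=(8, 4))).std(axis=0))
```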
arXiv Detail & Related papers (2020-10-10T08:48:41Z)
- KG-BART: Knowledge Graph-Augmented BART for Generative Commonsense Reasoning [78.81080813406177]
We propose a novel knowledge graph augmented pre-trained language generation model KG-BART.
KG-BART encompasses the complex relations of concepts through the knowledge graph and produces more logical and natural sentences as output.
arXiv Detail & Related papers (2020-09-26T19:57:49Z)
- Bayesian Additive Regression Trees with Model Trees [0.0]
We introduce an extension of BART, called Model Trees BART (MOTR-BART).
MOTR-BART considers piecewise linear functions at node levels instead of piecewise constants.
In our approach, local linearities are captured more efficiently and fewer trees are required to achieve equal or better performance than BART.
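To make the contrast concrete, here is a minimal sketch of the two leaf models. MOTR-BART actually places Bayesian priors on the leaf coefficients; the ridge term and helper names below are illustrative assumptions only.

```python
import numpy as np

def leaf_constant(y):
    # BART-style leaf: one constant per leaf.
    return np.full(len(y), y.mean())

def leaf_linear(X, y, ridge=1e-6):
    # MOTR-style leaf: a local linear fit y ~ X w + c within the leaf.
    A = np.hstack([X, np.ones((len(y), 1))])
    w = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ y)
    return A @ w

# A linear trend inside a leaf is captured exactly by one linear leaf,
# whereas constant leaves would need many extra splits to approximate it.
X = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
y = 2.0 * X[:, 0] + 0.5
print(np.abs(leaf_linear(X, y) - y).max(), np.abs(leaf_constant(y) - y).max())
```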
arXiv Detail & Related papers (2020-06-12T22:19:58Z)
- DADA: Differentiable Automatic Data Augmentation [58.560309490774976]
We propose Differentiable Automatic Data Augmentation (DADA) which dramatically reduces the cost.
We conduct extensive experiments on CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets.
Results show our DADA is at least one order of magnitude faster than the state-of-the-art while achieving very comparable accuracy.
arXiv Detail & Related papers (2020-03-08T13:23:14Z)
- Near-linear Time Gaussian Process Optimization with Adaptive Batching and Resparsification [119.41129787351092]
We introduce BBKB, the first no-regret GP optimization algorithm that provably runs in near-linear time and selects candidates in batches.
We show that the same bound can be used to adaptively delay costly updates to the sparse GP approximation, achieving a near-constant per-step amortized cost.
arXiv Detail & Related papers (2020-02-23T17:43:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.