Inference in Bayesian Additive Vector Autoregressive Tree Models
- URL: http://arxiv.org/abs/2006.16333v2
- Date: Tue, 9 Mar 2021 12:29:13 GMT
- Title: Inference in Bayesian Additive Vector Autoregressive Tree Models
- Authors: Florian Huber and Luca Rossini
- Abstract summary: We propose combining vector autoregressive (VAR) models with Bayesian additive regression tree (BART) models.
The resulting BAVART model is capable of capturing arbitrary non-linear relations without much input from the researcher.
We apply our model to two datasets: the US term structure of interest rates and the Eurozone economy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vector autoregressive (VAR) models assume linearity between the endogenous
variables and their lags. This assumption might be overly restrictive and could
have a deleterious impact on forecasting accuracy. As a solution, we propose
combining VAR with Bayesian additive regression tree (BART) models. The
resulting Bayesian additive vector autoregressive tree (BAVART) model is
capable of capturing arbitrary non-linear relations between the endogenous
variables and the covariates without much input from the researcher. Since
controlling for heteroscedasticity is key for producing precise density
forecasts, our model allows for stochastic volatility in the errors. We apply
our model to two datasets. The first application shows that the BAVART model
yields highly competitive forecasts of the US term structure of interest rates.
In a second application, we estimate our model using a moderately sized
Eurozone dataset to investigate the dynamic effects of uncertainty on the
economy.
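The core construction (a nonlinear VAR whose conditional means are sums of regression trees over the lagged endogenous variables) can be illustrated with a minimal frequentist stand-in: boosted depth-1 stumps fit equation by equation on the lag matrix. This sketch omits the Bayesian backfitting sampler and the stochastic-volatility errors of the actual BAVART model; all function names and the simulated series are our own illustrative choices.

```python
import numpy as np

def lag_design(Y, p):
    """Stack p lags of a (T, k) series: row t holds Y[t-1], ..., Y[t-p]."""
    T, _ = Y.shape
    X = np.hstack([Y[p - l:T - l] for l in range(1, p + 1)])
    return X, Y[p:]

def fit_stumps(X, y, n_rounds=150, lr=0.1):
    """Boosted depth-1 trees on residuals: a crude stand-in for the
    sum-of-trees conditional mean of one VAR equation."""
    pred = np.full(len(y), y.mean())
    model = [y.mean()]
    for _ in range(n_rounds):
        resid, best = y - pred, None
        for j in range(X.shape[1]):
            col = X[:, j]
            for t in np.quantile(col, [0.25, 0.5, 0.75]):
                left = col <= t
                if left.all() or not left.any():
                    continue
                lm, rm = resid[left].mean(), resid[~left].mean()
                sse = ((resid[left] - lm) ** 2).sum() + ((resid[~left] - rm) ** 2).sum()
                if best is None or sse < best[0]:
                    best = (sse, j, t, lr * lm, lr * rm)
        _, j, t, lv, rv = best
        pred += np.where(X[:, j] <= t, lv, rv)
        model.append((j, t, lv, rv))
    return model

def predict(model, X):
    pred = np.full(len(X), model[0])
    for j, t, lv, rv in model[1:]:
        pred += np.where(X[:, j] <= t, lv, rv)
    return pred

# Simulate a bivariate series with a nonlinear lag relation.
rng = np.random.default_rng(0)
T, p = 300, 2
Y = np.zeros((T, 2))
for t in range(p, T):
    Y[t, 0] = 0.5 * Y[t - 1, 0] + 0.4 * np.tanh(2 * Y[t - 1, 1]) + 0.1 * rng.normal()
    Y[t, 1] = 0.4 * Y[t - 1, 1] - 0.3 * Y[t - 2, 0] ** 2 + 0.1 * rng.normal()

X, Ytar = lag_design(Y, p)
models = [fit_stumps(X, Ytar[:, i]) for i in range(2)]
fits = np.column_stack([predict(m, X) for m in models])
```

Fitting each equation separately mirrors the equation-by-equation treatment of the conditional mean; the Bayesian version would instead draw tree structures and leaf parameters from their posteriors.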
Related papers
- Co-data Learning for Bayesian Additive Regression Trees [0.0]
We propose to incorporate co-data into a sum-of-trees prediction model.
The proposed method can handle multiple types of co-data simultaneously.
Co-data enhances prediction in an application to diffuse large B-cell lymphoma prognosis.
arXiv Detail & Related papers (2023-11-16T16:14:39Z)
- Linked shrinkage to improve estimation of interaction effects in regression models [0.0]
We develop an estimator that adapts well to two-way interaction terms in a regression model.
We evaluate the potential of the model for inference, which is notoriously hard for selection strategies.
Our models can be very competitive with more advanced machine learners, such as random forests, even for fairly large sample sizes.
arXiv Detail & Related papers (2023-09-25T10:03:39Z)
- Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
Since the chance of mislabeling reflects a pair's potential, AUR makes recommendations according to the estimated uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z)
- Hierarchical Embedded Bayesian Additive Regression Trees [0.0]
HE-BART allows for random effects to be included at the terminal node level of a set of regression trees.
Using simulated and real-world examples, we demonstrate that HE-BART yields superior predictions for many of the standard mixed effects models' example data sets.
In a future version of this paper, we outline its use in larger, more advanced data sets and structures.
arXiv Detail & Related papers (2022-04-14T19:56:03Z)
- Benign-Overfitting in Conditional Average Treatment Effect Prediction with Linear Regression [14.493176427999028]
We study the benign overfitting theory in the prediction of the conditional average treatment effect (CATE) with linear regression models.
We show that the T-learner fails to achieve consistency except under random assignment, while the IPW-learner's risk converges to zero if the propensity score is known.
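The two estimators being compared can be sketched directly: the T-learner fits one regression per treatment arm and differences the fits, while the IPW-learner regresses an inverse-propensity-weighted outcome transformation on the covariates. This shows only the mechanics under a known propensity score, not the paper's benign-overfitting analysis; the simulated design and names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau = 5000, 2.0                             # sample size, true (constant) effect

X = rng.normal(size=(n, 3))
e = 1 / (1 + np.exp(-X[:, 0]))                 # propensity depends on X (confounding)
e = np.clip(e, 0.2, 0.8)
T = rng.binomial(1, e)
Y = X @ np.array([1.0, -1.0, 0.5]) + tau * T + rng.normal(size=n)

def ols(A, b):
    """Least-squares fit with an intercept column."""
    return np.linalg.lstsq(np.column_stack([np.ones(len(A)), A]), b, rcond=None)[0]

Xc = np.column_stack([np.ones(n), X])

# T-learner: fit one regression per arm, take the difference of the fits.
b1, b0 = ols(X[T == 1], Y[T == 1]), ols(X[T == 0], Y[T == 0])
cate_t = Xc @ b1 - Xc @ b0

# IPW-learner: transform the outcome with the known propensity score,
# so that E[Z | X] equals the CATE, then regress Z on X.
Z = T * Y / e - (1 - T) * Y / (1 - e)
cate_ipw = Xc @ ols(X, Z)
```

With the propensity score known, the IPW transformation is unbiased for the CATE pointwise; the paper's contribution concerns what happens when these regressions interpolate in high dimensions.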
arXiv Detail & Related papers (2022-02-10T18:51:52Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
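The key ingredient, the natural gradient (the ordinary gradient of the negative log-likelihood preconditioned by the inverse Fisher information), can be shown for a univariate Normal parametrised as (mu, log sigma). NGBoost fits these per-example gradients with base learners, which this sketch omits; it just takes plain natural-gradient steps on a sample.

```python
import numpy as np

def nll_grad(theta, y):
    """Gradient of the Normal negative log-likelihood w.r.t.
    (mu, log_sigma), averaged over the sample y."""
    mu, log_s = theta
    s2 = np.exp(2 * log_s)
    g_mu = np.mean((mu - y) / s2)
    g_ls = np.mean(1 - (y - mu) ** 2 / s2)
    return np.array([g_mu, g_ls])

def natural_grad(theta, y):
    """Precondition by the inverse Fisher information, which for the
    (mu, log_sigma) parametrisation is diag(sigma^2, 1/2)."""
    s2 = np.exp(2 * theta[1])
    return np.array([s2, 0.5]) * nll_grad(theta, y)

rng = np.random.default_rng(2)
y = rng.normal(3.0, 2.0, size=2000)

theta = np.array([0.0, 0.0])                   # start at N(0, 1)
for _ in range(200):
    theta -= 0.1 * natural_grad(theta, y)

mu_hat, sigma_hat = theta[0], np.exp(theta[1])
```

Because the Fisher preconditioning rescales both coordinates to a common information geometry, the same step size works for the location and the (log) scale, which is the robustness-to-tuning property the summary refers to.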
arXiv Detail & Related papers (2021-06-07T17:44:49Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- A Locally Adaptive Interpretable Regression [7.4267694612331905]
Linear regression is one of the most interpretable prediction models.
In this work, we introduce a locally adaptive interpretable regression (LoAIR).
Our model achieves comparable or better predictive performance than the other state-of-the-art baselines.
arXiv Detail & Related papers (2020-05-07T09:26:14Z)
- Nonparametric Estimation in the Dynamic Bradley-Terry Model [69.70604365861121]
We develop a novel estimator that relies on kernel smoothing to pre-process the pairwise comparisons over time.
We derive time-varying oracle bounds for both the estimation error and the excess risk in the model-agnostic setting.
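The pre-processing step can be sketched directly: kernel-weight the per-period win counts around a target time, then run the standard Bradley-Terry MM updates on the smoothed counts. The Gaussian kernel, bandwidth, and simulated comparisons are our illustrative choices, not the paper's estimator in full.

```python
import numpy as np

def smooth_wins(win_history, times, t0, h):
    """Gaussian-kernel average of per-period win matrices around time t0."""
    w = np.exp(-0.5 * ((times - t0) / h) ** 2)
    return np.tensordot(w / w.sum(), win_history, axes=1)

def bt_mm(wins, n_iter=200):
    """Bradley-Terry strengths via the classic MM algorithm.
    wins[i, j] = (smoothed) count of wins of item i over item j."""
    n = wins + wins.T                          # comparisons per pair
    pi = np.ones(wins.shape[0])
    for _ in range(n_iter):
        pi = wins.sum(axis=1) / (n / (pi[:, None] + pi[None, :])).sum(axis=1)
        pi /= pi.sum()
    return pi

# Simulate 10 periods of pairwise comparisons among 3 items
# whose true strengths are 3 > 2 > 1.
rng = np.random.default_rng(3)
strength = np.array([3.0, 2.0, 1.0])
times = np.arange(10.0)
hist = np.zeros((10, 3, 3))
for s in range(10):
    for i in range(3):
        for j in range(i + 1, 3):
            p = strength[i] / (strength[i] + strength[j])
            wins_ij = rng.binomial(20, p)      # 20 comparisons per pair, per period
            hist[s, i, j], hist[s, j, i] = wins_ij, 20 - wins_ij

pi_hat = bt_mm(smooth_wins(hist, times, t0=5.0, h=2.0))
```

Sweeping `t0` over the observation window traces out the time-varying strength estimates; the bandwidth `h` trades bias against variance exactly as in ordinary kernel regression.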
arXiv Detail & Related papers (2020-02-28T21:52:49Z)
- On the Discrepancy between Density Estimation and Sequence Generation [92.70116082182076]
Log-likelihood is highly correlated with BLEU when we consider models within the same family.
We observe no correlation between rankings of models across different families.
arXiv Detail & Related papers (2020-02-17T20:13:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.