RFpredInterval: An R Package for Prediction Intervals with Random
Forests and Boosted Forests
- URL: http://arxiv.org/abs/2106.08217v1
- Date: Tue, 15 Jun 2021 15:27:50 GMT
- Title: RFpredInterval: An R Package for Prediction Intervals with Random
Forests and Boosted Forests
- Authors: Cansu Alakus, Denis Larocque, Aurelie Labbe
- Abstract summary: We have developed a comprehensive R package, RFpredInterval, that integrates 16 methods to build prediction intervals with random forests and boosted forests.
The methods implemented in the package are a new method to build prediction intervals with boosted forests (PIBF) and 15 different variants to produce prediction intervals with random forests proposed by Roy and Larocque (2020).
The results show that the proposed method is very competitive and, globally, it outperforms the competing methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Like many predictive models, random forests provide a point prediction for a
new observation. Besides the point prediction, it is important to quantify the
uncertainty in the prediction. Prediction intervals provide information about
the reliability of the point predictions. We have developed a comprehensive R
package, RFpredInterval, that integrates 16 methods to build prediction
intervals with random forests and boosted forests. The methods implemented in
the package are a new method to build prediction intervals with boosted forests
(PIBF) and 15 different variants to produce prediction intervals with random
forests proposed by Roy and Larocque (2020). We perform an extensive simulation
study and apply real data analyses to compare the performance of the proposed
method to ten existing methods to build prediction intervals with random
forests. The results show that the proposed method is very competitive and,
globally, it outperforms the competing methods.
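For readers who want to try the package, here is a minimal usage sketch. It assumes the CRAN release of RFpredInterval exports pibf() for the proposed boosted-forest method and rfpi() for the Roy and Larocque variants, with formula/traindata/testdata style arguments; the argument names and option values shown are recalled from the package documentation and should be verified against ?pibf and ?rfpi.

```r
## Minimal usage sketch for RFpredInterval (argument and output names are
## assumptions; check the package documentation before relying on them).
# install.packages("RFpredInterval")
library(RFpredInterval)

set.seed(1)
n <- 300
dat <- data.frame(x1 = runif(n), x2 = runif(n))
dat$y <- 2 * dat$x1 + sin(6 * dat$x2) + rnorm(n, sd = 0.3)
train <- dat[1:200, ]
test  <- dat[201:300, ]

## PIBF: prediction intervals with boosted forests (the paper's new method).
fit_pibf <- pibf(y ~ ., traindata = train, testdata = test, alpha = 0.05)

## rfpi(): one of the Roy-and-Larocque-type variants, selected through the
## splitting rule and the interval-building method (values illustrative).
fit_rfpi <- rfpi(y ~ ., traindata = train, testdata = test, alpha = 0.05,
                 split_rule = "ls", pi_method = "quant")

## Both calls are expected to return, among other things, point predictions
## and lower/upper prediction-interval bounds for the test observations.
str(fit_pibf)
str(fit_rfpi)
```

The combination of splitting rule and interval-building method in rfpi() is what distinguishes the 15 variants; the values used above are only one illustrative choice.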
Related papers
- Joint Prediction Regions for time-series models [0.0]
It is an easy task to compute Joint Prediction Regions (JPRs) when the data are IID.
This project aims to implement Wolf and Wunderli's method for constructing JPRs and compare it with other methods.
arXiv Detail & Related papers (2024-05-14T02:38:49Z) - When Rigidity Hurts: Soft Consistency Regularization for Probabilistic
Hierarchical Time Series Forecasting [69.30930115236228]
Probabilistic hierarchical time-series forecasting is an important variant of time-series forecasting.
Most methods focus on point predictions and do not provide well-calibrated probabilistic forecast distributions.
We propose PROFHiT, a fully probabilistic hierarchical forecasting model that jointly models the forecast distribution of the entire hierarchy.
arXiv Detail & Related papers (2023-10-17T20:30:16Z) - Inference with Mondrian Random Forests [6.97762648094816]
We give precise bias and variance characterizations, along with a Berry-Esseen-type central limit theorem, for the Mondrian random forest regression estimator.
We present valid statistical inference methods for the unknown regression function.
Efficient and implementable algorithms are devised for both batch and online learning settings.
arXiv Detail & Related papers (2023-10-15T01:41:42Z) - Prediction Intervals in the Beta Autoregressive Moving Average Model [0.0]
Two of the proposed prediction intervals are based on approximations using the normal distribution and the quantile function of the beta distribution.
We also consider bootstrap-based prediction intervals, namely: (i) bootstrap prediction errors (BPE) interval; (ii) bias-corrected and acceleration (BCa) prediction interval; and (iii) percentile prediction interval based on the quantiles of the bootstrap-predicted values for two different bootstrapping schemes.
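As a loose illustration of the bootstrap ideas listed above, the sketch below builds a percentile-type prediction interval for an ordinary linear model by combining quantiles of bootstrap predictions with resampled residuals. It is a generic baseline under simplified assumptions, not the beta ARMA constructions (BPE, BCa, percentile) studied in that paper.

```r
## Generic sketch of a percentile bootstrap prediction interval
## (plain linear regression, not the paper's beta ARMA setting).
set.seed(3)
n <- 200
dat <- data.frame(x = runif(n))
dat$y <- 1 + 2 * dat$x + rnorm(n, sd = 0.4)
newx <- data.frame(x = c(0.25, 0.75))

B <- 1000
boot_pred <- replicate(B, {
  idx <- sample(n, replace = TRUE)
  fit <- lm(y ~ x, data = dat[idx, ])
  ## Add a resampled residual so the interval targets a new observation,
  ## not just the mean response.
  predict(fit, newdata = newx) + sample(resid(fit), nrow(newx), replace = TRUE)
})

## 95% percentile prediction intervals from the bootstrap draws.
t(apply(boot_pred, 1, quantile, probs = c(0.025, 0.975)))
```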
arXiv Detail & Related papers (2022-07-24T01:22:27Z) - Uncertainty estimation of pedestrian future trajectory using Bayesian
approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify the uncertainty in forecasting using Bayesian approximation, since deterministic approaches fail to capture it.
The effect of dropout weights and long-term prediction on future state uncertainty has been studied.
arXiv Detail & Related papers (2022-05-04T04:23:38Z) - CovarianceNet: Conditional Generative Model for Correct Covariance
Prediction in Human Motion Prediction [71.31516599226606]
We present a new method to correctly predict the uncertainty associated with the predicted distribution of future trajectories.
Our approach, CovarianceNet, is based on a Conditional Generative Model with Gaussian latent variables.
arXiv Detail & Related papers (2021-09-07T09:38:24Z) - Probabilistic Gradient Boosting Machines for Large-Scale Probabilistic
Regression [51.770998056563094]
Probabilistic Gradient Boosting Machines (PGBM) is a method to create probabilistic predictions with a single ensemble of decision trees.
We empirically demonstrate the advantages of PGBM compared to existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-03T08:32:13Z) - Quantifying Uncertainty in Deep Spatiotemporal Forecasting [67.77102283276409]
We describe two types of forecasting problems: regular grid-based and graph-based.
We analyze UQ methods from both the Bayesian and the frequentist points of view, casting them into a unified framework via statistical decision theory.
Through extensive experiments on real-world road network traffic, epidemics, and air quality forecasting tasks, we reveal the statistical and computational trade-offs for different UQ methods.
arXiv Detail & Related papers (2021-05-25T14:35:46Z) - Interpretable Machines: Constructing Valid Prediction Intervals with
Random Forests [0.0]
An important issue with machine learning algorithms in recent research is their lack of interpretability.
A contribution towards closing this gap for the Random Forest Regression Learner is presented here.
Several parametric and non-parametric prediction intervals are provided for Random Forest point predictions.
A thorough Monte Carlo simulation study is conducted to evaluate the performance of the proposed methods.
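A common non-parametric baseline of the same flavour, sketched below with the randomForest package, shifts the random forest point prediction by quantiles of the out-of-bag residuals. This is only an illustration of the general idea, not one of the specific intervals proposed in that paper.

```r
## Generic OOB-residual prediction interval for a random forest
## (an illustrative baseline, not the paper's specific methods).
library(randomForest)

set.seed(4)
n <- 500
dat <- data.frame(x1 = runif(n), x2 = runif(n))
dat$y <- 3 * dat$x1 - 2 * dat$x2 + rnorm(n, sd = 0.5)
train <- dat[1:400, ]
test  <- dat[401:500, ]

fit <- randomForest(y ~ ., data = train, ntree = 500)

## Out-of-bag predictions for the training rows give honest residuals.
oob_resid <- train$y - fit$predicted
q <- quantile(oob_resid, probs = c(0.025, 0.975))

## Shift the test-set point predictions by the residual quantiles.
pred  <- predict(fit, newdata = test)
lower <- pred + q[1]
upper <- pred + q[2]
mean(test$y >= lower & test$y <= upper)  # empirical coverage on the test set
```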
arXiv Detail & Related papers (2021-03-09T23:05:55Z) - Prediction intervals for Deep Neural Networks [0.0]
We adapt the randomized trees method originally developed for random forests to construct ensembles of neural networks.
The extra randomness introduced into the ensemble reduces the variance of the predictions and yields gains in out-of-sample accuracy.
arXiv Detail & Related papers (2020-10-08T15:11:28Z)
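The ensemble idea in that last entry can be illustrated generically: fit several small neural networks on bootstrap resamples, with extra randomness coming from the resampling and the random weight initializations, and read a naive interval off the spread of the ensemble predictions. The sketch below uses the nnet package and is only an illustration of the general idea, not the construction in that paper.

```r
## Generic sketch: naive prediction intervals from an ensemble of neural
## networks fit on bootstrap resamples (not the paper's exact procedure).
library(nnet)

set.seed(2)
n <- 400
dat <- data.frame(x = runif(n, -3, 3))
dat$y <- sin(dat$x) + rnorm(n, sd = 0.2)
train <- dat[1:300, ]
test  <- dat[301:400, ]

B <- 50  # ensemble size
preds <- sapply(seq_len(B), function(b) {
  boot <- train[sample(nrow(train), replace = TRUE), ]
  fit  <- nnet(y ~ x, data = boot, size = 5, linout = TRUE,
               trace = FALSE, maxit = 300)
  as.numeric(predict(fit, newdata = test))
})

## Point prediction: ensemble mean; naive 90% interval: ensemble quantiles.
## (Real methods also account for residual noise, which this ignores.)
point <- rowMeans(preds)
lower <- apply(preds, 1, quantile, probs = 0.05)
upper <- apply(preds, 1, quantile, probs = 0.95)
head(data.frame(point, lower, upper))
```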
This list is automatically generated from the titles and abstracts of the papers on this site.