Model Selection for Time Series Forecasting: Empirical Analysis of
Different Estimators
- URL: http://arxiv.org/abs/2104.00584v1
- Date: Thu, 1 Apr 2021 16:08:25 GMT
- Title: Model Selection for Time Series Forecasting: Empirical Analysis of
Different Estimators
- Authors: Vitor Cerqueira, Luis Torgo, Carlos Soares
- Abstract summary: We compare a set of estimation methods for model selection in time series forecasting tasks.
We empirically found that the accuracy of the estimators for selecting the best solution is low.
Some factors, such as the sample size, are important in the relative performance of the estimators.
- Score: 1.6328866317851185
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Evaluating predictive models is a crucial task in predictive analytics. This
process is especially challenging with time series data where the observations
show temporal dependencies. Several studies have analysed how different
performance estimation methods compare with each other for approximating the
true loss incurred by a given forecasting model. However, these studies do not
address how the estimators behave for model selection: the ability to select
the best solution among a set of alternatives. We address this issue and
compare a set of estimation methods for model selection in time series
forecasting tasks. We attempt to answer two main questions: (i) how often do
the estimators select the best possible model; and (ii) what is the
performance loss when they do not. We empirically found that the accuracy of
the estimators for selecting the best solution is low, and the overall
forecasting performance loss associated with the model selection process ranges
from 1.2% to 2.3%. We also discovered that some factors, such as the sample
size, are important in the relative performance of the estimators.
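To make the comparison concrete, below is a minimal, hypothetical sketch (not the paper's exact protocol): it selects among autoregressive candidates of different lag orders using two common performance estimators for time series (a single temporal holdout and TimeSeriesSplit cross-validation), then reports whether each estimator picks the model that is truly best on an unseen test period and the relative performance loss when it does not. The synthetic series, Ridge-based candidates, and split sizes are illustrative assumptions, not choices made in the paper.

```python
# Minimal, hypothetical sketch -- NOT the paper's exact protocol.
# Candidates: linear autoregressions with different lag orders (assumption).
# Estimators compared: single temporal holdout vs. TimeSeriesSplit CV (assumption).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(1)

# Synthetic series with trend, seasonality and noise (illustrative only).
n = 400
t = np.arange(n)
y = 0.02 * t + np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.4, size=n)

test_size = 80                 # final evaluation period, never used for selection
dev = y[:-test_size]           # development period available to the estimators
candidate_lags = [2, 6, 12]    # candidate models = autoregressions of these orders


def make_lagged(series, n_lags):
    """Build a one-step-ahead regression problem from the previous n_lags values."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]


def true_test_mae(n_lags):
    """Fit on the development period; one-step-ahead MAE over the test period."""
    X_dev, y_dev = make_lagged(dev, n_lags)
    X_all, y_all = make_lagged(y, n_lags)
    model = Ridge().fit(X_dev, y_dev)
    return mean_absolute_error(y_all[-test_size:], model.predict(X_all[-test_size:]))


def holdout_estimate(n_lags):
    """Estimator 1: single temporal holdout inside the development period."""
    X, y_dev = make_lagged(dev, n_lags)
    cut = int(0.7 * len(y_dev))
    model = Ridge().fit(X[:cut], y_dev[:cut])
    return mean_absolute_error(y_dev[cut:], model.predict(X[cut:]))


def tscv_estimate(n_lags):
    """Estimator 2: averaged MAE over TimeSeriesSplit folds (growing train window)."""
    X, y_dev = make_lagged(dev, n_lags)
    scores = []
    for tr, va in TimeSeriesSplit(n_splits=5).split(X):
        model = Ridge().fit(X[tr], y_dev[tr])
        scores.append(mean_absolute_error(y_dev[va], model.predict(X[va])))
    return float(np.mean(scores))


# "Oracle" selection: the candidate that is actually best on the test period.
true_losses = {k: true_test_mae(k) for k in candidate_lags}
oracle = min(true_losses, key=true_losses.get)

for name, estimator in [("holdout", holdout_estimate), ("tscv", tscv_estimate)]:
    estimates = {k: estimator(k) for k in candidate_lags}
    picked = min(estimates, key=estimates.get)
    # (i) did the estimator pick the best model?  (ii) relative loss when it did not.
    loss_pct = 100 * (true_losses[picked] - true_losses[oracle]) / true_losses[oracle]
    print(f"{name}: picked {picked} lags (oracle: {oracle}), "
          f"performance loss vs. oracle = {loss_pct:.2f}%")
```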
Related papers
- Model Assessment and Selection under Temporal Distribution Shift [1.024113475677323]
We develop an adaptive rolling window approach to estimate the generalization error of a given model.
We also integrate pairwise comparisons into a single-elimination tournament, achieving near-optimal model selection from a collection of candidates.
arXiv Detail & Related papers (2024-02-13T18:54:08Z)
- In Search of Insights, Not Magic Bullets: Towards Demystification of the Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z)
- Empirical Analysis of Model Selection for Heterogeneous Causal Effect Estimation [24.65301562548798]
We study the problem of model selection in causal inference, specifically for conditional average treatment effect (CATE) estimation.
We conduct an empirical analysis to benchmark the surrogate model selection metrics introduced in the literature, as well as the novel ones introduced in this work.
arXiv Detail & Related papers (2022-11-03T16:26:06Z)
- Post-Selection Confidence Bounds for Prediction Performance [2.28438857884398]
In machine learning, the selection of a promising model from a potentially large number of competing models and the assessment of its generalization performance are critical tasks.
We propose an algorithm for computing valid lower confidence bounds for multiple models that have been selected based on their prediction performance on the evaluation set.
arXiv Detail & Related papers (2022-10-24T13:28:43Z)
- Multi-Objective Model Selection for Time Series Forecasting [9.473440847947492]
We present a benchmark, evaluating 7 classical and 6 deep learning forecasting methods on 44 datasets.
We leverage the benchmark evaluations to learn good defaults that consider multiple objectives such as accuracy and latency.
By learning a mapping from forecasting models to performance metrics, we show that our method PARETOSELECT is able to accurately select models.
arXiv Detail & Related papers (2022-02-17T07:40:15Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
- Models, Pixels, and Rewards: Evaluating Design Trade-offs in Visual Model-Based Reinforcement Learning [109.74041512359476]
We study a number of design decisions for the predictive model in visual MBRL algorithms.
We find that a range of design decisions that are often considered crucial, such as the use of latent spaces, have little effect on task performance.
We show how this phenomenon is related to exploration and how some of the lower-scoring models on standard benchmarks will perform the same as the best-performing models when trained on the same training data.
arXiv Detail & Related papers (2020-12-08T18:03:21Z)
- A Worrying Analysis of Probabilistic Time-series Models for Sales Forecasting [10.690379201437015]
Probabilistic time-series models have become popular in the forecasting field, as they help to make optimal decisions under uncertainty.
We analyze the performance of three prominent probabilistic time-series models for sales forecasting.
arXiv Detail & Related papers (2020-11-21T03:31:23Z)
- Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)