Forecasting with sktime: Designing sktime's New Forecasting API and
Applying It to Replicate and Extend the M4 Study
- URL: http://arxiv.org/abs/2005.08067v2
- Date: Mon, 8 Jun 2020 17:57:30 GMT
- Title: Forecasting with sktime: Designing sktime's New Forecasting API and
Applying It to Replicate and Extend the M4 Study
- Authors: Markus Löning, Franz Király
- Abstract summary: We present a new open-source framework for forecasting in Python.
Our framework forms part of sktime, a more general machine learning toolbox for time series with scikit-learn compatible interfaces for different learning tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new open-source framework for forecasting in Python. Our
framework forms part of sktime, a more general machine learning toolbox for
time series with scikit-learn compatible interfaces for different learning
tasks. Our new framework provides dedicated forecasting algorithms and tools to
build, tune and evaluate composite models. We use sktime to both replicate and
extend key results from the M4 forecasting study. In particular, we further
investigate the potential of simple off-the-shelf machine learning approaches
for univariate forecasting. Our main results are that simple hybrid approaches
can boost the performance of statistical models, and that simple pure
approaches can achieve competitive performance on the hourly data set,
outperforming the statistical algorithms and coming close to the M4 winner.
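For orientation, the kind of workflow such a framework enables can be sketched as follows. This is a minimal, illustrative example written against a recent sktime release, so import paths and class names (e.g. make_reduction, temporal_train_test_split) may differ from the exact API version described in the paper, and the hybrid pipeline only mirrors the general shape of the composites evaluated in the M4 experiments rather than their exact configuration.

```python
# A minimal sketch of a univariate forecasting workflow with sktime.
# Written against a recent sktime release: import paths and class names
# may differ from the API version described in the paper.
from sklearn.ensemble import RandomForestRegressor

from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.compose import TransformedTargetForecaster, make_reduction
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.transformations.series.detrend import Deseasonalizer, Detrender

# Univariate monthly series; hold out the last 36 observations as a test set.
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)

# Statistical baseline: repeat the last observed seasonal cycle.
naive = NaiveForecaster(strategy="last", sp=12)
naive.fit(y_train)
y_pred_naive = naive.predict(fh)

# Simple hybrid composite: deseasonalize and detrend, then reduce the
# forecasting task to tabular regression with an off-the-shelf regressor.
hybrid = TransformedTargetForecaster(
    steps=[
        ("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
        ("detrend", Detrender()),
        ("forecast", make_reduction(
            RandomForestRegressor(n_estimators=100, random_state=1),
            window_length=12,
            strategy="recursive",
        )),
    ]
)
hybrid.fit(y_train)
y_pred_hybrid = hybrid.predict(fh)

# Symmetric MAPE (sMAPE), one of the M4 evaluation metrics.
print("naive :", mean_absolute_percentage_error(y_test, y_pred_naive, symmetric=True))
print("hybrid:", mean_absolute_percentage_error(y_test, y_pred_hybrid, symmetric=True))
```

Composites of this general shape, seasonal adjustment followed by reduction of forecasting to regression, are broadly the kind of "simple hybrid approaches" the abstract refers to, though the configurations used in the M4 replication differ in detail.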
Related papers
- Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a time series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context.
We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters.
Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings.
(arXiv: 2024-10-24)
- Learning Augmentation Policies from A Model Zoo for Time Series Forecasting [58.66211334969299]
We introduce AutoTSAug, a learnable data augmentation method based on reinforcement learning.
By augmenting the marginal samples with a learnable policy, AutoTSAug substantially improves forecasting performance.
(arXiv: 2024-09-10)
- Chronos: Learning the Language of Time Series [79.38691251254173]
Chronos is a framework for pretrained probabilistic time series models.
We show that Chronos models can leverage time series data from diverse domains to improve zero-shot accuracy on unseen forecasting tasks.
(arXiv: 2024-03-12)
- AutoGluon-TimeSeries: AutoML for Probabilistic Time Series Forecasting [80.14147131520556]
AutoGluon-TimeSeries is an open-source AutoML library for probabilistic time series forecasting.
It generates accurate point and quantile forecasts with just 3 lines of Python code (see the sketch after this list).
(arXiv: 2023-08-10)
- EAMDrift: An interpretable self retrain model for time series [0.0]
We present EAMDrift, a novel method that combines forecasts from multiple individual predictors by weighting each prediction according to a performance metric.
EAMDrift is designed to automatically adapt to out-of-distribution patterns in data and identify the most appropriate models to use at each moment.
Our study on real-world datasets shows that EAMDrift outperforms individual baseline models by 20% and achieves accuracy comparable to non-interpretable ensemble models.
(arXiv: 2023-05-31)
- Cluster-and-Conquer: A Framework For Time-Series Forecasting [94.63501563413725]
We propose a three-stage framework for forecasting high-dimensional time-series data.
Our framework is highly general, allowing for any time-series forecasting and clustering method to be used in each step.
When instantiated with simple linear autoregressive models, we are able to achieve state-of-the-art results on several benchmark datasets.
(arXiv: 2021-10-26)
- Sparse MoEs meet Efficient Ensembles [49.313497379189315]
We study the interplay between two popular classes of models: ensembles of neural networks and sparse mixtures of experts (sparse MoEs).
We present Efficient Ensemble of Experts (E^3), a scalable and simple ensemble of sparse MoEs that takes the best of both classes of models, while using up to 45% fewer FLOPs than a deep ensemble.
(arXiv: 2021-10-07)
- Ensembles of Randomized NNs for Pattern-based Time Series Forecasting [0.0]
We propose an ensemble forecasting approach based on randomized neural networks.
A pattern-based representation of time series makes the proposed approach suitable for forecasting time series with multiple seasonality.
Case studies conducted on four real-world forecasting problems verified the effectiveness and superior performance of the proposed ensemble forecasting approach.
(arXiv: 2021-07-08)
- An Accurate and Fully-Automated Ensemble Model for Weekly Time Series Forecasting [9.617563440471928]
We propose a forecasting method in this domain, leveraging state-of-the-art forecasting techniques.
We consider different meta-learning architectures, algorithms, and base model pools.
Our proposed method consistently outperforms a set of benchmarks and state-of-the-art weekly forecasting models.
(arXiv: 2020-10-16)
- Supervised learning from noisy observations: Combining machine-learning techniques with data assimilation [0.6091702876917281]
We show how to optimally combine forecast models and their inherent uncertainty with incoming noisy observations.
We show that the obtained forecast model has remarkably good forecast skill while being computationally cheap once trained.
Going beyond the task of forecasting, we show that our method can be used to generate reliable ensembles for probabilistic forecasting as well as to learn effective model closure in multi-scale systems.
(arXiv: 2020-07-14)
- For2For: Learning to forecast from forecasts [1.6752182911522522]
This paper presents a time series forecasting framework which combines standard forecasting methods and a machine learning model.
Tested on the M4 competition dataset, this approach outperforms all submissions for quarterly series, and is more accurate than all but the winning algorithm for monthly series.
(arXiv: 2020-01-14)
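Regarding the AutoGluon-TimeSeries entry above, a hypothetical sketch of the advertised three-line workflow is given below; the package layout follows the autogluon.timeseries documentation, while the file path, data layout, and prediction length are illustrative assumptions.

```python
# Hypothetical sketch of AutoGluon-TimeSeries' advertised three-line workflow.
# "train.csv" and prediction_length=48 are illustrative assumptions; the data
# is expected in long format with item_id, timestamp, and target columns.
from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor

train_data = TimeSeriesDataFrame.from_path("train.csv")
predictor = TimeSeriesPredictor(prediction_length=48).fit(train_data)
forecasts = predictor.predict(train_data)  # point forecasts plus quantile forecasts
```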