Forecasting with sktime: Designing sktime's New Forecasting API and
Applying It to Replicate and Extend the M4 Study
- URL: http://arxiv.org/abs/2005.08067v2
- Date: Mon, 8 Jun 2020 17:57:30 GMT
- Title: Forecasting with sktime: Designing sktime's New Forecasting API and
Applying It to Replicate and Extend the M4 Study
- Authors: Markus Löning, Franz Király
- Abstract summary: We present a new open-source framework for forecasting in Python.
Our framework forms part of sktime, a more general machine learning toolbox for time series with scikit-learn compatible interfaces for different learning tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new open-source framework for forecasting in Python. Our
framework forms part of sktime, a more general machine learning toolbox for
time series with scikit-learn compatible interfaces for different learning
tasks. Our new framework provides dedicated forecasting algorithms and tools to
build, tune and evaluate composite models. We use sktime to both replicate and
extend key results from the M4 forecasting study. In particular, we further
investigate the potential of simple off-the-shelf machine learning approaches
for univariate forecasting. Our main results are that simple hybrid approaches
can boost the performance of statistical models, and that simple pure
approaches can achieve competitive performance on the hourly data set,
outperforming the statistical algorithms and coming close to the M4 winner.
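The workflow the abstract describes can be sketched in a few lines. The snippet below uses sktime's documented entry points (a statistical forecaster plus a simple deseasonalize-and-reduce "hybrid" of the kind the M4 replication studies); exact module paths and metric names have shifted across sktime releases, so treat the imports and arguments as assumptions rather than the paper's exact code.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sktime.datasets import load_airline
from sktime.forecasting.compose import TransformedTargetForecaster, make_reduction
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.theta import ThetaForecaster
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.transformations.series.detrend import Deseasonalizer

# Univariate series with yearly seasonality (monthly airline passengers).
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = np.arange(1, len(y_test) + 1)  # relative forecasting horizon

# A plain statistical forecaster.
theta = ThetaForecaster(sp=12)  # sp = seasonal periodicity
theta.fit(y_train)
y_pred = theta.predict(fh)

# A simple "hybrid" composite: deseasonalize, then reduce forecasting to
# regression with an off-the-shelf scikit-learn estimator.
hybrid = TransformedTargetForecaster([
    ("deseasonalize", Deseasonalizer(sp=12)),
    ("forecast", make_reduction(KNeighborsRegressor(), window_length=15,
                                strategy="recursive")),
])
hybrid.fit(y_train)
y_pred_hybrid = hybrid.predict(fh)

# sMAPE-style evaluation, as used in the M4 study.
print(mean_absolute_percentage_error(y_test, y_pred, symmetric=True))
print(mean_absolute_percentage_error(y_test, y_pred_hybrid, symmetric=True))
```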
Related papers
- Benchmarking Time Series Forecasting Models: From Statistical Techniques to Foundation Models in Real-World Applications [0.0]
Time series forecasting is essential for operational intelligence in the hospitality industry.
This study evaluates the performance of statistical, machine learning (ML), deep learning, and foundation models in forecasting hourly sales over a 14-day horizon.
arXiv Detail & Related papers (2025-02-05T17:30:31Z) - EasyTime: Time Series Forecasting Made Easy [35.66163191201942]
We show how EasyTime simplifies the use of time series forecasting methods.
EasyTime enables one-click evaluation of new forecasting methods.
It provides an Automated Ensemble module that combines promising forecasting methods to yield superior forecasting accuracy.
arXiv Detail & Related papers (2024-12-23T14:22:02Z) - Enhancing Foundation Models for Time Series Forecasting via Wavelet-based Tokenization [74.3339999119713]
We develop a wavelet-based tokenizer that allows models to learn complex representations directly in the space of time-localized frequencies.
Our method first scales and decomposes the input time series, then thresholds and quantizes the wavelet coefficients, and finally pre-trains an autoregressive model to forecast coefficients for the forecast horizon.
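As a rough illustration of the described pipeline (scale, decompose, threshold, quantize), the sketch below uses the PyWavelets library; the wavelet, decomposition level, threshold, and bin count are arbitrary assumptions, and this is not the paper's tokenizer.

```python
import numpy as np
import pywt

def wavelet_tokenize(y, wavelet="db4", level=3, n_bins=256, thresh=0.1):
    y = (y - y.mean()) / (y.std() + 1e-8)              # scale the input
    coeffs = pywt.wavedec(y, wavelet, level=level)     # multi-level decomposition
    flat = np.concatenate([pywt.threshold(c, thresh, mode="soft") for c in coeffs])
    # Quantize coefficients into integer tokens for an autoregressive model.
    edges = np.linspace(flat.min(), flat.max(), n_bins - 1)
    return np.digitize(flat, edges)

series = np.sin(np.linspace(0, 20, 512)) + np.random.randn(512) * 0.1
print(wavelet_tokenize(series)[:10])
```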
arXiv Detail & Related papers (2024-12-06T18:22:59Z) - Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the influence of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that machine unlearning techniques do not hold up in such a challenging setting.
arXiv Detail & Related papers (2024-10-30T17:20:10Z) - Chronos: Learning the Language of Time Series [79.38691251254173]
Chronos is a framework for pretrained probabilistic time series models.
We show that Chronos models can leverage time series data from diverse domains to improve zero-shot accuracy on unseen forecasting tasks.
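A minimal zero-shot usage sketch, assuming the open-source chronos-forecasting package; the checkpoint name, call signature, and returned tensor shape follow the project's published examples and may differ across versions.

```python
import numpy as np
import torch
from chronos import ChronosPipeline

pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-small",   # pretrained checkpoint (assumption)
    device_map="cpu",
    torch_dtype=torch.float32,
)

# Any univariate history works zero-shot -- no fitting on the target series.
context = torch.tensor(np.sin(np.linspace(0, 12, 200)), dtype=torch.float32)
samples = pipeline.predict(context, prediction_length=24)  # [series, samples, horizon]
low, median, high = np.quantile(samples[0].numpy(), [0.1, 0.5, 0.9], axis=0)
print(median[:5])
```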
arXiv Detail & Related papers (2024-03-12T16:53:54Z) - AutoGluon-TimeSeries: AutoML for Probabilistic Time Series Forecasting [80.14147131520556]
AutoGluon-TimeSeries is an open-source AutoML library for probabilistic time series forecasting.
It generates accurate point and quantile forecasts with just 3 lines of Python code.
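The "few lines of Python" pattern referred to here can be sketched with AutoGluon's documented TimeSeriesDataFrame and TimeSeriesPredictor entry points; the synthetic data, column names, and defaults below are illustrative assumptions.

```python
import pandas as pd
from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor

# Long-format data: one row per (item_id, timestamp) with a "target" column.
df = pd.DataFrame({
    "item_id": ["A"] * 200,
    "timestamp": pd.date_range("2024-01-01", periods=200, freq="h"),
    "target": range(200),
})
train_data = TimeSeriesDataFrame.from_data_frame(
    df, id_column="item_id", timestamp_column="timestamp"
)

predictor = TimeSeriesPredictor(prediction_length=24).fit(train_data)
predictions = predictor.predict(train_data)  # point + quantile forecasts
```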
arXiv Detail & Related papers (2023-08-10T13:28:59Z) - Cluster-and-Conquer: A Framework For Time-Series Forecasting [94.63501563413725]
We propose a three-stage framework for forecasting high-dimensional time-series data.
Our framework is highly general, allowing for any time-series forecasting and clustering method to be used in each step.
When instantiated with simple linear autoregressive models, we are able to achieve state-of-the-art results on several benchmark datasets.
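One simplified reading of the cluster-then-forecast idea, instantiated with linear autoregressive models as mentioned above; this is an assumption-laden illustration, not the paper's exact three-stage procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
Y = rng.standard_normal((50, 120)).cumsum(axis=1)  # 50 series, 120 time steps

# Stage 1 (illustrative): group similar series.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(Y)

def lagged(y, p=4):
    # Build AR(p) design matrix and targets from one series.
    X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
    return X, y[p:]

# Stages 2-3 (illustrative): pool lagged windows per cluster, fit one linear
# AR model per cluster, then produce a one-step-ahead forecast per series.
forecasts = np.empty(len(Y))
for c in np.unique(labels):
    Xs, ys = zip(*(lagged(y) for y in Y[labels == c]))
    model = LinearRegression().fit(np.vstack(Xs), np.concatenate(ys))
    for i in np.where(labels == c)[0]:
        forecasts[i] = model.predict(Y[i, -4:].reshape(1, -1))[0]
```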
arXiv Detail & Related papers (2021-10-26T20:41:19Z) - Sparse MoEs meet Efficient Ensembles [49.313497379189315]
We study the interplay of two popular classes of such models: ensembles of neural networks and sparse mixtures of experts (sparse MoEs).
We present Efficient Ensemble of Experts (E^3), a scalable and simple ensemble of sparse MoEs that takes the best of both classes of models, while using up to 45% fewer FLOPs than a deep ensemble.
arXiv Detail & Related papers (2021-10-07T11:58:35Z) - Ensembles of Randomized NNs for Pattern-based Time Series Forecasting [0.0]
We propose an ensemble forecasting approach based on randomized neural networks.
A pattern-based representation of time series makes the proposed approach suitable for forecasting time series with multiple seasonality.
Case studies conducted on four real-world forecasting problems verified the effectiveness and superior performance of the proposed ensemble forecasting approach.
arXiv Detail & Related papers (2021-07-08T20:13:50Z) - An Accurate and Fully-Automated Ensemble Model for Weekly Time Series
Forecasting [9.617563440471928]
We propose a forecasting method for weekly time series, leveraging state-of-the-art forecasting techniques.
We consider different meta-learning architectures, algorithms, and base model pools.
Our proposed method consistently outperforms a set of benchmarks and state-of-the-art weekly forecasting models.
arXiv Detail & Related papers (2020-10-16T04:29:09Z) - For2For: Learning to forecast from forecasts [1.6752182911522522]
This paper presents a time series forecasting framework which combines standard forecasting methods and a machine learning model.
Tested on the M4 competition dataset, this approach outperforms all submissions for quarterly series, and is more accurate than all but the winning algorithm for monthly series.
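A hedged sketch of the "learn from forecasts" idea: forecasts from standard methods become features for a machine-learning combiner. The base methods and the combiner below are illustrative choices, not the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
y = np.sin(np.arange(300) * 2 * np.pi / 12) + rng.normal(scale=0.2, size=300)

def base_forecasts(history):
    # One-step-ahead forecasts from three standard methods.
    naive = history[-1]
    seasonal_naive = history[-12]
    drift = history[-1] + (history[-1] - history[0]) / (len(history) - 1)
    return [naive, seasonal_naive, drift]

# Build a training set of (base forecasts -> realized value) pairs.
X = np.array([base_forecasts(y[:t]) for t in range(24, len(y))])
targets = y[24:]

combiner = RandomForestRegressor(n_estimators=100, random_state=0)
combiner.fit(X[:-1], targets[:-1])            # hold out the last step
print(combiner.predict(X[-1:]), targets[-1])  # combined one-step forecast
```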
arXiv Detail & Related papers (2020-01-14T03:06:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.