Hyperparameter Tuning MLPs for Probabilistic Time Series Forecasting
- URL: http://arxiv.org/abs/2403.04477v1
- Date: Thu, 7 Mar 2024 13:22:25 GMT
- Title: Hyperparameter Tuning MLPs for Probabilistic Time Series Forecasting
- Authors: Kiran Madhusudhanan, Shayan Jawed, Lars Schmidt-Thieme
- Abstract summary: Time series forecasting attempts to predict future events by analyzing past trends and patterns.
Although well researched, certain critical aspects pertaining to the use of deep learning in time series forecasting remain ambiguous.
In this work, we introduce the largest metadataset for time series forecasting to date, named TSBench, comprising 97,200 evaluations.
- Score: 6.579888565581481
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series forecasting attempts to predict future events by analyzing past
trends and patterns. Although well researched, certain critical aspects
pertaining to the use of deep learning in time series forecasting remain
ambiguous. Our research primarily focuses on examining the impact of specific
hyperparameters related to time series, such as context length and validation
strategy, on the performance of the state-of-the-art MLP model in time series
forecasting. We have conducted a comprehensive series of experiments involving
4800 configurations per dataset across 20 time series forecasting datasets, and
our findings demonstrate the importance of tuning these parameters.
Furthermore, in this work, we introduce the largest metadataset for time series
forecasting to date, named TSBench, comprising 97,200 evaluations, which is a
twentyfold increase compared to previous works in the field. Finally, we
demonstrate the utility of the created metadataset on multi-fidelity
hyperparameter optimization tasks.
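The multi-fidelity use case described above can be sketched with a successive-halving loop over a tabulated metadataset; the configuration table, fidelity grid, and losses below are invented for illustration and are not TSBench data:

```python
import random

# Hypothetical tabulated metadataset: each configuration maps a fidelity
# (e.g. number of training epochs) to a validation loss, mimicking how a
# benchmark lets HPO methods look up results instead of retraining.
random.seed(0)
FIDELITIES = [1, 3, 9, 27]
table = {
    cfg: {f: random.uniform(0.5, 1.5) / (1 + 0.1 * f) for f in FIDELITIES}
    for cfg in range(16)
}

def successive_halving(table, fidelities):
    """Keep the best-scoring half of the configurations at each fidelity."""
    survivors = list(table)
    for f in fidelities:
        survivors.sort(key=lambda c: table[c][f])
        survivors = survivors[: max(1, len(survivors) // 2)]
    return survivors[0]

best = successive_halving(table, FIDELITIES)
```

Because every (configuration, fidelity) pair is precomputed, the whole search reduces to table lookups, which is what makes such metadatasets useful for benchmarking multi-fidelity HPO methods cheaply.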
Related papers
- GIFT-Eval: A Benchmark For General Time Series Forecasting Model Evaluation [90.53485251837235]
GIFT-Eval is a pioneering benchmark aimed at promoting evaluation across diverse datasets.
GIFT-Eval encompasses 28 datasets over 144,000 time series and 177 million data points.
We also provide a non-leaking pretraining dataset containing approximately 230 billion data points.
arXiv Detail & Related papers (2024-10-14T11:29:38Z)
- Metadata Matters for Time Series: Informative Forecasting with Transformers [70.38241681764738]
We propose a Metadata-informed Time Series Transformer (MetaTST) for time series forecasting.
To tackle the unstructured nature of metadata, MetaTST formalizes them into natural languages by pre-designed templates.
A Transformer encoder is employed to communicate series and metadata tokens, which can extend series representations by metadata information.
arXiv Detail & Related papers (2024-10-04T11:37:55Z)
- Optimal starting point for time series forecasting [1.9937737230710553]
We introduce a novel approach called Optimal Starting Point Time Series Forecast (OSP-TSP).
By adjusting the sequence length with XGBoost and LightGBM models, the proposed approach can determine the optimal starting point (OSP) of the time series.
Empirical results indicate that predictions based on the OSP-TSP approach consistently outperform those using the complete dataset.
arXiv Detail & Related papers (2024-09-25T11:51:00Z)
- Learning Graph Structures and Uncertainty for Accurate and Calibrated Time-series Forecasting [65.40983982856056]
We introduce STOIC, which leverages correlations between time series to learn the underlying structure among them and to provide well-calibrated, accurate forecasts.
Over a wide range of benchmark datasets, STOIC provides 16% more accurate and better-calibrated forecasts.
arXiv Detail & Related papers (2024-07-02T20:14:32Z)
- Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How [62.467716468917224]
We propose a methodology that jointly searches for the optimal pretrained model and the hyperparameters for finetuning it.
Our method transfers knowledge about the performance of many pretrained models on a series of datasets.
We empirically demonstrate that our resulting approach can quickly select an accurate pretrained model for a new dataset.
arXiv Detail & Related papers (2023-06-06T16:15:26Z)
- Conformal Prediction Bands for Two-Dimensional Functional Time Series [0.0]
Time-evolving surfaces can be modeled as two-dimensional functional time series, exploiting the tools of functional data analysis.
The main focus revolves around Conformal Prediction, a versatile non-parametric paradigm used to quantify uncertainty in prediction problems.
A probabilistic forecasting scheme for two-dimensional functional time series is presented, while providing an extension of Functional Autoregressive Processes of order one to this setting.
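The conformal prediction paradigm the paper builds on can be illustrated in the simplest scalar case; this is a toy split-conformal sketch under an exchangeability assumption, not the paper's two-dimensional functional construction, and the naive zero forecast is a placeholder:

```python
import numpy as np

rng = np.random.default_rng(42)

# Calibration data and a (deliberately naive) point forecast of zero.
y_cal = rng.normal(0.0, 1.0, size=500)
y_hat = np.zeros_like(y_cal)
scores = np.abs(y_cal - y_hat)          # nonconformity scores

# Conformal quantile at miscoverage level alpha, with the finite-sample
# (n + 1) correction.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction band for a new point: forecast +/- q, covering the truth
# with probability >= 1 - alpha under exchangeability.
lower, upper = -q, q
```

The same recipe generalizes by replacing the absolute residual with a nonconformity score suited to surfaces, which is the extension the paper develops.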
arXiv Detail & Related papers (2022-07-27T17:23:14Z)
- Optimal Latent Space Forecasting for Large Collections of Short Time Series Using Temporal Matrix Factorization [0.0]
It is a common practice to evaluate multiple methods and choose one of these methods or an ensemble for producing the best forecasts.
We propose a framework for forecasting short high-dimensional time series data by combining low-rank temporal matrix factorization and optimal model selection on latent time series.
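The core idea of forecasting in a low-rank latent space can be sketched as follows; dimensions, the SVD-based factorization, and the per-factor AR(1) forecaster are illustrative stand-ins, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# N short series sharing k latent temporal factors: forecasting the latent
# series and projecting back yields forecasts for all N series at once.
N, T, k = 50, 40, 2
F_true = rng.normal(size=(N, k))
X_true = np.cumsum(rng.normal(size=(k, T)), axis=1)   # smooth latent series
Y = F_true @ X_true + 0.01 * rng.normal(size=(N, T))

# Truncated SVD gives the rank-k factorization Y ~ F @ X.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
F, X = U[:, :k] * s[:k], Vt[:k, :]

# Fit an AR(1) per latent series via least squares, forecast one step ahead.
coef = np.sum(X[:, 1:] * X[:, :-1], axis=1) / np.sum(X[:, :-1] ** 2, axis=1)
x_next = coef * X[:, -1]
y_next = F @ x_next          # one-step-ahead forecast for all N series
```

Only k latent series need forecasting models instead of N, which is what makes the latent-space route attractive for large collections of short series.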
arXiv Detail & Related papers (2021-12-15T11:39:21Z)
- Cluster-and-Conquer: A Framework For Time-Series Forecasting [94.63501563413725]
We propose a three-stage framework for forecasting high-dimensional time-series data.
Our framework is highly general, allowing for any time-series forecasting and clustering method to be used in each step.
When instantiated with simple linear autoregressive models, we are able to achieve state-of-the-art results on several benchmark datasets.
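The three-stage structure (cluster, forecast per cluster, map back to members) can be sketched minimally; the nearest-seed clustering rule, the AR(1) forecaster, and all data below are illustrative placeholders, not the paper's instantiation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two groups of series built from sine and cosine templates plus noise.
N, T = 20, 60
base = np.vstack([np.sin(np.arange(T) / 5), np.cos(np.arange(T) / 5)])
labels_true = np.arange(N) % 2
Y = base[labels_true] + 0.05 * rng.normal(size=(N, T))

# Stage 1: assign each series to the nearest of two seed series (crude clustering).
seeds = Y[:2]
labels = np.argmin([[np.linalg.norm(y - s) for s in seeds] for y in Y], axis=1)

# Stage 2: AR(1) one-step forecast of each cluster's mean series.
forecasts = {}
for c in (0, 1):
    m = Y[labels == c].mean(axis=0)
    phi = np.dot(m[1:], m[:-1]) / np.dot(m[:-1], m[:-1])
    forecasts[c] = phi * m[-1]

# Stage 3: each series inherits its cluster's forecast.
y_next = np.array([forecasts[c] for c in labels])
```

Any clustering method in stage 1 and any forecaster in stage 2 can be swapped in, which is the generality the framework claims.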
arXiv Detail & Related papers (2021-10-26T20:41:19Z)
- Feature-weighted Stacking for Nonseasonal Time Series Forecasts: A Case Study of the COVID-19 Epidemic Curves [0.0]
We investigate ensembling techniques in forecasting and examine their potential for use with nonseasonal time series.
We propose late data fusion, using a stacked ensemble of two forecasting models and two meta-features that prove their predictive power during a preliminary forecasting stage.
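The late-fusion idea can be sketched with two toy base forecasters combined by weights learned from a preliminary stage; the persistence and mean models, the inverse-error weighting, and the data are all invented for illustration:

```python
import numpy as np

# A short nonseasonal series; the last point is held out as the target.
y = np.array([10.0, 12.0, 11.0, 13.0, 12.5, 14.0])
train, target = y[:-1], y[-1]

pred_persist = train[-1]      # base model 1: last observed value
pred_mean = train.mean()      # base model 2: historical mean

# Preliminary stage: one-step errors of each base model on the last
# training point act as simple meta-features for the stack.
e1 = abs(train[-1] - train[-2])
e2 = abs(train[-1] - train[:-1].mean())

# Inverse-error weights fuse the two base forecasts.
w1, w2 = 1 / (e1 + 1e-9), 1 / (e2 + 1e-9)
fused = (w1 * pred_persist + w2 * pred_mean) / (w1 + w2)
```

The fused forecast leans toward whichever base model performed better in the preliminary stage, which is the essence of stacking with error-derived meta-features.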
arXiv Detail & Related papers (2021-08-19T14:44:46Z)
- Monash Time Series Forecasting Archive [6.0617755214437405]
We present a comprehensive time series forecasting archive containing 20 publicly available time series datasets from varied domains.
We characterise the datasets, and identify similarities and differences among them, by conducting a feature analysis.
We present the performance of a set of standard baseline forecasting methods over all datasets across eight error metrics.
arXiv Detail & Related papers (2021-05-14T04:49:58Z)
- Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
arXiv Detail & Related papers (2020-02-21T13:48:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.