ForecastPFN: Synthetically-Trained Zero-Shot Forecasting
- URL: http://arxiv.org/abs/2311.01933v1
- Date: Fri, 3 Nov 2023 14:17:11 GMT
- Title: ForecastPFN: Synthetically-Trained Zero-Shot Forecasting
- Authors: Samuel Dooley, Gurnoor Singh Khurana, Chirag Mohapatra, Siddartha
Naidu, Colin White
- Abstract summary: ForecastPFN is the first zero-shot forecasting model trained purely on a novel synthetic data distribution.
We show that zero-shot predictions made by ForecastPFN are more accurate and faster than state-of-the-art forecasting methods.
- Score: 16.12148632541671
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The vast majority of time-series forecasting approaches require a substantial
training dataset. However, many real-life forecasting applications have very
few initial observations, sometimes just 40 or fewer. Thus, the
applicability of most forecasting methods is restricted in data-sparse
commercial applications. While there is recent work in the setting of very
limited initial data (so-called `zero-shot' forecasting), its performance is
inconsistent depending on the data used for pretraining. In this work, we take
a different approach and devise ForecastPFN, the first zero-shot forecasting
model trained purely on a novel synthetic data distribution. ForecastPFN is a
prior-data fitted network, trained to approximate Bayesian inference, which can
make predictions on a new time series dataset in a single forward pass. Through
extensive experiments, we show that zero-shot predictions made by ForecastPFN
are more accurate and faster than state-of-the-art forecasting methods,
even when the other methods are allowed to train on hundreds of additional
in-distribution data points.
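As a rough illustration of the workflow the abstract describes, here is a minimal Python sketch: a toy synthetic prior (multiplicative trend, seasonality and noise) and a zero-shot forecast produced in a single forward pass with no fitting on the new series. The generator, the `predict` interface, and the naive-seasonal stand-in model are illustrative assumptions, not ForecastPFN's actual data prior or API.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_series(length=120, period=7):
    """Toy synthetic prior: multiplicative trend, seasonality and noise.
    ForecastPFN's actual prior is richer; this is only illustrative."""
    t = np.arange(length)
    trend = 1.0 + 0.005 * rng.normal() * t
    season = 1.0 + 0.1 * np.sin(2 * np.pi * t / period + rng.uniform(0, 2 * np.pi))
    noise = 1.0 + 0.05 * rng.normal(size=length)
    return trend * season * noise

class NaiveSeasonalStandIn:
    """Stand-in for a pretrained PFN so the sketch runs end to end: it simply
    repeats the last observed seasonal cycle. The real model is a transformer
    trained once, offline, on many series drawn from the synthetic prior."""
    def __init__(self, period=7):
        self.period = period
    def predict(self, history, query_len):
        cycle = history[-self.period:]
        reps = int(np.ceil(query_len / self.period))
        return np.tile(cycle, reps)[:query_len]

def zero_shot_forecast(model, history, horizon):
    # Single forward pass on the new series: no gradient updates, no refitting.
    return model.predict(history, horizon)

history = synthetic_series()      # pretend this is a short real-world series
model = NaiveSeasonalStandIn()    # pretend this is the pretrained network
print(zero_shot_forecast(model, history, horizon=14))
```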
Related papers
- Learning Augmentation Policies from A Model Zoo for Time Series Forecasting [58.66211334969299]
We introduce AutoTSAug, a learnable data augmentation method based on reinforcement learning.
By augmenting the marginal samples with a learnable policy, AutoTSAug substantially improves forecasting performance.
arXiv Detail & Related papers (2024-09-10T07:34:19Z)
- Forecasting with Deep Learning: Beyond Average of Average of Average Performance [0.393259574660092]
Current practices for evaluating and comparing forecasting models focus on summarising performance into a single score.
We propose a novel framework for evaluating models from multiple perspectives.
We show the advantages of this framework by comparing a state-of-the-art deep learning approach with classical forecasting techniques.
arXiv Detail & Related papers (2024-06-24T12:28:22Z)
- Enhancing Mean-Reverting Time Series Prediction with Gaussian Processes: Functional and Augmented Data Structures in Financial Forecasting [0.0]
We explore the application of Gaussian Processes (GPs) for predicting mean-reverting time series with an underlying structure.
GPs offer the potential to forecast not just a point prediction but the entire probability distribution over a future trajectory (a minimal GP sketch appears at the end of this list).
This is particularly beneficial in financial contexts, where accurate predictions alone may not suffice if incorrect volatility assessments lead to capital losses.
arXiv Detail & Related papers (2024-02-23T06:09:45Z)
- Loss Shaping Constraints for Long-Term Time Series Forecasting [79.3533114027664]
We present a Constrained Learning approach for long-term time series forecasting that respects a user-defined upper bound on the loss at each time-step.
We propose a practical Primal-Dual algorithm to tackle it and demonstrate that it achieves competitive average performance on time series benchmarks while shaping the errors across the predicted window (a schematic primal-dual update appears at the end of this list).
arXiv Detail & Related papers (2024-02-14T18:20:44Z)
- RobustTSF: Towards Theory and Design of Robust Time Series Forecasting with Anomalies [28.59935971037066]
We develop methods to automatically learn a robust forecasting model from contaminated data.
Based on our analyses, we propose a simple and efficient algorithm to learn a robust forecasting model.
arXiv Detail & Related papers (2024-02-03T05:13:09Z)
- Improving Event Time Prediction by Learning to Partition the Event Time Space [13.5391816206237]
Recently developed survival analysis methods improve upon existing approaches by predicting the probability of event occurrence in each of a number of pre-specified (discrete) time intervals.
In clinical settings with limited available data, it is often preferable to judiciously partition the event time space into a limited number of intervals well suited to the prediction task at hand.
We show that in two simulated datasets, we are able to recover intervals that match the underlying generative model.
We then demonstrate improved prediction performance on three real-world observational datasets, including a large, newly harmonized stroke risk prediction dataset.
arXiv Detail & Related papers (2023-10-24T14:11:40Z)
- When Rigidity Hurts: Soft Consistency Regularization for Probabilistic Hierarchical Time Series Forecasting [69.30930115236228]
Probabilistic hierarchical time-series forecasting is an important variant of time-series forecasting.
Most methods focus on point predictions and do not provide well-calibrated probabilistic forecast distributions.
We propose PROFHiT, a fully probabilistic hierarchical forecasting model that jointly models the forecast distribution of the entire hierarchy (a soft-consistency sketch appears at the end of this list).
arXiv Detail & Related papers (2023-10-17T20:30:16Z)
- Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain [54.67888148566323]
We introduce three large-scale time series forecasting datasets from the cloud operations domain.
We show that our pre-trained method is a strong zero-shot baseline and benefits from further scaling in both model and dataset size.
Accompanying these datasets and results is a suite of comprehensive benchmark results comparing classical and deep learning baselines to our pre-trained method.
arXiv Detail & Related papers (2023-10-08T08:09:51Z)
- RNN with Particle Flow for Probabilistic Spatio-temporal Forecasting [30.277213545837924]
Many classical statistical models often fall short in handling the complexity and high non-linearity present in time-series data.
In this work, we consider the time-series data as a random realization from a nonlinear state-space model.
We use particle flow as the tool for approximating the posterior distribution of the states, as it is shown to be highly effective in complex, high-dimensional settings.
arXiv Detail & Related papers (2021-06-10T21:49:23Z)
- Ambiguity in Sequential Data: Predicting Uncertain Futures with Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data.
We also introduce a novel metric for ambiguous problems, which is better suited to accounting for uncertainty (a winner-takes-all loss sketch appears at the end of this list).
arXiv Detail & Related papers (2020-03-10T09:15:42Z)
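For the Gaussian-process entry above (mean-reverting financial series), a minimal sketch of producing a full predictive distribution rather than a point forecast, using scikit-learn. The kernel choice and the toy Ornstein-Uhlenbeck-style series are assumptions for illustration, not the paper's functional or augmented data structures.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Toy mean-reverting (Ornstein-Uhlenbeck-like) series as training data.
n = 200
y = np.zeros(n)
for i in range(1, n):
    y[i] = y[i - 1] + 0.1 * (0.0 - y[i - 1]) + 0.2 * rng.normal()
t = np.arange(n, dtype=float).reshape(-1, 1)

# Fit a GP on observed (time, value) pairs.
kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)

# Forecast the next 20 steps: the GP returns a mean and a per-point std,
# i.e. a full (Gaussian) predictive distribution, not just a point estimate.
t_future = np.arange(n, n + 20, dtype=float).reshape(-1, 1)
mean, std = gp.predict(t_future, return_std=True)
print(mean[:5], std[:5])
```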
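For the loss-shaping entry, a schematic of one plausible primal-dual update under the stated constraint (the loss at every forecast step bounded by a user-defined epsilon), written with a squared-error loss and a toy linear forecaster in PyTorch. This is a generic reading of the constrained formulation, not the paper's exact algorithm.

```python
import torch

def primal_dual_step(model, optimizer, lambdas, x, y, epsilon, dual_lr=0.01):
    """One primal-dual update: minimize mean loss + sum_t lambda_t * (loss_t - epsilon),
    then take a projected ascent step on the dual variables."""
    pred = model(x)                              # (batch, horizon)
    per_step = ((pred - y) ** 2).mean(dim=0)     # loss at each forecast step, (horizon,)
    lagrangian = per_step.mean() + (lambdas * (per_step - epsilon)).sum()

    optimizer.zero_grad()
    lagrangian.backward()
    optimizer.step()                             # primal descent on model weights

    with torch.no_grad():                        # dual ascent, projected onto lambda >= 0
        lambdas += dual_lr * (per_step.detach() - epsilon)
        lambdas.clamp_(min=0.0)
    return per_step.detach()

# Usage sketch: a hypothetical forecaster mapping 24 past steps to a 12-step horizon.
model = torch.nn.Linear(24, 12)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
lambdas = torch.zeros(12)                        # one dual variable per forecast step
x, y = torch.randn(32, 24), torch.randn(32, 12)
for _ in range(100):
    primal_dual_step(model, optimizer, lambdas, x, y, epsilon=1.0)
```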
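For the PROFHiT entry, the title's "soft consistency regularization" suggests penalising, rather than strictly enforcing, coherence between a parent series' forecast distribution and the aggregate of its children's. A minimal sketch under an independent-Gaussian assumption follows; the exact divergence PROFHiT uses may differ.

```python
import numpy as np

def soft_consistency_penalty(parent_mu, parent_var, child_mus, child_vars, weight=1.0):
    """Soft (penalised, not hard) hierarchical consistency, assuming independent
    Gaussian forecasts: the parent's distribution should be close to the sum of
    its children's distributions. Illustrative form, not PROFHiT's exact term."""
    agg_mu = np.sum(child_mus)
    agg_var = np.sum(child_vars)
    return weight * ((parent_mu - agg_mu) ** 2 + (parent_var - agg_var) ** 2)

# Example: one parent node with two children in the hierarchy.
penalty = soft_consistency_penalty(
    parent_mu=10.0, parent_var=2.0,
    child_mus=np.array([4.5, 5.0]), child_vars=np.array([1.0, 0.8]),
)
print(penalty)  # added to the forecasting loss instead of forcing exact coherence
```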
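For the Multiple Hypothesis Prediction entry, a minimal sketch of the relaxed winner-takes-all loss commonly used to train MHP models, where the hypothesis closest to the target receives most of the gradient. The paper's sequential extension and its novel uncertainty metric are not reproduced here.

```python
import numpy as np

def mhp_wta_loss(hypotheses, target, eps=0.05):
    """Relaxed winner-takes-all loss for Multiple Hypothesis Prediction: the best of
    K hypotheses gets weight 1 - eps, the others share eps, which keeps every
    hypothesis training. Illustrative; not the paper's exact sequential variant."""
    errors = np.array([np.mean((h - target) ** 2) for h in hypotheses])  # one error per hypothesis
    best = np.argmin(errors)
    weights = np.full(len(hypotheses), eps / max(len(hypotheses) - 1, 1))
    weights[best] = 1.0 - eps
    return float(np.sum(weights * errors))

# Example: three hypothesised future trajectories for an ambiguous sequence.
target = np.array([1.0, 1.2, 1.4])
hyps = [np.array([1.0, 1.1, 1.3]), np.array([0.2, 0.1, 0.0]), np.array([2.0, 2.2, 2.4])]
print(mhp_wta_loss(hyps, target))
```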