Time Series Forecasting Models Copy the Past: How to Mitigate
- URL: http://arxiv.org/abs/2207.13441v1
- Date: Wed, 27 Jul 2022 10:39:00 GMT
- Title: Time Series Forecasting Models Copy the Past: How to Mitigate
- Authors: Chrysoula Kosma, Giannis Nikolentzos, Nancy Xu, Michalis Vazirgiannis
- Abstract summary: In the presence of noise and uncertainty, neural network models tend to replicate the last observed value of the time series.
We propose a regularization term penalizing the replication of previously seen values.
Our results indicate that the regularization term mitigates to some extent the aforementioned problem and gives rise to more robust models.
- Score: 24.397660153755997
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series forecasting is at the core of important application domains
posing significant challenges to machine learning algorithms. Recently neural
network architectures have been widely applied to the problem of time series
forecasting. Most of these models are trained by minimizing a loss function
that measures predictions' deviation from the real values. Typical loss
functions include mean squared error (MSE) and mean absolute error (MAE). In
the presence of noise and uncertainty, neural network models tend to replicate
the last observed value of the time series, thus limiting their applicability
to real-world data. In this paper, we provide a formal definition of the above
problem and we also give some examples of forecasts where the problem is
observed. We also propose a regularization term penalizing the replication of
previously seen values. We evaluate the proposed regularization term both on
synthetic and real-world datasets. Our results indicate that the regularization
term mitigates to some extent the aforementioned problem and gives rise to more
robust models.
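The exact form of the proposed regularizer is not given in this summary, so the snippet below is only a minimal PyTorch-style sketch of the general idea: a standard MSE training loss augmented with an illustrative penalty that grows when predictions sit close to the last observed value of the input window. The penalty shape, the weight `lam`, and the stabilizer `eps` are assumptions for illustration, not the authors' formulation.

```python
# Illustrative sketch only -- NOT the paper's exact regularizer.
import torch

def copy_penalized_mse(y_pred, y_true, last_observed, lam=0.1, eps=1e-3):
    """MSE plus an assumed penalty that discourages the model from simply
    replicating the last observed value of the history window."""
    mse = torch.mean((y_pred - y_true) ** 2)
    # Penalty is large when |y_pred - last_observed| is small, i.e. when
    # the forecast is essentially a copy of the most recent observation.
    copy_penalty = torch.mean(1.0 / ((y_pred - last_observed) ** 2 + eps))
    return mse + lam * copy_penalty

# Typical use inside a training step (model and optimizer are placeholders):
# x: (batch, window) history, y: (batch, horizon) targets
# last = x[:, -1:].detach()            # the value networks tend to copy
# loss = copy_penalized_mse(model(x), y, last)
# loss.backward(); optimizer.step()
```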
Related papers
- Loss Shaping Constraints for Long-Term Time Series Forecasting [79.3533114027664]
We present a Constrained Learning approach for long-term time series forecasting that respects a user-defined upper bound on the loss at each time-step.
We propose a practical Primal-Dual algorithm to tackle it and demonstrate that it exhibits competitive average performance on time series benchmarks while shaping the errors across the predicted window.
arXiv Detail & Related papers (2024-02-14T18:20:44Z)
- Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z)
- Counterfactual Explanations for Time Series Forecasting [14.03870816983583]
We formulate the novel problem of counterfactual generation for time series forecasting, and propose an algorithm, called ForecastCF.
ForecastCF solves the problem by applying gradient-based perturbations to the original time series.
Our results show that ForecastCF outperforms the baseline in terms of counterfactual validity and data manifold closeness.
arXiv Detail & Related papers (2023-10-12T08:51:59Z)
- MPR-Net:Multi-Scale Pattern Reproduction Guided Universality Time Series Interpretable Forecasting [13.790498420659636]
Time series forecasting has received wide interest from existing research due to its broad applications and inherent challenges.
This paper proposes a forecasting model, MPR-Net. It first adaptively decomposes multi-scale historical series patterns using convolution operation, then constructs a pattern extension forecasting method based on the prior knowledge of pattern reproduction, and finally reconstructs future patterns into future series using deconvolution operation.
By leveraging the temporal dependencies present in the time series, MPR-Net not only achieves linear time complexity, but also makes the forecasting process interpretable.
arXiv Detail & Related papers (2023-07-13T13:16:01Z)
- WaveBound: Dynamic Error Bounds for Stable Time Series Forecasting [30.692056599222926]
Time series forecasting has become a critical task due to its high practicality in real-world applications.
Recent deep-learning-based approaches have shown remarkable success in time series forecasting.
However, deep networks still suffer from unstable training and overfitting.
arXiv Detail & Related papers (2022-10-25T19:58:02Z)
- Residual Correction in Real-Time Traffic Forecasting [29.93640276427495]
Deep-learning-based traffic forecasting models still fail in certain patterns, mainly in event situations.
We introduce ResCAL, a residual estimation module for traffic forecasting.
Our ResCAL calibrates the prediction of the existing models in real time by estimating future errors using previous errors and graph signals.
arXiv Detail & Related papers (2022-09-12T16:57:25Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance guided stochastic gradient descent (IGSGD) method to train models to perform inference directly from inputs containing missing values, without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Low-Rank Temporal Attention-Augmented Bilinear Network for financial time-series forecasting [93.73198973454944]
Deep learning models have led to significant performance improvements in many problems coming from different domains, including prediction problems of financial time-series data.
The Temporal Attention-Augmented Bilinear network was recently proposed as an efficient and high-performing model for Limit Order Book time-series forecasting.
In this paper, we propose a low-rank tensor approximation of the model to further reduce the number of trainable parameters and increase its speed.
arXiv Detail & Related papers (2021-07-05T10:15:23Z)
- Simultaneously Reconciled Quantile Forecasting of Hierarchically Related Time Series [11.004159006784977]
We propose a flexible nonlinear model that optimizes a quantile regression loss coupled with suitable regularization terms to maintain the consistency of forecasts across hierarchies.
The theoretical framework introduced herein can be applied to any forecasting model with an underlying differentiable loss function.
arXiv Detail & Related papers (2021-02-25T00:59:01Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Ambiguity in Sequential Data: Predicting Uncertain Futures with Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data.
We also introduce a novel metric for ambiguous problems, which is better suited to account for uncertainties.
arXiv Detail & Related papers (2020-03-10T09:15:42Z)
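For the last entry above, the Multiple Hypothesis Prediction idea it builds on can be illustrated with a generic relaxed winner-takes-all loss over K output hypotheses. This is a minimal sketch of the standard MHP-style objective, not the paper's sequential extension or its proposed ambiguity metric; the hypothesis tensor layout and the relaxation weight `eps` are assumptions.

```python
# Generic relaxed winner-takes-all (WTA) loss for K hypotheses -- an
# illustration of the MHP idea, not the paper's sequential extension.
import torch

def mhp_wta_loss(hypotheses, target, eps=0.05):
    """hypotheses: (batch, K, horizon), target: (batch, horizon)."""
    # Per-hypothesis squared error against the single ground-truth future.
    errors = torch.mean((hypotheses - target.unsqueeze(1)) ** 2, dim=-1)  # (batch, K)
    k = hypotheses.shape[1]
    # The closest hypothesis receives most of the gradient; the others
    # share a small weight eps so that all heads keep learning.
    weights = torch.full_like(errors, eps / max(k - 1, 1))
    best = errors.argmin(dim=1, keepdim=True)
    weights.scatter_(1, best, 1.0 - eps)
    return torch.mean(torch.sum(weights * errors, dim=1))
```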
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.