N-HiTS: Neural Hierarchical Interpolation for Time Series Forecasting
- URL: http://arxiv.org/abs/2201.12886v2
- Date: Wed, 2 Feb 2022 17:55:16 GMT
- Title: N-HiTS: Neural Hierarchical Interpolation for Time Series Forecasting
- Authors: Cristian Challu, Kin G. Olivares, Boris N. Oreshkin, Federico Garza,
Max Mergenthaler, Artur Dubrawski
- Abstract summary: Two common challenges afflicting long-horizon forecasting are the volatility of the predictions and their computational complexity.
We introduce N-HiTS, a model which addresses both challenges by incorporating novel hierarchical interpolation and multi-rate data sampling techniques.
We conduct an empirical evaluation demonstrating the advantages of N-HiTS over the state-of-the-art long-horizon forecasting methods.
- Score: 17.53378788483556
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent progress in neural forecasting accelerated improvements in the
performance of large-scale forecasting systems. Yet, long-horizon forecasting
remains a very difficult task. Two common challenges afflicting long-horizon
forecasting are the volatility of the predictions and their computational
complexity. In this paper, we introduce N-HiTS, a model which addresses both
challenges by incorporating novel hierarchical interpolation and multi-rate
data sampling techniques. These techniques enable the proposed method to
assemble its predictions sequentially, selectively emphasizing components with
different frequencies and scales, while decomposing the input signal and
synthesizing the forecast. We conduct an extensive empirical evaluation
demonstrating the advantages of N-HiTS over the state-of-the-art long-horizon
forecasting methods. On an array of multivariate forecasting tasks, the
proposed method provides an average accuracy improvement of 25% over the latest
Transformer architectures while reducing the computation time by an order of
magnitude. Our code is available at https://github.com/cchallu/n-hits.
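The two ideas in the abstract can be sketched numerically: multi-rate sampling pools the input window at several rates so each stack sees a different time scale, and hierarchical interpolation lets each stack emit a low-resolution forecast that is interpolated up to the full horizon before being summed. The sketch below is ours, not the authors' implementation (see their repository above); random linear maps stand in for the learned MLP blocks, and all function names are hypothetical.

```python
import numpy as np

def multi_rate_pool(x, rate):
    """Max-pool the input window: coarser views for longer-scale stacks."""
    n = (len(x) // rate) * rate
    return x[:n].reshape(-1, rate).max(axis=1)

def hierarchical_forecast(x, horizon, rates=(8, 4, 1), rng=None):
    """Sum of per-stack forecasts, each predicted at low resolution and
    linearly interpolated up to the full horizon."""
    rng = np.random.default_rng(rng)
    forecast = np.zeros(horizon)
    for rate in rates:
        pooled = multi_rate_pool(x, rate)
        n_knots = max(horizon // rate, 2)  # coarse output resolution
        # Random linear map as a stand-in for the stack's MLP block.
        W = rng.normal(scale=0.1, size=(n_knots, len(pooled)))
        knots = W @ pooled  # low-dimensional forecast coefficients
        coarse_grid = np.linspace(0, horizon - 1, n_knots)
        forecast += np.interp(np.arange(horizon), coarse_grid, knots)
    return forecast

y_hat = hierarchical_forecast(np.sin(np.arange(96) * 0.3), horizon=24, rng=0)
```

Because each stack only has to produce `horizon / rate` coefficients rather than `horizon` values, the interpolation scheme is what keeps long-horizon outputs cheap.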
Related papers
- MGCP: A Multi-Grained Correlation based Prediction Network for Multivariate Time Series [54.91026286579748]
We propose a Multi-Grained Correlations-based Prediction Network.
It simultaneously considers correlations at three levels to enhance prediction performance.
It employs adversarial training with an attention mechanism-based predictor and conditional discriminator to optimize prediction results at coarse-grained level.
arXiv Detail & Related papers (2024-05-30T03:32:44Z)
- SFANet: Spatial-Frequency Attention Network for Weather Forecasting [54.470205739015434]
Weather forecasting plays a critical role in various sectors, driving decision-making and risk management.
Traditional methods often struggle to capture the complex dynamics of meteorological systems.
We propose a novel framework designed to address these challenges and enhance the accuracy of weather prediction.
arXiv Detail & Related papers (2024-05-29T08:00:15Z)
- State Sequences Prediction via Fourier Transform for Representation Learning [111.82376793413746]
We propose State Sequences Prediction via Fourier Transform (SPF), a novel method for learning expressive representations efficiently.
We theoretically analyze the existence of structural information in state sequences, which is closely related to policy performance and signal regularity.
Experiments demonstrate that the proposed method outperforms several state-of-the-art algorithms in terms of both sample efficiency and performance.
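A toy illustration of the frequency-domain idea behind such methods is to summarize a state sequence by the magnitudes of its leading Fourier coefficients, giving a fixed-size representation that reflects the sequence's regularity. This is only a sketch of the general idea, not the SPF method itself, and all names below are ours.

```python
import numpy as np

def fourier_features(states, k=4):
    """Per-dimension magnitudes of the first k rFFT coefficients of a
    state sequence, flattened into a fixed-size feature vector."""
    spec = np.abs(np.fft.rfft(states, axis=0))  # (n_freq, state_dim)
    return spec[:k].ravel()

# Example: a 2-dimensional state trajectory of length 64.
seq = np.stack([np.sin(0.4 * np.arange(64)),
                np.cos(0.4 * np.arange(64))], axis=1)
feats = fourier_features(seq, k=4)
```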
arXiv Detail & Related papers (2023-10-24T14:47:02Z)
- CARD: Channel Aligned Robust Blend Transformer for Time Series Forecasting [50.23240107430597]
We design a special Transformer, i.e., Channel Aligned Robust Blend Transformer (CARD for short), that addresses key shortcomings of CI type Transformer in time series forecasting.
First, CARD introduces a channel-aligned attention structure that allows it to capture temporal correlations among signals.
Second, in order to efficiently utilize the multi-scale knowledge, we design a token blend module to generate tokens with different resolutions.
Third, we introduce a robust loss function for time series forecasting to alleviate the potential overfitting issue.
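The summary does not specify CARD's robust loss; as an illustrative stand-in, the Huber loss is a common robust choice for forecasting, behaving quadratically for small residuals and linearly for large ones so that outlying time steps pull the fit less than under squared error. This is a generic example, not CARD's actual loss.

```python
import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    """Mean Huber loss: quadratic inside |r| <= delta, linear outside."""
    r = np.abs(y_true - y_pred)
    quad = 0.5 * r ** 2
    lin = delta * (r - 0.5 * delta)
    return np.where(r <= delta, quad, lin).mean()
```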
arXiv Detail & Related papers (2023-05-20T05:16:31Z)
- cs-net: structural approach to time-series forecasting for high-dimensional feature space data with limited observations [1.5533753199073637]
We propose a flexible data feature extraction technique that excels in high-dimensional multivariate forecasting tasks.
Our approach was originally developed for the National Science Foundation (NSF) Algorithms for Threat Detection (ATD) 2022 Challenge.
Our models trained on the GDELT dataset finished 1st and 2nd places in the ATD sprint series and hold promise for other datasets for time series forecasting.
arXiv Detail & Related papers (2022-12-05T19:46:47Z)
- Similarity-based Feature Extraction for Large-scale Sparse Traffic Forecasting [4.295541562380963]
The NeurIPS 2022 Traffic4cast challenge is dedicated to predicting the citywide traffic states with publicly available sparse loop count data.
This technical report introduces our second-place winning solution to the extended challenge of ETA prediction.
arXiv Detail & Related papers (2022-11-13T22:19:21Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Probabilistic AutoRegressive Neural Networks for Accurate Long-range Forecasting [6.295157260756792]
We introduce the Probabilistic AutoRegressive Neural Networks (PARNN).
PARNN is capable of handling complex time series data exhibiting non-stationarity, nonlinearity, non-seasonality, long-range dependence, and chaotic patterns.
We evaluate the performance of PARNN against standard statistical, machine learning, and deep learning models, including Transformers, NBeats, and DeepAR.
arXiv Detail & Related papers (2022-04-01T17:57:36Z)
- DMIDAS: Deep Mixed Data Sampling Regression for Long Multi-Horizon Time Series Forecasting [13.458489651961106]
We develop a method to predict long-term energy prices using high-frequency healthcare and electricity price data.
We improve the prediction accuracy by 5% over state-of-the-art models, reducing the number of parameters of NBEATS by nearly 70%.
arXiv Detail & Related papers (2021-06-07T22:36:38Z)
- Stochastically forced ensemble dynamic mode decomposition for forecasting and analysis of near-periodic systems [65.44033635330604]
We introduce a novel load forecasting method in which observed dynamics are modeled as a forced linear system.
We show that its use of intrinsic linear dynamics offers a number of desirable properties in terms of interpretability and parsimony.
Results are presented for a test case using load data from an electrical grid.
arXiv Detail & Related papers (2020-10-08T20:25:52Z)
- A clustering approach to time series forecasting using neural networks: A comparative study on distance-based vs. feature-based clustering methods [1.256413718364189]
We propose various neural network architectures to forecast the time series data using the dynamic measurements.
We also investigate the importance of performing techniques such as anomaly detection and clustering on forecasting accuracy.
Our results indicate that clustering can improve the overall prediction time as well as improve the forecasting performance of the neural network.
arXiv Detail & Related papers (2020-01-27T00:31:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.