Multi-Scale Dilated Convolution Network for Long-Term Time Series Forecasting
- URL: http://arxiv.org/abs/2405.05499v2
- Date: Tue, 14 May 2024 07:52:01 GMT
- Title: Multi-Scale Dilated Convolution Network for Long-Term Time Series Forecasting
- Authors: Feifei Li, Suhan Guo, Feng Han, Jian Zhao, Furao Shen
- Abstract summary: We propose Multi-Scale Dilated Convolution Network (MSDCN) to capture the period and trend characteristics of long time series.
We design different convolution blocks with exponentially growing dilations and varying kernel sizes to sample time series data at different scales.
To validate the effectiveness of the proposed approach, we conduct experiments on eight challenging long-term time series forecasting benchmark datasets.
- Score: 17.132063819650355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate forecasting of long-term time series has important applications for decision making and planning. However, it remains challenging to capture the long-term dependencies in time series data. To better extract long-term dependencies, we propose Multi-Scale Dilated Convolution Network (MSDCN), a method that utilizes a shallow dilated convolution architecture to capture the period and trend characteristics of long time series. We design different convolution blocks with exponentially growing dilations and varying kernel sizes to sample time series data at different scales. Furthermore, we utilize a traditional autoregressive model to capture the linear relationships within the data. To validate the effectiveness of the proposed approach, we conduct experiments on eight challenging long-term time series forecasting benchmark datasets. The experimental results show that our approach outperforms the prior state-of-the-art approaches and shows significant inference speed improvements compared to several strong baseline methods.
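The abstract describes two components that combine additively: parallel convolution branches whose dilations grow exponentially block by block, and a linear autoregressive term. The PyTorch sketch below is a minimal illustration of that combination, not the authors' implementation; all module names, sizes, and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """One causal conv block; dilation grows exponentially across blocks."""
    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        # Left-pad so the output length matches the input length (causal).
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.act = nn.GELU()

    def forward(self, x):                      # x: (batch, channels, time)
        return self.act(self.conv(nn.functional.pad(x, (self.pad, 0))))

class MSDCNSketch(nn.Module):
    """Illustrative multi-scale dilated conv net with an AR (linear) head."""
    def __init__(self, n_vars, lookback, horizon,
                 kernel_sizes=(3, 5, 7), n_blocks=3):
        super().__init__()
        # One branch per kernel size; dilations 1, 2, 4, ... within a branch.
        self.branches = nn.ModuleList([
            nn.Sequential(*[DilatedBlock(n_vars, k, 2 ** i)
                            for i in range(n_blocks)])
            for k in kernel_sizes
        ])
        self.head = nn.Linear(len(kernel_sizes) * lookback, horizon)
        # Linear autoregressive component on the raw lookback window.
        self.ar = nn.Linear(lookback, horizon)

    def forward(self, x):                      # x: (batch, n_vars, lookback)
        feats = torch.cat([b(x) for b in self.branches], dim=-1)
        return self.head(feats) + self.ar(x)   # (batch, n_vars, horizon)

y = MSDCNSketch(n_vars=7, lookback=96, horizon=24)(torch.randn(2, 7, 96))
print(y.shape)  # torch.Size([2, 7, 24])
```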
Related papers
- Deep End-to-End Survival Analysis with Temporal Consistency [49.77103348208835]
We present a novel Survival Analysis algorithm designed to efficiently handle large-scale longitudinal data.
A central idea in our method is temporal consistency, a hypothesis that past and future outcomes in the data evolve smoothly over time.
Our framework uniquely incorporates temporal consistency into large datasets by providing a stable training signal.
arXiv Detail & Related papers (2024-10-09T11:37:09Z)
- Efficient High-Resolution Time Series Classification via Attention Kronecker Decomposition [17.71968215237596]
High-resolution time series classification is essential due to the increasing availability of detailed temporal data in various domains.
We propose a new time series transformer backbone (KronTime) by introducing Kronecker-decomposed attention to process such multi-level time series.
Experiments on four long time series datasets demonstrate superior classification results with improved efficiency compared to baseline methods.
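The entry does not spell out the decomposition, so the sketch below shows one common way to factorize attention over a long sequence in the Kronecker spirit: reshape length L into (n_blocks, block_len) and attend along each axis separately, reducing the quadratic cost. This is a hypothetical illustration, not KronTime's actual architecture.

```python
import torch
import torch.nn as nn

class TwoLevelAttention(nn.Module):
    """Factorized attention over a sequence reshaped to (n_blocks, block_len).

    Attending within blocks and then across blocks costs
    O(L * block_len + L * n_blocks) instead of O(L^2).
    """
    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.local = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.globl = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, x, block_len):           # x: (batch, L, dim)
        b, L, d = x.shape
        n = L // block_len                     # assume L divisible by block_len
        xb = x.reshape(b * n, block_len, d)
        xb, _ = self.local(xb, xb, xb)         # attention within each block
        xg = xb.reshape(b, n, block_len, d).transpose(1, 2)
        xg = xg.reshape(b * block_len, n, d)
        xg, _ = self.globl(xg, xg, xg)         # attention across blocks
        return xg.reshape(b, block_len, n, d).transpose(1, 2).reshape(b, L, d)

out = TwoLevelAttention(dim=64)(torch.randn(2, 1024, 64), block_len=32)
print(out.shape)  # torch.Size([2, 1024, 64])
```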
arXiv Detail & Related papers (2024-03-07T20:14:20Z)
- PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from the perspective of partial differential equations [49.80959046861793]
We present PDETime, a novel LMTF model inspired by the principles of Neural PDE solvers.
Our experimentation across seven diverse temporal real-world LMTF datasets reveals that PDETime adapts effectively to the intrinsic nature of the data.
arXiv Detail & Related papers (2024-02-25T17:39:44Z)
- Unified Long-Term Time-Series Forecasting Benchmark [0.6526824510982802]
We present a comprehensive dataset designed explicitly for long-term time-series forecasting.
We incorporate a collection of datasets obtained from diverse, dynamic systems and real-life records.
To determine the most effective model in diverse scenarios, we conduct an extensive benchmarking analysis using classical and state-of-the-art models.
Our findings reveal intriguing performance comparisons among these models, highlighting the dataset-dependent nature of model effectiveness.
arXiv Detail & Related papers (2023-09-27T18:59:00Z)
- Stecformer: Spatio-temporal Encoding Cascaded Transformer for Multivariate Long-term Time Series Forecasting [11.021398675773055]
We propose a complete solution addressing both feature extraction and target prediction.
For extraction, we design an efficient spatio-temporal encoding extractor including a semi-adaptive graph to acquire sufficient spatio-temporal information.
For prediction, we propose a Cascaded Decoding Predictor (CDP) to strengthen the correlation between different intervals.
arXiv Detail & Related papers (2023-05-25T13:00:46Z)
- Grouped self-attention mechanism for a memory-efficient Transformer [64.0125322353281]
Real-world tasks such as forecasting weather, electricity consumption, and stock prices involve predicting data that vary over time.
Time-series data are generally recorded over a long period of observation with long sequences owing to their periodic characteristics and long-range dependencies over time.
We propose two novel modules, Grouped Self-Attention (GSA) and Compressed Cross-Attention (CCA).
Our proposed model exhibits reduced computational complexity and performance comparable to or better than existing methods.
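As a rough, assumed illustration of the compression idea behind CCA (the paper's exact formulation is not given here), the sketch below shortens the key/value sequence by average pooling before standard cross-attention, shrinking the score matrix from L×L to L×(L/r).

```python
import torch
import torch.nn as nn

class CompressedCrossAttention(nn.Module):
    """Cross-attention against a pooled (compressed) key/value sequence.

    Pooling the keys/values by a factor r shrinks the attention matrix
    from (L x L) to (L x L/r). Illustrative only, not the paper's CCA.
    """
    def __init__(self, dim, n_heads=4, ratio=4):
        super().__init__()
        self.pool = nn.AvgPool1d(kernel_size=ratio, stride=ratio)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, x):                      # x: (batch, L, dim)
        kv = self.pool(x.transpose(1, 2)).transpose(1, 2)  # (batch, L/r, dim)
        out, _ = self.attn(x, kv, kv)
        return out

out = CompressedCrossAttention(dim=64)(torch.randn(2, 256, 64))
print(out.shape)  # torch.Size([2, 256, 64])
```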
arXiv Detail & Related papers (2022-10-02T06:58:49Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Time Series Forecasting with Ensembled Stochastic Differential Equations Driven by Lévy Noise [2.3076895420652965]
We use a collection of SDEs equipped with neural networks to predict the long-term trend of noisy time series.
Our contributions are, first, we use the phase-space reconstruction method to extract the intrinsic dimension of the time series data.
Second, we explore SDEs driven by $\alpha$-stable Lévy motion to model the time series data and solve the problem through neural network approximation.
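A minimal numerical sketch of the second contribution, assuming a scalar SDE and a symmetric stable law: an Euler scheme whose driving increments are drawn from scipy's levy_stable and scaled by dt^(1/alpha) rather than sqrt(dt). In the paper the drift and diffusion would be neural networks; here they are toy lambdas.

```python
import numpy as np
from scipy.stats import levy_stable

def euler_levy_sde(drift, diffusion, x0, alpha=1.8, dt=1e-2, n_steps=500,
                   seed=0):
    """Euler scheme for dX_t = drift(X_t) dt + diffusion(X_t) dL_t, where
    L_t is a symmetric alpha-stable Levy process. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    # Alpha-stable increments scale like dt**(1/alpha) (vs sqrt(dt) for Brownian).
    dL = levy_stable.rvs(alpha, beta=0.0, size=n_steps,
                         random_state=rng) * dt ** (1.0 / alpha)
    for i in range(n_steps):
        x[i + 1] = x[i] + drift(x[i]) * dt + diffusion(x[i]) * dL[i]
    return x

# Ornstein-Uhlenbeck-style toy example with heavy-tailed jumps.
path = euler_levy_sde(drift=lambda x: -0.5 * x, diffusion=lambda x: 0.3, x0=1.0)
print(path[:5])
```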
arXiv Detail & Related papers (2021-11-25T16:49:01Z)
- Instance-wise Graph-based Framework for Multivariate Time Series Forecasting [69.38716332931986]
We propose a simple yet efficient instance-wise graph-based framework to utilize the inter-dependencies of different variables at different time stamps.
The key idea of our framework is aggregating information from the historical time series of different variables to the current time series that we need to forecast.
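The aggregation step might look like the following hypothetical sketch, in which a learned, softmax-normalized variable-to-variable matrix mixes the histories of all variables into each series before projection to the forecast horizon; none of these names come from the paper.

```python
import torch
import torch.nn as nn

class CrossVariableAggregation(nn.Module):
    """Aggregate each variable's history with information from the histories
    of all other variables via a learned, softmax-normalized
    variable-to-variable weight matrix. Illustrative sketch only.
    """
    def __init__(self, n_vars, lookback, horizon):
        super().__init__()
        self.adj = nn.Parameter(torch.zeros(n_vars, n_vars))  # learned graph
        self.proj = nn.Linear(2 * lookback, horizon)

    def forward(self, x):                      # x: (batch, n_vars, lookback)
        weights = torch.softmax(self.adj, dim=-1)
        neighbors = torch.einsum('vu,but->bvt', weights, x)  # mixed histories
        return self.proj(torch.cat([x, neighbors], dim=-1))  # (batch, n_vars, horizon)

y = CrossVariableAggregation(n_vars=7, lookback=96, horizon=24)(torch.randn(2, 7, 96))
print(y.shape)  # torch.Size([2, 7, 24])
```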
arXiv Detail & Related papers (2021-09-14T07:38:35Z)
- Progressive Growing of Neural ODEs [7.558546277131641]
We propose a progressive learning paradigm of NODEs for long-term time series forecasting.
Specifically, following the principle of curriculum learning, we gradually increase the complexity of data and network capacity as training progresses.
Our experiments with both synthetic data and real traffic data (PeMS Bay Area traffic data) show that our training methodology consistently improves the performance of vanilla NODEs by over 64%.
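One way to realize the curriculum is to lengthen the forecast horizon stage by stage, as in the assumed sketch below; the paper also grows network capacity, which is omitted here, and the training-loop details are illustrative.

```python
import torch
import torch.nn as nn

def curriculum_train(model, series, lookback=64, horizons=(8, 16, 32, 64),
                     steps_per_stage=200, lr=1e-3):
    """Curriculum-style training loop: start with short forecast horizons
    and lengthen them stage by stage. Sketch only; the paper also grows
    network capacity, which is omitted here.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for horizon in horizons:                           # easy -> hard
        for _ in range(steps_per_stage):
            # One random training window per step, for brevity.
            t = torch.randint(0, len(series) - lookback - horizon, (1,)).item()
            x = series[t:t + lookback].unsqueeze(0)    # (1, lookback)
            y = series[t + lookback:t + lookback + horizon].unsqueeze(0)
            pred = model(x)[:, :horizon]               # model emits max horizon
            loss = loss_fn(pred, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

# Toy usage: a linear forecaster on a synthetic sine series.
series = torch.sin(torch.linspace(0, 60, 2000))
model = curriculum_train(nn.Linear(64, 64), series)
```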
arXiv Detail & Related papers (2020-03-08T01:15:01Z)
- Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
arXiv Detail & Related papers (2020-02-21T13:48:13Z)