Less Is More: Fast Multivariate Time Series Forecasting with Light
Sampling-oriented MLP Structures
- URL: http://arxiv.org/abs/2207.01186v1
- Date: Mon, 4 Jul 2022 04:03:00 GMT
- Title: Less Is More: Fast Multivariate Time Series Forecasting with Light
Sampling-oriented MLP Structures
- Authors: Tianping Zhang, Yizhuo Zhang, Wei Cao, Jiang Bian, Xiaohan Yi, Shun
Zheng, Jian Li
- Abstract summary: We introduce LightTS, a light deep learning architecture merely based on simple MLP-based structures.
Compared with existing state-of-the-art methods, LightTS demonstrates better performance on five of eight benchmark datasets and comparable performance on the rest.
LightTS is robust and has a much smaller variance in forecasting accuracy than previous SOTA methods in long sequence forecasting tasks.
- Score: 18.592350352298553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multivariate time series forecasting has seen widely ranging applications in
various domains, including finance, traffic, energy, and healthcare. To capture
the sophisticated temporal patterns, plenty of research studies designed
complex neural network architectures based on many variants of RNNs, GNNs, and
Transformers. However, complex models are often computationally expensive and
thus face a severe challenge in training and inference efficiency when applied
to large-scale real-world datasets. In this paper, we introduce LightTS, a
light deep learning architecture merely based on simple MLP-based structures.
The key idea of LightTS is to apply an MLP-based structure on top of two
delicate down-sampling strategies, including interval sampling and continuous
sampling, inspired by a crucial fact that down-sampling time series often
preserves the majority of its information. We conduct extensive experiments on
eight widely used benchmark datasets. Compared with the existing
state-of-the-art methods, LightTS demonstrates better performance on five of
them and comparable performance on the rest. Moreover, LightTS is highly
efficient. It uses less than 5% FLOPs compared with previous SOTA methods on
the largest benchmark dataset. In addition, LightTS is robust and has a much
smaller variance in forecasting accuracy than previous SOTA methods in long
sequence forecasting tasks.
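Below is a minimal NumPy sketch of the two down-sampling strategies described in the abstract, with a toy MLP on top. The shapes, widths, and projection scheme are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def interval_sample(x, k):
    """k subsequences taking every k-th point: row i is x[i::k].
    Each view spans the whole window at a coarser rate (global patterns)."""
    T = len(x)
    assert T % k == 0
    return x.reshape(T // k, k).T          # shape (k, T//k)

def continuous_sample(x, k):
    """T//k consecutive chunks of length k (local, short-range patterns)."""
    T = len(x)
    assert T % k == 0
    return x.reshape(T // k, k)            # shape (T//k, k)

rng = np.random.default_rng(0)
x = rng.normal(size=96)                    # one input series, T = 96
subs = interval_sample(x, k=8)             # (8, 12) down-sampled views

# Toy MLP: encode each length-12 subsequence to 4 features (ReLU layer),
# then map the flattened features to a 24-step forecast.
W1, b1 = 0.1 * rng.normal(size=(12, 4)), np.zeros(4)
W2, b2 = 0.1 * rng.normal(size=(32, 24)), np.zeros(24)
feats = np.maximum(0.0, subs @ W1 + b1).reshape(-1)   # (32,)
forecast = feats @ W2 + b2                             # (24,)
```

LightTS combines both sampled views; only the interval branch is wired up in this toy example.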
Related papers
- Test Time Learning for Time Series Forecasting [1.4605709124065924]
Test-Time Training (TTT) modules consistently outperform state-of-the-art models, including the Mamba-based TimeMachine.
Our results show significant improvements in Mean Squared Error (MSE) and Mean Absolute Error (MAE).
This work sets a new benchmark for time-series forecasting and lays the groundwork for future research in scalable, high-performance forecasting models.
arXiv Detail & Related papers (2024-09-21T04:40:08Z)
- WEITS: A Wavelet-enhanced residual framework for interpretable time series forecasting [3.1551278097133895]
WEITS is a frequency-aware deep learning framework that is highly interpretable and computationally efficient.
arXiv Detail & Related papers (2024-05-17T16:09:51Z)
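The WEITS summary above names a wavelet-enhanced residual design but gives no architectural details, so the following is only a hypothetical PyWavelets sketch of the general pattern: model the coarse wavelet approximation first, and leave the detail bands as a residual for a second stage.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
t = np.arange(256)
x = np.sin(2 * np.pi * t / 64) + 0.2 * rng.normal(size=256)

# Multi-level discrete wavelet transform: [cA2, cD2, cD1]
coeffs = pywt.wavedec(x, "db4", level=2)

# Stage 1: keep only the approximation band -> smooth trend component
trend_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
trend = pywt.waverec(trend_coeffs, "db4")[: len(x)]

# Stage 2: the detail bands remain as a residual for a second model to fit
residual = x - trend
print(f"trend var {np.var(trend):.3f}, residual var {np.var(residual):.3f}")
```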
- Spatiotemporal-Linear: Towards Universal Multivariate Time Series Forecasting [10.404951989266191]
We introduce the Spatio-Temporal-Linear (STL) framework.
STL seamlessly integrates time-embedded and spatially-informed bypasses to augment the Linear-based architecture (illustrated after this entry).
Empirical evidence highlights STL's prowess, outpacing both Linear and Transformer benchmarks across varied observation and prediction durations and datasets.
arXiv Detail & Related papers (2023-12-22T17:46:34Z)
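The exact wiring of STL's bypasses is not specified in the summary, so this is a guessed NumPy illustration of the additive-bypass pattern: a shared linear backbone corrected by terms derived from a per-node (spatial) embedding and a time embedding. All names and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, H, D = 7, 96, 24, 8           # nodes, lookback, horizon, embed dim

x = rng.normal(size=(N, L))         # multivariate history: one row per node
W = 0.05 * rng.normal(size=(L, H))  # shared Linear backbone: lookback -> horizon

node_emb = 0.1 * rng.normal(size=(N, D))  # "spatially-informed" bypass input
P_space  = 0.1 * rng.normal(size=(D, H))  # projects node identity to horizon

tod = np.zeros(D); tod[3] = 1.0           # "time-embedded" bypass input
P_time = 0.1 * rng.normal(size=(D, H))    # (e.g. a time-of-day slot encoding)

# Backbone prediction plus the two additive bypass corrections
y_hat = x @ W + node_emb @ P_space + tod @ P_time   # (N, H)
```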
- Frequency-domain MLPs are More Effective Learners in Time Series Forecasting [67.60443290781988]
Time series forecasting has played a key role in different industrial domains, including finance, traffic, energy, and healthcare.
Most MLP-based forecasting methods suffer from point-wise mappings and an information bottleneck.
We propose FreTS, a simple yet effective architecture built upon Frequency-domain MLPs for Time Series forecasting (a toy version follows this entry).
arXiv Detail & Related papers (2023-11-10T17:05:13Z)
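A rough illustration of an MLP operating in the frequency domain (not FreTS's actual blocks, whose details the summary omits): FFT the input window, run an MLP over the real and imaginary spectral features, and transform back.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=96)                  # input window, T = 96

X = np.fft.rfft(x)                       # complex spectrum, T//2 + 1 = 49 bins
z = np.concatenate([X.real, X.imag])     # (98,) real-valued spectral features

# One hidden ReLU layer on spectral features (weights are placeholders)
W1, b1 = 0.05 * rng.normal(size=(98, 64)), np.zeros(64)
W2, b2 = 0.05 * rng.normal(size=(64, 98)), np.zeros(98)
z_out = np.maximum(0.0, z @ W1 + b1) @ W2 + b2

X_out = z_out[:49] + 1j * z_out[49:]     # reassemble a complex spectrum
y = np.fft.irfft(X_out, n=96)            # back to the time domain
```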
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
- TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series [57.4208255711412]
Building on copula theory, we propose a simplified objective for the recently introduced transformer-based attentional copulas (TACTiS).
We show that the resulting model has significantly better training dynamics and achieves state-of-the-art performance across diverse real-world forecasting tasks.
arXiv Detail & Related papers (2023-10-02T16:45:19Z)
- Contextualizing MLP-Mixers Spatiotemporally for Urban Data Forecast at Scale [54.15522908057831]
We propose an adapted version of the computationally efficient MLP-Mixer for STTD forecasting at scale.
Our results surprisingly show that this simple-yet-effective solution can rival SOTA baselines when tested on several traffic benchmarks.
Our findings contribute to the exploration of simple-yet-effective models for real-world STTD forecasting (a generic Mixer block is sketched after this entry).
arXiv Detail & Related papers (2023-07-04T05:19:19Z)
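For reference, a generic MLP-Mixer block, the structure being adapted (not the paper's STTD-specific variant; layer norms are omitted): one MLP mixes information across tokens, a second across channels, each with a residual connection.

```python
import numpy as np

def mlp(z, W1, W2):
    return np.maximum(0.0, z @ W1) @ W2      # one hidden ReLU layer

rng = np.random.default_rng(0)
T, C, Hd = 12, 8, 32                         # tokens, channels, hidden width
x = rng.normal(size=(T, C))

# Token-mixing: transpose so the MLP acts along the token axis
Wt1, Wt2 = 0.1 * rng.normal(size=(T, Hd)), 0.1 * rng.normal(size=(Hd, T))
x = x + mlp(x.T, Wt1, Wt2).T                 # residual connection

# Channel-mixing: the MLP acts along the channel axis
Wc1, Wc2 = 0.1 * rng.normal(size=(C, Hd)), 0.1 * rng.normal(size=(Hd, C))
x = x + mlp(x, Wc1, Wc2)                     # residual connection
```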
- To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis [50.31589712761807]
Large language models (LLMs) are notoriously token-hungry during pre-training, and high-quality text data on the web is approaching its scaling limit for LLMs.
We investigate the consequences of repeating pre-training data, revealing that the model is susceptible to overfitting.
We further examine the key factors contributing to multi-epoch degradation, finding that significant factors include dataset size, model parameters, and training objectives.
arXiv Detail & Related papers (2023-05-22T17:02:15Z)
- Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- Spatial-Temporal Identity: A Simple yet Effective Baseline for Multivariate Time Series Forecasting [17.84296081495185]
We explore the critical factors of multivariate time series (MTS) forecasting and design a model that is as powerful as spatial-temporal graph neural networks (STGNNs), but more concise and efficient.
We identify the indistinguishability of samples in both spatial and temporal dimensions as a key bottleneck, and propose a simple yet effective baseline for MTS forecasting.
These results suggest that we can design efficient and effective models as long as they solve the indistinguishability of samples, without being limited to STGNNs (a schematic version follows this entry).
arXiv Detail & Related papers (2022-08-10T09:25:43Z)
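The key idea in the summary above is concrete enough to sketch: attach spatial and temporal identity embeddings to each sample before a plain MLP, so that samples from different series and times become distinguishable. The embedding tables below are randomly initialized placeholders, and the 5-minute time-of-day resolution is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, D = 7, 12, 16                    # series, lookback, embedding dim

x = rng.normal(size=(N, L))            # one sample: N series, L past steps
E_space = rng.normal(size=(N, D))      # one identity embedding per series
E_tod   = rng.normal(size=(288, D))    # time-of-day slots (5-min resolution)
E_dow   = rng.normal(size=(7, D))      # day-of-week embeddings

tod, dow = 100, 3                      # temporal identities of this sample
z = np.concatenate(
    [x, E_space, np.tile(E_tod[tod], (N, 1)), np.tile(E_dow[dow], (N, 1))],
    axis=1,
)                                      # (N, L + 3*D)
# Each row now encodes *where* and *when* it occurs, which a downstream MLP
# can exploit to separate otherwise indistinguishable samples.
```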
- An Experimental Review on Deep Learning Architectures for Time Series Forecasting [0.0]
We provide the most extensive deep learning study for time series forecasting.
Among all studied models, the results show that long short-term memory (LSTM) and convolutional networks (CNN) are the best alternatives.
CNNs achieve comparable performance with less variability of results under different parameter configurations, while also being more efficient.
arXiv Detail & Related papers (2021-03-22T17:58:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.