AverageTime: Enhance Long-Term Time Series Forecasting with Simple Averaging
- URL: http://arxiv.org/abs/2412.20727v3
- Date: Wed, 02 Apr 2025 09:14:55 GMT
- Title: AverageTime: Enhance Long-Term Time Series Forecasting with Simple Averaging
- Authors: Gaoxiang Zhao, Li Zhou, Xiaoqiang Wang
- Abstract summary: Long-term time series forecasting focuses on leveraging historical data to predict future trends. The core challenge lies in effectively modeling dependencies both within sequences and channels. Our research proposes a new approach for capturing sequence and channel dependencies: AverageTime.
- Score: 6.125620036017928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Long-term time series forecasting focuses on leveraging historical data to predict future trends. The core challenge lies in effectively modeling dependencies both within sequences and channels. Convolutional Neural Networks and Linear models often excel in sequence modeling but frequently fall short in capturing complex channel dependencies. In contrast, Transformer-based models, with their attention mechanisms applied to both sequences and channels, have demonstrated strong predictive performance. Our research proposes a new approach for capturing sequence and channel dependencies: AverageTime, an exceptionally simple yet effective structure. By employing mixed channel embedding and averaging operations, AverageTime separately captures correlations for sequences and channels through channel mapping and result averaging. In addition, we integrate clustering methods to further accelerate the model's training process. Experiments on real-world datasets demonstrate that AverageTime surpasses state-of-the-art models in predictive performance while maintaining efficiency comparable to lightweight linear models. This provides a new and effective framework for modeling long time series.
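Since no reference implementation is given here, the following is a minimal PyTorch sketch of the idea as the abstract describes it: a mixed channel embedding, a shared linear map along the sequence, and averaging of the per-embedding results. All module and parameter names (AverageTimeSketch, n_embeds, and so on) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class AverageTimeSketch(nn.Module):
    """Hypothetical reading of AverageTime: mix channels into several
    embeddings, forecast each with a shared linear map over the sequence
    dimension, then average the results. Not the authors' implementation."""

    def __init__(self, n_channels: int, seq_len: int, pred_len: int, n_embeds: int = 4):
        super().__init__()
        # Mixed channel embedding: each embedding is a learned mixture of channels.
        self.channel_mix = nn.Linear(n_channels, n_embeds * n_channels)
        # Sequence dependency: one linear map from history to horizon.
        self.seq_map = nn.Linear(seq_len, pred_len)
        self.n_embeds = n_embeds

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_channels)
        b, t, c = x.shape
        # Channel mapping: (batch, seq_len, n_embeds, n_channels)
        mixed = self.channel_mix(x).view(b, t, self.n_embeds, c)
        # Sequence mapping applied along the time axis.
        mixed = mixed.permute(0, 2, 3, 1)          # (b, n_embeds, c, t)
        preds = self.seq_map(mixed)                # (b, n_embeds, c, pred_len)
        # Result averaging: fuse the per-embedding forecasts.
        return preds.mean(dim=1).permute(0, 2, 1)  # (b, pred_len, c)
```

For example, AverageTimeSketch(7, 96, 24) maps a (batch, 96, 7) history to a (batch, 24, 7) forecast with only two linear layers, which is consistent with the abstract's claim of efficiency comparable to lightweight linear models.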
Related papers
- Merging Models on the Fly Without Retraining: A Sequential Approach to Scalable Continual Model Merging [75.93960998357812]
Deep model merging represents an emerging research direction that combines multiple fine-tuned models to harness their capabilities across different tasks and domains.
Current model merging techniques focus on merging all available models simultaneously, with weight matrices-based methods being the predominant approaches.
We propose a training-free projection-based continual merging method that processes models sequentially.
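As a rough illustration of sequential, training-free merging, here is a running-average sketch in Python; the paper's projection step is deliberately omitted, and the function name and dict-of-tensors interface are our own assumptions.

```python
def merge_sequentially(running: dict, new_model: dict, n_merged: int) -> dict:
    """Fold one more fine-tuned model (a state dict of tensors) into a
    running merged state dict via a plain running average. The paper's
    projection step is omitted in this simplification."""
    return {
        name: (w * n_merged + new_model[name]) / (n_merged + 1)
        for name, w in running.items()
    }
```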
arXiv Detail & Related papers (2025-01-16T13:17:24Z)
- WaveGNN: Modeling Irregular Multivariate Time Series for Accurate Predictions [3.489870763747715]
Real-world time series often exhibit irregularities such as misaligned timestamps, missing entries, and variable sampling rates.
Existing approaches often rely on imputation, which can introduce biases.
We present WaveGNN, a novel framework designed to embed irregularly sampled time series data for accurate predictions.
arXiv Detail & Related papers (2024-12-14T00:03:44Z)
- UmambaTSF: A U-shaped Multi-Scale Long-Term Time Series Forecasting Method Using Mamba [7.594115034632109]
We propose UmambaTSF, a novel long-term time series forecasting framework.
It integrates multi-scale feature extraction capabilities of U-shaped encoder-decoder multilayer perceptrons (MLP) with Mamba's long sequence representation.
UmambaTSF achieves state-of-the-art performance and excellent generality on widely used benchmark datasets.
arXiv Detail & Related papers (2024-10-15T04:56:43Z)
- sTransformer: A Modular Approach for Extracting Inter-Sequential and Temporal Information for Time-Series Forecasting [6.434378359932152]
We review and categorize existing Transformer-based models into two main types: (1) modifications to the model structure and (2) modifications to the input data.
We propose $\textbf{sTransformer}$, which introduces the Sequence and Temporal Convolutional Network (STCN) to fully capture both sequential and temporal information.
We compare our model with linear models and existing forecasting models on long-term time-series forecasting, achieving new state-of-the-art results.
arXiv Detail & Related papers (2024-08-19T06:23:41Z)
- SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion [59.96233305733875]
Time series forecasting plays a crucial role in various fields such as finance, traffic management, energy, and healthcare.
Several methods capture channel correlations with mechanisms such as attention or mixers.
This paper presents an efficient MLP-based model, the Series-cOre Fused Time Series forecaster (SOFTS).
arXiv Detail & Related papers (2024-04-22T14:06:35Z)
- From Similarity to Superiority: Channel Clustering for Time Series Forecasting [61.96777031937871]
We develop a novel and adaptable Channel Clustering Module (CCM).
CCM dynamically groups channels characterized by intrinsic similarities and leverages cluster information instead of individual channel identities.
CCM can boost the performance of CI and CD models by an average margin of 2.4% and 7.2% on long-term and short-term forecasting, respectively.
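A hedged sketch of the general recipe (cluster channels by trajectory similarity, then share a forecasting head within each cluster); the standardized-trajectory features, the k-means choice, and the function names below are our assumptions, not CCM's exact design:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_channels(series: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Group channels of a (time, channels) array by similarity of their
    standardized trajectories; returns a cluster id per channel."""
    z = (series - series.mean(axis=0)) / (series.std(axis=0) + 1e-8)
    # Each channel's standardized trajectory is its feature vector.
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(z.T)

# Channels in the same cluster can then share one forecasting head,
# a middle ground between per-channel (CI) and fully shared (CD) parameters.
```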
arXiv Detail & Related papers (2024-03-31T02:46:27Z)
- Rough Transformers for Continuous and Efficient Time-Series Modelling [46.58170057001437]
Time-series data in real-world medical settings typically exhibit long-range dependencies and are observed at non-uniform intervals.
We introduce the Rough Transformer, a variation of the Transformer model which operates on continuous-time representations of input sequences.
We find that Rough Transformers consistently outperform their vanilla attention counterparts while obtaining the benefits of Neural ODE-based models.
arXiv Detail & Related papers (2024-03-15T13:29:45Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
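One way to picture this unification is to cut series into patch "tokens" and pose every task as next-patch generation; the patchify helper below is our illustrative assumption, not Timer's actual tokenizer:

```python
import torch

def patchify(series: torch.Tensor, patch_len: int) -> torch.Tensor:
    """Cut a (batch, time) series into a sequence of patch 'tokens' so that
    forecasting, imputation, and anomaly detection can all be posed as
    next-token (next-patch) generation. The patch length is an assumption."""
    b, t = series.shape
    usable = (t // patch_len) * patch_len  # drop the trailing remainder
    return series[:, :usable].reshape(b, -1, patch_len)  # (b, n_patches, patch_len)
```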
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Parsimony or Capability? Decomposition Delivers Both in Long-term Time Series Forecasting [46.63798583414426]
Long-term time series forecasting (LTSF) represents a critical frontier in time series analysis.
Our study demonstrates, through both analytical and empirical evidence, that decomposition is key to containing excessive model inflation.
Remarkably, by tailoring decomposition to the intrinsic dynamics of time series data, our proposed model outperforms existing benchmarks.
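As a generic sketch of the decompose-then-forecast recipe (a moving-average trend/seasonal split with separate linear heads, in the spirit of DLinear; the paper's tailored decomposition is not reproduced here):

```python
import torch
import torch.nn as nn

class DecompForecaster(nn.Module):
    """Split the input into trend (moving average) and seasonal (residual)
    parts and forecast each with its own linear map. A generic sketch,
    not the paper's tailored decomposition."""

    def __init__(self, seq_len: int, pred_len: int, kernel: int = 25):
        super().__init__()
        self.avg = nn.AvgPool1d(kernel, stride=1, padding=kernel // 2,
                                count_include_pad=False)
        self.trend_head = nn.Linear(seq_len, pred_len)
        self.season_head = nn.Linear(seq_len, pred_len)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, seq_len)
        trend = self.avg(x)
        seasonal = x - trend
        return self.trend_head(trend) + self.season_head(seasonal)
```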
arXiv Detail & Related papers (2024-01-22T13:15:40Z)
- Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs [50.25683648762602]
We introduce Koopman VAE, a new generative framework that is based on a novel design for the model prior.
Inspired by Koopman theory, we represent the latent conditional prior dynamics using a linear map.
KoVAE outperforms state-of-the-art GAN and VAE methods across several challenging synthetic and real-world time series generation benchmarks.
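The central design choice, a linear map governing the latent prior dynamics, can be sketched as follows; this shows only the prior, with encoder, decoder, and training objective omitted:

```python
import torch
import torch.nn as nn

class LinearLatentPrior(nn.Module):
    """Koopman-inspired prior: latent states evolve by one shared linear
    map, z_{t+1} = A z_t + noise. A minimal sketch of the prior only."""

    def __init__(self, latent_dim: int):
        super().__init__()
        self.A = nn.Linear(latent_dim, latent_dim, bias=False)

    def rollout(self, z0: torch.Tensor, steps: int, noise_std: float = 0.1) -> torch.Tensor:
        zs, z = [z0], z0
        for _ in range(steps - 1):
            z = self.A(z) + noise_std * torch.randn_like(z)
            zs.append(z)
        return torch.stack(zs, dim=1)  # (batch, steps, latent_dim)
```

Because the dynamics are a single matrix, its eigenvalues can be inspected or constrained, which is what makes such priors attractive for analysis.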
arXiv Detail & Related papers (2023-10-04T07:14:43Z)
- Time Series Continuous Modeling for Imputation and Forecasting with Implicit Neural Representations [15.797295258800638]
We introduce a novel modeling approach for time series imputation and forecasting, tailored to address the challenges often encountered in real-world data.
Our method relies on a continuous-time-dependent model of the series' evolution dynamics.
A modulation mechanism, driven by a meta-learning algorithm, allows adaptation to unseen samples and extrapolation beyond observed time-windows.
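A minimal sketch of the underlying ingredient, an implicit neural representation mapping a timestamp to a value, which can be queried at arbitrary (including unobserved) times; the meta-learned modulation is omitted:

```python
import torch
import torch.nn as nn

class TimeINR(nn.Module):
    """Continuous model of a series: value = MLP(t). Fit on observed
    (t, y) pairs, then query any timestamp for imputation or forecasting.
    The paper's meta-learned modulations are not shown here."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (n,) timestamps -> (n,) predicted values
        return self.net(t.unsqueeze(-1)).squeeze(-1)

# Fit on observed timestamps, then evaluate at missing ones, e.g.:
# model = TimeINR(); loss = ((model(t_obs) - y_obs) ** 2).mean()
```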
arXiv Detail & Related papers (2023-06-09T13:20:04Z)
- Towards Long-Term Time-Series Forecasting: Feature, Pattern, and Distribution [57.71199089609161]
Long-term time-series forecasting (LTTF) has become a pressing demand in many applications, such as wind power supply planning.
Transformer models have been adopted to deliver high prediction capacity, although their self-attention mechanism is computationally expensive.
We propose an efficient Transformer-based model, named Conformer, which differentiates itself from existing methods for LTTF in three aspects.
arXiv Detail & Related papers (2023-01-05T13:59:29Z)
- Contextually Enhanced ES-dRNN with Dynamic Attention for Short-Term Load Forecasting [1.1602089225841632]
The proposed model is composed of two simultaneously trained tracks: the context track and the main track.
The RNN architecture consists of multiple recurrent layers stacked with hierarchical dilations and equipped with recently proposed attentive recurrent cells.
The model produces both point forecasts and predictive intervals.
arXiv Detail & Related papers (2022-12-18T07:42:48Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
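Our reading of the closed-form update, sketched below: the state is a time-dependent sigmoid blend of two learned branches, so no numerical ODE solver is needed. The linear heads are simplified stand-ins for the paper's parameterization.

```python
import torch
import torch.nn as nn

class CfCCellSketch(nn.Module):
    """Sketch of a closed-form continuous-depth update: a learned gate
    sigma(-f(z) * t) blends a short-horizon and a long-horizon branch.
    Head architectures here are simplified assumptions."""

    def __init__(self, in_dim: int, hidden: int):
        super().__init__()
        self.f = nn.Linear(in_dim + hidden, hidden)  # controls the time gate
        self.g = nn.Linear(in_dim + hidden, hidden)  # short-horizon branch
        self.h = nn.Linear(in_dim + hidden, hidden)  # long-horizon branch

    def forward(self, x: torch.Tensor, inp: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # x: hidden state, inp: input, t: elapsed time (broadcastable)
        z = torch.cat([inp, x], dim=-1)
        gate = torch.sigmoid(-self.f(z) * t)
        return gate * self.g(z) + (1.0 - gate) * self.h(z)
```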
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Stacking VAE with Graph Neural Networks for Effective and Interpretable Time Series Anomaly Detection [5.935707085640394]
We propose a stacking variational auto-encoder (VAE) model with graph neural networks for effective and interpretable time-series anomaly detection.
We show that our proposed model outperforms the strong baselines on three public datasets with considerable improvements.
arXiv Detail & Related papers (2021-05-18T09:50:00Z)
- Stochastically forced ensemble dynamic mode decomposition for forecasting and analysis of near-periodic systems [65.44033635330604]
We introduce a novel load forecasting method in which observed dynamics are modeled as a forced linear system.
We show that its use of intrinsic linear dynamics offers a number of desirable properties in terms of interpretability and parsimony.
Results are presented for a test case using load data from an electrical grid.
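A minimal least-squares sketch of fitting a forced linear system x_{t+1} = A x_t + B u_t from snapshots (a DMD-with-control style estimate); the paper's stochastic forcing and ensembling are omitted:

```python
import numpy as np

def fit_forced_linear_system(X: np.ndarray, U: np.ndarray):
    """Least-squares fit of x_{t+1} = A x_t + B u_t from snapshots.
    X: (time, state_dim) observations, U: (time, input_dim) forcing.
    A simple sketch, not the paper's ensemble method."""
    X0, X1 = X[:-1], X[1:]
    Z = np.hstack([X0, U[:-1]])                      # stacked regressors
    W, *_ = np.linalg.lstsq(Z, X1, rcond=None)       # solve Z @ W = X1
    AB = W.T
    n = X.shape[1]
    return AB[:, :n], AB[:, n:]                      # A, B
```

The appeal of such intrinsic linear dynamics is exactly what the summary states: A and B can be read off and analyzed directly, giving interpretability and parsimony.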
arXiv Detail & Related papers (2020-10-08T20:25:52Z)
- Pattern Similarity-based Machine Learning Methods for Mid-term Load Forecasting: A Comparative Study [0.0]
We use pattern similarity-based methods for forecasting monthly electricity demand expressing annual seasonality.
An integral part of the models is the time series representation using patterns of time series sequences.
We consider four such models: nearest neighbor model, fuzzy neighborhood model, kernel regression model and general regression neural network.
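A generic sketch of the shared pattern-similarity idea, shown here in its nearest-neighbor variant with normalization so that similarity reflects shape rather than level; all parameters are illustrative:

```python
import numpy as np

def knn_pattern_forecast(history: np.ndarray, query: np.ndarray,
                         horizon: int, k: int = 3) -> np.ndarray:
    """Nearest-neighbor pattern forecasting: find the k historical windows
    most similar to the query pattern and average what followed them.
    A generic sketch of the pattern-similarity idea, not any one model."""
    w = len(query)
    starts = np.arange(len(history) - w - horizon + 1)
    windows = np.stack([history[s:s + w] for s in starts])
    # Standardize windows so similarity reflects shape, not level.
    norm = lambda a: (a - a.mean(axis=-1, keepdims=True)) / (a.std(axis=-1, keepdims=True) + 1e-8)
    dists = np.linalg.norm(norm(windows) - norm(query), axis=1)
    nearest = starts[np.argsort(dists)[:k]]
    return np.mean([history[s + w:s + w + horizon] for s in nearest], axis=0)
```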
arXiv Detail & Related papers (2020-03-03T12:14:36Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)