D2Vformer: A Flexible Time Series Prediction Model Based on Time Position Embedding
- URL: http://arxiv.org/abs/2409.11024v1
- Date: Tue, 17 Sep 2024 09:39:37 GMT
- Title: D2Vformer: A Flexible Time Series Prediction Model Based on Time Position Embedding
- Authors: Xiaobao Song, Hao Wang, Liwei Deng, Yuxin He, Wenming Cao, Chi-Sing Leung
- Abstract summary: Time position embeddings capture the positional information of time steps, often serving as auxiliary inputs to enhance the predictive capabilities of time series models.
This paper proposes a novel model called D2Vformer to handle scenarios where the predicted sequence is not adjacent to the input sequence.
D2Vformer surpasses state-of-the-art methods in both fixed-length and variable-length prediction tasks.
- Score: 10.505132550106389
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time position embeddings capture the positional information of time steps, often serving as auxiliary inputs to enhance the predictive capabilities of time series models. However, existing models exhibit limitations in capturing intricate time positional information and effectively utilizing these embeddings. To address these limitations, this paper proposes a novel model called D2Vformer. Unlike typical prediction methods that rely on RNNs or Transformers, this approach can directly handle scenarios where the predicted sequence is not adjacent to the input sequence or where its length dynamically changes. Compared with conventional methods, D2Vformer therefore saves a significant amount of training resources. In D2Vformer, the Date2Vec module uses the timestamp information and feature sequences to generate time position embeddings. Afterward, D2Vformer introduces a new fusion block that utilizes an attention mechanism to explore the similarity in time positions between the embeddings of the input sequence and the predicted sequence, thereby generating predictions based on this similarity. Through extensive experiments on six datasets, we demonstrate that Date2Vec outperforms other time position embedding methods, and D2Vformer surpasses state-of-the-art methods in both fixed-length and variable-length prediction tasks.
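The abstract's pipeline (timestamp embedding, then attention over time-position similarity) can be sketched as a toy. This is an illustrative stand-in, not the paper's implementation: `date2vec` below is a simple fixed sinusoidal encoder substituting for the learned Date2Vec module, and `fusion_predict` shows only the similarity-attention idea; all names, shapes, and frequencies are assumptions.

```python
import numpy as np

def date2vec(timestamps, dim=8):
    # Hypothetical Date2Vec-style embedding: periodic projections of the
    # raw timestamp (the real module is learned from timestamps + features).
    t = np.asarray(timestamps, dtype=float)[:, None]          # (T, 1)
    freqs = np.arange(1, dim // 2 + 1)[None, :] * 0.1         # (1, dim/2)
    return np.concatenate([np.sin(t * freqs), np.cos(t * freqs)], axis=1)

def fusion_predict(x, t_in, t_out):
    # Fusion-block sketch: each predicted step attends to the input steps
    # whose time-position embeddings are most similar to its own, and the
    # prediction is the attention-weighted combination of input values.
    q = date2vec(t_out)                                       # (H, dim)
    k = date2vec(t_in)                                        # (T, dim)
    scores = q @ k.T / np.sqrt(k.shape[1])                    # (H, T)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                         # row-wise softmax
    return w @ x                                              # (H,)

x = np.sin(np.arange(24) * 0.26)                 # toy input series, steps 0..23
y = fusion_predict(x, np.arange(24), np.arange(30, 36))
```

Note that the target timestamps (30..35) are deliberately not adjacent to the input window (0..23): because the fusion operates purely on time-position embeddings, the horizon can be shifted or resized without retraining, which is the flexibility the abstract claims.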
Related papers
- Times2D: Multi-Period Decomposition and Derivative Mapping for General Time Series Forecasting [0.6554326244334868]
Time series forecasting is an important application in various domains such as energy management, traffic planning, financial markets, meteorology, and medicine.
Previous models that rely on 1D time series representations usually struggle with complex temporal variations.
This study introduces the Times2D method that transforms the 1D time series into 2D space.
arXiv Detail & Related papers (2025-03-31T18:08:30Z)
- Timer-XL: Long-Context Transformers for Unified Time Series Forecasting [67.83502953961505]
We present Timer-XL, a generative Transformer for unified time series forecasting.
Timer-XL achieves state-of-the-art performance across challenging forecasting benchmarks through a unified approach.
arXiv Detail & Related papers (2024-10-07T07:27:39Z)
- Double-Path Adaptive-correlation Spatial-Temporal Inverted Transformer for Stock Time Series Forecasting [1.864621482724548]
We propose a Double-Path Adaptive-correlation Spatial-Temporal Inverted Transformer (DPA-STIFormer) to more comprehensively extract dynamic spatial information from stock data.
Experiments conducted on four stock market datasets demonstrate state-of-the-art results, validating the model's superior capability in uncovering latent temporal-correlation patterns.
arXiv Detail & Related papers (2024-09-24T01:53:22Z)
- Leveraging 2D Information for Long-term Time Series Forecasting with Vanilla Transformers [55.475142494272724]
Time series prediction is crucial for understanding and forecasting complex dynamics in various domains.
We introduce GridTST, a model that combines the benefits of two approaches using innovative multi-directional attentions.
The model consistently delivers state-of-the-art performance across various real-world datasets.
arXiv Detail & Related papers (2024-05-22T16:41:21Z)
- DuETT: Dual Event Time Transformer for Electronic Health Records [14.520791492631114]
We introduce the DuETT architecture, an extension of Transformers designed to attend over both time and event type dimensions.
DuETT uses an aggregated input where sparse time series are transformed into a regular sequence with fixed length.
Our model outperforms state-of-the-art deep learning models on multiple downstream tasks from the MIMIC-IV and PhysioNet-2012 EHR datasets.
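The aggregation idea in the DuETT summary (turning sparse, irregularly-timed records into a regular, fixed-length sequence) can be illustrated with a small binning sketch. This is a hypothetical helper under assumed names, not DuETT's actual preprocessing: events are summed into fixed time buckets, with a parallel count channel so the model can distinguish "no event" from "event with value zero".

```python
import numpy as np

def bin_events(times, values, n_bins, t_max):
    # Fold irregular (time, value) events onto a regular grid of n_bins
    # buckets covering [0, t_max): values are summed per bucket and the
    # number of events per bucket is tracked separately.
    grid = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    idx = np.minimum((np.asarray(times) / t_max * n_bins).astype(int), n_bins - 1)
    np.add.at(grid, idx, values)    # unbuffered add: handles repeated indices
    np.add.at(counts, idx, 1)
    return grid, counts

times = [0.5, 3.2, 3.9, 47.0]       # irregular event times (e.g. hours)
vals = [1.0, 2.0, 0.5, 4.0]
grid, counts = bin_events(times, vals, n_bins=48, t_max=48.0)
```

Here the two events at hours 3.2 and 3.9 collapse into bucket 3 (value 2.5, count 2), yielding a fixed-length input regardless of how many raw events occurred.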
arXiv Detail & Related papers (2023-04-25T17:47:48Z)
- Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
It is common that real-world time series are recorded over a short period, which leaves a large gap between the capacity of deep models and the limited, noisy data available.
We propose to address the time series forecasting problem with generative modeling and propose a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z)
- TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis [80.56913334060404]
Time series analysis is of immense importance in applications, such as weather forecasting, anomaly detection, and action recognition.
Previous methods attempt to accomplish this directly from the 1D time series.
We ravel out the complex temporal variations into multiple intraperiod and interperiod variations.
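The intraperiod/interperiod decomposition mentioned in this summary rests on folding a 1D series into 2D along its dominant period. A minimal sketch of that folding, assuming FFT-based period estimation (illustrative only, not the paper's code):

```python
import numpy as np

def to_2d(series):
    # Estimate the dominant period with an FFT, then fold the series so
    # columns span within-period (intraperiod) variation and rows span
    # across-period (interperiod) variation.
    x = np.asarray(series, dtype=float)
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freq = np.argmax(spec[1:]) + 1              # skip the DC component
    period = len(x) // freq
    n = (len(x) // period) * period             # whole number of periods
    return x[:n].reshape(-1, period)            # (n_periods, period)

x = np.sin(2 * np.pi * np.arange(96) / 24)      # 4 exact cycles of period 24
grid = to_2d(x)                                 # shape (4, 24)
```

On this perfectly periodic toy input every row of `grid` is identical; on real data, variation along rows exposes trends across periods while variation along columns exposes the within-period shape, which 2D convolutions can then model.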
arXiv Detail & Related papers (2022-10-05T12:19:51Z)
- Enhancing Spatiotemporal Prediction Model using Modular Design and Beyond [2.323220706791067]
It is challenging to predict sequences that vary in both time and space.
The mainstream approach is to model spatial and temporal structures at the same time.
A modular design is proposed that splits the sequence model into two modules: a spatial encoder-decoder and a predictor.
arXiv Detail & Related papers (2022-10-04T10:09:35Z) - P-STMO: Pre-Trained Spatial Temporal Many-to-One Model for 3D Human Pose
Estimation [78.83305967085413]
This paper introduces a novel Pre-trained Spatial Temporal Many-to-One (P-STMO) model for the 2D-to-3D human pose estimation task.
Our method outperforms state-of-the-art methods with fewer parameters and less computational overhead.
arXiv Detail & Related papers (2022-03-15T04:00:59Z) - Attention Augmented Convolutional Transformer for Tabular Time-series [0.9137554315375922]
Time-series classification is one of the most frequently performed tasks in industrial data science.
We propose a novel scalable architecture for learning representations from time-series data.
Our proposed model is end-to-end and can handle both categorical and continuous valued inputs.
arXiv Detail & Related papers (2021-10-05T05:20:46Z) - Predicting Temporal Sets with Deep Neural Networks [50.53727580527024]
We propose an integrated solution based on the deep neural networks for temporal sets prediction.
A unique perspective is to learn element relationship by constructing set-level co-occurrence graph.
We design an attention-based module to adaptively learn the temporal dependency of elements and sets.
arXiv Detail & Related papers (2020-06-20T03:29:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.