Learning Gaussian Mixture Representations for Tensor Time Series
Forecasting
- URL: http://arxiv.org/abs/2306.00390v3
- Date: Wed, 7 Jun 2023 13:07:38 GMT
- Title: Learning Gaussian Mixture Representations for Tensor Time Series
Forecasting
- Authors: Jiewen Deng, Jinliang Deng, Renhe Jiang, Xuan Song
- Abstract summary: We develop a novel TTS forecasting framework, which seeks to individually model each heterogeneity component implied in the time, the location, and the source variables.
Experimental results on two real-world TTS datasets verify the superiority of our approach over state-of-the-art baselines.
- Score: 8.31607451942671
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor time series (TTS) data, a generalization of one-dimensional time
series on a high-dimensional space, is ubiquitous in real-world scenarios,
especially in monitoring systems involving multi-source spatio-temporal data
(e.g., transportation demands and air pollutants). Compared to modeling time
series or multivariate time series, which has received much attention and
achieved tremendous progress in recent years, tensor time series has received
far less attention. Properly handling tensor time series is a much more
challenging task, due to its high-dimensional and complex inner structure. In
this paper, we develop a novel TTS forecasting framework, which seeks to
individually model each heterogeneity component implied in the time, the
location, and the source variables. We name this framework GMRL, short for
Gaussian Mixture Representation Learning. Experimental results on two real-world
TTS datasets verify the superiority of our approach compared with the
state-of-the-art baselines. Code and data are published on
https://github.com/beginner-sketch/GMRL.
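The abstract alone does not specify the architecture, but the core idea of Gaussian mixture representation learning can be pictured as follows: embed each (time, location, source) slice, softly assign it to K learned Gaussian components, and use the responsibility-weighted mixture as its representation. Below is a minimal illustrative sketch in PyTorch; the class name, dimensions, and component count are assumptions, not the authors' released implementation (see the GitHub link above for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianMixtureRepresentation(nn.Module):
    """Toy Gaussian-mixture representation layer (illustrative only).

    Each input embedding is softly assigned to K learned diagonal
    Gaussians; the output is the responsibility-weighted mixture of
    component means.
    """

    def __init__(self, dim: int, n_components: int = 4):
        super().__init__()
        self.means = nn.Parameter(torch.randn(n_components, dim))
        self.log_vars = nn.Parameter(torch.zeros(n_components, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim); log-density under each diagonal Gaussian,
        # up to an additive constant shared by all components.
        diff = x.unsqueeze(1) - self.means                # (batch, K, dim)
        log_p = -0.5 * ((diff ** 2) / self.log_vars.exp()
                        + self.log_vars).sum(-1)          # (batch, K)
        resp = F.softmax(log_p, dim=-1)                   # responsibilities
        return resp @ self.means                          # (batch, dim)

# Usage: embed flattened (time, location, source) slices, then mix.
x = torch.randn(32, 16)
print(GaussianMixtureRepresentation(16)(x).shape)  # torch.Size([32, 16])
```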
Related papers
- Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts [103.725112190618]
This paper introduces Moirai-MoE, using a single input/output projection layer while delegating the modeling of diverse time series patterns to the sparse mixture of experts.
Extensive experiments on 39 datasets demonstrate the superiority of Moirai-MoE over existing foundation models in both in-distribution and zero-shot scenarios.
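For intuition, a sparse mixture-of-experts layer routes each token to a few experts selected by a learned gate. The sketch below is a generic top-k MoE in PyTorch, not Moirai-MoE's actual code; the expert width, count, and k are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Generic top-k routed mixture-of-experts layer (illustrative)."""

    def __init__(self, dim: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                          nn.Linear(4 * dim, dim))
            for _ in range(n_experts))
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim); each token is processed by its top-k experts.
        weights, idx = self.gate(x).topk(self.k, dim=-1)   # (tokens, k)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for j, expert in enumerate(self.experts):
            for slot in range(self.k):
                routed = idx[:, slot] == j                 # tokens sent here
                if routed.any():
                    out[routed] += weights[routed, slot, None] * expert(x[routed])
        return out

tokens = torch.randn(64, 32)
print(SparseMoE(32)(tokens).shape)                         # torch.Size([64, 32])
```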
arXiv Detail & Related papers (2024-10-14T13:01:11Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present the Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
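As a rough schematic of masked-encoder pretraining for forecasting: patches of a series are hidden, and the model is trained to reconstruct them from the visible context. The patch size and mask ratio below are illustrative assumptions, not values from the paper.

```python
import torch

# One step of a masked-patch objective: hide random patches of a batch
# of series; the model must reconstruct them from the visible context.
series = torch.randn(4, 96)                       # (batch, length)
patch = 8                                         # assumed patch size
patches = series.view(4, -1, patch)               # (batch, 12, patch)
mask = torch.rand(4, patches.size(1)) < 0.3       # assumed ~30% mask ratio
inputs = patches.masked_fill(mask[..., None], 0.0)
targets = patches[mask]                           # patches to reconstruct
print(inputs.shape, targets.shape)                # (4, 12, 8) (n_masked, 8)
```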
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM)
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
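The unification can be pictured as a single interface: every task supplies a mask, and the model generates the masked tokens from the rest. A minimal sketch follows; the helper name and masks are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 6 * np.pi, 48)) + 0.1 * rng.standard_normal(48)

def generative_view(series, mask):
    """Every task becomes: generate the masked tokens from the rest."""
    context = np.where(mask, np.nan, series)      # visible tokens
    return context, series[mask]                  # tokens to generate

fc = np.zeros(48, bool); fc[-8:] = True           # forecasting: mask horizon
im = np.zeros(48, bool); im[20:28] = True         # imputation: interior gap
# Anomaly detection: generate each point from its context and flag
# observations that deviate strongly from the generated value.
for name, mask in [("forecast", fc), ("imputation", im)]:
    ctx, tgt = generative_view(x, mask)
    print(name, ctx.shape, tgt.shape)
```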
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- DCSF: Deep Convolutional Set Functions for Classification of Asynchronous Time Series [5.339109578928972]
An asynchronous time series is a time series in which the channels are observed asynchronously and independently of one another.
This paper proposes a novel framework for asynchronous time series classification that is highly scalable and memory-efficient.
We explore convolutional neural networks, which are well researched for the closely related problem of classifying regularly sampled and fully observed time series.
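One way to realize a convolutional set function, sketched under assumptions since the blurb gives no architectural detail: encode each channel's (timestamp, value) observations with a shared 1-D CNN, then pool channels with an order-invariant reduction.

```python
import torch
import torch.nn as nn

class ChannelSetEncoder(nn.Module):
    """Sketch of a convolutional set function: a shared 1-D CNN encodes
    each channel's own (timestamp, value) observations, and channels
    are then pooled as a set (order-invariant mean)."""

    def __init__(self, hidden: int = 32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(2, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1))

    def forward(self, channels):
        # channels: list of (n_i, 2) tensors; each channel may have a
        # different number of observations n_i, taken at its own times.
        pooled = [self.conv(c.t().unsqueeze(0)).mean(-1) for c in channels]
        return torch.stack(pooled).mean(0)         # set-level pooling

enc = ChannelSetEncoder()
chans = [torch.randn(n, 2) for n in (5, 9, 3)]     # asynchronous lengths
print(enc(chans).shape)                            # torch.Size([1, 32])
```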
arXiv Detail & Related papers (2022-08-24T08:47:36Z)
- HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
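A minimal INR for a single series is a coordinate MLP t -> x(t); sine activations (SIREN-style) are one of the activation choices such an analysis would compare. The sketch below is illustrative, not the paper's hypernetwork, and all names are assumptions.

```python
import torch
import torch.nn as nn

class SineINR(nn.Module):
    """Coordinate MLP t -> x(t) with sine activations (SIREN-style)."""

    def __init__(self, hidden: int = 64, omega: float = 30.0):
        super().__init__()
        self.l1 = nn.Linear(1, hidden)
        self.l2 = nn.Linear(hidden, hidden)
        self.l3 = nn.Linear(hidden, 1)
        self.omega = omega

    def forward(self, t):
        h = torch.sin(self.omega * self.l1(t))
        h = torch.sin(self.l2(h))
        return self.l3(h)

# Fit one series; the INR can then be queried at any resolution.
t = torch.linspace(0, 1, 100).unsqueeze(-1)
x = torch.sin(12 * t)
inr = SineINR()
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((inr(t) - x) ** 2).mean()
    loss.backward()
    opt.step()
print(float(loss))                                 # reconstruction MSE
```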
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
- Towards Similarity-Aware Time-Series Classification [51.2400839966489]
We study time-series classification (TSC), a fundamental task of time-series data mining.
We propose Similarity-Aware Time-Series Classification (SimTSC), a framework that models similarity information with graph neural networks (GNNs)
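The similarity-graph idea can be sketched in a few lines: build a kNN adjacency from pairwise series distances (SimTSC uses DTW; plain Euclidean is substituted here for brevity) and propagate per-series features over it, GCN-style. The function name and distance choice are assumptions.

```python
import numpy as np

def similarity_graph(dists: np.ndarray, k: int = 3) -> np.ndarray:
    """Symmetrically normalized kNN adjacency from pairwise distances."""
    n = len(dists)
    adj = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(dists[i])[1:k + 1]:    # skip self at index 0
            adj[i, j] = adj[j, i] = 1.0
    adj += np.eye(n)                               # self-loops
    deg = adj.sum(1)
    return adj / np.sqrt(np.outer(deg, deg))

rng = np.random.default_rng(0)
series = rng.standard_normal((10, 50))             # 10 series, length 50
d = np.linalg.norm(series[:, None] - series[None], axis=-1)
a_hat = similarity_graph(d)
feats = rng.standard_normal((10, 8))               # per-series embeddings
print((a_hat @ feats).shape)                       # one GCN-style hop: (10, 8)
```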
arXiv Detail & Related papers (2022-01-05T02:14:57Z)
- GACAN: Graph Attention-Convolution-Attention Networks for Traffic Forecasting Based on Multi-granularity Time Series [9.559635281384134]
We propose Graph Attention-Convolution-Attention Networks (GACAN) for traffic forecasting.
The model uses a novel Att-Conv-Att block which contains two graph attention layers and one spectral-based GCN layer sandwiched in between.
The main novelty of the model is the integration of time series of four different time granularities.
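A toy version of the sandwich structure, assuming single-head attention and a simple normalized-adjacency "spectral" layer; the real model's temporal and multi-granularity machinery is omitted, and all names here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGAT(nn.Module):
    """Single-head graph attention layer (simplified)."""

    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.w = nn.Linear(d_in, d_out, bias=False)
        self.a = nn.Linear(2 * d_out, 1, bias=False)

    def forward(self, x, adj):
        h = self.w(x)                                    # (N, d_out)
        n = h.size(0)
        pairs = torch.cat([h.repeat_interleave(n, 0),
                           h.repeat(n, 1)], dim=-1)      # all (i, j) pairs
        e = F.leaky_relu(self.a(pairs)).view(n, n)
        e = e.masked_fill(adj == 0, float("-inf"))       # adj needs self-loops
        return F.elu(torch.softmax(e, dim=-1) @ h)

class AttConvAtt(nn.Module):
    """GAT -> normalized-adjacency GCN -> GAT sandwich."""

    def __init__(self, d: int, a_hat: torch.Tensor):
        super().__init__()
        self.gat1, self.gat2 = SimpleGAT(d, d), SimpleGAT(d, d)
        self.gcn_w = nn.Linear(d, d)
        self.register_buffer("a_hat", a_hat)

    def forward(self, x, adj):
        h = self.gat1(x, adj)
        h = torch.relu(self.a_hat @ self.gcn_w(h))       # spectral-style hop
        return self.gat2(h, adj)

n, d = 6, 16
adj = (torch.rand(n, n) > 0.5).float()
adj = ((adj + adj.t() + torch.eye(n)) > 0).float()       # symmetric + loops
deg = adj.sum(1)
a_hat = adj / torch.sqrt(torch.outer(deg, deg))
print(AttConvAtt(d, a_hat)(torch.randn(n, d), adj).shape)  # torch.Size([6, 16])
```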
arXiv Detail & Related papers (2021-10-27T10:21:13Z)
- TE-ESN: Time Encoding Echo State Network for Prediction Based on Irregularly Sampled Time Series Data [6.221375620565451]
Prediction based on Irregularly Sampled Time Series (ISTS) is of wide concern in real-world applications.
We create a new model structure named Time Encoding Echo State Network (TE-ESN)
It is the first ESN-based model that can process ISTS data.
Experiments on one chaos system and three real-world datasets show that TE-ESN performs better than all baselines.
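A toy illustration of the time-encoding idea: an echo state network whose state fades with the gap since the previous observation, so irregular sampling is reflected in the update. The decay form and sizes below are assumptions, not the paper's exact formulation.

```python
import numpy as np

class TimeGapESN:
    """Echo state network whose state decays with the elapsed gap."""

    def __init__(self, n_in: int = 1, n_res: int = 100,
                 rho: float = 0.9, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        w = rng.standard_normal((n_res, n_res))
        # Rescale the reservoir to spectral radius rho (echo state property).
        self.w = w * (rho / np.abs(np.linalg.eigvals(w)).max())
        self.state = np.zeros(n_res)

    def step(self, x, dt):
        decay = np.exp(-dt)                        # longer gap -> more fading
        pre = np.tanh(self.w_in @ x + self.w @ self.state)
        self.state = decay * self.state + (1 - decay) * pre
        return self.state

esn = TimeGapESN()
times = np.cumsum(np.random.default_rng(1).exponential(1.0, 20))
for prev, t in zip(np.concatenate([[0.0], times[:-1]]), times):
    state = esn.step(np.array([np.sin(t)]), t - prev)
print(state[:3])                                   # reservoir features
```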
arXiv Detail & Related papers (2021-05-02T08:00:46Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance on a wide range of applications and datasets.
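The parameter-saving mechanism behind a tensor-train module can be seen in isolation: a large weight tensor is represented as a chain of small cores contracted over rank indices. A minimal NumPy sketch, with illustrative shapes and ranks:

```python
import numpy as np

# A tensor-train factorization writes a large 3-way weight tensor
# W[i, j, k] as a chain of small cores contracted over rank indices,
# cutting parameters from I*J*K to roughly r*(I + K) + r*r*J here.
I, J, K, r = 8, 8, 8, 3
g1 = np.random.randn(I, r)         # core 1
g2 = np.random.randn(r, J, r)      # core 2
g3 = np.random.randn(r, K)         # core 3

w = np.einsum("ia,ajb,bk->ijk", g1, g2, g3)   # reconstruct the full tensor
print(w.shape)                                 # (8, 8, 8)
print(g1.size + g2.size + g3.size, "vs", w.size)   # 120 vs 512 parameters
```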
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
- A Deep Structural Model for Analyzing Correlated Multivariate Time Series [11.009809732645888]
We present a deep learning structural time series model which can handle correlated multivariate time series input.
The model explicitly learns/extracts the trend, seasonality, and event components.
We compare our model with several state-of-the-art methods through a comprehensive set of experiments on a variety of time series data sets.
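As a linear-algebra stand-in for the learned trend/seasonality components (event effects omitted), here is a least-squares structural decomposition; the period and design matrix are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, period = 140, 7
t = np.arange(n)
y = 0.05 * t + np.sin(2 * np.pi * t / period) + 0.2 * rng.standard_normal(n)

# Design matrix: intercept, linear trend, one seasonal harmonic.
X = np.column_stack([np.ones(n), t,
                     np.sin(2 * np.pi * t / period),
                     np.cos(2 * np.pi * t / period)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
trend = X[:, :2] @ coef[:2]                    # recovered trend component
seasonal = X[:, 2:] @ coef[2:]                 # recovered seasonal component
print(trend[:3].round(2), seasonal[:3].round(2))
```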
arXiv Detail & Related papers (2020-01-02T18:48:29Z)