Networked Time Series Prediction with Incomplete Data
- URL: http://arxiv.org/abs/2110.02271v2
- Date: Mon, 3 Jul 2023 16:18:31 GMT
- Title: Networked Time Series Prediction with Incomplete Data
- Authors: Yichen Zhu, Mengtian Zhang, Bo Jiang, Haiming Jin, Jianqiang Huang,
Xinbing Wang
- Abstract summary: We propose NETS-ImpGAN, a novel deep learning framework that can be trained on incomplete data with missing values in both history and future.
We conduct extensive experiments on three real-world datasets under different missing patterns and missing rates.
- Score: 59.45358694862176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A networked time series (NETS) is a family of time series on a given graph,
one for each node. It has found a wide range of applications from intelligent
transportation, environment monitoring to mobile network management. An
important task in such applications is to predict the future values of a NETS
based on its historical values and the underlying graph. Most existing methods
require complete data for training. However, in real-world scenarios, it is not
uncommon to have missing data due to sensor malfunction, incomplete sensing
coverage, etc. In this paper, we study the problem of NETS prediction with
incomplete data. We propose NETS-ImpGAN, a novel deep learning framework that
can be trained on incomplete data with missing values in both history and
future. Furthermore, we propose novel Graph Temporal Attention Networks by
incorporating the attention mechanism to capture both inter-time series
correlations and temporal correlations. We conduct extensive experiments on
three real-world datasets under different missing patterns and missing rates.
The experimental results show that NETS-ImpGAN outperforms existing methods
except when data exhibit very low variance, in which case NETS-ImpGAN still
achieves competitive performance.
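The Graph Temporal Attention idea described above can be illustrated with plain scaled dot-product attention applied along the time axis (temporal correlations) and along the node axis (inter-time-series correlations) of a NETS tensor. The following is a minimal NumPy sketch of the general mechanism, not the authors' implementation; all names and shapes are illustrative.

```python
import numpy as np

def scaled_dot_attention(q, k, v):
    """Scaled dot-product attention with a softmax over the last axis."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy NETS tensor: N nodes, T time steps, d features per observation.
rng = np.random.default_rng(0)
N, T, d = 4, 6, 8
x = rng.standard_normal((N, T, d))

# Temporal attention: each node attends over its own time steps.
h_time = scaled_dot_attention(x, x, x)                     # shape (N, T, d)

# Inter-series (graph) attention: each time step attends across nodes.
xt = x.swapaxes(0, 1)                                      # shape (T, N, d)
h_graph = scaled_dot_attention(xt, xt, xt).swapaxes(0, 1)  # back to (N, T, d)

print(h_time.shape, h_graph.shape)  # (4, 6, 8) (4, 6, 8)
```

Stacking or interleaving the two attention directions is what lets a single module capture both correlation types.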
Related papers
- Exploring the Limits of Historical Information for Temporal Knowledge
Graph Extrapolation [59.417443739208146]
We propose a new event forecasting model based on a novel training framework of historical contrastive learning.
CENET learns both the historical and non-historical dependency to distinguish the most potential entities.
We evaluate our proposed model on five benchmark graphs.
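A historical-contrastive objective of this flavor can be sketched with a generic InfoNCE-style loss that pulls a query embedding toward historical (positive) entities and pushes it away from non-historical (negative) ones. This is a standard contrastive-learning illustration, not CENET's exact formulation; all names are hypothetical.

```python
import numpy as np

def info_nce(query, positives, negatives, temperature=0.1):
    """Generic InfoNCE loss: higher cosine similarity to positives
    (historical entities) and lower similarity to negatives lowers the loss."""
    def cos(a, b):
        return a @ b.T / (np.linalg.norm(a) * np.linalg.norm(b, axis=1))
    pos = np.exp(cos(query, positives) / temperature)
    neg = np.exp(cos(query, negatives) / temperature)
    return -np.log(pos.sum() / (pos.sum() + neg.sum()))

rng = np.random.default_rng(1)
q = rng.standard_normal(16)
# A near-identical positive and random negatives give a small loss.
loss = info_nce(q, positives=q[None, :] + 0.01,
                negatives=rng.standard_normal((5, 16)))
assert loss > 0.0  # InfoNCE is always positive
```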
arXiv Detail & Related papers (2023-08-29T03:26:38Z)
- Temporal Graph Benchmark for Machine Learning on Temporal Graphs [54.52243310226456]

Temporal Graph Benchmark (TGB) is a collection of challenging and diverse benchmark datasets.
We benchmark each dataset and find that the performance of common models can vary drastically across datasets.
TGB provides an automated machine learning pipeline for reproducible and accessible temporal graph research.
arXiv Detail & Related papers (2023-07-03T13:58:20Z)
- Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- Graph Convolutional Networks for Traffic Forecasting with Missing Values [0.5774786149181392]
We propose a Graph Convolutional Network model able to handle complex missing-value patterns in the spatio-temporal context.
We propose as well a dynamic graph learning module based on the learned local-global features.
The experimental results on real-life datasets show the reliability of our proposed method.
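The graph-convolution step such models build on can be sketched with a basic Kipf-and-Welling-style layer: a symmetrically normalized adjacency with self-loops, a linear map, and a nonlinearity. This is a minimal sketch, not the paper's model; the toy graph and the zero-imputation of missing readings are illustrative assumptions.

```python
import numpy as np

def gcn_layer(adj, x, w):
    """One graph-convolution step: D^{-1/2} (A + I) D^{-1/2} X W, then ReLU."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ x @ w, 0.0)

# Toy road graph with 3 sensors; a missing reading is zero-filled for the sketch.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
x = np.array([[1.0], [0.0], [2.0]])               # 0.0 marks the missing value
w = np.ones((1, 2))
h = gcn_layer(adj, x, w)
print(h.shape)  # (3, 2)
```

Because the normalized adjacency averages over neighbors, a node with a missing reading still receives signal from the sensors around it.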
arXiv Detail & Related papers (2022-12-13T08:04:38Z)
- Diving into Unified Data-Model Sparsity for Class-Imbalanced Graph Representation Learning [30.23894624193583]
Training Graph Neural Networks (GNNs) on non-Euclidean graph data often incurs relatively high time costs.
We develop a unified data-model dynamic sparsity framework named Graph Decantation (GraphDec) to address the challenges of training on massive, class-imbalanced graph data.
arXiv Detail & Related papers (2022-10-01T01:47:00Z)
- STING: Self-attention based Time-series Imputation Networks using GAN [4.052758394413726]
STING (Self-attention based Time-series Imputation Networks using GAN) is proposed.
We take advantage of generative adversarial networks and bidirectional recurrent neural networks to learn latent representations of the time series.
Experimental results on three real-world datasets demonstrate that STING outperforms the existing state-of-the-art methods in terms of imputation accuracy.
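A core ingredient of GAN-based imputation methods like this is scoring the generator only where ground truth exists, via an observation mask. The sketch below shows such a masked reconstruction loss in NumPy; it is a generic illustration, not STING's actual objective, and the data and mask are synthetic.

```python
import numpy as np

def masked_reconstruction_loss(x, x_hat, mask):
    """Compare imputed output to the series only where values were
    actually observed (mask == 1); missing positions carry no gradient."""
    return np.abs((x - x_hat) * mask).sum() / mask.sum()

rng = np.random.default_rng(2)
x = rng.standard_normal((3, 5))                   # 3 series, 5 time steps
mask = (rng.random((3, 5)) < 0.7).astype(float)   # ~70% observed
x_hat = x + 0.1 * rng.standard_normal((3, 5))     # imputed/reconstructed series
loss = masked_reconstruction_loss(x, x_hat, mask)
print(round(float(loss), 3))
```

In a full GAN setup this reconstruction term is combined with an adversarial loss from a discriminator that tries to tell observed entries from imputed ones.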
arXiv Detail & Related papers (2022-09-22T06:06:56Z)
- Evidential Temporal-aware Graph-based Social Event Detection via Dempster-Shafer Theory [76.4580340399321]
We propose ETGNN, a novel Evidential Temporal-aware Graph Neural Network.
We construct view-specific graphs whose nodes are the texts and edges are determined by several types of shared elements respectively.
Considering the view-specific uncertainty, the representations of all views are converted into mass functions through evidential deep learning (EDL) neural networks.
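The conversion from network outputs to mass functions in evidential deep learning can be sketched with the standard subjective-logic mapping: non-negative evidence e_k yields belief masses b_k = e_k / S and an explicit uncertainty mass u = K / S, where S = sum(e_k + 1). This illustrates the general EDL recipe, not ETGNN's specific architecture.

```python
import numpy as np

def evidence_to_mass(evidence):
    """Map non-negative evidence over K classes to belief masses plus an
    uncertainty mass; the masses always sum to exactly 1."""
    evidence = np.asarray(evidence, dtype=float)
    k = evidence.size
    s = (evidence + 1.0).sum()    # Dirichlet strength: sum of alpha_k = e_k + 1
    belief = evidence / s
    uncertainty = k / s
    return belief, uncertainty

belief, u = evidence_to_mass([4.0, 1.0, 0.0])
print(belief, u)                  # -> [0.5 0.125 0.] 0.375
```

Low total evidence inflates u, which is what lets Dempster-Shafer combination down-weight unreliable views.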
arXiv Detail & Related papers (2022-05-24T16:22:40Z)
- PGCN: Progressive Graph Convolutional Networks for Spatial-Temporal Traffic Forecasting [4.14360329494344]
We propose a novel traffic forecasting framework called Progressive Graph Convolutional Network (PGCN).
PGCN constructs a set of graphs by progressively adapting to online input data during the training and testing phases.
The proposed model achieves state-of-the-art performance with consistency in all datasets.
arXiv Detail & Related papers (2022-02-18T02:15:44Z)
- Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
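A graph learning module that extracts uni-directed relations can be sketched by building the adjacency from an antisymmetric combination of two node-embedding matrices: since the pre-activation satisfies S_ij = -S_ji, at most one direction of each edge survives the ReLU. This is a sketch in the spirit of such modules, with illustrative parameters, not the paper's exact module.

```python
import numpy as np

def learn_unidirected_graph(m1, m2, alpha=3.0):
    """Adjacency from learnable node embeddings m1, m2. The antisymmetric
    term m1 @ m2.T - m2 @ m1.T ensures that if edge (i, j) is kept,
    edge (j, i) is zeroed out by the ReLU: relations are uni-directed."""
    a = np.tanh(alpha * (m1 @ m2.T - m2 @ m1.T))
    return np.maximum(a, 0.0)

rng = np.random.default_rng(3)
n, d = 5, 4
adj = learn_unidirected_graph(rng.standard_normal((n, d)),
                              rng.standard_normal((n, d)))
# No pair of nodes ends up with edges in both directions.
assert np.all(adj * adj.T <= 1e-12)
print(adj.shape)  # (5, 5)
```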
arXiv Detail & Related papers (2020-05-24T04:02:18Z)
- Neural ODEs for Informative Missingness in Multivariate Time Series [0.7233897166339269]
Practical applications such as sensor networks, healthcare, and weather monitoring generate data that are in truth continuous in time.
The deep learning model GRU-D was one early attempt to address informative missingness in time series data.
A new family of neural networks called Neural ODEs offers a natural and efficient way to process time series that are continuous in time.
arXiv Detail & Related papers (2020-05-20T00:28:30Z)
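The continuous-time appeal of Neural ODEs can be sketched with a tiny vector field dh/dt = tanh(W h) integrated with explicit Euler steps: irregular gaps between observation times enter directly as step sizes, with no need to discretize onto a fixed grid. This toy sketch uses a fixed random weight matrix and Euler integration rather than a trained model and an adaptive solver.

```python
import numpy as np

def neural_ode_euler(h0, w, times):
    """Integrate dh/dt = tanh(w @ h) with explicit Euler between the
    given observation times; uneven gaps become uneven step sizes."""
    h = h0.copy()
    trajectory = [h.copy()]
    for t0, t1 in zip(times[:-1], times[1:]):
        h = h + (t1 - t0) * np.tanh(w @ h)    # one Euler step of size t1 - t0
        trajectory.append(h.copy())
    return np.stack(trajectory)

rng = np.random.default_rng(4)
w = 0.5 * rng.standard_normal((3, 3))
times = np.array([0.0, 0.2, 0.9, 1.0, 2.5])   # irregularly sampled observations
traj = neural_ode_euler(np.ones(3), w, times)
print(traj.shape)  # (5, 3)
```

In practice the vector field is a trained network and a higher-order adaptive solver replaces Euler, but the handling of irregular timestamps is the same.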
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.