Imputation with Inter-Series Information from Prototypes for Irregularly
Sampled Time Series
- URL: http://arxiv.org/abs/2401.07249v1
- Date: Sun, 14 Jan 2024 10:51:30 GMT
- Title: Imputation with Inter-Series Information from Prototypes for Irregularly
Sampled Time Series
- Authors: Zhihao Yu, Xu Chu, Liantao Ma, Yasha Wang, Wenwu Zhu
- Abstract summary: We propose PRIME, a Prototype Recurrent Imputation ModEl, which integrates both intra-series and inter-series information for imputing missing values in irregularly sampled time series.
Our framework comprises a prototype memory module for learning inter-series information, a bidirectional gated recurrent unit utilizing prototype information for imputation, and an attentive refinement module for adjusting imputations.
- Score: 44.12943317707651
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Irregularly sampled time series are ubiquitous, presenting significant
challenges for analysis due to missing values. Although existing methods address
imputation, they predominantly focus on leveraging intra-series information,
neglecting the potential benefits that inter-series information could provide,
such as reducing uncertainty and the memorization effect. To bridge this gap, we
propose PRIME, a Prototype Recurrent Imputation ModEl, which integrates both
intra-series and inter-series information for imputing missing values in
irregularly sampled time series. Our framework comprises a prototype memory
module for learning inter-series information, a bidirectional gated recurrent
unit utilizing prototype information for imputation, and an attentive
prototypical refinement module for adjusting imputations. We conducted
extensive experiments on three datasets, and the results underscore PRIME's
superiority over state-of-the-art models, with up to a 26% relative improvement
in mean squared error.
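The interplay of intra-series and inter-series information described in the abstract can be sketched in a few lines. This is a minimal illustrative construction only: the prototype bank, the attention query, and the blending weight `alpha` are assumptions for exposition, and the last-value carry-forward stands in for PRIME's bidirectional GRU and attentive refinement, which are not reproduced here.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def impute_with_prototypes(series, prototypes, alpha=0.5):
    """Impute NaNs in `series` (T, D) using a bank of `prototypes` (K, D).

    Intra-series estimate: carry the last observed value forward (a crude
    stand-in for a recurrent model). Inter-series estimate: attend over the
    prototype bank, using the partially observed step as the query. The two
    estimates are blended with an illustrative weight `alpha`.
    """
    series = series.astype(float).copy()
    T, D = series.shape
    last = np.nanmean(series, axis=0)  # fallback before any observation
    for t in range(T):
        obs = ~np.isnan(series[t])
        q = np.where(obs, series[t], last)            # query vector
        attn = softmax(prototypes @ q / np.sqrt(D))   # (K,) attention weights
        proto_est = attn @ prototypes                 # inter-series estimate
        intra_est = last                              # intra-series estimate
        series[t] = np.where(obs, series[t],
                             alpha * intra_est + (1 - alpha) * proto_est)
        last = series[t]
    return series
```

Observed entries are left untouched; only the missing entries receive the blended estimate, mirroring the impute-then-refine flow at a very coarse level.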
Related papers
- M$^3$-Impute: Mask-guided Representation Learning for Missing Value Imputation [12.174699459648842]
M$^3$-Impute aims to explicitly leverage missingness information and feature correlations with novel masking schemes.
Experiment results show the effectiveness of M$3$-Impute by achieving 20 best and 4 second-best MAE scores on average.
arXiv Detail & Related papers (2024-10-11T13:25:32Z) - An End-to-End Model for Time Series Classification In the Presence of Missing Values [25.129396459385873]
Time series classification with missing data is a prevalent issue in time series analysis.
This study proposes an end-to-end neural network that unifies data imputation and representation learning within a single framework.
arXiv Detail & Related papers (2024-08-11T19:39:12Z) - DeformTime: Capturing Variable Dependencies with Deformable Attention for Time Series Forecasting [0.34530027457862006]
We present DeformTime, a neural network architecture that attempts to capture correlated temporal patterns from the input space.
We conduct extensive experiments on 6 MTS data sets, using previously established benchmarks as well as challenging infectious disease modelling tasks.
Results demonstrate that DeformTime improves accuracy against previous competitive methods across the vast majority of MTS forecasting tasks.
arXiv Detail & Related papers (2024-06-11T16:45:48Z) - Graph Spatiotemporal Process for Multivariate Time Series Anomaly
Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z) - Compatible Transformer for Irregularly Sampled Multivariate Time Series [75.79309862085303]
We propose a transformer-based encoder to achieve comprehensive temporal-interaction feature learning for each individual sample.
We conduct extensive experiments on 3 real-world datasets and validate that the proposed CoFormer significantly and consistently outperforms existing methods.
arXiv Detail & Related papers (2023-10-17T06:29:09Z) - STING: Self-attention based Time-series Imputation Networks using GAN [4.052758394413726]
STING (Self-attention based Time-series Imputation Networks using GAN) is proposed.
We take advantage of generative adversarial networks and bidirectional recurrent neural networks to learn latent representations of the time series.
Experimental results on three real-world datasets demonstrate that STING outperforms the existing state-of-the-art methods in terms of imputation accuracy.
arXiv Detail & Related papers (2022-09-22T06:06:56Z) - Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z) - Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the Importance Guided Stochastic Gradient Descent (IGSGD) method to train models to infer from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
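The imputation-free idea above can be illustrated with a generic mask-as-input construction: zero-fill the missing entries and concatenate the binary observation mask, so a downstream model can learn from the missingness pattern directly. This is a common generic construction, not the paper's IGSGD/RL training scheme; the function name is illustrative.

```python
import numpy as np

def mask_aware_features(x):
    """Turn a vector with NaNs into an imputation-free feature vector.

    Missing entries are zero-filled and the binary observation mask is
    concatenated, so no explicit imputation step is needed before training.
    """
    mask = (~np.isnan(x)).astype(float)
    filled = np.where(np.isnan(x), 0.0, x)
    return np.concatenate([filled, mask])
```

A model trained on such features can exploit the mask channel to discount zero-filled values, which is one way to avoid the two-step impute-then-predict pipeline.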
arXiv Detail & Related papers (2021-07-05T12:44:39Z) - Stacking VAE with Graph Neural Networks for Effective and Interpretable
Time Series Anomaly Detection [5.935707085640394]
We propose a stacking variational auto-encoder (VAE) model with graph neural networks for effective and interpretable time-series anomaly detection.
We show that our proposed model outperforms the strong baselines on three public datasets with considerable improvements.
arXiv Detail & Related papers (2021-05-18T09:50:00Z) - Time-Series Imputation with Wasserstein Interpolation for Optimal
Look-Ahead-Bias and Variance Tradeoff [66.59869239999459]
In finance, imputation of missing returns may be applied prior to training a portfolio optimization model.
There is an inherent trade-off between the look-ahead-bias of using the full data set for imputation and the larger variance in the imputation from using only the training data.
We propose a Bayesian posterior consensus distribution which optimally controls the variance and look-ahead-bias trade-off in the imputation.
arXiv Detail & Related papers (2021-02-25T09:05:35Z)
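The look-ahead-bias/variance trade-off in the last entry can be sketched as a simple convex combination of a train-only estimate (no look-ahead bias, higher variance) and a full-data estimate (lower variance, but peeks at future data). The weight `w` here is illustrative; the paper derives it from a Bayesian posterior consensus distribution, which is not reproduced.

```python
import numpy as np

def consensus_impute(train_vals, full_vals, w=0.5):
    """Blend a train-only estimate with a full-data estimate of a missing value.

    w = 1.0 uses only training data (bias-free, high variance);
    w = 0.0 uses the full data set (low variance, look-ahead bias).
    """
    return w * np.mean(train_vals) + (1 - w) * np.mean(full_vals)
```

For example, with train-only returns [1, 3] and full-sample returns [1, 3, 5, 7], an equal-weight blend gives 0.5 * 2 + 0.5 * 4 = 3.0.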
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.