NRTSI: Non-Recurrent Time Series Imputation for Irregularly-sampled Data
- URL: http://arxiv.org/abs/2102.03340v1
- Date: Fri, 5 Feb 2021 18:41:25 GMT
- Title: NRTSI: Non-Recurrent Time Series Imputation for Irregularly-sampled Data
- Authors: Siyuan Shan, Junier B. Oliva
- Abstract summary: Time series imputation is a fundamental task for understanding time series with missing data.
We propose a novel imputation model called NRTSI without any recurrent modules.
NRTSI can easily handle irregularly-sampled data, perform multiple-mode imputation, and handle the scenario where dimensions are partially observed.
- Score: 14.343059464246425
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Time series imputation is a fundamental task for understanding time series
with missing data. Existing imputation methods often rely on recurrent models
such as RNNs and ordinary differential equations, both of which suffer from the
error compounding problems of recurrent models. In this work, we view the
imputation task from the perspective of permutation equivariant modeling of
sets and propose a novel imputation model called NRTSI without any recurrent
modules. Taking advantage of the permutation equivariant nature of NRTSI, we
design a principled and efficient hierarchical imputation procedure. NRTSI can
easily handle irregularly-sampled data, perform multiple-mode stochastic
imputation, and handle the scenario where dimensions are partially observed. We
show that NRTSI achieves state-of-the-art performance across a wide range of
commonly used time series imputation benchmarks.
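As a rough illustration of this set-based, non-recurrent view, the sketch below encodes observed (time, value) pairs with a Transformer encoder (permutation equivariant, since no positional encoding is used) and reads out predictions at query time stamps. The class name, dimensions, and readout are illustrative, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SetImputer(nn.Module):
    """Illustrative permutation equivariant imputer: a Transformer
    encoder with no positional encoding attends over observed
    (time, value) pairs together with query time stamps, then reads
    out predicted values at the query positions."""
    def __init__(self, dim=64, heads=4, layers=2):
        super().__init__()
        self.embed_obs = nn.Linear(2, dim)   # embeds a (time, value) pair
        self.embed_qry = nn.Linear(1, dim)   # embeds a query time stamp
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        self.readout = nn.Linear(dim, 1)

    def forward(self, obs_t, obs_x, qry_t):
        # obs_t, obs_x: (B, N_obs); qry_t: (B, N_qry)
        obs = self.embed_obs(torch.stack([obs_t, obs_x], dim=-1))
        qry = self.embed_qry(qry_t.unsqueeze(-1))
        h = self.encoder(torch.cat([obs, qry], dim=1))
        return self.readout(h[:, obs.shape[1]:]).squeeze(-1)  # (B, N_qry)

model = SetImputer()
pred = model(torch.rand(8, 20), torch.randn(8, 20), torch.rand(8, 5))
```

Because time enters only as an input feature, arbitrary irregular time stamps are handled for free, which mirrors the abstract's claim.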
Related papers
- DiffImp: Efficient Diffusion Model for Probabilistic Time Series Imputation with Bidirectional Mamba Backbone [6.428451261614519]
Current DDPM-based probabilistic time series imputation methods face two kinds of challenges.
We integrate the computationally efficient state space model Mamba as the backbone denoising module for DDPMs.
Our approach achieves state-of-the-art time series imputation results across multiple datasets, missing-data scenarios, and missing ratios.
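For context, a minimal sketch of the standard DDPM reverse (ancestral sampling) step that such methods build on; the `denoiser` argument stands in for the bidirectional Mamba backbone, which is abstracted away here.

```python
import torch

def ddpm_reverse_step(x_t, t, denoiser, alphas, alphas_bar, cond):
    """One standard DDPM ancestral-sampling step: predict the noise,
    form the posterior mean, and (for t > 0) add scaled Gaussian noise.
    `denoiser` stands in for the bidirectional-Mamba module; `alphas`
    and `alphas_bar` are the usual 1-D noise-schedule tensors."""
    eps = denoiser(x_t, t, cond)                  # predicted noise
    a_t, ab_t = alphas[t], alphas_bar[t]
    mean = (x_t - (1 - a_t) / torch.sqrt(1 - ab_t) * eps) / torch.sqrt(a_t)
    if t == 0:
        return mean
    return mean + torch.sqrt(1 - a_t) * torch.randn_like(x_t)
```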
arXiv Detail & Related papers (2024-10-17T08:48:52Z) - Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
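A hedged sketch of the general forecast-then-score recipe; the graph spatiotemporal forecaster itself is assumed as a black box, and the normalization and threshold are illustrative.

```python
import numpy as np

def anomaly_scores(y_true, y_pred, eps=1e-8):
    """Illustrative scorer: per-timestep deviation between observations
    and a forecaster's predictions, z-normalized per dimension so no
    single sensor dominates, then reduced by a max over dimensions."""
    err = np.abs(y_true - y_pred)                       # (T, D)
    mu, sd = err.mean(axis=0), err.std(axis=0) + eps
    return ((err - mu) / sd).max(axis=1)                # (T,)

scores = anomaly_scores(np.random.randn(100, 8), np.random.randn(100, 8))
flags = scores > 3.0   # threshold chosen purely for illustration
```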
arXiv Detail & Related papers (2024-01-11T10:10:16Z) - Continuous-time Autoencoders for Regular and Irregular Time Series Imputation [21.25279298572273]
Time series imputation is one of the most fundamental tasks for time series analysis.
Recent self-attention-based methods show state-of-the-art imputation performance.
Designing imputation methods based on continuous-time recurrent neural networks has, however, long been overlooked.
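A minimal sketch of the continuous-time recurrent idea: between observations the hidden state evolves under a learned ODE (crude fixed-step Euler integration here), and each observation triggers a discrete GRU update. The paper's actual continuous-time autoencoder is more elaborate.

```python
import torch
import torch.nn as nn

class ODEGRUCell(nn.Module):
    """Between observations the hidden state drifts under a learned ODE
    (a few Euler steps here, for simplicity); each observation then
    triggers a standard discrete GRU update."""
    def __init__(self, input_dim, hidden_dim, n_euler=4):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.Tanh())
        self.gru = nn.GRUCell(input_dim, hidden_dim)
        self.n_euler = n_euler

    def forward(self, x, h, dt):
        # x: (B, input_dim); h: (B, hidden_dim); dt: (B, 1) gap since
        # the previous observation -- irregular sampling enters here
        step = dt / self.n_euler
        for _ in range(self.n_euler):
            h = h + step * self.f(h)   # evolve h across the gap
        return self.gru(x, h)          # discrete update at the observation
```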
arXiv Detail & Related papers (2023-12-27T14:13:42Z) - Gated Recurrent Neural Networks with Weighted Time-Delay Feedback [59.125047512495456]
We introduce a novel gated recurrent unit (GRU) with a weighted time-delay feedback mechanism.
We show that $\tau$-GRU can converge faster and generalize better than state-of-the-art recurrent units and gated recurrent architectures.
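A hedged sketch of the core mechanism, assuming a standard GRU cell plus a learned-weight feedback term from the hidden state $\tau$ steps back; the paper's exact gating will differ.

```python
import torch
import torch.nn as nn

class DelayFeedbackGRU(nn.Module):
    """Keeps the trajectory of hidden states and mixes the state from
    `tau` steps back into each update with a learned scalar weight."""
    def __init__(self, input_dim, hidden_dim, tau=3):
        super().__init__()
        self.gru = nn.GRUCell(input_dim, hidden_dim)
        self.w = nn.Parameter(torch.tensor(0.1))   # feedback weight
        self.tau = tau

    def forward(self, xs):                         # xs: (T, B, input_dim)
        T, B, _ = xs.shape
        hs = [torch.zeros(B, self.gru.hidden_size, device=xs.device)]
        for t in range(T):
            delayed = hs[max(0, t - self.tau)]     # state tau steps back
            hs.append(self.gru(xs[t], hs[-1] + self.w * delayed))
        return torch.stack(hs[1:])                 # (T, B, hidden_dim)
```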
arXiv Detail & Related papers (2022-12-01T02:26:34Z) - STING: Self-attention based Time-series Imputation Networks using GAN [4.052758394413726]
We propose STING (Self-attention based Time-series Imputation Networks using GAN).
We take advantage of generative adversarial networks and bidirectional recurrent neural networks to learn latent representations of the time series.
Experimental results on three real-world datasets demonstrate that STING outperforms the existing state-of-the-art methods in terms of imputation accuracy.
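A minimal sketch of GAN-based imputation losses of this general (GAIN-style) flavor; the generator G and discriminator D, including STING's self-attention and bidirectional RNN blocks, are assumed as black boxes, and the reconstruction weight is illustrative.

```python
import torch
import torch.nn.functional as F

def imputation_gan_losses(G, D, x, mask):
    """GAIN-style losses: G fills the gaps, D (assumed to end in a
    sigmoid) predicts per entry whether a value was observed (1) or
    imputed (0). `mask` is float: 1.0 observed, 0.0 missing."""
    x_hat = G(x * mask, mask)                       # generator's guesses
    x_filled = mask * x + (1 - mask) * x_hat        # keep observed values
    d_loss = F.binary_cross_entropy(D(x_filled.detach()), mask)
    g_adv = F.binary_cross_entropy(D(x_filled), torch.ones_like(mask))
    g_rec = ((x_hat - x) ** 2 * mask).mean()        # fit observed entries
    return d_loss, g_adv + 10.0 * g_rec             # weight is illustrative
```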
arXiv Detail & Related papers (2022-09-22T06:06:56Z) - Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
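For context, a minimal affine coupling layer, the standard building block of such normalizing flows; MANF's multi-scale attention conditioning is abstracted into a `ctx` vector, and `dim` is assumed even.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling: half the dimensions pass through unchanged
    and, together with the context, parameterize an affine map of the
    other half, so the Jacobian log-determinant is cheap."""
    def __init__(self, dim, ctx_dim, hidden=64):   # dim assumed even
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim // 2 + ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim))                # emits (scale, shift)

    def forward(self, x, ctx):
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(torch.cat([x1, ctx], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)                          # keep scales stable
        y2 = x2 * torch.exp(s) + t
        return torch.cat([x1, y2], dim=-1), s.sum(dim=-1)  # y, log|det J|
```

Stacking such layers (swapping the halves in between) gives an invertible map with an exact likelihood, which is what lets a flow model the full predictive distribution non-autoregressively.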
arXiv Detail & Related papers (2022-05-16T07:53:42Z) - Task-Synchronized Recurrent Neural Networks [0.0]
When handling irregularly-sampled data, Recurrent Neural Networks (RNNs) traditionally resort to ignoring the irregular timing, feeding the time differences as additional inputs, or resampling the data.
We propose an elegant, straightforward alternative in which the RNN itself is in effect resampled in time to match the timing of the data or the task at hand.
We confirm empirically that our models can effectively compensate for the time-non-uniformity of the data and demonstrate that they compare favorably to data resampling, classical RNN methods, and alternative RNN models.
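One plausible reading of resampling the RNN in time, sketched below: interpolate between the old state and the GRU's proposed state in proportion to the elapsed time, so the effective update rate follows the data's own clock. This is illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class TimeScaledGRUCell(nn.Module):
    """Interpolates between the previous state and the GRU's proposed
    state in proportion to the elapsed time dt, so the effective update
    rate follows the data's own (irregular) clock."""
    def __init__(self, input_dim, hidden_dim, unit_dt=1.0):
        super().__init__()
        self.gru = nn.GRUCell(input_dim, hidden_dim)
        self.unit_dt = unit_dt   # gap treated as one "full" update

    def forward(self, x, h, dt):
        # x: (B, input_dim); h: (B, hidden_dim); dt: (B, 1)
        h_new = self.gru(x, h)
        gamma = torch.clamp(dt / self.unit_dt, 0.0, 1.0)
        return h + gamma * (h_new - h)   # partial update for short gaps
```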
arXiv Detail & Related papers (2022-04-11T15:27:40Z) - CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation [107.63407690972139]
Conditional Score-based Diffusion models for Imputation (CSDI) is a novel time series imputation method that utilizes score-based diffusion models conditioned on observed data.
CSDI improves by 40-70% over existing probabilistic imputation methods on popular performance metrics.
In addition, CSDI reduces the error by 5-20% compared to the state-of-the-art deterministic imputation methods.
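A hedged sketch of the self-supervised conditional training recipe behind such methods: hide a random subset of observed entries, diffuse only that part, and train the network to predict the noise given the remaining clean values. The `model` signature and masking ratio are assumptions.

```python
import torch

def conditional_diffusion_loss(model, x, obs_mask, alphas_bar, T=50):
    """Self-supervised conditional denoising: a random half of the
    observed entries becomes the imputation target; noise is added only
    there, and the model predicts that noise given the clean remainder.
    `obs_mask` is boolean; `alphas_bar` is the cumulative schedule."""
    target = obs_mask & (torch.rand_like(x) < 0.5)   # hidden subset
    cond = obs_mask & ~target                        # conditioning set
    t = torch.randint(0, T, (1,)).item()
    noise = torch.randn_like(x)
    ab = alphas_bar[t]
    x_noisy = torch.sqrt(ab) * x + torch.sqrt(1 - ab) * noise
    x_in = torch.where(cond, x, x_noisy)             # keep conditioning clean
    pred = model(x_in, cond.float(), t)              # predicted noise
    return ((pred - noise)[target] ** 2).mean()
```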
arXiv Detail & Related papers (2021-07-07T22:20:24Z) - Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
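A toy sketch of the latent-ODE idea: context points parameterize a distribution over a latent code z, and each sample of z defines one ODE trajectory (Euler-integrated here), so uncertainty over the dynamics comes from sampling z. All architectural details are illustrative.

```python
import torch
import torch.nn as nn

class TinyNDP(nn.Module):
    """Context points -> q(z); each sample of z defines one ODE
    trajectory, so uncertainty over the dynamics comes from z."""
    def __init__(self, hidden=32, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(2, 2 * z_dim)   # (t, x) -> (mu, logvar)
        self.f = nn.Sequential(nn.Linear(1 + z_dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, 1))

    def forward(self, ctx_t, ctx_x, t0, t1, n_steps=20):
        stats = self.enc(torch.stack([ctx_t, ctx_x], dim=-1)).mean(dim=0)
        mu, logvar = stats.chunk(2)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # sample
        x = ctx_x.mean().view(1)             # crude initial condition
        dt, traj = (t1 - t0) / n_steps, []
        for _ in range(n_steps):             # Euler integration of dx/dt
            x = x + dt * self.f(torch.cat([x, z]))
            traj.append(x)
        return torch.stack(traj)             # one sampled trajectory
```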
arXiv Detail & Related papers (2021-03-23T09:32:06Z) - Learning from Irregularly-Sampled Time Series: A Missing Data Perspective [18.493394650508044]
Irregularly-sampled time series occur in many domains including healthcare.
We model irregularly-sampled time series data as a sequence of index-value pairs sampled from a continuous but unobserved function.
We propose learning methods for this framework based on variational autoencoders and generative adversarial networks.
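The paper's data view is easy to make concrete: an irregularly-sampled series is just the set of (index, value) pairs at which a value was observed, as in the small sketch below (NaN gaps stand in for missingness).

```python
import numpy as np

def to_index_value_pairs(series):
    """View a NaN-gapped series as the set of (index, value) pairs at
    which a value was actually observed."""
    idx = np.flatnonzero(~np.isnan(series))
    return np.stack([idx.astype(float), series[idx]], axis=1)  # (N_obs, 2)

x = np.array([0.3, np.nan, np.nan, 1.1, np.nan, 0.7])
print(to_index_value_pairs(x))   # [[0.  0.3] [3.  1.1] [5.  0.7]]
```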
arXiv Detail & Related papers (2020-08-17T20:01:55Z) - STEER: Simple Temporal Regularization For Neural ODEs [80.80350769936383]
We propose a new regularization technique: randomly sampling the end time of the ODE during training.
The proposed regularization is simple to implement, has negligible overhead and is effective across a wide variety of tasks.
We show through experiments on normalizing flows, time series models and image recognition that the proposed regularization can significantly decrease training time and even improve performance over baseline models.
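The trick is essentially one line, sketched here: at each training step, sample the integration end time from a window around the nominal end time rather than fixing it. The window width and the uniform choice are illustrative.

```python
import torch

def steer_end_time(t1, b=0.5):
    """STEER-style regularization sketch: sample the ODE's integration
    end time from a window around the nominal end time t1 at each
    training step (window width b is illustrative)."""
    return t1 + (2 * torch.rand(()) - 1) * b

# inside a training loop, with any ODE solver `odeint(f, y0, ts)`:
# t_end = steer_end_time(torch.tensor(1.0))
# ys = odeint(f, y0, torch.stack([torch.tensor(0.0), t_end]))
```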
arXiv Detail & Related papers (2020-06-18T17:44:50Z)