Missing Value Imputation on Multidimensional Time Series
- URL: http://arxiv.org/abs/2103.01600v3
- Date: Wed, 21 Jun 2023 07:13:14 GMT
- Title: Missing Value Imputation on Multidimensional Time Series
- Authors: Parikshit Bansal, Prathamesh Deshpande, Sunita Sarawagi
- Abstract summary: We present DeepMVI, a deep learning method for missing value imputation in multidimensional time-series datasets.
DeepMVI combines fine-grained and coarse-grained patterns along a time series, and trends from related series across categorical dimensions.
Experiments show that DeepMVI is significantly more accurate, reducing error by more than 50% in more than half the cases.
- Score: 16.709162372224355
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present DeepMVI, a deep learning method for missing value imputation in
multidimensional time-series datasets. Missing values are commonplace in
decision support platforms that aggregate data over long time stretches from
disparate sources, and reliable data analytics calls for careful handling of
missing data. One strategy is imputing the missing values, and a wide variety
of algorithms exist spanning simple interpolation, matrix factorization methods
like SVD, statistical models like Kalman filters, and recent deep learning
methods. We show that often these provide worse results on aggregate analytics
compared to just excluding the missing data. DeepMVI uses a neural network to
combine fine-grained and coarse-grained patterns along a time series, and
trends from related series across categorical dimensions. After finding
off-the-shelf neural architectures inadequate, we design our own network that
includes a
temporal transformer with a novel convolutional window feature, and kernel
regression with learned embeddings. The parameters and their training are
designed carefully to generalize across different placements of missing blocks
and data characteristics. Experiments across nine real datasets and four
different missing-data scenarios, comparing against seven existing methods,
show that DeepMVI is significantly more accurate, reducing error by more than
50% in more than half the cases relative to the best existing method. Although
DeepMVI is slower than simpler matrix-factorization methods, we justify the
added time overhead by showing that it is the only option that yields overall
more accurate analytics than dropping missing values.
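To make the architecture concrete, here is a minimal, hypothetical PyTorch sketch of the two ingredients the abstract names: a temporal transformer over convolutional window features, and kernel regression across related series through learned embeddings. Every module name, size, and shape below is an assumption made for illustration, not the authors' published implementation.

```python
# Hypothetical sketch of the two ingredients named in the abstract;
# shapes, sizes, and names are illustrative assumptions, not the
# authors' published architecture.
import torch
import torch.nn as nn

class TemporalBranch(nn.Module):
    """Transformer over convolutional window features of one series."""
    def __init__(self, d_model=64, window=8):
        super().__init__()
        # The convolution summarizes a local window around each time
        # step (the "convolutional window feature" of the abstract).
        self.window_conv = nn.Conv1d(1, d_model, kernel_size=window,
                                     padding="same")
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                        # x: (batch, time)
        h = self.window_conv(x.unsqueeze(1))     # (batch, d_model, time)
        h = self.encoder(h.transpose(1, 2))      # (batch, time, d_model)
        return self.head(h).squeeze(-1)          # (batch, time)

class KernelRegressionBranch(nn.Module):
    """Kernel regression across related series via learned embeddings."""
    def __init__(self, n_series, d_embed=16):
        super().__init__()
        self.embed = nn.Embedding(n_series, d_embed)

    def forward(self, series_ids, values):
        # values: (n_series, time). Related series are weighted by a
        # softmax kernel over the learned series embeddings.
        e = self.embed(series_ids)               # (n_series, d_embed)
        k = torch.softmax(e @ e.t(), dim=-1)     # (n_series, n_series)
        return k @ values                        # (n_series, time)
```

Training such a network would slide masked blocks over observed positions so that it learns to reconstruct them, in the spirit of the paper's stated goal of generalizing across different placements of missing blocks.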
Related papers
- Multilinear Kernel Regression and Imputation via Manifold Learning [5.482532589225551]
MultiL-KRIM builds on the intuitive concept of tangent spaces to smooth manifolds and incorporates collaboration among point-cloud neighbors (regressors) directly into the data-modeling term of the loss function.
Two important application domains showcase the functionality of MultiL-KRIM: time-varying-graph-signal (TVGS) recovery, and reconstruction of highly accelerated dynamic-magnetic-resonance-imaging (dMRI) data.
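For readers unfamiliar with the kernel-regression ingredient, a generic Nadaraya-Watson imputer over a point cloud looks roughly as follows. This is an illustrative sketch of plain kernel regression only, not MultiL-KRIM's multilinear decomposition; the Gaussian kernel and bandwidth are assumptions.

```python
import numpy as np

def kernel_impute(X, mask, bandwidth=1.0):
    """Fill missing entries of X by Nadaraya-Watson kernel regression
    over rows, comparing rows only on commonly observed coordinates.

    X    : (n, d) data matrix (values at missing entries are ignored)
    mask : (n, d) boolean, True where X is observed
    """
    X_hat = X.copy()
    for i in range(X.shape[0]):
        obs_i = mask[i]
        if obs_i.all():
            continue
        # Squared distance to every row on the shared observed support.
        common = mask & obs_i
        dist2 = np.where(common, (X - X[i]) ** 2, 0.0).sum(axis=1)
        w = np.exp(-dist2 / (2 * bandwidth ** 2))
        w[i] = 0.0                          # exclude the row itself
        for j in np.where(~obs_i)[0]:
            donors = mask[:, j]             # rows observed at column j
            denom = (w * donors).sum()
            if denom > 0:
                X_hat[i, j] = (w * donors * X[:, j]).sum() / denom
    return X_hat
```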
arXiv Detail & Related papers (2024-02-06T02:50:42Z)
- Easy Differentially Private Linear Regression [16.325734286930764]
We study an algorithm which uses the exponential mechanism to select a model with high Tukey depth from a collection of non-private regression models.
We find that this algorithm obtains strong empirical performance in the data-rich setting.
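As a refresher on the selection step, the exponential mechanism picks a candidate with probability proportional to exp(epsilon * utility / (2 * sensitivity)). Below is a generic sketch with a placeholder utility standing in for Tukey depth; it illustrates the standard mechanism, not the paper's exact computation.

```python
import numpy as np

def exponential_mechanism(utilities, epsilon, sensitivity=1.0, rng=None):
    """Sample an index with probability proportional to
    exp(epsilon * utility / (2 * sensitivity)). Here `utilities` would
    hold (an approximation of) the Tukey depth of each candidate model."""
    rng = np.random.default_rng() if rng is None else rng
    logits = epsilon * np.asarray(utilities, float) / (2.0 * sensitivity)
    logits -= logits.max()          # subtract max for numerical stability
    p = np.exp(logits)
    p /= p.sum()
    return rng.choice(len(p), p=p)

# Usage: pick among non-private candidate models by approximate depth.
# chosen = exponential_mechanism(depths, epsilon=1.0)
```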
arXiv Detail & Related papers (2022-08-15T17:42:27Z)
- Minimax rate of consistency for linear models with missing values [0.0]
Missing values arise in most real-world data sets due to the aggregation of multiple sources and intrinsically missing information (sensor failure, unanswered questions in surveys...).
In this paper, we focus on the extensively studied linear models, but in the presence of missing values, which turns out to be quite a challenging task.
This eventually requires solving a number of learning tasks that is exponential in the number of input features, which makes predictions impossible for current real-world datasets.
arXiv Detail & Related papers (2022-02-03T08:45:34Z)
- RIFLE: Imputation and Robust Inference from Low Order Marginals [10.082738539201804]
We develop a statistical inference framework for regression and classification in the presence of missing data without imputation.
Our framework, RIFLE, estimates low-order moments of the underlying data distribution with corresponding confidence intervals to learn a distributionally robust model.
Our experiments demonstrate that RIFLE outperforms other benchmark algorithms when the percentage of missing values is high and/or when the number of data points is relatively small.
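A common way to obtain such low-order moments without imputing is available-case (pairwise) estimation: compute each mean and covariance entry from the rows where the relevant coordinates are jointly observed. The sketch below shows that standard idea only; RIFLE's confidence intervals and the distributionally robust model fitted on top are not shown.

```python
import numpy as np

def available_case_moments(X):
    """Estimate the mean vector and covariance matrix of data with NaNs,
    using, for each entry, only rows where the needed coordinates are
    jointly observed (available-case / pairwise estimation)."""
    n, d = X.shape
    obs = ~np.isnan(X)
    mu = np.nanmean(X, axis=0)            # per-coordinate means
    cov = np.full((d, d), np.nan)
    for a in range(d):
        for b in range(a, d):
            both = obs[:, a] & obs[:, b]  # rows observing both coords
            if both.sum() > 1:
                xa = X[both, a] - X[both, a].mean()
                xb = X[both, b] - X[both, b].mean()
                cov[a, b] = cov[b, a] = (xa * xb).sum() / (both.sum() - 1)
    return mu, cov
```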
arXiv Detail & Related papers (2021-09-01T23:17:30Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models to perform inference from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- A Local Similarity-Preserving Framework for Nonlinear Dimensionality Reduction with Neural Networks [56.068488417457935]
We propose a novel local nonlinear approach named Vec2vec for general purpose dimensionality reduction.
To train the neural network, we build the neighborhood similarity graph of a matrix and define the context of data points.
Experiments on data classification and clustering over eight real datasets show that Vec2vec outperforms several classical dimensionality reduction methods under statistical hypothesis testing.
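The graph-building step reads like standard k-nearest-neighbor construction. Below is a plain illustrative version using cosine similarity; the similarity measure, the choice of k, and the exact "context" definition in the paper are assumptions here.

```python
import numpy as np

def knn_similarity_graph(X, k=5):
    """Build a k-nearest-neighbor similarity graph over the rows of X
    using cosine similarity. Each node's neighbors would then serve as
    its 'context' when training the embedding network."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = Xn @ Xn.T                        # pairwise cosine similarities
    np.fill_diagonal(S, -np.inf)         # exclude self-similarity
    graph = {}
    for i in range(X.shape[0]):
        nbrs = np.argpartition(S[i], -k)[-k:]
        graph[i] = {int(j): float(S[i, j]) for j in nbrs}
    return graph
```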
arXiv Detail & Related papers (2021-03-10T23:10:47Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
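In generic form, an objective of the kind described combines a reconstruction term with a row-wise $l_{2,p}$ penalty on the projection matrix; zeroed rows correspond to discarded features. The sketch below is a plausible rendition for illustration, and the paper's exact formulation may differ.

```python
import numpy as np

def sparse_pca_objective(X, W, lam=0.1, p=1.0):
    """Generic objective: reconstruction error of projecting X onto W
    plus an l_{2,p} penalty that zeroes whole rows of W (one row per
    input feature), which is what performs feature selection.

    X : (n, d) data matrix, W : (d, k) projection matrix."""
    recon = X - X @ W @ W.T                  # reconstruction residual
    row_norms = np.linalg.norm(W, axis=1)    # l_2 norm of each row of W
    return (recon ** 2).sum() + lam * (row_norms ** p).sum()
```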
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
- Variational Bayesian Unlearning [54.26984662139516]
We study the problem of approximately unlearning a Bayesian model from a small subset of the training data to be erased.
We show that it is equivalent to minimizing an evidence upper bound which trades off between fully unlearning from erased data vs. not entirely forgetting the posterior belief.
In model training with variational inference (VI), only an approximate (instead of exact) posterior belief given the full data can be obtained, which makes unlearning even more challenging.
arXiv Detail & Related papers (2020-10-24T11:53:00Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We evaluate a method we call prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
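Mechanically, prediction-time batch normalization means recomputing batch-norm statistics from the test batch itself rather than using the running averages accumulated during training. One way to approximate this in PyTorch is sketched below; treat it as an illustrative rendition rather than the paper's reference code.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def predict_with_batch_stats(model, x):
    """Inference with BatchNorm statistics recomputed from the current
    test batch (prediction-time batch normalization) instead of the
    running averages accumulated during training."""
    model.eval()
    bn_layers = [m for m in model.modules()
                 if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d,
                                   nn.BatchNorm3d))]
    for m in bn_layers:
        m.train()                        # normalize with batch statistics
        m.track_running_stats = False    # ...without updating running stats
    out = model(x)
    for m in bn_layers:                  # restore normal eval behavior
        m.track_running_stats = True
        m.eval()
    return out
```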
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
- Deep transformation models: Tackling complex regression problems with neural network based transformation models [0.0]
We present a deep transformation model for probabilistic regression.
It estimates the whole conditional probability distribution, which is the most thorough way to capture uncertainty about the outcome.
Our method works for complex input data, which we demonstrate by employing a CNN architecture on image data.
arXiv Detail & Related papers (2020-04-01T14:23:12Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.