Multi-version Tensor Completion for Time-delayed Spatio-temporal Data
- URL: http://arxiv.org/abs/2105.05326v1
- Date: Tue, 11 May 2021 19:55:56 GMT
- Title: Multi-version Tensor Completion for Time-delayed Spatio-temporal Data
- Authors: Cheng Qian, Nikos Kargas, Cao Xiao, Lucas Glass, Nicholas
Sidiropoulos, Jimeng Sun
- Abstract summary: Real-world spatio-temporal data is often incomplete or inaccurate due to various data loading delays.
We propose a low-rank tensor model to predict the updates over time.
We obtain up to 27.2% lower root mean-squared-error compared to the best baseline method.
- Score: 50.762087239885936
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-world spatio-temporal data is often incomplete or inaccurate due to
various data loading delays. For example, a location-disease-time tensor of
case counts can have multiple delayed updates of recent temporal slices for
some locations or diseases. Recovering such missing or noisy (under-reported)
elements of the input tensor can be viewed as a generalized tensor completion
problem. Existing tensor completion methods usually assume that i) missing
elements are randomly distributed and ii) noise for each tensor element is
i.i.d. zero-mean. Both assumptions can be violated for spatio-temporal tensor
data. We often observe multiple versions of the input tensor with different
under-reporting noise levels. The amount of noise can be time- or
location-dependent as more updates are progressively introduced to the tensor.
We model such dynamic data as a multi-version tensor with an extra tensor mode
capturing the data updates. We propose a low-rank tensor model to predict the
updates over time. We demonstrate that our method can accurately predict the
ground-truth values of many real-world tensors. We obtain up to 27.2% lower
root mean-squared-error compared to the best baseline method. Finally, we
extend our method to track the tensor data over time, leading to significant
computational savings.
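The abstract sketches the core recipe: stack successive data updates as an extra tensor mode and fit a low-rank model over the resulting multi-version tensor, restricted to the entries that have actually been reported. The snippet below is a minimal illustrative sketch of that data layout with a plain masked CP-ALS fit; the sizes, the Poisson stand-in counts, the random mask, and the `masked_cp_als` helper are all assumptions for illustration, not the authors' algorithm, which models structured, time- and location-dependent under-reporting rather than random missingness.

```python
import numpy as np

# Hypothetical multi-version count tensor: location x disease x time x update-version.
L, D, T, V, R = 20, 5, 30, 4, 6                           # illustrative sizes and CP rank
rng = np.random.default_rng(0)
X = rng.poisson(10.0, size=(L, D, T, V)).astype(float)    # stand-in case counts
mask = rng.random((L, D, T, V)) < 0.7                      # observed entries (random here)

def masked_cp_als(X, mask, rank, n_iters=50, reg=1e-3):
    """Minimal masked CP-ALS: fit factor matrices so that the rank-`rank`
    CP model matches X on the observed entries only."""
    dims = X.shape
    factors = [rng.standard_normal((n, rank)) for n in dims]
    Xf = np.where(mask, X, 0.0)
    for _ in range(n_iters):
        for mode in range(len(dims)):
            # Khatri-Rao product of the other factors, matching the C-order unfolding.
            others = [factors[m] for m in range(len(dims)) if m != mode]
            kr = others[0]
            for f in others[1:]:
                kr = (kr[:, None, :] * f[None, :, :]).reshape(-1, rank)
            Xm = np.moveaxis(Xf, mode, 0).reshape(dims[mode], -1)
            Mm = np.moveaxis(mask, mode, 0).reshape(dims[mode], -1)
            new = np.empty_like(factors[mode])
            for i in range(dims[mode]):
                w = Mm[i].astype(float)
                # Ridge-regularized least squares restricted to observed entries.
                G = (kr * w[:, None]).T @ kr + reg * np.eye(rank)
                new[i] = np.linalg.solve(G, kr.T @ (w * Xm[i]))
            factors[mode] = new
    return factors

factors = masked_cp_als(X, mask, R)
recon = np.einsum('ir,jr,kr,lr->ijkl', *factors)  # dense low-rank reconstruction
latest_update = recon[..., -1]                    # predicted latest-version slice
```

In the paper's setting the mask and the version mode encode reporting delays, so the fitted model can extrapolate how recent temporal slices will be revised; the sketch above only illustrates the four-mode layout and the masked low-rank fit.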
Related papers
- A Sparse Tensor Generator with Efficient Feature Extraction [1.3124513975412255]
A major obstacle for research in sparse tensor operations is the deficiency of a broad-scale sparse tensor dataset.
We have developed a smart sparse tensor generator that mimics the substantial features of real sparse tensors.
The effectiveness of our generator is validated through the quality of features and the performance of decomposition.
arXiv Detail & Related papers (2024-05-08T10:28:20Z) - Multi-Dictionary Tensor Decomposition [5.733331864416094]
We propose a framework for Multi-Dictionary Tensor Decomposition (MDTD).
We derive a general optimization algorithm for MDTD that handles both complete input and input with missing values.
It can impute missing values in billion-entry tensors more accurately and scalably than state-of-the-art competitors.
arXiv Detail & Related papers (2023-09-18T12:31:56Z) - Low-Rank Tensor Function Representation for Multi-Dimensional Data
Recovery [52.21846313876592]
Low-rank tensor function representation (LRTFR) can continuously represent data beyond meshgrid with infinite resolution.
We develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization.
Experiments substantiate the superiority and versatility of our method compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-12-01T04:00:38Z) - Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor
Decompositions [51.19236668224547]
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1 + \epsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
arXiv Detail & Related papers (2022-07-15T11:55:09Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal
traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - Efficient Tensor Completion via Element-wise Weighted Low-rank Tensor
Train with Overlapping Ket Augmentation [18.438177637687357]
We propose a novel tensor completion approach via the element-wise weighted technique.
We specifically consider the recovery quality of edge elements from adjacent blocks.
Our experimental results demonstrate that the proposed algorithm TWMac-TT outperforms several other competing tensor completion methods.
arXiv Detail & Related papers (2021-09-13T06:50:37Z) - MTC: Multiresolution Tensor Completion from Partial and Coarse
Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z) - TenIPS: Inverse Propensity Sampling for Tensor Completion [34.209486541525294]
We study the problem of completing a partially observed tensor with missing-not-at-random (MNAR) observations.
We assume that both the original tensor and the tensor of propensities have low multilinear rank.
The algorithm first estimates the propensities using a convex relaxation and then predicts missing values using a higher-order SVD approach; a rough sketch of this two-step pipeline appears after this list.
arXiv Detail & Related papers (2021-01-01T22:13:19Z) - Time-Aware Tensor Decomposition for Missing Entry Prediction [14.61218681943499]
Given a time-evolving tensor with missing entries, how can we effectively factorize it for precisely predicting the missing entries?
We propose TATD (Time-Aware Decomposition), a novel tensor decomposition method for real-world temporal tensors.
We show that TATD provides the state-of-the-art accuracy for decomposing temporal tensors.
arXiv Detail & Related papers (2020-12-16T10:52:34Z) - Uncertainty quantification for nonconvex tensor completion: Confidence
intervals, heteroscedasticity and optimality [92.35257908210316]
We study the problem of estimating a low-rank tensor given incomplete and corrupted observations.
We find that it attains unimprovable rates of $\ell_2$ accuracy.
arXiv Detail & Related papers (2020-06-15T17:47:13Z)
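Following up on the TenIPS entry above, here is a rough illustrative sketch of an inverse-propensity-weighted completion pipeline: estimate entry-wise observation propensities, reweight the observed entries, and project onto a low-multilinear-rank subspace with a truncated HOSVD. The rank-1 marginal propensity model, the `truncated_hosvd` helper, and all sizes are stand-in assumptions; TenIPS itself estimates propensities via a convex relaxation, which is not reproduced here.

```python
import numpy as np

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def truncated_hosvd(T, ranks):
    """Project T onto the top-r left singular subspaces of each mode unfolding."""
    out = T
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        Ur = U[:, :r]
        out = mode_product(out, Ur @ Ur.T, mode)
    return out

rng = np.random.default_rng(1)
dims, cp_rank = (15, 12, 10), 3

# Illustrative ground truth: a low-rank tensor plus a little noise.
A, B, C = (rng.standard_normal((d, cp_rank)) for d in dims)
X = np.einsum('ir,jr,kr->ijk', A, B, C) + 0.05 * rng.standard_normal(dims)

# MNAR-style observations: each entry is seen with a probability that varies by index.
p_i, p_j, p_k = (rng.uniform(0.4, 0.9, d) for d in dims)
P_true = np.einsum('i,j,k->ijk', p_i, p_j, p_k)
mask = rng.random(dims) < P_true

# Step 1 (stand-in for the paper's convex relaxation): rank-1 propensity estimate
# built from the per-index observation rates of the mask.
m_i = mask.mean(axis=(1, 2))
m_j = mask.mean(axis=(0, 2))
m_k = mask.mean(axis=(0, 1))
P_hat = np.clip(np.einsum('i,j,k->ijk', m_i, m_j, m_k) / mask.mean() ** 2, 1e-2, 1.0)

# Step 2: truncated HOSVD of the inverse-propensity-weighted, zero-filled tensor.
X_ipw = np.where(mask, X / P_hat, 0.0)
X_hat = truncated_hosvd(X_ipw, ranks=(cp_rank, cp_rank, cp_rank))
```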
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.