TenIPS: Inverse Propensity Sampling for Tensor Completion
- URL: http://arxiv.org/abs/2101.00323v2
- Date: Mon, 1 Mar 2021 23:34:26 GMT
- Title: TenIPS: Inverse Propensity Sampling for Tensor Completion
- Authors: Chengrun Yang, Lijun Ding, Ziyang Wu, Madeleine Udell
- Abstract summary: We study the problem of completing a partially observed tensor with MNAR observations.
We assume that both the original tensor and the tensor of propensities have low multilinear rank.
The algorithm first estimates the propensities using a convex relaxation and then predicts missing values using a higher-order SVD approach.
- Score: 34.209486541525294
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Tensors are widely used to represent multiway arrays of data. The recovery of
missing entries in a tensor has been extensively studied, generally under the
assumption that entries are missing completely at random (MCAR). However, in
most practical settings, observations are missing not at random (MNAR): the
probability that a given entry is observed (also called the propensity) may
depend on other entries in the tensor or even on the value of the missing
entry. In this paper, we study the problem of completing a partially observed
tensor with MNAR observations, without prior information about the
propensities. To complete the tensor, we assume that both the original tensor
and the tensor of propensities have low multilinear rank. The algorithm first
estimates the propensities using a convex relaxation and then predicts missing
values using a higher-order SVD approach, reweighting the observed tensor by
the inverse propensities. We provide finite-sample error bounds on the
resulting complete tensor. Numerical experiments demonstrate the effectiveness
of our approach.
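The abstract's two-stage pipeline (estimate propensities, then reweight and truncate) can be sketched in numpy. This is a minimal illustration of the second stage only: the propensities are assumed to be known inputs rather than estimated via the paper's convex relaxation, and the function names (`hosvd_truncate`, `tenips_step2`) are hypothetical, not from the paper's code.

```python
import numpy as np

def hosvd_truncate(T, ranks):
    """Truncated higher-order SVD (HOSVD): project each mode of T onto
    its top-r left singular vectors, then reconstruct."""
    factors = []
    for mode, r in enumerate(ranks):
        # Unfold T along `mode` into an (n_mode, prod of other dims) matrix.
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    # Core tensor: contract each mode with the transposed factor matrix.
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    # Reconstruct by multiplying the factor matrices back in.
    out = core
    for mode, U in enumerate(factors):
        out = np.moveaxis(
            np.tensordot(U, np.moveaxis(out, mode, 0), axes=1), 0, mode)
    return out

def tenips_step2(observed, mask, propensities, ranks):
    """Sketch of the reweighting stage: divide each observed entry by its
    propensity (inverse propensity sampling), zero out missing entries,
    then project to low multilinear rank with a truncated HOSVD."""
    reweighted = np.where(mask, observed / propensities, 0.0)
    return hosvd_truncate(reweighted, ranks)
```

With a fully observed rank-1 tensor and unit propensities, the sketch reduces to a plain truncated HOSVD and recovers the tensor exactly; under MNAR sampling, the inverse-propensity weights make the reweighted tensor an unbiased estimate of the complete one before truncation.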
Related papers
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - Decomposable Sparse Tensor on Tensor Regression [1.370633147306388]
We consider the sparse low-rank tensor-on-tensor regression where predictors $\mathcal{X}$ and responses $\mathcal{Y}$ are both high-dimensional tensors.
We propose a fast solution based on stagewise search composed by contraction part and generation part which are optimized alternatively.
arXiv Detail & Related papers (2022-12-09T18:16:41Z) - Sparse Nonnegative Tucker Decomposition and Completion under Noisy
Observations [22.928734507082574]
We propose a sparse nonnegative Tucker decomposition and completion method for the recovery of underlying nonnegative data under noisy observations.
Our theoretical results are better than those by existing tensor-based or matrix-based methods.
arXiv Detail & Related papers (2022-08-17T13:29:14Z) - Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z) - Noisy Tensor Completion via Low-rank Tensor Ring [41.86521269183527]
Tensor completion is a fundamental tool for incomplete data analysis, where the goal is to predict missing entries from partial observations.
Existing methods often make the explicit or implicit assumption that the observed entries are noise-free to provide a theoretical guarantee of exact recovery of missing entries.
This paper proposes a novel noisy tensor completion model, which addresses the inability of existing works to handle high-order and noisy observations.
arXiv Detail & Related papers (2022-03-14T14:09:43Z) - Robust M-estimation-based Tensor Ring Completion: a Half-quadratic
Minimization Approach [14.048989759890475]
We develop a robust approach to tensor ring completion that uses an M-estimator as its error statistic.
We present two HQ-based algorithms based on truncated singular value decomposition and matrix factorization.
arXiv Detail & Related papers (2021-06-19T04:37:50Z) - MTC: Multiresolution Tensor Completion from Partial and Coarse
Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z) - Optimization Variance: Exploring Generalization Properties of DNNs [83.78477167211315]
The test error of a deep neural network (DNN) often demonstrates double descent.
We propose a novel metric, optimization variance (OV), to measure the diversity of model updates.
arXiv Detail & Related papers (2021-06-03T09:34:17Z) - Multi-version Tensor Completion for Time-delayed Spatio-temporal Data [50.762087239885936]
Real-world spatio-temporal data is often incomplete or inaccurate due to various data loading delays.
We propose a low-rank tensor model to predict the updates over time.
We obtain up to 27.2% lower root mean-squared-error compared to the best baseline method.
arXiv Detail & Related papers (2021-05-11T19:55:56Z) - Sparse Nonnegative Tensor Factorization and Completion with Noisy
Observations [22.928734507082574]
We study the sparse nonnegative tensor factorization and completion problem from partial and noisy observations.
We show that the error bounds of the estimator of the proposed model can be established under general noisy observations.
arXiv Detail & Related papers (2020-07-21T07:17:52Z) - Uncertainty quantification for nonconvex tensor completion: Confidence
intervals, heteroscedasticity and optimality [92.35257908210316]
We study the problem of estimating a low-rank tensor given incomplete and corrupted observations.
We find that it attains unimprovable $\ell_2$ accuracy rates.
arXiv Detail & Related papers (2020-06-15T17:47:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.