Sparse Nonnegative Tensor Factorization and Completion with Noisy
Observations
- URL: http://arxiv.org/abs/2007.10626v3
- Date: Wed, 20 Oct 2021 12:47:16 GMT
- Title: Sparse Nonnegative Tensor Factorization and Completion with Noisy
Observations
- Authors: Xiongjun Zhang and Michael K. Ng
- Abstract summary: We study the sparse nonnegative tensor factorization and completion problem from partial and noisy observations.
We show that error bounds for the estimator of the proposed model can be established under observations with general noise.
- Score: 22.928734507082574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the sparse nonnegative tensor factorization and
completion problem from partial and noisy observations for third-order tensors.
Because of sparsity and nonnegativity, the underlying tensor is decomposed into
the tensor-tensor product of one sparse nonnegative tensor and one nonnegative
tensor. We propose to minimize the sum of the maximum likelihood estimation for
the observations with nonnegativity constraints and the tensor $\ell_0$ norm
for the sparse factor. We show that error bounds for the estimator of the
proposed model can be established under observations with general noise. The
detailed error bounds under specific noise distributions including additive
Gaussian noise, additive Laplace noise, and Poisson observations can be
derived. Moreover, the minimax lower bounds are shown to be matched with the
established upper bounds up to a logarithmic factor of the sizes of the
underlying tensor. These theoretical results for tensors are better than those
obtained for matrices, and this illustrates the advantage of the use of
nonnegative sparse tensor models for completion and denoising. Numerical
experiments are provided to validate the superiority of the proposed
tensor-based method compared with the matrix-based approach.
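For a concrete picture of the model described in the abstract, the sketch below illustrates the tensor-tensor product (t-product) of third-order tensors via the FFT along the third mode, together with the kind of penalized objective used in the additive Gaussian noise case: a squared loss on the observed entries plus a tensor $\ell_0$ penalty on the sparse nonnegative factor. This is only an illustration under assumed names and synthetic data (t_product, objective_gaussian, the weight lam, and the tensor sizes are not from the paper), not the authors' algorithm, which additionally enforces the nonnegativity constraints during optimization.

```python
import numpy as np

def t_product(A, B):
    """Tensor-tensor product (t-product) of two third-order tensors,
    computed slice-wise in the Fourier domain along the third mode."""
    n3 = A.shape[2]
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        Ch[:, :, k] = Ah[:, :, k] @ Bh[:, :, k]
    return np.real(np.fft.ifft(Ch, axis=2))

def objective_gaussian(X, Y, data, mask, lam):
    """Illustrative objective for additive Gaussian noise: squared loss on
    observed entries (the negative log-likelihood up to constants) plus
    lam * ||X||_0 on the sparse nonnegative factor X."""
    residual = mask * (t_product(X, Y) - data)
    return 0.5 * np.sum(residual ** 2) + lam * np.count_nonzero(X)

# Toy example with synthetic data (dimensions and parameters are arbitrary).
rng = np.random.default_rng(0)
X = rng.random((30, 5, 10)) * (rng.random((30, 5, 10)) < 0.2)   # sparse, nonnegative
Y = rng.random((5, 20, 10))                                     # nonnegative
noisy = t_product(X, Y) + 0.01 * rng.standard_normal((30, 20, 10))
mask = rng.random((30, 20, 10)) < 0.5                           # observed entries
print(objective_gaussian(X, Y, noisy, mask, lam=0.1))
```

For the Laplace and Poisson cases mentioned in the abstract, the squared loss above would be replaced by the corresponding negative log-likelihood on the observed entries.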
Related papers
- Low-Rank Tensor Learning by Generalized Nonconvex Regularization [25.115066273660478]
We study the problem of low-rank tensor learning, where only a few samples of the underlying tensor are observed.
A family of nonconvex functions is employed to characterize the low-rankness of the underlying tensor.
A majorization-minimization algorithm is designed to solve the resulting model.
arXiv Detail & Related papers (2024-10-24T03:33:20Z) - Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - Sparse Nonnegative Tucker Decomposition and Completion under Noisy
Observations [22.928734507082574]
We propose a sparse nonnegative Tucker decomposition and completion method for the recovery of underlying nonnegative data under noisy observations.
Our theoretical results are better than those by existing tensor-based or matrix-based methods.
arXiv Detail & Related papers (2022-08-17T13:29:14Z) - Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z) - Noisy Tensor Completion via Low-rank Tensor Ring [41.86521269183527]
Tensor completion is a fundamental tool for incomplete data analysis, where the goal is to predict missing entries from partial observations.
Existing methods often make the explicit or implicit assumption that the observed entries are noise-free to provide a theoretical guarantee of exact recovery of missing entries.
This paper proposes a novel noisy tensor completion model, which addresses the limitations of existing works in handling high-order and noisy observations.
arXiv Detail & Related papers (2022-03-14T14:09:43Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - MTC: Multiresolution Tensor Completion from Partial and Coarse
Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Tensor Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z) - TenIPS: Inverse Propensity Sampling for Tensor Completion [34.209486541525294]
We study the problem of completing a partially observed tensor whose entries are missing not at random (MNAR).
We assume that both the original tensor and the tensor of propensities have low multilinear rank.
The algorithm first estimates the propensities using a convex relaxation and then predicts missing values using a higher-order SVD approach (an illustrative sketch of this second stage appears after this list).
arXiv Detail & Related papers (2021-01-01T22:13:19Z) - Uncertainty quantification for nonconvex tensor completion: Confidence
intervals, heteroscedasticity and optimality [92.35257908210316]
We study the problem of estimating a low-rank tensor given incomplete and corrupted observations.
We find that it attains unimprovable rates of $\ell_2$ accuracy.
arXiv Detail & Related papers (2020-06-15T17:47:13Z) - Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method outperforms state-of-the-art methods to a significant extent.
arXiv Detail & Related papers (2020-04-30T11:52:12Z) - Tensor denoising and completion based on ordinal observations [11.193504036335503]
We consider the problem of low-rank tensor estimation from possibly incomplete, ordinal-valued observations.
We propose a multi-linear cumulative link model, develop a rank-constrained M-estimator, and obtain theoretical accuracy guarantees.
We show that the proposed estimator is minimax optimal under the class of low-rank models.
arXiv Detail & Related papers (2020-02-16T07:09:56Z)
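As referenced in the TenIPS entry above, the sketch below gives a much-simplified picture of inverse-propensity-weighted tensor completion: observed entries are divided by (assumed given) propensities, missing entries are zero-filled, and a truncated higher-order SVD yields a low-multilinear-rank estimate. The convex-relaxation stage for estimating the propensities is not reproduced, and the function names and the choice to take propensities as inputs are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a third-order tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a tensor of the given shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def truncated_hosvd(T, ranks):
    """Project each mode onto its leading singular vectors (truncated HOSVD)."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    approx = T
    for m, U in enumerate(factors):
        approx = fold(U @ (U.T @ unfold(approx, m)), m, approx.shape)
    return approx

def ips_complete(observed, mask, propensities, ranks):
    """Reweight observed entries by 1/propensity, zero-fill the rest,
    then return a low-multilinear-rank reconstruction via truncated HOSVD."""
    weighted = np.zeros_like(observed, dtype=float)
    weighted[mask] = observed[mask] / propensities[mask]
    return truncated_hosvd(weighted, ranks)

# Toy usage with uniform propensities (purely illustrative).
rng = np.random.default_rng(1)
truth = truncated_hosvd(rng.random((20, 20, 20)), (3, 3, 3))
mask = rng.random(truth.shape) < 0.3
estimate = ips_complete(truth, mask, np.full(truth.shape, 0.3), (3, 3, 3))
```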