Sparse Nonnegative Tucker Decomposition and Completion under Noisy Observations
- URL: http://arxiv.org/abs/2208.08287v1
- Date: Wed, 17 Aug 2022 13:29:14 GMT
- Title: Sparse Nonnegative Tucker Decomposition and Completion under Noisy Observations
- Authors: Xiongjun Zhang and Michael K. Ng
- Abstract summary: We propose a sparse nonnegative Tucker decomposition and completion method for the recovery of underlying nonnegative data under noisy observations.
Our theoretical results are stronger than those of existing tensor-based or matrix-based methods.
- Score: 22.928734507082574
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tensor decomposition is a powerful tool for extracting physically meaningful
latent factors from multi-dimensional nonnegative data, and has attracted
increasing interest in a variety of fields such as image processing, machine
learning, and computer vision. In this paper, we propose a sparse nonnegative
Tucker decomposition and completion method for the recovery of underlying
nonnegative data under noisy observations. Here the underlying nonnegative data
tensor is decomposed into a core tensor and several factor matrices with all
entries being nonnegative and the factor matrices being sparse. The loss
function is derived from the maximum likelihood estimation of the noisy
observations, and the $\ell_0$ norm is employed to promote sparsity of the
factor matrices. We establish an error bound for the estimator of the proposed
model under generic noise scenarios, which we then specialize to observations
with additive Gaussian noise, additive Laplace noise, and Poisson observations,
respectively. Our theoretical results are stronger than those of existing
tensor-based or matrix-based methods. Moreover, the minimax lower bounds are
shown to match the derived upper bounds up to logarithmic factors. Numerical
examples on both synthetic and real-world data sets
demonstrate the superiority of the proposed method for nonnegative tensor data
completion.
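To make the model concrete, below is a minimal numpy sketch of this model class, not the authors' algorithm: the squared loss on the observed entries corresponds to the additive-Gaussian-noise case of the maximum likelihood derivation, nonnegativity is enforced by clipping, and the $\ell_0$ constraint is imposed by hard-thresholding each factor matrix to its `s` largest entries. All function names, ranks, and step sizes are illustrative.

```python
import numpy as np

def mode_n_product(T, M, n):
    """Multiply tensor T by matrix M along mode n (the Tucker mode product)."""
    T = np.moveaxis(T, n, 0)
    out = M @ T.reshape(T.shape[0], -1)
    return np.moveaxis(out.reshape((M.shape[0],) + T.shape[1:]), 0, n)

def unfold(T, n):
    """Mode-n unfolding, consistent with mode_n_product above."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def tucker_reconstruct(G, factors):
    """Rebuild the data tensor from core G and the factor matrices."""
    X = G
    for n, U in enumerate(factors):
        X = mode_n_product(X, U, n)
    return X

def hard_threshold(U, s):
    """l0-style projection: keep the s largest entries of nonnegative U."""
    if s < U.size:
        cutoff = np.partition(U.ravel(), -s)[-s]
        U = np.where(U >= cutoff, U, 0.0)
    return U

def sparse_nn_tucker(Y, mask, ranks, s, iters=300, lr=1e-3, seed=0):
    """Projected-gradient sketch: squared loss on observed entries
    (the Gaussian-noise case), nonnegativity clip, hard threshold."""
    rng = np.random.default_rng(seed)
    G = rng.random(ranks)
    factors = [rng.random((d, r)) for d, r in zip(Y.shape, ranks)]
    for _ in range(iters):
        R = mask * (tucker_reconstruct(G, factors) - Y)  # observed residual
        for n in range(Y.ndim):
            Z = G  # core times every factor except mode n
            for k, U in enumerate(factors):
                if k != n:
                    Z = mode_n_product(Z, U, k)
            grad = unfold(R, n) @ unfold(Z, n).T  # gradient w.r.t. factor n
            factors[n] = hard_threshold(np.maximum(factors[n] - lr * grad, 0.0), s)
        Gg = R  # core gradient: contract residual with factor transposes
        for k, U in enumerate(factors):
            Gg = mode_n_product(Gg, U.T, k)
        G = np.maximum(G - lr * Gg, 0.0)
    return G, factors
```

A call such as `sparse_nn_tucker(Y, mask, ranks=(3, 3, 3), s=50)` would fit a third-order tensor observed on the binary `mask`; the Laplace and Poisson cases in the paper would swap in their own likelihood-based losses.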
Related papers
- High-Dimensional Tensor Discriminant Analysis with Incomplete Tensors [5.745276598549783]
We introduce a novel approach to tensor classification with incomplete data, framed within high-dimensional linear discriminant analysis.
Our method demonstrates excellent performance in simulations and real data analysis, even with significant proportions of missing data.
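As a toy illustration of the problem setting only, the numpy sketch below runs classical LDA on vectorized tensors after naive mean imputation of the missing entries; the paper's actual method is a high-dimensional procedure, and every dimension and missingness rate here is made up.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two classes of 4x4x4 tensors with shifted means, vectorized to 64-dim.
X = np.concatenate([rng.standard_normal((50, 4, 4, 4)) - 0.5,
                    rng.standard_normal((50, 4, 4, 4)) + 0.5]).reshape(100, -1)
y = np.repeat([0, 1], 50)
X[rng.random(X.shape) < 0.2] = np.nan                 # 20% entries missing
X = np.where(np.isnan(X), np.nanmean(X, axis=0), X)   # naive mean imputation
mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
Sw = np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)
w = np.linalg.solve(Sw + 1e-3 * np.eye(Sw.shape[0]), mu1 - mu0)  # LDA direction
pred = (X @ w > w @ (mu0 + mu1) / 2).astype(int)      # threshold at midpoint
print("training accuracy:", (pred == y).mean())
```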
arXiv Detail & Related papers (2024-10-18T18:00:16Z)
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
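For orientation, here is what the entrywise error of a truncated-eigendecomposition low-rank approximation means operationally; the RBF kernel, sizes, and rank below are arbitrary choices, and the snippet only measures the quantity the paper bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq / 2.0)                       # RBF (Gaussian) kernel matrix

w, V = np.linalg.eigh(K)                    # eigenvalues in ascending order
r = 10
K_r = (V[:, -r:] * w[-r:]) @ V[:, -r:].T    # keep the r largest eigenpairs
print("max entrywise error:", np.abs(K - K_r).max())
```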
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
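As a rough sketch of penalized maximum likelihood in the simplest elliptical case (Gaussian), the snippet below runs plain projected gradient descent on a precision matrix with an $\ell_1$ penalty; this Euclidean stand-in is for intuition only, not the paper's Riemannian method or its low-rank factor model.

```python
import numpy as np

rng = np.random.default_rng(0)
S = np.cov(rng.standard_normal((500, 8)), rowvar=False)  # sample covariance
Theta, lam, lr = np.eye(8), 0.1, 0.05
for _ in range(200):
    # gradient of tr(S @ Theta) - logdet(Theta) + lam * ||Theta||_1
    grad = S - np.linalg.inv(Theta) + lam * np.sign(Theta)
    Theta = Theta - lr * grad
    w, V = np.linalg.eigh((Theta + Theta.T) / 2)   # symmetrize, then
    Theta = (V * np.maximum(w, 1e-6)) @ V.T        # clip back to positive definite
```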
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Noisy Tensor Completion via Low-rank Tensor Ring [41.86521269183527]
Tensor completion is a fundamental tool for incomplete data analysis, where the goal is to predict missing entries from partial observations.
Existing methods often make the explicit or implicit assumption that the observed entries are noise-free to provide a theoretical guarantee of exact recovery of missing entries.
This paper proposes a novel noisy tensor completion model, which remedies the shortcomings of existing works in handling high-order and noisy observations.
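To make the tensor-ring format concrete (a generic reconstruction sketch under assumed ranks, not this paper's completion model): each entry of the tensor is the trace of a product of matrix slices, one slice per core.

```python
import numpy as np

def tr_full(cores):
    """Materialize the full tensor from tensor-ring cores; core k has
    shape (r_k, d_k, r_{k+1}), with the last rank wrapping back to r_0."""
    dims = tuple(G.shape[1] for G in cores)
    X = np.empty(dims)
    for idx in np.ndindex(*dims):
        M = cores[0][:, idx[0], :]
        for G, i in zip(cores[1:], idx[1:]):
            M = M @ G[:, i, :]
        X[idx] = np.trace(M)        # ring closure: trace ties the ends together
    return X

rng = np.random.default_rng(0)
ranks, dims = (2, 3, 4, 2), (5, 6, 7)   # r_0..r_3 with r_3 == r_0
cores = [rng.random((ranks[k], dims[k], ranks[k + 1])) for k in range(3)]
X = tr_full(cores)                       # a tensor of low tensor-ring rank
```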
arXiv Detail & Related papers (2022-03-14T14:09:43Z)
- A Robust and Flexible EM Algorithm for Mixtures of Elliptical Distributions with Missing Data [71.9573352891936]
This paper tackles the problem of missing data imputation for noisy and non-Gaussian data.
A new EM algorithm is investigated for mixtures of elliptical distributions with the property of handling potential missing data.
Experimental results on synthetic data demonstrate that the proposed algorithm is robust to outliers and can be used with non-Gaussian data.
arXiv Detail & Related papers (2022-01-28T10:01:37Z)
- Entropy Minimizing Matrix Factorization [102.26446204624885]
Nonnegative Matrix Factorization (NMF) is a widely-used data analysis technique, and has yielded impressive results in many real-world tasks.
In this study, an Entropy Minimizing Matrix Factorization framework (EMMF) is developed to tackle the above problem.
Considering that outliers are usually far fewer than normal samples, a new entropy loss function is established for matrix factorization.
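For context, the snippet below is only the classical NMF baseline (Lee-Seung multiplicative updates for squared loss) that EMMF modifies; the entry's entropy loss itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.abs(rng.standard_normal((60, 40)))   # nonnegative data matrix
k, eps = 5, 1e-9
W, H = rng.random((60, k)), rng.random((k, 40))
for _ in range(300):
    H *= (W.T @ X) / (W.T @ W @ H + eps)    # multiplicative update for H
    W *= (X @ H.T) / (W @ H @ H.T + eps)    # multiplicative update for W
print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```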
arXiv Detail & Related papers (2021-03-24T21:08:43Z)
- Improving Nonparametric Density Estimation with Tensor Decompositions [14.917420021212912]
Nonparametric density estimators often perform well on low dimensional data, but suffer when applied to higher dimensional data.
This paper investigates whether these improvements can be extended to other simplified dependence assumptions.
We prove that restricting estimation to low-rank nonnegative PARAFAC or Tucker decompositions removes the dimensionality exponent on bin width rates for multidimensional histograms.
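A toy numpy illustration of the idea, under an assumed independence structure rather than the paper's estimator: a rank-1 histogram built from 1-D marginals uses far fewer effective parameters than the full 2-D histogram it approximates.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=5000), rng.normal(size=5000)  # independent coordinates
bins = np.linspace(-4, 4, 33)
H2, _, _ = np.histogram2d(x, y, bins=[bins, bins], density=True)
px, _ = np.histogram(x, bins=bins, density=True)
py, _ = np.histogram(y, bins=bins, density=True)
H1 = np.outer(px, py)        # rank-1 estimate: 2*32 numbers instead of 32*32
print("mean absolute gap:", np.abs(H2 - H1).mean())
```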
arXiv Detail & Related papers (2020-10-06T01:39:09Z)
- Low-Rank and Sparse Enhanced Tucker Decomposition for Tensor Completion [3.498620439731324]
We introduce a unified low-rank and sparse enhanced Tucker decomposition model for tensor completion.
Our model possesses a sparse regularization term to promote a sparse core tensor, which is beneficial for tensor data compression.
Notably, our model is able to deal with different types of real-world data sets, since it exploits the potential periodicity and inherent correlation properties appearing in tensors.
arXiv Detail & Related papers (2020-10-01T12:45:39Z)
- Sparse Nonnegative Tensor Factorization and Completion with Noisy Observations [22.928734507082574]
We study the sparse nonnegative tensor factorization and completion problem from partial and noisy observations.
We show that error bounds for the estimator of the proposed model can be established under general noise observations.
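For contrast with the Tucker sketch of the main paper, a minimal sketch of the CP-style factorization this companion paper studies: the tensor is a sum of nonnegative rank-1 terms, with a crude hard threshold standing in for the sparsity constraint. Shapes and the threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 3
A, B, C = (rng.random((d, r)) for d in (10, 12, 14))  # nonnegative factors
A[A < 0.5] = 0.0                 # crude l0-style sparsification (illustrative)
X = np.einsum('ir,jr,kr->ijk', A, B, C)  # sum of r nonnegative rank-1 terms
```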
arXiv Detail & Related papers (2020-07-21T07:17:52Z)
- Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
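A hedged sketch of the kind of estimation problem this entry describes, using the Bernoulli member of the exponential family with a logistic link and a low-rank natural parameter; the loss, step size, and sizes are all assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r, lr = 30, 40, 4, 0.5
U_true, V_true = rng.standard_normal((n, r)), rng.standard_normal((m, r))
P = 1.0 / (1.0 + np.exp(-(U_true @ V_true.T)))
A = (rng.random((n, m)) < P).astype(float)      # observed bipartite edges
U, V = 0.1 * rng.standard_normal((n, r)), 0.1 * rng.standard_normal((m, r))
for _ in range(500):
    R = 1.0 / (1.0 + np.exp(-(U @ V.T))) - A    # gradient of the logistic loss
    U, V = U - (lr / m) * (R @ V), V - (lr / n) * (R.T @ U)
```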
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.