Low-Rank and Sparse Enhanced Tucker Decomposition for Tensor Completion
- URL: http://arxiv.org/abs/2010.00359v3
- Date: Thu, 20 May 2021 03:18:17 GMT
- Title: Low-Rank and Sparse Enhanced Tucker Decomposition for Tensor Completion
- Authors: Chenjian Pan and Chen Ling and Hongjin He and Liqun Qi and Yanwei Xu
- Abstract summary: We introduce a unified low-rank and sparse enhanced Tucker decomposition model for tensor completion.
Our model possesses a sparse regularization term to promote a sparse core tensor, which is beneficial for tensor data compression.
Notably, our model can handle different types of real-world data sets, since it exploits the potential periodicity and inherent correlation properties that appear in tensors.
- Score: 3.498620439731324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor completion refers to the task of estimating the missing data from an
incomplete measurement or observation, which is a core problem frequently
arising from the areas of big data analysis, computer vision, and network
engineering. Due to the multidimensional nature of high-order tensors, the
matrix approaches, e.g., matrix factorization and direct matricization of
tensors, are often not ideal for tensor completion and recovery. In this paper,
we introduce a unified low-rank and sparse enhanced Tucker decomposition model
for tensor completion. Our model possesses a sparse regularization term to
promote a sparse core tensor of the Tucker decomposition, which is beneficial
for tensor data compression. Moreover, we enforce low-rank regularization terms
on factor matrices of the Tucker decomposition for inducing the low-rankness of
the tensor at low computational cost. Numerically, we propose a
customized ADMM whose subproblems all admit easy solutions. Notably, our model
can handle different types of real-world data sets, since it exploits the
potential periodicity and inherent correlation properties that appear in
tensors. A series of computational experiments on
real-world data sets, including internet traffic data sets, color images, and
face recognition, demonstrate that our model performs better than many existing
state-of-the-art matricization and tensorization approaches in terms of
achieving higher recovery accuracy.
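As an illustrative sketch only (not the authors' algorithm), the core idea of a Tucker decomposition with a sparsity-promoting core can be demonstrated with a truncated higher-order SVD (HOSVD) followed by soft-thresholding of the core tensor; soft-thresholding is the proximal step an ADMM solver would apply for an l1 penalty on the core. All function names and the threshold value are hypothetical choices for this example:

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated HOSVD: Tucker factor matrices from mode-k unfoldings."""
    factors = []
    for k, r in enumerate(ranks):
        # Mode-k unfolding: move axis k to the front and flatten the rest.
        unfold = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :r])  # leading r left singular vectors
    # Core tensor: contract T with U_k^T along every mode.
    G = T
    for k, U in enumerate(factors):
        G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, k, 0), axes=1), 0, k)
    return G, factors

def tucker_reconstruct(G, factors):
    """Multiply the core by each factor matrix along its mode."""
    T = G
    for k, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, k, 0), axes=1), 0, k)
    return T

rng = np.random.default_rng(0)
# Synthesize a 6x6x6 tensor with multilinear rank (2, 2, 2).
T = tucker_reconstruct(rng.standard_normal((2, 2, 2)),
                       [rng.standard_normal((6, 2)) for _ in range(3)])
G, factors = hosvd(T, (2, 2, 2))
# Soft-thresholding = prox of 0.1 * ||G||_1; zeroes small core entries,
# which is what a sparse-core regularizer promotes for compression.
G_sparse = np.sign(G) * np.maximum(np.abs(G) - 0.1, 0.0)
err = np.linalg.norm(tucker_reconstruct(G, factors) - T) / np.linalg.norm(T)
```

Since the synthetic tensor has exact multilinear rank (2, 2, 2), the truncated HOSVD reconstruction error `err` is numerically zero, while `G_sparse` never has more nonzeros than `G`; the paper's actual model additionally imposes low-rank regularization on the factor matrices and handles missing entries, which this sketch omits.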
Related papers
- Revisiting Trace Norm Minimization for Tensor Tucker Completion: A Direct Multilinear Rank Learning Approach [22.740653766104153]
This paper shows that trace norm-based formulations in Tucker completion are inefficient in multilinear rank minimization.
We propose a new interpretation of Tucker format such that trace norm minimization is applied to the factor matrices of the equivalent representation.
Numerical results are presented to show that the proposed algorithm exhibits significantly improved performance in terms of multilinear rank learning.
arXiv Detail & Related papers (2024-09-08T15:44:00Z) - Provable Tensor Completion with Graph Information [49.08648842312456]
We introduce a novel model, theory, and algorithm for solving the dynamic graph regularized tensor completion problem.
We develop a comprehensive model simultaneously capturing the low-rank and similarity structure of the tensor.
In terms of theory, we showcase the alignment between the proposed graph smoothness regularization and a weighted tensor nuclear norm.
arXiv Detail & Related papers (2023-10-04T02:55:10Z) - Spatiotemporal Regularized Tucker Decomposition Approach for Traffic Data Imputation [0.0]
In intelligent transportation systems, traffic data imputation, which estimates missing values from partially observed data, is an unavoidable and challenging task.
Previous studies have not fully considered traffic data's multidimensionality and correlations, which are vital to data recovery, especially in high-missing-rate scenarios.
arXiv Detail & Related papers (2023-05-11T04:42:35Z) - Low-Rank Tensor Function Representation for Multi-Dimensional Data Recovery [52.21846313876592]
Low-rank tensor function representation (LRTFR) can continuously represent data beyond meshgrid with infinite resolution.
We develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization.
Experiments substantiate the superiority and versatility of our method as compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-12-01T04:00:38Z) - Graph Polynomial Convolution Models for Node Classification of Non-Homophilous Graphs [52.52570805621925]
We investigate efficient learning from higher-order graph convolution and learning directly from adjacency matrix for node classification.
We show that the resulting model leads to new graphs and a residual scaling parameter.
We demonstrate that the proposed methods obtain improved accuracy for node classification of non-homophilous graphs.
arXiv Detail & Related papers (2022-09-12T04:46:55Z) - Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z) - ER: Equivariance Regularizer for Knowledge Graph Completion [107.51609402963072]
We propose a new regularizer, namely, the Equivariance Regularizer (ER).
ER can enhance the generalization ability of the model by employing the semantic equivariance between the head and tail entities.
The experimental results indicate a clear and substantial improvement over the state-of-the-art relation prediction methods.
arXiv Detail & Related papers (2022-06-24T08:18:05Z) - MTC: Multiresolution Tensor Completion from Partial and Coarse Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z) - Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation from Incomplete Measurements [30.395874385570007]
A fundamental task is to faithfully recover tensors from highly incomplete measurements.
We develop an algorithm to directly recover the tensor factors in the Tucker decomposition.
We show that it provably converges at a linear rate independent of the condition number of the ground truth tensor for two canonical problems.
arXiv Detail & Related papers (2021-04-29T17:44:49Z) - Anomaly Detection with Tensor Networks [2.3895981099137535]
We exploit the memory and computational efficiency of tensor networks to learn a linear transformation over a space with a dimension exponential in the number of original features.
We produce competitive results on image datasets, despite not exploiting the locality of images.
arXiv Detail & Related papers (2020-06-03T20:41:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.