Robust M-estimation-based Tensor Ring Completion: a Half-quadratic
Minimization Approach
- URL: http://arxiv.org/abs/2106.10422v1
- Date: Sat, 19 Jun 2021 04:37:50 GMT
- Title: Robust M-estimation-based Tensor Ring Completion: a Half-quadratic
Minimization Approach
- Authors: Yicong He and George K. Atia
- Abstract summary: We develop a robust approach to tensor ring completion that uses an M-estimator as its error statistic.
We present two HQ-based algorithms, one using truncated singular value decomposition and the other matrix factorization.
- Score: 14.048989759890475
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tensor completion is the problem of estimating the missing values of
high-order data from partially observed entries. Among several definitions of
tensor rank, tensor ring rank affords the flexibility and accuracy needed to
model tensors of different orders, which motivated recent efforts on
tensor-ring completion. However, data corruption due to prevailing outliers
poses major challenges to existing algorithms. In this paper, we develop a
robust approach to tensor ring completion that uses an M-estimator as its error
statistic, which can significantly alleviate the effect of outliers. Leveraging
a half-quadratic (HQ) method, we reformulate the problem as one of weighted
tensor completion. We present two HQ-based algorithms based on truncated
singular value decomposition and matrix factorization along with their
convergence and complexity analysis. Extendibility of the proposed approach to
alternative definitions of tensor rank is also discussed. The experimental
results demonstrate the superior performance of the proposed approach over
state-of-the-art robust algorithms for tensor completion.
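As a rough illustration of the HQ reformulation sketched in the abstract, the Python snippet below alternates between computing per-entry weights from a Welsch M-estimator and a weighted low-rank step. The truncated SVD of a single matricization, the entry-blending heuristic, the Welsch weight function with bandwidth sigma, and all function names are illustrative assumptions for this sketch, not the authors' TR-specific algorithms.

```python
import numpy as np


def welsch_weights(residual, sigma):
    """Half-quadratic weights induced by the Welsch M-estimator (illustrative choice).

    Entries with large residuals (likely outliers) receive weights close to 0,
    while clean entries keep weights close to 1.
    """
    return np.exp(-(residual ** 2) / (2.0 * sigma ** 2))


def hq_weighted_completion(T_obs, mask, mat_shape, rank, sigma=1.0, n_iter=50):
    """Illustrative half-quadratic (HQ) completion loop.

    Alternates between (1) updating per-entry HQ weights from the current
    residuals and (2) a weighted low-rank step. The low-rank step here is a
    plain truncated SVD of one matricization -- a simplified stand-in for the
    tensor-ring-specific solvers described in the paper.

    T_obs     : array with observed (possibly corrupted) entries, zeros elsewhere
    mask      : boolean array, True on observed entries
    mat_shape : (rows, cols) used to matricize the tensor for the low-rank step
    rank      : target rank of the truncated SVD
    """
    X = np.where(mask, T_obs, 0.0)  # current estimate of the completed tensor
    for _ in range(n_iter):
        # 1) HQ weight update from residuals on the observed entries
        residual = np.where(mask, X - T_obs, 0.0)
        W = welsch_weights(residual, sigma)

        # 2) Weighted blending: suspected outliers are down-weighted and
        #    partially replaced by the current low-rank estimate
        blended = np.where(mask, W * T_obs + (1.0 - W) * X, X)

        # 3) Low-rank projection via truncated SVD of a matricization
        U, s, Vt = np.linalg.svd(blended.reshape(mat_shape), full_matrices=False)
        X = ((U[:, :rank] * s[:rank]) @ Vt[:rank, :]).reshape(T_obs.shape)
    return X


if __name__ == "__main__":
    # Toy example: a 3rd-order tensor whose matricization has rank 2,
    # with 60% observed entries and sparse outliers on the observations
    rng = np.random.default_rng(0)
    A, B = rng.standard_normal((30, 2)), rng.standard_normal((2, 40 * 5))
    T = (A @ B).reshape(30, 40, 5)
    mask = rng.random(T.shape) < 0.6
    corrupt = mask & (rng.random(T.shape) < 0.05)  # 5% of observations are outliers
    T_obs = np.where(mask, T + corrupt * rng.normal(0, 10, T.shape), 0.0)
    X = hq_weighted_completion(T_obs, mask, (30, 200), rank=2, sigma=1.0, n_iter=100)
    print("relative error:", np.linalg.norm(X - T) / np.linalg.norm(T))
```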
Related papers
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
This basis also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - Quantization of Large Language Models with an Overdetermined Basis [73.79368761182998]
We introduce an algorithm for data quantization based on the principles of Kashin representation.
Our findings demonstrate that Kashin Quantization achieves competitive or superior quality in model performance.
arXiv Detail & Related papers (2024-04-15T12:38:46Z) - Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z) - Noisy Tensor Completion via Low-rank Tensor Ring [41.86521269183527]
Tensor completion is a fundamental tool for incomplete data analysis, where the goal is to predict missing entries from partial observations.
Existing methods often make the explicit or implicit assumption that the observed entries are noise-free to provide a theoretical guarantee of exact recovery of missing entries.
This paper proposes a novel noisy tensor completion model that addresses the shortcomings of existing works in handling high-order and noisy observations.
arXiv Detail & Related papers (2022-03-14T14:09:43Z) - MTC: Multiresolution Tensor Completion from Partial and Coarse
Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Tensor Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation
from Incomplete Measurements [30.395874385570007]
A fundamental task is to faithfully recover tensors from highly incomplete measurements.
We develop an algorithm to directly recover the tensor factors in the Tucker decomposition.
We show that it provably converges linearly at a rate independent of the condition number of the ground truth tensor for two canonical problems.
arXiv Detail & Related papers (2021-04-29T17:44:49Z) - Robust Low-tubal-rank Tensor Completion based on Tensor Factorization
and Maximum Correntropy Criterion [12.02023514105999]
We propose a new objective function for low-tubal-rank tensor completion, which uses correntropy as the error measure to mitigate the effect of the outliers.
Numerical results using both synthetic and real data demonstrate the robust and superior performance of the proposed algorithms.
arXiv Detail & Related papers (2020-10-22T13:56:55Z) - Uncertainty quantification for nonconvex tensor completion: Confidence
intervals, heteroscedasticity and optimality [92.35257908210316]
We study the problem of estimating a low-rank tensor given incomplete and corrupted observations.
We find that it attains unimprovable $\ell_2$ accuracy.
arXiv Detail & Related papers (2020-06-15T17:47:13Z) - Tensor completion via nonconvex tensor ring rank minimization with
guaranteed convergence [16.11872681638052]
In recent studies, the tensor ring (TR) rank has shown high effectiveness in tensor completion.
A recently proposed TR rank minimization method relies on a convex surrogate that penalizes the weighted sum of nuclear norms of the TR unfolding matrices, treating all singular values equally.
In this paper, we propose a logdet-based function as a nonconvex smooth relaxation of the TR rank (see the note after this list).
arXiv Detail & Related papers (2020-05-14T03:13:17Z) - Tensor denoising and completion based on ordinal observations [11.193504036335503]
We consider the problem of low-rank tensor estimation from possibly incomplete, ordinal-valued observations.
We propose a multi-linear cumulative link model, develop a rank-constrained M-estimator, and obtain theoretical accuracy guarantees.
We show that the proposed estimator is minimax optimal under the class of low-rank models.
arXiv Detail & Related papers (2020-02-16T07:09:56Z)
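For context on the logdet-based relaxation mentioned in the tensor ring rank minimization entry above, a common logdet surrogate of matrix rank takes the form below; the exact variant used in that paper may differ, so this is stated only as a typical choice.

```latex
% A common logdet surrogate of rank for a matrix X with singular values \sigma_i(X):
% it is smooth in the singular values and, unlike the nuclear norm \sum_i \sigma_i(X),
% penalizes large singular values less aggressively.
\mathrm{logdet}_{\varepsilon}(X) = \sum_{i} \log\bigl(\sigma_i(X) + \varepsilon\bigr), \qquad \varepsilon > 0 .
```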
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.