Guaranteed Tensor Recovery Fused Low-rankness and Smoothness
- URL: http://arxiv.org/abs/2302.02155v1
- Date: Sat, 4 Feb 2023 12:20:32 GMT
- Title: Guaranteed Tensor Recovery Fused Low-rankness and Smoothness
- Authors: Hailin Wang, Jiangjun Peng, Wenjin Qin, Jianjun Wang and Deyu Meng
- Abstract summary: We build a unique regularization term, which essentially encodes both L and S priors of a tensor simultaneously.
These should be the first exact-recovery results among all related L+S methods for tensor recovery.
- Score: 38.0243349269575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The tensor data recovery task has attracted much research attention in
recent years. Solving such an ill-posed problem generally requires exploring
the intrinsic prior structures underlying tensor data and formulating them as
regularization terms that guide a sound estimate of the
restored tensor. Recent research has made significant progress by adopting two
insightful tensor priors, i.e., global low-rankness (L) and local smoothness
(S) across different tensor modes, which are typically encoded as a sum of two
separate regularization terms in the recovery models. However, unlike the
primary theoretical developments on low-rank tensor recovery, these joint L+S
models have no theoretical exact-recovery guarantees yet, making the methods
lack reliability in real practice. To address this crucial issue, in this work, we
build a unique regularization term, which essentially encodes both L and S
priors of a tensor simultaneously. In particular, by equipping this single
regularizer into the recovery models, we can rigorously prove the exact
recovery guarantees for two typical tensor recovery tasks, i.e., tensor
completion (TC) and tensor robust principal component analysis (TRPCA). To the
best of our knowledge, these should be the first exact-recovery results among
all related L+S methods for tensor recovery. Significant recovery accuracy
improvements over many other SOTA methods in several TC and TRPCA tasks with
various kinds of visual tensor data are observed in extensive experiments.
Notably, our method achieves workable performance even when the missing rate is
extremely large, e.g., 99.5%, in the color image inpainting task, while all
its peers fail completely in such a challenging case.
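The fused L+S idea can be illustrated with a toy penalty: the mode-wise gradient tensors of a globally low-rank, locally smooth tensor are themselves low-rank and small in magnitude, so a single nuclear norm on each gradient unfolding encodes both priors at once. The sketch below is a hypothetical NumPy analogue of such a regularizer; the name `fused_ls_penalty` and the circular differencing are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def diff_along(X, axis):
    """Circular first-order difference along one mode (local smoothness)."""
    return np.roll(X, -1, axis=axis) - X

def fused_ls_penalty(X):
    """Illustrative analogue of a fused low-rank + smoothness (L+S) regularizer:
    the sum, over modes, of the nuclear norm of each mode-unfolded gradient
    tensor. The gradients are low-rank AND small when X is globally low-rank
    and locally smooth, so one term responds to both priors at once."""
    total = 0.0
    for axis in range(X.ndim):
        G = diff_along(X, axis)
        # unfold along `axis` and sum singular values (nuclear norm)
        Gm = np.moveaxis(G, axis, 0).reshape(G.shape[axis], -1)
        total += np.linalg.norm(Gm, ord='nuc')
    return total

# A smooth rank-1 tensor should score far lower than noise of equal scale.
t = np.linspace(0, 1, 20)
smooth = np.einsum('i,j,k->ijk', np.sin(t), np.cos(t), t)  # rank-1, smooth
noise = np.random.default_rng(0).normal(size=smooth.shape)
noise *= np.linalg.norm(smooth) / np.linalg.norm(noise)
print(fused_ls_penalty(smooth) < fused_ls_penalty(noise))  # True
```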
Related papers
- Handling The Non-Smooth Challenge in Tensor SVD: A Multi-Objective Tensor Recovery Framework [15.16222081389267]
We introduce a novel tensor recovery model with a learnable tensor nuclear norm to address the challenge of non-smooth changes in tensor data.
We develop a new optimization algorithm named the Alternating Proximal Multiplier Method (APMM) to iteratively solve the proposed tensor completion model.
In addition, we propose a multi-objective tensor recovery framework based on APMM to efficiently explore the correlations of tensor data across its various dimensions.
arXiv Detail & Related papers (2023-11-23T12:16:33Z) - Provable Tensor Completion with Graph Information [49.08648842312456]
We introduce a novel model, theory, and algorithm for solving the dynamic graph regularized tensor completion problem.
We develop a comprehensive model simultaneously capturing the low-rank and similarity structure of the tensor.
In terms of theory, we showcase the alignment between the proposed graph smoothness regularization and a weighted tensor nuclear norm.
arXiv Detail & Related papers (2023-10-04T02:55:10Z) - A Novel Tensor Factorization-Based Method with Robustness to Inaccurate
Rank Estimation [9.058215418134209]
We propose a new tensor norm with a dual low-rank constraint, which utilizes the low-rank prior and rank information at the same time.
It is proven theoretically that the resulting tensor completion model can effectively avoid performance degradation caused by inaccurate rank estimation.
Based on this, the total cost at each iteration of the optimization algorithm is reduced to $\mathcal{O}(n^3 \log n + kn^3)$ from the $\mathcal{O}(n^4)$ achieved with standard methods.
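Nuclear-norm-based completion models of this kind are typically solved with proximal iterations whose core step is singular value thresholding (SVT). A minimal sketch follows, shown on a matrix unfolding (tensor methods apply it mode-wise); this is the generic operator, not the paper's dual low-rank construction.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * nuclear
    norm. Shrinks each singular value by tau and zeroes the remainder."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

# Thresholding just above the smallest nonzero singular value reduces rank.
rng = np.random.default_rng(1)
M = rng.normal(size=(8, 3)) @ rng.normal(size=(3, 8))  # rank-3 matrix
s = np.linalg.svd(M, compute_uv=False)
R = svt(M, tau=s[2] + 1e-9)  # shrink past the third singular value
print(np.linalg.matrix_rank(R))  # expect 2
```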
arXiv Detail & Related papers (2023-05-19T06:26:18Z) - TWINS: A Fine-Tuning Framework for Improved Transferability of
Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, the Two-WIng NormaliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
arXiv Detail & Related papers (2023-03-20T14:12:55Z) - Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal
traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases based on the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - Robust M-estimation-based Tensor Ring Completion: a Half-quadratic
Minimization Approach [14.048989759890475]
We develop a robust approach to tensor ring completion that uses an M-estimator as its error statistic.
We present two HQ-based algorithms based on truncated singular value decomposition and matrix factorization.
arXiv Detail & Related papers (2021-06-19T04:37:50Z) - MTC: Multiresolution Tensor Completion from Partial and Coarse
Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z) - Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation
from Incomplete Measurements [30.395874385570007]
A fundamental task is to faithfully recover tensors from highly incomplete measurements.
We develop an algorithm to directly recover the tensor factors in the Tucker decomposition.
We show that it provably converges at a linear rate independent of the condition number of the ground truth tensor for two canonical problems.
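In the fully observed case, Tucker factors like those in the paper above can be obtained with the classical truncated HOSVD. The sketch below is exact whenever the multilinear rank is at most the chosen ranks; it is a textbook baseline, not the paper's incomplete-measurement algorithm.

```python
import numpy as np

def tucker_hosvd3(X, ranks):
    """Truncated HOSVD for a 3-way tensor: take the leading left singular
    vectors of each mode unfolding, then project to obtain the core."""
    Us = []
    for mode, r in enumerate(ranks):
        Xm = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(Xm, full_matrices=False)
        Us.append(U[:, :r])
    U1, U2, U3 = Us
    core = np.einsum('ijk,ia,jb,kc->abc', X, U1, U2, U3)
    return core, Us

def tucker_reconstruct3(core, Us):
    """Multiply the core back by the factor matrices on each mode."""
    U1, U2, U3 = Us
    return np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)

# A tensor with multilinear rank (2, 2, 2) is reconstructed exactly.
rng = np.random.default_rng(3)
G = rng.normal(size=(2, 2, 2))
A, B, C = (rng.normal(size=(n, 2)) for n in (6, 7, 8))
X = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)
core, Us = tucker_hosvd3(X, ranks=(2, 2, 2))
err = np.linalg.norm(tucker_reconstruct3(core, Us) - X) / np.linalg.norm(X)
print(err < 1e-8)  # True
```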
arXiv Detail & Related papers (2021-04-29T17:44:49Z) - Low-Rank and Sparse Enhanced Tucker Decomposition for Tensor Completion [3.498620439731324]
We introduce a unified low-rank and sparse enhanced Tucker decomposition model for tensor completion.
Our model possesses a sparse regularization term to promote a sparse core tensor, which is beneficial for tensor data compression.
Remarkably, our model can deal with different types of real-world data sets, since it exploits the potential periodicity and inherent correlation properties appearing in tensors.
arXiv Detail & Related papers (2020-10-01T12:45:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences arising from its use.