Tensor Recovery Based on Tensor Equivalent Minimax-Concave Penalty
- URL: http://arxiv.org/abs/2201.12709v1
- Date: Sun, 30 Jan 2022 03:28:01 GMT
- Title: Tensor Recovery Based on Tensor Equivalent Minimax-Concave Penalty
- Authors: Hongbing Zhang, Xinyi Liu, Hongtao Fan, Yajing Li, Yinlin Ye
- Abstract summary: Tensor recovery is an important problem in computer vision and machine learning.
We propose two adaptive models for two classical tensor recovery problems.
Experiments show that the proposed method is superior to state-of-the-art approaches.
- Score: 3.0711362702464675
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tensor recovery is an important problem in computer vision and machine
learning. It usually uses the convex relaxation of tensor rank and $l_{0}$
norm, i.e., the nuclear norm and $l_{1}$ norm respectively, to solve the
problem. It is well known that convex approximations produce biased
estimators. To overcome this problem, corresponding non-convex regularizers
have been proposed. Inspired by the matrix equivalent Minimax-Concave Penalty
(EMCP), we propose and prove theorems for a tensor equivalent Minimax-Concave
Penalty (TEMCP). We obtain the TEMCP as a non-convex regularizer and the
equivalent weighted tensor $\gamma$ norm (EWTGN), which represents the
low-rank part; both admit adaptive weighting. We further propose two
corresponding adaptive models for the two classical tensor recovery problems,
low-rank tensor completion (LRTC) and tensor robust principal component
analysis (TRPCA), with optimization algorithms based on the alternating
direction method of multipliers (ADMM). This novel iterative adaptive
algorithm yields more accurate tensor recovery.
For the tensor completion model, multispectral image (MSI), magnetic resonance
imaging (MRI) and color video (CV) data sets are considered, while for the
tensor robust principal component analysis model, hyperspectral image (HSI)
denoising under Gaussian noise plus salt-and-pepper noise is considered. The
proposed algorithm outperforms state-of-the-art methods, and experiments
verify its monotone decrease and convergence.
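To make the non-convex regularization concrete, here is a minimal sketch of the scalar Minimax-Concave Penalty (MCP) of Zhang (2010) and its proximal operator (firm thresholding), which underlies MCP-type regularizers such as the EMCP/TEMCP above. This is an illustration of the generic scalar MCP, not the paper's tensor formulation; the parameter names `lam` and `gamma` are assumptions.

```python
import numpy as np

def mcp(x, lam, gamma):
    """Scalar MCP: lam*|x| - x^2/(2*gamma) for |x| <= gamma*lam, else gamma*lam^2/2."""
    a = np.abs(np.asarray(x, dtype=float))
    return np.where(a <= gamma * lam,
                    lam * a - a**2 / (2.0 * gamma),
                    0.5 * gamma * lam**2)

def mcp_prox(x, lam, gamma):
    """Proximal operator of MCP with unit step size (requires gamma > 1):
    firm thresholding. Unlike soft thresholding, large entries pass through
    unchanged, which is what removes the bias of the l1/nuclear-norm relaxation."""
    x = np.asarray(x, dtype=float)
    a = np.abs(x)
    out = np.zeros_like(x)                       # |x| <= lam is zeroed
    mid = (a > lam) & (a <= gamma * lam)
    out[mid] = np.sign(x[mid]) * (a[mid] - lam) * gamma / (gamma - 1.0)
    out[a > gamma * lam] = x[a > gamma * lam]    # large entries kept as-is
    return out

# Small values are zeroed, intermediate values are shrunk, large values survive.
print(mcp_prox(np.array([0.5, 2.0, 10.0]), lam=1.0, gamma=3.0))
```

Applying such a prox to the singular values of tensor slices (in place of soft thresholding) is the usual route from a scalar non-convex penalty to an adaptive low-rank regularizer.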
Related papers
- Gradient Normalization with(out) Clipping Ensures Convergence of Nonconvex SGD under Heavy-Tailed Noise with Improved Results [60.92029979853314]
This paper investigates normalized SGD without clipping (NSGDC) and its variance-reduced variant (NSGDC-VR).
We present significant improvements in the theoretical results for both algorithms.
arXiv Detail & Related papers (2024-10-21T22:40:42Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - A Novel Tensor Factorization-Based Method with Robustness to Inaccurate
Rank Estimation [9.058215418134209]
We propose a new tensor norm with a dual low-rank constraint, which utilizes the low-rank prior and rank information at the same time.
It is proven theoretically that the resulting tensor completion model can effectively avoid performance degradation caused by inaccurate rank estimation.
Based on this, the total cost at each iteration of the optimization algorithm is reduced to $\mathcal{O}(n^3\log n + kn^3)$ from the $\mathcal{O}(n^4)$ achieved with standard methods.
arXiv Detail & Related papers (2023-05-19T06:26:18Z) - Softmax-free Linear Transformers [90.83157268265654]
Vision transformers (ViTs) have pushed the state-of-the-art for visual perception tasks.
Existing methods are either theoretically flawed or empirically ineffective for visual recognition.
We propose a family of Softmax-Free Transformers (SOFT)
arXiv Detail & Related papers (2022-07-05T03:08:27Z) - Tensor Recovery Based on A Novel Non-convex Function Minimax Logarithmic
Concave Penalty Function [5.264776812468168]
In this paper, we propose a new non-convex function, the Minimax Logarithmic Concave Penalty (MLCP) function.
The proposed function is generalized to tensor cases, yielding the tensor MLCP and the weighted tensor $L\gamma$ norm.
It is proved that the proposed sequence has finite length and converges to the critical point globally.
arXiv Detail & Related papers (2022-06-25T12:26:53Z) - SOFT: Softmax-free Transformer with Linear Complexity [112.9754491864247]
Vision transformers (ViTs) have pushed the state-of-the-art for various visual recognition tasks by patch-wise image tokenization followed by self-attention.
Various attempts on approximating the self-attention with linear complexity have been made in Natural Language Processing.
We identify that their limitations are rooted in keeping the softmax self-attention during approximations.
For the first time, a softmax-free transformer or SOFT is proposed.
arXiv Detail & Related papers (2021-10-22T17:57:29Z) - MTC: Multiresolution Tensor Completion from Partial and Coarse
Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z) - Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation
from Incomplete Measurements [30.395874385570007]
A fundamental task is to faithfully recover tensors from highly incomplete measurements.
We develop an algorithm to directly recover the tensor factors in the Tucker decomposition.
We show that it provably converges at a linear rate independent of the condition number of the ground truth tensor for two canonical problems.
arXiv Detail & Related papers (2021-04-29T17:44:49Z) - Robust Compressed Sensing using Generative Models [98.64228459705859]
In this paper we propose an algorithm inspired by the Median-of-Means (MOM) estimator.
Our algorithm guarantees recovery for heavy-tailed data, even in the presence of outliers.
arXiv Detail & Related papers (2020-06-16T19:07:41Z) - Tensor completion via nonconvex tensor ring rank minimization with
guaranteed convergence [16.11872681638052]
In recent studies, the tensor ring (TR) rank has shown high effectiveness in tensor completion.
A recently proposed TR rank surrogate is based on a weighted sum that penalizes all singular values equally.
In this paper, we propose to use the logdet-based function as a nonconvex smooth relaxation of the TR rank.
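As a minimal illustration of why a logdet-based function is a useful surrogate for rank, the following sketch compares $\sum_i \log(\sigma_i + \epsilon)$ against the nuclear norm on two matrices with equal nuclear norm. This is a generic example, not the cited paper's exact TR-rank formulation; the function names and `eps` value are assumptions.

```python
import numpy as np

def logdet_surrogate(M, eps=1e-2):
    """sum_i log(sigma_i + eps): a nonconvex smooth rank relaxation that
    penalizes large singular values far less than the nuclear norm does."""
    s = np.linalg.svd(M, compute_uv=False)
    return float(np.sum(np.log(s + eps)))

def nuclear_norm(M):
    """Sum of singular values: the convex relaxation of rank."""
    return float(np.sum(np.linalg.svd(M, compute_uv=False)))

u = np.ones((4, 1))
rank1 = u @ u.T    # singular values (4, 0, 0, 0); nuclear norm 4
full = np.eye(4)   # singular values (1, 1, 1, 1); nuclear norm 4

# The nuclear norm cannot distinguish the two, but the logdet surrogate
# strongly prefers the low-rank matrix.
print(logdet_surrogate(rank1), logdet_surrogate(full))
```

The near-zero singular values of the rank-1 matrix contribute large negative terms $\log(\epsilon)$, so minimizing the logdet surrogate drives solutions toward genuinely low rank rather than merely small singular values.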
arXiv Detail & Related papers (2020-05-14T03:13:17Z) - Tensor denoising and completion based on ordinal observations [11.193504036335503]
We consider the problem of low-rank tensor estimation from possibly incomplete, ordinal-valued observations.
We propose a multi-linear cumulative link model, develop a rank-constrained M-estimator, and obtain theoretical accuracy guarantees.
We show that the proposed estimator is minimax optimal under the class of low-rank models.
arXiv Detail & Related papers (2020-02-16T07:09:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.