Tensor Recovery Based on A Novel Non-convex Function Minimax Logarithmic
Concave Penalty Function
- URL: http://arxiv.org/abs/2206.13506v1
- Date: Sat, 25 Jun 2022 12:26:53 GMT
- Title: Tensor Recovery Based on A Novel Non-convex Function Minimax Logarithmic
Concave Penalty Function
- Authors: Hongbing Zhang, Xinyi Liu, Chang Liu, Hongtao Fan, Yajing Li, Xinyun
Zhu
- Abstract summary: In this paper, we propose a new non-convex function, the Minimax Logarithmic Concave Penalty (MLCP) function.
The proposed function is generalized to tensor cases, yielding the tensor MLCP and the weighted tensor $L\gamma$-norm.
It is proved that the solution sequence of the proposed algorithm has finite length and converges to the critical point globally.
- Score: 5.264776812468168
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Non-convex relaxation methods have been widely used in tensor recovery
problems, and compared with convex relaxation methods, can achieve better
recovery results. In this paper, a new non-convex function, Minimax Logarithmic
Concave Penalty (MLCP) function, is proposed, and some of its intrinsic
properties are analyzed, among which it is interesting to find that the
Logarithmic function is an upper bound of the MLCP function. The proposed
function is generalized to tensor cases, yielding tensor MLCP and weighted
tensor $L\gamma$-norm. Since its explicit solution cannot be obtained when it is
applied directly to the tensor recovery problem, the corresponding equivalence
theorems for solving such problems are given, namely, the tensor equivalent MLCP
theorem and the equivalent weighted tensor $L\gamma$-norm
theorem. In addition, we propose two EMLCP-based models for classic tensor
recovery problems, namely low-rank tensor completion (LRTC) and tensor robust
principal component analysis (TRPCA), and design proximal alternate
linearization minimization (PALM) algorithms to solve them individually.
Furthermore, based on the Kurdyka-{\L}ojasiewicz property, it is proved that the
solution sequence of the proposed algorithm has finite length and converges to
the critical point globally. Finally, extensive experiments show that the proposed
algorithms achieve good results, and it is confirmed that the MLCP function is
indeed better than the Logarithmic function in the minimization problem, which
is consistent with the analysis of theoretical properties.
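For context, the abstract does not spell out the MLCP formula itself; the two penalties it builds on, the classic minimax concave penalty (MCP) and a logarithmic penalty, are commonly written as follows (one standard parameterization with $\lambda > 0$ and $\gamma > 1$; the exact form used in the paper may differ):

```latex
% Minimax concave penalty (MCP), one standard parameterization:
g^{\mathrm{MCP}}_{\lambda,\gamma}(x) =
\begin{cases}
  \lambda\lvert x\rvert - \dfrac{x^{2}}{2\gamma}, & \lvert x\rvert \le \gamma\lambda,\\[6pt]
  \dfrac{\gamma\lambda^{2}}{2},                   & \lvert x\rvert > \gamma\lambda,
\end{cases}
\qquad
% One common logarithmic penalty:
g^{\mathrm{Log}}_{\lambda,\gamma}(x) =
  \frac{\lambda}{\log(1+\gamma)}\,\log\bigl(1+\gamma\lvert x\rvert\bigr).
```

Applying such scalar penalties to the singular values of a tensor's unfoldings (or of its t-SVD) is the usual route to nonconvex surrogates of tensor rank, which is presumably how the tensor MLCP and the weighted tensor $L\gamma$-norm mentioned above are constructed.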
Related papers
- Low-Rank Tensor Learning by Generalized Nonconvex Regularization [25.115066273660478]
We study the problem of low-rank tensor learning, where only a few samples of the underlying tensor are observed.
A family of nonconvex functions is employed to characterize the low-rankness of the underlying tensor.
A majorization-minimization algorithm is proposed to solve the resulting problem.
arXiv Detail & Related papers (2024-10-24T03:33:20Z)
- Computational and Statistical Guarantees for Tensor-on-Tensor Regression with Tensor Train Decomposition [27.29463801531576]
We study the theoretical and algorithmic aspects of the TT-based ToT regression model.
We propose two algorithms, iterative hard thresholding (IHT) and Riemannian gradient descent (RGD), to efficiently find solutions that achieve the established error bounds.
We establish the linear convergence rate of both IHT and RGD.
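As a rough sketch of the IHT idea referenced here, in the simpler matrix setting rather than the paper's tensor-train parameterization (the sensing operator, step size, and function names below are illustrative placeholders):

```python
import numpy as np

def iht_low_rank(A, y, shape, rank, step=1.0, iters=200):
    """Iterative hard thresholding for low-rank recovery:
    minimize ||A(X) - y||^2 subject to rank(X) <= rank,
    where the linear map A is given as an (m, n1*n2) matrix."""
    n1, n2 = shape
    X = np.zeros((n1, n2))
    for _ in range(iters):
        grad = (A.T @ (A @ X.ravel() - y)).reshape(n1, n2)  # gradient of the quadratic loss
        Z = X - step * grad                                  # gradient step
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]             # project back onto rank <= r matrices
    return X
```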
arXiv Detail & Related papers (2024-06-10T03:51:38Z)
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparameterized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
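A minimal finite-dimensional illustration of the gradient descent-ascent update studied here, on a toy quadratic saddle problem rather than the paper's mean-field two-layer-network setting (the objective and step sizes are placeholders):

```python
# Toy saddle problem: min_x max_y  f(x, y) = 0.5*x**2 + x*y - 0.5*y**2
x, y = 1.0, 1.0
eta_x, eta_y = 0.05, 0.05
for _ in range(1000):
    gx = x + y        # df/dx
    gy = x - y        # df/dy
    x -= eta_x * gx   # descent step on x
    y += eta_y * gy   # ascent step on y
print(x, y)           # iterates spiral toward the saddle point (0, 0)
```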
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
- Convex Bounds on the Softmax Function with Applications to Robustness Verification [69.09991317119679]
The softmax function is a ubiquitous component at the output of neural networks and increasingly in intermediate layers as well.
This paper provides convex lower bounds and concave upper bounds on the softmax function, which are compatible with convex optimization formulations for characterizing neural networks and other ML models.
arXiv Detail & Related papers (2023-03-03T05:07:02Z)
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- Tensor Recovery Based on Tensor Equivalent Minimax-Concave Penalty [3.0711362702464675]
Tensor recovery is an important problem in computer vision and machine learning.
We propose two adaptive models for two tensor recovery problems.
Experiments show that the proposed method is superior to the state of the art.
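For reference, the scalar minimax-concave penalty (MCP) underlying this related work admits a well-known closed-form proximal operator (firm thresholding); a minimal sketch, assuming the concavity parameter satisfies gamma > 1:

```python
import numpy as np

def mcp_prox(z, lam, gamma):
    """Proximal operator of the minimax concave penalty (firm thresholding).
    Solves argmin_x 0.5*(x - z)**2 + MCP_{lam,gamma}(x), assuming gamma > 1."""
    z = np.asarray(z, dtype=float)
    return np.where(
        np.abs(z) <= lam,
        0.0,                                                       # small inputs set to zero
        np.where(
            np.abs(z) <= gamma * lam,
            np.sign(z) * (np.abs(z) - lam) / (1.0 - 1.0 / gamma),  # intermediate inputs shrunk
            z,                                                     # large inputs left untouched
        ),
    )

print(mcp_prox([0.2, 0.8, 3.0], lam=0.5, gamma=3.0))  # -> approximately [0., 0.45, 3.]
```

Leaving large inputs untouched is what gives MCP-type penalties their reduced bias compared with the soft thresholding induced by the convex $\ell_1$ penalty.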
arXiv Detail & Related papers (2022-01-30T03:28:01Z)
- Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSD) algorithm.
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
arXiv Detail & Related papers (2021-12-29T18:46:52Z)
- Convergence bounds for nonlinear least squares and applications to tensor recovery [0.0]
We consider the problem of approximating a function in general nonlinear subsets of $L^2$ when only a weighted Monte Carlo estimate of the $L^2$-norm can be computed.
A critical analysis of our results allows us to derive a sample efficient algorithm for the model set of low-rank tensors.
arXiv Detail & Related papers (2021-08-11T14:14:02Z)
- On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and Non-Asymptotic Concentration [115.1954841020189]
We study the asymptotic and non-asymptotic properties of linear stochastic approximation procedures with Polyak-Ruppert averaging.
We prove a central limit theorem (CLT) for the averaged iterates with fixed step size and number of iterations going to infinity.
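A minimal sketch of Polyak-Ruppert averaging for a linear stochastic approximation recursion with a fixed step size (the system matrix, noise model, and step size below are illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
M = np.eye(d) + 0.1 * rng.standard_normal((d, d))
A = M @ M.T                        # positive-definite system matrix
theta_star = rng.standard_normal(d)
b = A @ theta_star                 # target fixed point satisfies A @ theta_star = b

theta = np.zeros(d)
theta_bar = np.zeros(d)            # Polyak-Ruppert average of the iterates
step = 0.01
for t in range(1, 50_000 + 1):
    noise = rng.standard_normal(d)
    theta = theta - step * (A @ theta - b + noise)  # noisy fixed-point iteration
    theta_bar += (theta - theta_bar) / t            # running average of iterates

print(np.linalg.norm(theta - theta_star), np.linalg.norm(theta_bar - theta_star))
# the averaged iterate is typically much closer to theta_star than the last iterate
```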
arXiv Detail & Related papers (2020-04-09T17:54:18Z)
- SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel as well as the approximated posterior.
arXiv Detail & Related papers (2020-03-05T14:33:20Z)