Matrix Completion via Non-Convex Relaxation and Adaptive Correlation
Learning
- URL: http://arxiv.org/abs/2203.02189v1
- Date: Fri, 4 Mar 2022 08:50:50 GMT
- Title: Matrix Completion via Non-Convex Relaxation and Adaptive Correlation
Learning
- Authors: Xuelong Li, Hongyuan Zhang, Rui Zhang
- Abstract summary: We develop a novel surrogate that can be optimized by closed-form solutions.
We exploit the column-wise correlation for completion, and thus develop an adaptive correlation learning model.
- Score: 90.8576971748142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing matrix completion methods focus on optimizing relaxations of the
rank function, such as the nuclear norm, the Schatten-p norm, etc. They usually need
many iterations to converge. Moreover, only the low-rank property of matrices
is utilized in most existing models and several methods that incorporate other
knowledge are quite time-consuming in practice. To address these issues, we
propose a novel non-convex surrogate that can be optimized by closed-form
solutions, such that it empirically converges within dozens of iterations.
Besides, the optimization is parameter-free and the convergence is proved.
Compared with the relaxation of the rank, the surrogate is motivated by optimizing
an upper bound of the rank. We theoretically validate that it is equivalent to the
existing matrix completion models. Besides the low-rank assumption, we intend
to exploit the column-wise correlation for matrix completion, and thus an
adaptive correlation learning model, which is scaling-invariant, is developed. More
importantly, after incorporating the correlation learning, the model can still be
solved by closed-form solutions, such that it still converges fast.
Experiments show the effectiveness of the non-convex surrogate and adaptive
correlation learning.
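For intuition only, here is a minimal sketch of the kind of alternating, closed-form update scheme the abstract alludes to: each iteration takes the best rank-r approximation of the current estimate (a closed-form truncated-SVD step) and then restores the observed entries. This is a generic hard-impute-style baseline, not the authors' surrogate or their adaptive correlation learning; the rank r, the tolerance, and the function names are assumptions made for illustration.

```python
import numpy as np

def complete_matrix(X_obs, mask, r=10, n_iters=50, tol=1e-6):
    """Hard-impute-style matrix completion sketch (illustrative only).

    X_obs : observed matrix, arbitrary values at unobserved entries
    mask  : boolean array, True where an entry is observed
    r     : target rank (an assumed hyperparameter, not from the paper)
    """
    Z = np.where(mask, X_obs, 0.0)  # start with zeros at unobserved entries
    for _ in range(n_iters):
        # Closed-form low-rank step: best rank-r approximation via truncated SVD.
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        Z_new = (U[:, :r] * s[:r]) @ Vt[:r, :]
        # Projection step: keep observed entries fixed to their known values.
        Z_new = np.where(mask, X_obs, Z_new)
        if np.linalg.norm(Z_new - Z) <= tol * max(np.linalg.norm(Z), 1.0):
            return Z_new
        Z = Z_new
    return Z
```

Both steps admit closed-form solutions, which is the property the abstract emphasizes; the paper replaces the fixed-rank truncation with its non-convex surrogate and adds its adaptive column-wise correlation learning on top.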
Related papers
- Online Learning Guided Quasi-Newton Methods with Global Non-Asymptotic Convergence [20.766358513158206]
We prove a global convergence rate of $O(\min\{1/k, \sqrt{d}/k^{1.25}\})$ in terms of the duality gap.
These results are the first global convergence results to demonstrate a provable advantage of a quasi-Newton method over the extragradient method.
arXiv Detail & Related papers (2024-10-03T16:08:16Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
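As a rough illustration only (not the paper's construction), the sketch below applies a generic averaged update of the form theta_next = (1 - lam) * theta + lam * G(theta), the standard construction in nonexpansive-operator theory; the base step G, the learning rate, and the mixing weight lam are all assumptions made for the example.

```python
import numpy as np

def grad_step(theta, grad_fn, lr=0.1):
    """One plain gradient step; stands in for the base update operator G."""
    return theta - lr * grad_fn(theta)

def interpolated_update(theta, grad_fn, lam=0.5, lr=0.1):
    """Linear interpolation between the current point and the base update:
    theta_next = (1 - lam) * theta + lam * G(theta)."""
    return (1.0 - lam) * theta + lam * grad_step(theta, grad_fn, lr)

# Tiny usage example on the quadratic loss f(theta) = 0.5 * ||theta||^2.
theta = np.array([2.0, -1.0])
for _ in range(20):
    theta = interpolated_update(theta, grad_fn=lambda t: t)
```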
- Neural incomplete factorization: learning preconditioners for the conjugate gradient method [2.899792823251184]
We develop a data-driven approach to accelerate the generation of effective preconditioners.
We replace the typically hand-engineered preconditioners with the output of graph neural networks.
Our method generates an incomplete factorization of the matrix and is, therefore, referred to as neural incomplete factorization (NeuralIF).
arXiv Detail & Related papers (2023-05-25T11:45:46Z)
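To show where a learned preconditioner of this kind would plug in, here is a standard preconditioned conjugate gradient loop that accepts a lower-triangular factor L and uses M = L L^T as the preconditioner. In this runnable example the factor comes from an exact Cholesky decomposition purely as a stand-in; NeuralIF would instead supply an incomplete factor produced by a graph neural network, and all names below are illustrative.

```python
import numpy as np
from scipy.linalg import solve_triangular

def pcg(A, b, L, n_iters=100, tol=1e-8):
    """Preconditioned CG with preconditioner M = L @ L.T (L lower triangular)."""
    x = np.zeros_like(b)
    r = b - A @ x
    # Apply M^{-1} via two triangular solves: L y = r, then L^T z = y.
    z = solve_triangular(L.T, solve_triangular(L, r, lower=True), lower=False)
    p = z.copy()
    rz = r @ z
    for _ in range(n_iters):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol:
            break
        z = solve_triangular(L.T, solve_triangular(L, r, lower=True), lower=False)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Illustrative use: an exact Cholesky factor stands in for a learned incomplete one.
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)   # symmetric positive definite test matrix
b = rng.standard_normal(50)
L = np.linalg.cholesky(A)
x = pcg(A, b, L)
```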
arXiv Detail & Related papers (2023-05-25T11:45:46Z) - Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Low-rank Tensor Learning with Nonconvex Overlapped Nuclear Norm Regularization [44.54772242784423]
We develop an efficient nonconvex regularization algorithm for low-rank tensor learning.
The proposed algorithm can avoid expensive folding/unfolding operations.
Experiments show that the proposed algorithm is efficient and more space-efficient than the existing state-of-the-art.
arXiv Detail & Related papers (2022-05-06T07:47:10Z)
- Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z)
- A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix Completion [60.52730146391456]
We propose a new scalable nonconvex low-rank regularizer, termed the "nuclear Frobenius norm" regularizer, which is adaptive and sound.
It bypasses the computation of singular values and allows fast optimization.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix learning methods.
arXiv Detail & Related papers (2020-08-14T18:47:58Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper sub-problems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.