A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix
Completion
- URL: http://arxiv.org/abs/2008.06542v4
- Date: Mon, 22 Feb 2021 07:54:12 GMT
- Title: A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix
Completion
- Authors: Yaqing Wang and Quanming Yao and James T. Kwok
- Abstract summary: We propose a new scalable nonconvex low-rank regularizer called the "nuclear norm minus Frobenius norm" regularizer, which is adaptive and sound.
It bypasses the computation of singular values and allows fast optimization by general optimization algorithms.
It obtains state-of-the-art recovery performance while being the fastest among existing low-rank matrix learning methods.
- Score: 60.52730146391456
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Matrix learning is at the core of many machine learning problems. A number of
real-world applications such as collaborative filtering and text mining
can be formulated as a low-rank matrix completion problem, which recovers an
incomplete matrix using low-rank assumptions. To ensure that the matrix
solution has a low rank, a recent trend is to use nonconvex regularizers that
adaptively penalize singular values. They offer good recovery performance and
have nice theoretical properties, but are computationally expensive due to
repeated access to individual singular values. In this paper, based on the key
insight that adaptive shrinkage on singular values improves empirical
performance, we propose a new nonconvex low-rank regularizer called "nuclear
norm minus Frobenius norm" regularizer, which is scalable, adaptive and sound.
We first show that it provably satisfies the adaptive shrinkage property. Further, we
discover its factored form which bypasses the computation of singular values
and allows fast optimization by general optimization algorithms. Stable
recovery and convergence are guaranteed. Extensive low-rank matrix completion
experiments on a number of synthetic and real-world data sets show that the
proposed method obtains state-of-the-art recovery performance while being the
fastest among existing low-rank matrix learning methods.
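To make the factored formulation concrete, below is a minimal sketch (our own illustration, not the authors' released code) of matrix completion with a "nuclear norm minus Frobenius norm" regularizer. It assumes the standard variational form of the nuclear norm, ||X||_* = min_{X = U V^T} (||U||_F^2 + ||V||_F^2) / 2, so no SVD is needed during optimization; the function name nnmf_completion and all hyperparameters are illustrative.

```python
# Minimal sketch: low-rank matrix completion with a factored
# "nuclear norm minus Frobenius norm" regularizer (illustrative only).
import numpy as np

def nnmf_completion(M, mask, rank, lam=0.1, lr=0.01, iters=2000, seed=0):
    """Recover a low-rank matrix from the entries where mask == 1."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        X = U @ V.T
        R = mask * (X - M)                       # residual on observed entries
        fro = np.linalg.norm(X, "fro") + 1e-12   # avoid division by zero
        # objective: 0.5*||R||_F^2 + lam*((||U||_F^2 + ||V||_F^2)/2 - ||X||_F)
        gU = R @ V + lam * (U - (X / fro) @ V)
        gV = R.T @ U + lam * (V - (X.T / fro) @ U)
        U -= lr * gU
        V -= lr * gV
    return U @ V.T

# Toy usage: rank-3 ground truth, about 40% of entries observed.
rng = np.random.default_rng(1)
A, B = rng.standard_normal((50, 3)), rng.standard_normal((40, 3))
M_true = A @ B.T
mask = (rng.random(M_true.shape) < 0.4).astype(float)
M_hat = nnmf_completion(mask * M_true, mask, rank=5)
print("relative error:", np.linalg.norm(M_hat - M_true) / np.linalg.norm(M_true))
```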
Related papers
- Efficient Low-rank Identification via Accelerated Iteratively Reweighted Nuclear Norm Minimization [8.879403568685499]
We introduce an adaptive updating strategy for smoothing parameters.
This strategy effectively transforms the algorithm into one that solves a smooth problem after a finite number of iterations.
We prove global convergence of the proposed algorithm, guaranteeing that every limit point is a critical point.
arXiv Detail & Related papers (2024-06-22T02:37:13Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously allow for inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement
Learning [53.445068584013896]
We study matrix estimation problems arising in reinforcement learning (RL) with low-rank structure.
In low-rank bandits, the matrix to be recovered specifies the expected arm rewards, and for low-rank Markov Decision Processes (MDPs), it may for example characterize the transition kernel of the MDP.
We show that simple spectral-based matrix estimation approaches efficiently recover the singular subspaces of the matrix and exhibit nearly minimal entry-wise error (see the sketch after this list).
arXiv Detail & Related papers (2023-10-10T17:06:41Z) - Low-Rank Tensor Completion via Novel Sparsity-Inducing Regularizers [30.920908325825668]
To alleviate the bias introduced by the l1-norm in the low-rank tensor completion problem, nonconvex surrogates/regularizers have been suggested.
These regularizers are applied to nuclear-norm restoration, and efficient algorithms based on the method of multipliers are proposed.
arXiv Detail & Related papers (2023-10-10T01:00:13Z) - Low-rank Tensor Learning with Nonconvex Overlapped Nuclear Norm
Regularization [44.54772242784423]
We develop an efficient nonconvex regularization algorithm for low-rank tensor learning.
The proposed algorithm can avoid expensive tensor folding/unfolding operations.
Experiments show that the proposed algorithm is more efficient and uses less space than existing state-of-the-art methods.
arXiv Detail & Related papers (2022-05-06T07:47:10Z) - Matrix Completion via Non-Convex Relaxation and Adaptive Correlation
Learning [90.8576971748142]
We develop a novel surrogate that can be optimized by closed-form solutions.
We further exploit correlation for completion, and thus develop an adaptive correlation learning model.
arXiv Detail & Related papers (2022-03-04T08:50:50Z) - Solving weakly supervised regression problem using low-rank manifold
regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Adversarially-Trained Nonnegative Matrix Factorization [77.34726150561087]
We consider an adversarially-trained version of the nonnegative matrix factorization.
In our formulation, an attacker adds an arbitrary matrix of bounded norm to the given data matrix.
We design efficient algorithms inspired by adversarial training to optimize for dictionary and coefficient matrices.
arXiv Detail & Related papers (2021-04-10T13:13:17Z) - Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled
Gradient Descent [34.0533596121548]
Low-rank matrix estimation is a canonical problem that finds numerous applications in signal processing, machine learning and imaging science.
We show that ScaledGD converges at a rate independent of the condition number of the low-rank matrix, while retaining the low per-iteration cost of gradient descent.
Our analysis is also applicable to general loss functions that are restricted strongly convex and smooth over low-rank matrices.
arXiv Detail & Related papers (2020-05-18T17:17:16Z)
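The spectral entry-wise estimation item above mentions simple spectral-based estimators; the following is a generic sketch of such an estimator (zero-fill the unobserved entries, rescale by the sampling rate, truncate the SVD), which may differ from the exact procedure analyzed in that paper.

```python
# Generic spectral estimator for low-rank matrix completion (illustrative).
import numpy as np

def spectral_estimate(M_obs, mask, rank):
    """Zero-fill unobserved entries, rescale by the sampling rate, truncate the SVD."""
    p = max(mask.mean(), 1e-12)        # estimated fraction of observed entries
    Y = (mask * M_obs) / p             # rescaled surrogate of the full matrix
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-`rank` approximation of Y
```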
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.