Robust Low-Rank Matrix Completion via a New Sparsity-Inducing
Regularizer
- URL: http://arxiv.org/abs/2310.04762v1
- Date: Sat, 7 Oct 2023 09:47:55 GMT
- Title: Robust Low-Rank Matrix Completion via a New Sparsity-Inducing
Regularizer
- Authors: Zhi-Yong Wang, Hing Cheung So and Abdelhak M. Zoubir
- Abstract summary: This paper presents a novel loss function referred to as hybrid ordinary-Welsch (HOW) and a new sparsity-inducing regularizer associated with HOW, applied to the robust matrix completion problem.
- Score: 30.920908325825668
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a novel loss function referred to as hybrid
ordinary-Welsch (HOW) and a new sparsity-inducing regularizer associated with
HOW. We theoretically show that the regularizer is quasiconvex and that the
corresponding Moreau envelope is convex. Moreover, the closed-form solution to
its Moreau envelope, namely, the proximity operator, is derived. Compared with
nonconvex regularizers like the lp-norm with 0<p<1 that requires iterations to
find the corresponding proximity operator, the developed regularizer has a
closed-form proximity operator. We apply our regularizer to the robust matrix
completion problem, and develop an efficient algorithm based on the alternating
direction method of multipliers. The convergence of the suggested method is
analyzed and we prove that any generated accumulation point is a stationary
point. Finally, experimental results based on synthetic and real-world datasets
demonstrate that our algorithm is superior to the state-of-the-art methods in
terms of restoration performance.
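
To make the abstract concrete, below is a minimal Python sketch of the kind of pipeline it describes: an ADMM/ALM-style solver for robust matrix completion in which the sparse-outlier term is handled by a closed-form proximity operator. The HOW regularizer and its exact proximity operator are not given in this listing, so the l1-norm's soft-thresholding prox and the nuclear norm's singular-value thresholding stand in for them; the variable splitting, function names, and parameter values are illustrative assumptions, not the authors' implementation.

import numpy as np

def soft_threshold(V, tau):
    # Closed-form proximity operator of tau * ||.||_1 (soft-thresholding).
    # In the paper this would be replaced by the closed-form prox of the HOW regularizer.
    return np.sign(V) * np.maximum(np.abs(V) - tau, 0.0)

def svt(V, tau):
    # Singular-value thresholding: proximity operator of tau * ||.||_* (nuclear norm),
    # standing in here for whatever low-rank surrogate is used.
    U, s, Vt = np.linalg.svd(V, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def robust_matrix_completion(M, mask, lam=0.15, mu=1.0, n_iter=300):
    # ADMM/ALM-style sketch for
    #   min_{X,S}  ||X||_* + lam * r(P_Omega(S))   s.t.   X + S = M,
    # where M is zero-filled outside the observed set Omega (mask), X is the
    # low-rank completion, S absorbs sparse outliers on Omega and the free
    # residual off Omega, and r is a sparsity-inducing regularizer (l1 here).
    X = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                      # Lagrange multiplier
    for _ in range(n_iter):
        # Low-rank update: prox of the nuclear-norm term.
        X = svt(M - S + Y / mu, 1.0 / mu)
        # Sparse update: closed-form prox of r on observed entries;
        # unobserved entries are unpenalized, so S simply absorbs the residual there.
        R = M - X + Y / mu
        S = mask * soft_threshold(R, lam / mu) + (1 - mask) * R
        # Dual ascent on the equality constraint.
        Y = Y + mu * (M - X - S)
    return X, S

# Tiny usage example on synthetic data (hypothetical settings).
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))   # rank-5 ground truth
mask = (rng.random((50, 50)) < 0.6).astype(float)                 # 60% observed
outliers = (rng.random((50, 50)) < 0.05) * rng.standard_normal((50, 50)) * 10.0
M = mask * (L + outliers)                                         # corrupted, zero-filled
X_hat, S_hat = robust_matrix_completion(M, mask)
print("relative error:", np.linalg.norm(X_hat - L) / np.linalg.norm(L))

Swapping soft_threshold for a regularizer-specific closed-form prox, as the paper proposes for HOW, leaves the loop structure unchanged; this is the practical appeal of a regularizer whose proximity operator needs no inner iterations.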
Related papers
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm only has access to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously allow for inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Low-Rank Tensor Completion via Novel Sparsity-Inducing Regularizers [30.920908325825668]
To alleviate the bias induced by the l1-norm in the low-rank tensor completion problem, nonconvex surrogates/regularizers have been suggested.
These regularizers are applied as rank surrogates, and efficient algorithms based on the alternating direction method of multipliers are developed.
arXiv Detail & Related papers (2023-10-10T01:00:13Z) - Matrix Completion via Non-Convex Relaxation and Adaptive Correlation
Learning [90.8576971748142]
We develop a novel surrogate that can be optimized by closed-form solutions.
We exploit pairwise correlations for completion and thus propose an adaptive correlation learning model.
arXiv Detail & Related papers (2022-03-04T08:50:50Z) - Optimal Sample Complexity of Subgradient Descent for Amplitude Flow via
Non-Lipschitz Matrix Concentration [12.989855325491163]
We consider the problem of recovering a real-valued $n$-dimensional signal from $m$ phaseless, linear measurements.
We establish local convergence of subgradient descent with optimal sample complexity based on the uniform concentration of a random, discontinuous matrix-valued operator.
arXiv Detail & Related papers (2020-10-31T15:03:30Z) - A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix
Completion [60.52730146391456]
We propose a new nonconvex low-rank regularizer called the "nuclear norm minus Frobenius norm" regularizer, which is scalable, adaptive and sound.
It bypasses the computation of singular values and thus allows fast optimization.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix learning methods.
arXiv Detail & Related papers (2020-08-14T18:47:58Z) - Provably Convergent Working Set Algorithm for Non-Convex Regularized
Regression [0.0]
This paper proposes a working set algorithm for non-convex regularizers with convergence guarantees.
Our results demonstrate substantial computational gains over solving the full problem, for both block-coordinate and gradient solvers.
arXiv Detail & Related papers (2020-06-24T07:40:31Z) - Approximation Schemes for ReLU Regression [80.33702497406632]
We consider the fundamental problem of ReLU regression.
The goal is to output the best-fitting ReLU with respect to square loss given access to draws from some unknown distribution.
arXiv Detail & Related papers (2020-05-26T16:26:17Z) - Explicit Regularization of Stochastic Gradient Methods through Duality [9.131027490864938]
We propose randomized Dykstra-style algorithms based on randomized dual coordinate ascent.
For accelerated coordinate descent, we obtain a new algorithm that has better convergence properties than existing gradient methods in the interpolating regime.
arXiv Detail & Related papers (2020-03-30T20:44:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.