Low-rank Tensor Learning with Nonconvex Overlapped Nuclear Norm Regularization
- URL: http://arxiv.org/abs/2205.03059v1
- Date: Fri, 6 May 2022 07:47:10 GMT
- Title: Low-rank Tensor Learning with Nonconvex Overlapped Nuclear Norm Regularization
- Authors: Quanming Yao and Yaqing Wang and Bo Han and James Kwok
- Abstract summary: We develop an efficient solver for low-rank tensor learning with nonconvex overlapped nuclear norm regularization.
The proposed algorithm avoids expensive tensor folding/unfolding operations.
Experiments show that the proposed algorithm is efficient in both time and space, and more accurate than the existing state-of-the-art.
- Score: 44.54772242784423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nonconvex regularization has been popularly used in low-rank matrix learning.
However, extending it for low-rank tensor learning is still computationally
expensive. To address this problem, we develop an efficient solver for use with
a nonconvex extension of the overlapped nuclear norm regularizer. Based on the
proximal average algorithm, the proposed algorithm can avoid expensive tensor
folding/unfolding operations. A special "sparse plus low-rank" structure is
maintained throughout the iterations, and allows fast computation of the
individual proximal steps. Empirical convergence is further improved with the
use of adaptive momentum. We provide convergence guarantees to critical points
on smooth losses and also on objectives satisfying the Kurdyka-Łojasiewicz
condition. While the optimization problem is nonconvex and nonsmooth, we show
that its critical points still have good statistical performance on the tensor
completion problem. Experiments on various synthetic and real-world data sets
show that the proposed algorithm is efficient in both time and space and more
accurate than the existing state-of-the-art.
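
As a concrete illustration of the algorithm described in this abstract, the following is a minimal Python/NumPy reconstruction of one proximal-average iteration for tensor completion. It is our sketch from the abstract alone, not the authors' code: the function names, the squared loss on observed entries, and the scalar_prox hook (standing in for the closed-form prox of the chosen nonconvex spectral penalty) are assumptions, and the dense unfoldings below omit the paper's "sparse plus low-rank" bookkeeping that makes the real algorithm time- and space-efficient.

import numpy as np

def unfold(X, mode):
    # Mode-`mode` unfolding: move the chosen axis to the front, flatten the rest.
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def fold(M, mode, shape):
    # Inverse of `unfold` for a tensor of the given shape.
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def spectral_prox(M, lam, scalar_prox):
    # Prox of a spectral penalty: apply a scalar prox to the singular values.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * scalar_prox(s, lam)) @ Vt

def pa_step(X, Omega, T, eta, lam, scalar_prox):
    # One proximal-average step: a gradient step on the observed-entry squared
    # loss, then the average of the per-mode proximal maps in place of the
    # (intractable) prox of the sum of penalties over all unfoldings.
    grad = np.where(Omega, X - T, 0.0)
    Z = X - eta * grad
    K = X.ndim
    return sum(fold(spectral_prox(unfold(Z, d), eta * lam, scalar_prox),
                    d, X.shape)
               for d in range(K)) / K

# Soft-thresholding recovers the convex overlapped nuclear norm; a nonconvex
# surrogate would swap in its own scalar prox.
soft = lambda s, lam: np.maximum(s - lam, 0.0)

With scalar_prox=soft this reduces to the convex overlapped nuclear norm, and the paper's adaptive momentum would additionally extrapolate X before the gradient step.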
Related papers
- An Accelerated Algorithm for Stochastic Bilevel Optimization under Unbounded Smoothness [15.656614304616006]
This paper investigates a class of bilevel optimization problems where the upper-level function is nonconvex with potentially unbounded smoothness and the lower-level problem is strongly convex.
These problems have significant applications in sequential data learning, such as text classification using neural networks.
arXiv Detail & Related papers (2024-09-28T02:30:44Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously allow inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Low-Rank Tensor Completion via Novel Sparsity-Inducing Regularizers [30.920908325825668]
To alleviate the bias generated by the l1-norm in the low-rank tensor completion problem, nonconvex surrogates/regularizers have been suggested.
These regularizers are applied as nuclear-norm surrogates, and efficient algorithms based on the method of multipliers are proposed; a sketch of one such surrogate's proximal step follows this entry.
arXiv Detail & Related papers (2023-10-10T01:00:13Z)
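
Capped-l1 is one widely used nonconvex sparsity-inducing surrogate of the kind the entry above describes; whether this particular paper adopts it is our assumption. For nonnegative inputs (as when it is applied to singular values), its proximal step has the closed form below: solve each branch of the piecewise penalty and keep the better candidate.

import numpy as np

def capped_l1_prox(z, lam, theta):
    # Prox of r(x) = lam * min(x, theta) evaluated at z >= 0.
    x1 = np.minimum(np.maximum(z - lam, 0.0), theta)  # soft-threshold on [0, theta]
    x2 = np.maximum(z, theta)                         # penalty is flat on [theta, inf)
    f1 = 0.5 * (x1 - z) ** 2 + lam * x1               # min(x1, theta) = x1 here
    f2 = 0.5 * (x2 - z) ** 2 + lam * theta            # min(x2, theta) = theta here
    return np.where(f1 <= f2, x1, x2)

Inside an ADMM-style solver, this scalar map would replace soft-thresholding on the singular values of each unfolding.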
- Computationally Efficient and Statistically Optimal Robust High-Dimensional Linear Regression [15.389011827844572]
High-dimensional linear regression under heavy-tailed noise or adversarial corruption is challenging, both computationally and statistically.
In this paper, we introduce an algorithm for both the Gaussian-noise and the heavy-tailed (bounded (1 + epsilon)-th moment) regression problems.
arXiv Detail & Related papers (2023-05-10T14:31:03Z)
- Can Decentralized Stochastic Minimax Optimization Algorithms Converge Linearly for Finite-Sum Nonconvex-Nonconcave Problems? [56.62372517641597]
Decentralized minimax optimization has been actively studied in the past few years due to its applications in a wide range of machine learning tasks.
This paper develops two novel decentralized minimax optimization algorithms for the finite-sum nonconvex-nonconcave problem.
arXiv Detail & Related papers (2023-04-24T02:19:39Z)
- A Momentum-Assisted Single-Timescale Stochastic Approximation Algorithm for Bilevel Optimization [112.59170319105971]
We propose a new algorithm -- the Momentum-Assisted Single-Timescale Stochastic Approximation (MSTSA) algorithm -- for tackling stochastic bilevel optimization problems.
MSTSA allows us to control the error in its iterations caused by inexact solutions to the lower-level subproblem.
arXiv Detail & Related papers (2021-02-15T07:10:33Z)
- A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix Completion [60.52730146391456]
We propose a new scalable nonconvex low-rank regularizer called the "nuclear norm minus Frobenius norm" regularizer, which is adaptive and sound (its assumed form is sketched after this entry).
It bypasses the computation of singular values and allows fast optimization.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix learning methods.
arXiv Detail & Related papers (2020-08-14T18:47:58Z)
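
Taking the regularizer's name in the entry above at face value, the penalty would be a difference of the nuclear and Frobenius norms; the exact form and the weight \alpha below are our assumption, not quoted from the paper:

r_\alpha(X) = \|X\|_* - \alpha\,\|X\|_F = \sum_i \sigma_i(X) - \alpha \sqrt{\sum_i \sigma_i(X)^2}, \qquad \|X\|_F = \sqrt{\sum_{i,j} X_{ij}^2}.

Note that the Frobenius term is computable entry-wise, without any SVD.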
- Provably Convergent Working Set Algorithm for Non-Convex Regularized Regression [0.0]
This paper proposes a working set algorithm for non-convex regularizers, with convergence guarantees.
Our results demonstrate high computational gains compared to the full problem solver, for both block-coordinate and gradient solvers.
arXiv Detail & Related papers (2020-06-24T07:40:31Z)
- Tensor Completion Made Practical [19.229475414802213]
We introduce a new variant of alternating minimization, which in turn is inspired by understanding how the progress measures that guide convergence need to be adapted to the tensor setting.
We show that our algorithm converges linearly to the true tensors even when the factors are highly correlated and can be implemented in nearly linear time.
In contrast, and somewhat surprisingly, we show that the standard version of alternating minimization, without our new twist, can converge at a drastically slower rate in practice (a minimal sketch of that standard baseline follows this entry).
arXiv Detail & Related papers (2020-06-04T21:09:44Z)
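
For reference, the standard alternating-minimization baseline that the entry above contrasts with can be sketched for a 3-way CP model as below; the authors' contribution is a modified variant of this scheme, and the ridge term and names here are ours.

import numpy as np

def update_factor(T, Omega, F1, F2, r, ridge=1e-8):
    # Least-squares update of the mode-0 factor: each row solves a small
    # ridge-regularized system over that slice's observed entries.
    A = np.zeros((T.shape[0], r))
    for i in range(T.shape[0]):
        js, ks = np.nonzero(Omega[i])
        M = F1[js] * F2[ks]          # Khatri-Rao feature rows for slice i
        y = T[i, js, ks]
        A[i] = np.linalg.solve(M.T @ M + ridge * np.eye(r), M.T @ y)
    return A

def als_sweep(T, Omega, A, B, C):
    # One sweep of alternating minimization: update A, B, C in turn.
    r = A.shape[1]
    A = update_factor(T, Omega, B, C, r)
    B = update_factor(T.transpose(1, 0, 2), Omega.transpose(1, 0, 2), A, C, r)
    C = update_factor(T.transpose(2, 0, 1), Omega.transpose(2, 0, 1), A, B, r)
    return A, B, C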
- Support recovery and sup-norm convergence rates for sparse pivotal estimation [79.13844065776928]
In high-dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multitask square-root Lasso-type estimators (the square-root Lasso objective is recalled after this entry).
arXiv Detail & Related papers (2020-01-15T16:11:04Z)
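
The square-root Lasso mentioned above is the canonical pivotal estimator: because the data-fit term is the square root of the residual sum of squares, the theoretically optimal regularization parameter does not depend on the noise level. In the single-task case it reads

\hat{\beta} \in \arg\min_{\beta} \; \frac{\|y - X\beta\|_2}{\sqrt{n}} + \lambda \|\beta\|_1,

and the smoothed and multitask variants adapt the residual norm and the penalty while preserving this pivotal property.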