Orthogonally weighted $\ell_{2,1}$ regularization for rank-aware joint sparse recovery: algorithm and analysis
- URL: http://arxiv.org/abs/2311.12282v1
- Date: Tue, 21 Nov 2023 01:52:15 GMT
- Title: Orthogonally weighted $\ell_{2,1}$ regularization for rank-aware joint sparse recovery: algorithm and analysis
- Authors: Armenak Petrosyan and Konstantin Pieper and Hoang Tran
- Abstract summary: We propose and analyze an efficient algorithm for solving the joint sparse recovery problem using a new regularization-based method, named orthogonally weighted $\ell_{2,1}$ ($\mathit{ow}\ell_{2,1}$).
This method has applications in feature extraction, matrix column selection, and dictionary learning, and it is distinct from the commonly used $\ell_{2,1}$ regularization.
We provide a proof of the method's rank-awareness, establish the existence of solutions to the proposed optimization problem, and develop an efficient algorithm for solving it, whose convergence is analyzed.
- Score: 7.7001263654719985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose and analyze an efficient algorithm for solving the joint sparse
recovery problem using a new regularization-based method, named orthogonally
weighted $\ell_{2,1}$ ($\mathit{ow}\ell_{2,1}$), which is specifically designed
to take into account the rank of the solution matrix. This method has
applications in feature extraction, matrix column selection, and dictionary
learning, and it is distinct from commonly used $\ell_{2,1}$ regularization and
other existing regularization-based approaches because it can exploit the full
rank of the row-sparse solution matrix, a key feature in many applications. We
provide a proof of the method's rank-awareness, establish the existence of
solutions to the proposed optimization problem, and develop an efficient
algorithm for solving it, whose convergence is analyzed. We also present
numerical experiments to illustrate the theory and demonstrate the
effectiveness of our method on real-life problems.
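The abstract names the regularizer but does not reproduce its formula. As a concrete reading, the minimal NumPy sketch below contrasts the standard $\ell_{2,1}$ norm with an orthogonally weighted variant, assuming the definition $\mathit{ow}\ell_{2,1}(X) = \|X (X^\top X)^{-1/2}\|_{2,1}$ for a solution matrix $X$ with full column rank; this definition is our assumption, not a quotation from the paper.

```python
import numpy as np

def l21_norm(X):
    """Standard l_{2,1} norm: the sum of the Euclidean norms of the rows of X."""
    return np.linalg.norm(X, axis=1).sum()

def ow_l21_norm(X):
    """Assumed orthogonally weighted l_{2,1} norm: ||X (X^T X)^{-1/2}||_{2,1}
    for X with full column rank, so the weighted matrix has orthonormal columns."""
    G = X.T @ X                          # Gram matrix, nonsingular by assumption
    w, V = np.linalg.eigh(G)             # G = V diag(w) V^T
    G_inv_sqrt = (V / np.sqrt(w)) @ V.T  # (X^T X)^{-1/2}
    return l21_norm(X @ G_inv_sqrt)

# Example: a 20 x 3 row-sparse matrix (4 nonzero rows, full column rank)
rng = np.random.default_rng(0)
X = np.zeros((20, 3))
X[[1, 5, 9, 14]] = rng.standard_normal((4, 3))
print(l21_norm(X), ow_l21_norm(X))
```

Under this reading, the weighting makes the penalty depend on the column space of $X$ rather than on its scaling, which is consistent with the rank-awareness emphasized in the abstract.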
Related papers
- Predictive Low Rank Matrix Learning under Partial Observations: Mixed-Projection ADMM [5.877778007271621]
We formalize this problem as an optimization problem with an objective that balances the strength of the fit of the reconstruction to the observed entries.
We design an efficient, scalable alternating direction method of multipliers algorithm that produces high quality feasible solutions.
Our algorithm is able to solve problems with $n = 10000$ rows and $m = 10000$ columns in less than a minute.
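The mixed-projection ADMM itself is not spelled out in this summary. For orientation only, a hard-impute-style sketch of the underlying low-rank completion task, alternating a rank-$k$ SVD truncation with restoration of the observed entries; this is a generic technique, not the paper's algorithm.

```python
import numpy as np

def complete_lowrank(M_obs, mask, k, iters=200):
    """Alternate projection onto rank-k matrices (truncated SVD) with
    data consistency (restore the observed entries given by boolean mask)."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :k] * s[:k]) @ Vt[:k]  # nearest rank-k matrix
        X = np.where(mask, M_obs, X)     # keep observed entries fixed
    return X
```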
arXiv Detail & Related papers (2024-07-18T17:33:14Z)
- Matrix Completion with Convex Optimization and Column Subset Selection [0.0]
We present two algorithms that implement our Columns Selected Matrix Completion (CSMC) method.
To study the influence of the matrix size, rank, and the proportion of missing elements on the quality of the solution and the computation time, we performed experiments on synthetic data.
Our thorough analysis shows that CSMC provides solutions of comparable quality to matrix completion algorithms, which are based on convex optimization.
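The two-stage shape of CSMC can be sketched as below: complete a selected column submatrix, then express every column of the matrix in the span of the completed columns via least squares on its observed rows. The stage-1 solver here is a generic hard-impute loop rather than CSMC's convex program, and `col_idx` (selected columns) and `k` (target rank) are assumed inputs.

```python
import numpy as np

def csmc_sketch(M_obs, mask, col_idx, k, iters=200):
    """Stage 1: complete the selected columns (hard-impute stand-in for the
    convex solver). Stage 2: per-column least squares onto the completed columns."""
    C = np.where(mask[:, col_idx], M_obs[:, col_idx], 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        C = (U[:, :k] * s[:k]) @ Vt[:k]
        C = np.where(mask[:, col_idx], M_obs[:, col_idx], C)
    X = np.empty_like(M_obs)
    for j in range(M_obs.shape[1]):
        rows = mask[:, j]  # observed rows of column j
        w, *_ = np.linalg.lstsq(C[rows], M_obs[rows, j], rcond=None)
        X[:, j] = C @ w
    return X
```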
arXiv Detail & Related papers (2024-03-04T10:36:06Z)
- Discretize Relaxed Solution of Spectral Clustering via a Non-Heuristic Algorithm [77.53604156112144]
We develop a first-order term to bridge the original problem and discretization algorithm.
Since the non-heuristic method is aware of the original graph cut problem, the final discrete solution is more reliable.
arXiv Detail & Related papers (2023-10-19T13:57:38Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable components separately, linearizing only the smooth parts.
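Linearizing only the smooth part of an objective over a compact convex set is the core of Frank-Wolfe-type methods; the sketch below shows that step on the probability simplex. It is a generic illustration of the linearization principle, not the paper's fully composite algorithm, which also handles a non-differentiable term.

```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, iters=500):
    """Minimize a smooth f over the probability simplex: at each step,
    minimize the linearization of f over the set (always a vertex) and
    move toward that vertex with the classic 2/(t+2) step size."""
    x = x0.copy()
    for t in range(iters):
        g = grad_f(x)
        v = np.zeros_like(x)
        v[np.argmin(g)] = 1.0          # linear minimizer over the simplex
        x += 2.0 / (t + 2.0) * (v - x)
    return x

# Example: least squares over the simplex
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
x = frank_wolfe_simplex(lambda x: 2.0 * A.T @ (A @ x - b), np.full(10, 0.1))
```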
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability [107.65337427333064]
Stochastic optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z)
- Random Manifold Sampling and Joint Sparse Regularization for Multi-label Feature Selection [0.0]
The model proposed in this paper can obtain the most relevant few features by solving the joint constrained optimization problems of $\ell_{2,1}$ and $\ell_F$ regularization.
Comparative experiments on real-world data sets show that the proposed method outperforms other methods.
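The summary names the penalties but not the solver. A common way to minimize such a joint $\ell_{2,1}$ + Frobenius ($\ell_F$) regularized least-squares model is proximal gradient with row-wise soft thresholding, sketched below; the model and solver are generic choices of ours and do not reproduce the paper's random-manifold-sampling component.

```python
import numpy as np

def prox_l21(W, tau):
    """Proximal operator of tau * ||W||_{2,1}: row-wise soft thresholding."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))

def select_features(X, Y, alpha=0.1, beta=0.01, iters=500, k=10):
    """Proximal gradient on ||XW - Y||_F^2 + beta*||W||_F^2 + alpha*||W||_{2,1};
    features are ranked by the row norms of W (zero rows are discarded)."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    step = 1.0 / (2.0 * np.linalg.norm(X, 2) ** 2 + 2.0 * beta)  # 1 / Lipschitz
    for _ in range(iters):
        grad = 2.0 * X.T @ (X @ W - Y) + 2.0 * beta * W
        W = prox_l21(W - step * grad, step * alpha)
    return np.argsort(-np.linalg.norm(W, axis=1))[:k]
```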
arXiv Detail & Related papers (2022-04-13T15:06:12Z)
- Nearly Optimal Linear Convergence of Stochastic Primal-Dual Methods for Linear Programming [5.126924253766052]
We show that the proposed method exhibits a linear convergence rate for solving sharp instances with a high probability.
We also propose an efficient coordinate-based oracle for unconstrained bilinear problems.
arXiv Detail & Related papers (2021-11-10T04:56:38Z)
- Sparse Quadratic Optimisation over the Stiefel Manifold with Application to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace of a matrix.
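A classical method for the unconstrained core of this problem, finding the dominant eigenspace, is orthogonal (subspace) iteration, sketched below; the paper's sparsity-promoting mechanism is its own contribution and is not reproduced here.

```python
import numpy as np

def orthogonal_iteration(A, p, iters=100, seed=0):
    """Orthogonal iteration: returns an orthonormal basis Q of the dominant
    p-dimensional eigenspace of the symmetric matrix A. For positive
    semidefinite A this maximizes trace(Q^T A Q) over the Stiefel manifold."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], p)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)  # power step, then re-orthonormalize
    return Q
```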
arXiv Detail & Related papers (2021-09-30T19:17:35Z)
- Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z)
- Solving the Robust Matrix Completion Problem via a System of Nonlinear Equations [28.83358353043287]
We consider the problem of robust matrix completion, which aims to recover a low-rank matrix $L_*$ and a sparse matrix $S_*$ from incomplete observations of their sum $M = L_* + S_* \in \mathbb{R}^{m \times n}$.
The algorithm is highly parallelizable and suitable for large scale problems.
Numerical simulations show that the simple method works as expected and is comparable with state-of-the-art methods.
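The paper's reformulation as a system of nonlinear equations is not reproduced in this summary. As a generic illustration of the $M = L_* + S_*$ decomposition, an AltProj-style sketch that alternates entrywise hard thresholding for the sparse part with a truncated SVD for the low-rank part (threshold `thresh` and rank `k` are assumed inputs):

```python
import numpy as np

def robust_complete(M_obs, mask, k, thresh, iters=100):
    """Alternate: S captures large observed residuals (hard thresholding),
    L is the nearest rank-k matrix to the S-corrected data (truncated SVD)."""
    L = np.zeros_like(M_obs)
    S = np.zeros_like(M_obs)
    for _ in range(iters):
        R = np.where(mask, M_obs - L, 0.0)        # observed residual
        S = np.where(np.abs(R) > thresh, R, 0.0)  # sparse outliers
        U, s, Vt = np.linalg.svd(np.where(mask, M_obs - S, L),
                                 full_matrices=False)
        L = (U[:, :k] * s[:k]) @ Vt[:k]           # nearest rank-k matrix
    return L, S
```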
arXiv Detail & Related papers (2020-03-24T17:28:15Z)