Tractable downfall of basis pursuit in structured sparse optimization
- URL: http://arxiv.org/abs/2503.19126v1
- Date: Mon, 24 Mar 2025 20:27:54 GMT
- Title: Tractable downfall of basis pursuit in structured sparse optimization
- Authors: Maya V. Marmary, Christian Grussler
- Abstract summary: We investigate the problem of finding the sparsest solution to a linear underdetermined system of equations. In particular, we are able to identify matrix columns that correspond to unrecoverable non-zero entries of the sparsest solution, and to conclude the uniqueness of the sparsest solution in polynomial time.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The problem of finding the sparsest solution to a linear underdetermined system of equations, as it often appears in data analysis, optimal control and system identification problems, is considered. This non-convex problem is commonly solved by convexification via $\ell_1$-norm minimization, also known as basis pursuit. In this work, a class of structured matrices, representing the system of equations, is introduced for which the basis pursuit approach tractably fails to recover the sparsest solution. In particular, we are able to identify matrix columns that correspond to unrecoverable non-zero entries of the sparsest solution, as well as to conclude the uniqueness of the sparsest solution in polynomial time. These deterministic guarantees contrast popular probabilistic ones, and as such, provide valuable insights into the a priori design of sparse optimization problems. As our matrix structure appears naturally in optimal control problems, we exemplify our findings by showing that it is possible to verify a priori that basis pursuit may fail in finding fuel optimal regulators for a class of discrete-time linear time-invariant systems.
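To make the basis pursuit relaxation concrete, the following is a minimal sketch (not the paper's verification procedure) that solves $\min \|x\|_1$ subject to $Ax = b$ as a linear program via the standard split $x = x^+ - x^-$; the matrix `A` and ground truth `x_true` below are illustrative placeholders, not data from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. Ax = b by splitting x = xp - xn with xp, xn >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(xp) + sum(xn) = ||x||_1
    A_eq = np.hstack([A, -A])          # enforces A @ (xp - xn) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
    if not res.success:
        raise RuntimeError(res.message)
    xp, xn = res.x[:n], res.x[n:]
    return xp - xn

# Illustrative underdetermined system (placeholder data).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 12))
x_true = np.zeros(12)
x_true[[2, 7]] = [1.5, -2.0]           # 2-sparse ground truth
b = A @ x_true
x_hat = basis_pursuit(A, b)
print(np.round(x_hat, 4))  # for generic Gaussian A, typically recovers x_true
```

For the structured matrices studied in the paper, this relaxation provably fails to recover certain non-zero entries, which is exactly the tractable failure the abstract describes.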
Related papers
- Computational Efficient Informative Nonignorable Matrix Completion: A Row- and Column-Wise Matrix U-Statistic Pseudo-Likelihood Approach [2.2306682526405868]
We establish a unified framework to deal with the high dimensional matrix completion problem.
We derive a row- and column-wise matrix U-statistics type loss function, with the nuclear norm for regularization.
A singular value proximal gradient algorithm is developed to solve the proposed optimization problem (a sketch of the underlying singular value thresholding step follows this entry).
arXiv Detail & Related papers (2025-04-05T01:41:53Z)
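As context for the proximal step in such nuclear-norm-regularized problems, here is a minimal sketch of singular value thresholding, the classical proximal operator of the nuclear norm; this is the textbook operator, not necessarily the paper's exact algorithm.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * (nuclear norm) at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # soft-threshold the singular values
    return (U * s_shrunk) @ Vt
```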
- Feature selection in linear SVMs via a hard cardinality constraint: a scalable SDP decomposition approach [3.7876216422538485]
We study the embedded feature selection problem in linear Support Vector Machines (SVMs), in which a cardinality constraint is employed.
The problem is NP-hard due to the presence of the cardinality constraint, even though the original linear SVM is solvable in polynomial time.
To handle the hard problem, we first introduce two mixed-integer formulations, for which novel semidefinite relaxations are proposed (a sketch of a generic cardinality-constrained formulation follows this entry).
arXiv Detail & Related papers (2024-04-15T19:15:32Z)
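For illustration, a hedged sketch of one standard big-M mixed-integer formulation of a cardinality-constrained linear SVM; this is a generic formulation, not necessarily the paper's, and the parameters `M_big` and `k` are assumptions. Solving it requires a mixed-integer-capable solver.

```python
import cvxpy as cp

def cardinality_svm(X, y, k, C=1.0, M_big=10.0):
    """Big-M MIQP: hinge-loss linear SVM with at most k non-zero weights."""
    m, n = X.shape
    w = cp.Variable(n)
    b = cp.Variable()
    z = cp.Variable(n, boolean=True)     # z_j = 1 iff feature j may be used
    hinge = cp.sum(cp.pos(1 - cp.multiply(y, X @ w + b)))
    objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * hinge)
    constraints = [cp.abs(w) <= M_big * z,   # forces w_j = 0 whenever z_j = 0
                   cp.sum(z) <= k]           # hard cardinality constraint
    cp.Problem(objective, constraints).solve()  # needs a MIQP solver, e.g. SCIP or GUROBI
    return w.value, b.value
```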
- Orthogonally weighted $\ell_{2,1}$ regularization for rank-aware joint sparse recovery: algorithm and analysis [7.7001263654719985]
We propose and analyze an efficient algorithm for solving the joint sparse recovery problem using a new regularization-based method, named orthogonally weighted $\ell_{2,1}$ ($\mathit{ow}\ell_{2,1}$).
This method has applications in feature extraction, matrix column selection, and dictionary learning, and it is distinct from the commonly used $\ell_{2,1}$ regularization.
We provide a proof of the method's rank-awareness, establish the existence of solutions to the proposed optimization problem, and develop an efficient algorithm for solving it, whose convergence is analyzed (a sketch of the standard $\ell_{2,1}$ proximal operator, for contrast, follows this entry).
arXiv Detail & Related papers (2023-11-21T01:52:15Z)
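For reference, a minimal sketch of the proximal operator of the plain $\ell_{2,1}$ norm (row-wise soft thresholding); note this is the commonly used regularizer the paper contrasts with, not the orthogonally weighted variant.

```python
import numpy as np

def prox_l21(X, tau):
    """Prox of tau * ||X||_{2,1}: shrink each row of X toward zero."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * X   # rows with l2 norm below tau are set exactly to zero
```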
- Discretize Relaxed Solution of Spectral Clustering via a Non-Heuristic Algorithm [77.53604156112144]
We develop a first-order term to bridge the original problem and discretization algorithm.
Since the non-heuristic method is aware of the original graph cut problem, the final discrete solution is more reliable.
arXiv Detail & Related papers (2023-10-19T13:57:38Z)
- Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z)
- Large-scale matrix optimization based multi microgrid topology design with a constrained differential evolution algorithm [30.792124441010447]
A binary-matrix-based differential evolution algorithm is proposed to solve nonlinear problems.
To deal with the constraints, we propose an improved feasibility rule based environmental selection strategy.
arXiv Detail & Related papers (2022-07-18T00:35:29Z)
- Sparse Quadratic Optimisation over the Stiefel Manifold with Application to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix (a classical orthogonal-iteration baseline is sketched after this entry).
arXiv Detail & Related papers (2021-09-30T19:17:35Z)
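As background, a minimal sketch of classical orthogonal iteration for computing a dominant eigenspace, i.e. an orthonormal basis living on the Stiefel manifold; this is the textbook baseline, not the paper's sparsity-promoting algorithm.

```python
import numpy as np

def orthogonal_iteration(A, k, iters=200):
    """Return an n x k orthonormal basis of the dominant eigenspace of symmetric A."""
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)   # power step followed by re-orthonormalization
    return Q                         # Q lies on the Stiefel manifold: Q.T @ Q = I_k
```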
- Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining reconstruction error with $l_{2,p}$-norm regularization (the general form of such an objective is sketched after this entry).
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
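For orientation, a hedged sketch of the general shape of such an objective, combining self-reconstruction error with a row-sparsity-inducing $l_{2,p}$ penalty on the projection matrix; the paper's exact formulation may differ, and `lam` and `p` are assumed parameters.

```python
import numpy as np

def sparse_pca_objective(X, W, lam=0.1, p=0.5):
    """Reconstruction error plus l_{2,p}^p penalty on projection W (features x k)."""
    recon = np.linalg.norm(X - X @ W @ W.T, "fro") ** 2
    row_norms = np.linalg.norm(W, axis=1)   # l2 norm of each row of W
    penalty = np.sum(row_norms ** p)        # 0 < p <= 1 promotes row sparsity
    return recon + lam * penalty
```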
- Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems [107.3868459697569]
We introduce an eigendecomposition-free approach to training a deep network.
We show that our approach is much more robust than explicit differentiation of the eigendecomposition.
Our method has better convergence properties and yields state-of-the-art results.
arXiv Detail & Related papers (2020-04-15T04:29:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.