Recovering Simultaneously Structured Data via Non-Convex Iteratively
Reweighted Least Squares
- URL: http://arxiv.org/abs/2306.04961v2
- Date: Thu, 18 Jan 2024 16:47:33 GMT
- Title: Recovering Simultaneously Structured Data via Non-Convex Iteratively
Reweighted Least Squares
- Authors: Christian Kümmerle and Johannes Maly
- Abstract summary: We propose a new algorithm for recovering data that adheres to multiple, heterogeneous low-dimensional structures from linear observations.
We show that the IRLS method identifies simultaneously row-sparse and low-rank matrices from fewer measurements than state-of-the-art methods.
- Score: 0.8702432681310401
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new algorithm for the problem of recovering data that adheres to
multiple, heterogeneous low-dimensional structures from linear observations.
Focusing on data matrices that are simultaneously row-sparse and low-rank, we
propose and analyze an iteratively reweighted least squares (IRLS) algorithm
that is able to leverage both structures. In particular, it optimizes a
combination of non-convex surrogates for row-sparsity and rank, a balancing of
which is built into the algorithm. We prove locally quadratic convergence of
the iterates to a simultaneously structured data matrix in a regime of minimal
sample complexity (up to constants and a logarithmic factor), which is known to
be impossible for a combination of convex surrogates. In experiments, we show
that the IRLS method exhibits favorable empirical convergence, identifying
simultaneously row-sparse and low-rank matrices from fewer measurements than
state-of-the-art methods. Code is available at
https://github.com/ckuemmerle/simirls.
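As a rough illustration of the IRLS template the abstract describes (not the authors' SimIRLS implementation; see the linked repository for that), the sketch below alternates between a measurement-consistent weighted least-squares solve and a reweighting step that combines a row-sparsity surrogate with a rank surrogate. All dimensions, the equal weighting of the two surrogates, and the smoothing schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (assumptions, not the paper's settings).
n1, n2, r, s, m = 32, 32, 2, 6, 220

# Ground truth: s nonzero rows, rank r.
X_true = np.zeros((n1, n2))
rows = rng.choice(n1, size=s, replace=False)
X_true[rows] = rng.standard_normal((s, r)) @ rng.standard_normal((r, n2))

# Gaussian measurement operator acting on vec(X) (column-major).
A = rng.standard_normal((m, n1 * n2)) / np.sqrt(m)
y = A @ X_true.ravel(order="F")

# IRLS: minimize the surrogate vec(X)^T (I ⊗ (D + W)) vec(X)
# subject to A vec(X) = y, then re-estimate the weights.
X = np.linalg.lstsq(A, y, rcond=None)[0].reshape((n1, n2), order="F")
eps = 1.0
for it in range(30):
    # Row-sparsity weights: D = diag(1 / (||X_i||^2 + eps^2)).
    D = np.diag(1.0 / (np.sum(X**2, axis=1) + eps**2))
    # Rank weights from a smoothed log-det surrogate: W = (X X^T + eps^2 I)^{-1}.
    W = np.linalg.inv(X @ X.T + eps**2 * np.eye(n1))
    # Constrained weighted least squares in closed form:
    # vec(X) = H^{-1} A^T (A H^{-1} A^T)^{-1} y  with  H = I ⊗ (D + W).
    Hinv = np.kron(np.eye(n2), np.linalg.inv(D + W))
    G = A @ Hinv @ A.T
    X = (Hinv @ A.T @ np.linalg.solve(G, y)).reshape((n1, n2), order="F")
    eps = max(0.9 * eps, 1e-8)  # crude smoothing schedule (assumption)

print("relative error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```

The paper's algorithm additionally balances the two surrogates and couples the smoothing parameters to the iterates, which this sketch deliberately omits.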
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem, where the task is to recover an unknown signal from magnitude-only measurements that may be corrupted by outliers.
Our proposed method avoids the need for a computationally expensive spectral initialization, using simple gradient steps while remaining robust to outliers.
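The summary above is terse; as a generic sketch of gradient-based robust phase retrieval (a trimmed amplitude-flow step, not necessarily the paper's alternating-minimization algorithm, and with all parameters chosen for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 400
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = np.abs(A @ x_true)
# Corrupt a small fraction of the magnitude measurements with outliers.
out = rng.choice(m, size=m // 20, replace=False)
y[out] += 10.0 * np.abs(rng.standard_normal(out.size))

x = rng.standard_normal(n)          # random start: no spectral initialization
step, keep = 0.1, int(0.9 * m)      # keep the 90% smallest residuals
for _ in range(500):
    z = A @ x
    res = np.abs(z) - y
    # Trim the largest residuals so gross outliers do not steer the gradient.
    idx = np.argsort(np.abs(res))[:keep]
    x -= step * A[idx].T @ (res[idx] * np.sign(z[idx])) / keep

# The global sign is unidentifiable; align before measuring the error.
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print("relative error:", err / np.linalg.norm(x_true))
```

In general, good initialization matters for nonconvex phase retrieval; this sketch only illustrates how a trimmed gradient step can tolerate outliers.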
arXiv Detail & Related papers (2024-09-07T06:37:23Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
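As a minimal illustration of a trust-region step that tolerates an inexact Hessian (a generic Cauchy-point scheme with a subsampled Hessian on a least-squares toy problem, not the paper's method or analysis):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 500, 20
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.1 * rng.standard_normal(m)

f = lambda w: 0.5 * np.mean((A @ w - b) ** 2)
grad = lambda w: A.T @ (A @ w - b) / m

def hess_vec(v, batch):
    # Inexact Hessian: Hessian-vector product on a random subsample.
    return A[batch].T @ (A[batch] @ v) / len(batch)

w, delta = np.zeros(n), 1.0
for _ in range(100):
    g = grad(w)
    gnorm = np.linalg.norm(g)
    if gnorm < 1e-10:
        break
    batch = rng.choice(m, size=m // 10, replace=False)
    Hv = lambda v: hess_vec(v, batch)
    # Cauchy point: minimize the quadratic model along -g within radius delta.
    gHg = g @ Hv(g)
    tau = 1.0 if gHg <= 0 else min(1.0, gnorm**3 / (delta * gHg))
    p = -tau * (delta / gnorm) * g
    # Standard trust-region acceptance test and radius update.
    pred = -(g @ p + 0.5 * p @ Hv(p))
    rho = (f(w) - f(w + p)) / pred
    if rho > 0.1:
        w = w + p
    delta = 2 * delta if rho > 0.75 else (0.25 * delta if rho < 0.1 else delta)

print("final objective:", f(w))
```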
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the covariance matrices the model relies on.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Learning Large Causal Structures from Inverse Covariance Matrix via Sparse Matrix Decomposition [2.403264213118039]
This paper focuses on learning causal structures from the inverse covariance matrix.
The proposed method, called ICID, is based on continuous optimization of a matrix decomposition model.
We show that ICID efficiently identifies the sought directed acyclic graph (DAG), assuming knowledge of the noise variances.
arXiv Detail & Related papers (2022-11-25T16:32:56Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
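The summary does not spell out the estimator, but a standard matrix-free recipe matching the description couples a Hutchinson/Bekas-style diagonal estimator with conjugate gradients to apply the inverse; a sketch under that assumption:

```python
import numpy as np

def cg(matvec, b, iters=200, tol=1e-10):
    """Basic conjugate gradient for SPD systems, given only a mat-vec."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(2)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)     # SPD model "precision" matrix
matvec = lambda v: A @ v        # in practice only mat-vecs are available

# Diagonal estimator (Bekas et al.): with Rademacher probes v_k,
# diag(A^{-1}) ≈ (sum_k v_k ⊙ A^{-1} v_k) / (sum_k v_k ⊙ v_k).
num, den = np.zeros(n), np.zeros(n)
for _ in range(100):
    v = rng.choice([-1.0, 1.0], size=n)
    num += v * cg(matvec, v)    # A^{-1} v via CG, never forming A^{-1}
    den += v * v
diag_est = num / den

diag_true = np.diag(np.linalg.inv(A))
print("relative error:",
      np.linalg.norm(diag_est - diag_true) / np.linalg.norm(diag_true))
```

With 100 probes the estimate is only rough; the probe count trades accuracy against the number of CG solves.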
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- Nonlinear matrix recovery using optimization on the Grassmann manifold [18.655422834567577]
We investigate the problem of recovering a partially observed high-rank matrix whose columns obey a nonlinear structure such as a union of subspaces.
We show that the alternating iterates converge to a unique limit point using the Kurdyka-Łojasiewicz property.
arXiv Detail & Related papers (2021-09-13T16:13:13Z)
- A Scalable Second Order Method for Ill-Conditioned Matrix Completion from Few Samples [0.0]
We propose an iterative algorithm for low-rank matrix completion.
It is able to complete very ill-conditioned matrices with a condition number of up to $10^{10}$ from few samples.
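For intuition about how IRLS applies to completion, here is a textbook sketch for the smoothed log-det rank surrogate under entrywise sampling. The paper's algorithm is a more sophisticated, scalable second-order variant, so this only illustrates the reweighting idea; sizes and the smoothing schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2, r = 40, 40, 2
X_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
mask = rng.random((n1, n2)) < 0.35          # observed entries

X = np.where(mask, X_true, 0.0)
eps = 1.0
for _ in range(40):
    # Smoothed log-det weight matrix, as in rank-surrogate IRLS.
    W = np.linalg.inv(X @ X.T + eps**2 * np.eye(n1))
    # Columns decouple: minimize x^T W x with observed entries held fixed.
    for j in range(n2):
        k = mask[:, j]          # known (observed) rows of column j
        u = ~k                  # unknown rows to fill in
        if u.any():
            X[u, j] = -np.linalg.solve(W[np.ix_(u, u)],
                                       W[np.ix_(u, k)] @ X[k, j])
    eps = max(0.85 * eps, 1e-9)

print("relative error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```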
arXiv Detail & Related papers (2021-06-03T20:31:00Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
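As an illustration of how such an $l_{2,p}$ penalty can be handled, here is a generic IRLS/majorize-minimize sketch for a self-reconstruction model with row-wise $l_{2,p}$ regularization; the paper's exact model and solver may differ, and all names and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, p, lam, eps = 100, 30, 0.5, 1.0, 1e-6
X = rng.standard_normal((n, d))
X[:, :5] = X[:, 25:30] @ rng.standard_normal((5, 5))   # 5 redundant features

G = X.T @ X
W = np.linalg.solve(G + lam * np.eye(d), G)   # ridge warm start
for _ in range(50):
    # Row-wise weights from the smoothed penalty sum_i (||w_i||^2 + eps)^{p/2};
    # concavity in ||w_i||^2 yields a quadratic majorizer.
    dvec = (p / 2) * (np.sum(W**2, axis=1) + eps) ** (p / 2 - 1)
    # Closed-form minimizer of ||X - XW||_F^2 + lam * sum_i d_i ||w_i||^2.
    W = np.linalg.solve(G + lam * np.diag(dvec), G)

scores = np.linalg.norm(W, axis=1)            # rank features by row norm
print("top features:", np.argsort(scores)[::-1][:10])
```

Features with large row norms of W reconstruct the data well and would be selected; the redundant columns receive near-zero rows.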
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
- Learning Mixtures of Low-Rank Models [89.39877968115833]
We study the problem of learning mixtures of low-rank models.
We develop an algorithm that is guaranteed to recover the unknown matrices with near-optimal sample complexity.
In addition, the proposed algorithm is provably stable against random noise.
arXiv Detail & Related papers (2020-09-23T17:53:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.