Semi-Random Sparse Recovery in Nearly-Linear Time
- URL: http://arxiv.org/abs/2203.04002v1
- Date: Tue, 8 Mar 2022 10:56:46 GMT
- Title: Semi-Random Sparse Recovery in Nearly-Linear Time
- Authors: Jonathan A. Kelner, Jerry Li, Allen Liu, Aaron Sidford, Kevin Tian
- Abstract summary: We investigate the brittleness of fast sparse recovery algorithms to generative model changes.
Our approach differs from prior fast iterative methods with provable guarantees under semi-random generative models.
We design a new iterative method tailored to the geometry of sparse recovery which is provably robust to our semi-random model.
- Score: 37.61139884826181
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse recovery is one of the most fundamental and well-studied inverse
problems. Standard statistical formulations of the problem are provably solved
by general convex programming techniques and more practical, fast
(nearly-linear time) iterative methods. However, these latter "fast algorithms"
have previously been observed to be brittle in various real-world settings.
We investigate the brittleness of fast sparse recovery algorithms to
generative model changes through the lens of studying their robustness to a
"helpful" semi-random adversary, a framework which tests whether an algorithm
overfits to input assumptions. We consider the following basic model: let
$\mathbf{A} \in \mathbb{R}^{n \times d}$ be a measurement matrix which contains
an unknown subset of rows $\mathbf{G} \in \mathbb{R}^{m \times d}$ which are
bounded and satisfy the restricted isometry property (RIP), but is otherwise
arbitrary. Letting $x^\star \in \mathbb{R}^d$ be $s$-sparse, and given either
exact measurements $b = \mathbf{A} x^\star$ or noisy measurements $b =
\mathbf{A} x^\star + \xi$, we design algorithms recovering $x^\star$
information-theoretically optimally in nearly-linear time. We extend our
algorithm to hold for weaker generative models, relaxing our planted RIP
assumption to a natural weighted variant, and show that our method's guarantees
naturally interpolate with the quality of the measurement matrix, allowing it,
in some parameter regimes, to run in sublinear time.
Our approach differs from prior fast iterative methods with provable
guarantees under semi-random generative models: natural conditions on a
submatrix which make sparse recovery tractable are NP-hard to verify. We design
a new iterative method tailored to the geometry of sparse recovery which is
provably robust to our semi-random model. We hope our approach opens the door
to new robust, efficient algorithms for natural statistical inverse problems.
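
To make the model concrete, here is a minimal NumPy sketch of a semi-random instance, with iterative hard thresholding standing in for a generic fast baseline. All sizes, the adversary's row choices, and the baseline itself are illustrative assumptions; this is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: d-dimensional signal, m "good" rows, s-sparse x*.
d, m, s, n_extra = 200, 120, 5, 80

# Planted block G: i.i.d. Gaussian rows, scaled so G satisfies RIP w.h.p.
G = rng.standard_normal((m, d)) / np.sqrt(m)

# Semi-random adversary: appends arbitrary rows (here, many copies of one
# direction v) that skew the spectrum of the full measurement matrix A.
v = rng.standard_normal(d)
v /= np.linalg.norm(v)
extra = np.outer(rng.standard_normal(n_extra), v)
A = np.vstack([G, extra])
A = A[rng.permutation(A.shape[0])]  # which rows are "good" is unknown

# s-sparse ground truth and exact measurements b = A @ x*.
x_star = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
x_star[support] = rng.standard_normal(s)
b = A @ x_star

def iht(A, b, s, iters=500):
    """Iterative hard thresholding: the kind of fast baseline whose
    guarantees rely on A itself being RIP, which the adversary breaks."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step for the gradient map
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * A.T @ (b - A @ x)       # gradient step on 0.5*||Ax-b||^2
        keep = np.argpartition(np.abs(x), -s)[-s:]
        pruned = np.zeros_like(x)
        pruned[keep] = x[keep]                 # keep only the s largest entries
        x = pruned
    return x

x_hat = iht(A, b, s)
print("relative error:", np.linalg.norm(x_hat - x_star) / np.linalg.norm(x_star))
```

Because the adversarial rows can badly skew the spectrum of $\mathbf{A}$, a fixed-step baseline of this kind can converge slowly or stall even though the planted block $\mathbf{G}$ alone would suffice for recovery; this is the brittleness the paper's new iterative method is designed to avoid.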
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem where the task is to recover an unknown signal.
The proposed oracle avoids the need for a computationally expensive spectral descent, using a simple gradient step that is robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z)
- Inverting the Leverage Score Gradient: An Efficient Approximate Newton Method [10.742859956268655]
This paper aims to recover the intrinsic model parameters given the leverage score gradient.
We specifically scrutinize the inversion of the leverage score gradient, denoted as $g(x)$.
arXiv Detail & Related papers (2024-08-21T01:39:42Z)
- Private estimation algorithms for stochastic block models and mixture models [63.07482515700984]
We give general tools for designing efficient private estimation algorithms.
These yield the first efficient $(\epsilon, \delta)$-differentially private algorithm for both weak recovery and exact recovery.
arXiv Detail & Related papers (2023-01-11T09:12:28Z)
- Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper bounds can be extended to two-layer ReLU neural networks.
arXiv Detail & Related papers (2022-02-23T02:39:04Z)
- Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean Estimation [58.24280149662003]
We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset.
We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees.
arXiv Detail & Related papers (2021-06-16T03:34:14Z)
- Randomized Exploration for Reinforcement Learning with General Value Function Approximation [122.70803181751135]
We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm.
Our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noises.
We complement the theory with an empirical evaluation across known difficult exploration tasks.
arXiv Detail & Related papers (2021-06-15T02:23:07Z)
- Proximal and Federated Random Reshuffling [11.83842808044211]
We propose two new algorithms for Random Reshuffling.
ProxRR and FedRR solve composite convex finite-sum minimization problems.
ProxRR is faster than algorithms that evaluate the proximal operator in every iteration.
arXiv Detail & Related papers (2021-02-12T18:59:24Z)
- Model Selection in Contextual Stochastic Bandit Problems [51.94632035240787]
We develop a meta-algorithm that selects between base algorithms.
We show through a lower bound that even when one of the base algorithms has $O(\sqrt{T})$ regret, in general it is impossible to get better than $\Omega(\sqrt{T})$ regret.
arXiv Detail & Related papers (2020-03-03T18:46:34Z)
- Learning Sparse Classifiers: Continuous and Mixed Integer Optimization Perspectives [10.291482850329892]
Mixed integer programming (MIP) can be used to solve (to optimality) $\ell_0$-regularized regression problems.
We propose two classes of scalable algorithms: an exact algorithm that can handle $p \approx 50,000$ features in a few minutes, and approximate algorithms that can address instances with $p \approx 10^6$ features.
In addition, we present new estimation error bounds for $\ell_0$-regularized estimators.
arXiv Detail & Related papers (2020-01-17T18:47:02Z)
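
As a concrete illustration of the $\ell_0$-regularized formulation in the last entry, here is a minimal big-M mixed integer sketch in cvxpy. The toy data, the big-M constant, and the sparsity budget $k$ are hypothetical, and this generic formulation is not the paper's specific algorithms.

```python
import cvxpy as cp
import numpy as np

# Toy data (hypothetical sizes; the paper targets p ~ 50,000 and beyond).
rng = np.random.default_rng(1)
n, p, k = 50, 20, 3
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:k] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.01 * rng.standard_normal(n)

# Cardinality-constrained least squares via big-M:
# z_j = 1 iff feature j may be nonzero; sum(z) <= k enforces sparsity.
beta = cp.Variable(p)
z = cp.Variable(p, boolean=True)
M = 10.0  # big-M bound on |beta_j|; must dominate the true coefficient scale
problem = cp.Problem(
    cp.Minimize(0.5 * cp.sum_squares(X @ beta - y)),
    [cp.abs(beta) <= M * z, cp.sum(z) <= k],
)
problem.solve()  # requires a mixed-integer-capable solver (e.g., SCIP or GUROBI)
print("selected support:", np.flatnonzero(np.abs(beta.value) > 1e-6))
```

The binary variables $z_j$ switch coefficients on and off, so $\sum_j z_j \le k$ enforces the cardinality constraint directly; scaling such formulations to $p \approx 50,000$ and beyond is what the paper's specialized algorithms address.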
This list is automatically generated from the titles and abstracts of the papers on this site.