Support Recovery of Sparse Signals from a Mixture of Linear Measurements
- URL: http://arxiv.org/abs/2106.05951v1
- Date: Thu, 10 Jun 2021 17:48:13 GMT
- Title: Support Recovery of Sparse Signals from a Mixture of Linear Measurements
- Authors: Venkata Gandikota, Arya Mazumdar, Soumyabrata Pal
- Abstract summary: Recovery of support of a sparse vector from simple measurements is a widely studied problem.
We consider generalizations of this problem: mixtures of linear regressions, and mixtures of linear classifiers.
We provide algorithms that use a number of measurements polynomial in $k, \log n$ and quasi-polynomial in $\ell$ to recover the supports of all the unknown vectors in the mixture.
- Score: 48.556633593447465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recovery of support of a sparse vector from simple measurements is a widely
studied problem, considered under the frameworks of compressed sensing, 1-bit
compressed sensing, and more general single index models. We consider
generalizations of this problem: mixtures of linear regressions, and mixtures
of linear classifiers, where the goal is to recover supports of multiple sparse
vectors using only a small number of possibly noisy linear, and 1-bit
measurements respectively. The key challenge is that the measurements from
different vectors are randomly mixed. Both of these problems were also
extensively studied recently. In mixtures of linear classifiers, each
observation indicates the side of the queried hyperplane on which a randomly
chosen unknown vector lies, whereas in mixtures of linear regressions we
observe the projection of a randomly chosen unknown vector onto the queried
hyperplane. The primary
step in recovering the unknown vectors from the mixture is to first identify
the support of all the individual component vectors. In this work, we study the
number of measurements sufficient for recovering the supports of all the
component vectors in a mixture in both these models. We provide algorithms that
use a number of measurements polynomial in $k, \log n$ and quasi-polynomial in
$\ell$, to recover the support of all the $\ell$ unknown vectors in the mixture
with high probability when each individual component is a $k$-sparse
$n$-dimensional vector.
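To make the two query models concrete, here is a minimal simulation sketch (the Gaussian query distribution, the problem sizes, and all names are illustrative assumptions, not specifics from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: ambient dimension n, sparsity k, mixture size ell.
n, k, ell = 100, 5, 3

# Each unknown component is a k-sparse n-dimensional vector.
components = np.zeros((ell, n))
for i in range(ell):
    support = rng.choice(n, size=k, replace=False)
    components[i, support] = rng.standard_normal(k)

def mixture_regression_oracle(x, noise_std=0.0):
    """Mixture of linear regressions: the projection <beta_i, x> (+ noise)
    of a uniformly random, unidentified component beta_i."""
    i = rng.integers(ell)
    return components[i] @ x + noise_std * rng.standard_normal()

def mixture_classifier_oracle(x):
    """Mixture of linear classifiers: which side of the queried hyperplane
    {z : <z, x> = 0} a uniformly random component lies on."""
    i = rng.integers(ell)
    return 1 if components[i] @ x >= 0 else -1

# One query under each model; the querier never learns which i was drawn.
x = rng.standard_normal(n)
print(mixture_regression_oracle(x, noise_std=0.1), mixture_classifier_oracle(x))

# Support recovery goal: identify supp(beta_i) for every component i.
true_supports = [set(np.flatnonzero(b)) for b in components]
```

The point of the sketch is the random index `i` drawn afresh at every query: responses from different components are interleaved, which is what makes support recovery harder here than in standard (1-bit) compressed sensing.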
Related papers
- Non-convex matrix sensing: Breaking the quadratic rank barrier in the sample complexity [11.412228884390784]
We study the problem of reconstructing a low-rank matrix from a few measurements.
We show that factorized gradient descent with spectral initialization converges to the ground truth given a sufficient number of samples.
arXiv Detail & Related papers (2024-08-20T14:09:28Z)
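As a generic illustration of factorized gradient descent with spectral initialization for matrix sensing, a minimal sketch for the symmetric PSD case with Gaussian sensing matrices follows (the step size and problem sizes are assumptions, and this is not the above paper's exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, m = 30, 2, 2000                      # illustrative sizes

# Ground-truth rank-r PSD matrix M* = U* U*^T.
U_star = rng.standard_normal((n, r))
M_star = U_star @ U_star.T

# Gaussian sensing matrices A_i and linear measurements y_i = <A_i, M*>.
A = rng.standard_normal((m, n, n))
y = np.einsum('mij,ij->m', A, M_star)

# Spectral initialization: top-r eigenpairs of (1/m) * sum_i y_i * A_i,
# which concentrates around M* for i.i.d. Gaussian A_i.
S = np.einsum('m,mij->ij', y, A) / m
S = (S + S.T) / 2
vals, vecs = np.linalg.eigh(S)             # ascending eigenvalues
U = vecs[:, -r:] * np.sqrt(np.maximum(vals[-r:], 0.0))

# Factorized GD on f(U) = (1/4m) * sum_i (<A_i, U U^T> - y_i)^2.
eta = 0.1 / np.linalg.norm(M_star, 2)      # heuristic step size (assumption)
A_sym = (A + A.transpose(0, 2, 1)) / 2
for _ in range(500):
    resid = np.einsum('mij,ij->m', A, U @ U.T) - y
    U -= eta * (np.einsum('m,mij->ij', resid, A_sym) @ U) / m

print(np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star))  # small
```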
- Precise Asymptotics for Spectral Methods in Mixed Generalized Linear Models [31.58736590532443]
We consider the problem of estimating two statistically independent signals in a mixed generalized linear model.
Our characterization exploits a mix of tools from random matrices, free probability and the theory of approximate message passing algorithms.
arXiv Detail & Related papers (2022-11-21T11:35:25Z)
- Sparse Mixed Linear Regression with Guarantees: Taming an Intractable Problem with Invex Relaxation [31.061339148448006]
In this paper, we study the problem of sparse mixed linear regression on an unlabeled dataset.
We provide a novel invex relaxation for this intractable problem which leads to a solution with provable theoretical guarantees.
arXiv Detail & Related papers (2022-06-02T17:38:11Z)
- On Learning Mixture Models with Sparse Parameters [44.3425205248937]
We study mixtures with high dimensional sparse latent parameter vectors and consider the problem of support recovery of those vectors.
We provide efficient algorithms for support recovery that have a logarithmic sample complexity dependence on the dimensionality of the latent space.
arXiv Detail & Related papers (2022-02-24T07:44:23Z)
- When Random Tensors meet Random Matrices [50.568841545067144]
This paper studies asymmetric order-$d$ spiked tensor models with Gaussian noise.
We show that the analysis of the considered model boils down to the analysis of an equivalent spiked symmetric block-wise random matrix.
arXiv Detail & Related papers (2021-12-23T04:05:01Z)
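For reference, a common convention for an asymmetric order-$d$ spiked tensor model with Gaussian noise, as in the entry above, reads (the normalization is my assumption and may differ from the paper's):
$$\mathsf{T} \;=\; \beta\, x_1 \otimes x_2 \otimes \cdots \otimes x_d \;+\; \frac{1}{\sqrt{n}}\,\mathsf{N}, \qquad \|x_i\|_2 = 1, \quad \mathsf{N}_{i_1 \ldots i_d} \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, 1),$$
where $\beta$ is the signal-to-noise ratio and the entry's claim is that detecting and estimating the spikes $x_i$ reduces to an equivalent spiked symmetric block-wise random matrix.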
- Robust 1-bit Compressive Sensing with Partial Gaussian Circulant Matrices and Generative Priors [54.936314353063494]
We provide recovery guarantees for a correlation-based optimization algorithm for robust 1-bit compressive sensing.
We make use of a practical iterative algorithm, and perform numerical experiments on image datasets to corroborate our results.
arXiv Detail & Related papers (2021-08-08T05:28:06Z)
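For background on the entry above, the classical correlation-based estimator for 1-bit compressive sensing with Gaussian measurements can be sketched as follows (the paper additionally handles partial Gaussian circulant matrices and generative priors, which this minimal sketch does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(2)
n, s, m = 200, 5, 5000                     # illustrative sizes

# s-sparse, unit-norm signal.
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)

# 1-bit measurements y_i = sign(<a_i, x>) with i.i.d. Gaussian rows a_i.
A = rng.standard_normal((m, n))
y = np.sign(A @ x)

# Correlation estimator: for Gaussian rows, E[(1/m) A^T y] = sqrt(2/pi) * x,
# so the normalized correlation recovers the direction of x.
x_hat = A.T @ y / m
x_hat /= np.linalg.norm(x_hat)

# Simple sparsification: keep the s largest-magnitude coordinates.
print(sorted(np.argsort(np.abs(x_hat))[-s:]), sorted(np.flatnonzero(x)))
```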
- Analysis of Truncated Orthogonal Iteration for Sparse Eigenvector Problems [78.95866278697777]
We propose two variants of the Truncated Orthogonal Iteration to compute multiple leading eigenvectors with sparsity constraints simultaneously.
We then apply our algorithms to solve the sparse principal component analysis problem for a wide range of test datasets.
arXiv Detail & Related papers (2021-03-24T23:11:32Z)
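One plausible form of a truncated orthogonal iteration for sparse eigenvectors, as named in the entry above, is sketched below (the truncation rule here, row-wise hard thresholding followed by QR re-orthonormalization, is my assumption; the paper's two proposed variants may differ):

```python
import numpy as np

rng = np.random.default_rng(3)
n, r, s = 50, 3, 5                          # illustrative sizes

# Symmetric matrix with planted sparse leading eigenvectors.
V_true = np.zeros((n, r))
for j in range(r):
    V_true[rng.choice(n, size=s, replace=False), j] = rng.standard_normal(s)
V_true, _ = np.linalg.qr(V_true)
A = V_true @ np.diag([10.0, 8.0, 6.0]) @ V_true.T + 0.1 * np.eye(n)

k = 3 * s                                   # truncation level (assumption)
V = np.linalg.qr(rng.standard_normal((n, r)))[0]
for _ in range(100):
    W = A @ V                               # power step for all r directions
    keep = np.argsort(np.linalg.norm(W, axis=1))[-k:]
    W_trunc = np.zeros_like(W)
    W_trunc[keep] = W[keep]                 # keep k largest-magnitude rows
    V, _ = np.linalg.qr(W_trunc)            # re-orthonormalize

# Distance between estimated and planted eigenspaces (should be small).
print(np.linalg.norm(V @ V.T - V_true @ V_true.T))
```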
- Recovery of sparse linear classifiers from mixture of responses [42.228977798395846]
We show how to learn a collection of hyperplanes from a sequence of binary responses.
Each response is the result of querying with a vector and indicates the side of a randomly chosen hyperplane from the collection on which the query vector lies.
arXiv Detail & Related papers (2020-10-22T22:03:53Z)
- Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k\times k$, rank-$r$ matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses $\mathcal{O}(\frac{kr}{\epsilon^2}\log^2\frac{1}{\epsilon})$ samples, a number linear in the high dimension, and nearly linear in the matrices' typically low rank.
arXiv Detail & Related papers (2020-09-30T19:10:32Z)
- Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider bipartite graphs and formalize their representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
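The linear convergence claim in the last entry is an instance of a standard fact worth recording: if an objective $f$ is $\mu$-strongly convex and $L$-smooth in a neighborhood of the optimum $x^*$, then gradient descent with step size $1/L$ contracts the suboptimality geometrically,
$$f(x_{t+1}) - f(x^*) \;\le\; \Big(1 - \frac{\mu}{L}\Big)\,\big(f(x_t) - f(x^*)\big),$$
so the error falls below any $\epsilon$ after $O\big(\frac{L}{\mu}\log\frac{1}{\epsilon}\big)$ iterations.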
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.