Multi-Reference Alignment for sparse signals, Uniform Uncertainty
Principles and the Beltway Problem
- URL: http://arxiv.org/abs/2106.12996v1
- Date: Thu, 24 Jun 2021 13:13:10 GMT
- Title: Multi-Reference Alignment for sparse signals, Uniform Uncertainty
Principles and the Beltway Problem
- Authors: Subhro Ghosh and Philippe Rigollet
- Abstract summary: We show that \emph{sparse} signals exhibit an intermediate $\sigma^4$ sample complexity even in the classical MRA model.
Our results explore and exploit connections of the MRA estimation problem with two classical topics in applied mathematics.
- Score: 10.591948377239921
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by cutting-edge applications like cryo-electron microscopy
(cryo-EM), the Multi-Reference Alignment (MRA) model entails the learning of an
unknown signal from repeated measurements of its images under the latent action
of a group of isometries and additive noise of magnitude $\sigma$. Despite
significant interest, a clear picture for understanding rates of estimation in
this model has emerged only recently, particularly in the high-noise regime
$\sigma \gg 1$ that is highly relevant in applications. Recent investigations
have revealed a remarkable asymptotic sample complexity of order $\sigma^6$ for
certain signals whose Fourier transforms have full support, in stark contrast
to the traditional $\sigma^2$ that arises in regular models. Often prohibitively
large in practice, these results have prompted the investigation of variations
around the MRA model where better sample complexity may be achieved. In this
paper, we show that \emph{sparse} signals exhibit an intermediate $\sigma^4$
sample complexity even in the classical MRA model. Our results explore and
exploit connections of the MRA estimation problem with two classical topics in
applied mathematics: the \textit{beltway problem} from combinatorial
optimization, and \textit{uniform uncertainty principles} from harmonic
analysis.
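To make the measurement model concrete, the following is a minimal numpy sketch of classical MRA data generation with the cyclic group of shifts as the latent isometry group, together with one shift-invariant statistic (the power spectrum) that survives averaging. The signal length, sparsity, and noise level are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, k, sigma, n = 50, 4, 2.0, 100_000   # length, sparsity, noise, sample size

# An unknown k-sparse signal on Z/LZ (illustrative, not from the paper).
theta = np.zeros(L)
theta[rng.choice(L, size=k, replace=False)] = rng.normal(size=k)

# MRA observations: y_i = R_{s_i} theta + sigma * xi_i, where R_s is a
# cyclic shift by a latent offset s_i and xi_i is standard Gaussian noise.
shifts = rng.integers(0, L, size=n)
Y = np.stack([np.roll(theta, s) for s in shifts]) + sigma * rng.normal(size=(n, L))

# The power spectrum |FFT|^2 is invariant under cyclic shifts, so its
# empirical mean concentrates around |FFT(theta)|^2 + L * sigma^2;
# subtracting the noise bias recovers a shift-invariant feature of theta.
power = (np.abs(np.fft.fft(Y, axis=1)) ** 2).mean(axis=0) - L * sigma**2
print(np.round(power[:5], 1))
print(np.round(np.abs(np.fft.fft(theta)[:5]) ** 2, 1))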
Related papers
- Minimax-optimal estimation for sparse multi-reference alignment with collision-free signals [1.3658544194443192]
We investigate minimax optimality for signal estimation under the MRA model for so-called collision-free signals.
We demonstrate that the minimax optimal rate of estimation for the sparse MRA problem in this setting is $\sigma^2/\sqrt{n}$, where $n$ is the sample size.
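Roughly speaking, a sparse signal is collision-free when the pairwise circular differences of its support are distinct, which is precisely the structure of the beltway problem. A small hedged check of this condition (the support sets below are illustrative):

```python
from itertools import permutations

def is_collision_free(support, L):
    """True iff all ordered pairwise circular differences (mod L) of the
    support are distinct -- the beltway-type condition."""
    diffs = [(a - b) % L for a, b in permutations(support, 2)]
    return len(diffs) == len(set(diffs))

L = 23
print(is_collision_free([0, 1, 3, 7], L))   # True: a Sidon-type set mod 23
print(is_collision_free([0, 1, 2, 3], L))   # False: the difference 1 repeats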
arXiv Detail & Related papers (2023-12-13T02:06:52Z)
- A Robustness Analysis of Blind Source Separation [91.3755431537592]
Blind source separation (BSS) aims to recover an unobserved signal from its mixture $X=f(S)$ under the condition that the transformation $f$ is invertible but unknown.
We present a general framework for analysing such violations and quantifying their impact on the blind recovery of $S$ from $X$.
We show that the behaviour of a generic BSS solution under general deviations from its defining structural assumptions can be profitably analysed in the form of explicit continuity guarantees.
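As a point of reference for the BSS model $X = f(S)$, here is a minimal numpy sketch of the linear case: two non-Gaussian sources are mixed by an unknown invertible matrix and recovered, up to permutation, sign, and scale, by whitening followed by a kurtosis-based rotation search. This is generic two-dimensional ICA, not the robustness framework of the paper; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Two non-Gaussian sources and an unknown invertible linear mixing f(S) = A S.
S = np.vstack([rng.uniform(-1, 1, n), rng.laplace(size=n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Whiten X; the mixing is then determined up to a rotation, which we find
# by maximising the total excess kurtosis (a measure of non-Gaussianity).
X = X - X.mean(axis=1, keepdims=True)
U, ev, _ = np.linalg.svd(np.cov(X))
Z = np.diag(1 / np.sqrt(ev)) @ U.T @ X

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def score(t):
    R = rot(t) @ Z
    return sum(abs(np.mean(r**4) - 3 * np.mean(r**2) ** 2) for r in R)

best = max(np.linspace(0, np.pi / 2, 500), key=score)
S_hat = rot(best) @ Z   # recovers S up to permutation, sign, and scale

# Cross-correlations with the true sources: one entry per row near +/-1.
print(np.round(np.corrcoef(np.vstack([S, S_hat]))[:2, 2:], 2))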
arXiv Detail & Related papers (2023-03-17T16:30:51Z)
- Statistical-Computational Tradeoffs in Mixed Sparse Linear Regression [20.00109111254507]
We show that the problem suffers from a $\frac{k}{\mathrm{SNR}^2}$-to-$\frac{k^2}{\mathrm{SNR}^2}$ statistical-to-computational gap.
We also analyze a simple thresholding algorithm which, outside of the narrow regime where the problem is hard, solves the associated mixed regression detection problem.
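For intuition about thresholding in mixed sparse linear regression, the sketch below uses the symmetric two-component model, where first-order correlations cancel but a second-moment statistic exposes the joint support. This is a hypothetical illustration with an ad hoc threshold, not the paper's algorithm or regime.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, k, sigma = 5000, 200, 5, 0.5

# Symmetric mixed regression: each response uses +beta or -beta at random.
beta = np.zeros(p)
supp = rng.choice(p, size=k, replace=False)
beta[supp] = 1.0
X = rng.normal(size=(n, p))
signs = rng.choice([-1.0, 1.0], size=n)
y = signs * (X @ beta) + sigma * rng.normal(size=n)

# First-order correlations vanish (E[y x] = 0 in the symmetric mixture),
# but T_j = mean(y^2 (x_j^2 - 1)) ~= 2 beta_j^2 separates support from
# non-support coordinates, so a simple threshold on T detects the support.
T = np.mean((y**2)[:, None] * (X**2 - 1), axis=0)
detected = np.flatnonzero(T > 1.0)   # threshold chosen for this toy scale
print(sorted(detected), sorted(supp))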
arXiv Detail & Related papers (2023-03-03T18:03:49Z)
- Complexity Analysis of a Countable-armed Bandit Problem [9.163501953373068]
We study the classical problem of minimizing the expected cumulative regret over a horizon of play $n$.
We propose algorithms that achieve a rate-optimal finite-time instance-dependent regret of $\mathcal{O}(\log n)$ when $K=2$.
While the order of regret and the complexity of the problem suggest a great degree of similarity to the classical MAB problem, properties of the performance bounds and salient aspects of algorithm design are quite distinct from the latter.
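As a baseline for the $K=2$ regret guarantee, here is a generic UCB1 sketch on two arm types with Bernoulli rewards; the countable-armed sampling mechanics and the paper's actual algorithm are not reproduced, and the arm means are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def ucb1(means, horizon):
    """Generic UCB1 on a finite set of arms with Bernoulli rewards."""
    K = len(means)
    counts, sums, regret = np.zeros(K), np.zeros(K), 0.0
    for t in range(1, horizon + 1):
        if t <= K:                      # pull each arm once to initialise
            a = t - 1
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            a = int(np.argmax(ucb))
        counts[a] += 1
        sums[a] += float(rng.random() < means[a])
        regret += max(means) - means[a]
    return regret

# Cumulative regret should grow roughly logarithmically in the horizon.
for n in (1_000, 10_000, 100_000):
    print(n, round(ucb1([0.5, 0.6], n), 1))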
arXiv Detail & Related papers (2023-01-18T00:53:46Z)
- Optimal Online Generalized Linear Regression with Stochastic Noise and Its Application to Heteroscedastic Bandits [88.6139446295537]
We study the problem of online generalized linear regression in the setting of a generalized linear model with possibly unbounded additive noise.
We provide a sharp analysis of the classical follow-the-regularized-leader (FTRL) algorithm to cope with the label noise.
We propose an algorithm based on FTRL to achieve the first variance-aware regret bound.
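For the squared loss with an $\ell_2$ regulariser (the identity-link case), FTRL admits a closed-form iterate, namely online ridge regression, which the following numpy sketch implements; the variance-aware analysis of the paper is not reproduced, and the noise level and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
d, T, lam = 5, 2000, 1.0
theta_star = rng.normal(size=d)

# FTRL iterate: theta_t = argmin sum_{s<t} (y_s - <x_s, theta>)^2 / 2
#                         + lam ||theta||^2 / 2  =  A^{-1} b  below.
A = lam * np.eye(d)        # running lam*I + sum of x x^T
b = np.zeros(d)            # running sum of y x
total_sq_err = 0.0
for t in range(T):
    x = rng.normal(size=d)
    theta = np.linalg.solve(A, b)              # predict before seeing y
    y = x @ theta_star + 0.1 * rng.normal()    # additive stochastic noise
    total_sq_err += (x @ theta - y) ** 2
    A += np.outer(x, x)
    b += y * x

print("avg squared prediction error:", total_sq_err / T)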
arXiv Detail & Related papers (2022-02-28T08:25:26Z)
- Sampling Approximately Low-Rank Ising Models: MCMC meets Variational Methods [35.24886589614034]
We consider Ising models on the hypercube with a general interaction matrix $J$.
Our general result yields the first polynomial-time sampling algorithms for approximately low-rank Ising models.
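A standard MCMC baseline for such models is single-site Glauber dynamics, sketched below on a rank-one (hence low-rank) interaction matrix in the high-temperature regime; the paper's variational ingredients are not reproduced, and $\beta$ and $n$ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def glauber(J, h, steps):
    """Single-site Glauber dynamics for an Ising model on {-1,+1}^n with
    P(x) ~ exp(x^T J x / 2 + h^T x); assumes J symmetric, zero diagonal."""
    n = len(h)
    x = rng.choice([-1.0, 1.0], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        field = J[i] @ x + h[i]                 # local field at site i
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
        x[i] = 1.0 if rng.random() < p_plus else -1.0
    return x

# A rank-one interaction J = (beta/n) u u^T with the diagonal zeroed out.
n, beta = 50, 0.5
J = (beta / n) * np.outer(np.ones(n), np.ones(n))
np.fill_diagonal(J, 0.0)
x = glauber(J, h=np.zeros(n), steps=50_000)
print("mean spin:", x.mean())   # near 0 in this high-temperature regime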
arXiv Detail & Related papers (2022-02-17T21:43:50Z)
- A fast asynchronous MCMC sampler for sparse Bayesian inference [10.535140830570256]
We propose a very fast approximate Markov Chain Monte Carlo (MCMC) sampling framework that is applicable to a large class of sparse Bayesian inference problems.
We show that in high-dimensional linear regression problems, the Markov chain generated by the proposed algorithm admits an invariant distribution that correctly recovers the main signal.
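The sequential baseline that such samplers approximate is single-site Gibbs for spike-and-slab linear regression; a minimal numpy sketch follows (standard conjugate updates, with illustrative prior and noise parameters; the paper's asynchronous approximation itself is not reproduced).

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, s2, tau2, pi = 200, 50, 0.25, 1.0, 0.1  # noise var, slab var, incl. prob

# Sparse linear regression y = X beta + noise with a spike-and-slab prior.
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + np.sqrt(s2) * rng.normal(size=n)

beta = np.zeros(p)
col_ss = (X**2).sum(axis=0)
for it in range(200):                           # sequential Gibbs sweeps
    for j in rng.permutation(p):
        r = y - X @ beta + X[:, j] * beta[j]    # residual excluding coord j
        v = 1.0 / (col_ss[j] / s2 + 1.0 / tau2)
        m = v * (X[:, j] @ r) / s2
        # log posterior odds of including coordinate j (slab vs spike)
        log_odds = np.log(pi / (1 - pi)) + 0.5 * np.log(v / tau2) + m**2 / (2 * v)
        if rng.random() < 1.0 / (1.0 + np.exp(-log_odds)):
            beta[j] = m + np.sqrt(v) * rng.normal()
        else:
            beta[j] = 0.0

print(np.round(beta[:6], 2))   # should track beta_true on the first coords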
arXiv Detail & Related papers (2021-08-14T02:20:49Z)
- Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors [59.33977545294148]
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
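The amplitude-based loss in question has the form $f(z) = \frac{1}{2m}\| |Az| - y \|^2$; the sketch below runs gradient descent on it with hard thresholding to enforce $s$-sparsity, using a warm start near the truth as a stand-in for a proper spectral initialiser. Dimensions and step size are illustrative, and this is the sparse (not generative-prior) setting.

```python
import numpy as np

rng = np.random.default_rng(7)
d, s, m = 100, 5, 400            # ambient dimension, sparsity, measurements

# s-sparse signal with entries bounded away from zero; amplitude-only data.
x = np.zeros(d)
supp = rng.choice(d, size=s, replace=False)
x[supp] = rng.choice([-1.0, 1.0], size=s) * (1.0 + rng.random(s))
A = rng.normal(size=(m, d))
y = np.abs(A @ x)

# Gradient descent on f(z) = ||  |Az| - y ||^2 / (2m), hard-thresholding to
# the s largest entries after each step; warm start in place of spectral init.
z = x + 0.2 * rng.normal(size=d)
for _ in range(200):
    Az = A @ z
    z = z - 0.4 * (A.T @ ((np.abs(Az) - y) * np.sign(Az)) / m)
    z[np.argsort(np.abs(z))[:-s]] = 0.0        # keep only the top s entries

print(min(np.linalg.norm(z - x), np.linalg.norm(z + x)))  # success up to sign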
arXiv Detail & Related papers (2021-06-29T12:49:54Z)
- Learning Gaussian Mixtures with Generalised Linear Models: Precise Asymptotics in High-dimensions [79.35722941720734]
Generalised linear models for multi-class classification problems are one of the fundamental building blocks of modern machine learning tasks.
We prove exact asymptotics characterising the estimator obtained via empirical risk minimisation in high dimensions.
We discuss how our theory can be applied beyond the scope of synthetic data.
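A minimal instance of the setup: binary labels, features drawn from a two-component Gaussian mixture, and an estimator obtained by empirical risk minimisation of a ridge-regularised logistic loss via gradient descent. The exact asymptotic characterisation itself is not reproduced; all parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
d, n = 50, 2000

# Two-component Gaussian mixture: class-conditional mean +/- mu, identity cov.
mu = rng.normal(size=d) / np.sqrt(d)
labels = rng.choice([-1.0, 1.0], size=n)
X = labels[:, None] * mu + rng.normal(size=(n, d))

# ERM with ridge-regularised logistic loss, solved by plain gradient descent.
lam, lr = 0.1, 0.5
w = np.zeros(d)
for _ in range(500):
    margins = labels * (X @ w)
    grad = -(X * (labels / (1 + np.exp(margins)))[:, None]).mean(axis=0) + lam * w
    w -= lr * grad

# Generalisation error of the ERM estimator on a fresh sample.
labels_te = rng.choice([-1.0, 1.0], size=n)
X_te = labels_te[:, None] * mu + rng.normal(size=(n, d))
print("test error:", np.mean(np.sign(X_te @ w) != labels_te))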
arXiv Detail & Related papers (2021-06-07T16:53:56Z)
- The Generalized Lasso with Nonlinear Observations and Generative Priors [63.541900026673055]
We make the assumption of sub-Gaussian measurements, which is satisfied by a wide range of measurement models.
We show that our result can be extended to the uniform recovery guarantee under the assumption of a so-called local embedding property.
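A minimal sketch of the generalized-Lasso recipe in the classic (non-generative) single-index case: observations pass through a 1-bit nonlinearity, yet ordinary $\ell_1$-regularised least squares on $(A, y)$, solved here by ISTA, recovers the signal direction. The regularisation level and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
d, s, m = 200, 5, 500

# Unit-norm s-sparse signal observed through a 1-bit channel: y = sign(Ax).
x = np.zeros(d)
x[rng.choice(d, size=s, replace=False)] = rng.normal(size=s)
x /= np.linalg.norm(x)
A = rng.normal(size=(m, d))       # Gaussian, hence sub-Gaussian, measurements
y = np.sign(A @ x)

# Generalized Lasso: treat the data as if linear and solve
# min_z ||Az - y||^2 / (2m) + lam * ||z||_1 by ISTA (proximal gradient).
lam = 0.05
step = m / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
z = np.zeros(d)
for _ in range(300):
    z = z - step * (A.T @ (A @ z - y) / m)
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

# The nonlinearity only rescales the target, so compare directions.
print("correlation with truth:", abs(z @ x) / np.linalg.norm(z))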
arXiv Detail & Related papers (2020-06-22T16:43:35Z)
- Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
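The multiplicative-weights engine behind such learners can be illustrated by exponentiated gradient (EG$\pm$) for online regression, sketched below; the Klivans-Meka adaptation to graphical models (one neighbourhood regression per node) is not reproduced, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)
d, T, U, eta = 20, 5000, 2.0, 0.05

# Sparse target; features bounded so multiplicative updates stay stable.
w_true = np.zeros(d); w_true[:2] = [1.0, -0.5]

# EG+/-: represent w = U * (w_plus - w_minus) with (w_plus, w_minus) on a
# simplex, and update both multiplicatively from the loss gradient.
wp = np.ones(d) / (2 * d)
wm = np.ones(d) / (2 * d)
sq_err = 0.0
for t in range(T):
    x = rng.uniform(-1, 1, size=d)
    w = U * (wp - wm)
    y_hat = w @ x
    y = w_true @ x + 0.1 * rng.normal()
    sq_err += (y_hat - y) ** 2
    g = (y_hat - y) * x                 # gradient of the squared loss in w
    wp *= np.exp(-eta * U * g)          # multiplicative weight updates
    wm *= np.exp(+eta * U * g)
    Z = wp.sum() + wm.sum()
    wp /= Z; wm /= Z

print("avg sq err:", sq_err / T, "w_hat[:3]:", np.round(U * (wp - wm)[:3], 2))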
arXiv Detail & Related papers (2020-02-20T10:50:58Z)