Multi-reference alignment in high dimensions: sample complexity and
phase transition
- URL: http://arxiv.org/abs/2007.11482v3
- Date: Thu, 30 Sep 2021 14:44:40 GMT
- Title: Multi-reference alignment in high dimensions: sample complexity and
phase transition
- Authors: Elad Romanov, Tamir Bendory, Or Ordentlich
- Abstract summary: Multi-reference alignment entails estimating a signal in $\mathbb{R}^L$ from its circularly-shifted and noisy copies.
Motivated by single-particle cryo-electron microscopy, we analyze the sample complexity of the problem in the high-dimensional regime.
Our analysis uncovers a phase transition phenomenon governed by the parameter $\alpha = L/(\sigma^2\log L)$, where $\sigma^2$ is the variance of the noise.
- Score: 31.841289319809814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-reference alignment entails estimating a signal in $\mathbb{R}^L$ from
its circularly-shifted and noisy copies. This problem has been studied
thoroughly in recent years, focusing on the finite-dimensional setting (fixed
$L$). Motivated by single-particle cryo-electron microscopy, we analyze the
sample complexity of the problem in the high-dimensional regime $L\to\infty$.
Our analysis uncovers a phase transition phenomenon governed by the parameter
$\alpha = L/(\sigma^2\log L)$, where $\sigma^2$ is the variance of the noise.
When $\alpha>2$, the impact of the unknown circular shifts on the sample
complexity is minor. Namely, the number of measurements required to achieve a
desired accuracy $\varepsilon$ approaches $\sigma^2/\varepsilon$ for small
$\varepsilon$; this is the sample complexity of estimating a signal in additive
white Gaussian noise, which does not involve shifts. In sharp contrast, when
$\alpha\leq 2$, the problem is significantly harder and the sample complexity
grows substantially faster with $\sigma^2$.
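The abstract's observation model and phase-transition parameter are easy to make concrete. Below is a minimal illustrative sketch, not code from the paper: it simulates MRA measurements $y_i = R_{s_i} x + \varepsilon_i$ (a uniformly random circular shift of $x$ plus white Gaussian noise) and evaluates $\alpha = L/(\sigma^2\log L)$. The dimension, noise level, sample count, and variable names are all assumptions chosen for illustration.

```python
# Illustrative sketch of the MRA model from the abstract (not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

L = 256        # signal dimension (illustrative choice)
sigma = 2.0    # noise standard deviation (illustrative choice)
n = 10_000     # number of measurements

x = rng.standard_normal(L)  # stand-in for the unknown signal in R^L

# Each measurement is a uniformly random circular shift of x plus
# additive white Gaussian noise of variance sigma^2.
shifts = rng.integers(0, L, size=n)
y = np.stack([np.roll(x, s) for s in shifts]) + sigma * rng.standard_normal((n, L))

# The phase transition is governed by alpha = L / (sigma^2 * log L):
# alpha > 2  -> unknown shifts barely affect sample complexity (~ sigma^2 / eps),
# alpha <= 2 -> the problem is substantially harder.
alpha = L / (sigma**2 * np.log(L))
print(f"alpha = {alpha:.2f} -> {'alpha > 2 regime' if alpha > 2 else 'alpha <= 2 regime'}")
```

With these illustrative values ($L = 256$, $\sigma^2 = 4$), $\alpha \approx 11.5$, i.e. the easy regime; raising $\sigma$ pushes the same signal length into the $\alpha \leq 2$ regime.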
Related papers
- Dimension-free Private Mean Estimation for Anisotropic Distributions [55.86374912608193]
Previous private estimators on distributions over $\mathbb{R}^d$ suffer from a curse of dimensionality.
We present an algorithm whose sample complexity has improved dependence on dimension.
arXiv Detail & Related papers (2024-11-01T17:59:53Z) - Measuring quantum relative entropy with finite-size effect [53.64687146666141]
We study the estimation of relative entropy $D(\rho\|\sigma)$ when $\sigma$ is known.
Our estimator attains the Cramér-Rao type bound when the dimension $d$ is fixed.
arXiv Detail & Related papers (2024-06-25T06:07:20Z) - LWE with Quantum Amplitudes: Algorithm, Hardness, and Oblivious Sampling [6.219791262322396]
We show new algorithms, hardness results and applications for $\mathsf{S|LWE\rangle}$ and $\mathsf{C|LWE\rangle}$ with real Gaussian, Gaussian with linear or quadratic phase terms, and other related amplitudes.
arXiv Detail & Related papers (2023-10-01T11:53:24Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Sample Complexity Bounds for Learning High-dimensional Simplices in
Noisy Regimes [5.526935605535376]
We find a sample complexity bound for learning a simplex from noisy samples.
We show that as long as $\mathrm{SNR} \ge \Omega(K^{1/2})$, the sample complexity of the noisy regime has the same order as that of the noiseless case.
arXiv Detail & Related papers (2022-09-09T23:35:25Z) - The Sample Complexity of Robust Covariance Testing [56.98280399449707]
We are given i.i.d. samples from a distribution of the form $Z = (1-\epsilon) X + \epsilon B$, where $X$ is a zero-mean Gaussian $\mathcal{N}(0, \Sigma)$ with unknown covariance $\Sigma$.
In the absence of contamination, prior work gave a simple tester for this hypothesis testing task that uses $O(d)$ samples.
We prove a sample complexity lower bound of $\Omega(d^2)$ for $\epsilon$ an arbitrarily small constant and $\gamma$ ...
arXiv Detail & Related papers (2020-12-31T18:24:41Z) - Estimation of Shortest Path Covariance Matrices [21.772761365915176]
We study the sample complexity of estimating the covariance matrix $\mathbf{\Sigma} \in \mathbb{R}^{d\times d}$ of a distribution $\mathcal{D}$ over $\mathbb{R}^d$ given independent samples.
We give a very simple algorithm for estimating $\mathbf{\Sigma}$ up to norm error $\epsilon\left\|\mathbf{\Sigma}\right\|$ using just $O(\sqrt{D})$ entry complexity and $\tilde{O}(r\ldots)$
arXiv Detail & Related papers (2020-11-19T17:37:46Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is ...
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.