Minimax-optimal estimation for sparse multi-reference alignment with
collision-free signals
- URL: http://arxiv.org/abs/2312.07839v1
- Date: Wed, 13 Dec 2023 02:06:52 GMT
- Title: Minimax-optimal estimation for sparse multi-reference alignment with
collision-free signals
- Authors: Subhro Ghosh, Soumendu Sundar Mukherjee, Jing Bin Pan
- Abstract summary: We investigate minimax optimality for signal estimation under the MRA model for so-called collision-free signals.
We demonstrate that the minimax optimal rate of estimation for the sparse MRA problem in this setting is $\sigma^2/\sqrt{n}$, where $n$ is the sample size.
- Score: 1.3658544194443192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Multi-Reference Alignment (MRA) problem aims at the recovery of an
unknown signal from repeated observations under the latent action of a group of
cyclic isometries, in the presence of additive noise of high intensity
$\sigma$. It is a more tractable version of the celebrated cryo-EM model. In
the crucial high noise regime, it is known that its sample complexity scales as
$\sigma^6$. Recent investigations have shown that for the practically
significant setting of sparse signals, the sample complexity of the maximum
likelihood estimator asymptotically scales with the noise level as $\sigma^4$.
In this work, we investigate minimax optimality for signal estimation under the
MRA model for so-called collision-free signals. In particular, this signal
class covers the setting of generic signals of dilute sparsity (wherein the
support size $s = O(L^{1/3})$, where $L$ is the ambient dimension).
We demonstrate that the minimax optimal rate of estimation for the sparse
MRA problem in this setting is $\sigma^2/\sqrt{n}$, where $n$ is the sample
size. In particular, this widely generalizes the sample complexity asymptotics
for the restricted MLE in this setting, establishing it as the statistically
optimal estimator. Finally, we demonstrate a concentration inequality for the
restricted MLE on its deviations from the ground truth.
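To make the observation model concrete, here is a minimal numpy sketch of drawing MRA samples $y_i = R_{\ell_i} x + \sigma \varepsilon_i$, where $R_\ell$ is a cyclic shift. The dimensions, noise level, and signal are illustrative, and the random support below is not checked for the collision-free condition.

```python
import numpy as np

def generate_mra_samples(x, n, sigma, rng=None):
    """Draw n MRA observations y_i = roll(x, l_i) + sigma * eps_i, where l_i is
    a uniformly random cyclic shift and eps_i is standard Gaussian noise."""
    rng = np.random.default_rng(rng)
    L = len(x)
    shifts = rng.integers(0, L, size=n)            # latent group elements
    noise = sigma * rng.standard_normal((n, L))    # high-intensity additive noise
    return np.stack([np.roll(x, s) for s in shifts]) + noise

# Illustrative sparse signal with support size s ~ L^{1/3}; the random support
# is NOT verified to be collision-free.
rng = np.random.default_rng(0)
L, s, sigma, n = 64, 4, 2.0, 10_000
x = np.zeros(L)
x[rng.choice(L, size=s, replace=False)] = rng.standard_normal(s)
ys = generate_mra_samples(x, n, sigma, rng=1)
print(ys.shape)  # (10000, 64)
```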
Related papers
- Average case analysis of Lasso under ultra-sparse conditions [4.568911586155097]
We analyze the performance of the least absolute shrinkage and selection operator (Lasso) for the linear model when the number of regressors grows larger.
The obtained bound for perfect support recovery is a generalization of that given in previous literature.
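As a concrete illustration of the setting (not the paper's analysis), a minimal scikit-learn sketch of Lasso support recovery in an ultra-sparse regime; the dimensions and regularization strength are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 200, 500, 5                      # ultra-sparse: s << p
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 3.0                             # true support = first s coordinates
y = X @ beta + 0.5 * rng.standard_normal(n)

lasso = Lasso(alpha=0.1).fit(X, y)         # illustrative penalty level
support = np.flatnonzero(lasso.coef_)
print("recovered support:", support)       # compare against {0, ..., s-1}
```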
arXiv Detail & Related papers (2023-02-25T14:50:32Z)
- Subspace clustering in high-dimensions: Phase transitions & Statistical-to-Computational gap [24.073221004661427]
A simple model to study subspace clustering is the high-dimensional $k$-Gaussian mixture model.
We provide an exact characterization of the statistically optimal reconstruction error in this model in the high-dimensional regime with extensive sparsity.
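A minimal sketch of the model in question: data from a high-dimensional $k$-Gaussian mixture with sparse means, clustered by a naive PCA-plus-$k$-means step. The paper concerns information-theoretic limits, not this particular algorithm, and all sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n, p, k, s = 600, 1000, 2, 20              # n points in R^p, k clusters, s-sparse means
centers = np.zeros((k, p))
for j in range(k):                          # each cluster mean lives on s coordinates
    centers[j, rng.choice(p, size=s, replace=False)] = 2.0
labels = rng.integers(0, k, size=n)
X = centers[labels] + rng.standard_normal((n, p))

# Naive spectral step: project onto the top-k right singular vectors, then k-means.
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
pred = KMeans(n_clusters=k, n_init=10).fit_predict(X @ Vt[:k].T)
print("cluster sizes:", np.bincount(pred))
```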
arXiv Detail & Related papers (2022-05-26T17:47:35Z)
- Approximate Function Evaluation via Multi-Armed Bandits [51.146684847667125]
We study the problem of estimating the value of a known smooth function $f$ at an unknown point $\boldsymbol{\mu} \in \mathbb{R}^n$, where each component $\mu_i$ can be sampled via a noisy oracle.
We design an instance-adaptive algorithm that learns to sample according to the importance of each coordinate, and with probability at least $1-\delta$ returns an $\epsilon$-accurate estimate of $f(\boldsymbol{\mu})$.
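A toy sketch of the idea, not the paper's algorithm: after a uniform warm-up, route extra oracle queries to the coordinates where $f$ is most sensitive and the estimate most uncertain. The allocation score and test function are illustrative assumptions.

```python
import numpy as np

def estimate_f(f, grad_f, oracle, n_dims, budget, warmup=20):
    """Toy instance-adaptive scheme: spend the remaining budget on the
    coordinate with the largest (sensitivity x uncertainty) score."""
    sums = np.zeros(n_dims)
    counts = np.zeros(n_dims, dtype=int)
    for i in range(n_dims):                        # uniform warm-up phase
        for _ in range(warmup):
            sums[i] += oracle(i)
            counts[i] += 1
    for _ in range(budget - warmup * n_dims):      # adaptive phase
        mu_hat = sums / counts
        score = np.abs(grad_f(mu_hat)) / np.sqrt(counts)
        i = int(np.argmax(score))
        sums[i] += oracle(i)
        counts[i] += 1
    return f(sums / counts)

# Example: f(mu) = ||mu||^2 with a unit-variance Gaussian oracle per coordinate.
rng = np.random.default_rng(0)
mu_true = np.array([2.0, 0.3, -0.1])
oracle = lambda i: mu_true[i] + rng.standard_normal()
print(estimate_f(lambda m: float(m @ m), lambda m: 2 * m, oracle, 3, budget=3000))
```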
arXiv Detail & Related papers (2022-03-18T18:50:52Z)
- Optimal Online Generalized Linear Regression with Stochastic Noise and Its Application to Heteroscedastic Bandits [88.6139446295537]
We study the problem of online generalized linear regression in the setting of a generalized linear model with possibly unbounded additive noise.
We provide a sharp analysis of the classical follow-the-regularized-leader (FTRL) algorithm to cope with the label noise.
We propose an algorithm based on FTRL to achieve the first variance-aware regret bound.
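For intuition, a minimal FTRL sketch for online linear regression with squared loss and an L2 regularizer, maintained in closed form via running sufficient statistics; this illustrates the classical FTRL iterate the paper analyzes, not its variance-aware refinement.

```python
import numpy as np

def ftrl_linear_regression(stream, d, lam=1.0):
    """Follow-the-regularized-leader with squared loss:
    theta_t = argmin_theta sum_{s<t} (y_s - x_s.theta)^2 + lam * ||theta||^2."""
    A = lam * np.eye(d)                   # regularized Gram matrix
    b = np.zeros(d)
    losses = []
    for x, y in stream:
        theta = np.linalg.solve(A, b)     # current FTRL iterate
        losses.append((y - x @ theta) ** 2)
        A += np.outer(x, x)               # fold the new example in
        b += y * x
    return np.linalg.solve(A, b), losses

# Example with noisy linear labels.
rng = np.random.default_rng(0)
theta_star = np.array([1.0, -2.0, 0.5])
stream = ((x, x @ theta_star + 0.1 * rng.standard_normal())
          for x in rng.standard_normal((500, 3)))
theta_hat, losses = ftrl_linear_regression(stream, d=3)
print(theta_hat)
```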
arXiv Detail & Related papers (2022-02-28T08:25:26Z)
- Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free Reinforcement Learning [52.76230802067506]
A novel model-free algorithm is proposed to minimize regret in episodic reinforcement learning.
The proposed algorithm employs an early-settled reference update rule, with the aid of two Q-learning sequences.
The design principle of our early-settled variance reduction method might be of independent interest to other RL settings.
arXiv Detail & Related papers (2021-10-09T21:13:48Z)
- Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors [59.33977545294148]
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
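A hedged sketch of the amplitude-based approach in the sparse case: spectral initialization followed by gradient steps on the amplitude loss with hard thresholding to $s$ entries. Step size, iteration count, and thresholding schedule are illustrative, not the paper's algorithm.

```python
import numpy as np

def sparse_amplitude_flow(A, b, s, steps=500, lr=0.1):
    """Gradient descent on the amplitude loss (1/m) sum_i (|a_i.z| - b_i)^2,
    hard-thresholding z to its s largest entries after each step."""
    m, n = A.shape
    Y = (A.T * b**2) @ A / m                 # spectral initialization matrix
    _, V = np.linalg.eigh(Y)
    z = V[:, -1] * np.sqrt(np.mean(b**2))    # direction + norm estimate
    for _ in range(steps):
        Az = A @ z
        z -= lr * (A.T @ ((np.abs(Az) - b) * np.sign(Az))) / m
        z[np.argsort(np.abs(z))[:-s]] = 0.0  # keep the s largest entries
    return z

# Example: recover an s-sparse x (up to global sign) from b = |Ax|, with m on
# the order of s log n measurements (constants are illustrative).
rng = np.random.default_rng(0)
n, s, m = 100, 5, 400
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n))
b = np.abs(A @ x)
z = sparse_amplitude_flow(A, b, s)
print(min(np.linalg.norm(z - x), np.linalg.norm(z + x)))  # distance up to sign
```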
arXiv Detail & Related papers (2021-06-29T12:49:54Z)
- Multi-Reference Alignment for sparse signals, Uniform Uncertainty Principles and the Beltway Problem [10.591948377239921]
We show that sparse signals exhibit an intermediate $\sigma^4$ sample complexity even in the classical MRA model.
Our results explore and exploit connections of the MRA estimation problem with two classical topics in applied mathematics.
arXiv Detail & Related papers (2021-06-24T13:13:10Z)
- Minimax Estimation of Partially-Observed Vector AutoRegressions [0.0]
We study the properties of a partially-observed state-space model.
We describe a sparse estimator based on the Dantzig selector and upper bound its non-asymptotic error.
An application to open railway data highlights the relevance of this model for public transport traffic analysis.
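The paper's estimator is based on the Dantzig selector; below is a minimal sketch of the generic Dantzig selector as a linear program (scipy) on a toy regression instance. The instance and tuning parameter $\lambda$ are illustrative; a VAR transition row would enter the same way, with lagged observations as regressors.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """Dantzig selector as an LP: minimize ||beta||_1 subject to
    ||X^T (y - X beta)||_inf <= lam, via the split beta = u - v, u, v >= 0."""
    n, p = X.shape
    G, c = X.T @ X, X.T @ y
    A_ub = np.block([[G, -G], [-G, G]])          # encodes |c - G beta| <= lam
    b_ub = np.concatenate([c + lam, lam - c])
    res = linprog(np.ones(2 * p), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    u, v = res.x[:p], res.x[p:]
    return u - v

# Example on a sparse regression instance.
rng = np.random.default_rng(0)
n, p, s = 100, 30, 3
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0
y = X @ beta + 0.1 * rng.standard_normal(n)
print(np.round(dantzig_selector(X, y, lam=2.0), 2))
```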
arXiv Detail & Related papers (2021-06-17T08:46:53Z)
- Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity [67.02490430380415]
We show that model-based MARL achieves a sample complexity of $\tilde{O}(|S||A||B|(1-\gamma)^{-3}\epsilon^{-2})$ for finding the Nash equilibrium (NE) value up to some $\epsilon$ error.
We also show that such a sample bound is minimax-optimal (up to logarithmic factors) if the algorithm is reward-agnostic, where the algorithm queries state transition samples without reward knowledge.
arXiv Detail & Related papers (2020-07-15T03:25:24Z)
- Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator).
We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level.
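A minimal sketch of plain model-based planning with a generative model: estimate the transition kernel from $N$ draws per state-action pair, then run value iteration on the empirical MDP. The sizes and the simulator are illustrative assumptions.

```python
import numpy as np

def model_based_plan(sample_next, R, S, A, gamma, N, iters=500, rng=None):
    """Estimate P_hat from N generative-model draws per (s, a), then run
    value iteration on the empirical MDP (P_hat, R)."""
    P_hat = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            for _ in range(N):
                P_hat[s, a, sample_next(s, a)] += 1.0 / N
    Q = np.zeros((S, A))
    for _ in range(iters):                       # value iteration
        Q = R + gamma * P_hat @ Q.max(axis=1)
    return Q

# Example with a random ground-truth MDP acting as the simulator.
rng = np.random.default_rng(0)
S, A, gamma = 5, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))
R = rng.random((S, A))
simulator = lambda s, a: rng.choice(S, p=P[s, a])
Q_hat = model_based_plan(simulator, R, S, A, gamma, N=200)
print(Q_hat.max(axis=1))                         # approximate optimal values
```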
arXiv Detail & Related papers (2020-05-26T17:53:18Z)