Efficient sampling from the Bingham distribution
- URL: http://arxiv.org/abs/2010.00137v2
- Date: Fri, 8 Dec 2023 19:43:42 GMT
- Title: Efficient sampling from the Bingham distribution
- Authors: Rong Ge, Holden Lee, Jianfeng Lu, Andrej Risteski
- Abstract summary: We give an exact sampling algorithm for the Bingham distribution $p(x)\propto \exp(x^\top A x)$ on the sphere $\mathcal{S}^{d-1}$ with expected runtime of $\operatorname{poly}(d, \lambda_{\max}(A)-\lambda_{\min}(A))$.
As a direct application, we use this to sample from the posterior distribution of a rank-1 matrix inference problem in polynomial time.
- Score: 38.50073658077009
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We give an algorithm for exact sampling from the Bingham distribution
$p(x)\propto \exp(x^\top A x)$ on the sphere $\mathcal S^{d-1}$ with expected
runtime of $\operatorname{poly}(d, \lambda_{\max}(A)-\lambda_{\min}(A))$. The
algorithm is based on rejection sampling, where the proposal distribution is a
polynomial approximation of the pdf, and can be sampled from by explicitly
evaluating integrals of polynomials over the sphere. Our algorithm gives exact
samples, assuming exact computation of an inverse function of a polynomial.
This is in contrast with Markov Chain Monte Carlo algorithms, which are not
known to enjoy rapid mixing on this problem, and only give approximate samples.
As a direct application, we use this to sample from the posterior
distribution of a rank-1 matrix inference problem in polynomial time.
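
The rejection-sampling scheme can be illustrated with a minimal Python sketch. For simplicity, the sketch below substitutes a uniform proposal on the sphere with the crude envelope $\exp(\lambda_{\max}(A))$ in place of the paper's polynomial-approximation proposal; it still produces exact samples, but its acceptance rate deteriorates as $\lambda_{\max}(A)-\lambda_{\min}(A)$ grows, which is precisely the inefficiency the paper's proposal is designed to avoid.

```python
import numpy as np

def sample_bingham(A, n_samples, rng=None):
    """Draw exact samples from p(x) ∝ exp(x^T A x) on the unit sphere S^{d-1}
    by rejection sampling. Illustrative sketch with a uniform proposal and the
    trivial envelope exp(lam_max(A)); NOT the paper's polynomial proposal."""
    rng = np.random.default_rng() if rng is None else rng
    d = A.shape[0]
    lam_max = np.linalg.eigvalsh(A)[-1]   # largest eigenvalue, for the envelope
    out = []
    while len(out) < n_samples:
        x = rng.standard_normal(d)
        x /= np.linalg.norm(x)            # normalized Gaussian = uniform on S^{d-1}
        # Accept with probability p(x)/envelope = exp(x^T A x - lam_max) <= 1.
        if np.log(rng.uniform()) < x @ A @ x - lam_max:
            out.append(x)
    return np.array(out)
```

For instance, `sample_bingham(np.diag([5.0, 0.0, -5.0]), 1000)` yields samples concentrating near the $\pm e_1$ axis, reflecting the top eigenvector of $A$.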
Related papers
- On the query complexity of sampling from non-log-concave distributions [2.4253233571593547]
We study the problem of sampling from a $d$-dimensional distribution with density $p(x) \propto e^{-f(x)}$, which does not necessarily satisfy good isoperimetric conditions.
We show that for a wide range of parameters, sampling is strictly easier than optimization by a super-exponential factor in the dimension $d$.
arXiv Detail & Related papers (2025-02-10T06:54:16Z) - On the sampling complexity of coherent superpositions [0.4972323953932129]
We consider the problem of sampling from the distribution of measurement outcomes when applying a POVM to a superposition.
We give an algorithm that uses $O(\chi \|c\|^2 \log(1/\delta))$ such samples and calls to oracles that evaluate the probability density functions.
arXiv Detail & Related papers (2025-01-28T16:56:49Z) - Rényi-infinity constrained sampling with $d^3$ membership queries [2.209921757303168]
We propose a constrained proximal sampler, a principled and simple algorithm that possesses elegant convergence guarantees.
We show that it converges in the Rényi-infinity divergence ($\mathcal{R}_\infty$) with no query complexity overhead when starting from a warm start.
arXiv Detail & Related papers (2024-07-17T19:20:08Z) - Diffusion Posterior Sampling is Computationally Intractable [9.483130965295324]
Posterior sampling is useful for tasks such as inpainting, super-resolution, and MRI reconstruction.
We show that posterior sampling is computationally intractable under the most basic assumption in cryptography: that one-way functions exist.
We also show that exponential-time rejection sampling is essentially optimal under the stronger plausible assumption that there are one-way functions that take exponential time to invert.
arXiv Detail & Related papers (2024-02-20T05:28:13Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Langevin Monte Carlo for Contextual Bandits [72.00524614312002]
Langevin Monte Carlo Thompson Sampling (LMC-TS) is proposed to directly sample from the posterior distribution in contextual bandits; a generic Langevin step is sketched after this list.
We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits.
arXiv Detail & Related papers (2022-06-22T17:58:23Z) - Robust Sparse Mean Estimation via Sum of Squares [42.526664955704746]
We study the problem of high-dimensional sparse mean estimation in the presence of an $\epsilon$-fraction of adversarial outliers.
Our algorithms follow the Sum-of-Squares based proofs-to-algorithms approach.
arXiv Detail & Related papers (2022-06-07T16:49:54Z) - Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean Estimation [58.24280149662003]
We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset.
We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees.
arXiv Detail & Related papers (2021-06-16T03:34:14Z) - SURF: A Simple, Universal, Robust, Fast Distribution Learning Algorithm [64.13217062232874]
SURF is an algorithm for approximating distributions by piecewise polynomials.
It outperforms state-of-the-art algorithms in experiments.
arXiv Detail & Related papers (2020-02-22T01:03:33Z)
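
As a rough illustration of the Langevin Monte Carlo idea referenced in the LMC-TS entry above, the sketch below runs a generic unadjusted Langevin iteration to produce an approximate posterior sample; the function names and parameters are hypothetical, and this is not the paper's exact LMC-TS procedure.

```python
import numpy as np

def langevin_sample(grad_neg_log_post, theta0, step=1e-3, n_steps=500, rng=None):
    """Unadjusted Langevin iteration: theta <- theta - step * grad U(theta)
    + sqrt(2 * step) * N(0, I), where U is the negative log posterior.
    Generic sketch of Langevin-based posterior sampling; `grad_neg_log_post`
    and all parameters are hypothetical, not the paper's LMC-TS interface."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(n_steps):
        noise = rng.standard_normal(theta.shape)
        theta += -step * grad_neg_log_post(theta) + np.sqrt(2.0 * step) * noise
    # In Thompson sampling, this draw stands in for an exact posterior sample.
    return theta
```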