Efficient sampling from the Bingham distribution
- URL: http://arxiv.org/abs/2010.00137v2
- Date: Fri, 8 Dec 2023 19:43:42 GMT
- Title: Efficient sampling from the Bingham distribution
- Authors: Rong Ge, Holden Lee, Jianfeng Lu, Andrej Risteski
- Abstract summary: We give an algorithm for exact sampling from the Bingham distribution $p(x)\propto \exp(x^\top A x)$ on the sphere $\mathcal S^{d-1}$ with expected runtime of $\operatorname{poly}(d, \lambda_{\max}(A)-\lambda_{\min}(A))$.
As a direct application, we use this to sample from the posterior distribution of a rank-1 matrix inference problem in polynomial time.
- Score: 38.50073658077009
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We give an algorithm for exact sampling from the Bingham distribution
$p(x)\propto \exp(x^\top A x)$ on the sphere $\mathcal S^{d-1}$ with expected
runtime of $\operatorname{poly}(d, \lambda_{\max}(A)-\lambda_{\min}(A))$. The
algorithm is based on rejection sampling, where the proposal distribution is a
polynomial approximation of the pdf, and can be sampled from by explicitly
evaluating integrals of polynomials over the sphere. Our algorithm gives exact
samples, assuming exact computation of an inverse function of a polynomial.
This is in contrast with Markov Chain Monte Carlo algorithms, which are not
known to enjoy rapid mixing on this problem, and only give approximate samples.
As a direct application, we use this to sample from the posterior
distribution of a rank-1 matrix inference problem in polynomial time.
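To make the rejection-sampling template concrete, here is a minimal Python/NumPy sketch. It deliberately uses a naive uniform proposal on the sphere instead of the paper's polynomial proposal, so its expected number of trials can grow exponentially in $\lambda_{\max}(A)-\lambda_{\min}(A)$ rather than polynomially; the function name and interface are ours, not the authors'.

```python
import numpy as np

def sample_bingham_naive(A, rng=None, max_tries=100_000):
    """Rejection sampler for p(x) proportional to exp(x^T A x) on the sphere
    S^{d-1}, using a *uniform* proposal rather than the paper's polynomial
    proposal. Since x^T A x <= lambda_max(A) on the unit sphere, the
    acceptance ratio exp(x^T A x - lambda_max(A)) always lies in (0, 1]."""
    rng = np.random.default_rng(rng)
    d = A.shape[0]
    lam_max = np.linalg.eigvalsh(A)[-1]  # largest eigenvalue (A symmetric)
    for _ in range(max_tries):
        x = rng.standard_normal(d)
        x /= np.linalg.norm(x)           # uniform point on S^{d-1}
        if rng.random() < np.exp(x @ A @ x - lam_max):
            return x                     # an exact Bingham sample
    raise RuntimeError("no acceptance; spectral gap too large for this proposal")

# Usage: sample with a random symmetric A in dimension 5.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
x = sample_bingham_naive((B + B.T) / 2, rng=rng)
```

The paper's contribution is to replace the uniform proposal with a polynomial approximation of the density whose integrals over the sphere can be evaluated explicitly, which is what brings the expected runtime down to $\operatorname{poly}(d, \lambda_{\max}(A)-\lambda_{\min}(A))$.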
Related papers
- Rényi-infinity constrained sampling with $d^3$ membership queries [2.209921757303168]
We propose a constrained proximal sampler, a principled and simple algorithm that possesses elegant convergence guarantees.
We show that it converges in the Rényi-infinity divergence ($\mathcal{R}_\infty$) with no query complexity overhead when starting from a warm start.
arXiv Detail & Related papers (2024-07-17T19:20:08Z)
- Diffusion Posterior Sampling is Computationally Intractable [9.483130965295324]
Posterior sampling is useful for tasks such as inpainting, super-resolution, and MRI reconstruction.
We show that posterior sampling is computationally intractable under the most basic assumption in cryptography: that one-way functions exist.
We also show that exponential-time rejection sampling is essentially optimal under the stronger plausible assumption that there are one-way functions that take exponential time to invert.
arXiv Detail & Related papers (2024-02-20T05:28:13Z)
- Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z)
- Stochastic Approximation Approaches to Group Distributionally Robust Optimization [96.26317627118912]
We study group distributionally robust optimization (GDRO) via stochastic approximation.
We use online learning techniques to reduce the number of samples required in each round from $m$ to $1$, keeping the same sample complexity.
We also give a novel formulation of weighted GDRO, which allows us to derive distribution-dependent convergence rates.
arXiv Detail & Related papers (2023-02-18T09:24:15Z)
- Langevin Monte Carlo for Contextual Bandits [72.00524614312002]
Langevin Monte Carlo Thompson Sampling (LMC-TS) is proposed to directly sample from the posterior distribution in contextual bandits.
We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits.
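As a hedged illustration of the Langevin step that LMC-TS builds on, here is a generic unadjusted Langevin iteration for drawing an approximate posterior sample; the step size, iteration count, and Gaussian linear-reward example are hypothetical choices of ours, not the paper's tuned algorithm.

```python
import numpy as np

def ula_sample(grad_log_post, theta0, step=1e-3, n_steps=2_000, rng=None):
    """Unadjusted Langevin algorithm (ULA):
    theta <- theta + step * grad log p(theta | data) + sqrt(2 * step) * N(0, I).
    Returns one approximate posterior sample."""
    rng = np.random.default_rng(rng)
    theta = np.array(theta0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(theta.shape)
        theta = theta + step * grad_log_post(theta) + np.sqrt(2.0 * step) * noise
    return theta

# Usage: Gaussian posterior of a linear reward model (features X, rewards y).
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(50)
grad = lambda t: X.T @ (y - X @ t) - t  # unit-variance likelihood, N(0, I) prior
theta_sample = ula_sample(grad, np.zeros(3), rng=rng)
```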
arXiv Detail & Related papers (2022-06-22T17:58:23Z)
- Robust Sparse Mean Estimation via Sum of Squares [42.526664955704746]
We study the problem of high-dimensional sparse mean estimation in the presence of an $\epsilon$-fraction of adversarial outliers.
Our algorithms follow the Sum-of-Squares based proofs-to-algorithms approach.
arXiv Detail & Related papers (2022-06-07T16:49:54Z)
- Optimal Sublinear Sampling of Spanning Trees and Determinantal Point Processes via Average-Case Entropic Independence [3.9586758145580014]
We design fast algorithms for repeatedly sampling from strongly Rayleigh distributions.
For a graph $G=(V, E)$, we show how to approximately sample uniformly random spanning trees from $G$ in $\widetilde{O}(\lvert V\rvert)$ time per sample.
For a determinantal point process on subsets of size $k$ of a ground set of $n$ elements, we show how to approximately sample in $\widetilde{O}(k^{\omega})$ time after an initial $\widetilde{O}(nk^{\omega-1})$-time preprocessing.
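For contrast with the $\widetilde{O}(\lvert V\rvert)$-per-sample guarantee above, here is the classic Wilson's algorithm, an exact uniform spanning tree sampler based on loop-erased random walks; it is a standard baseline, not the paper's method, and its runtime is governed by random-walk hitting times rather than the bound above.

```python
import random

def wilson_spanning_tree(adj, rng=None):
    """Wilson's algorithm: exact uniform spanning tree of a connected
    undirected graph given as {vertex: [neighbors]}. Returns a dict
    mapping each non-root vertex to its parent in the tree."""
    rng = rng or random.Random()
    vertices = list(adj)
    in_tree = {vertices[0]}      # arbitrary root
    parent = {}
    for v in vertices:
        # Walk from v until hitting the tree; overwriting the "next step"
        # pointer on revisits performs the loop erasure implicitly.
        u, nxt = v, {}
        while u not in in_tree:
            nxt[u] = rng.choice(adj[u])
            u = nxt[u]
        u = v
        while u not in in_tree:  # retrace the loop-erased path into the tree
            parent[u] = nxt[u]
            in_tree.add(u)
            u = nxt[u]
    return parent

# Usage: uniform spanning tree of a 4-cycle with a chord.
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
tree = wilson_spanning_tree(adj, rng=random.Random(0))
```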
arXiv Detail & Related papers (2022-04-06T04:11:26Z)
- Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean Estimation [58.24280149662003]
We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset.
We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees.
arXiv Detail & Related papers (2021-06-16T03:34:14Z)
- Faster Differentially Private Samplers via Rényi Divergence Analysis of Discretized Langevin MCMC [35.050135428062795]
Langevin dynamics-based algorithms offer much faster alternatives under some distance measures such as statistical distance.
Our techniques are simple and generic and apply to underdamped Langevin dynamics.
arXiv Detail & Related papers (2020-10-27T22:52:45Z)
- SURF: A Simple, Universal, Robust, Fast Distribution Learning Algorithm [64.13217062232874]
SURF is an algorithm for approximating distributions by piecewise polynomials.
It outperforms state-of-the-art algorithms in experiments.
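As a toy, degree-zero instance of the piecewise idea, one can put constant density on equal-mass bins; this simplification is ours and is not SURF itself, which selects pieces and degrees adaptively with statistical guarantees.

```python
import numpy as np

def equal_mass_histogram(samples, n_bins=8):
    """Degree-0 piecewise density estimate: split the data into equal-mass
    bins at empirical quantiles and assign constant density to each piece."""
    x = np.asarray(samples, dtype=float)
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    widths = np.diff(edges)
    heights = (1.0 / n_bins) / np.maximum(widths, 1e-12)  # equal mass per piece
    return edges, heights

# Usage: approximate a standard normal from 10,000 draws.
rng = np.random.default_rng(2)
edges, heights = equal_mass_histogram(rng.standard_normal(10_000))
```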
arXiv Detail & Related papers (2020-02-22T01:03:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.