Perfect Sampling from Pairwise Comparisons
- URL: http://arxiv.org/abs/2211.12868v1
- Date: Wed, 23 Nov 2022 11:20:30 GMT
- Title: Perfect Sampling from Pairwise Comparisons
- Authors: Dimitris Fotakis, Alkis Kalavasis, Christos Tzamos
- Abstract summary: We study how to efficiently obtain perfect samples from a discrete distribution $\mathcal{D}$ given access only to pairwise comparisons of elements of its support.
We design a Markov chain whose stationary distribution coincides with $\mathcal{D}$ and give an algorithm to obtain exact samples using the technique of Coupling from the Past.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we study how to efficiently obtain perfect samples from a
discrete distribution $\mathcal{D}$ given access only to pairwise comparisons
of elements of its support. Specifically, we assume access to samples $(x, S)$,
where $S$ is drawn from a distribution over sets $\mathcal{Q}$ (indicating the
elements being compared), and $x$ is drawn from the conditional distribution
$\mathcal{D}_S$ (indicating the winner of the comparison), and aim to output a
clean sample $y$ distributed according to $\mathcal{D}$. We mainly focus on the
case of pairwise comparisons where all sets $S$ have size 2. We design a Markov
chain whose stationary distribution coincides with $\mathcal{D}$ and give an
algorithm to obtain exact samples using the technique of Coupling from the
Past. However, the sample complexity of this algorithm depends on the structure
of the distribution $\mathcal{D}$ and, in many natural scenarios, can even be
exponential in the size of the support of $\mathcal{D}$. Our main contribution is to provide an
efficient exact sampling algorithm whose complexity does not depend on the
structure of $\mathcal{D}$. To this end, we give a parametric Markov chain that
mixes significantly faster given a good approximation to the stationary
distribution. We can obtain such an approximation using an efficient algorithm
for learning from pairwise comparisons (Shah et al., JMLR 17, 2016). Our technique
for speeding up sampling from a Markov chain whose stationary distribution is
approximately known is simple, general and possibly of independent interest.
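To make the construction concrete, the following is a minimal sketch of a comparison-driven chain together with vanilla Coupling from the Past; it is an illustration under simplifying assumptions, not the paper's exact algorithm. In particular, it evaluates the comparison probabilities $\mathcal{D}(i)/(\mathcal{D}(i)+\mathcal{D}(j))$ directly (the paper only has sample access to comparison outcomes), it runs one coupled trajectory per support element, and the names `comparison_prob` and `cftp_sample` are illustrative.

```python
import random

def comparison_prob(i, j, D):
    """Chance that i wins the comparison {i, j}: D(i) / (D(i) + D(j))."""
    return D[i] / (D[i] + D[j])

def cftp_sample(D, seed=0):
    """Exact sample from D via Coupling from the Past (CFTP).

    The chain: from state s, propose a uniformly random j and move to the
    winner of the comparison {s, j}.  Detailed balance holds because
    D(s) * D(j) / (D(s) + D(j)) is symmetric in (s, j), so the stationary
    distribution is exactly D.
    """
    n = len(D)
    master = random.Random(seed)
    noise = {}  # time t -> (proposal j, uniform u); fixed once drawn
    T = 1
    while True:
        # Extend the window backwards; randomness for already-seen time
        # steps must be reused, which is what makes CFTP exact.
        for t in range(-T, 0):
            if t not in noise:
                noise[t] = (master.randrange(n), master.random())
        # Grand coupling: advance one trajectory per start state using the
        # SAME (j, u) at every step.
        states = list(range(n))
        for t in range(-T, 0):
            j, u = noise[t]
            states = [j if u < comparison_prob(j, s, D) else s for s in states]
        if len(set(states)) == 1:  # coalesced: common value is an exact sample
            return states[0]
        T *= 2  # standard doubling of the backward window

D = [0.5, 0.25, 0.125, 0.125]  # toy distribution
print(cftp_sample(D, seed=42))
```

As the abstract warns, the coalescence time of this vanilla scheme depends on the shape of $\mathcal{D}$: for heavily skewed distributions the doubling loop can run for exponentially many steps, which is the failure mode the paper's parametric chain, warm-started with an approximation of $\mathcal{D}$ learned from comparisons, is designed to avoid.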
Related papers
- Dimension-free Private Mean Estimation for Anisotropic Distributions [55.86374912608193]
Previous private estimators on distributions over $\mathbb{R}^d$ suffer from a curse of dimensionality.
We present an algorithm whose sample complexity has improved dependence on dimension.
arXiv Detail & Related papers (2024-11-01T17:59:53Z) - Simple and Nearly-Optimal Sampling for Rank-1 Tensor Completion via Gauss-Jordan [49.1574468325115]
We revisit the sample and computational complexity of completing a rank-1 tensor in $\otimes_{i=1}^{N} \mathbb{R}^{d}$.
We present a characterization of the problem which admits an algorithm amounting to Gauss-Jordan on a pair of random linear systems.
arXiv Detail & Related papers (2024-08-10T04:26:19Z) - Rényi-infinity constrained sampling with $d^3$ membership queries [2.209921757303168]
We propose a constrained proximal sampler, a principled and simple algorithm that possesses elegant convergence guarantees.
We show that it converges in the Rényi-infinity divergence ($\mathcal{R}_\infty$) with no query complexity overhead when starting from a warm start.
arXiv Detail & Related papers (2024-07-17T19:20:08Z) - Robust Mean Estimation Without Moments for Symmetric Distributions [7.105512316884493]
We show that for a large class of symmetric distributions, the same error as in the Gaussian setting can be achieved efficiently.
We propose a sequence of efficient algorithms that approaches this optimal error.
Our algorithms are based on a generalization of the well-known filtering technique.
arXiv Detail & Related papers (2023-02-21T17:52:23Z) - Replicable Clustering [57.19013971737493]
We propose algorithms for the statistical $k$-medians, statistical $k$-means, and statistical $k$-centers problems by utilizing approximation routines for their counterparts in a black-box manner.
We also provide experiments on synthetic distributions in 2D, using the $k$-means++ implementation from sklearn as a black box, that validate our theoretical results.
arXiv Detail & Related papers (2023-02-20T23:29:43Z) - Stochastic Approximation Approaches to Group Distributionally Robust Optimization and Beyond [89.72693227960274]
This paper investigates group distributionally robust optimization (GDRO) with the goal of learning a model that performs well over $m$ different distributions.
To reduce the number of samples in each round from $m$ to 1, we cast GDRO as a two-player game, where one player conducts mirror descent and the other executes an online algorithm for non-oblivious multi-armed bandits.
In the second scenario, we propose to optimize the average top-$k$ risk instead of the maximum risk, thereby mitigating the impact of outlier distributions.
arXiv Detail & Related papers (2023-02-18T09:24:15Z) - Robust Sparse Mean Estimation via Sum of Squares [42.526664955704746]
We study the problem of high-dimensional sparse mean estimation in the presence of an $\epsilon$-fraction of adversarial outliers.
Our algorithms follow the Sum-of-Squares based proofs-to-algorithms approach.
arXiv Detail & Related papers (2022-06-07T16:49:54Z) - Optimal Sublinear Sampling of Spanning Trees and Determinantal Point
Processes via Average-Case Entropic Independence [3.9586758145580014]
We design fast algorithms for repeatedly sampling from strongly Rayleigh distributions.
For a graph $G=(V, E)$, we show how to approximately sample uniformly random spanning trees from $G$ in $\widetilde{O}(\lvert V\rvert)$ time per sample.
For a determinantal point process on subsets of size $k$ of a ground set of $n$ elements, we show how to approximately sample in $\widetilde{O}(k^{\omega})$ time after an initial $\widetilde{O}(nk\ldots)$ preprocessing.
arXiv Detail & Related papers (2022-04-06T04:11:26Z) - Estimation of Shortest Path Covariance Matrices [21.772761365915176]
We study the sample complexity of estimating the covariance matrix $\mathbf{\Sigma} \in \mathbb{R}^{d \times d}$ of a distribution $\mathcal{D}$ over $\mathbb{R}^d$ given independent samples.
We give a very simple algorithm for estimating $\mathbf{\Sigma}$ up to norm error $\epsilon \left\lVert \mathbf{\Sigma} \right\rVert$ using just $O(\sqrt{D})$ entry complexity and $\tilde{O}(r\ldots)$
arXiv Detail & Related papers (2020-11-19T17:37:46Z) - Optimal Testing of Discrete Distributions with High Probability [49.19942805582874]
We study the problem of testing discrete distributions with a focus on the high probability regime.
We provide the first algorithms for closeness and independence testing that are sample-optimal, within constant factors.
arXiv Detail & Related papers (2020-09-14T16:09:17Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP); a minimal sketch of the classical update appears after this list.
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^{5}\varepsilon^{2}} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
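For context on the asynchronous Q-learning entry above: the algorithm updates a single entry of the Q-table per observed transition along one Markovian trajectory, which is why the bound involves $\mu_{\min}$ (the minimum state-action visitation frequency) and $t_{\mathrm{mix}}$ (the trajectory's mixing time). Below is a minimal sketch of the classical tabular update; the toy MDP and parameter choices are hypothetical, not from the cited paper.

```python
import random

def q_learning(step, n_states, n_actions, gamma=0.9, lr=0.1, n_updates=50_000, seed=0):
    """Tabular asynchronous Q-learning along a single trajectory.

    Only the visited (state, action) entry changes at each step; that is
    what makes the algorithm asynchronous.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    s = 0
    for _ in range(n_updates):
        a = rng.randrange(n_actions)       # uniform behavior policy
        s_next, r = step(s, a, rng)        # one sampled transition
        # Classical update toward the Bellman optimality target.
        Q[s][a] += lr * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next
    return Q

# Hypothetical toy MDP: action 1 in state 0 usually reaches the rewarding state 1.
def step(s, a, rng):
    if s == 0:
        return (1, 1.0) if (a == 1 and rng.random() < 0.8) else (0, 0.0)
    return (1, 1.0) if rng.random() < 0.5 else (0, 0.0)

print(q_learning(step, n_states=2, n_actions=2))
```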