Uncertainty quantification in the Bradley-Terry-Luce model
- URL: http://arxiv.org/abs/2110.03874v1
- Date: Fri, 8 Oct 2021 03:06:30 GMT
- Title: Uncertainty quantification in the Bradley-Terry-Luce model
- Authors: Chao Gao, Yandi Shen, Anderson Y. Zhang
- Abstract summary: This paper focuses on two estimators that have received much recent attention: the maximum likelihood estimator (MLE) and the spectral estimator.
Using a unified proof strategy, we derive sharp and uniform non-asymptotic expansions for both estimators in the sparsest possible regime.
Our proof is based on a self-consistent equation of the second-order remainder vector and a novel leave-two-out analysis.
- Score: 14.994932962403935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Bradley-Terry-Luce (BTL) model is a benchmark model for pairwise
comparisons between individuals. Despite recent progress on the first-order
asymptotics of several popular procedures, the understanding of uncertainty
quantification in the BTL model remains largely incomplete, especially when the
underlying comparison graph is sparse. In this paper, we fill this gap by
focusing on two estimators that have received much recent attention: the
maximum likelihood estimator (MLE) and the spectral estimator. Using a unified
proof strategy, we derive sharp and uniform non-asymptotic expansions for both
estimators in the sparsest possible regime (up to some poly-logarithmic
factors) of the underlying comparison graph. These expansions allow us to
obtain: (i) finite-dimensional central limit theorems for both estimators; (ii)
construction of confidence intervals for individual ranks; (iii) optimal
constant of $\ell_2$ estimation, which is achieved by the MLE but not by the
spectral estimator. Our proof is based on a self-consistent equation of the
second-order remainder vector and a novel leave-two-out analysis.
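As a concrete illustration of the two procedures compared above, the following minimal sketch (not the authors' code; the graph size, number of comparisons per edge, step size, and Markov-chain normalization are arbitrary assumptions) simulates a sparse Erdős–Rényi comparison graph under the BTL model and computes both the MLE, via gradient ascent on the log-likelihood, and a rank-centrality-style spectral estimate from the stationary distribution of a comparison-driven Markov chain.

```python
# Minimal sketch (illustrative, not the paper's code): BTL model on an
# Erdos-Renyi comparison graph, with the MLE and a rank-centrality-style
# spectral estimator.
import numpy as np

rng = np.random.default_rng(0)
n, p, L = 50, 0.2, 20                 # items, edge probability, comparisons per edge
theta = rng.normal(size=n)
theta -= theta.mean()                 # true (centered) BTL scores

# Simulate pairwise outcomes: wins[i, j] = number of times i beat j.
edges = np.triu(rng.random((n, n)) < p, k=1)
wins = np.zeros((n, n))
for i, j in zip(*np.nonzero(edges)):
    prob_ij = 1.0 / (1.0 + np.exp(theta[j] - theta[i]))
    w = rng.binomial(L, prob_ij)
    wins[i, j], wins[j, i] = w, L - w
games = wins + wins.T
deg = (games > 0).sum(axis=1)

# MLE: gradient ascent on the BTL log-likelihood, re-centered for identifiability.
mle = np.zeros(n)
step = 1.0 / (L * deg.max())
for _ in range(2000):
    sigma = 1.0 / (1.0 + np.exp(mle[None, :] - mle[:, None]))  # P(i beats j) under mle
    mle += step * (wins - games * sigma).sum(axis=1)
    mle -= mle.mean()

# Spectral estimator: stationary distribution of the chain that moves from i to j
# with probability proportional to the fraction of comparisons j won against i.
d_norm = deg.max() + 1
P = np.divide(wins.T, L * d_norm, where=games > 0, out=np.zeros((n, n)))
P[np.arange(n), np.arange(n)] = 1.0 - P.sum(axis=1)
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = np.abs(pi) / np.abs(pi).sum()
spectral = np.log(np.maximum(pi, 1e-12))
spectral -= spectral.mean()

print("MLE      l2 error:", np.linalg.norm(mle - theta))
print("spectral l2 error:", np.linalg.norm(spectral - theta))
```

On such simulated data both procedures recover the centered score vector; the paper's results concern the finer behaviour of exactly these two quantities (central limit theorems, confidence intervals for ranks, and the optimal $\ell_2$ constant attained by the MLE but not by the spectral estimator).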
Related papers
- Quasi-Bayes meets Vines [2.3124143670964448]
We propose a different way to extend Quasi-Bayesian prediction to high dimensions through the use of Sklar's theorem.
We show that our proposed Quasi-Bayesian Vine (QB-Vine) is a fully non-parametric density estimator with an analytical form (Sklar's theorem, on which the construction rests, is recalled after this list).
arXiv Detail & Related papers (2024-06-18T16:31:02Z)
- Sampling and estimation on manifolds using the Langevin diffusion [45.57801520690309]
Two estimators of linear functionals of $\mu_\phi$ based on the discretized Markov process are considered.
Error bounds are derived for sampling and estimation using a discretization of an intrinsically defined Langevin diffusion (a minimal Euclidean sketch of such an ergodic-average estimator appears after this list).
arXiv Detail & Related papers (2023-12-22T18:01:11Z)
- Spectral Ranking Inferences based on General Multiway Comparisons [7.222667862159246]
We show that a two-step spectral method can achieve the same asymptotic efficiency as the Maximum Likelihood Estimator (MLE).
Notably, this is the first time effective two-sample rank testing methods have been proposed.
arXiv Detail & Related papers (2023-08-05T16:31:32Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to non-linear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance (a sketch of the weighted least squares baseline appears after this list).
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Mean-Square Analysis with An Application to Optimal Dimension Dependence of Langevin Monte Carlo [60.785586069299356]
This work provides a general framework for the non-asymptotic analysis of sampling error in the 2-Wasserstein distance.
Our theoretical analysis is further validated by numerical experiments.
arXiv Detail & Related papers (2021-09-08T18:00:05Z)
- Nonlinear Two-Time-Scale Stochastic Approximation: Convergence and Finite-Time Performance [1.52292571922932]
We study the convergence and finite-time performance of nonlinear two-time-scale stochastic approximation.
In particular, we show that the method achieves convergence in expectation at a rate $\mathcal{O}(1/k^{2/3})$, where $k$ is the number of iterations (a toy two-time-scale recursion is sketched after this list).
arXiv Detail & Related papers (2020-11-03T17:43:39Z)
- On Projection Robust Optimal Transport: Sample Complexity and Model Misspecification [101.0377583883137]
Projection robust (PR) OT seeks to maximize the OT cost between two measures by choosing a $k$-dimensional subspace onto which they can be projected.
Our first contribution is to establish several fundamental statistical properties of PR Wasserstein distances.
Next, we propose the integral PR Wasserstein (IPRW) distance as an alternative to the PRW distance, by averaging rather than optimizing on subspaces.
arXiv Detail & Related papers (2020-06-22T14:35:33Z)
- Learning Minimax Estimators via Online Learning [55.92459567732491]
We consider the problem of designing minimax estimators for estimating parameters of a probability distribution.
We construct an algorithm for finding a mixed-strategy Nash equilibrium.
arXiv Detail & Related papers (2020-06-19T22:49:42Z)
- Nonparametric Score Estimators [49.42469547970041]
Estimating the score from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models.
We provide a unifying view of these estimators under the framework of regularized nonparametric regression.
We propose score estimators based on iterative regularization that enjoy computational benefits from curl-free kernels and fast convergence.
arXiv Detail & Related papers (2020-05-20T15:01:03Z)
- Low-Rank Matrix Estimation From Rank-One Projections by Unlifted Convex Optimization [9.492903649862761]
We study an estimator with a convex formulation for recovery of low-rank matrices from rank-one projections.
We show that under both models the estimator succeeds, with high probability, if the number of measurements exceeds $r^2(d_1+d_2)$, up to polylogarithmic factors.
arXiv Detail & Related papers (2020-04-06T14:57:54Z)
- Minimax Optimal Estimation of KL Divergence for Continuous Distributions [56.29748742084386]
Estimating the Kullback-Leibler divergence from independent and identically distributed samples is an important problem in various domains.
One simple and effective estimator is based on the $k$ nearest neighbor distances between these samples (a generic sketch of this estimator appears after this list).
arXiv Detail & Related papers (2020-02-26T16:37:37Z)
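For the "Quasi-Bayes meets Vines" entry, the decomposition it builds on is Sklar's theorem (stated here only as background; the QB-Vine construction itself is not reproduced): any $d$-dimensional joint distribution function $H$ with marginals $F_1,\dots,F_d$ can be written as
$H(x_1,\dots,x_d) = C(F_1(x_1),\dots,F_d(x_d))$
for some copula $C$; vine constructions then assemble $C$ from bivariate building blocks.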
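For the "Sampling and estimation on manifolds using the Langevin diffusion" entry, the following is a minimal Euclidean sketch (an illustrative assumption; the paper works with an intrinsically defined manifold diffusion) of estimating a linear functional of the target measure by averaging along a discretized Langevin chain.

```python
# Minimal sketch (Euclidean, illustrative only): Euler-Maruyama discretization of
# the Langevin diffusion dX_t = grad log pi(X_t) dt + sqrt(2) dB_t, with the ergodic
# average of f(X_k) used as an estimator of the linear functional E_pi[f].
import numpy as np

def grad_log_pi(x):
    return -x                      # target pi = N(0, I_d)

rng = np.random.default_rng(1)
d, h, n_steps, burn_in = 5, 0.01, 20000, 2000
x = np.zeros(d)
values = []
for k in range(n_steps):
    x = x + h * grad_log_pi(x) + np.sqrt(2.0 * h) * rng.normal(size=d)
    if k >= burn_in:
        values.append(np.sum(x ** 2))    # f(x) = |x|^2, so E_pi[f] = d
print("ergodic-average estimate of E[|X|^2]:", np.mean(values), "(target:", d, ")")
```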
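For the "Learning to Estimate Without Bias" entry, the classical baseline it starts from is the weighted least squares estimator; a minimal sketch under heteroscedastic noise (the data-generating choices below are arbitrary assumptions):

```python
# Minimal sketch: weighted least squares, beta_hat = (X^T W X)^{-1} X^T W y,
# with W the inverse noise covariance -- the linear MVUE of the Gauss-Markov statement.
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
noise_var = 0.1 + rng.random(n)                  # heteroscedastic noise variances
y = X @ beta_true + rng.normal(size=n) * np.sqrt(noise_var)
W = np.diag(1.0 / noise_var)                     # inverse-variance weights
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print("WLS estimate:", beta_hat.round(3), "  true:", beta_true)
```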
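For the "Nonlinear Two-Time-Scale Stochastic Approximation" entry, here is a toy sketch of two coupled recursions driven on different time scales; the specific (linear) maps and step-size exponents are illustrative assumptions, not the paper's setting.

```python
# Minimal sketch: two-time-scale stochastic approximation. The fast iterate y uses the
# larger step size b_k and tracks its equilibrium y*(x) = x/2; the slow iterate x uses
# the smaller step size a_k and is driven toward the root x = 0.
import numpy as np

rng = np.random.default_rng(3)
x, y = 5.0, 5.0
for k in range(1, 50001):
    a_k = 1.0 / k                  # slow time scale
    b_k = 1.0 / k ** (2.0 / 3.0)   # fast time scale
    x += a_k * (-(x + y) + 0.1 * rng.normal())       # slow update
    y += b_k * ((x / 2.0 - y) + 0.1 * rng.normal())  # fast update
print("final (x, y):", round(x, 4), round(y, 4), " (both should be near 0)")
```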
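For the "Minimax Optimal Estimation of KL Divergence" entry, the k-nearest-neighbor construction it refers to is, in its standard form, sketched below; this is the generic kNN estimator, not necessarily the paper's exact bias-corrected variant.

```python
# Minimal sketch of the standard kNN-based KL divergence estimator:
#   D(P||Q) ~ (d/n) * sum_i log(nu_k(i) / rho_k(i)) + log(m / (n - 1)),
# where rho_k(i) is the k-th NN distance of x_i within the P-sample and
# nu_k(i) is its k-th NN distance to the Q-sample.
import numpy as np
from scipy.spatial import cKDTree

def knn_kl_divergence(x, y, k=1):
    n, d = x.shape
    m = y.shape[0]
    rho = cKDTree(x).query(x, k=k + 1)[0][:, -1]  # k+1: the query point is its own 0-NN
    nu = cKDTree(y).query(x, k=k)[0]
    if nu.ndim > 1:
        nu = nu[:, -1]
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, size=(5000, 2))
y = rng.normal(0.5, 1.0, size=(5000, 2))
print("kNN KL estimate:", knn_kl_divergence(x, y), " (true value: 0.25)")
```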
This list is automatically generated from the titles and abstracts of the papers in this site.