Resolving the Optimal Metric Distortion Conjecture
- URL: http://arxiv.org/abs/2004.07447v2
- Date: Mon, 7 Sep 2020 16:52:59 GMT
- Title: Resolving the Optimal Metric Distortion Conjecture
- Authors: Vasilis Gkatzelis, Daniel Halpern, and Nisarg Shah
- Abstract summary: We study the following metric distortion problem.
There are two finite sets of points, $V$ and $C$, that lie in the same metric space.
We propose algorithms that choose a point in $C$ using only these rankings as input.
- Score: 27.344649091365067
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the following metric distortion problem: there are two finite sets
of points, $V$ and $C$, that lie in the same metric space, and our goal is to
choose a point in $C$ whose total distance from the points in $V$ is as small
as possible. However, rather than having access to the underlying distance
metric, we only know, for each point in $V$, a ranking of its distances to the
points in $C$. We propose algorithms that choose a point in $C$ using only
these rankings as input and we provide bounds on their \emph{distortion}
(worst-case approximation ratio). A prominent motivation for this problem comes
from voting theory, where $V$ represents a set of voters, $C$ represents a set
of candidates, and the rankings correspond to ordinal preferences of the
voters. A major conjecture in this framework is that the optimal deterministic
algorithm has distortion $3$. We resolve this conjecture by providing a
polynomial-time algorithm that achieves distortion $3$, matching a known lower
bound. We do so by proving a novel lemma about matching voters to candidates,
which we refer to as the \emph{ranking-matching lemma}. This lemma induces a
family of novel algorithms, which may be of independent interest, and we show
that a special algorithm in this family achieves distortion $3$. We also
provide more refined, parameterized, bounds using the notion of
$\alpha$-decisiveness, which quantifies the extent to which a voter may prefer
her top choice relative to all others. Finally, we introduce a new randomized
algorithm with improved distortion compared to known results, and also provide
improved lower bounds on the distortion of all deterministic and randomized
algorithms.
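To make the distortion measure concrete, the following is a minimal Python sketch (not the paper's ranking-matching algorithm): it assumes full access to the metric, derives the ordinal rankings that an algorithm would actually receive as input, and evaluates the approximation ratio of a chosen candidate; the distortion of a rule is the worst case of this ratio over all metrics consistent with the observed rankings. The point placements and function names below are illustrative assumptions.

```python
def social_cost(candidate, voters, dist):
    """Total distance from all voters to a candidate."""
    return sum(dist(v, candidate) for v in voters)

def ordinal_rankings(voters, candidates, dist):
    """The only input a distortion-bounded rule sees: each voter's ranking
    of the candidates by increasing distance (ties broken arbitrarily)."""
    return {v: sorted(candidates, key=lambda c: dist(v, c)) for v in voters}

def approximation_ratio(chosen, voters, candidates, dist):
    """cost(chosen) / cost(optimal) on this particular metric; distortion is
    the worst case of this ratio over all metrics consistent with the rankings."""
    best = min(social_cost(c, voters, dist) for c in candidates)
    return social_cost(chosen, voters, dist) / best

# Toy 1-D metric space: voters and candidates are points on a line.
voters = [0.0, 0.1, 0.2, 0.9]
candidates = [0.15, 0.85]
dist = lambda a, b: abs(a - b)

print(ordinal_rankings(voters, candidates, dist))
print(approximation_ratio(0.85, voters, candidates, dist))  # 2.3: choosing 0.85 costs 2.3x the optimum here
```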
Related papers
- A simple and improved algorithm for noisy, convex, zeroth-order optimisation [59.51990161522328]
We construct an algorithm that returns a point $\hat x \in \bar{\mathcal{X}}$ such that $f(\hat x)$ is as small as possible.
We prove that, for this method, the gap $f(\hat x) - \min_{x \in \bar{\mathcal{X}}} f(x)$ is of smaller order than $d^2/\sqrt{n}$, up to poly-logarithmic terms.
arXiv Detail & Related papers (2024-06-26T18:19:10Z)
- Low-Rank Bandits via Tight Two-to-Infinity Singular Subspace Recovery [45.601316850669406]
We present efficient algorithms for policy evaluation, best policy identification and regret minimization.
For policy evaluation and best policy identification, we show that our algorithms are nearly minimax optimal.
All the proposed algorithms consist of two phases: they first leverage spectral methods to estimate the left and right singular subspaces of the low-rank reward matrix.
arXiv Detail & Related papers (2024-02-24T06:36:08Z)
- A Scalable Algorithm for Individually Fair K-means Clustering [77.93955971520549]
We present a scalable algorithm for the individually fair ($p$, $k$)-clustering problem introduced by Jung et al. and Mahabadi et al.
A clustering is then called individually fair if it has centers within distance $\delta(x)$ of $x$ for each $x \in P$.
We show empirically that not only is our algorithm much faster than prior work, but it also produces lower-cost solutions.
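As a quick illustration of the fairness condition stated above, here is a minimal Python sketch that checks whether a given set of centers is individually fair, assuming the fairness radii $\delta(x)$ are supplied as input; the instance and all names are illustrative assumptions, not the algorithm from the paper.

```python
def individually_fair(points, centers, delta, dist):
    """Check the condition above: every point x in P must have some
    center within distance delta(x) of x."""
    return all(
        any(dist(x, c) <= delta[x] for c in centers)
        for x in points
    )

# Toy instance on the line; the delta values are made up for illustration,
# not derived from any particular fairness-radius definition.
points = [0.0, 0.2, 1.0]
centers = [0.1, 0.9]
delta = {0.0: 0.15, 0.2: 0.15, 1.0: 0.2}
dist = lambda a, b: abs(a - b)

print(individually_fair(points, centers, delta, dist))  # True
```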
arXiv Detail & Related papers (2024-02-09T19:01:48Z)
- Efficient Algorithms for Generalized Linear Bandits with Heavy-tailed Rewards [40.99322897009357]
We propose two novel algorithms based on truncation and mean of medians.
Our truncation-based algorithm supports online learning, distinguishing it from existing truncation-based approaches.
Our algorithms improve the regret bounds by a logarithmic factor compared to existing algorithms when $\epsilon = 1$.
arXiv Detail & Related papers (2023-10-28T13:01:10Z)
- Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes (MDPs) in the fixed confidence setting under a generative model.
The lower bound, which arises as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z)
- Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits [99.86860277006318]
We consider the problem of combining and learning over a set of adversarial algorithms with the goal of adaptively tracking the best one on the fly.
The CORRAL algorithm of Agarwal et al. achieves this goal with a regret overhead of order $\widetilde{O}(\sqrt{MST})$, where $M$ is the number of base algorithms and $T$ is the time horizon.
Motivated by this issue, we propose a new recipe to corral a larger band of bandit algorithms whose regret overhead has only \emph{logarithmic} dependence on $M$ as long
arXiv Detail & Related papers (2022-02-12T21:55:44Z)
- Near-Optimal Regret Bounds for Contextual Combinatorial Semi-Bandits with Linear Payoff Functions [53.77572276969548]
We show that the C$^2$UCB algorithm has the optimal regret bound $\tilde{O}(d\sqrt{kT} + dk)$ for the partition matroid constraints.
For general constraints, we propose an algorithm that modifies the reward estimates of arms in the C$^2$UCB algorithm.
arXiv Detail & Related papers (2021-01-20T04:29:18Z)
- No quantum speedup over gradient descent for non-smooth convex optimization [22.16973542453584]
We are given black-box access to a (not necessarily smooth) function $f:\mathbb{R}^n \to \mathbb{R}$ and its (sub)gradient.
Our goal is to find an $epsilon$-approximate minimum of $f$ starting from a point that is distance at most $R$ from the true minimum.
We show that although the function family used in the lower bound is hard for randomized algorithms, it can be solved using $O(GR/\epsilon)$ quantum queries.
arXiv Detail & Related papers (2020-10-05T06:32:47Z)
- Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of a matroid $M$ such that $\det(\sum_{i \in S} v_i v_i^\top)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
arXiv Detail & Related papers (2020-04-16T19:16:38Z)
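To illustrate the objective from the last entry, here is a minimal Python sketch that evaluates $\det(\sum_{i \in S} v_i v_i^\top)$ and brute-forces the best basis of a tiny partition matroid; the vectors, groups, and helper names are illustrative assumptions, and this is not the paper's algorithm.

```python
import numpy as np
from itertools import product

def det_objective(vectors, S):
    """Evaluate det(sum_{i in S} v_i v_i^T) for an index set S."""
    return np.linalg.det(sum(np.outer(vectors[i], vectors[i]) for i in S))

# Toy instance: 2-D vectors split into two groups; a simple partition matroid
# allows exactly one index from each group, so every basis has size 2.
vectors = [np.array([1.0, 0.0]), np.array([0.8, 0.6]),   # group 0
           np.array([0.0, 1.0]), np.array([0.6, 0.8])]   # group 1
groups = [(0, 1), (2, 3)]

# Brute-force search over feasible bases (fine only for a toy instance).
best = max(product(*groups), key=lambda S: det_objective(vectors, S))
print(best, det_objective(vectors, best))  # (0, 2) 1.0 -- the most "spread out" pair
```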