Best of Both Distortion Worlds
- URL: http://arxiv.org/abs/2305.19453v1
- Date: Tue, 30 May 2023 23:24:01 GMT
- Title: Best of Both Distortion Worlds
- Authors: Vasilis Gkatzelis, Mohamad Latifian and Nisarg Shah
- Abstract summary: We study the problem of designing voting rules that take as input the ordinal preferences of $n$ agents over a set of $m$ alternatives.
The input to the voting rule is each agent's ranking of the alternatives from most to least preferred, yet the agents have more refined (cardinal) preferences that capture the intensity with which they prefer one alternative over another.
We prove that one can achieve the best of both worlds by designing new voting rules that simultaneously achieve near-optimal distortion guarantees in both distortion worlds.
- Score: 29.185700008117173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of designing voting rules that take as input the ordinal
preferences of $n$ agents over a set of $m$ alternatives and output a single
alternative, aiming to optimize the overall happiness of the agents. The input
to the voting rule is each agent's ranking of the alternatives from most to
least preferred, yet the agents have more refined (cardinal) preferences that
capture the intensity with which they prefer one alternative over another. To
quantify the extent to which voting rules can optimize over the cardinal
preferences given access only to the ordinal ones, prior work has used the
distortion measure, i.e., the worst-case approximation ratio between a voting
rule's performance and the best performance achievable given the cardinal
preferences.
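The utilitarian distortion measure described above can be illustrated with a small, self-contained sketch. The profile, rule (plurality), and numbers below are hypothetical examples, not from the paper: the rule sees only rankings, while welfare is computed from hidden cardinal utilities consistent with those rankings.

```python
def plurality_winner(rankings):
    """Pick the alternative ranked first by the most agents (ties by lower index)."""
    counts = {}
    for r in rankings:
        counts[r[0]] = counts.get(r[0], 0) + 1
    return max(counts, key=lambda a: (counts[a], -a))

def social_welfare(utilities, alt):
    return sum(u[alt] for u in utilities)

# Two alternatives {0, 1}; three agents. Normalized utilities (each agent's
# utilities sum to 1) consistent with the rankings below: agents 0 and 1
# rank 0 > 1, agent 2 ranks 1 > 0.
rankings = [[0, 1], [0, 1], [1, 0]]
utilities = [
    {0: 0.51, 1: 0.49},   # barely prefers 0
    {0: 0.51, 1: 0.49},   # barely prefers 0
    {1: 1.0, 0: 0.0},     # strongly prefers 1
]

chosen = plurality_winner(rankings)  # sees rankings only, so it picks 0
best = max({0, 1}, key=lambda a: social_welfare(utilities, a))
ratio = social_welfare(utilities, best) / social_welfare(utilities, chosen)
# The ratio for this one consistent utility profile lower-bounds the rule's
# distortion; the distortion itself is the worst case over all such profiles.
```

Here the rule picks alternative 0 (two first-place votes), but the hidden utilities make alternative 1 nearly twice as good in total welfare, so the approximation ratio on this instance is about 1.94.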
The work on the distortion of voting rules has been largely divided into two
worlds: utilitarian distortion and metric distortion. In the former, the
cardinal preferences of the agents correspond to general utilities and the goal
is to maximize a normalized social welfare. In the latter, the agents' cardinal
preferences correspond to costs given by distances in an underlying metric
space and the goal is to minimize the (unnormalized) social cost. Several
deterministic and randomized voting rules have been proposed and evaluated for
each of these worlds separately, gradually improving the achievable distortion
bounds, but none of the known voting rules perform well in both worlds
simultaneously.
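The metric world can be sketched analogously; again the locations and the majority-style rule below are a hypothetical example, not taken from the paper. Agents and alternatives are points in a metric space (here, the real line), an agent ranks alternatives by increasing distance, and the goal is to minimize the unnormalized social cost (total distance to the chosen alternative).

```python
agents = [0.49, 0.49, 1.0]      # agent locations on the line (hypothetical)
alts = {"a": 0.0, "b": 1.0}     # alternative locations (hypothetical)

def ranking(agent):
    """An agent's ordinal ranking: alternatives sorted by increasing distance."""
    return sorted(alts, key=lambda x: abs(agent - alts[x]))

def social_cost(alt):
    return sum(abs(agent - alts[alt]) for agent in agents)

rankings = [ranking(a) for a in agents]
# A majority-favorite rule that sees only the rankings:
top = [r[0] for r in rankings]
winner = max(set(top), key=top.count)   # "a" gets 2 of 3 first-place votes
opt = min(alts, key=social_cost)        # "b" actually minimizes total cost
ratio = social_cost(winner) / social_cost(opt)
```

Two agents sit just barely closer to "a", so the rankings elect "a", yet "b" has roughly half the social cost; as in the utilitarian sketch, the ratio on a single consistent instance only lower-bounds the worst-case distortion.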
In this work, we prove that one can achieve the best of both worlds by
designing new voting rules that simultaneously achieve near-optimal distortion
guarantees in both distortion worlds. We also prove that this positive result
does not generalize to the case where the voting rule is provided with the
rankings of only the top-$t$ alternatives of each agent, for $t<m$.
Related papers
- Centralized Selection with Preferences in the Presence of Biases [25.725937202777267]
The paper focuses on the setting in which candidates are divided into multiple groups and the observed utilities of candidates in some groups are biased--systematically lower than their true utilities.
An algorithm is presented along with proof that it produces selections that achieve near-optimal group fairness with respect to preferences while also nearly maximizing the true utility under distributional assumptions.
arXiv Detail & Related papers (2024-09-07T19:47:13Z)
- Fair Allocation in Dynamic Mechanism Design [57.66441610380448]
We consider a problem where an auctioneer sells an indivisible good to groups of buyers in every round, for a total of $T$ rounds.
The auctioneer aims to maximize their discounted overall revenue while adhering to a fairness constraint that guarantees a minimum average allocation for each group.
arXiv Detail & Related papers (2024-05-31T19:26:05Z)
- Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
arXiv Detail & Related papers (2024-05-09T12:52:22Z)
- Efficient Weighting Schemes for Auditing Instant-Runoff Voting Elections [57.67176250198289]
AWAIRE involves adaptively weighted averages of test statistics, essentially "learning" an effective set of hypotheses to test.
We explore schemes and settings more extensively, to identify and recommend efficient choices for practice.
A limitation of the current AWAIRE implementation is its restriction to a small number of candidates.
arXiv Detail & Related papers (2024-02-18T10:13:01Z)
- Proportional Aggregation of Preferences for Sequential Decision Making [20.374669324368625]
We study the problem of fair sequential decision making given voter preferences.
In each round, a decision rule must choose a decision from a set of alternatives where each voter reports which of these alternatives they approve.
We formalize this aim using axioms based on Proportional Justified Representation.
arXiv Detail & Related papers (2023-06-26T17:10:10Z)
- Social Mechanism Design: A Low-Level Introduction [31.564788318133264]
We show that agents have preferences over both decision outcomes and the rules or procedures used to make decisions.
We identify simple, intuitive preference structures at low levels that can be generalized to form the building blocks of preferences at higher levels.
We analyze algorithms for acceptance in two different domains: asymmetric dichotomous choice and constitutional amendment.
arXiv Detail & Related papers (2022-11-15T20:59:34Z)
- Most Equitable Voting Rules [30.930621357547487]
The ANR impossibility -- that there is no voting rule satisfying anonymity, neutrality, and resolvability -- holds even in the simple setting of two alternatives and two agents.
We propose a novel and strong notion of most equitable refinements that optimally preserves anonymity and neutrality for any irresolute rule that satisfies the two axioms.
arXiv Detail & Related papers (2022-05-30T03:56:54Z)
- Inferring Lexicographically-Ordered Rewards from Preferences [82.42854687952115]
This paper proposes a method for inferring multi-objective reward-based representations of an agent's observed preferences.
We model the agent's priorities over different objectives as entering lexicographically, so that objectives with lower priorities matter only when the agent is indifferent with respect to objectives with higher priorities.
arXiv Detail & Related papers (2022-02-21T12:01:41Z)
- Where is the Grass Greener? Revisiting Generalized Policy Iteration for Offline Reinforcement Learning [81.15016852963676]
We re-implement state-of-the-art baselines in the offline RL regime under a fair, unified, and highly factorized framework.
We show that when a given baseline outperforms its competing counterparts on one end of the spectrum, it never does on the other end.
arXiv Detail & Related papers (2021-07-03T11:00:56Z)
- Consensus-Guided Correspondence Denoising [67.35345850146393]
We propose to denoise correspondences with a local-to-global consensus learning framework to robustly identify correspondence.
A novel "pruning" block is introduced to distill reliable candidates from initial matches according to their consensus scores estimated by dynamic graphs from local to global regions.
Our method outperforms the state of the art on robust line fitting, wide-baseline image matching, and image localization benchmarks by noticeable margins.
arXiv Detail & Related papers (2021-01-03T09:10:00Z)
- Optimal Policies for the Homogeneous Selective Labels Problem [19.54948759840131]
This paper reports work in progress on learning decision policies in the face of selective labels.
For maximizing discounted total reward, the optimal policy is shown to be a threshold policy.
For undiscounted infinite-horizon average reward, optimal policies have positive acceptance probability in all states.
arXiv Detail & Related papers (2020-11-02T23:32:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.