Optimized Distortion in Linear Social Choice
- URL: http://arxiv.org/abs/2510.20020v1
- Date: Wed, 22 Oct 2025 20:42:49 GMT
- Title: Optimized Distortion in Linear Social Choice
- Authors: Luise Ge, Gregory Kehne, Yevgeniy Vorobeychik
- Abstract summary: We study distortion of linear social choice for deterministic and randomized voting rules. We introduce poly-time instance-optimal algorithms for minimizing distortion given a collection of candidates and votes.
- Score: 28.227695590829086
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social choice theory offers a wealth of approaches for selecting a candidate on behalf of voters based on their reported preference rankings over options. When voters have underlying utilities for these options, however, using preference rankings may lead to suboptimal outcomes vis-à-vis utilitarian social welfare. Distortion is a measure of this suboptimality, and provides a worst-case approach for developing and analyzing voting rules when utilities have minimal structure. However in many settings, such as common paradigms for value alignment, alternatives admit a vector representation, and it is natural to suppose that utilities are parametric functions thereof. We undertake the first study of distortion for linear utility functions. Specifically, we investigate the distortion of linear social choice for deterministic and randomized voting rules. We obtain bounds that depend only on the dimension of the candidate embedding, and are independent of the numbers of candidates or voters. Additionally, we introduce poly-time instance-optimal algorithms for minimizing distortion given a collection of candidates and votes. We empirically evaluate these in two real-world domains: recommendation systems using collaborative filtering embeddings, and opinion surveys utilizing language model embeddings, benchmarking several standard rules against our instance-optimal algorithms.
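The distortion notion in the abstract is, at an instance, the ratio of the best achievable utilitarian welfare to the welfare of the rule's chosen candidate. As a simplified illustration (the paper's actual measure takes a worst case over all utility profiles consistent with the reported rankings, and its rules are not plurality), here is a minimal sketch with fully known utilities and plurality as the voting rule; all names and numbers are invented for the example:

```python
from collections import Counter

def plurality(rankings):
    """Winner under plurality: the candidate ranked first most often."""
    return Counter(r[0] for r in rankings).most_common(1)[0][0]

def distortion(utilities, chosen):
    """Ratio of optimal utilitarian welfare to the chosen candidate's
    welfare; always >= 1, with 1 meaning the rule picked the optimum."""
    m = len(utilities[0])
    welfare = [sum(u[c] for u in utilities) for c in range(m)]
    return max(welfare) / welfare[chosen]

# Two voters strongly prefer candidate 0; one voter gets nothing from it.
utilities = [[1.0, 0.9], [1.0, 0.9], [0.0, 0.9]]
rankings = [sorted(range(2), key=lambda c: -u[c]) for u in utilities]
winner = plurality(rankings)          # plurality elects candidate 0
print(distortion(utilities, winner))  # optimum welfare 2.7 vs. winner's 2.0
```

Here rankings alone hide the intensity of voter 3's preference, so plurality picks candidate 0 and incurs distortion 2.7 / 2.0 = 1.35, the kind of gap the paper's instance-optimal algorithms are designed to minimize.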
Related papers
- What Voting Rules Actually Do: A Data-Driven Analysis of Multi-Winner Voting [5.880273374889066]
We propose a data-driven framework to evaluate how frequently voting rules violate axioms across diverse preference distributions. We show that neural networks, acting as voting rules, can outperform traditional rules in minimizing axiom violations.
arXiv Detail & Related papers (2025-08-08T16:54:09Z) - A Principled Approach to Randomized Selection under Uncertainty: Applications to Peer Review and Grant Funding [61.86327960322782]
We propose a principled framework for randomized decision-making based on interval estimates of the quality of each item. We introduce MERIT, an optimization-based method that maximizes the worst-case expected number of top candidates selected. We prove that MERIT satisfies desirable axiomatic properties not guaranteed by existing approaches.
arXiv Detail & Related papers (2025-06-23T19:59:30Z) - Beyond RLHF and NLHF: Population-Proportional Alignment under an Axiomatic Framework [7.065259679465175]
We develop a novel preference learning framework capable of aligning aggregate opinions and policies proportionally with the true population distribution of evaluator preferences. We propose a soft-max relaxation method that smoothly trades off population-proportional alignment against selection of the Condorcet winner.
arXiv Detail & Related papers (2025-06-05T22:15:07Z) - Alternates, Assemble! Selecting Optimal Alternates for Citizens' Assemblies [1.5624421399300306]
Citizens' assemblies are an influential form of deliberative democracy, where randomly selected people discuss policy questions. Dropouts are replaced by preselected alternates, but existing methods do not address how to choose these alternates. We introduce an optimization framework for alternate selection.
arXiv Detail & Related papers (2025-06-02T17:48:33Z) - On Symmetric Losses for Robust Policy Optimization with Noisy Preferences [55.8615920580824]
This work focuses on reward modeling, a core component in reinforcement learning from human feedback. We propose a principled framework for robust policy optimization under noisy preferences. We prove that symmetric losses enable successful policy optimization even under noisy labels.
arXiv Detail & Related papers (2025-05-30T15:30:43Z) - Geometric-Averaged Preference Optimization for Soft Preference Labels [78.2746007085333]
Many algorithms for aligning LLMs with human preferences assume that human preferences are binary and deterministic. In this work, we introduce distributional soft preference labels and improve Direct Preference Optimization (DPO) with a weighted geometric average of the LLM output likelihood in the loss function.
arXiv Detail & Related papers (2024-09-10T17:54:28Z) - DeepVoting: Learning and Fine-Tuning Voting Rules with Canonical Embeddings [5.312279415103033]
We recast the problem of designing voting rules with desirable properties into one of learning probabilistic functions. We show that preference profile encoding has a significant impact on the efficiency and ability of neural networks to learn rules. We also show that our learned rules can be fine-tuned using axiomatic properties to create novel voting rules.
arXiv Detail & Related papers (2024-08-24T17:15:20Z) - Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
arXiv Detail & Related papers (2024-05-09T12:52:22Z) - Efficient Weighting Schemes for Auditing Instant-Runoff Voting Elections [57.67176250198289]
AWAIRE involves adaptively weighted averages of test statistics, essentially "learning" an effective set of hypotheses to test.
We explore schemes and settings more extensively, to identify and recommend efficient choices for practice.
A limitation of the current AWAIRE implementation is its restriction to a small number of candidates.
arXiv Detail & Related papers (2024-02-18T10:13:01Z) - Best of Both Distortion Worlds [29.185700008117173]
We study the problem of designing voting rules that take as input the ordinal preferences of $n$ agents over a set of $m$ alternatives.
The input to the voting rule is each agent's ranking of the alternatives from most to least preferred, yet the agents have more refined (cardinal) preferences that capture the intensity with which they prefer one alternative over another.
We prove that one can achieve the best of both worlds by designing new voting rules, that simultaneously achieve near-optimal distortion guarantees in both distortion worlds.
arXiv Detail & Related papers (2023-05-30T23:24:01Z) - Off-Policy Evaluation with Policy-Dependent Optimization Response [90.28758112893054]
We develop a new framework for off-policy evaluation with a policy-dependent linear optimization response.
We construct unbiased estimators for the policy-dependent estimand by a perturbation method.
We provide a general algorithm for optimizing causal interventions.
arXiv Detail & Related papers (2022-02-25T20:25:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.