Objective Social Choice: Using Auxiliary Information to Improve Voting
Outcomes
- URL: http://arxiv.org/abs/2001.10092v1
- Date: Mon, 27 Jan 2020 21:21:19 GMT
- Title: Objective Social Choice: Using Auxiliary Information to Improve Voting
Outcomes
- Authors: Silviu Pitis and Michael R. Zhang
- Abstract summary: How should one combine noisy information from diverse sources to make an inference about an objective ground truth?
We propose a multi-arm bandit noise model and count-based auxiliary information set.
We find that our rules successfully use auxiliary information to outperform the naive baselines.
- Score: 16.764511357821043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How should one combine noisy information from diverse sources to make an
inference about an objective ground truth? This frequently recurring, normative
question lies at the core of statistics, machine learning, policy-making, and
everyday life. It has been called "combining forecasts", "meta-analysis",
"ensembling", and the "MLE approach to voting", among other names. Past studies
typically assume that noisy votes are identically and independently distributed
(i.i.d.), but this assumption is often unrealistic. Instead, we assume that
votes are independent but not necessarily identically distributed and that our
ensembling algorithm has access to certain auxiliary information related to the
underlying model governing the noise in each vote. In our present work, we: (1)
define our problem and argue that it reflects common and socially relevant real
world scenarios, (2) propose a multi-arm bandit noise model and count-based
auxiliary information set, (3) derive maximum likelihood aggregation rules for
ranked and cardinal votes under our noise model, (4) propose, alternatively, to
learn an aggregation rule using an order-invariant neural network, and (5)
empirically compare our rules to common voting rules and naive
experience-weighted modifications. We find that our rules successfully use
auxiliary information to outperform the naive baselines.
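To make the setting concrete, the sketch below simulates a bandit-style noise model with count-based auxiliary information and aggregates the resulting cardinal votes with a naive experience-weighted baseline. It is an illustration, not the paper's implementation: the Gaussian sample-mean approximation and the helper names (`sample_vote`, `experience_weighted_mean`) are assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_vote(true_means, counts, rng):
    """One voter: reports a noisy estimate of each alternative's value.

    true_means[m] is the ground-truth value of alternative m; counts[m] is the
    voter's count-based auxiliary information (how much experience the voter
    has with that alternative). More pulls -> less noise (Gaussian stand-in
    for a sample mean over counts[m] pulls); zero pulls -> an uninformed guess.
    """
    return np.array([
        rng.normal(mu, 1.0 / np.sqrt(c)) if c > 0 else rng.normal(0.0, 10.0)
        for mu, c in zip(true_means, counts)
    ])  # cardinal vote; a ranked vote would be np.argsort(-scores)

def experience_weighted_mean(votes, counts):
    """Naive baseline: weight each voter's score for an alternative by their pull count."""
    votes = np.asarray(votes, dtype=float)
    counts = np.asarray(counts, dtype=float)
    weights = counts / np.maximum(counts.sum(axis=0, keepdims=True), 1e-9)
    return (weights * votes).sum(axis=0)

# Tiny example: 3 alternatives, 4 voters with unequal experience.
true_means = np.array([0.2, 0.5, 0.8])
counts = rng.integers(0, 20, size=(4, 3))
votes = np.stack([sample_vote(true_means, c, rng) for c in counts])
scores = experience_weighted_mean(votes, counts)
print("estimated ranking:", np.argsort(-scores), "true ranking:", np.argsort(-true_means))
```

A maximum likelihood rule under the paper's model would instead derive each voter's weight from the likelihood of their report given the counts; the sketch only shows where the count-based auxiliary information enters.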
Related papers
- NoVo: Norm Voting off Hallucinations with Attention Heads in Large Language Models [70.02816541347251]
This paper presents a lightweight method, Norm Voting (NoVo), which harnesses the untapped potential of attention head norms to enhance factual accuracy.
On TruthfulQA MC1, NoVo surpasses the current state-of-the-art and all previous methods by an astounding margin -- at least 19 accuracy points.
arXiv Detail & Related papers (2024-10-11T16:40:03Z)
- DeepVoting: Learning Voting Rules with Tailored Embeddings [13.037431161285971]
We recast the problem of designing a good voting rule into one of learning probabilistic versions of voting rules.
We show that embeddings of preference profiles derived from the social choice literature allow us to learn existing voting rules more efficiently.
We also show that rules learned using embeddings can be tweaked to create novel voting rules with improved axiomatic properties.
arXiv Detail & Related papers (2024-08-24T17:15:20Z)
- Data as voters: instance selection using approval-based multi-winner voting [1.597617022056624]
We present a novel approach to the instance selection problem in machine learning (or data mining).
In our model, instances play a double role as voters and candidates.
For SVMs, we have obtained slight increases in the average accuracy by using several voting rules that satisfy EJR or PJR.
arXiv Detail & Related papers (2023-04-19T22:00:23Z)
- Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries [53.222218035435006]
We use adversarial tools to optimize for queries that are discriminative and diverse.
Our improvements achieve significantly more accurate membership inference than existing methods.
arXiv Detail & Related papers (2022-10-19T17:46:50Z)
- Distant finetuning with discourse relations for stance classification [55.131676584455306]
We propose a new method to extract data with silver labels from raw text to finetune a model for stance classification.
We also propose a 3-stage training framework where the noisy level in the data used for finetuning decreases over different stages.
Our approach ranks 1st among 26 competing teams in the stance classification track of the NLPCC 2021 shared task Argumentative Text Understanding for AI Debater.
arXiv Detail & Related papers (2022-04-27T04:24:35Z)
- The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from this assumption can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and even from a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z)
- Truth-tracking via Approval Voting: Size Matters [3.113227275600838]
We consider a simple setting where votes consist of approval ballots.
Each voter approves a set of alternatives which they believe can possibly be the ground truth.
We define several noise models that are approval voting variants of the Mallows model.
arXiv Detail & Related papers (2021-12-07T12:29:49Z)
- Obvious Manipulability of Voting Rules [105.35249497503527]
The Gibbard-Satterthwaite theorem states that no unanimous and non-dictatorial voting rule is strategyproof.
We revisit voting rules and consider a weaker notion of strategyproofness called not obvious manipulability.
arXiv Detail & Related papers (2021-11-03T02:41:48Z)
- Learning to Elect [7.893831644671976]
Voting systems have a wide range of applications including recommender systems, web search, product design and elections.
We show that set-input neural network architectures such as Set Transformers, fully-connected graph networks and DeepSets are both theoretically and empirically well-suited for learning voting rules; a minimal set-aggregation sketch follows this list.
arXiv Detail & Related papers (2021-08-05T17:55:46Z)
- Learning with Group Noise [106.56780716961732]
We propose a novel Max-Matching method for learning with group noise.
The performance on a range of real-world datasets across several learning paradigms demonstrates the effectiveness of Max-Matching.
arXiv Detail & Related papers (2021-03-17T06:57:10Z)
- Evaluating approval-based multiwinner voting in terms of robustness to noise [10.135719343010177]
We show that approval-based multiwinner voting is always robust to reasonable noise.
We further refine this finding by presenting a hierarchy of rules in terms of how robust to noise they are.
arXiv Detail & Related papers (2020-02-05T13:17:43Z)
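Point (4) of the abstract and the Learning to Elect entry above both rely on order-invariant, set-input networks. The DeepSets-style sketch below shows one way such an aggregator can consume votes together with count-based auxiliary information; the class name `SetAggregator`, the per-voter feature encoding, and the layer sizes are illustrative assumptions, not the architecture used in either paper.

```python
import torch
import torch.nn as nn

class SetAggregator(nn.Module):
    """Order-invariant (DeepSets-style) vote aggregator, illustrative only.

    Each voter contributes a feature vector (here: cardinal scores concatenated
    with pull counts per alternative). A shared encoder phi embeds every voter,
    the embeddings are summed over voters (permutation invariance), and a
    decoder rho maps the pooled embedding to one aggregate score per alternative.
    """
    def __init__(self, n_alternatives, hidden=64):
        super().__init__()
        in_dim = 2 * n_alternatives  # assumed encoding: scores + counts
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_alternatives))

    def forward(self, votes, counts):
        # votes, counts: (batch, n_voters, n_alternatives)
        x = torch.cat([votes, counts], dim=-1)   # per-voter features
        pooled = self.phi(x).sum(dim=1)          # sum over voters -> order-invariant
        return self.rho(pooled)                  # one score per alternative

# Usage: aggregate 4 voters' scores over 3 alternatives.
model = SetAggregator(n_alternatives=3)
votes = torch.randn(1, 4, 3)
counts = torch.randint(0, 20, (1, 4, 3)).float()
print(model(votes, counts).shape)  # torch.Size([1, 3])
```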
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.