Multi-winner Approval Voting Goes Epistemic
- URL: http://arxiv.org/abs/2201.06655v1
- Date: Mon, 17 Jan 2022 23:07:14 GMT
- Title: Multi-winner Approval Voting Goes Epistemic
- Authors: Tahar Allouche, Jérôme Lang, Florian Yger
- Abstract summary: We consider contexts where the truth consists of a set of objective winners, knowing a lower and upper bound on its cardinality.
A prototypical problem for this setting is the aggregation of multi-label annotations with prior knowledge on the size of the ground truth.
We posit noise models, for which we define rules that output an optimal set of winners.
- Score: 6.933322579961287
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Epistemic voting interprets votes as noisy signals about a ground truth. We
consider contexts where the truth consists of a set of objective winners,
knowing a lower and upper bound on its cardinality. A prototypical problem for
this setting is the aggregation of multi-label annotations with prior
knowledge on the size of the ground truth. We posit noise models, for which we
define rules that output an optimal set of winners. We report on experiments on
multi-label annotations (which we collected).
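The paper's own noise models and rules are not reproduced in this listing; the following is a minimal sketch of the epistemic setting it describes, under an assumed independent-noise model (not necessarily the one the authors posit): each voter approves a true winner with probability p and a non-winner with probability q, with p > q. Under that assumption, the maximum-likelihood winner set subject to cardinality bounds reduces to ranking alternatives by a per-alternative log-likelihood gain.

```python
import math
import random

def sample_approval_ballot(truth, alternatives, p=0.8, q=0.2, rng=random):
    """One approval ballot under the assumed independent-noise model:
    a voter approves each true winner with probability p and each
    non-winner with probability q (p > q)."""
    return {a for a in alternatives
            if rng.random() < (p if a in truth else q)}

def mle_winner_set(ballots, alternatives, k_min, k_max, p=0.8, q=0.2):
    """Maximum-likelihood winner set whose size lies in [k_min, k_max].
    Under the model above, the log-likelihood gain of declaring
    alternative a a winner is
        app(a) * log(p/q) + (n - app(a)) * log((1-p)/(1-q)),
    where app(a) is a's approval count and n the number of voters, so
    the best size-k set is the k alternatives with the highest gain."""
    n = len(ballots)
    gain = {}
    for a in alternatives:
        app = sum(1 for b in ballots if a in b)
        gain[a] = (app * math.log(p / q)
                   + (n - app) * math.log((1 - p) / (1 - q)))
    ranked = sorted(alternatives, key=lambda a: gain[a], reverse=True)
    best, best_ll = None, -math.inf
    # Try every admissible cardinality and keep the likeliest set.
    for k in range(k_min, k_max + 1):
        ll = sum(gain[a] for a in ranked[:k])
        if ll > best_ll:
            best, best_ll = set(ranked[:k]), ll
    return best
```

For example, with ballots [{'a','b'}, {'a','b','c'}, {'a'}, {'b','d'}, {'a','b'}] and size bounds [1, 3], the rule recovers {'a', 'b'}: adding either of the heavily approved alternatives increases the likelihood, while adding a third would decrease it.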
Related papers
- Diverging Preferences: When do Annotators Disagree and do Models Know? [92.24651142187989]
We develop a taxonomy of disagreement sources spanning 10 categories across four high-level classes.
We find that the majority of disagreements are in opposition with standard reward modeling approaches.
We develop methods for identifying diverging preferences to mitigate their influence on evaluation and training.
arXiv Detail & Related papers (2024-10-18T17:32:22Z)
- NoVo: Norm Voting off Hallucinations with Attention Heads in Large Language Models [70.02816541347251]
This paper presents a lightweight method, Norm Voting (NoVo), which harnesses the untapped potential of attention head norms to enhance factual accuracy.
On TruthfulQA MC1, NoVo surpasses the current state-of-the-art and all previous methods by an astounding margin -- at least 19 accuracy points.
arXiv Detail & Related papers (2024-10-11T16:40:03Z)
- Abductive and Contrastive Explanations for Scoring Rules in Voting [5.928530455750507]
We design algorithms for computing abductive and contrastive explanations for scoring rules.
For the Borda rule, we find a lower bound on the size of the smallest abductive explanations.
We conduct simulations to identify correlations between properties of preference profiles and the size of their smallest abductive explanations.
arXiv Detail & Related papers (2024-08-23T09:12:58Z)
- Probabilistic Test-Time Generalization by Variational Neighbor-Labeling [62.158807685159736]
This paper strives for domain generalization, where models are trained exclusively on source domains before being deployed on unseen target domains.
We propose probabilistic pseudo-labeling of target samples to generalize the source-trained model to the target domain at test time.
We introduce variational neighbor labels that incorporate the information of neighboring target samples to generate more robust pseudo labels.
arXiv Detail & Related papers (2023-07-08T18:58:08Z)
- Neighborhood Collective Estimation for Noisy Label Identification and Correction [92.20697827784426]
Learning with noisy labels (LNL) aims at designing strategies to improve model performance and generalization by mitigating the effects of model overfitting to noisy labels.
Recent advances employ the predicted label distributions of individual samples to perform noise verification and noisy label correction, easily giving rise to confirmation bias.
We propose Neighborhood Collective Estimation, in which the predictive reliability of a candidate sample is re-estimated by contrasting it against its feature-space nearest neighbors.
arXiv Detail & Related papers (2022-08-05T14:47:22Z)
- Truth-tracking via Approval Voting: Size Matters [3.113227275600838]
We consider a simple setting where votes consist of approval ballots.
Each voter approves a set of alternatives which they believe can possibly be the ground truth.
We define several noise models that are approval voting variants of the Mallows model.
arXiv Detail & Related papers (2021-12-07T12:29:49Z)
- Obvious Manipulability of Voting Rules [105.35249497503527]
The Gibbard-Satterthwaite theorem states that no unanimous and non-dictatorial voting rule is strategyproof.
We revisit voting rules and consider a weaker notion of strategyproofness called not obvious manipulability.
arXiv Detail & Related papers (2021-11-03T02:41:48Z)
- The Complexity of Learning Approval-Based Multiwinner Voting Rules [9.071560867542647]
We study the learnability of multiwinner voting, focusing on the class of approval-based committee scoring (ABCS) rules.
Our goal is to learn a target rule (i.e., to learn the corresponding scoring function) using information about the winning committees of a small number of profiles.
We prove that deciding whether there exists some ABCS rule that makes a given committee winning in a given profile is a hard problem.
arXiv Detail & Related papers (2021-10-01T08:25:05Z)
- Consensus-Guided Correspondence Denoising [67.35345850146393]
We propose to denoise correspondences with a local-to-global consensus learning framework to robustly identify correct correspondences.
A novel "pruning" block is introduced to distill reliable candidates from initial matches according to their consensus scores estimated by dynamic graphs from local to global regions.
Our method outperforms state-of-the-art methods on robust line fitting, wide-baseline image matching and image localization benchmarks by noticeable margins.
arXiv Detail & Related papers (2021-01-03T09:10:00Z)
- Evaluating approval-based multiwinner voting in terms of robustness to noise [10.135719343010177]
We show that approval-based multiwinner voting is always robust to reasonable noise.
We further refine this finding by presenting a hierarchy of rules in terms of how robust to noise they are.
arXiv Detail & Related papers (2020-02-05T13:17:43Z)
- Objective Social Choice: Using Auxiliary Information to Improve Voting Outcomes [16.764511357821043]
How should one combine noisy information from diverse sources to make an inference about an objective ground truth?
We propose a multi-arm bandit noise model and count-based auxiliary information set.
We find that our rules successfully use auxiliary information to outperform the naive baselines.
arXiv Detail & Related papers (2020-01-27T21:21:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.