Measuring and Controlling Divisiveness in Rank Aggregation
- URL: http://arxiv.org/abs/2306.08511v1
- Date: Wed, 14 Jun 2023 13:55:25 GMT
- Title: Measuring and Controlling Divisiveness in Rank Aggregation
- Authors: Rachael Colley, Umberto Grandi, César Hidalgo, Mariana Macedo and Carlos Navarrete
- Abstract summary: In rank aggregation, members of a population rank issues to decide which are collectively preferred.
We focus instead on identifying divisive issues that express disagreements among the preferences of individuals.
- Score: 2.75005999729995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In rank aggregation, members of a population rank issues to decide which are
collectively preferred. We focus instead on identifying divisive issues that
express disagreements among the preferences of individuals. We analyse the
properties of our divisiveness measures and their relation to existing notions
of polarisation. We also study their robustness under incomplete preferences
and algorithms for control and manipulation of divisiveness. Our results
advance our understanding of how to quantify disagreements in collective
decision-making.
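A minimal sketch of one way such a score could be computed from a profile of rankings; the pairwise-split heuristic below is an illustrative assumption, not the divisiveness measure defined in the paper.

```python
# Illustrative sketch: score how divisive each issue is within a profile of
# rankings, by checking how evenly the population splits on each pairwise
# comparison involving that issue. This heuristic is an assumption for
# illustration only, not the paper's definition.
from itertools import combinations

def divisiveness(rankings):
    """rankings: list of rankings, each a list of issues from most to least
    preferred. Returns {issue: score in [0, 1]}, where 1 means the population
    splits 50/50 on every comparison involving the issue and 0 means unanimity."""
    issues = list(rankings[0])
    pos = [{x: r.index(x) for x in r} for r in rankings]
    scores = {x: 0.0 for x in issues}
    for x, y in combinations(issues, 2):
        share_x = sum(p[x] < p[y] for p in pos) / len(pos)   # voters preferring x
        split = 1.0 - abs(2.0 * share_x - 1.0)               # 1 at 50/50, 0 at unanimity
        scores[x] += split
        scores[y] += split
    return {x: s / (len(issues) - 1) for x, s in scores.items()}

profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
print(divisiveness(profile))  # higher score = more even split on that issue
```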
Related papers
- Policy Aggregation [21.21314301021803]
We consider the challenge of AI value alignment when multiple individuals have different reward functions and optimal policies in an underlying Markov decision process.
We formalize this problem as one of policy aggregation, where the goal is to identify a desirable collective policy.
The key insight is that social choice methods can be reinterpreted by identifying ordinal preferences with volumes of subsets of the state-action occupancy polytope.
arXiv Detail & Related papers (2024-11-06T04:19:50Z)
- Diverging Preferences: When do Annotators Disagree and do Models Know? [92.24651142187989]
We develop a taxonomy of disagreement sources spanning 10 categories across four high-level classes.
We find that the majority of disagreements are in opposition to standard reward modeling approaches.
We develop methods for identifying diverging preferences to mitigate their influence on evaluation and training.
arXiv Detail & Related papers (2024-10-18T17:32:22Z)
- Harm Ratio: A Novel and Versatile Fairness Criterion [27.18270261374462]
Envy-freeness has become the cornerstone of fair division research.
We propose a novel fairness criterion, individual harm ratio, inspired by envy-freeness.
Our criterion is powerful enough to differentiate between prominent decision-making algorithms.
arXiv Detail & Related papers (2024-10-03T20:36:05Z)
- Rater Cohesion and Quality from a Vicarious Perspective [22.445283423317754]
Vicarious annotation is a method for breaking down disagreement by asking raters how they think others would annotate the data.
We employ rater cohesion metrics to study the potential influence of political affiliations and demographic backgrounds on raters' perceptions of offense.
We study how the rater quality metrics influence the in-group and cross-group rater cohesion across the personal and vicarious levels.
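A minimal sketch of one simple cohesion statistic (mean pairwise label agreement within and across rater groups); the function names and the statistic itself are illustrative assumptions, not the cohesion metrics used in the paper.

```python
# Illustrative sketch: compare mean pairwise label agreement for rater pairs
# within the same group versus pairs from different groups. This simple
# statistic is an assumption for illustration, not the paper's metrics.
from itertools import combinations

def pairwise_agreement(labels_a, labels_b):
    """Fraction of items on which two raters assign the same label."""
    return sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)

def cohesion(ratings, groups):
    """ratings: {rater: [labels]}, groups: {rater: group id}.
    Returns (in-group, cross-group) mean pairwise agreement."""
    in_group, cross_group = [], []
    for r1, r2 in combinations(ratings, 2):
        bucket = in_group if groups[r1] == groups[r2] else cross_group
        bucket.append(pairwise_agreement(ratings[r1], ratings[r2]))
    return sum(in_group) / len(in_group), sum(cross_group) / len(cross_group)

ratings = {"r1": [1, 0, 1], "r2": [1, 0, 0], "r3": [0, 1, 0], "r4": [0, 1, 1]}
groups = {"r1": "A", "r2": "A", "r3": "B", "r4": "B"}
print(cohesion(ratings, groups))  # in-group agreement exceeds cross-group here
```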
arXiv Detail & Related papers (2024-08-15T20:37:36Z)
- Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? [90.35044668396591]
A recurring theme in machine learning is algorithmic monoculture: the same systems, or systems that share components, are deployed by multiple decision-makers.
We propose the component-sharing hypothesis: if decision-makers share components like training data or specific models, then they will produce more homogeneous outcomes.
We test this hypothesis on algorithmic fairness benchmarks, demonstrating that sharing training data reliably exacerbates homogenization.
We conclude with philosophical analyses of and societal challenges for outcome homogenization, with an eye towards implications for deployed machine learning systems.
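A minimal sketch of one way outcome homogenization can be quantified: the rate at which every model fails the same individual, relative to what independent errors would predict. The exact ratio below is an assumption for illustration and may differ from the paper's metric.

```python
# Illustrative sketch: systemic-failure ratio. Values above 1 mean models
# fail the same individuals more often than independent errors would predict.
# The exact formulation is an assumption, not necessarily the paper's metric.
import numpy as np

def homogenization_ratio(errors):
    """errors: (n_models, n_individuals) boolean array, True = model fails
    that individual. Returns observed all-models-fail rate divided by the
    rate expected if the models erred independently."""
    errors = np.asarray(errors, dtype=bool)
    observed = errors.all(axis=0).mean()        # individuals failed by every model
    expected = np.prod(errors.mean(axis=1))     # product of per-model error rates
    return observed / expected

# Two models that fail exactly the same two people out of ten.
errors = [[1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
          [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]
print(homogenization_ratio(errors))  # 5.0: errors are strongly correlated
```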
arXiv Detail & Related papers (2022-11-25T09:33:11Z)
- On the Complexity of Adversarial Decision Making [101.14158787665252]
We show that the Decision-Estimation Coefficient is necessary and sufficient to obtain low regret for adversarial decision making.
We provide new structural results that connect the Decision-Estimation Coefficient to variants of other well-known complexity measures.
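For reference, the offset form of the Decision-Estimation Coefficient introduced by Foster et al. (2021) can be written as below; the notation here is an assumption about the variant analysed in this paper.

$$
\operatorname{dec}_{\gamma}(\mathcal{M}, \bar{M}) \,=\, \inf_{p \in \Delta(\Pi)} \sup_{M \in \mathcal{M}} \mathbb{E}_{\pi \sim p}\Big[ f^{M}(\pi_{M}) - f^{M}(\pi) - \gamma \, D_{\mathrm{H}}^{2}\big(M(\pi), \bar{M}(\pi)\big) \Big]
$$

Here $\Pi$ is the decision space, $f^{M}(\pi)$ is the mean reward of decision $\pi$ under model $M$, $\pi_{M}$ is an optimal decision for $M$, $\bar{M}$ is a reference model, and $D_{\mathrm{H}}^{2}$ is the squared Hellinger distance.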
arXiv Detail & Related papers (2022-06-27T06:20:37Z)
- Let's Agree to Agree: Targeting Consensus for Incomplete Preferences through Majority Dynamics [13.439086686599891]
We focus on a process of majority dynamics where issues are addressed one at a time and undecided agents follow the opinion of the majority.
We show that in the worst case, myopic adherence to the majority damages existing consensus; yet, simulation experiments indicate that the damage is often mild.
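A minimal sketch of a sequential majority-dynamics process on binary issues, assuming undecided agents copy the current majority and ties leave them unchanged; these rules are illustrative assumptions and may differ from the exact process studied in the paper.

```python
# Illustrative sketch: issues are resolved one at a time, and agents who are
# undecided on the current issue adopt the majority opinion of the decided
# agents. Tie-breaking and update order are assumptions for illustration.
def majority_dynamics(profile):
    """profile: list of agents, each a list over issues with values
    1 (approve), 0 (reject), or None (undecided). Returns the profile after
    undecided agents have followed the majority, issue by issue."""
    n_issues = len(profile[0])
    for j in range(n_issues):
        yes = sum(1 for agent in profile if agent[j] == 1)
        no = sum(1 for agent in profile if agent[j] == 0)
        if yes == no:
            continue                          # tie: undecided agents stay put
        majority = 1 if yes > no else 0
        for agent in profile:
            if agent[j] is None:
                agent[j] = majority
    return profile

profile = [[1, None, 0], [1, 0, None], [None, 0, 0], [0, None, 1]]
print(majority_dynamics(profile))
```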
arXiv Detail & Related papers (2022-04-28T10:47:21Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
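A minimal sketch of a standard quantification step (Adjusted Classify & Count) that could underpin such an audit: estimating the prevalence of a hidden sensitive group from a proxy classifier. The estimator below is a textbook method, not necessarily the exact approach of the paper.

```python
# Illustrative sketch: Adjusted Classify & Count, a standard quantification
# method for estimating the prevalence of a class (here, a sensitive group)
# from an imperfect proxy classifier. Not necessarily the paper's estimator.
def adjusted_classify_and_count(predictions, tpr, fpr):
    """predictions: 0/1 guesses of the sensitive attribute from a proxy
    classifier; tpr/fpr: its true/false positive rates measured on a small
    validation set. Returns a corrected prevalence estimate in [0, 1]."""
    raw = sum(predictions) / len(predictions)   # naive "classify & count"
    adjusted = (raw - fpr) / (tpr - fpr)        # standard ACC correction
    return min(max(adjusted, 0.0), 1.0)

preds = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]          # 40% predicted to be in the group
print(adjusted_classify_and_count(preds, tpr=0.8, fpr=0.1))  # ~0.43
```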
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Egalitarian Judgment Aggregation [10.42629447317569]
Egalitarian considerations play a central role in many areas of social choice theory.
We introduce axioms capturing two classical interpretations of egalitarianism in judgment aggregation.
We then explore the relationship between these axioms and several notions of strategyproofness from social choice theory.
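A minimal sketch of one egalitarian (maximin) flavour of aggregation: pick the collective judgment set that minimises the largest Hamming distance to any individual. Logical consistency constraints are ignored here for brevity, and this rule is an illustrative assumption rather than the rules axiomatised in the paper.

```python
# Illustrative sketch: a maximin (Rawlsian) aggregation rule over binary
# judgment sets, ignoring logical consistency constraints for brevity.
# This is an assumption for illustration, not the paper's rules or axioms.
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def egalitarian_outcome(profile):
    """profile: list of individual judgment sets, each a 0/1 tuple over the
    agenda. Returns an assignment minimising the worst-off agent's Hamming
    distance to the outcome (ties broken arbitrarily)."""
    m = len(profile[0])
    return min(product([0, 1], repeat=m),
               key=lambda outcome: max(hamming(outcome, j) for j in profile))

profile = [(1, 1, 0), (1, 0, 0), (0, 1, 1)]
print(egalitarian_outcome(profile))  # no agent ends up too far from the outcome
```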
arXiv Detail & Related papers (2021-02-04T18:07:31Z)
- Inverse Active Sensing: Modeling and Understanding Timely Decision-Making [111.07204912245841]
We develop a framework for the general setting of evidence-based decision-making under endogenous, context-dependent time pressure.
We demonstrate how it enables modeling intuitive notions of surprise, suspense, and optimality in decision strategies.
arXiv Detail & Related papers (2020-06-25T02:30:45Z)
- Towards Quantifying the Distance between Opinions [66.29568619199074]
We find that measures based solely on text similarity or on overall sentiment often fail to effectively capture the distance between opinions.
We propose a new distance measure for capturing the similarity between opinions that leverages a more nuanced signal than raw text similarity or overall sentiment.
In an unsupervised setting, our distance measure achieves significantly better Adjusted Rand Index scores (up to 56x) and Silhouette coefficients (up to 21x) compared to existing approaches.
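A minimal sketch of an opinion distance in this spirit, comparing sentiment towards shared entities-of-interest rather than raw text overlap; the assumed inputs (per-entity sentiment scores) and the formula are illustrative, not the measure proposed in the paper.

```python
# Illustrative sketch: distance between two opinions based on how far apart
# their sentiment is on the entities they both mention. The per-entity
# sentiment inputs and the formula are assumptions for illustration only.
def opinion_distance(op_a, op_b):
    """op_a, op_b: {entity: sentiment in [-1, 1]}. Returns the mean absolute
    sentiment gap over shared entities, scaled to [0, 1]; returns the maximum
    distance when no entities are shared."""
    shared = op_a.keys() & op_b.keys()
    if not shared:
        return 1.0
    return sum(abs(op_a[e] - op_b[e]) for e in shared) / (2 * len(shared))

a = {"carbon tax": 0.8, "nuclear power": -0.2}
b = {"carbon tax": -0.6, "wind power": 0.9}
print(opinion_distance(a, b))  # 0.7: strong disagreement on the shared entity
```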
arXiv Detail & Related papers (2020-01-27T16:01:10Z)