Characterization of Group-Fair Social Choice Rules under Single-Peaked
Preferences
- URL: http://arxiv.org/abs/2207.07984v1
- Date: Sat, 16 Jul 2022 17:12:54 GMT
- Title: Characterization of Group-Fair Social Choice Rules under Single-Peaked
Preferences
- Authors: Gogulapati Sreedurga, Soumyarup Sadhukhan, Souvik Roy, Yadati Narahari
- Abstract summary: We study fairness in social choice settings under single-peaked preferences.
We provide two separate characterizations of random social choice rules that satisfy group-fairness.
- Score: 0.5161531917413706
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We study fairness in social choice settings under single-peaked preferences.
Construction and characterization of social choice rules in the single-peaked
domain have been extensively studied in prior work. In fact, in the
single-peaked domain, it is known that unanimous and strategy-proof
deterministic rules have to be min-max rules and those that also satisfy
anonymity have to be median rules. Further, random social choice rules
satisfying these properties have been shown to be convex combinations of
respective deterministic rules. We non-trivially add to this body of results by
including fairness considerations in social choice. Our study directly
addresses fairness for groups of agents. To study group-fairness, we consider
an existing partition of the agents into logical groups, based on natural
attributes such as gender, race, and location. To capture fairness within each
group, we introduce the notion of group-wise anonymity. To capture fairness
across the groups, we propose a weak notion as well as a strong notion of
fairness. The proposed fairness notions turn out to be natural generalizations
of existing individual-fairness notions and moreover provide non-trivial
outcomes for strict ordinal preferences, unlike the existing group-fairness
notions. We provide two separate characterizations of random social choice
rules that satisfy group-fairness: (i) a direct characterization and (ii) an
extreme point characterization (as convex combinations of fair deterministic social
choice rules). We also explore the special case where there are no groups and
provide sharper characterizations of rules that achieve individual-fairness.
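To make these rules concrete, here is a minimal Python sketch (not code from the paper, and illustrative only): a Moulin-style median rule that augments the agents' reported peaks with fixed "phantom" peaks, plus a random rule formed as a convex combination of such deterministic rules. The 0-10 alternative line, the phantom placements, and the mixing weights are assumptions made for the example.

```python
# Minimal sketch of median rules with phantom peaks on a single-peaked
# domain, and of a random rule as a convex combination of such rules.
# Illustrative only; not the construction from the paper.
import random
import statistics

def median_rule(peaks, phantoms):
    """Deterministic median rule: return the median of the agents' reported
    peaks together with a fixed multiset of phantom peaks. With n agents and
    n - 1 phantoms the total count is odd, so the median is unique. Such
    rules are unanimous, anonymous, and strategy-proof when preferences are
    single-peaked."""
    assert len(phantoms) == len(peaks) - 1
    return statistics.median(peaks + phantoms)

def random_median_rule(peaks, rules_with_weights):
    """Random social choice rule built as a convex combination of
    deterministic rules: sample one rule with its probability, then apply
    it to the reported peaks."""
    rules, weights = zip(*rules_with_weights)
    chosen = random.choices(rules, weights=weights, k=1)[0]
    return chosen(peaks)

# Alternatives 0..10 on a line; three agents report their peaks.
peaks = [2, 7, 9]
left_leaning = lambda p: median_rule(p, [0, 0])    # phantoms pull left
right_leaning = lambda p: median_rule(p, [10, 10]) # phantoms pull right
plain_median = lambda p: median_rule(p, [0, 10])   # ordinary median of peaks

print(plain_median(peaks))  # 7
print(random_median_rule(peaks, [(left_leaning, 0.25),
                                 (plain_median, 0.50),
                                 (right_leaning, 0.25)]))  # 2, 7, or 9
```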
Related papers
- Harm Ratio: A Novel and Versatile Fairness Criterion [27.18270261374462]
Envy-freeness has become the cornerstone of fair division research.
We propose a novel fairness criterion, individual harm ratio, inspired by envy-freeness.
Our criterion is powerful enough to differentiate between prominent decision-making algorithms.
arXiv Detail & Related papers (2024-10-03T20:36:05Z)
- Group fairness without demographics using social networks [29.073125057536014]
Group fairness is a popular approach to prevent unfavorable treatment of individuals based on sensitive attributes such as race, gender, and disability.
We propose a "group-free" measure of fairness that does not rely on sensitive attributes and, instead, is based on homophily in social networks.
arXiv Detail & Related papers (2023-05-19T00:45:55Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Proportional Fairness in Obnoxious Facility Location [70.64736616610202]
We propose a hierarchy of distance-based proportional fairness concepts for the problem.
We consider deterministic and randomized mechanisms, and compute tight bounds on the price of proportional fairness.
We prove existence results for two extensions to our model.
arXiv Detail & Related papers (2023-01-11T07:30:35Z)
- Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z)
- Distributive Justice as the Foundational Premise of Fair ML: Unification, Extension, and Interpretation of Group Fairness Metrics [0.0]
Group fairness metrics are an established way of assessing the fairness of prediction-based decision-making systems.
We propose a comprehensive framework for group fairness metrics, which links them to theories of distributive justice.
arXiv Detail & Related papers (2022-06-06T20:44:02Z)
- Enforcing Group Fairness in Algorithmic Decision Making: Utility Maximization Under Sufficiency [0.0]
This paper focuses on the fairness concepts of positive predictive value (PPV) parity, false omission rate (FOR) parity, and sufficiency.
We show that group-specific threshold rules are optimal for PPV parity and FOR parity.
We also provide a solution for the optimal decision rules satisfying the sufficiency fairness constraint.
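As a toy illustration of the group-specific threshold rules mentioned above, here is a hedged Python sketch (not the paper's derivation). The scores, labels, and per-group thresholds are made-up assumptions, chosen so that the two groups' PPVs coincide.

```python
# Sketch of a group-specific threshold rule and a PPV-parity check.
# All data and threshold values are illustrative assumptions.
def ppv(scores, labels, threshold):
    """Positive predictive value, P(Y = 1 | decision = 1), under the
    threshold rule 'accept if score >= threshold'."""
    accepted = [y for s, y in zip(scores, labels) if s >= threshold]
    return sum(accepted) / len(accepted) if accepted else float("nan")

# Risk scores, true labels, and a group-specific threshold per group.
group_a = ([0.2, 0.55, 0.7, 0.9], [0, 1, 0, 1], 0.5)
group_b = ([0.3, 0.5, 0.65, 0.8], [0, 1, 1, 0], 0.45)

for name, (scores, labels, t) in [("A", group_a), ("B", group_b)]:
    print(name, round(ppv(scores, labels, t), 2))  # both print 0.67
```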
arXiv Detail & Related papers (2022-06-05T18:47:34Z)
- On Disentangled and Locally Fair Representations [95.6635227371479]
We study the problem of performing classification in a manner that is fair to groups defined by sensitive attributes such as race and gender.
We learn a locally fair representation, such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute.
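A minimal sketch, assuming a learned representation matrix and a binary sensitive attribute, of how such neighborhood balance could be checked; this is an illustration, not the paper's method, and all names and data are assumptions.

```python
# Check how balanced each sample's k-nearest-neighbor set is with respect
# to a binary sensitive attribute, in some learned representation space.
# Illustrative only; not the method from the paper.
import numpy as np

def neighborhood_balance(X, sensitive, k=5):
    """For each row of X, return the fraction of its k nearest neighbors
    (Euclidean distance, excluding the point itself) whose sensitive
    attribute equals 1. Values near the population rate for every sample
    suggest a locally balanced representation."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # a point is not its own neighbor
    nn = np.argsort(d, axis=1)[:, :k]  # indices of the k nearest neighbors
    return sensitive[nn].mean(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))     # stand-in for learned representations
s = rng.integers(0, 2, size=100)  # binary sensitive attribute
balance = neighborhood_balance(X, s)
print(balance.mean(), s.mean())   # close together if the space is balanced
```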
arXiv Detail & Related papers (2022-05-05T14:26:50Z)
- Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While group fairness definitions are typically stated with respect to specified protected groups, we emphasize that there are no ground-truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping and impossible to achieve obliviously.
arXiv Detail & Related papers (2021-06-23T06:17:17Z)
- Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
arXiv Detail & Related papers (2020-06-18T12:56:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.