Harm Ratio: A Novel and Versatile Fairness Criterion
- URL: http://arxiv.org/abs/2410.02977v1
- Date: Thu, 3 Oct 2024 20:36:05 GMT
- Title: Harm Ratio: A Novel and Versatile Fairness Criterion
- Authors: Soroush Ebadian, Rupert Freeman, Nisarg Shah
- Abstract summary: Envy-freeness has become the cornerstone of fair division research.
We propose a novel fairness criterion, individual harm ratio, inspired by envy-freeness.
Our criterion is powerful enough to differentiate between prominent decision-making algorithms.
- Score: 27.18270261374462
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Envy-freeness has become the cornerstone of fair division research. In settings where each individual is allocated a disjoint share of collective resources, it is a compelling fairness axiom which demands that no individual strictly prefer the allocation of another individual to their own. Unfortunately, in many real-life collective decision-making problems, the goal is to choose a (common) public outcome that is equally applicable to all individuals, and the notion of envy becomes vacuous. Consequently, this literature has avoided studying fairness criteria that focus on individuals feeling a sense of jealousy or resentment towards other individuals (rather than towards the system), missing out on a key aspect of fairness. In this work, we propose a novel fairness criterion, individual harm ratio, which is inspired by envy-freeness but applies to a broad range of collective decision-making settings. Theoretically, we identify minimal conditions under which this criterion and its groupwise extensions can be guaranteed, and study the computational complexity of related problems. Empirically, we conduct experiments with real data to show that our fairness criterion is powerful enough to differentiate between prominent decision-making algorithms for a range of tasks from voting and fair division to participatory budgeting and peer review.
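To ground the baseline the abstract builds on, here is a minimal sketch in Python of the envy-freeness check for private allocations, assuming additive valuations over items (the valuation model and all names are illustrative; the paper's harm ratio is not defined in the abstract and is not reproduced here):

```python
# Minimal envy-freeness check for allocations of disjoint shares.
# ASSUMPTION: additive valuations; this toy setup is illustrative only
# and is not taken from the paper.

def is_envy_free(valuations, allocation):
    """valuations[i][g]: value individual i assigns to item g.
    allocation[i]: set of items allocated to individual i."""
    def value(i, bundle):
        return sum(valuations[i][g] for g in bundle)
    n = len(allocation)
    # Envy-free: no individual strictly prefers another's bundle to their own.
    return all(
        value(i, allocation[i]) >= value(i, allocation[j])
        for i in range(n) for j in range(n) if j != i
    )

# Toy example: two individuals, three items.
vals = [{"a": 3, "b": 1, "c": 2}, {"a": 1, "b": 4, "c": 2}]
alloc = [{"a", "c"}, {"b"}]
print(is_envy_free(vals, alloc))  # True: each prefers their own bundle
```

When the outcome is a single public decision applied to everyone, all "bundles" coincide and this check passes vacuously; that is exactly the gap the individual harm ratio is designed to fill.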
Related papers
- Implementing Fairness: the view from a FairDream [0.0]
We train an AI model and develop our own fairness package, FairDream, to detect inequalities and then correct for them.
Our experiments show that, by construction, FairDream fulfills fairness objectives that are conditional on the ground truth.
arXiv Detail & Related papers (2024-07-20T06:06:24Z)
- A Universal Unbiased Method for Classification from Aggregate Observations [115.20235020903992]
This paper presents a novel universal method for classification from aggregate observations (CFAO), which provides an unbiased estimator of the classification risk for arbitrary losses.
The method not only guarantees risk consistency, thanks to the unbiased risk estimator, but is also compatible with arbitrary losses.
arXiv Detail & Related papers (2023-06-20T07:22:01Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? [90.35044668396591]
A recurring theme in machine learning is algorithmic monoculture: the same systems, or systems that share components, are deployed by multiple decision-makers.
We propose the component-sharing hypothesis: if decision-makers share components like training data or specific models, then they will produce more homogeneous outcomes.
We test this hypothesis on algorithmic fairness benchmarks, demonstrating that sharing training data reliably exacerbates homogenization.
We conclude with philosophical analyses of and societal challenges for outcome homogenization, with an eye towards implications for deployed machine learning systems.
arXiv Detail & Related papers (2022-11-25T09:33:11Z)
- Fairness and robustness in anti-causal prediction [73.693135253335]
Robustness to distribution shift and fairness have independently emerged as two important desiderata required of machine learning models.
While these two desiderata seem related, the connection between them is often unclear in practice.
By adopting the anti-causal perspective, we draw explicit connections between a common fairness criterion - separation - and a common notion of robustness (a minimal sketch of the separation check appears after this list).
arXiv Detail & Related papers (2022-09-20T02:41:17Z)
- Characterization of Group-Fair Social Choice Rules under Single-Peaked Preferences [0.5161531917413706]
We study fairness in social choice settings under single-peaked preferences.
We provide two separate characterizations of random social choice rules that satisfy group fairness.
arXiv Detail & Related papers (2022-07-16T17:12:54Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches, which estimate group prevalences rather than individual labels, are particularly suited to tackle the fairness-under-unawareness problem (a generic quantification sketch appears after this list).
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Metric-Free Individual Fairness with Cooperative Contextual Bandits [17.985752744098267]
Group fairness requires that different groups be treated similarly, which might be unfair to some individuals within a group.
Individual fairness remains understudied due to its reliance on problem-specific similarity metrics.
We propose a metric-free notion of individual fairness and a cooperative contextual-bandit algorithm.
arXiv Detail & Related papers (2020-11-13T03:10:35Z)
- On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
arXiv Detail & Related papers (2020-10-13T16:35:06Z)
- Metric-Free Individual Fairness in Online Learning [32.56688029679103]
We study an online learning problem subject to the constraint of individual fairness.
We do not assume that the similarity measure among individuals is known, nor that it takes a particular parametric form.
We leverage the existence of an auditor who detects fairness violations without enunciating the quantitative measure.
arXiv Detail & Related papers (2020-02-13T12:25:27Z)
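The "Fairness and robustness in anti-causal prediction" entry above refers to the separation criterion: predictions should be independent of the sensitive attribute conditional on the true label, which for binary classification amounts to equal true- and false-positive rates across groups. The sketch below is a generic Python check with an arbitrary tolerance; none of the names or thresholds come from that paper.

```python
# Generic check of the separation criterion for binary classification:
# equal TPR and FPR across groups, up to a tolerance.

from collections import defaultdict

def rate_gaps(y_true, y_pred, group):
    """Return the largest cross-group gaps in TPR and FPR."""
    counts = defaultdict(lambda: [0, 0, 0, 0])  # per group: [tp, pos, fp, neg]
    for yt, yp, g in zip(y_true, y_pred, group):
        c = counts[g]
        if yt == 1:
            c[1] += 1   # one more positive label in this group
            c[0] += yp  # counted as a true positive if predicted positive
        else:
            c[3] += 1   # one more negative label in this group
            c[2] += yp  # counted as a false positive if predicted positive
    tprs = [c[0] / c[1] for c in counts.values() if c[1]]
    fprs = [c[2] / c[3] for c in counts.values() if c[3]]
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

def satisfies_separation(y_true, y_pred, group, tol=0.05):
    tpr_gap, fpr_gap = rate_gaps(y_true, y_pred, group)
    return tpr_gap <= tol and fpr_gap <= tol

# Example: TPR gap 0.5 and FPR gap 0.5 across groups, so separation fails.
print(satisfies_separation([1, 1, 0, 0, 1, 0],
                           [1, 0, 0, 1, 1, 0],
                           ["a", "a", "a", "b", "b", "b"]))  # False
```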
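For the "Measuring Fairness Under Unawareness" entry: quantification means estimating class prevalences in an unlabeled sample rather than labeling individuals. A standard estimator is adjusted classify-and-count; whether that paper relies on this exact estimator is an assumption, so treat the following as a generic illustration of the technique.

```python
# Adjusted Classify-and-Count (ACC), a standard quantification estimator.
# ASSUMPTION: generic illustration; not necessarily the estimator used in
# the paper above.

def acc_prevalence(observed_positive_rate: float, tpr: float, fpr: float) -> float:
    """Solve observed = p*tpr + (1-p)*fpr for the prevalence p, clipped to [0, 1].
    tpr and fpr are the classifier's rates measured on labeled validation data."""
    if tpr == fpr:
        raise ValueError("Classifier is uninformative (tpr == fpr).")
    p = (observed_positive_rate - fpr) / (tpr - fpr)
    return min(1.0, max(0.0, p))

# Example: 40% of deployment data is flagged positive by a classifier
# with tpr = 0.8 and fpr = 0.1; the corrected prevalence is ~0.43.
print(acc_prevalence(0.40, 0.80, 0.10))
```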
This list is automatically generated from the titles and abstracts of the papers in this site.