On the (In)Compatibility between Group Fairness and Individual Fairness
- URL: http://arxiv.org/abs/2401.07174v1
- Date: Sat, 13 Jan 2024 23:38:10 GMT
- Title: On the (In)Compatibility between Group Fairness and Individual Fairness
- Authors: Shizhou Xu and Thomas Strohmer
- Abstract summary: We study the compatibility between the optimal statistical parity solutions and individual fairness.
We provide individual fairness guarantees for the composition of a trained model and the optimal post-processing step.
- Score: 3.6052935394000234
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the compatibility between the optimal statistical parity solutions
and individual fairness. While individual fairness seeks to treat similar
individuals similarly, optimal statistical parity aims to provide similar
treatment to individuals who share relative similarity within their respective
sensitive groups. The two notions, while both desirable, often come into
conflict in applications. Our goal in
this work is to analyze the existence of this conflict and its potential
solution. In particular, we establish sufficient (sharp) conditions for the
compatibility between the optimal (post-processing) statistical parity $L^2$
learning and the ($K$-Lipschitz or $(\epsilon,\delta)$) individual fairness
requirements. Furthermore, when there exists a conflict between the two, we
first relax the former to the Pareto frontier (or equivalently the optimal
trade-off) between $L^2$ error and statistical disparity, and then analyze the
compatibility between the frontier and the individual fairness requirements.
Our analysis identifies regions along the Pareto frontier that satisfy
individual fairness requirements. Lastly, we provide individual fairness
guarantees for the composition of a trained model and the optimal
post-processing step, so that one can determine the compatibility of the
post-processed model. This provides practitioners with a valuable approach to
attain Pareto optimality for statistical parity while adhering to the
constraints of individual fairness.
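To make the abstract's ingredients concrete: for one-dimensional scores, the optimal statistical parity $L^2$ post-processing is known to take a quantile-matching (Wasserstein-2 barycenter) form, and the Pareto relaxation amounts to interpolating between the trained model and that barycenter; the $K$-Lipschitz requirement can then be estimated empirically. The NumPy sketch below illustrates both pieces under these assumptions; the function names, the quantile grid, and the pairwise Lipschitz estimate are illustrative choices, not the authors' exact construction.

```python
import numpy as np

def parity_postprocess(scores, groups, lam, n_grid=201):
    """Move each group's score distribution a fraction `lam` of the way toward
    the Wasserstein-2 barycenter of the group-conditional distributions.
    lam = 0 keeps the original model; lam = 1 enforces full statistical parity;
    intermediate values trace the L^2-error-versus-disparity trade-off."""
    qs = np.linspace(0.0, 1.0, n_grid)
    uniq = np.unique(groups)
    # Barycenter quantile function: proportion-weighted average of the
    # group-conditional quantile functions.
    bary_q = sum(np.mean(groups == g) * np.quantile(scores[groups == g], qs)
                 for g in uniq)
    out = np.asarray(scores, dtype=float).copy()
    for g in uniq:
        idx = np.where(groups == g)[0]
        # Empirical within-group quantile (mid-rank) of each score.
        ranks = (scores[idx].argsort().argsort() + 0.5) / len(idx)
        target = np.interp(ranks, qs, bary_q)  # barycenter image of each score
        out[idx] = (1 - lam) * scores[idx] + lam * target
    return out

def empirical_lipschitz(X, scores):
    """Largest |f(x_i) - f(x_j)| / ||x_i - x_j|| over all pairs: an empirical
    estimate of the constant K in the K-Lipschitz individual fairness check."""
    diffs = np.abs(scores[:, None] - scores[None, :])
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    mask = dists > 0
    return (diffs[mask] / dists[mask]).max()
```

Sweeping `lam` from 0 to 1 and recomputing `empirical_lipschitz` at each point identifies, in the spirit of the paper's analysis, which portion of the error-versus-disparity frontier still satisfies a given $K$-Lipschitz requirement.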
Related papers
- Fairness-aware organ exchange and kidney paired donation [10.277630436997365]
The kidney paired donation (KPD) program provides an innovative solution to overcome incompatibility challenges in kidney transplants.
To address unequal access to transplant opportunities, there are two widely used fairness criteria: group fairness and individual fairness.
Motivated by the calibration principle in machine learning, we introduce a new fairness criterion: the matching outcome should be conditionally independent of the protected feature.
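Operationally, the proposed criterion asks that, within any stratum of the legitimate matching covariates, match rates should not depend on the protected feature. A minimal pandas sketch of such a check (all column names are hypothetical placeholders):

```python
import pandas as pd

def conditional_match_gap(df: pd.DataFrame, outcome="matched",
                          protected="group", stratum="compatibility_level"):
    """Within each stratum of the conditioning variable, compare match rates
    across protected groups; a maximum gap near zero is consistent with the
    matching outcome being conditionally independent of the protected feature."""
    rates = df.groupby([stratum, protected])[outcome].mean().unstack(protected)
    return (rates.max(axis=1) - rates.min(axis=1)).max()
```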
arXiv Detail & Related papers (2025-03-09T04:01:08Z)
- Emulating Full Participation: An Effective and Fair Client Selection Strategy for Federated Learning [50.060154488277036]
In federated learning, client selection is a critical problem that significantly impacts both model performance and fairness.
We propose two guiding principles that resolve the inherent conflict between the two objectives and allow them to reinforce each other.
Our approach adaptively enhances the diversity of the selected clients' data distributions, thereby improving both model performance and fairness.
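One way to picture selection that emulates full participation is a greedy heuristic: repeatedly add the client whose label histogram brings the selected pool's aggregate distribution closest to the global one. The sketch below is a hedged illustration of that idea, not the paper's proposed strategy; `select_clients` and the L1 objective are assumptions.

```python
import numpy as np

def select_clients(label_hists: np.ndarray, k: int) -> list:
    """Greedy diversity-aware selection: label_hists has one row per client
    (that client's normalized label distribution); pick k clients whose
    average histogram best matches the global average in L1 distance."""
    global_hist = label_hists.mean(axis=0)
    chosen, pool_sum = [], np.zeros_like(global_hist)
    remaining = list(range(len(label_hists)))
    for _ in range(min(k, len(remaining))):
        best = min(remaining, key=lambda c: np.abs(
            (pool_sum + label_hists[c]) / (len(chosen) + 1) - global_hist).sum())
        chosen.append(best)
        pool_sum = pool_sum + label_hists[best]
        remaining.remove(best)
    return chosen
```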
arXiv Detail & Related papers (2024-05-22T12:27:24Z)
- Assessing Group Fairness with Social Welfare Optimization [0.9217021281095907]
This paper explores whether a broader conception of social justice, based on optimizing a social welfare function, can be useful for assessing various definitions of parity.
We show that it can justify demographic parity or equalized odds under certain conditions, but frequently requires a departure from these types of parity.
In addition, we find that predictive rate parity is of limited usefulness.
arXiv Detail & Related papers (2024-05-19T01:41:04Z)
- Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not mutually exclusive but complementary, spanning a spectrum of fairness notions.
arXiv Detail & Related papers (2023-06-08T09:23:22Z)
- Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness [13.894631477590362]
Group fairness is achieved by equalising prediction distributions between protected sub-populations, whereas individual fairness requires treating similar individuals alike.
Equalising distributions, however, may give two similar individuals from the same protected group markedly different classification odds.
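A toy example makes the failure mode concrete: under hypothetical group-dependent decision thresholds (a common form of group-fairness post-processing), two nearly identical individuals from the same group can receive opposite decisions.

```python
# Hypothetical thresholds produced by an equalised-odds-style post-processing
# step; the numbers are made up for illustration.
THRESHOLDS = {"A": 0.50, "B": 0.60}

def post_processed_decision(score: float, group: str) -> int:
    return int(score >= THRESHOLDS[group])

# Two nearly identical individuals from protected group "A" fall on opposite
# sides of their group's threshold: group-level parity can hold while
# individual fairness fails.
print(post_processed_decision(0.499, "A"))  # 0
print(post_processed_decision(0.501, "A"))  # 1
```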
arXiv Detail & Related papers (2023-04-19T16:02:00Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Proportional Fairness in Obnoxious Facility Location [70.64736616610202]
We propose a hierarchy of distance-based proportional fairness concepts for the problem.
We consider deterministic and randomized mechanisms, and compute tight bounds on the price of proportional fairness.
We prove existence results for two extensions to our model.
arXiv Detail & Related papers (2023-01-11T07:30:35Z)
- Fairness and robustness in anti-causal prediction [73.693135253335]
Robustness to distribution shift and fairness have independently emerged as two important desiderata required of machine learning models.
While these two desiderata seem related, the connection between them is often unclear in practice.
Taking an anti-causal perspective, we draw explicit connections between a common fairness criterion, separation, and a common notion of robustness.
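For reference, separation is the standard requirement that predictions be independent of the sensitive attribute given the true label, i.e., equal true- and false-positive rates across groups. A minimal empirical check:

```python
import numpy as np

def separation_gaps(y_true, y_pred, groups):
    """Separation requires equal TPR and FPR across groups; returns the
    maximum TPR gap and FPR gap (both zero under exact separation)."""
    tprs, fprs = [], []
    for g in np.unique(groups):
        m = groups == g
        tprs.append(y_pred[m & (y_true == 1)].mean())
        fprs.append(y_pred[m & (y_true == 0)].mean())
    return max(tprs) - min(tprs), max(fprs) - min(fprs)
```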
arXiv Detail & Related papers (2022-09-20T02:41:17Z)
- Accurate Fairness: Improving Individual Fairness without Trading Accuracy [4.0415037006237595]
We propose a new fairness criterion, accurate fairness, to align individual fairness with accuracy.
We prove that accurate fairness also implies typical group fairness criteria over a union of similar sub-populations.
To the best of our knowledge, this is the first time that a Siamese approach is adapted for bias mitigation.
arXiv Detail & Related papers (2022-05-18T03:24:16Z)
- Multi-Stage Decentralized Matching Markets: Uncertain Preferences and Strategic Behaviors [91.3755431537592]
This article develops a framework for learning optimal strategies in real-world matching markets.
We show that there exists a welfare-versus-fairness trade-off that is characterized by the uncertainty level of acceptance.
We prove that participants can be better off with multi-stage matching compared to single-stage matching.
arXiv Detail & Related papers (2021-02-13T19:25:52Z)
- Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
- Distributional Individual Fairness in Clustering [7.303841123034983]
We introduce a framework for assigning individuals, embedded in a metric space, to probability distributions over a bounded number of cluster centers.
We provide an algorithm for clustering with $p$-norm objective and individual fairness constraints with provable approximation guarantee.
arXiv Detail & Related papers (2020-06-22T20:02:09Z)
- FACT: A Diagnostic for Group Fairness Trade-offs [23.358566041117083]
Group fairness is a class of fairness notions that measure how differently groups of individuals are treated according to their protected attributes.
We propose a general diagnostic that enables systematic characterization of these trade-offs in group fairness.
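As a hedged illustration of the kind of quantities such a diagnostic compares (not the paper's FACT formulation itself), the per-group rates behind the most common group fairness notions can be read directly off per-group confusion counts:

```python
def group_rates(conf: dict) -> dict:
    """conf[g] = (tn, fp, fn, tp) for group g. Demographic parity, equalized
    odds, and predictive parity each equalize a different subset of these
    rates, which is the source of the trade-offs being diagnosed."""
    rates = {}
    for g, (tn, fp, fn, tp) in conf.items():
        n = tn + fp + fn + tp
        rates[g] = {
            "positive_rate": (tp + fp) / n,  # demographic parity
            "tpr": tp / (tp + fn),           # equal opportunity / equalized odds
            "fpr": fp / (fp + tn),           # equalized odds
            "ppv": tp / (tp + fp),           # predictive parity
        }
    return rates
```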
arXiv Detail & Related papers (2020-04-07T14:15:51Z)