Obtaining Dyadic Fairness by Optimal Transport
- URL: http://arxiv.org/abs/2202.04520v1
- Date: Wed, 9 Feb 2022 15:33:59 GMT
- Title: Obtaining Dyadic Fairness by Optimal Transport
- Authors: Moyi Yang, Junjie Sheng, Xiangfeng Wang, Wenyan Liu, Bo Jin, Jun Wang,
Hongyuan Zha
- Abstract summary: This paper considers obtaining fairness for link prediction tasks, which can be measured by dyadic fairness.
We propose a pre-processing methodology to obtain dyadic fairness through data repairing and optimal transport.
An optimal transport-based dyadic fairness algorithm is proposed for graph link prediction.
- Score: 40.88844078769325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness has become a critical metric for machine learning models, and
many works on obtaining fairness for different tasks have emerged. This paper
considers obtaining fairness for link prediction tasks, which can be measured
by dyadic fairness. We propose a pre-processing methodology that obtains
dyadic fairness through data repairing and optimal transport. To obtain dyadic
fairness while satisfying flexibility and unambiguity requirements, we transform
the dyadic repairing problem into a conditional distribution alignment problem
based on optimal transport and derive theoretical results on the connection
between the proposed alignment and dyadic fairness. An optimal transport-based
dyadic fairness algorithm is then proposed for graph link prediction. Our
algorithm shows superior results in obtaining fairness compared with other
pre-processing methods on two benchmark graph datasets.
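To make the repair step concrete, here is a minimal sketch of optimal-transport-based conditional distribution alignment. It is not the authors' implementation: it assumes the POT library (pip install pot), a binary sensitive attribute, and uniform sample weights, and the names repair_features, dyadic_fairness_gap, and the strength parameter lam are illustrative. Each group's node features are pushed toward the OT barycentric midpoint so the two conditional feature distributions align before a link predictor is trained.

```python
# A hedged sketch of OT-based feature repair for dyadic fairness,
# not the paper's released code. Assumes the POT library and a
# binary sensitive attribute s in {0, 1}.
import numpy as np
import ot  # Python Optimal Transport (POT)


def repair_features(X, s, lam=1.0):
    """Push each group's features toward the OT barycentric midpoint.

    X   : (n, d) node feature matrix
    s   : (n,) binary sensitive attribute in {0, 1}
    lam : repair strength in [0, 1]; 1 fully aligns the two
          conditional distributions, 0 leaves X unchanged.
    """
    X0, X1 = X[s == 0], X[s == 1]
    a = np.full(len(X0), 1.0 / len(X0))  # uniform weights, group 0
    b = np.full(len(X1), 1.0 / len(X1))  # uniform weights, group 1
    M = ot.dist(X0, X1)                  # squared Euclidean cost matrix
    G = ot.emd(a, b, M)                  # optimal transport plan

    # Barycentric projection: where each point maps in the other group.
    X0_mapped = (G / a[:, None]) @ X1
    X1_mapped = (G.T / b[:, None]) @ X0

    # Interpolate each group halfway toward its image, scaled by lam.
    X_rep = X.astype(float).copy()
    X_rep[s == 0] = (1 - lam / 2) * X0 + (lam / 2) * X0_mapped
    X_rep[s == 1] = (1 - lam / 2) * X1 + (lam / 2) * X1_mapped
    return X_rep


def dyadic_fairness_gap(scores, s, pairs):
    """|E[score | intra-group pair] - E[score | inter-group pair]|,
    a demographic-parity-style operationalization of dyadic fairness."""
    intra = [scores[k] for k, (u, v) in enumerate(pairs) if s[u] == s[v]]
    inter = [scores[k] for k, (u, v) in enumerate(pairs) if s[u] != s[v]]
    return abs(np.mean(intra) - np.mean(inter))
```

With lam=1 both conditional feature distributions collapse to their Wasserstein midpoint (total repair); smaller lam trades fairness for feature fidelity, which is the flexibility requirement the abstract mentions.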
Related papers
- Fair Bilevel Neural Network (FairBiNN): On Balancing fairness and accuracy via Stackelberg Equilibrium [0.3350491650545292]
Current methods for mitigating bias often result in information loss and an inadequate balance between accuracy and fairness.
We propose a novel methodology grounded in bilevel optimization principles.
Our deep learning-based approach concurrently optimizes for both accuracy and fairness objectives.
arXiv Detail & Related papers (2024-10-21T18:53:39Z)
- Achievable Fairness on Your Data With Utility Guarantees [16.78730663293352]
In machine learning fairness, training models that minimize disparity across different sensitive groups often leads to diminished accuracy.
We present a computationally efficient approach to approximate the fairness-accuracy trade-off curve tailored to individual datasets.
We introduce a novel methodology for quantifying uncertainty in our estimates, thereby providing practitioners with a robust framework for auditing model fairness.
arXiv Detail & Related papers (2024-02-27T00:59:32Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework [12.734559823650887]
In the presence of distribution shifts, fair machine learning models may behave unfairly on test data.
Existing algorithms require full access to the data and cannot be used when only small batches are available.
This paper proposes the first distributionally robust fairness framework with convergence guarantees that do not require knowledge of the causal graph.
arXiv Detail & Related papers (2023-09-20T23:25:28Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Conformalized Fairness via Quantile Regression [8.180169144038345]
We propose a novel framework to learn a real-valued quantile function under the fairness requirement of Demographic Parity.
We establish theoretical guarantees of distribution-free coverage and exact fairness for the induced prediction interval constructed by fair quantiles.
Our results show the model's ability to uncover the mechanism underlying the fairness-accuracy trade-off in a wide range of societal and medical applications.
arXiv Detail & Related papers (2022-10-05T04:04:15Z)
- Fair Densities via Boosting the Sufficient Statistics of Exponential Families [72.34223801798422]
We introduce a boosting algorithm to pre-process data for fairness.
Our approach shifts towards better data fitting while still ensuring a minimal fairness guarantee.
Empirical results are presented to demonstrate the quality of the results on real-world data.
arXiv Detail & Related papers (2020-12-01T00:49:17Z)