DeBayes: a Bayesian Method for Debiasing Network Embeddings
- URL: http://arxiv.org/abs/2002.11442v3
- Date: Fri, 30 Apr 2021 10:18:43 GMT
- Title: DeBayes: a Bayesian Method for Debiasing Network Embeddings
- Authors: Maarten Buyl, Tijl De Bie
- Abstract summary: We propose DeBayes: a conceptually elegant Bayesian method that is capable of learning debiased embeddings by using a biased prior.
Our experiments show that these representations can then be used to perform link prediction that is significantly more fair in terms of popular metrics.
- Score: 16.588468396705366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning algorithms are increasingly deployed for high-impact
automated decision making, ethical and increasingly also legal standards demand
that they treat all individuals fairly, without discrimination based on their
age, gender, race or other sensitive traits. In recent years much progress has
been made on ensuring fairness and reducing bias in standard machine learning
settings. Yet, for network embedding, with applications in vulnerable domains
ranging from social network analysis to recommender systems, current options
remain limited both in number and performance. We thus propose DeBayes: a
conceptually elegant Bayesian method that is capable of learning debiased
embeddings by using a biased prior. Our experiments show that these
representations can then be used to perform link prediction that is
significantly more fair in terms of popular metrics such as demographic parity
and equalized opportunity.
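The two fairness metrics named in the abstract, demographic parity and equal opportunity, can be made concrete with a short sketch. This is not the paper's evaluation code; it is a minimal illustration, assuming binary predictions and a single binary sensitive attribute:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rate between the two groups.
    A gap of 0 means demographic parity holds exactly."""
    g = np.asarray(group, dtype=bool)
    p = np.asarray(y_pred, dtype=float)
    return abs(p[g].mean() - p[~g].mean())

def equal_opportunity_gap(y_pred, y_true, group):
    """Absolute difference in true-positive rate between the two groups,
    i.e. equal opportunity restricted to truly positive instances."""
    g = np.asarray(group, dtype=bool)
    p = np.asarray(y_pred, dtype=bool)
    t = np.asarray(y_true, dtype=bool)
    tpr = lambda mask: p[mask & t].mean()
    return abs(tpr(g) - tpr(~g))
```

In a link-prediction setting, `y_pred` would hold the predicted links, `y_true` the observed ones, and `group` the sensitive attribute of the node (or node pair) each candidate edge concerns.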
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines in debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Parametric Fairness with Statistical Guarantees [0.46040036610482665]
We extend the concept of Demographic Parity to incorporate distributional properties in predictions, allowing expert knowledge to be used in the fair solution.
We illustrate the use of this new metric through a practical example of wages, and develop a parametric method that efficiently addresses practical challenges.
arXiv Detail & Related papers (2023-10-31T14:52:39Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Optimising Equal Opportunity Fairness in Model Training [60.0947291284978]
Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias.
We propose two novel training objectives which directly optimise for the widely-used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance over two classification tasks.
arXiv Detail & Related papers (2022-05-05T01:57:58Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Fair Representation Learning for Heterogeneous Information Networks [35.80367469624887]
We propose a comprehensive set of de-biasing methods for fair HIN representation learning.
We study the behavior of these algorithms, especially their capability in balancing the trade-off between fairness and prediction accuracy.
We evaluate the performance of the proposed methods in an automated career counseling application.
arXiv Detail & Related papers (2021-04-18T08:28:18Z)
- All of the Fairness for Edge Prediction with Optimal Transport [11.51786288978429]
We study the problem of fairness for the task of edge prediction in graphs.
We propose an embedding-agnostic repairing procedure for the adjacency matrix of an arbitrary graph with a trade-off between the group and individual fairness.
arXiv Detail & Related papers (2020-10-30T15:33:13Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
- A survey of bias in Machine Learning through the prism of Statistical Parity for the Adult Data Set [5.277804553312449]
We show the importance of understanding how a bias can be introduced into automatic decisions.
We first present a mathematical framework for the fair learning problem, specifically in the binary classification setting.
We then propose to quantify the presence of bias by using the standard Disparate Impact index on the real and well-known Adult income data set.
arXiv Detail & Related papers (2020-03-31T14:48:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.