Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP
- URL: http://arxiv.org/abs/2302.05711v1
- Date: Sat, 11 Feb 2023 14:54:00 GMT
- Title: Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP
- Authors: Xudong Han, Timothy Baldwin, Trevor Cohn
- Abstract summary: Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
- Score: 64.45845091719002
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern NLP systems exhibit a range of biases, which a growing literature on
model debiasing attempts to correct. However, current progress is hampered by a
plurality of definitions of bias, means of quantification, and an often vague
relationship between debiasing algorithms and theoretical measures of bias. This
paper seeks to clarify the current situation and plot a course for meaningful
progress in fair learning, with two key contributions: (1) making clear
inter-relations among the current gamut of methods, and their relation to
fairness theory; and (2) addressing the practical problem of model selection,
which involves a trade-off between fairness and accuracy and has led to
systemic issues in fairness research. Putting them together, we make several
recommendations to help shape future work.
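To make the model-selection problem concrete, the sketch below scores a set of hypothetical validation checkpoints on both accuracy and a fairness gap, and picks the one closest to the ideal point of perfect accuracy and zero gap. The candidate numbers and the distance-to-optimum criterion are illustrative assumptions, not the selection procedure prescribed by the paper.

```python
import math

# Hypothetical validation results for several candidate checkpoints:
# each entry holds overall accuracy and a fairness gap (e.g. the largest
# difference in true positive rate between protected groups; lower is better).
candidates = {
    "checkpoint_a": {"accuracy": 0.86, "fairness_gap": 0.12},
    "checkpoint_b": {"accuracy": 0.84, "fairness_gap": 0.05},
    "checkpoint_c": {"accuracy": 0.79, "fairness_gap": 0.02},
}

def distance_to_optimum(accuracy, fairness_gap):
    """Euclidean distance to the ideal point (accuracy=1, gap=0).

    One simple way to collapse the accuracy-fairness trade-off into a
    single number for model selection; other criteria (e.g. constrained
    selection) are equally valid.
    """
    return math.sqrt((1.0 - accuracy) ** 2 + fairness_gap ** 2)

best = min(candidates, key=lambda name: distance_to_optimum(**candidates[name]))
print(f"Selected model: {best}")  # checkpoint_b under these numbers
```

In practice such a rule would be applied to held-out data, and how heavily fairness is weighted against accuracy is itself a policy choice rather than a technical one.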
Related papers
- On the Interplay between Fairness and Explainability [28.37896468795247]
We perform a first study to understand how fairness and explainability influence each other.
We fine-tune pre-trained language models with several methods for bias mitigation.
We find that bias mitigation algorithms do not always lead to fairer models.
arXiv Detail & Related papers (2023-10-25T12:59:51Z)
- Fairness Explainability using Optimal Transport with Applications in Image Classification [0.46040036610482665]
We propose a comprehensive approach to uncover the causes of discrimination in Machine Learning applications.
We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions.
This allows us to derive a cohesive system which uses the enforced fairness to measure each feature's influence on the bias.
arXiv Detail & Related papers (2023-08-22T00:10:23Z)
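As a rough illustration of the optimal-transport machinery in the entry above (not the authors' implementation): for one-dimensional scores, a Wasserstein-2 barycenter of per-group score distributions can be approximated by averaging the groups' quantile functions, and scores can then be mapped onto it to remove group-dependent shifts. The group sizes and score distributions below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up model scores for two demographic groups with different distributions.
scores_g0 = rng.normal(loc=0.45, scale=0.10, size=500)
scores_g1 = rng.normal(loc=0.60, scale=0.15, size=500)

# 1-D Wasserstein-2 barycenter (equal weights): average the quantile functions.
qs = np.linspace(0.0, 1.0, 101)
barycenter_quantiles = 0.5 * (np.quantile(scores_g0, qs) + np.quantile(scores_g1, qs))

def repair(scores, group_scores):
    """Map each score to the barycenter via its within-group quantile."""
    ranks = np.searchsorted(np.sort(group_scores), scores) / len(group_scores)
    return np.interp(ranks, qs, barycenter_quantiles)

fair_g0 = repair(scores_g0, scores_g0)
fair_g1 = repair(scores_g1, scores_g1)
print(f"mean gap before: {abs(scores_g0.mean() - scores_g1.mean()):.3f}")
print(f"mean gap after:  {abs(fair_g0.mean() - fair_g1.mean()):.3f}")
```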
- Fairness and Explainability: Bridging the Gap Towards Fair Model Explanations [12.248793742165278]
We bridge the gap between fairness and explainability by presenting a novel perspective of procedure-oriented fairness based on explanations.
We propose a Comprehensive Fairness Algorithm (CFA), which simultaneously fulfills multiple objectives - improving traditional fairness, satisfying explanation fairness, and maintaining the utility performance.
arXiv Detail & Related papers (2022-12-07T18:35:54Z)
- Survey on Fairness Notions and Related Tensions [4.257210316104905]
Automated decision systems are increasingly used to take consequential decisions in problems such as job hiring and loan granting.
However, even nominally objective machine learning (ML) algorithms are prone to bias, which can still result in unfair decisions.
This paper surveys the commonly used fairness notions and discusses the tensions among them and with privacy and accuracy.
arXiv Detail & Related papers (2022-09-16T13:36:05Z)
- Optimising Equal Opportunity Fairness in Model Training [60.0947291284978]
Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias.
We propose two novel training objectives which directly optimise for the widely used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance over two classification tasks.
arXiv Detail & Related papers (2022-05-05T01:57:58Z)
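To make the equal opportunity criterion in the entry above concrete, the sketch below computes the gap in true positive rate between two protected groups on a toy validation set. The data and the protected attribute are invented for illustration; the entry's actual training objectives are not reproduced.

```python
import numpy as np

# Toy validation set: binary gold labels, model predictions, and a binary
# protected attribute (e.g. author gender) for each example.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def true_positive_rate(y_true, y_pred, mask):
    positives = (y_true == 1) & mask
    return (y_pred[positives] == 1).mean()

# Equal opportunity asks that the TPR be (approximately) equal across groups;
# the absolute difference is a common way to quantify the violation.
tpr_g0 = true_positive_rate(y_true, y_pred, group == 0)
tpr_g1 = true_positive_rate(y_true, y_pred, group == 1)
print(f"equal opportunity gap: {abs(tpr_g0 - tpr_g1):.2f}")  # 0.33 on this toy data
```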
- The SAME score: Improved cosine based bias score for word embeddings [49.75878234192369]
We introduce SAME, a novel bias score for semantic bias in embeddings.
We show that SAME is capable of measuring semantic bias and of identifying potential causes of social bias in downstream tasks.
arXiv Detail & Related papers (2022-03-28T09:28:13Z)
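The SAME formula itself is not reproduced here; the snippet below only illustrates the broader family of cosine-based association scores that such work builds on, measuring how strongly a target vector leans towards one of two attribute word sets. All vectors stand in for real word embeddings and are randomly generated.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_bias(word_vec, attr_a, attr_b):
    """Mean cosine similarity to attribute set A minus attribute set B.

    A WEAT-style association score; positive values indicate the word leans
    towards A, negative towards B. This is *not* the SAME score, just a
    representative cosine-based baseline.
    """
    sim_a = np.mean([cosine(word_vec, a) for a in attr_a])
    sim_b = np.mean([cosine(word_vec, b) for b in attr_b])
    return sim_a - sim_b

rng = np.random.default_rng(42)
dim = 50
attr_a = [rng.normal(size=dim) for _ in range(3)]  # stand-ins for e.g. "he", "man", ...
attr_b = [rng.normal(size=dim) for _ in range(3)]  # stand-ins for e.g. "she", "woman", ...
target = rng.normal(size=dim)                      # stand-in for e.g. "engineer"

print(f"association bias: {association_bias(target, attr_a, attr_b):+.3f}")
```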
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against the algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
- Measure Twice, Cut Once: Quantifying Bias and Fairness in Deep Neural Networks [7.763173131630868]
We propose two metrics to quantitatively evaluate the class-wise bias of two models in comparison to one another.
By evaluating the performance of these new metrics and by demonstrating their practical application, we show that they can be used to measure fairness as well as bias.
arXiv Detail & Related papers (2021-10-08T22:35:34Z)
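As one simple way to compare two models class by class in the spirit of the entry above (the paper's actual metrics are not reproduced), the sketch below contrasts the per-class recall of two hypothetical models on toy data.

```python
import numpy as np

# Toy 3-class gold labels and predictions from two hypothetical models.
y_true  = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
model_a = np.array([0, 0, 1, 1, 1, 1, 2, 0, 2])
model_b = np.array([0, 1, 1, 1, 1, 1, 2, 2, 2])

def per_class_recall(y_true, y_pred, num_classes):
    return np.array([(y_pred[y_true == c] == c).mean() for c in range(num_classes)])

recall_a = per_class_recall(y_true, model_a, 3)
recall_b = per_class_recall(y_true, model_b, 3)

# Per-class recall differences: large absolute values flag classes where
# one model is systematically better than the other.
for c, diff in enumerate(recall_a - recall_b):
    print(f"class {c}: recall A-B = {diff:+.2f}")
```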
- Fairness-aware Class Imbalanced Learning [57.45784950421179]
We evaluate long-tail learning methods for tweet sentiment and occupation classification.
We extend a margin-loss based approach with methods to enforce fairness.
arXiv Detail & Related papers (2021-09-21T22:16:30Z)
- Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting.
arXiv Detail & Related papers (2021-09-16T23:40:28Z)
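For the "Balancing out Bias" entry above, instance reweighting can be illustrated with a small sketch: over-represented combinations of label and author demographic are down-weighted so that every (label, group) cell contributes equal total weight. The data and the specific weighting formula are assumptions for illustration, not the authors' exact scheme.

```python
from collections import Counter

# Toy training instances: (label, protected_attribute) pairs, e.g. sentiment
# label and author gender. Counts are deliberately skewed.
instances = [("pos", "f")] * 40 + [("pos", "m")] * 10 + \
            [("neg", "f")] * 10 + [("neg", "m")] * 40

counts = Counter(instances)
n_cells = len(counts)
n_total = len(instances)

# Weight each instance inversely to the frequency of its (label, group) cell,
# so every cell contributes the same total weight to the training loss.
weights = [n_total / (n_cells * counts[x]) for x in instances]

for cell in sorted(counts):
    idx = [i for i, x in enumerate(instances) if x == cell]
    total = sum(weights[i] for i in idx)
    print(f"{cell}: count={counts[cell]:2d}, total weight={total:.1f}")
```

Each of the four cells ends up with the same total weight, so the loss no longer rewards the model for exploiting the skewed correlation between label and demographic.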