LiFT: A Scalable Framework for Measuring Fairness in ML Applications
- URL: http://arxiv.org/abs/2008.07433v1
- Date: Fri, 14 Aug 2020 03:55:31 GMT
- Title: LiFT: A Scalable Framework for Measuring Fairness in ML Applications
- Authors: Sriram Vasudevan, Krishnaram Kenthapadi
- Abstract summary: We present the LinkedIn Fairness Toolkit (LiFT), a framework for scalable computation of fairness metrics as part of large ML systems.
We discuss the challenges encountered in incorporating fairness tools in practice and the lessons learned during deployment at LinkedIn.
- Score: 18.54302159142362
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many internet applications are powered by machine learned models, which are
usually trained on labeled datasets obtained through either implicit / explicit
user feedback signals or human judgments. Since societal biases may be present
in the generation of such datasets, it is possible for the trained models to be
biased, thereby resulting in potential discrimination and harms for
disadvantaged groups. Motivated by the need for understanding and addressing
algorithmic bias in web-scale ML systems and the limitations of existing
fairness toolkits, we present the LinkedIn Fairness Toolkit (LiFT), a framework
for scalable computation of fairness metrics as part of large ML systems. We
highlight the key requirements in deployed settings, and present the design of
our fairness measurement system. We discuss the challenges encountered in
incorporating fairness tools in practice and the lessons learned during
deployment at LinkedIn. Finally, we provide open problems based on practical
experience.
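As a rough illustration of the kind of computation the abstract describes (scalable fairness metrics inside a large ML pipeline), the sketch below computes a demographic parity difference with PySpark over a scored dataset. This is a minimal sketch only, not LiFT's actual API; the column names, toy data, and choice of metric are assumptions made for illustration.

```python
# Illustrative sketch only: a group-fairness metric (demographic parity
# difference) computed as a single distributed aggregation with PySpark.
# NOT LiFT's actual API; column names and toy data are hypothetical.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("fairness-metrics-sketch").getOrCreate()

# Hypothetical scored dataset: one row per member with a binary model
# decision and a protected attribute.
scored = spark.createDataFrame(
    [("a", 1, "group_1"), ("b", 0, "group_1"),
     ("c", 1, "group_2"), ("d", 1, "group_2")],
    ["member_id", "prediction", "gender"],
)

# Positive-prediction rate per protected group, computed in one
# distributed aggregation so it scales to web-sized datasets.
rates = (
    scored.groupBy("gender")
          .agg(F.avg(F.col("prediction").cast("double")).alias("positive_rate"))
)

# Demographic parity difference: largest gap in positive rates across groups.
bounds = rates.agg(
    F.max("positive_rate").alias("max_rate"),
    F.min("positive_rate").alias("min_rate"),
).first()
print("demographic parity difference:", bounds["max_rate"] - bounds["min_rate"])
```

The same aggregation pattern extends to label- and score-based metrics (e.g., equal opportunity), since each reduces to per-group summary statistics over a distributed DataFrame.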
Related papers
- Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly [62.473245910234304]
This paper takes a hardware-centric approach to explore how Large Language Models can be brought to modern edge computing systems.
We provide a micro-level hardware benchmark, compare the model FLOP utilization to a state-of-the-art data center GPU, and study the network utilization in realistic conditions.
arXiv Detail & Related papers (2023-10-04T20:27:20Z) - Simultaneous Machine Translation with Large Language Models [51.470478122113356]
We investigate the possibility of applying Large Language Models to SimulMT tasks.
We conducted experiments using the Llama2-7b-chat model on nine different languages from the MuST-C dataset.
The results show that the LLM outperforms dedicated MT models in terms of the BLEU and LAAL metrics.
arXiv Detail & Related papers (2023-09-13T04:06:47Z) - Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z) - Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z) - FairLay-ML: Intuitive Remedies for Unfairness in Data-Driven
Social-Critical Algorithms [13.649336187121095]
This thesis explores whether open-sourced machine learning (ML) model explanation tools can allow a layman to visualize, understand, and suggest intuitive remedies to unfairness in ML-based decision-support systems.
This thesis presents FairLay-ML, a proof-of-concept GUI integrating some of the most promising tools to provide intuitive explanations for unfair logic in ML models.
arXiv Detail & Related papers (2023-07-11T06:05:06Z) - Uncertainty-aware predictive modeling for fair data-driven decisions [5.371337604556311]
We show how fair ML systems can also be safe ML systems.
For fair decisions, we argue that a safe fail option should be used for individuals with uncertain categorization.
arXiv Detail & Related papers (2022-11-04T20:04:39Z) - Fairness Reprogramming [42.65700878967251]
We propose a new generic fairness learning paradigm, called FairReprogram, which incorporates the model reprogramming technique.
Specifically, FairReprogram considers the case where models cannot be changed and appends to the input a set of perturbations, called the fairness trigger.
We show both theoretically and empirically that the fairness trigger can effectively obscure demographic biases in the output prediction of fixed ML models.
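A minimal sketch of this mechanism, assuming a toy frozen linear classifier and a simple demographic-parity penalty (the released method uses its own loss and trigger construction, so everything below is illustrative only):

```python
# Hedged sketch of the "fairness trigger" idea: the base model stays frozen
# and only an appended input perturbation is trained. Not the authors' code;
# the model, data, and penalty are stand-ins chosen for illustration.
import torch
import torch.nn as nn

feature_dim, trigger_dim = 16, 4
base_model = nn.Linear(feature_dim + trigger_dim, 1)   # stand-in for a fixed classifier
for p in base_model.parameters():
    p.requires_grad_(False)                            # the model cannot be changed

trigger = nn.Parameter(torch.zeros(trigger_dim))       # the fairness trigger
optimizer = torch.optim.Adam([trigger], lr=1e-2)

def fairness_penalty(scores, group):
    # Gap in mean predicted score between two demographic groups
    # (a simple demographic-parity surrogate; assumes both groups are present).
    return (scores[group == 0].mean() - scores[group == 1].mean()).abs()

x = torch.randn(256, feature_dim)                      # toy feature batch
group = torch.randint(0, 2, (256,))                    # toy protected attribute

for _ in range(100):
    inputs = torch.cat([x, trigger.expand(x.size(0), -1)], dim=1)
    scores = torch.sigmoid(base_model(inputs)).squeeze(1)
    loss = fairness_penalty(scores, group)             # gradients flow only to the trigger
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```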
arXiv Detail & Related papers (2022-09-21T09:37:00Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling
Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Fair Classification via Transformer Neural Networks: Case Study of an
Educational Domain [0.0913755431537592]
This paper presents a preliminary investigation of fairness constraints in transformer neural networks on Law School student datasets.
We employ fairness metrics for evaluation and examine the trade-off between fairness and accuracy.
arXiv Detail & Related papers (2022-06-03T06:34:16Z) - Visual Identification of Problematic Bias in Large Label Spaces [5.841861400363261]
A key challenge in scaling common fairness metrics to modern models and datasets is the requirement of exhaustive ground-truth labeling.
Domain experts need to be able to extract and reason about bias throughout models and datasets to make informed decisions.
We propose guidelines for designing visualizations for such large label spaces, considering both technical and ethical issues.
arXiv Detail & Related papers (2022-01-17T12:51:08Z) - Leveraging Semi-Supervised Learning for Fairness using Neural Networks [49.604038072384995]
There has been a growing concern about the fairness of decision-making systems based on machine learning.
In this paper, we propose a semi-supervised algorithm using neural networks benefiting from unlabeled data.
The proposed model, called SSFair, exploits the information in the unlabeled data to mitigate the bias in the training data.
arXiv Detail & Related papers (2019-12-31T09:11:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.