Assessing Perceived Fairness from Machine Learning Developer's
Perspective
- URL: http://arxiv.org/abs/2304.03745v1
- Date: Fri, 7 Apr 2023 17:30:37 GMT
- Title: Assessing Perceived Fairness from Machine Learning Developer's
Perspective
- Authors: Anoop Mishra, Deepak Khazanchi
- Abstract summary: Unfairness is triggered by bias in the data, the curation process, erroneous assumptions, and implicit bias introduced within the algorithmic development process.
In particular, ML developers have not been the focus of research relating to perceived fairness.
This paper performs an exploratory pilot study to assess the attributes of this construct using a systematic focus group of developers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness in machine learning (ML) applications is an important practice for
developers in research and industry. In ML applications, unfairness is
triggered by bias in the data, the curation process, erroneous assumptions, and
implicit bias introduced within the algorithmic development process. As ML
applications come into broader use, developing fair ML applications is critical.
The literature suggests multiple views on how fairness in ML is described from
the perspective of users and of students as future developers. In particular, ML
developers have not been the focus of research relating to perceived fairness.
This paper reports on a pilot investigation of ML developers' perception of
fairness. In describing the perception of fairness, the paper performs an
exploratory pilot study to assess the attributes of this construct using a
systematic focus group of developers. In the focus group, we asked participants
to discuss three questions: 1) What are the characteristics of fairness in ML?
2) What factors influence developers' beliefs about the fairness of ML? and 3)
What practices and tools are utilized for fairness in ML development? The
findings of this exploratory work from the focus group show that, to assess
fairness, developers generally focus on the overall ML application design and
development, i.e., business-specific requirements, data collection,
pre-processing, in-processing, and post-processing. Thus, we conclude that the
procedural aspects of organizational justice theory can explain developers'
perception of fairness. The findings of this study can be further utilized to
assist development teams in integrating fairness into the ML application
development lifecycle. They will also motivate ML developers and organizations
to develop best practices for assessing the fairness of ML-based applications.
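As a concrete illustration of the lifecycle stages the participants point to, the following is a minimal Python sketch of where fairness checks could attach to pre-processing, in-processing, and post-processing. All function names and thresholds are hypothetical assumptions for illustration, not the paper's instrument or method.

    # Illustrative sketch only: hypothetical hooks showing where fairness checks
    # could sit in the stages named above (pre-, in-, and post-processing).
    from collections import Counter

    def preprocess_check(groups, min_share=0.2):
        """Pre-processing: flag protected groups under-represented in the data."""
        counts = Counter(groups)
        total = sum(counts.values())
        return {g: n / total for g, n in counts.items() if n / total < min_share}

    def inprocess_weights(groups):
        """In-processing: inverse-frequency sample weights to rebalance training."""
        counts = Counter(groups)
        total = sum(counts.values())
        return [total / (len(counts) * counts[g]) for g in groups]

    def postprocess_rates(scores, groups, threshold=0.5):
        """Post-processing: per-group positive-prediction rates for a parity check."""
        rates = {}
        for g in set(groups):
            preds = [s >= threshold for s, gg in zip(scores, groups) if gg == g]
            rates[g] = sum(preds) / len(preds)
        return rates

    groups = ["a", "a", "a", "b"]
    scores = [0.9, 0.6, 0.4, 0.3]
    print(preprocess_check(groups))           # {} -> no group below a 20% share
    print(inprocess_weights(groups))          # group "b" samples weighted higher
    print(postprocess_rates(scores, groups))  # "a" about 0.67, "b" 0.0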
Related papers
- The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z)
- Lazy Data Practices Harm Fairness Research [49.02318458244464]
We present a comprehensive analysis of fair ML datasets, demonstrating how unreflective practices hinder the reach and reliability of algorithmic fairness findings.
Our analyses identify three main areas of concern: (1) a lack of representation for certain protected attributes in both data and evaluations; (2) the widespread exclusion of minorities during data preprocessing; and (3) opaque data processing threatening the generalization of fairness research.
This study underscores the need for a critical reevaluation of data practices in fair ML and offers directions to improve both the sourcing and usage of datasets.
arXiv Detail & Related papers (2024-04-26T09:51:24Z)
- Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification [7.696798306913988]
We introduce a framework outlining fairness regulations aligned with various fairness definitions.
We explore the configuration for in-context learning and the procedure for selecting in-context demonstrations using RAG.
Experiments conducted with different LLMs indicate that GPT-4 delivers superior results in terms of both accuracy and fairness compared to other models.
arXiv Detail & Related papers (2024-02-28T17:29:27Z)
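The Few-Shot Fairness entry above mentions selecting in-context demonstrations using RAG. The sketch below shows one common retrieval approach (cosine similarity over embeddings); every name here is a hypothetical stand-in, not the paper's actual procedure.

    # Hypothetical sketch of RAG-style demonstration selection: retrieve the k
    # labeled examples most similar to the query and prepend them to the prompt.
    # An illustration under assumed names, not the paper's pipeline.
    import numpy as np

    def select_demonstrations(query_vec, pool_vecs, pool_texts, k=4):
        """Return the k pool examples whose embeddings are closest to the query."""
        pool = np.asarray(pool_vecs, dtype=float)
        q = np.asarray(query_vec, dtype=float)
        sims = pool @ q / (np.linalg.norm(pool, axis=1) * np.linalg.norm(q) + 1e-9)
        return [pool_texts[i] for i in np.argsort(-sims)[:k]]

    def build_prompt(demonstrations, query_text):
        """Concatenate retrieved demonstrations ahead of the query."""
        return "\n\n".join(demonstrations + [query_text])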
- Identifying Concerns When Specifying Machine Learning-Enabled Systems: A Perspective-Based Approach [1.2184324428571227]
PerSpecML is a perspective-based approach for specifying ML-enabled systems.
It helps practitioners identify which attributes, including those of ML and non-ML components, are important for the overall system's quality.
arXiv Detail & Related papers (2023-09-14T18:31:16Z)
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- A First Look at Fairness of Machine Learning Based Code Reviewer Recommendation [14.50773969815661]
This paper conducts the first study toward investigating the issue of fairness of ML applications in the software engineering (SE) domain.
Our empirical study demonstrates that current state-of-the-art ML-based code reviewer recommendation techniques exhibit unfairness and discriminatory behaviors.
This paper also discusses the reasons why the studied ML-based code reviewer recommendation systems are unfair and provides solutions to mitigate the unfairness.
arXiv Detail & Related papers (2023-07-21T01:57:51Z)
- FairLay-ML: Intuitive Remedies for Unfairness in Data-Driven Social-Critical Algorithms [13.649336187121095]
This thesis explores whether open-sourced machine learning (ML) model explanation tools can allow a layman to visualize, understand, and suggest intuitive remedies to unfairness in ML-based decision-support systems.
This thesis presents FairLay-ML, a proof-of-concept GUI integrating some of the most promising tools to provide intuitive explanations for unfair logic in ML models.
arXiv Detail & Related papers (2023-07-11T06:05:06Z)
- How Can Recommender Systems Benefit from Large Language Models: A Survey [82.06729592294322]
Large language models (LLMs) have shown impressive general intelligence and human-like capabilities.
We conduct a comprehensive survey on this research direction from the perspective of the whole pipeline in real-world recommender systems.
arXiv Detail & Related papers (2023-06-09T11:31:50Z)
- LiFT: A Scalable Framework for Measuring Fairness in ML Applications [18.54302159142362]
We present the LinkedIn Fairness Toolkit (LiFT), a framework for scalable computation of fairness metrics as part of large ML systems.
We discuss the challenges encountered in incorporating fairness tools in practice and the lessons learned during deployment at LinkedIn.
arXiv Detail & Related papers (2020-08-14T03:55:31Z)
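LiFT itself is a Scala/Spark library, and the sketch below is not its API; it is only a plain-Python illustration of one group fairness metric of the kind such frameworks compute at scale (the equal opportunity difference), with all names assumed.

    # Illustration of a group fairness metric like those LiFT computes at scale.
    # NOT LiFT's API (LiFT is a Scala/Spark library); names are assumptions.
    def equal_opportunity_difference(preds, labels, groups):
        """Max gap in true-positive rate between protected groups."""
        tpr = {}
        for g in set(groups):
            pos = [p for p, y, gg in zip(preds, labels, groups) if gg == g and y == 1]
            tpr[g] = sum(pos) / len(pos)  # assumes every group has positives
        return max(tpr.values()) - min(tpr.values())

    # Group "a" has TPR 0.5, group "b" has TPR 1.0 -> difference 0.5.
    print(equal_opportunity_difference([1, 0, 1, 0], [1, 1, 1, 0], ["a", "a", "b", "b"]))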
- Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
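To make the individual-fairness idea above concrete: similar individuals, as measured by a learned metric, should receive similar predictions. Below is a minimal sketch of checking that Lipschitz-style condition with a Mahalanobis-form metric; the matrix M is a placeholder for one learned by methods such as the paper's, and every name is an illustrative assumption.

    # Sketch of an individual-fairness (Lipschitz) check: for a learned metric
    # d(x, x') = sqrt((x - x')^T M (x - x')), flag pairs whose model outputs
    # differ by more than L * d(x, x'). M here is a placeholder, not learned.
    import numpy as np

    def mahalanobis(x1, x2, M):
        d = np.asarray(x1, float) - np.asarray(x2, float)
        return float(np.sqrt(d @ M @ d))

    def lipschitz_violations(X, outputs, M, L=1.0):
        """Return index pairs (i, j) where |f(x_i) - f(x_j)| > L * d(x_i, x_j)."""
        bad = []
        for i in range(len(X)):
            for j in range(i + 1, len(X)):
                if abs(outputs[i] - outputs[j]) > L * mahalanobis(X[i], X[j], M):
                    bad.append((i, j))
        return bad

    # A metric that ignores the second (protected) feature: the two points below
    # are "identical" under d, so their outputs must match; here they do not.
    M = np.diag([1.0, 0.0])
    X = [[0.0, 0.0], [0.0, 1.0]]
    print(lipschitz_violations(X, [0.2, 0.9], M))  # [(0, 1)]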