On the Robustness of Fairness Practices: A Causal Framework for Systematic Evaluation
- URL: http://arxiv.org/abs/2601.03621v1
- Date: Wed, 07 Jan 2026 06:02:53 GMT
- Authors: Verya Monjezi, Ashish Kumar, Ashutosh Trivedi, Gang Tan, Saeid Tizpaz-Niari
- Abstract summary: Machine learning algorithms are increasingly deployed to make critical decisions in socioeconomic applications. Due to their data-driven and pattern-seeking nature, ML algorithms may develop decision logic that disproportionately distributes opportunities, benefits, resources, or information among different population groups. The software engineering and ML communities have made significant efforts to establish best practices for creating fair ML software.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) algorithms are increasingly deployed to make critical decisions in socioeconomic applications such as finance, criminal justice, and autonomous driving. However, due to their data-driven and pattern-seeking nature, ML algorithms may develop decision logic that disproportionately distributes opportunities, benefits, resources, or information among different population groups, potentially harming marginalized communities. In response to such fairness concerns, the software engineering and ML communities have made significant efforts to establish the best practices for creating fair ML software. These include fairness interventions for training ML models, such as including sensitive features, selecting non-sensitive attributes, and applying bias mitigators. But how reliably can software professionals tasked with developing data-driven systems depend on these recommendations? And how well do these practices generalize in the presence of faulty labels, missing data, or distribution shifts? These questions form the core theme of this paper.
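The robustness questions posed here lend themselves to a simple kind of experiment: train a model, measure a group-fairness metric, inject a perturbation such as label noise, and re-measure. The sketch below is a minimal illustration of that loop on synthetic data with one binary sensitive attribute; it is not the paper's causal evaluation framework, and all variable names are hypothetical.

```python
# Illustrative sketch (not the paper's framework): how a fairness metric can
# drift under faulty labels. Synthetic data with one binary sensitive attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def demographic_parity_gap(y_pred, group):
    """|P(y_hat=1 | group=0) - P(y_hat=1 | group=1)|"""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Synthetic data: two non-sensitive features plus a binary sensitive attribute.
n = 2000
group = rng.integers(0, 2, n)
X = np.column_stack([rng.normal(size=n), rng.normal(size=n), group])
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

def fit_and_measure(y_train):
    model = LogisticRegression().fit(X, y_train)
    return demographic_parity_gap(model.predict(X), group)

print("gap with clean labels  :", fit_and_measure(y))

# Faulty labels: flip 10% at random and see how the fairness metric moves.
noisy = y.copy()
flip = rng.random(n) < 0.10
noisy[flip] = 1 - noisy[flip]
print("gap with 10% label noise:", fit_and_measure(noisy))
```

The same harness extends to missing data or distribution shift by swapping the perturbation step.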
Related papers
- A Gray Literature Study on Fairness Requirements in AI-enabled Software Engineering
arXiv Detail & Related papers (2025-12-08T19:22:01Z)
This paper presents a review of existing gray literature, examining fairness requirements in the AI context. The investigation surfaces various definitions of fairness requirements in AI systems. Fairness-requirement violations are frequently linked to, but not limited to, data representation bias, algorithmic and model design bias, human judgment, and evaluation and transparency gaps.
- Addressing Bias in LLMs: Strategies and Application to Fair AI-based Recruitment
arXiv Detail & Related papers (2025-06-13T15:29:43Z)
This work analyzes the capacity of Transformer-based systems to learn demographic biases present in the data. The authors propose a privacy-enhancing framework that removes gender information from the learning pipeline as a way to mitigate biased behavior in the final tools.
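As a point of reference for what removing gender information from a pipeline can mean at its simplest, the sketch below drops the sensitive column along with strongly correlated proxy features. This is a naive baseline of our own, not the privacy-enhancing framework the paper proposes; all column names are hypothetical.

```python
# Naive baseline sketch (not the paper's framework): suppress a sensitive
# column and any numeric feature strongly correlated with it before training.
import pandas as pd

def drop_gender_proxies(df: pd.DataFrame, sensitive: str, threshold: float = 0.5):
    """Return df without `sensitive` and without numeric columns whose absolute
    Pearson correlation with it exceeds `threshold`."""
    corr = df.corr(numeric_only=True)[sensitive].drop(sensitive).abs()
    proxies = corr[corr > threshold].index.tolist()
    return df.drop(columns=[sensitive] + proxies), proxies

# Hypothetical recruitment data; `gender` is the sensitive attribute.
df = pd.DataFrame({
    "gender": [0, 1, 0, 1, 0, 1],
    "years_exp": [5, 4, 3, 3, 4, 5],
    "proxy_feature": [0.1, 0.9, 0.2, 0.8, 0.1, 0.9],  # tracks gender closely
})
clean_df, removed = drop_gender_proxies(df, "gender")
print("removed proxy columns:", removed)
```

Adversarial or information-theoretic removal, as stronger frameworks pursue, goes well beyond this correlation filter.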
- Contextual Fairness-Aware Practices in ML: A Cost-Effective Empirical Evaluation
arXiv Detail & Related papers (2025-03-19T18:10:21Z)
This work investigates fairness-aware practices from two perspectives: context and cost-effectiveness. The findings provide insight into how context influences the effectiveness of fairness-aware practices, and the research aims to guide SE practitioners in selecting practices that achieve fairness with minimal performance cost.
- Causality Is Key to Understand and Balance Multiple Goals in Trustworthy ML and Foundation Models
arXiv Detail & Related papers (2025-02-28T14:57:33Z)
This paper advocates integrating causal methods into machine learning to navigate the trade-offs among key principles of trustworthy ML, arguing that a causal approach is essential for balancing multiple competing objectives in both trustworthy ML and foundation models.
- An Overview of Large Language Models for Statisticians
arXiv Detail & Related papers (2025-02-25T03:40:36Z)
Large language models (LLMs) have emerged as transformative tools in artificial intelligence (AI). This paper explores areas where statisticians can make important contributions to the development of LLMs, focusing on issues such as uncertainty quantification, interpretability, fairness, privacy, watermarking, and model adaptation.
- Analyzing Fairness of Computer Vision and Natural Language Processing Models
arXiv Detail & Related papers (2024-12-13T06:35:55Z)
This research uses two fairness libraries, Fairlearn by Microsoft and AIF360 by IBM, to assess and mitigate biases in unstructured datasets with computer vision (CV) and natural language processing (NLP) models. Results reveal that sequentially applying mitigation algorithms can further reduce bias while maintaining model performance.
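For orientation, the fragment below shows the typical shape of a Fairlearn audit: per-group metrics via MetricFrame plus a demographic parity gap. The two Fairlearn calls are real APIs, but the data is made up and this is only a sketch of the kind of assessment the study describes, not its experimental setup.

```python
# Minimal Fairlearn audit sketch on hypothetical predictions.
import numpy as np
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
sex = np.array(["F", "F", "F", "M", "M", "M", "M", "F"])

# Accuracy broken down by sensitive group.
mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=sex)
print(mf.by_group)

# Gap in positive-prediction rates between groups (demographic parity).
print(demographic_parity_difference(y_true, y_pred, sensitive_features=sex))
```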
- Analyzing Fairness of Classification Machine Learning Model with Structured Dataset
arXiv Detail & Related papers (2024-12-13T06:31:09Z)
This study investigates the fairness of machine learning models applied to structured datasets in classification tasks. Three fairness libraries were employed: Fairlearn by Microsoft, AIF360 by IBM, and the What-If Tool by Google. The research assesses the extent of bias in the ML models, compares the effectiveness of these libraries, and derives actionable insights for practitioners.
- A First Look at Fairness of Machine Learning Based Code Reviewer Recommendation
arXiv Detail & Related papers (2023-07-21T01:57:51Z)
This paper conducts the first study investigating fairness of ML applications in the software engineering (SE) domain. The empirical study demonstrates that current state-of-the-art ML-based code reviewer recommendation techniques exhibit unfair and discriminatory behavior; the paper also discusses why the studied systems are unfair and provides solutions to mitigate the unfairness.
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
D-BIAS is a visual interactive tool that embodies a human-in-the-loop AI approach to auditing and mitigating social biases. A user can detect bias against a group by identifying unfair causal relationships in a causal network; for each interaction, such as weakening or deleting a biased causal edge, the system simulates a new (debiased) dataset.
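The edge-editing idea can be pictured with a toy linear structural causal model: deleting the edge from the sensitive attribute to the outcome amounts to regenerating the outcome with that edge's coefficient set to zero. The snippet below is a schematic analogue with made-up variables and coefficients, not D-BIAS's actual simulation method.

```python
# Toy analogue of deleting a biased causal edge (not D-BIAS itself):
# in a linear SCM, removing sex -> hiring means resampling hiring with
# the corresponding coefficient zeroed.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
sex = rng.integers(0, 2, n)    # sensitive attribute
skill = rng.normal(size=n)     # non-sensitive cause

def simulate_hiring(beta_sex):
    noise = rng.normal(scale=0.3, size=n)
    return (1.0 * skill + beta_sex * sex + noise > 0.5).astype(int)

biased = simulate_hiring(beta_sex=0.8)    # edge sex -> hiring present
debiased = simulate_hiring(beta_sex=0.0)  # edge deleted

for name, y in [("biased", biased), ("debiased", debiased)]:
    gap = abs(y[sex == 0].mean() - y[sex == 1].mean())
    print(f"{name}: hiring-rate gap = {gap:.3f}")
```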
- Towards a Fairness-Aware Scoring System for Algorithmic Decision-Making
arXiv Detail & Related papers (2021-09-21T09:46:35Z)
This work proposes a general framework for creating data-driven, fairness-aware scoring systems, giving practitioners and policymakers great flexibility in selecting their desired fairness requirements.
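A scoring system in this sense is typically a small set of integer-weighted threshold rules, and a fairness requirement can then be expressed as a constraint on, for example, the gap in selection rates. The toy check below illustrates that idea with hypothetical rules and data; it is not the paper's optimization framework.

```python
# Toy illustration (not the paper's framework): a hand-written integer
# scoring rule checked against a selection-rate-gap constraint.
import numpy as np

rng = np.random.default_rng(2)
n = 500
group = rng.integers(0, 2, n)
income = rng.normal(50 + 5 * group, 10, n)   # group-correlated feature
savings = rng.normal(20, 5, n)

# Score = sum of integer points for simple threshold rules.
score = 2 * (income > 55) + 1 * (savings > 20)
approved = score >= 2

rate_gap = abs(approved[group == 0].mean() - approved[group == 1].mean())
print(f"selection-rate gap: {rate_gap:.3f}")
print("meets 0.1 demographic-parity requirement:", rate_gap <= 0.1)
```

In the framework's spirit, the fairness requirement would be imposed while learning the point weights, rather than checked after the fact as done here.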
- Leveraging Expert Consistency to Improve Algorithmic Decision Support
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
This work explores historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap. It proposes an influence-function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert; an empirical evaluation using simulations in a clinical setting and real-world data from the child welfare domain indicates that the approach successfully narrows the construct gap.
- LiFT: A Scalable Framework for Measuring Fairness in ML Applications
arXiv Detail & Related papers (2020-08-14T03:55:31Z)
This paper presents the LinkedIn Fairness Toolkit (LiFT), a framework for scalable computation of fairness metrics as part of large ML systems, and discusses the challenges of incorporating fairness tools in practice and the lessons learned during deployment at LinkedIn.