Toward Operationalizing Pipeline-aware ML Fairness: A Research Agenda
for Developing Practical Guidelines and Tools
- URL: http://arxiv.org/abs/2309.17337v1
- Date: Fri, 29 Sep 2023 15:48:26 GMT
- Title: Toward Operationalizing Pipeline-aware ML Fairness: A Research Agenda
for Developing Practical Guidelines and Tools
- Authors: Emily Black, Rakshit Naidu, Rayid Ghani, Kit T. Rodolfa, Daniel E. Ho,
Hoda Heidari
- Abstract summary: Recent work has called on the ML community to take a more holistic approach to tackle fairness issues.
We first demonstrate that without clear guidelines and toolkits, even individuals with specialized ML knowledge find it challenging to hypothesize how various design choices influence model behavior.
We then consult the fair-ML literature to understand the progress to date toward operationalizing the pipeline-aware approach.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While algorithmic fairness is a thriving area of research, in practice,
mitigating issues of bias often gets reduced to enforcing an arbitrarily chosen
fairness metric, either by enforcing fairness constraints during the
optimization step, post-processing model outputs, or by manipulating the
training data. Recent work has called on the ML community to take a more
holistic approach to tackle fairness issues by systematically investigating the
many design choices made through the ML pipeline, and identifying interventions
that target the issue's root cause, as opposed to its symptoms. While we share
the conviction that this pipeline-based approach is the most appropriate for
combating algorithmic unfairness on the ground, we believe there are currently
very few methods of \emph{operationalizing} this approach in practice. Drawing
on our experience as educators and practitioners, we first demonstrate that
without clear guidelines and toolkits, even individuals with specialized ML
knowledge find it challenging to hypothesize how various design choices
influence model behavior. We then consult the fair-ML literature to understand
the progress to date toward operationalizing the pipeline-aware approach: we
systematically collect and organize the prior work that attempts to detect,
measure, and mitigate various sources of unfairness through the ML pipeline. We
utilize this extensive categorization of previous contributions to sketch a
research agenda for the community. We hope this work serves as the stepping
stone toward a more comprehensive set of resources for ML researchers,
practitioners, and students interested in exploring, designing, and testing
pipeline-oriented approaches to algorithmic fairness.
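To make the abstract's categories concrete, here is a minimal sketch, not taken from the paper, of the three intervention points it names: manipulating the training data (pre-processing), constraining the optimization (in-processing), and post-processing model outputs. The synthetic data, the reweighing scheme, and the thresholds below are illustrative assumptions only.

```python
# Minimal sketch (not from the paper) of the three intervention points the abstract
# names: manipulating training data (pre-processing), constraining the optimization
# (in-processing), and post-processing model outputs. Data, thresholds, and the
# weighting scheme are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy cohort: a binary protected attribute, binary labels, and model scores.
n = 1000
group = rng.integers(0, 2, size=n)                    # 0 = group A, 1 = group B
y = (rng.random(n) < 0.4 + 0.2 * group).astype(int)   # base rates differ by group
scores = np.clip(0.4 * y + 0.1 * group + rng.normal(0.3, 0.15, n), 0.0, 1.0)

def demographic_parity_gap(pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

# (1) Pre-processing: reweigh examples so each (group, label) cell carries equal
#     mass; the weights would be passed to a learner as sample_weight.
weights = np.ones(n)
for g in (0, 1):
    for label in (0, 1):
        cell = (group == g) & (y == label)
        weights[cell] = n / (4.0 * cell.sum())

# (2) In-processing (conceptual): fold the chosen metric into the training
#     objective, e.g. total_loss = task_loss + lam * parity_gap(soft_preds, group).

# (3) Post-processing: a single decision threshold vs. one threshold per group.
single_threshold = (scores >= 0.50).astype(int)
per_group_threshold = np.where(group == 0, scores >= 0.45, scores >= 0.55).astype(int)

print("parity gap, single threshold:    ",
      round(demographic_parity_gap(single_threshold, group), 3))
print("parity gap, per-group thresholds:",
      round(demographic_parity_gap(per_group_threshold, group), 3))
```

The sketch only shows that each intervention targets a different stage of the pipeline; which stage addresses the root cause of a given disparity, rather than its symptoms, is exactly the question the pipeline-aware approach asks.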
Related papers
- EVOLvE: Evaluating and Optimizing LLMs For Exploration [76.66831821738927]
Large language models (LLMs) remain under-studied in scenarios requiring optimal decision-making under uncertainty.
We measure LLMs' (in)ability to make optimal decisions in bandits, a state-less reinforcement learning setting relevant to many applications.
Motivated by the existence of optimal exploration algorithms, we propose efficient ways to integrate this algorithmic knowledge into LLMs.
arXiv Detail & Related papers (2024-10-08T17:54:03Z)
- Debiasing Multimodal Large Language Models [61.6896704217147]
Large Vision-Language Models (LVLMs) have become indispensable tools in computer vision and natural language processing.
Our investigation reveals a noteworthy bias in the generated content, where the output is primarily influenced by the prior of the underlying Large Language Models (LLMs) rather than by the input image.
To rectify these biases and redirect the model's focus toward vision information, we introduce two simple, training-free strategies.
arXiv Detail & Related papers (2024-03-08T12:35:07Z)
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs)
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- Causality-Aided Trade-off Analysis for Machine Learning Fairness [11.149507394656709]
This paper uses causality analysis as a principled method for analyzing trade-offs between fairness parameters and other crucial metrics in machine learning pipelines.
We propose a set of domain-specific optimizations to facilitate accurate causal discovery and a unified, novel interface for trade-off analysis based on well-established causal inference methods.
arXiv Detail & Related papers (2023-05-22T14:14:43Z)
- Individual Fairness under Uncertainty [26.183244654397477]
Algorithmic fairness is an established area of machine learning (ML) research.
We propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels.
We argue that this perspective represents a more realistic model of fairness research for real-world application deployment.
arXiv Detail & Related papers (2023-02-16T01:07:58Z)
- A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions [4.594159253008448]
A large portion of fairness research has gone toward producing tools that machine learning practitioners can use to audit for bias while designing their algorithms.
However, these fairness solutions see little application in practice.
This review provides an in-depth summary of the algorithmic bias issues that have been defined and the fairness solution space that has been proposed.
arXiv Detail & Related papers (2021-12-10T17:51:20Z)
- Toward a Perspectivist Turn in Ground Truthing for Predictive Computing [1.3985293623849522]
We describe what we call data perspectivism, which moves away from traditional gold-standard datasets and toward methods that integrate the opinions and perspectives of the human subjects involved in the knowledge representation step of machine learning processes.
We present the main advantages of adopting a perspectivist stance in ML, as well as possible disadvantages, and various ways in which such a stance can be implemented in practice.
arXiv Detail & Related papers (2021-09-09T13:42:27Z)
- MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning [65.52675802289775]
We show that an uncertainty-aware classifier can solve challenging reinforcement learning problems.
We propose a novel method for computing the normalized maximum likelihood (NML) distribution.
We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions.
arXiv Detail & Related papers (2021-07-15T08:19:57Z)
- An Empirical Comparison of Bias Reduction Methods on Real-World Problems in High-Stakes Policy Settings [13.037143215464132]
We investigate the performance of several methods that operate at different points in the machine learning pipeline across four real-world public policy and social good problems.
We find a wide degree of variability and inconsistency in the ability of many of these methods to improve model fairness, but post-processing by choosing group-specific score thresholds consistently removes disparities.
arXiv Detail & Related papers (2021-05-13T17:33:28Z)
- Off-Policy Imitation Learning from Observations [78.30794935265425]
Learning from Observations (LfO) is a practical reinforcement learning scenario from which many applications can benefit.
We propose a sample-efficient LfO approach that enables off-policy optimization in a principled manner.
Our approach is comparable with the state of the art on locomotion tasks in terms of both sample efficiency and performance.
arXiv Detail & Related papers (2021-02-25T21:33:47Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)