FairCompass: Operationalising Fairness in Machine Learning
- URL: http://arxiv.org/abs/2312.16726v1
- Date: Wed, 27 Dec 2023 21:29:53 GMT
- Title: FairCompass: Operationalising Fairness in Machine Learning
- Authors: Jessica Liu, Huaming Chen, Jun Shen, Kim-Kwang Raymond Choo
- Abstract summary: There is a growing imperative to develop responsible AI solutions.
Although a diverse assortment of machine learning fairness solutions has been proposed in the literature,
there is reportedly a lack of practical implementation of these tools in real-world applications.
- Score: 34.964477625987136
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As artificial intelligence (AI) increasingly becomes an integral part of our
societal and individual activities, there is a growing imperative to develop
responsible AI solutions. Although a diverse assortment of machine learning
fairness solutions has been proposed in the literature, there is reportedly a lack of
practical implementation of these tools in real-world applications. Industry
experts have participated in thorough discussions on the challenges associated
with operationalising fairness in the development of machine learning-empowered
solutions, in which a shift toward human-centred approaches is promptly
advocated to mitigate the limitations of existing techniques. In this work, we
propose a human-in-the-loop approach for fairness auditing, presenting a mixed
visual analytical system (hereafter referred to as 'FairCompass'), which
integrates both a subgroup discovery technique and a decision tree-based schema
for end users. Moreover, we innovatively integrate an Exploration, Guidance and
Informed Analysis loop to facilitate the use of the Knowledge Generation Model
for Visual Analytics in FairCompass. We evaluate the effectiveness of
FairCompass for fairness auditing in a real-world scenario, and the findings
demonstrate the system's potential for real-world deployability. We anticipate
this work will address the current gaps in research for fairness and facilitate
the operationalisation of fairness in machine learning systems.
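The abstract describes FairCompass's design only at a high level. As a rough illustration of the subgroup-discovery side of such a fairness audit, the Python sketch below enumerates subgroups defined by conjunctions of attribute values and flags those whose positive-prediction rate deviates from the overall rate. The function names, the 0.1 gap threshold, and the toy data are illustrative assumptions, not FairCompass's actual code or the specific metric it uses.

```python
import pandas as pd
from itertools import combinations

def positive_rate(df, pred_col="prediction"):
    """Fraction of rows receiving the favourable (positive) prediction."""
    return df[pred_col].mean()

def enumerate_subgroups(df, attrs, max_order=2):
    """Yield (description, boolean mask) pairs for conjunctions of attribute values."""
    for order in range(1, max_order + 1):
        for cols in combinations(attrs, order):
            for values, rows in df.groupby(list(cols)):
                values = values if isinstance(values, tuple) else (values,)
                desc = " AND ".join(f"{c}={v}" for c, v in zip(cols, values))
                yield desc, df.index.isin(rows.index)

def audit_subgroups(df, attrs, pred_col="prediction", min_size=30, threshold=0.1):
    """Flag subgroups whose positive-prediction rate deviates from the overall rate."""
    overall = positive_rate(df, pred_col)
    findings = []
    for desc, mask in enumerate_subgroups(df, attrs):
        group = df[mask]
        if len(group) < min_size:  # skip tiny, statistically noisy subgroups
            continue
        gap = positive_rate(group, pred_col) - overall
        if abs(gap) >= threshold:  # demographic-parity-style gap vs. the full population
            findings.append({"subgroup": desc, "size": len(group), "gap": round(gap, 3)})
    result = pd.DataFrame(findings)
    return result.sort_values("gap", key=abs, ascending=False) if len(result) else result

# Toy usage: two protected attributes and a binary model prediction
data = pd.DataFrame({
    "gender":     ["F", "M", "F", "M", "F", "M"] * 20,
    "age_band":   ["<30", "<30", "30-50", "30-50", ">50", ">50"] * 20,
    "prediction": [0, 1, 1, 1, 0, 1] * 20,
})
print(audit_subgroups(data, attrs=["gender", "age_band"], min_size=20))
```

In the paper's system, findings like these would presumably feed the visual, decision tree-based schema for end users rather than a printed table, with the analyst stepping through the Exploration, Guidance and Informed Analysis loop.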
Related papers
- The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Towards Involving End-users in Interactive Human-in-the-loop AI Fairness [1.889930012459365]
Ensuring fairness in artificial intelligence (AI) is important to counteract bias and discrimination in far-reaching applications.
Recent work has started to investigate how humans judge fairness and how to support machine learning (ML) experts in making their AI models fairer.
Our work explores designing interpretable and interactive human-in-the-loop interfaces that allow ordinary end-users to identify potential fairness issues.
arXiv Detail & Related papers (2022-04-22T02:24:11Z)
- A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions [4.594159253008448]
A large portion of fairness research has gone toward producing tools that machine learning practitioners can use to audit for bias while designing their algorithms.
However, there is a lack of application of these fairness solutions in practice.
This review provides an in-depth summary of the algorithmic bias issues that have been defined and the fairness solution space that has been proposed.
arXiv Detail & Related papers (2021-12-10T17:51:20Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) is an approach that explores the interaction between humans and robots.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- Fairness On The Ground: Applying Algorithmic Fairness Approaches to Production Systems [4.288137349392433]
This paper presents an example of applying algorithmic fairness approaches to complex production systems within the context of a large technology company.
We discuss how we disentangle normative questions of product and policy design from empirical questions of system implementation.
We also present an approach for answering questions of the latter sort, which allows us to measure how machine learning systems and human labelers are making these tradeoffs.
arXiv Detail & Related papers (2021-03-10T16:42:20Z)
- Artificial Intelligence for IT Operations (AIOPS) Workshop White Paper [50.25428141435537]
Artificial Intelligence for IT Operations (AIOps) is an emerging interdisciplinary field arising in the intersection between machine learning, big data, streaming analytics, and the management of IT operations.
The main aim of the AIOps workshop is to bring together researchers from both academia and industry to present their experiences, results, and work in progress in this field.
arXiv Detail & Related papers (2021-01-15T10:43:10Z)
- A Tale of Fairness Revisited: Beyond Adversarial Learning for Deep Neural Network Fairness [0.0]
Motivated by the need for fair algorithmic decision making in the age of automation and artificially-intelligent technology, this technical report provides a theoretical insight into adversarial training for fairness in deep learning.
We build upon previous work in adversarial fairness, show the persistent tradeoff between fair predictions and model performance, and explore further mechanisms that help offset this tradeoff (a generic sketch of such an adversarial setup appears after this list).
arXiv Detail & Related papers (2021-01-08T03:13:44Z)
- Learning to Complement Humans [67.38348247794949]
A rising vision for AI in the open world centers on the development of systems that can complement humans for perceptual, diagnostic, and reasoning tasks.
We demonstrate how an end-to-end learning strategy can be harnessed to optimize the combined performance of human-machine teams.
arXiv Detail & Related papers (2020-05-01T20:00:23Z)
- Getting Fairness Right: Towards a Toolbox for Practitioners [2.4364387374267427]
The potential risk of AI systems unintentionally embedding and reproducing bias has attracted the attention of machine learning practitioners and society at large.
This paper proposes to draft a toolbox which helps practitioners to ensure fair AI practices.
arXiv Detail & Related papers (2020-03-15T20:53:50Z)
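The 'A Tale of Fairness Revisited' entry above refers to adversarial training for fairness and its persistent fairness-performance tradeoff. The PyTorch sketch below is a generic gradient-reversal formulation of that idea, not that paper's own method: a predictor learns the task while an adversary tries to recover the protected attribute from the predictor's output, and the coefficient `lam` governs how strongly accuracy is traded for fairness. Network sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

# Predictor solves the task; adversary guesses the protected attribute from its output.
predictor = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(list(predictor.parameters()) + list(adversary.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(x, y, protected, lam=1.0):
    """One joint update: task loss plus gradient-reversed adversary loss."""
    logits = predictor(x)
    task_loss = bce(logits, y)
    # The adversary sees the gradient-reversed logits, so the predictor is pushed
    # toward outputs from which the protected attribute cannot be recovered.
    adv_logits = adversary(GradReverse.apply(logits, lam))
    adv_loss = bce(adv_logits, protected)
    loss = task_loss + adv_loss  # larger lam trades task accuracy for fairness
    opt.zero_grad()
    loss.backward()
    opt.step()
    return task_loss.item(), adv_loss.item()

# Toy batch: 10 features, binary label, binary protected attribute
x = torch.randn(64, 10)
y = torch.randint(0, 2, (64, 1)).float()
a = torch.randint(0, 2, (64, 1)).float()
print(train_step(x, y, a))
```

Raising `lam` typically evens out predictions across protected groups at some cost in task accuracy, which is the tradeoff that report examines.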