Globalizing Fairness Attributes in Machine Learning: A Case Study on
Health in Africa
- URL: http://arxiv.org/abs/2304.02190v1
- Date: Wed, 5 Apr 2023 02:10:53 GMT
- Authors: Mercy Nyamewaa Asiedu, Awa Dieng, Abigail Oppong, Maria Nagawa, Sanmi
Koyejo, Katherine Heller
- Abstract summary: Fairness has implications for global health in Africa, which already has inequitable power imbalances between the Global North and South.
We propose fairness attributes for consideration in the African context and delineate where they may come into play in different ML-enabled medical modalities.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With growing machine learning (ML) applications in healthcare, there have
been calls for fairness in ML to understand and mitigate ethical concerns these
systems may pose. Fairness has implications for global health in Africa, which
already has inequitable power imbalances between the Global North and South.
This paper seeks to explore fairness for global health, with Africa as a case
study. We propose fairness attributes for consideration in the African context
and delineate where they may come into play in different ML-enabled medical
modalities. This work serves as a basis and call for action for furthering
research into fairness in global health.
Related papers
- A tutorial on fairness in machine learning in healthcare [0.6311610943467981]
This tutorial is designed to introduce the medical informatics community to the common notions of fairness within machine learning.
We describe the fundamental concepts and methods used to define fairness in ML, including an overview of why models in healthcare may be unfair.
We provide a user-friendly R package for comprehensive group fairness evaluation, enabling researchers and clinicians to assess fairness in their own ML work.
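The package described above is in R; as a language-neutral illustration (not the paper's package, and with hypothetical function names), the building blocks of most group fairness checks — per-group selection rates and true-positive rates — can be computed directly from predictions:

```python
# Illustrative sketch of group fairness evaluation (not the paper's R package):
# compute per-group selection rate and true-positive rate from predictions.

def group_rates(y_true, y_pred, groups):
    """Per-group selection rate and true-positive rate (TPR)."""
    stats = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        pos = [i for i in idx if y_true[i] == 1]  # actual positives in group
        stats[g] = {
            "selection_rate": sum(y_pred[i] for i in idx) / len(idx),
            "tpr": sum(y_pred[i] for i in pos) / len(pos) if pos else float("nan"),
        }
    return stats

# Toy example: two groups with equal selection rates but different TPRs.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
stats = group_rates(y_true, y_pred, groups)
```

Large gaps between groups on either rate are the usual starting point for a group fairness audit.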
arXiv Detail & Related papers (2024-06-13T16:41:30Z)
- The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z)
- The Case for Globalizing Fairness: A Mixed Methods Study on Colonialism, AI, and Health in Africa [16.7528939567041]
We conduct a scoping review to propose axes of disparities for fairness consideration in the African context.
We then conduct qualitative research studies with 672 general population study participants and 28 experts in ML, health, and policy.
Our analysis focuses on colonialism as the attribute of interest and examines the interplay between artificial intelligence (AI), health, and colonialism.
arXiv Detail & Related papers (2024-03-05T22:54:15Z)
- Connecting Fairness in Machine Learning with Public Health Equity [0.0]
Biases in data and model design can result in disparities for certain protected groups and amplify existing inequalities in healthcare.
This study summarizes seminal literature on ML fairness and presents a framework for identifying and mitigating biases in the data and model.
Case studies suggest how the framework can be used to prevent these biases and highlight the need for fair and equitable ML models in public health.
arXiv Detail & Related papers (2023-04-08T10:21:49Z)
- Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML [52.86328317233883]
We present a comprehensive overview of different ways in which fairness-related harm can arise.
We highlight several open technical challenges for future work in this direction.
arXiv Detail & Related papers (2023-03-15T09:40:08Z)
- Auditing Algorithmic Fairness in Machine Learning for Health with Severity-Based LOGAN [70.76142503046782]
We propose supplementing bias audits of machine learning (ML) healthcare tools with SLOGAN, an automatic tool for capturing local biases in a clinical prediction task.
SLOGAN adapts an existing tool, LOcal Group biAs detectioN (LOGAN), by contextualizing group bias detection in patient illness severity and past medical history.
On average, SLOGAN identifies larger fairness disparities than LOGAN in over 75% of patient groups while maintaining clustering quality.
arXiv Detail & Related papers (2022-11-16T08:04:12Z)
- Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as a part of the fairness assessment of clinical ML systems.
arXiv Detail & Related papers (2022-07-21T09:35:38Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- What Is Fairness? On the Role of Protected Attributes and Fictitious Worlds [8.223468651994352]
A growing body of literature in fairness-aware machine learning (fairML) aims to mitigate machine learning (ML)-related unfairness in automated decision-making (ADM).
However, the underlying concept of fairness is rarely discussed, leaving a significant gap between centuries of philosophical discussion and the recent adoption of the concept in the ML community.
We try to bridge this gap by formalizing a consistent concept of fairness and by translating the philosophical considerations into a formal framework for the training and evaluation of ML models in ADM systems.
arXiv Detail & Related papers (2022-05-19T15:37:26Z)
- The zoo of Fairness metrics in Machine Learning [62.997667081978825]
In recent years, the problem of addressing fairness in Machine Learning (ML) and automatic decision-making has attracted a lot of attention.
A plethora of different definitions of fairness in ML have been proposed, each capturing a different notion of what a "fair decision" is in situations impacting individuals in the population.
In this work, we try to make some order out of this zoo of definitions.
arXiv Detail & Related papers (2021-06-01T13:19:30Z)
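As a concrete illustration of why this zoo of definitions matters (a toy example constructed here, not taken from the paper), the same predictions can satisfy demographic parity while maximally violating equalized odds:

```python
# Two common group fairness definitions applied to the same predictions.
def selection_rate(y_pred):
    """Fraction of individuals predicted positive (demographic parity)."""
    return sum(y_pred) / len(y_pred)

def tpr(y_true, y_pred):
    """True-positive rate among actual positives (equalized odds component)."""
    pos = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(pos) / len(pos)

# Group A: the classifier gets everything right.
a_true, a_pred = [1, 1, 0, 0], [1, 1, 0, 0]
# Group B: same selection rate, but every prediction is wrong.
b_true, b_pred = [1, 1, 0, 0], [0, 0, 1, 1]

dp_gap = abs(selection_rate(a_pred) - selection_rate(b_pred))  # 0.0: parity holds
tpr_gap = abs(tpr(a_true, a_pred) - tpr(b_true, b_pred))       # 1.0: odds violated
```

Which gap matters depends on the normative notion of fairness chosen, which is exactly the ordering the paper tries to impose on the zoo.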
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.