The Case for Globalizing Fairness: A Mixed Methods Study on Colonialism,
AI, and Health in Africa
- URL: http://arxiv.org/abs/2403.03357v2
- Date: Mon, 11 Mar 2024 16:16:22 GMT
- Authors: Mercy Asiedu, Awa Dieng, Iskandar Haykel, Negar Rostamzadeh, Stephen
Pfohl, Chirag Nagpal, Maria Nagawa, Abigail Oppong, Sanmi Koyejo, Katherine
Heller
- Abstract summary: We conduct a scoping review to propose axes of disparities for fairness consideration in the African context.
We then conduct qualitative research studies with 672 general population study participants and 28 experts in ML, health, and policy.
Our analysis focuses on colonialism as the attribute of interest and examines the interplay between artificial intelligence (AI), health, and colonialism.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With growing application of machine learning (ML) technologies in healthcare,
there have been calls for developing techniques to understand and mitigate
biases these systems may exhibit. Fairness considerations in the development
of ML-based solutions for health have particular implications for Africa, which
already faces inequitable power imbalances between the Global North and
South. This paper seeks to explore fairness for global health, with Africa as a
case study. We conduct a scoping review to propose axes of disparities for
fairness consideration in the African context and delineate where they may come
into play in different ML-enabled medical modalities. We then conduct
qualitative research studies with 672 general population study participants and
28 experts in ML, health, and policy focused on Africa to obtain corroborative
evidence on the proposed axes of disparities. Our analysis focuses on
colonialism as the attribute of interest and examines the interplay between
artificial intelligence (AI), health, and colonialism. Among the pre-identified
attributes, we found that colonial history, country of origin, and national
income level were specific axes of disparities that participants believed would
cause an AI system to be biased. However, there was also divergence of opinion
between experts and general population participants. Whereas experts generally
expressed a shared view about the relevance of colonial history for the
development and implementation of AI technologies in Africa, the majority of
the general population participants surveyed did not think there was a direct
link between AI and colonialism. Based on these findings, we provide practical
recommendations for developing fairness-aware ML solutions for health in
Africa.
Related papers
- Nteasee: A mixed methods study of expert and general population perspectives on deploying AI for health in African countries [5.554587779732823]
We conduct a qualitative study to investigate the best practices, fairness indicators, and potential biases to mitigate when deploying AI for health in Africa.
We use a mixed methods approach combining in-depth interviews (IDIs) and surveys.
We administer a blinded 30-minute survey with case studies to 672 general population participants across 5 countries in Africa.
arXiv Detail & Related papers (2024-09-04T13:56:49Z)
- Artificial Intelligence for Public Health Surveillance in Africa: Applications and Opportunities [0.0]
This paper investigates the applications of AI in public health surveillance across the continent.
Our paper highlights AI's potential to enhance disease monitoring and health outcomes.
Key barriers to the widespread adoption of AI in African public health systems have been identified.
arXiv Detail & Related papers (2024-08-05T15:48:51Z)
- An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems [49.1574468325115]
We show that human rights already underpin the decisions in the field of data use.
This work presents a methodology and a model for a Human Rights Impact Assessment (HRIA).
The proposed methodology is tested in concrete case-studies to prove its feasibility and effectiveness.
arXiv Detail & Related papers (2024-07-30T16:27:52Z)
- The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z)
- Case Studies of AI Policy Development in Africa [1.3194391758295114]
Artificial Intelligence (AI) requires new ways of evaluating national technology use and strategy for African nations.
We conclude that existing global readiness assessments do not fully capture African states' progress in AI readiness.
arXiv Detail & Related papers (2024-02-29T19:17:11Z)
- What We Know So Far: Artificial Intelligence in African Healthcare [0.0]
Artificial intelligence (AI) applied to healthcare has the potential to transform healthcare in Africa.
This paper reviews the current state of how AI Algorithms can be used to improve diagnostics, treatment, and disease monitoring.
There is a need for a well-coordinated effort by the governments, private sector, healthcare providers, and international organizations to create sustainable AI solutions.
arXiv Detail & Related papers (2023-05-10T19:27:40Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Globalizing Fairness Attributes in Machine Learning: A Case Study on Health in Africa [8.566023181495929]
Fairness has implications for global health in Africa, which already has inequitable power imbalances between the Global North and South.
We propose fairness attributes for consideration in the African context and delineate where they may come into play in different ML-enabled medical modalities.
arXiv Detail & Related papers (2023-04-05T02:10:53Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as a part of the fairness assessment of clinical ML systems.
arXiv Detail & Related papers (2022-07-21T09:35:38Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)