Nteasee: A mixed methods study of expert and general population perspectives on deploying AI for health in African countries
- URL: http://arxiv.org/abs/2409.12197v2
- Date: Tue, 15 Oct 2024 19:01:46 GMT
- Title: Nteasee: A mixed methods study of expert and general population perspectives on deploying AI for health in African countries
- Authors: Mercy Nyamewaa Asiedu, Iskandar Haykel, Awa Dieng, Kerrie Kauer, Tousif Ahmed, Florence Ofori, Charisma Chan, Stephen Pfohl, Negar Rostamzadeh, Katherine Heller,
- Abstract summary: We conduct a qualitative study to investigate the best practices, fairness indicators, and potential biases to mitigate when deploying AI for health in Africa.
We use a mixed methods approach combining in-depth interviews (IDIs) and surveys.
We administer a blinded 30-minute survey with case studies to 672 general population participants across 5 countries in Africa.
- Score: 5.554587779732823
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence (AI) for health has the potential to significantly change and improve healthcare. However, in most African countries, how to identify culturally and contextually attuned approaches for deploying these solutions remains poorly understood. To bridge this gap, we conducted a qualitative study to investigate the best practices, fairness indicators, and potential biases to mitigate when deploying AI for health in African countries, as well as to explore opportunities where artificial intelligence could make a positive impact on health. We used a mixed methods approach combining in-depth interviews (IDIs) and surveys. We conducted 1.5-2 hour-long IDIs with 50 experts in health, policy, and AI across 17 countries, and, using an inductive approach, performed a qualitative thematic analysis of the expert IDI responses. We administered a blinded 30-minute survey with case studies to 672 general population participants across 5 African countries and analyzed responses on quantitative scales, statistically comparing responses by country, age, gender, and level of familiarity with AI. We thematically summarized open-ended survey responses. Our results show generally positive attitudes and high levels of trust, accompanied by moderate levels of concern, among general population participants regarding the use of AI for health in Africa. This contrasts with expert responses, where major themes revolved around trust/mistrust, ethical concerns, and systemic barriers to integration, among others. This work presents the first-of-its-kind qualitative research study of the potential of AI for health in Africa from an algorithmic fairness angle, with perspectives from both experts and the general population. We hope that this work guides policymakers and drives home the need for further research and the inclusion of general population perspectives in decision-making around AI usage.
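The survey analysis described in the abstract (statistically comparing quantitative-scale responses by country, age, gender, and AI familiarity) can be illustrated with a minimal sketch. The paper does not specify the exact tests or data layout; the Kruskal-Wallis test, the column names, and the `survey.csv` file below are assumptions for illustration only, not the authors' actual analysis pipeline.

```python
# Minimal illustrative sketch: group-wise comparison of Likert-scale survey
# responses. Test choice (Kruskal-Wallis), column names, and input file are
# assumptions, not the paper's reported methodology.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey.csv")  # hypothetical file: one row per participant

# Hypothetical columns: 'trust' is a 1-5 Likert rating; the rest are grouping variables.
for group_var in ["country", "age_group", "gender", "ai_familiarity"]:
    groups = [g["trust"].dropna() for _, g in df.groupby(group_var)]
    h_stat, p_value = stats.kruskal(*groups)  # non-parametric test across groups
    print(f"{group_var}: H={h_stat:.2f}, p={p_value:.4f}")
```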
Related papers
- Democratising Artificial Intelligence for Pandemic Preparedness and Global Governance in Latin American and Caribbean Countries [0.0]
Infectious diseases, transmitted directly or indirectly, are among the leading causes of epidemics and pandemics.
The Global South AI for Pandemic & Epidemic Preparedness & Response Network (AI4PEP) has developed an initiative comprising 16 projects across 16 countries in the Global South.
This opinion introduces our branches in Latin American and Caribbean (LAC) countries and discusses AI governance in LAC in the light of biotechnology.
arXiv Detail & Related papers (2024-09-21T15:59:13Z)
- Artificial Intelligence for Public Health Surveillance in Africa: Applications and Opportunities [0.0]
This paper investigates the applications of AI in public health surveillance across the continent.
Our paper highlights AI's potential to enhance disease monitoring and health outcomes.
Key barriers to the widespread adoption of AI in African public health systems have been identified.
arXiv Detail & Related papers (2024-08-05T15:48:51Z)
- Towards Clinical AI Fairness: Filling Gaps in the Puzzle [15.543248260582217]
This review systematically pinpoints several deficiencies concerning both healthcare data and the provided AI fairness solutions.
We highlight the scarcity of research on AI fairness in many medical domains where AI technology is increasingly utilized.
To bridge these gaps, our review advances actionable strategies for both the healthcare and AI research communities.
arXiv Detail & Related papers (2024-05-28T07:42:55Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- The Case for Globalizing Fairness: A Mixed Methods Study on Colonialism, AI, and Health in Africa [16.7528939567041]
We conduct a scoping review to propose axes of disparities for fairness consideration in the African context.
We then conduct qualitative research studies with 672 general population study participants and 28 experts in ML, health, and policy.
Our analysis focuses on colonialism as the attribute of interest and examines the interplay between artificial intelligence (AI), health, and colonialism.
arXiv Detail & Related papers (2024-03-05T22:54:15Z)
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare [73.78776682247187]
Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI.
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
arXiv Detail & Related papers (2023-08-11T10:49:05Z)
- Ensuring Trustworthy Medical Artificial Intelligence through Ethical and Philosophical Principles [4.705984758887425]
AI-based computer-assisted diagnosis and treatment tools can democratize healthcare by matching or surpassing the performance of clinical experts.
The democratization of such AI tools can reduce the cost of care, optimize resource allocation, and improve the quality of care.
However, integrating AI into healthcare raises several ethical and philosophical concerns, such as bias, transparency, autonomy, responsibility, and accountability.
arXiv Detail & Related papers (2023-04-23T04:14:18Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has neither a set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence scores can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)