Interdisciplinary Expertise to Advance Equitable Explainable AI
- URL: http://arxiv.org/abs/2406.18563v1
- Date: Wed, 29 May 2024 17:45:38 GMT
- Title: Interdisciplinary Expertise to Advance Equitable Explainable AI
- Authors: Chloe R. Bennett, Heather Cole-Lewis, Stephanie Farquhar, Naama Hammel, Boris Babenko, Oran Lang, Mat Fleck, Ilana Traynis, Charles Lau, Ivor Horn, Courtney Lyles
- Abstract summary: In this paper, we focus on explainable AI (XAI) and describe a framework for interdisciplinary expert panel review.
We emphasize the importance of the interdisciplinary expert panel in producing more accurate, equitable interpretations.
- Score: 3.4195896673488395
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The field of artificial intelligence (AI) is rapidly influencing health and healthcare, but bias and poor performance persist for populations who face widespread structural oppression. Previous work has clearly outlined the need for more rigorous attention to data representativeness and model performance to advance equity and reduce bias. However, there is also an opportunity to improve the explainability of AI by leveraging best practices from social epidemiology and health equity to help develop hypotheses for the associations found. In this paper, we focus on explainable AI (XAI) and describe a framework for interdisciplinary expert panel review to discuss and critically assess AI model explanations from multiple perspectives, identify areas of bias, and identify directions for future research. We emphasize the importance of the interdisciplinary expert panel in producing more accurate, equitable interpretations that are historically and contextually informed. Interdisciplinary panel discussions can help reduce bias, identify potential confounders, and identify opportunities for additional research where there are gaps in the literature. In turn, these insights can suggest opportunities for AI model improvement.
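To make concrete what "AI model explanations" a panel like this might review, here is a minimal sketch (not from the paper) that generates a feature-attribution explanation with scikit-learn's permutation importance. The dataset, model choice, and feature names (e.g. "zip_code") are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch (not from the paper): producing the kind of
# feature-attribution explanation an interdisciplinary panel could review.
# Dataset, model, and feature names are hypothetical stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a clinical dataset.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = ["age", "bmi", "zip_code", "lab_a", "lab_b"]  # hypothetical

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance is one common XAI output a panel could interrogate,
# e.g. asking whether a highly ranked proxy variable such as zip_code
# reflects structural factors rather than a biological mechanism.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, mean in sorted(zip(feature_names, result.importances_mean),
                         key=lambda t: -t[1]):
    print(f"{name}: {mean:.3f}")
```

In the framework the paper describes, output like this would be a starting point for panel discussion rather than a finished interpretation.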
Related papers
- AI Readiness in Healthcare through Storytelling XAI [0.5120567378386615]
We develop an approach that combines multi-task distillation with interpretability techniques to enable audience-centric explainability.
Our methods increase the trust of both domain experts and machine learning experts, enabling responsible AI.
arXiv Detail & Related papers (2024-10-24T13:30:18Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose the Agent Foundation Model, a novel large action model for achieving embodied intelligent behavior.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Explainable AI in Orthopedics: Challenges, Opportunities, and Prospects [0.5277024349608834]
This work emphasizes the need for interdisciplinary collaborations between AI practitioners, orthopedic specialists, and regulatory entities to establish standards and guidelines for the adoption of XAI in orthopedics.
arXiv Detail & Related papers (2023-08-09T04:15:10Z)
- HiTZ@Antidote: Argumentation-driven Explainable Artificial Intelligence for Digital Medicine [7.089952396422835]
ANTIDOTE fosters an integrated vision of explainable AI, where low-level characteristics of the deep learning process are combined with higher-level schemes characteristic of human argumentation.
As a first result of the project, we publish the Antidote CasiMedicos dataset to facilitate research on explainable AI in general, and argumentation in the medical domain in particular.
arXiv Detail & Related papers (2023-06-09T16:50:02Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of it can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies [11.323961700172175]
This survey paper offers a succinct, comprehensive overview of fairness and bias in AI.
We review sources of bias, such as data, algorithmic, and human decision biases.
We assess the societal impact of biased AI systems, focusing on the perpetuation of inequalities and the reinforcement of harmful stereotypes.
arXiv Detail & Related papers (2023-04-16T03:23:55Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
A lack of interpretability, robustness, and out-of-distribution generalization is becoming a key challenge for existing visual models.
Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and highlight the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help bridge these stakeholder perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research [0.8707090176854576]
The main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems.
It often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata.
arXiv Detail & Related papers (2021-02-15T19:54:33Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.