HiTZ@Antidote: Argumentation-driven Explainable Artificial Intelligence
for Digital Medicine
- URL: http://arxiv.org/abs/2306.06029v1
- Date: Fri, 9 Jun 2023 16:50:02 GMT
- Title: HiTZ@Antidote: Argumentation-driven Explainable Artificial Intelligence
for Digital Medicine
- Authors: Rodrigo Agerri, Iñigo Alonso, Aitziber Atutxa, Ander Berrondo,
Ainara Estarrona, Iker Garcia-Ferrero, Iakes Goenaga, Koldo Gojenola, Maite
Oronoz, Igor Perez-Tejedor, German Rigau and Anar Yeginbergenova
- Abstract summary: ANTIDOTE fosters an integrated vision of explainable AI, where low-level characteristics of the deep learning process are combined with higher-level schemes proper to the human capacity for argumentation.
As a first result of the project, we publish the Antidote CasiMedicos dataset to facilitate research on explainable AI in general, and argumentation in the medical domain in particular.
- Score: 7.089952396422835
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Providing high-quality explanations for AI predictions based on machine
learning is a challenging and complex task. To work well it requires, among
other factors: selecting a proper level of generality/specificity of the
explanation; considering assumptions about the familiarity of the explanation
beneficiary with the AI task under consideration; referring to specific
elements that have contributed to the decision; making use of additional
knowledge (e.g. expert evidence) which might not be part of the prediction
process; and providing evidence supporting negative hypotheses. Finally, the
system needs to formulate the explanation in a clearly interpretable, and
possibly convincing, way. Given these considerations, ANTIDOTE fosters an
integrated vision of explainable AI, where low-level characteristics of the
deep learning process are combined with higher-level schemes proper to the
human capacity for argumentation. ANTIDOTE will exploit cross-disciplinary
competences in deep learning and argumentation to support a broader and
innovative view of explainable AI, where the need for high-quality explanations
for clinical case deliberation is critical. As a first result of the project,
we publish the Antidote CasiMedicos dataset to facilitate research on
explainable AI in general, and on argumentation in the medical domain in
particular.
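As a practical starting point, the sketch below shows one way to load the published dataset for experimentation. It is a minimal sketch assuming the data is hosted on the Hugging Face Hub; the hub ID HiTZ/casimedicos-exp and the record schema are illustrative assumptions, not details confirmed by the paper.

```python
# Minimal sketch: loading the Antidote CasiMedicos dataset with the
# Hugging Face `datasets` library for explainable-AI experiments.
from datasets import load_dataset

# Hypothetical hub ID; check the project page for the actual location.
dataset = load_dataset("HiTZ/casimedicos-exp")

# Assumed schema: each record is a clinical case with a question,
# candidate answers, and a doctor-written explanation arguing for
# the correct option.
example = dataset["train"][0]
print(sorted(example.keys()))
```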
Related papers
- An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems [0.3480973072524161]
Recent research in explainability has focused on explaining the workings of AI models, i.e., on model explainability.
This thesis seeks to bridge some gaps between model and user-centered explainability.
arXiv Detail & Related papers (2024-10-23T02:03:49Z)
- Do great minds think alike? Investigating Human-AI Complementarity in Question Answering with CAIMIRA [43.116608441891096]
Humans outperform AI systems in knowledge-grounded abductive and conceptual reasoning.
State-of-the-art LLMs like GPT-4 and LLaMA show superior performance on targeted information retrieval.
arXiv Detail & Related papers (2024-10-09T03:53:26Z)
- Explainable AI: Definition and attributes of a good explanation for health AI [0.18846515534317265]
Understanding how and why an AI system makes a recommendation may require complex explanations of its inner workings and reasoning processes.
To fully realize the potential of AI, it is critical to address two fundamental questions about explanations for safety-critical AI applications.
The research outputs include (1) a definition of what constitutes an explanation in health-AI and (2) a comprehensive list of attributes that characterize a good explanation in health-AI.
arXiv Detail & Related papers (2024-09-09T16:56:31Z)
- The Explanation Necessity for Healthcare AI [3.8953842074141387]
We propose a novel categorization system with four distinct classes of explanation necessity.
Three key factors are considered: the robustness of the evaluation protocol, the variability of expert observations, and the representation dimensionality of the application.
arXiv Detail & Related papers (2024-05-31T22:20:10Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case (a hedged sketch of the underlying post-hoc attribution step appears after this list).
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Machine Reasoning Explainability [100.78417922186048]
Machine Reasoning (MR) uses largely symbolic means to formalize and emulate abstract reasoning.
Studies in early MR have notably started inquiries into Explainable AI (XAI).
This document reports our work in-progress on MR explainability.
arXiv Detail & Related papers (2020-09-01T13:45:05Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies [1.2762298148425795]
Lack of transparency is identified as one of the main barriers to implementation of AI systems in health care.
We review the recent literature to provide guidance to researchers and practitioners on the design of explainable AI systems.
We conclude that explainable modelling can contribute to trustworthy AI, but the benefits of explainability still need to be proven in practice.
arXiv Detail & Related papers (2020-07-31T09:08:27Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
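As referenced in the type-2 diabetes entry above, here is a minimal, hedged sketch of the post-hoc attribution step that such contextual-explanation pipelines build on. The synthetic data, the gradient-boosting model, and the use of the shap library are illustrative assumptions; the paper itself goes further and contextualizes attributions with LLMs.

```python
# Minimal sketch: post-hoc feature attributions for a tabular
# comorbidity-risk model (illustrative, not the paper's pipeline).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                   # synthetic patient features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # synthetic risk label

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer yields per-feature contributions to each prediction;
# a clinician-facing system would then verbalize and contextualize these.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(np.asarray(shap_values).shape)            # (5 patients, 4 features)
```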
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.