An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems
- URL: http://arxiv.org/abs/2410.17504v1
- Date: Wed, 23 Oct 2024 02:03:49 GMT
- Title: An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems
- Authors: Shruthi Chari
- Abstract summary: Recent research in explainability has focused on explaining the workings of AI models or model explainability.
This thesis seeks to bridge some gaps between model and user-centered explainability.
- Score: 0.3480973072524161
- License:
- Abstract: Explainable Artificial Intelligence (AI) focuses on helping humans understand the workings of AI systems or their decisions and has been a cornerstone of AI for decades. Recent research in explainability has focused on explaining the workings of AI models, or model explainability. There have also been several position statements and review papers detailing end-users' needs for user-centered explainability, but fewer implementations. Hence, this thesis seeks to bridge some gaps between model and user-centered explainability. We create an explanation ontology (EO) to represent literature-derived explanation types via their supporting components. We implement a knowledge-augmented question-answering (QA) pipeline to support contextual explanations in a clinical setting. Finally, we are implementing a system to combine explanations from different AI methods and data modalities. Within the EO, we can represent fifteen different explanation types, and we have tested these representations in six exemplar use cases. We find that knowledge augmentations improve the performance of base large language models in contextualized QA, and that performance varies across disease groups. In the same setting, clinicians also indicated that they prefer to see actionability as one of the main foci in explanations. In our explanation-combination method, we plan to use similarity metrics to determine the similarity of explanations in a chronic disease detection setting. Overall, through this thesis, we design methods that can support knowledge-enabled explanations across different use cases, accounting for the methods in today's AI era that can generate the supporting components of these explanations and the domain knowledge sources that can enhance them.
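The abstract describes the EO only at a high level. As a rough illustration of what "representing an explanation type via its supporting components" might look like in practice, the sketch below models one explanation instance as RDF triples and queries it with SPARQL, using Python's rdflib. The namespace, class names, and properties (e.g., eo:generatedBy, eo:usesKnowledgeSource) are hypothetical placeholders chosen for illustration, not the published EO terms.

```python
# Minimal, illustrative sketch of modeling an explanation type and its
# supporting components as RDF triples, in the spirit of the Explanation
# Ontology (EO) described in the abstract. All IRIs below are hypothetical
# placeholders, not the published ontology's terms.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EO = Namespace("http://example.org/explanation-ontology#")  # hypothetical namespace

g = Graph()
g.bind("eo", EO)

# Declare a literature-derived explanation type as a subclass of a generic Explanation class.
g.add((EO.ContextualExplanation, RDF.type, RDFS.Class))
g.add((EO.ContextualExplanation, RDFS.subClassOf, EO.Explanation))

# Represent one explanation instance together with the components that support it:
# the generating method, the knowledge source, the user question, and the recipient.
expl = EO.explanation_001
g.add((expl, RDF.type, EO.ContextualExplanation))
g.add((expl, EO.generatedBy, EO.KnowledgeAugmentedQAPipeline))
g.add((expl, EO.usesKnowledgeSource, EO.ClinicalPracticeGuideline))
g.add((expl, EO.addressesQuestion, Literal("Why is this treatment recommended for this patient?")))
g.add((expl, EO.hasRecipient, EO.Clinician))

# Competency-question-style query: which explanations are supported by a clinical guideline?
results = g.query(
    """
    SELECT ?e WHERE {
        ?e eo:usesKnowledgeSource eo:ClinicalPracticeGuideline .
    }
    """,
    initNs={"eo": EO},
)
for row in results:
    print(row.e)
```

Queries of this kind are one common way ontology representations are exercised in exemplar use cases; the specific classes and properties here are only stand-ins for whatever the EO actually defines.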
Related papers
- Fool Me Once? Contrasting Textual and Visual Explanations in a Clinical Decision-Support Setting [43.110187812734864]
We evaluate three types of explanations: visual explanations (saliency maps), natural language explanations, and a combination of both modalities.
We find that text-based explanations lead to significant over-reliance, which is alleviated by combining them with saliency maps.
We also observe that the quality of explanations, that is, how much factually correct information they entail and how well this aligns with the AI's correctness, significantly impacts the usefulness of the different explanation types.
arXiv Detail & Related papers (2024-10-16T06:43:02Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- A Theoretical Framework for AI Models Explainability with Application in Biomedicine [3.5742391373143474]
We propose a novel definition of explanation that is a synthesis of what can be found in the literature.
We fit explanations into the properties of faithfulness (i.e., the explanation being a true description of the model's inner workings and decision-making process) and plausibility (i.e., how convincing the explanation looks to the user).
arXiv Detail & Related papers (2022-12-29T20:05:26Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Explanation Ontology: A Model of Explanations for User-Centered AI [3.1783442097247345]
Explanations have often been added to AI systems in a non-principled, post-hoc manner.
With greater adoption of these systems and emphasis on user-centric explainability, there is a need for a structured representation that treats explainability as a primary consideration.
We design an explanation ontology to model both the role of explanations, accounting for the system and user attributes in the process, and the range of different literature-derived explanation types.
arXiv Detail & Related papers (2020-10-04T03:53:35Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
- The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies [1.2762298148425795]
Lack of transparency is identified as one of the main barriers to implementation of AI systems in health care.
We review the recent literature to provide guidance to researchers and practitioners on the design of explainable AI systems.
We conclude that explainable modelling can contribute to trustworthy AI, but the benefits of explainability still need to be proven in practice.
arXiv Detail & Related papers (2020-07-31T09:08:27Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.