Evaluation of Human-Understandability of Global Model Explanations using
Decision Tree
- URL: http://arxiv.org/abs/2309.09917v1
- Date: Mon, 18 Sep 2023 16:30:14 GMT
- Title: Evaluation of Human-Understandability of Global Model Explanations using
Decision Tree
- Authors: Adarsa Sivaprasad, Ehud Reiter, Nava Tintarev and Nir Oren
- Abstract summary: We generate model explanations that are narrative, patient-specific and global.
We find a strong individual preference for a specific type of explanation.
This guides the design of health informatics systems that are both trustworthy and actionable.
- Score: 8.263545324859969
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In explainable artificial intelligence (XAI) research, the predominant focus
has been on interpreting models for experts and practitioners. Model agnostic
and local explanation approaches are deemed interpretable and sufficient in
many applications. However, in domains like healthcare, where end users are
patients without AI or domain expertise, there is an urgent need for model
explanations that are more comprehensible and instil trust in the model's
operations. We hypothesise that generating model explanations that are
narrative, patient-specific and global (holistic of the model) would enable
better understandability and support decision-making. We test this using a
decision tree model to generate both local and global explanations for patients
identified as having a high risk of coronary heart disease. These explanations
are presented to non-expert users. We find a strong individual preference for a
specific type of explanation. The majority of participants prefer global
explanations, while a smaller group prefers local explanations. A task-based
evaluation of these participants' mental models provides valuable feedback for
enhancing narrative global explanations. This, in turn, guides the design of
health informatics systems that are both trustworthy and actionable.
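To make the local-versus-global distinction concrete, the sketch below (not the authors' code) trains a scikit-learn decision tree on synthetic data with hypothetical risk-factor names: the full tree exported as rules stands in for a global explanation, while the decision path followed by a single patient stands in for a local one.
```python
# Minimal sketch of local vs. global decision tree explanations.
# Assumptions: scikit-learn as the toolkit, synthetic data and made-up
# feature names in place of the coronary heart disease data used in the paper.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["age", "systolic_bp", "cholesterol", "smoker", "bmi"]  # hypothetical
X, y = make_classification(n_samples=500, n_features=5, n_informative=4,
                           n_redundant=0, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Global explanation: the whole model rendered as human-readable rules.
print("Global explanation (entire tree):")
print(export_text(clf, feature_names=feature_names))

# Local explanation: only the conditions on the path taken by one patient.
patient = X[:1]
leaf_id = clf.apply(patient)[0]
print("Local explanation (decision path for one patient):")
for node_id in clf.decision_path(patient).indices:
    if node_id == leaf_id:
        pred = clf.classes_[clf.tree_.value[node_id].argmax()]
        print(f"  -> predicted class: {pred}")
        continue
    feat_idx = clf.tree_.feature[node_id]
    threshold = clf.tree_.threshold[node_id]
    sign = "<=" if patient[0, feat_idx] <= threshold else ">"
    print(f"  {feature_names[feat_idx]} {sign} {threshold:.2f}")
```
In the study itself these structures are further rendered as narrative text for non-expert users; the sketch only illustrates where local and global views of the same tree come from.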
Related papers
- Finding Uncommon Ground: A Human-Centered Model for Extrospective Explanations [17.427385114802753]
AI agents need to focus on individuals and their preferences as well as the context in which the explanations are given.
This paper proposes a personalized approach to explanation, where the agent tailors the information provided to the user based on what is most likely pertinent to them.
arXiv Detail & Related papers (2025-07-29T07:59:54Z)
- XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which underline feature or temporal importance, often require expert knowledge.
Evaluating natural language explanations (NLEs) of forecasts is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z)
- Evaluating the Explainability of Attributes and Prototypes for a Medical Classification Model [0.0]
We evaluate attribute- and prototype-based explanations with the Proto-Caps model.
We can conclude that attribute scores and visual prototypes enhance confidence in the model.
arXiv Detail & Related papers (2024-04-15T16:43:24Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patients' clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Self-explaining Neural Network with Plausible Explanations [2.724141845301679]
We propose a novel, self-explaining neural network for longitudinal in-hospital mortality prediction.
We use domain-knowledge driven Sequential Organ Failure Assessment (SOFA) organ-specific scores as the atomic units of explanation.
Our results provide interesting insights into how each of the SOFA organ scores contributes to mortality at different timesteps within a longitudinal patient trajectory.
arXiv Detail & Related papers (2021-10-09T15:32:17Z)
- Explanatory Pluralism in Explainable AI [0.0]
I chart a taxonomy of types of explanation and the associated XAI methods that can address them.
When we look to expose the inner mechanisms of AI models, we produce Diagnostic-explanations.
When we wish to form stable generalizations of our models, we produce Expectation-explanations.
Finally, when we want to justify the usage of a model, we produce Role-explanations.
arXiv Detail & Related papers (2021-06-26T09:02:06Z)
- Faithful and Plausible Explanations of Medical Code Predictions [12.156363504753244]
Explanations must balance faithfulness to the model's decision-making with their plausibility to a domain expert.
We train a proxy model that mimics the behavior of the trained model and provides fine-grained control over these trade-offs.
We evaluate our approach on the task of assigning ICD codes to clinical notes to demonstrate that explanations from the proxy model are faithful and replicate the trained model behavior.
arXiv Detail & Related papers (2021-04-16T05:13:36Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.