Explanation Ontology in Action: A Clinical Use-Case
- URL: http://arxiv.org/abs/2010.01478v1
- Date: Sun, 4 Oct 2020 03:52:39 GMT
- Title: Explanation Ontology in Action: A Clinical Use-Case
- Authors: Shruthi Chari, Oshani Seneviratne, Daniel M. Gruen, Morgan A. Foreman,
Amar K. Das, Deborah L. McGuinness
- Abstract summary: We provide step-by-step guidance for system designers to utilize our Explanation Ontology.
We also present a detailed example of applying this guidance in a clinical setting.
- Score: 3.1783442097247345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We addressed the problem of a lack of semantic representation for
user-centric explanations and different explanation types in our Explanation
Ontology (https://purl.org/heals/eo). Such a representation is increasingly
necessary as explainability has become an important problem in Artificial
Intelligence with the emergence of complex methods and an uptake in
high-precision and user-facing settings. In this submission, we provide
step-by-step guidance for system designers to utilize our ontology, introduced
in our resource track paper, to plan and model for explanations during the
design of their Artificial Intelligence systems. We also provide a detailed
example with our utilization of this guidance in a clinical setting.
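As a rough illustration of how a system designer might start working with the ontology programmatically, the sketch below loads it with rdflib and lists the classes it declares, which should include the literature-derived explanation types. This is not code from the paper: the assumption that the PURL https://purl.org/heals/eo resolves to an RDF/XML serialization, and the decision to enumerate all OWL classes rather than a specific branch, are illustrative choices only.

```python
# Minimal sketch (not from the paper): inspecting the Explanation Ontology with rdflib.
# Assumes the PURL resolves to an OWL/RDF serialization; adjust `format` if it does not.
from rdflib import Graph
from rdflib.namespace import OWL, RDFS

g = Graph()
g.parse("https://purl.org/heals/eo", format="xml")  # assumption: RDF/XML is served

# List every declared class and its label; in this ontology the labels should
# cover the explanation types a designer can plan for.
query = """
SELECT ?cls ?label WHERE {
    ?cls a owl:Class .
    OPTIONAL { ?cls rdfs:label ?label }
}
"""
for cls, label in g.query(query, initNs={"owl": OWL, "rdfs": RDFS}):
    print(cls, label)
```

From such an inventory, a designer could then follow the paper's step-by-step guidance to select the explanation types relevant to their use case and model the associated system and user attributes.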
Related papers
- An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems [0.3480973072524161]
Recent research in explainability has focused on explaining the workings of AI models, i.e., model explainability.
This thesis seeks to bridge some gaps between model and user-centered explainability.
arXiv Detail & Related papers (2024-10-23T02:03:49Z)
- IID Relaxation by Logical Expressivity: A Research Agenda for Fitting Logics to Neurosymbolic Requirements [50.57072342894621]
We discuss the benefits of exploiting known data dependencies and distribution constraints for Neurosymbolic use cases.
This opens a new research agenda with general questions about Neurosymbolic background knowledge and the expressivity required of its logic.
arXiv Detail & Related papers (2024-04-30T12:09:53Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling [26.17582232842832]
Explainable AI consists in developing mechanisms allowing for an interaction between decision systems and humans.
This is particularly important in sensitive contexts like in the medical domain.
We propose a use case study, for skin lesion diagnosis, illustrating how it is possible to provide the practitioner with explanations.
arXiv Detail & Related papers (2023-01-18T11:14:42Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach performs significantly better than two state-of-the-art systems with respect to mental models, explanation satisfaction, trust, emotions, and self-efficacy.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Explanation Ontology: A Model of Explanations for User-Centered AI [3.1783442097247345]
Explanations have often been added to an AI system in a non-principled, post-hoc manner.
With greater adoption of these systems and emphasis on user-centric explainability, there is a need for a structured representation that treats explainability as a primary consideration.
We design an explanation ontology to model both the role of explanations, accounting for the system and user attributes in the process, and the range of different literature-derived explanation types.
arXiv Detail & Related papers (2020-10-04T03:53:35Z)
- Directions for Explainable Knowledge-Enabled Systems [3.7250420821969827]
We leverage our survey of explanation literature in Artificial Intelligence and closely related fields to generate a set of explanation types.
We define each type and provide an example question that would motivate the need for this style of explanation.
We believe this set of explanation types will help future system designers in their generation and prioritization of requirements.
arXiv Detail & Related papers (2020-03-17T04:34:29Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.