From Philosophy to Interfaces: an Explanatory Method and a Tool Inspired
by Achinstein's Theory of Explanation
- URL: http://arxiv.org/abs/2109.04171v1
- Date: Thu, 9 Sep 2021 11:10:03 GMT
- Title: From Philosophy to Interfaces: an Explanatory Method and a Tool Inspired
by Achinstein's Theory of Explanation
- Authors: Francesco Sovrano and Fabio Vitali
- Abstract summary: We propose a new method for explanations in Artificial Intelligence (AI).
We show a new approach for the generation of interactive explanations based on a pipeline of AI algorithms.
We tested our hypothesis on a well-known XAI-powered credit approval system by IBM.
- Score: 3.04585143845864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new method for explanations in Artificial Intelligence (AI) and
a tool to test its expressive power within a user interface. In order to bridge
the gap between philosophy and human-computer interfaces, we show a new
approach for the generation of interactive explanations based on a
sophisticated pipeline of AI algorithms for structuring natural language
documents into knowledge graphs and answering questions effectively and
satisfactorily. Among the mainstream philosophical theories of explanation we
identified the one that in our view is most readily applicable as a practical model
for user-centric tools: Achinstein's Theory of Explanation. With this work we
aim to show that the theory proposed by Achinstein can indeed be adapted and
implemented in a concrete software application, as an interactive
question-answering process. To this end we found a way to handle the generic
(archetypal) questions that implicitly characterise an explanatory process as
preliminary overviews rather than as answers to explicit questions, as commonly
understood. To show the expressive power of this approach we designed and
implemented a pipeline of AI algorithms for the generation of interactive
explanations in the form of overviews, focusing on this aspect of
explanations rather than on existing interfaces and presentation logic layers
for question answering. We tested our hypothesis on a well-known XAI-powered
credit approval system by IBM, comparing CEM, a static explanatory tool for
post-hoc explanations, with an extension we developed that adds interactive
explanations based on our model. The results of the user study, involving more
than 100 participants, showed that our proposed solution produced a
statistically significant improvement in effectiveness (U=931.0, p=0.036) over
the baseline, thus giving evidence in favour of our theory.
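The effectiveness result above (U=931.0, p=0.036) is the kind of outcome produced by a Mann-Whitney U test comparing the two participant groups. A minimal sketch of how such a comparison could be run, with hypothetical per-participant scores standing in for the actual study data:

```python
# Minimal sketch of a Mann-Whitney U comparison between two user groups.
# The scores below are hypothetical placeholders, not the study's data.
from scipy.stats import mannwhitneyu

baseline_scores = [0.42, 0.55, 0.38, 0.61, 0.47, 0.50]     # CEM-only condition
interactive_scores = [0.58, 0.66, 0.49, 0.72, 0.63, 0.60]  # CEM + interactive explanations

# Two-sided test of whether the two samples come from the same distribution.
u_statistic, p_value = mannwhitneyu(baseline_scores, interactive_scores,
                                    alternative="two-sided")
print(f"U = {u_statistic}, p = {p_value:.3f}")
```

A p-value below 0.05, as in the reported result, is conventionally read as a statistically significant difference between the baseline and the interactive condition.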
Related papers
- An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems [0.3480973072524161]
Recent research in explainability has focused on explaining the workings of AI models, i.e., model explainability.
This thesis seeks to bridge some gaps between model and user-centered explainability.
arXiv Detail & Related papers (2024-10-23T02:03:49Z)
- Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework for integrating and learning structured reasoning in AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Large Language Models as Analogical Reasoners [155.9617224350088]
Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks.
We introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models.
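Analogical prompting, as described here, asks the model to first self-generate a few relevant exemplar problems and solutions before solving the target problem, rather than relying on hand-written exemplars. A minimal sketch of how such a prompt could be assembled (the problem text and the `call_llm` helper are hypothetical):

```python
# Minimal sketch of an analogical-style prompt: the model is asked to
# self-generate relevant exemplars before solving the target problem.
# `call_llm` is a hypothetical stand-in for any chat/completions API.

def build_analogical_prompt(problem: str, n_exemplars: int = 3) -> str:
    return (
        f"Problem: {problem}\n\n"
        f"First, recall {n_exemplars} relevant problems and briefly describe "
        "how each was solved.\n"
        "Then, solve the problem above step by step, drawing on those examples."
    )

prompt = build_analogical_prompt(
    "A train travels 120 km in 1.5 hours. What is its average speed?"
)
# answer = call_llm(prompt)  # hypothetical model call
print(prompt)
```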
arXiv Detail & Related papers (2023-10-03T00:57:26Z)
- ASQ-IT: Interactive Explanations for Reinforcement-Learning Agents [7.9603223299524535]
We present ASQ-IT -- an interactive tool that presents video clips of the agent acting in its environment based on queries given by the user that describe temporal properties of behaviors of interest.
Our approach is based on formal methods: queries in ASQ-IT's user interface map to a fragment of Linear Temporal Logic over finite traces (LTLf), which we developed, and our algorithm for query processing is based on automata theory.
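ASQ-IT's actual query language and automata-based processing are not reproduced here; purely as an illustration of what a temporal property over a finite trace means, the sketch below checks an "eventually p, then later q" pattern against a hypothetical trace of agent states:

```python
# Illustrative sketch (not ASQ-IT's implementation): checking a simple
# LTLf-style property over a finite trace of agent states.
# Property: eventually `p` holds, and at some later step `q` holds.

def eventually_then(trace, p, q):
    """Return True if some state satisfies p and a later state satisfies q."""
    for i, state in enumerate(trace):
        if p(state):
            return any(q(later) for later in trace[i + 1:])
    return False

# Hypothetical trace from a taxi-like environment.
trace = [
    {"passenger_on_board": False, "at_destination": False},
    {"passenger_on_board": True,  "at_destination": False},
    {"passenger_on_board": True,  "at_destination": True},
]

print(eventually_then(trace,
                      p=lambda s: s["passenger_on_board"],
                      q=lambda s: s["at_destination"]))  # True
```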
arXiv Detail & Related papers (2023-01-24T11:57:37Z)
- A Theoretical Framework for AI Models Explainability with Application in Biomedicine [3.5742391373143474]
We propose a novel definition of explanation that is a synthesis of what can be found in the literature.
We fit explanations into the properties of faithfulness (i.e., the explanation being a true description of the model's inner workings and decision-making process) and plausibility (i.e., how convincing the explanation looks to the user).
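The paper's formal definitions are not reproduced here; one common way to operationalise faithfulness, shown below purely as an illustrative assumption, is a deletion test: remove the features an explanation marks as most important and measure how much the model's output changes.

```python
# Illustrative deletion-style faithfulness check (an assumption, not this
# paper's framework): a faithful attribution should identify features whose
# removal changes the model's output the most.
import numpy as np

def deletion_drop(model, x, attribution, k):
    """Drop in model output after zeroing the k features ranked most important."""
    top_k = np.argsort(-np.abs(attribution))[:k]
    x_ablated = x.copy()
    x_ablated[top_k] = 0.0
    return model(x) - model(x_ablated)

# Toy linear "model" and a matching attribution (gradient*input style).
weights = np.array([0.5, 2.0, 0.1, 1.5])
model = lambda x: float(weights @ x)
x = np.array([1.0, 1.0, 1.0, 1.0])
attribution = weights * x

print(deletion_drop(model, x, attribution, k=2))  # removes the two largest-weight features
```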
arXiv Detail & Related papers (2022-12-29T20:05:26Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Generating User-Centred Explanations via Illocutionary Question Answering: From Philosophy to Interfaces [3.04585143845864]
We show a new approach for the generation of interactive explanations based on a sophisticated pipeline of AI algorithms.
Our contribution is an approach to frame illocution in a computer-friendly way, to achieve user-centrality with statistical question answering.
We tested our hypotheses with a user-study involving more than 60 participants, on two XAI-based systems.
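The actual pipeline behind this work is not reproduced here; as a rough illustration of what statistical question answering over a document can look like, the sketch below ranks candidate sentences against a question by TF-IDF cosine similarity (the sentences and question are made-up placeholders):

```python
# Rough sketch of retrieval-style ("statistical") question answering:
# rank candidate sentences of a document by similarity to the question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The credit approval model rejects applications with a high debt ratio.",
    "CEM produces post-hoc contrastive explanations for individual decisions.",
    "Applicants can improve their score by reducing outstanding debt.",
]
question = "Why was my credit application rejected?"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(sentences)
question_vec = vectorizer.transform([question])

scores = cosine_similarity(question_vec, doc_matrix)[0]
best = scores.argmax()
print(sentences[best], scores[best])
```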
arXiv Detail & Related papers (2021-10-02T09:06:36Z)
- CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models [84.32751938563426]
We propose a new explainable AI (XAI) framework for explaining decisions made by a deep convolutional neural network (CNN).
In contrast to the current methods in XAI that generate explanations as a single shot response, we pose explanation as an iterative communication process.
Our framework generates a sequence of explanations in a dialog by mediating the differences between the minds of the machine and the human user.
arXiv Detail & Related papers (2021-09-03T09:46:20Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.