Generating User-Centred Explanations via Illocutionary Question
Answering: From Philosophy to Interfaces
- URL: http://arxiv.org/abs/2110.00762v1
- Date: Sat, 2 Oct 2021 09:06:36 GMT
- Title: Generating User-Centred Explanations via Illocutionary Question
Answering: From Philosophy to Interfaces
- Authors: Francesco Sovrano, Fabio Vitali
- Abstract summary: We show a new approach for the generation of interactive explanations based on a sophisticated pipeline of AI algorithms.
Our contribution is an approach to frame illocution in a computer-friendly way, to achieve user-centrality with statistical question answering.
We tested our hypotheses with a user-study involving more than 60 participants, on two XAI-based systems.
- Score: 3.04585143845864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new method for generating explanations with Artificial
Intelligence (AI) and a tool to test its expressive power within a user
interface. In order to bridge the gap between philosophy and human-computer
interfaces, we show a new approach for the generation of interactive
explanations based on a sophisticated pipeline of AI algorithms for structuring
natural language documents into knowledge graphs, and answering questions
effectively and satisfactorily. With this work we aim to prove that the
philosophical theory of explanations presented by Achinstein can actually be
adapted and implemented in a concrete software application, as an
interactive and illocutionary process of answering questions. Specifically, our
contribution is an approach to frame illocution in a computer-friendly way, to
achieve user-centrality with statistical question answering. We frame
illocution, within an explanatory process, as the mechanism responsible for
anticipating the needs of the explainee in the form of unposed, implicit,
archetypal questions, thereby improving the user-centrality of the underlying
explanatory process. More precisely, we hypothesise that given an arbitrary
explanatory process, increasing its goal-orientedness and degree of illocution
results in the generation of more usable (as per ISO 9241-210) explanations. We
tested our hypotheses with a user-study involving more than 60 participants, on
two XAI-based systems, one for credit approval (finance) and one for heart
disease prediction (healthcare). The results showed that our proposed solution
produced a statistically significant improvement (i.e., with a p-value lower
than 0.05) in effectiveness. This, combined with a visible alignment between
the increments in effectiveness and satisfaction, suggests that our
understanding of illocution may be correct, giving evidence in favour of our
theory.
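To make the idea concrete, the following Python sketch illustrates illocution as the anticipation of unposed, archetypal questions answered via statistical (embedding-based) question answering over an explanation corpus. The archetype list, the sentence-transformers model, and the question-expansion scheme are illustrative assumptions for this sketch, not the authors' actual pipeline.

```python
# Minimal sketch: anticipate archetypal questions about an aspect of the
# explanandum and retrieve the most pertinent corpus sentences for each.
# Assumptions (not from the paper): the archetype list, the embedding model,
# and the naive question-expansion scheme below.
from sentence_transformers import SentenceTransformer, util

ARCHETYPAL_QUESTIONS = ["Why", "What", "How", "When", "Who", "What for"]

def explain(aspect: str, corpus_sentences: list[str], top_k: int = 3) -> dict:
    """For each archetype, return the corpus sentences most similar to the
    expanded (implicit) question about `aspect`."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    corpus_emb = model.encode(corpus_sentences, convert_to_tensor=True)
    explanation = {}
    for archetype in ARCHETYPAL_QUESTIONS:
        # Expand the unposed archetype into an explicit question,
        # e.g. "Why" + "the credit denial" -> "Why the credit denial?"
        question = f"{archetype} {aspect}?"
        q_emb = model.encode(question, convert_to_tensor=True)
        scores = util.cos_sim(q_emb, corpus_emb)[0]
        best = scores.topk(k=min(top_k, len(corpus_sentences)))
        explanation[archetype] = [corpus_sentences[int(i)] for i in best.indices]
    return explanation

# Usage: explain("the credit approval decision", sentences_from_documents)
```

In this reading, increasing the "degree of illocution" roughly corresponds to covering more archetypes with pertinent answers, so that the explainee's likely questions are addressed before they are asked.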
Related papers
- An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems [0.3480973072524161]
Recent research in explainability has focused on explaining the workings of AI models, i.e., model explainability.
This thesis seeks to bridge some gaps between model and user-centered explainability.
arXiv Detail & Related papers (2024-10-23T02:03:49Z) - From Pixels to Words: Leveraging Explainability in Face Recognition through Interactive Natural Language Processing [2.7568948557193287]
Face Recognition (FR) has advanced significantly with the development of deep learning, achieving high accuracy in several applications.
The lack of interpretability of these systems raises concerns about their accountability, fairness, and reliability.
We propose an interactive framework to enhance the explainability of FR models by combining model-agnostic Explainable Artificial Intelligence (XAI) and Natural Language Processing (NLP) techniques.
arXiv Detail & Related papers (2024-09-24T13:40:39Z) - Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs [58.620269228776294]
We propose a task-agnostic framework for resolving ambiguity by asking users clarifying questions.
We evaluate systems across three NLP applications: question answering, machine translation and natural language inference.
We find that intent-sim is robust, demonstrating improvements across a wide range of NLP tasks and LMs.
arXiv Detail & Related papers (2023-11-16T00:18:50Z) - Neural Amortized Inference for Nested Multi-agent Reasoning [54.39127942041582]
We propose a novel approach to bridge the gap between human-like inference capabilities and computational limitations.
We evaluate our method in two challenging multi-agent interaction domains.
arXiv Detail & Related papers (2023-08-21T22:40:36Z) - Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement [50.62461749446111]
Self-Polish (SP) is a novel method that facilitates the model's reasoning by guiding it to progressively refine the given problems to be more comprehensible and solvable.
SP is orthogonal to all other prompting methods on the answer/reasoning side, such as CoT, allowing for seamless integration with state-of-the-art techniques for further improvement.
arXiv Detail & Related papers (2023-05-23T19:58:30Z) - ASQ-IT: Interactive Explanations for Reinforcement-Learning Agents [7.9603223299524535]
We present ASQ-IT -- an interactive tool that presents video clips of the agent acting in its environment based on queries given by the user that describe temporal properties of behaviors of interest.
Our approach is based on formal methods: queries in ASQ-IT's user interface map to a fragment of Linear Temporal Logic over finite traces (LTLf), which we developed, and our algorithm for query processing is based on automata theory.
arXiv Detail & Related papers (2023-01-24T11:57:37Z) - From Philosophy to Interfaces: an Explanatory Method and a Tool Inspired
by Achinstein's Theory of Explanation [3.04585143845864]
We propose a new method for explanations in Artificial Intelligence (AI).
We show a new approach for the generation of interactive explanations based on a pipeline of AI algorithms.
We tested our hypothesis on a well-known XAI-powered credit approval system by IBM.
arXiv Detail & Related papers (2021-09-09T11:10:03Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
Research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)