Directive Explanations for Actionable Explainability in Machine Learning Applications
- URL: http://arxiv.org/abs/2102.02671v1
- Date: Wed, 3 Feb 2021 01:46:55 GMT
- Title: Directive Explanations for Actionable Explainability in Machine Learning Applications
- Authors: Ronal Singh, Paul Dourish, Piers Howe, Tim Miller, Liz Sonenberg, Eduardo Velloso and Frank Vetere
- Abstract summary: This paper investigates the prospects of using directive explanations to assist people in achieving recourse of machine learning decisions.
We present the results of an online study investigating people's perception of directive explanations.
We propose a conceptual model to generate such explanations.
- Score: 21.436319317774768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates the prospects of using directive explanations to
assist people in achieving recourse of machine learning decisions. Directive
explanations list which specific actions an individual needs to take to achieve
their desired outcome. If a machine learning model makes a decision that is
detrimental to an individual (e.g. denying a loan application), then it needs
to both explain why it made that decision and also explain how the individual
could obtain their desired outcome (if possible). At present, this is often
done using counterfactual explanations, but such explanations generally do not
tell individuals how to act. We assert that counterfactual explanations can be
improved by explicitly providing people with actions they could use to achieve
their desired goal. This paper makes two contributions. First, we present the
results of an online study investigating people's perception of directive
explanations. Second, we propose a conceptual model to generate such
explanations. Our online study showed a significant preference for directive
explanations ($p<0.001$). However, the participants' preferred explanation type
was affected by multiple factors, such as individual preferences, social
factors, and the feasibility of the directives. Our findings highlight the need
for a human-centred and context-specific approach for creating directive
explanations.
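
To make the contrast concrete, below is a minimal, hypothetical Python sketch of the two explanation styles for a loan-denial case. The decision rule, feature names, thresholds, and action mapping are all invented for illustration; this is not the conceptual model proposed in the paper.

```python
# Hypothetical sketch contrasting counterfactual and directive explanations.
# The classifier, features, and action mapping are invented for illustration
# and are not taken from the paper.

def loan_approved(applicant):
    """Toy decision rule standing in for a trained classifier."""
    return applicant["income"] >= 50_000 and applicant["debt"] <= 10_000

# Hypothetical mapping from feature changes to feasible actions.
ACTIONS = {
    "income": "ask for a raise or take on part-time work",
    "debt": "pay down the outstanding credit card balance",
}

def counterfactual_explanation(applicant, target):
    """The 'what': which feature values would need to change."""
    return [f"{k} would need to be {v} (currently {applicant[k]})"
            for k, v in target.items() if applicant[k] != v]

def directive_explanation(applicant, target):
    """The 'how': attach a concrete action to each required change."""
    return [f"Move {k} from {applicant[k]} to {v}: {ACTIONS[k]}"
            for k, v in target.items() if applicant[k] != v]

applicant = {"income": 42_000, "debt": 14_000}
target = {"income": 50_000, "debt": 10_000}  # a nearby approved profile

if not loan_approved(applicant):
    for line in counterfactual_explanation(applicant, target):
        print("Counterfactual:", line)
    for line in directive_explanation(applicant, target):
        print("Directive:", line)
```

The directive variant couples each counterfactual feature change to an action the individual could actually take, which is exactly where the feasibility and social factors reported in the study come into play.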
Related papers
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential for being misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Complementary Explanations for Effective In-Context Learning [77.83124315634386]
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts.
This work aims to better understand the mechanisms by which explanations are used for in-context learning.
arXiv Detail & Related papers (2022-11-25T04:40:47Z)
- Learning to Scaffold: Optimizing Model Explanations for Teaching [74.25464914078826]
We train models on three natural language processing and computer vision tasks.
We find that students trained with explanations extracted with our framework are able to simulate the teacher significantly more effectively than ones produced with previous methods.
arXiv Detail & Related papers (2022-04-22T16:43:39Z)
- Causal Explanations and XAI [8.909115457491522]
An important goal of Explainable Artificial Intelligence (XAI) is to compensate for mismatches by offering explanations.
I take a step further by formally defining the causal notions of sufficient explanations and counterfactual explanations.
I also touch upon the significance of this work for fairness in AI by showing how actual causation can be used to improve the idea of path-specific counterfactual fairness.
arXiv Detail & Related papers (2022-01-31T12:32:10Z)
- Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z)
- Are Training Resources Insufficient? Predict First Then Explain! [54.184609286094044]
We argue that the predict-then-explain (PtE) architecture is a more efficient approach from a modelling perspective.
We show that the PtE structure is the most data-efficient approach when explanation data are lacking.
arXiv Detail & Related papers (2021-08-29T07:04:50Z)
- Evaluating Explanations: How much do explanations from the teacher aid students? [103.05037537415811]
We formalize the value of explanations using a student-teacher paradigm that measures the extent to which explanations improve student models in learning.
Unlike many prior proposals to evaluate explanations, our approach cannot be easily gamed, enabling principled, scalable, and automatic evaluation of attributions.
arXiv Detail & Related papers (2020-12-01T23:40:21Z)
- Sequential Explanations with Mental Model-Based Policies [20.64968620536829]
We apply a reinforcement learning framework to provide explanations based on the explainee's mental model.
We conduct novel online human experiments where explanations are selected and presented to participants.
Our results suggest that mental model-based policies may increase interpretability over multiple sequential explanations.
arXiv Detail & Related papers (2020-07-17T14:43:46Z)
- Order Matters: Generating Progressive Explanations for Planning Tasks in Human-Robot Teaming [11.35869940310993]
We aim to investigate effects during explanation generation when an explanation is broken into multiple parts that are communicated sequentially.
We first evaluate our approach on a scavenger-hunt domain to demonstrate that it effectively captures humans' preferences.
Results confirmed our hypothesis that understanding an explanation is a dynamic process.
arXiv Detail & Related papers (2020-04-16T00:17:02Z)
- Algorithmic Recourse: from Counterfactual Explanations to Interventions [16.9979815165902]
We argue that counterfactual explanations inform an individual where they need to get to, but not how to get there.
Instead, we propose a paradigm shift from recourse via nearest counterfactual explanations to recourse through minimal interventions (a rough sketch of this idea follows the list).
arXiv Detail & Related papers (2020-02-14T22:49:42Z)
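
As a rough illustration of the recourse-through-minimal-interventions idea from the last entry above: in a causal model, acting on one feature propagates to others, so a single intervention can flip the decision where a nearest-counterfactual search would suggest editing several features independently. The structural equations, thresholds, and numbers below are all invented for the example.

```python
# Hypothetical sketch of recourse via minimal interventions. Intervening
# on income propagates through a toy structural model and also lowers the
# debt ratio, so one action achieves recourse where a nearest-counterfactual
# search would treat income and debt ratio as independently editable.

def propagate(income_delta):
    """Toy structural model: raising income also lowers the debt ratio."""
    income = 42_000 + income_delta
    debt_ratio = 14_000 / income  # debt is fixed, so the ratio falls
    return income, debt_ratio

def approved(income, debt_ratio):
    """Invented decision rule standing in for a trained classifier."""
    return income >= 48_000 and debt_ratio <= 0.30

# Search over a single intervention (on income) rather than editing
# income and debt ratio as if they were independent features.
for delta in range(0, 20_001, 1_000):
    income, ratio = propagate(delta)
    if approved(income, ratio):
        print(f"Minimal intervention: raise income by {delta:,} "
              f"(debt ratio falls to {ratio:.2f} as a side effect)")
        break
```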