Clash of the Explainers: Argumentation for Context-Appropriate
Explanations
- URL: http://arxiv.org/abs/2312.07635v1
- Date: Tue, 12 Dec 2023 09:52:30 GMT
- Title: Clash of the Explainers: Argumentation for Context-Appropriate
Explanations
- Authors: Leila Methnani, Virginia Dignum, Andreas Theodorou
- Abstract summary: No single approach is best suited for every context.
For AI explainability to be effective, explanations and how they are presented need to be oriented towards the stakeholder receiving the explanation.
We propose a modular reasoning system consisting of a given mental model of the relevant stakeholder, a reasoner component that solves the argumentation problem generated by a multi-explainer component, and an AI model that is to be explained suitably to the stakeholder of interest.
- Score: 6.8285745209093145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding when and why to apply any given eXplainable Artificial
Intelligence (XAI) technique is not a straightforward task. No single
approach is best suited for every context. This paper aims to address
the challenge of selecting the most appropriate explainer given the context in
which an explanation is required. For AI explainability to be effective,
explanations and how they are presented need to be oriented towards the
stakeholder receiving the explanation. If -- in general -- no single
explanation technique surpasses the rest, then reasoning over the available
methods is required in order to select one that is context-appropriate. Due to
the transparency they afford, we propose employing argumentation techniques to
reach an agreement over the most suitable explainers from a given set of
possible explainers.
In this paper, we propose a modular reasoning system consisting of a given
mental model of the relevant stakeholder, a reasoner component that solves the
argumentation problem generated by a multi-explainer component, and an AI model
that is to be explained suitably to the stakeholder of interest. By formalising
supporting premises -- and inferences -- we can map stakeholder characteristics
to those of explanation techniques. This allows us to reason over the
techniques and prioritise the best one for the given context, while also
offering transparency into the selection decision.
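The paper does not include code, but the selection mechanism it describes maps naturally onto a Dung-style abstract argumentation framework. Below is a minimal, hypothetical sketch: the stakeholder mental model, the explainer properties, and the premise rules are illustrative assumptions (not the paper's actual components), the attack relation stands in for the formalised premises, and the grounded extension plays the role of the reasoner that prioritises a context-appropriate explainer.

```python
# Minimal sketch of argumentation-based explainer selection, assuming a
# Dung-style abstract argumentation framework. STAKEHOLDER, EXPLAINERS,
# and the premise rules are illustrative stand-ins, not the paper's API.
from itertools import permutations

# Hypothetical mental model of the stakeholder receiving the explanation.
STAKEHOLDER = {"expertise": "lay", "needs_local_explanation": True}

# Hypothetical multi-explainer component: candidate techniques and the
# properties the premises refer to.
EXPLAINERS = {
    "LIME":            {"scope": "local",  "audience": "lay"},
    "SHAP":            {"scope": "local",  "audience": "expert"},
    "GlobalSurrogate": {"scope": "global", "audience": "expert"},
}

def attacks(a: str, b: str) -> bool:
    """Premise-derived attack relation: a attacks b if a satisfies a
    stakeholder requirement on a dimension where b does not."""
    pa, pb = EXPLAINERS[a], EXPLAINERS[b]
    if (STAKEHOLDER["needs_local_explanation"]
            and pa["scope"] == "local" and pb["scope"] != "local"):
        return True
    return pa["audience"] == STAKEHOLDER["expertise"] != pb["audience"]

def grounded_extension(args, attack_pairs):
    """Reasoner component: iteratively accept unattacked arguments and
    discard the arguments they defeat (grounded semantics)."""
    accepted, remaining, pairs = set(), set(args), set(attack_pairs)
    while True:
        unattacked = {a for a in remaining
                      if not any((b, a) in pairs for b in remaining)}
        if not unattacked:
            return accepted
        accepted |= unattacked
        defeated = {b for b in remaining
                    if any((a, b) in pairs for a in unattacked)}
        remaining -= unattacked | defeated

attack_pairs = [(a, b) for a, b in permutations(EXPLAINERS, 2) if attacks(a, b)]
print(grounded_extension(EXPLAINERS, attack_pairs))  # -> {'LIME'}
```

The transparency the paper emphasises comes from the attack relation itself: each rejected explainer can be traced back to the specific premise that defeated it, which is what makes the selection decision inspectable by the stakeholder.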
Related papers
- HOP, UNION, GENERATE: Explainable Multi-hop Reasoning without Rationale
Supervision [118.0818807474809]
This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision.
Our approach performs multi-hop reasoning by explicitly modeling rationales as sets, enabling the model to capture interactions between documents and sentences within a document.
arXiv Detail & Related papers (2023-05-23T16:53:49Z) - In Search of Verifiability: Explanations Rarely Enable Complementary
Performance in AI-Advised Decision Making [25.18203172421461]
We argue explanations are only useful to the extent that they allow a human decision maker to verify the correctness of an AI's prediction.
We also compare the objective of complementary performance with that of appropriate reliance, decomposing the latter into the notions of outcome-graded and strategy-graded reliance.
arXiv Detail & Related papers (2023-05-12T18:28:04Z) - Disagreement amongst counterfactual explanations: How transparency can
be deceptive [0.0]
Counterfactual explanations are increasingly used as an Explainable Artificial Intelligence technique.
Different algorithms can produce divergent explanations for the same instance.
Ethical issues arise when malicious agents use this diversity to fairwash an unfair machine learning model.
arXiv Detail & Related papers (2023-04-25T09:15:37Z) - Explanation Selection Using Unlabeled Data for Chain-of-Thought
Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a black-box fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z) - Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z) - Making Things Explainable vs Explaining: Requirements and Challenges
under the GDPR [2.578242050187029]
ExplanatorY AI (YAI) builds over XAI with the goal of collecting and organizing explainable information.
We recast the problem of generating explanations for Automated Decision-Making systems (ADMs) as the identification of an appropriate path over an explanatory space.
arXiv Detail & Related papers (2021-10-02T08:48:47Z) - Prompting Contrastive Explanations for Commonsense Reasoning Tasks [74.7346558082693]
Large pretrained language models (PLMs) can achieve near-human performance on commonsense reasoning tasks.
We show how to use these same models to generate human-interpretable evidence.
arXiv Detail & Related papers (2021-06-12T17:06:13Z) - Discrete Reasoning Templates for Natural Language Understanding [79.07883990966077]
We present an approach that reasons about complex questions by decomposing them into simpler subquestions.
We derive the final answer according to instructions in a predefined reasoning template.
We show that our approach is competitive with the state-of-the-art while being interpretable and requiring little supervision.
arXiv Detail & Related papers (2021-04-05T18:56:56Z) - Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
arXiv Detail & Related papers (2021-03-02T00:36:45Z) - Altruist: Argumentative Explanations through Local Interpretations of
Predictive Models [10.342433824178825]
Existing explanation techniques are often not comprehensible to the end user.
We introduce a preliminary meta-explanation methodology that identifies the truthful parts of feature-importance-oriented interpretations.
Experimentation strongly indicates that an ensemble of multiple interpretation techniques yields considerably more truthful explanations.
arXiv Detail & Related papers (2020-10-15T10:36:48Z) - Algorithmic Recourse: from Counterfactual Explanations to Interventions [16.9979815165902]
We argue that counterfactual explanations inform an individual where they need to get to, but not how to get there.
Instead, we propose a shift of paradigm from recourse via nearest counterfactual explanations to recourse through minimal interventions.
arXiv Detail & Related papers (2020-02-14T22:49:42Z)