Algorithmic Recourse: from Counterfactual Explanations to Interventions
- URL: http://arxiv.org/abs/2002.06278v4
- Date: Thu, 8 Oct 2020 15:15:33 GMT
- Title: Algorithmic Recourse: from Counterfactual Explanations to Interventions
- Authors: Amir-Hossein Karimi, Bernhard Schölkopf, Isabel Valera
- Abstract summary: We argue that counterfactual explanations inform an individual where they need to get to, but not how to get there.
Instead, we propose a shift of paradigm from recourse via nearest counterfactual explanations to recourse through minimal interventions.
- Score: 16.9979815165902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning is increasingly used to inform consequential
decision-making (e.g., pre-trial bail and loan approval), it becomes important
to explain how the system arrived at its decision, and also suggest actions to
achieve a favorable decision. Counterfactual explanations -- "how the world
would have (had) to be different for a desirable outcome to occur" -- aim to
satisfy these criteria. Existing works have primarily focused on designing
algorithms to obtain counterfactual explanations for a wide range of settings.
However, one of the main objectives of "explanations as a means to help a
data-subject act rather than merely understand" has been overlooked. In
layman's terms, counterfactual explanations inform an individual where they
need to get to, but not how to get there. In this work, we rely on causal
reasoning to caution against the use of counterfactual explanations as a
recommendable set of actions for recourse. Instead, we propose a shift of
paradigm from recourse via nearest counterfactual explanations to recourse
through minimal interventions, moving the focus from explanations to
recommendations. Finally, we provide the reader with an extensive discussion on
how to realistically achieve recourse beyond structural interventions.
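The abstract's core distinction lends itself to a small worked example. Below is a minimal sketch (not the authors' implementation) contrasting a nearest counterfactual explanation with recourse through a minimal intervention in a toy two-variable structural causal model: the explanation names a target feature state, while the intervention search respects the causal link from education to income, so acting on the upstream variable propagates downstream. The SCM, classifier, variable names, and all numbers are illustrative assumptions.
```python
# Minimal sketch, assuming a toy linear SCM; not the paper's code.
import numpy as np

# Toy SCM: x1 = years of education, x2 = income; income depends on education.
# Structural equations: x1 := u1,  x2 := 4*x1 + u2 (coefficients are made up).
def scm_forward(u1, u2):
    x1 = u1
    x2 = 4.0 * x1 + u2
    return np.array([x1, x2])

# Fixed linear classifier h(x) = w.x + b; favorable decision iff h(x) >= 0
# (here: approve iff income >= 80).
w, b = np.array([0.0, 1.0]), -80.0
h = lambda x: float(w @ x + b)

# Factual individual, then abduction: recover the exogenous noise terms.
x_f = scm_forward(u1=15.0, u2=5.0)         # x_f = [15, 65] -> denied
u1_f, u2_f = x_f[0], x_f[1] - 4.0 * x_f[0]

# (a) Nearest counterfactual explanation: the smallest *independent* feature
# change that crosses the boundary. It says "income should be 80" but is
# silent on how to get there.
x_ce = x_f.copy()
x_ce[1] = 80.0

# (b) Recourse through a minimal intervention: search over do(x1 := theta),
# push the change through the SCM with the noise held fixed (abduction ->
# action -> prediction), and keep the cheapest intervention that flips h.
best = None
for theta in np.linspace(x_f[0], x_f[0] + 10.0, 1001):
    x_cf = scm_forward(u1=theta, u2=u2_f)  # income updates via the SCM
    cost = abs(theta - x_f[0])             # illustrative intervention cost
    if h(x_cf) >= 0 and (best is None or cost < best[0]):
        best = (cost, theta, x_cf)

cost, theta, x_cf = best
print(f"factual     x = {x_f},  h = {h(x_f):+.1f}  (denied)")
print(f"nearest CE  x = {x_ce}  (where to go, not how)")
print(f"do(x1:={theta:.2f}) -> x = {x_cf},  h = {h(x_cf):+.1f}  (approved)")
```
In this toy model the counterfactual explanation asks for an income of 80 without saying how to achieve it, whereas the intervention do(x1 := 18.75) realizes a favorable outcome through the structural equation; in richer SCMs the two can also disagree on which features should be acted upon at all.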
Related papers
- Explanation Hacking: The perils of algorithmic recourse [2.967024581564439]
We argue that recourse explanations face several conceptual pitfalls and can lead to problematic explanation hacking.
As an alternative, we advocate that explanations of AI decisions should aim at understanding.
arXiv Detail & Related papers (2024-03-22T12:49:28Z)
- Clash of the Explainers: Argumentation for Context-Appropriate Explanations [6.8285745209093145]
No single explanation approach is best suited to every context.
For AI explainability to be effective, explanations and how they are presented need to be oriented towards the stakeholder receiving the explanation.
We propose a modular reasoning system consisting of a given mental model of the relevant stakeholder, a reasoner component that solves the argumentation problem generated by a multi-explainer component, and an AI model that is to be explained suitably to the stakeholder of interest.
arXiv Detail & Related papers (2023-12-12T09:52:30Z)
- Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs [58.620269228776294]
We propose a task-agnostic framework for resolving ambiguity by asking users clarifying questions.
We evaluate systems across three NLP applications: question answering, machine translation and natural language inference.
We find that intent-sim is robust, demonstrating improvements across a wide range of NLP tasks and LMs.
arXiv Detail & Related papers (2023-11-16T00:18:50Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- Complementary Explanations for Effective In-Context Learning [77.83124315634386]
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts.
This work aims to better understand the mechanisms by which explanations are used for in-context learning.
arXiv Detail & Related papers (2022-11-25T04:40:47Z)
- Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z)
- Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts [12.552080951754963]
Existing and planned legislation stipulates various obligations to provide information about machine learning algorithms.
Many researchers suggest using post-hoc explanation algorithms for this purpose.
We show that post-hoc explanation algorithms are unsuitable to achieve the law's objectives.
arXiv Detail & Related papers (2022-01-25T13:12:02Z)
- Directive Explanations for Actionable Explainability in Machine Learning Applications [21.436319317774768]
This paper investigates the prospects of using directive explanations to assist people in achieving recourse against machine learning decisions.
We present the results of an online study investigating people's perception of directive explanations.
We propose a conceptual model to generate such explanations.
arXiv Detail & Related papers (2021-02-03T01:46:55Z)
- Evaluating Explanations: How much do explanations from the teacher aid students? [103.05037537415811]
We formalize the value of explanations using a student-teacher paradigm that measures the extent to which explanations improve student models in learning.
Unlike many prior proposals to evaluate explanations, our approach cannot be easily gamed, enabling principled, scalable, and automatic evaluation of attributions.
arXiv Detail & Related papers (2020-12-01T23:40:21Z)
- Explainable Object-induced Action Decision for Autonomous Vehicles [53.59781838748779]
A new paradigm for autonomous driving is proposed, inspired by how humans solve the problem.
A CNN architecture is proposed to implement it.
arXiv Detail & Related papers (2020-03-20T17:33:44Z)
- Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach [11.871523410051527]
We consider an explanation as a set of the system's data inputs that causally drives the decision.
We show that features that have a large importance weight for a model prediction may not affect the corresponding decision.
arXiv Detail & Related papers (2020-01-21T09:58:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.