Synthesizing explainable counterfactual policies for algorithmic
recourse with program synthesis
- URL: http://arxiv.org/abs/2201.07135v1
- Date: Tue, 18 Jan 2022 17:16:45 GMT
- Title: Synthesizing explainable counterfactual policies for algorithmic
recourse with program synthesis
- Authors: Giovanni De Toni, Bruno Lepri, Andrea Passerini
- Abstract summary: We learn a program that outputs a sequence of explainable counterfactual actions given a user description and a causal graph.
An experimental evaluation on synthetic and real-world datasets shows how our approach generates effective interventions.
- Score: 18.485744170172545
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Being able to provide counterfactual interventions - sequences of actions we
would have had to take for a desirable outcome to happen - is essential to
explain how to change an unfavourable decision by a black-box machine learning
model (e.g., being denied a loan request). Existing solutions have mainly
focused on generating feasible interventions without providing explanations on
their rationale. Moreover, they need to solve a separate optimization problem
for each user. In this paper, we take a different approach and learn a program
that outputs a sequence of explainable counterfactual actions given a user
description and a causal graph. We leverage program synthesis techniques,
reinforcement learning coupled with Monte Carlo Tree Search for efficient
exploration, and rule learning to extract explanations for each recommended
action. An experimental evaluation on synthetic and real-world datasets shows
how our approach generates effective interventions by making orders of
magnitude fewer queries to the black-box classifier than existing solutions,
with the additional benefit of complementing them with interpretable
explanations.
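To make the search concrete, here is a minimal sketch of MCTS-style exploration over counterfactual action sequences against a black-box classifier. The action set, the toy `black_box` scorer, and the feature names are illustrative assumptions; the paper's actual system couples the search with a learned RL agent and constrains feasible actions through a causal graph.

```python
import math
import random

# Hypothetical action set: each action edits the user's features. In the
# paper, feasible actions follow a causal graph; here the edits are
# independent, purely for illustration.
ACTIONS = {
    "increase_income": lambda u: {**u, "income": u["income"] + 10_000},
    "reduce_debt":     lambda u: {**u, "debt": max(0, u["debt"] - 2_000)},
    "extend_history":  lambda u: {**u, "credit_years": u["credit_years"] + 1},
}

def black_box(user):
    # Stand-in for the opaque classifier (1 = favourable outcome).
    return int(user["income"] - 0.5 * user["debt"] + 1_000 * user["credit_years"] > 40_000)

class Node:
    def __init__(self, user, seq=()):
        self.user, self.seq = user, seq
        self.children, self.visits, self.value = {}, 0, 0.0

def ucb(parent, child, c=1.4):
    return child.value / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)

def rollout(user, depth=3):
    for _ in range(depth):
        if black_box(user):
            return 1.0
        user = random.choice(list(ACTIONS.values()))(user)
    return float(black_box(user))

def mcts(user, iters=500, max_len=3):
    root = Node(user)
    for _ in range(iters):
        node, path = root, [root]
        while len(node.seq) < max_len and not black_box(node.user):
            fresh = [a for a in ACTIONS if a not in node.children]
            if fresh:  # expand one child, then roll out
                name = random.choice(fresh)
                node.children[name] = Node(ACTIONS[name](node.user), node.seq + (name,))
                node = node.children[name]
                path.append(node)
                break
            name = max(node.children, key=lambda a: ucb(node, node.children[a]))
            node = node.children[name]
            path.append(node)
        reward = rollout(dict(node.user))
        for n in path:  # back-propagate
            n.visits += 1
            n.value += reward
    seq, node = [], root  # read off the most-visited action sequence
    while node.children and not black_box(node.user):
        name = max(node.children, key=lambda a: node.children[a].visits)
        seq.append(name)
        node = node.children[name]
    return seq

print(mcts({"income": 30_000, "debt": 20_000, "credit_years": 2}))
```

Because the search queries `black_box` only inside rollouts and selection, the query budget is bounded by the iteration count, which is the lever behind the paper's query-efficiency claim.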
Related papers
- Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose a method for a collaborative system that remains safe by having a human make the decisions.
Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z)
- Simple Steps to Success: A Method for Step-Based Counterfactual Explanations [9.269923473051138]
We propose a data-driven and model-agnostic framework to compute counterfactual explanations.
We introduce StEP, a computationally efficient method that offers incremental steps along the data manifold, directing users towards their desired outcome.
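A minimal sketch of the step-based idea: nudge the instance in small increments toward nearby training points that already receive the desired label, so each step stays close to the data manifold. The step rule, step size, and neighbour count here are illustrative assumptions, not StEP's exact construction.

```python
import numpy as np

def step_recourse(x, X_train, y_train, predict, target=1, step=0.25, max_steps=20, k=5):
    # Move x in small increments toward its k nearest neighbours that already
    # receive the target label, until the classifier outputs that label.
    path = [x.copy()]
    pool = X_train[y_train == target]           # points with the desired outcome
    for _ in range(max_steps):
        if predict(x) == target:
            break
        d = np.linalg.norm(pool - x, axis=1)
        anchors = pool[np.argsort(d)[:k]]        # stay near observed data
        x = x + step * (anchors.mean(axis=0) - x)  # one incremental, reportable step
        path.append(x.copy())
    return path                                   # each entry is one user-facing step

# Toy usage with a linear rule standing in for any black-box model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(int)
predict = lambda z: int(z.sum() > 0)
steps = step_recourse(np.array([-2.0, -1.0]), X, y, predict)
print(len(steps) - 1, "steps;", "reached" if predict(steps[-1]) == 1 else "not reached")
```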
arXiv Detail & Related papers (2023-06-27T15:35:22Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
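A hedged sketch of the kind of explanation this enables, using a toy newsvendor-style inventory rule: the counterfactual answers "how would the context have to differ for a different order quantity to be prescribed?". The prescriptive rule, the random-search solver, and all parameters are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def order_quantity(context, w=np.array([2.0, 1.0]), cu=4.0, co=1.0):
    # Toy prescriptive rule: predict demand from the context, then order a
    # critical-fractile fraction of it (illustrative, not the paper's model).
    demand = float(w @ context)
    return round(demand * cu / (cu + co))

def counterfactual_context(context, target_order, n_candidates=5000, scale=1.0, seed=0):
    # Random-search sketch: the smallest context perturbation under which the
    # prescribed decision would have been target_order instead.
    rng = np.random.default_rng(seed)
    best, best_dist = None, np.inf
    for _ in range(n_candidates):
        cand = context + rng.normal(scale=scale, size=context.shape)
        if order_quantity(cand) == target_order:
            dist = np.linalg.norm(cand - context)
            if dist < best_dist:
                best, best_dist = cand, dist
    return best, best_dist

ctx = np.array([10.0, 5.0])
print("prescribed order:", order_quantity(ctx))   # 20 for this context
cf, d = counterfactual_context(ctx, target_order=22)
print("counterfactual context:", cf, "distance:", round(d, 2))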
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- Complementary Explanations for Effective In-Context Learning [77.83124315634386]
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts.
This work aims to better understand the mechanisms by which explanations are used for in-context learning.
arXiv Detail & Related papers (2022-11-25T04:40:47Z)
- DisCERN: Discovering Counterfactual Explanations using Relevance Features from Neighbourhoods [1.9706200133168679]
We show how widely adopted feature relevance-based explainers can inform DisCERN to identify the minimum subset of "actionable features".
Our results demonstrate that DisCERN is an effective strategy to minimise actionable changes necessary to create good counterfactual explanations.
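A sketch of the underlying recipe, under the assumption that feature relevance scores (e.g. from SHAP or LIME) are available: copy features from the nearest unlike neighbour into the instance, most relevant first, until the prediction flips; the copied indices then approximate the minimal actionable subset. This illustrates the idea rather than reproducing DisCERN itself.

```python
import numpy as np

def discern_like(x, X_train, y_train, predict, relevance, target=1):
    # Copy features from the nearest unlike neighbour into x, most relevant
    # first, until the prediction flips (sketch of the idea, not DisCERN itself).
    pool = X_train[y_train == target]
    nun = pool[np.argmin(np.linalg.norm(pool - x, axis=1))]  # nearest unlike neighbour
    cf, changed = x.copy(), []
    for i in np.argsort(-relevance):          # descending feature relevance
        cf[i] = nun[i]
        changed.append(int(i))
        if predict(cf) == target:
            break
    return cf, changed

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)
predict = lambda z: int(z[0] + 2 * z[1] > 0)
relevance = np.array([1.0, 2.0, 0.1, 0.1])    # would come from SHAP/LIME in practice
cf, changed = discern_like(np.array([-1.0, -1.0, 0.5, 0.5]), X, y, predict, relevance)
print("actionable features changed:", changed)
```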
arXiv Detail & Related papers (2021-09-13T09:25:25Z)
- Optimal Counterfactual Explanations in Tree Ensembles [3.8073142980733]
We advocate for a model-based search aiming at "optimal" explanations and propose efficient mixed-integer programming approaches.
We show that isolation forests can be modeled within our framework to focus the search on plausible explanations with a low outlier score.
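The paper solves this with mixed-integer programming; the sketch below swaps in a plain random search purely to illustrate the objective: among candidates that flip a tree ensemble's prediction, prefer those close to the query that also receive a low outlier score from an isolation forest. The data, models, and trade-off weight are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
iso = IsolationForest(random_state=0).fit(X)      # plausibility model

def plausible_counterfactual(x, target=1, n_candidates=5000, lam=1.0):
    # Random-search stand-in for the paper's mixed-integer programme: among
    # candidates that flip the ensemble's prediction, minimise distance plus
    # an outlier penalty from the isolation forest.
    cands = x + rng.normal(scale=1.0, size=(n_candidates, x.size))
    flips = cands[clf.predict(cands) == target]
    if len(flips) == 0:
        return None
    dist = np.linalg.norm(flips - x, axis=1)
    outlier = -iso.score_samples(flips)           # higher = less plausible
    return flips[np.argmin(dist + lam * outlier)]

x = np.array([-1.0, 0.0, 0.0])
print("prediction:", clf.predict([x])[0], "->", plausible_counterfactual(x))
```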
arXiv Detail & Related papers (2021-06-11T22:44:27Z)
- Loss Bounds for Approximate Influence-Based Abstraction [81.13024471616417]
Influence-based abstraction aims to gain leverage by modeling local subproblems together with the 'influence' that the rest of the system exerts on them.
This paper investigates the performance of such approaches from a theoretical perspective.
We show that neural networks trained with cross entropy are well suited to learn approximate influence representations.
arXiv Detail & Related papers (2020-11-03T15:33:10Z)
- Can We Learn Heuristics For Graphical Model Inference Using Reinforcement Learning? [114.24881214319048]
We show that we can learn programs, i.e., policies, for solving inference in higher order Conditional Random Fields (CRFs) using reinforcement learning.
Our method solves inference tasks efficiently without imposing any constraints on the form of the potentials.
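A toy stand-in for the idea: cast inference as a sequential decision process where the action is which variable to flip next and the reward is the resulting drop in energy. The tabular Q-learner and the tiny Ising chain below are illustrative assumptions; the paper works with higher-order CRFs and learned function approximators.

```python
import itertools
import random

# Tiny pairwise model standing in for a CRF: an Ising chain whose MAP
# assignment minimises the energy below. Couplings are illustrative.
J = [1.0, -1.0, 1.0, -1.0, 1.0]
n = len(J) + 1

def energy(s):
    return -sum(J[i] * s[i] * s[i + 1] for i in range(len(J)))

def flip(s, a):
    t = list(s)
    t[a] = -t[a]
    return tuple(t)

states = list(itertools.product([-1, 1], repeat=n))
Q = {(s, a): 0.0 for s in states for a in range(n)}  # tabular policy/value

# Q-learning: the learned policy is an inference heuristic proposing which
# variable to flip next; the reward is the resulting drop in energy.
alpha, gamma, eps = 0.2, 0.9, 0.2
for _ in range(3000):
    s = random.choice(states)
    for _ in range(10):
        if random.random() < eps:
            a = random.randrange(n)
        else:
            a = max(range(n), key=lambda b: Q[s, b])
        s2 = flip(s, a)
        r = energy(s) - energy(s2)
        Q[s, a] += alpha * (r + gamma * max(Q[s2, b] for b in range(n)) - Q[s, a])
        s = s2

# Greedy rollout of the learned policy from a random start.
s = random.choice(states)
best = s
for _ in range(2 * n):
    s = flip(s, max(range(n), key=lambda a: Q[s, a]))
    if energy(s) < energy(best):
        best = s
print("assignment:", best, "energy:", energy(best))  # optimum is -5; usually found
```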
arXiv Detail & Related papers (2020-04-27T19:24:04Z)
- Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastically perturbed optimizers, and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
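The core construction is easy to sketch: replace argmax(theta) with the perturbed maximiser y_eps(theta) = E[argmax_y <y, theta + eps * Z>], estimated by Monte Carlo, which is smooth in theta. The noise scale and sample count below are illustrative.

```python
import numpy as np

def perturbed_argmax(theta, eps=0.5, n_samples=2000, seed=0):
    # Monte Carlo estimate of y_eps(theta) = E[argmax_y <y, theta + eps * Z>]
    # with Z ~ N(0, I) and y ranging over one-hot vectors: a smoothed,
    # differentiable-in-expectation argmax that is never locally constant.
    rng = np.random.default_rng(seed)
    Z = rng.normal(size=(n_samples, theta.size))
    idx = np.argmax(theta + eps * Z, axis=1)
    return np.eye(theta.size)[idx].mean(axis=0)

theta = np.array([1.0, 1.2, 0.8])
print(perturbed_argmax(theta))           # nearly one-hot on index 1
print(perturbed_argmax(theta, eps=5.0))  # larger eps -> smoother output
```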
arXiv Detail & Related papers (2020-02-20T11:11:32Z)
- Decisions, Counterfactual Explanations and Strategic Behavior [16.980621769406923]
We find policies and counterfactual explanations that are optimal in terms of utility in a strategic setting.
We show that, given a pre-defined policy, the problem of finding the optimal set of counterfactual explanations is NP-hard.
We demonstrate that, by incorporating a matroid constraint into the problem formulation, we can increase the diversity of the optimal set of counterfactual explanations.
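Since finding the optimal set is NP-hard, a standard workaround is greedy selection under the matroid constraint. The sketch below uses a partition matroid (at most one explanation per feature group) and a pairwise-diversity utility; both are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def greedy_matroid_selection(candidates, groups, utility, k_per_group=1):
    # Greedy maximisation of a set utility subject to a partition-matroid
    # constraint: at most k_per_group explanations per feature group.
    # Illustrative stand-in for the paper's formulation.
    chosen, used = [], {}
    remaining = list(range(len(candidates)))
    while remaining:
        gains = [(utility(chosen + [i]) - utility(chosen), i) for i in remaining
                 if used.get(groups[i], 0) < k_per_group]
        if not gains:
            break
        gain, best = max(gains)          # largest marginal gain, feasible only
        if gain <= 0:
            break
        chosen.append(best)
        used[groups[best]] = used.get(groups[best], 0) + 1
        remaining.remove(best)
    return chosen

# Toy candidates: counterfactuals as vectors; groups = which feature they act on.
cands = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.2, 0.8]])
groups = [0, 0, 1, 1]
def diversity(S):                        # pairwise-spread utility rewards diverse sets
    if len(S) < 2:
        return float(len(S))
    return sum(np.linalg.norm(cands[i] - cands[j]) for i in S for j in S if i < j)

print(greedy_matroid_selection(cands, groups, diversity))
```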
arXiv Detail & Related papers (2020-02-11T12:04:41Z)