Beyond Individualized Recourse: Interpretable and Interactive Summaries
of Actionable Recourses
- URL: http://arxiv.org/abs/2009.07165v3
- Date: Wed, 28 Oct 2020 19:22:25 GMT
- Authors: Kaivalya Rawal, Himabindu Lakkaraju
- Abstract summary: We propose a novel model-agnostic framework called Actionable Recourse Summaries (AReS) to construct global counterfactual explanations.
We formulate a novel objective which simultaneously optimizes for correctness of the recourses and interpretability of the explanations.
Our framework can provide decision makers with a comprehensive overview of recourses corresponding to any black box model.
- Score: 14.626432428431594
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As predictive models are increasingly being deployed in high-stakes
decision-making, there has been a lot of interest in developing algorithms
which can provide recourses to affected individuals. While developing such
tools is important, it is even more critical to analyse and interpret a
predictive model, and vet it thoroughly to ensure that the recourses it offers
are meaningful and non-discriminatory before it is deployed in the real world.
To this end, we propose a novel model agnostic framework called Actionable
Recourse Summaries (AReS) to construct global counterfactual explanations which
provide an interpretable and accurate summary of recourses for the entire
population. We formulate a novel objective which simultaneously optimizes for
correctness of the recourses and interpretability of the explanations, while
minimizing overall recourse costs across the entire population. More
specifically, our objective enables us to learn, with optimality guarantees on
recourse correctness, a small number of compact rule sets, each of which captures
recourses for well-defined subpopulations within the data. We also demonstrate
theoretically that several of the prior approaches proposed to generate
recourses for individuals are special cases of our framework. Experimental
evaluation with real-world datasets and user studies demonstrate that our
framework can provide decision makers with a comprehensive overview of
recourses corresponding to any black box model, and consequently help detect
undesirable model biases and discrimination.
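The objective described in the abstract, jointly optimizing recourse correctness and interpretability, can be illustrated with a small greedy sketch. Everything below is a hypothetical toy (the black-box model, the rule format, and the greedy strategy are assumptions for illustration); the paper's actual method optimizes a richer objective with optimality guarantees on recourse correctness.

```python
# Hypothetical sketch of the AReS idea: greedily pick a few
# (subgroup rule, recourse rule) pairs that maximize recourse
# correctness, with summary size capped for interpretability.

def black_box(x):
    # Toy classifier: approve (1) iff income >= 50 and debt < 30.
    return 1 if x["income"] >= 50 and x["debt"] < 30 else 0

def apply_recourse(x, recourse):
    y = dict(x)
    y.update(recourse)  # act on the recommended feature changes
    return y

def correctness(rule, data):
    # Fraction of denied individuals in the subgroup whose
    # prediction flips after applying the recourse.
    subgroup, recourse = rule
    denied = [x for x in data
              if black_box(x) == 0
              and all(x[k] == v for k, v in subgroup.items())]
    if not denied:
        return 0.0
    flips = sum(black_box(apply_recourse(x, recourse)) for x in denied)
    return flips / len(denied)

def recourse_summary(candidates, data, max_rules=2):
    # Greedy selection: repeatedly take the most correct remaining
    # rule until the interpretability budget (max_rules) is spent.
    chosen = []
    while len(chosen) < max_rules:
        remaining = [r for r in candidates if r not in chosen]
        best = max(remaining, key=lambda r: correctness(r, data),
                   default=None)
        if best is None or correctness(best, data) == 0.0:
            break
        chosen.append(best)
    return chosen

data = [
    {"edu": "hs", "income": 30, "debt": 40},
    {"edu": "hs", "income": 40, "debt": 35},
    {"edu": "col", "income": 60, "debt": 20},
]
candidates = [
    ({"edu": "hs"}, {"income": 55, "debt": 20}),  # flips both denials
    ({"edu": "hs"}, {"debt": 25}),                # income still too low
]
summary = recourse_summary(candidates, data)
```

A decision maker can read the returned pairs directly: each one says "for individuals matching this subgroup, this set of feature changes obtains a favourable outcome", which is what makes the summary auditable for bias across subpopulations.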
Related papers
- Building Socially-Equitable Public Models [32.35090986784889]
Public models provide predictions for a variety of downstream tasks and have played a crucial role in various AI applications.
We advocate for integrating the objectives of downstream agents into the optimization process.
We propose a novel Equitable Objective to address performance disparities and foster fairness among heterogeneous agents in training.
arXiv Detail & Related papers (2024-06-04T21:27:43Z) - Fair Multivariate Adaptive Regression Splines for Ensuring Equity and
Transparency [1.124958340749622]
We propose a fair predictive model based on MARS that incorporates fairness measures in the learning process.
MARS is a non-parametric regression model that performs feature selection, handles non-linear relationships, generates interpretable decision rules, and derives optimal splitting criteria on the variables.
We apply our fairMARS model to real-world data and demonstrate its effectiveness in terms of accuracy and equity.
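One generic way to incorporate a fairness measure into a regression learning process, as this blurb describes, is to penalize the gap in group-wise error. The sketch below is an illustrative assumption, not fairMARS itself; the penalty form and the weight `lam` are made up for the example.

```python
def group_mse(preds, ys, groups, g):
    # Mean squared error restricted to one protected group.
    pairs = [(p, y) for p, y, gi in zip(preds, ys, groups) if gi == g]
    return sum((p - y) ** 2 for p, y in pairs) / len(pairs)

def fairness_penalized_loss(preds, ys, groups, lam=1.0):
    # Overall MSE plus lam times the largest gap in group-wise MSE.
    # A model whose errors concentrate on one group pays a penalty
    # even if its overall error is low.
    mse = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)
    per_group = [group_mse(preds, ys, groups, g) for g in set(groups)]
    return mse + lam * (max(per_group) - min(per_group))
```

Minimizing such a loss during spline fitting trades a little accuracy for equalized error rates across groups, which is the accuracy/equity trade-off the blurb evaluates.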
arXiv Detail & Related papers (2024-02-23T19:02:24Z) - Prediction without Preclusion: Recourse Verification with Reachable Sets [16.705988489763868]
We introduce a procedure called recourse verification to test if a model assigns fixed predictions to its decision subjects.
We conduct a comprehensive empirical study on the infeasibility of recourse on datasets from consumer finance.
arXiv Detail & Related papers (2023-08-24T14:24:04Z) - When Demonstrations Meet Generative World Models: A Maximum Likelihood
Framework for Offline Inverse Reinforcement Learning [62.00672284480755]
This paper aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.
Accurate models of expertise in executing a task have applications in safety-sensitive domains such as clinical decision making and autonomous driving.
arXiv Detail & Related papers (2023-02-15T04:14:20Z) - Exploring the Trade-off between Plausibility, Change Intensity and
Adversarial Power in Counterfactual Explanations using Multi-objective
Optimization [73.89239820192894]
We argue that automated counterfactual generation should account for several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z) - BRIO: Bringing Order to Abstractive Summarization [107.97378285293507]
We propose a novel training paradigm which assumes a non-deterministic distribution over candidate summaries.
Our method achieves a new state-of-the-art result on the CNN/DailyMail (47.78 ROUGE-1) and XSum (49.07 ROUGE-1) datasets.
arXiv Detail & Related papers (2022-03-31T05:19:38Z) - Probabilistically Robust Recourse: Navigating the Trade-offs between
Costs and Robustness in Algorithmic Recourse [34.39887495671287]
We propose an objective function which simultaneously minimizes recourse costs and the gap between the achieved (resulting) and desired recourse invalidation rates.
We develop novel theoretical results to characterize the recourse invalidation rates corresponding to any given instance.
Experimental evaluation with multiple real world datasets demonstrates the efficacy of the proposed framework.
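The trade-off this blurb describes, recourse cost against the invalidation-rate gap, can be written as a small Monte Carlo sketch. The Gaussian noise model, the L1 cost, the penalty form, and all names below are assumptions for illustration, not the paper's formulation.

```python
import random

def invalidation_rate(x_cf, model, sigma=0.1, n=200, seed=0):
    # Monte Carlo estimate of how often small perturbations to a
    # counterfactual x_cf flip it back to the unfavourable class (0).
    rng = random.Random(seed)
    invalid = sum(
        model([v + rng.gauss(0.0, sigma) for v in x_cf]) == 0
        for _ in range(n)
    )
    return invalid / n

def robust_recourse_objective(x, x_cf, model, target=0.05, lam=10.0):
    # Recourse cost (L1 distance from the original instance) plus a
    # penalty on the gap between the achieved and desired
    # invalidation rates; target and lam are illustrative choices.
    cost = sum(abs(a - b) for a, b in zip(x, x_cf))
    gap = max(0.0, invalidation_rate(x_cf, model) - target)
    return cost + lam * gap

# Toy model: favourable (1) iff the features sum above 1.
model = lambda v: 1 if sum(v) > 1.0 else 0
```

A counterfactual sitting right on the decision boundary is cheap but invalidated by tiny perturbations roughly half the time, while one placed deeper inside the favourable region costs more and is rarely invalidated; the objective balances the two.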
arXiv Detail & Related papers (2022-03-13T21:39:24Z) - Fairness-aware Summarization for Justified Decision-Making [16.47665757950391]
We focus on the problem of (un)fairness in the justification of the text-based neural models.
We propose a fairness-aware summarization mechanism to detect and counteract the bias in such models.
arXiv Detail & Related papers (2021-07-13T17:04:10Z) - Characterizing Fairness Over the Set of Good Models Under Selective
Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z) - Forethought and Hindsight in Credit Assignment [62.05690959741223]
We work to understand the gains and peculiarities of planning employed as forethought via forward models or as hindsight operating with backward models.
We investigate the best use of models in planning, primarily focusing on the selection of states in which predictions should be (re)-evaluated.
arXiv Detail & Related papers (2020-10-26T16:00:47Z) - Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations via robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
arXiv Detail & Related papers (2020-05-31T05:52:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.